NATS Weekly #5
Week of December 13 - 19, 2021
🗞 Announcements, writings, and projects
A short list of announcements, blog posts, project updates, and other news.
⚡ Releases
⚙️ Projects
- basis-company/nats.php - New PHP client with JetStream support!
🎬 Media
- Mutually Trusted NGS Credentials (video) - Matthias Hanel
💡 Recently asked questions
Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.
Will a NATS client establish a connection to a server with the lowest round-trip time (RTT)?
Given a typical cluster of three nodes and a client configured with the corresponding three connection URLs, `localhost:4222,localhost:4223,localhost:4224`, the client will randomly choose one by default (there is an option to disable the randomization if needed). This means that RTT is not considered when choosing the server.
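For illustration, here is a minimal sketch with the Go client. `nats.DontRandomize()` is the option that disables the shuffling; the URLs are the local ones from above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// By default, the client shuffles this list before dialing.
	urls := "nats://localhost:4222,nats://localhost:4223,nats://localhost:4224"

	// DontRandomize makes the client try the servers in the order given.
	nc, err := nats.Connect(urls, nats.DontRandomize())
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	fmt.Println("connected to", nc.ConnectedUrl())
}
```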
However, this brings us to a subtly different question: if I have a latency-sensitive workload, how can I control RTT?
This largely depends on the deployment topology, and NATS provides a few options for optimizing for latency. More generally, NATS supports adaptive deployment architectures, meaning the topology can change fairly easily and dynamically as your system's needs change.
The general recommendation for a standard NATS deployment is a cluster within a single availability zone, with each node on a separate server, typically starting with three nodes.
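As a point of reference, a minimal sketch of one node's config in such a cluster (the host names are placeholders; node2 and node3 would mirror this):

```
# node1.conf (node2 and node3 are analogous)
port: 4222

cluster {
  name: "c1"
  port: 6222

  # Seed routes to the other nodes; the full mesh is gossiped.
  routes: [
    "nats://node2.example.internal:6222"
    "nats://node3.example.internal:6222"
  ]
}
```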
One option is to deploy individual NATS clusters across zones in whichever regions (anywhere in the world) your clients/users are located.
Since this is a fairly common scenario for national/global services, NATS has an additional feature called gateway connections, which form superclusters. These are dedicated connections between clusters that allow clients to connect to any cluster and publish or subscribe to any subject across clusters, seamlessly.
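A rough sketch of what one side of a gateway configuration can look like; the cluster names and hosts are invented for illustration:

```
# Config for servers in the "us-east" cluster
gateway {
  name: "us-east"
  port: 7222

  # Outbound gateway connections to the peer clusters.
  gateways: [
    {name: "eu-west", url: "nats://eu-west.example.internal:7222"},
    {name: "ap-tokyo", url: "nats://ap-tokyo.example.internal:7222"}
  ]
}
```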
When determining the right topology for your system, there are two kinds of workload to consider: services (implemented with request/reply) and streams (using standard pub/sub or JetStream).
The easier consideration is streams. If the source of a stream is inherently bound to a geography, there isn't much that can be done to optimize latency when the data needs to be shuttled across regions. There are workload-dependent options for working around this, depending on what the subscriber of the stream is doing with the data, but generally, if data needs to move to another geo, you are constrained by the speed of light (in the limit).
What about services? If we have a scenario where a client in Japan is making a request to a service only available in New York, that is less than ideal. The simple solution is to deploy a replica of the service in a region of Japan.
When a client connects from Japan, NATS will automatically route requests to the service in Japan rather than New York. This works because the interest graph is tracked across clusters and NATS will opt for the shortest path.
A great side effect of this is that if the service goes down in Japan, for whatever reason, all requests fall back to the New York deployment, transparently. The latency is of course higher, but the requests can still be serviced.
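A minimal sketch of this service pattern with the Go client; the `profile.get` subject and `profile` queue group name are invented for illustration. Each regional replica subscribes with the same queue group, so a request is answered by exactly one member, with the servers preferring a local one:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// Service replica: every region runs one of these. Because they all
	// share the "profile" queue group, each request is delivered to a
	// single member, preferring one in the requester's local cluster.
	if _, err := nc.QueueSubscribe("profile.get", "profile", func(m *nats.Msg) {
		m.Respond([]byte(`{"id": 1, "name": "Ada"}`))
	}); err != nil {
		log.Fatal(err)
	}

	// Client side: a plain request; routing to the nearest responder
	// is handled by the servers, not the client.
	resp, err := nc.Request("profile.get", []byte(`{"id": 1}`), time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Data))
}
```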
A third option NATS provides is leaf nodes. These are (usually) individual NATS servers that are even more granular and often deployed directly next to a set of clients. Leaf nodes are intended to act as an extension to an existing cluster (or supercluster), but with hyper-locality and a security and authorization boundary in mind.
As it relates to the RTT question, this is straight from the docs (emphasis mine):
> Leaf nodes are useful in IoT and edge scenarios and when the local server traffic should be **low RTT and local** unless routed to the super cluster. NATS' queue semantics are honored across leaf connections by serving local queue consumer first.
This follows a similar pattern to individual clusters connected by gateways, but with a lighter footprint. Another key difference is that leaf nodes do not need to serve traffic propagated from the cluster.
Unlike cluster or gateway nodes, leaf nodes do not need to be reachable themselves and can be used to explicitly configure any acyclic graph topologies.
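A rough sketch of the two sides of a leaf node setup (host names are placeholders). The cluster accepts leaf connections, and the edge server dials out to it:

```
# Cluster side: accept incoming leaf node connections.
leafnodes {
  port: 7422
}
```

```
# Edge side: a standalone server that extends the cluster.
leafnodes {
  remotes: [
    {url: "nats://hub.example.internal:7422"}
  ]
}
```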
JetStream adds some additional factors to the appropriate solution, but that can be explored another time 😉.
What is the new `Nats-Rollup` header and how can it be used?
Message rollups must be explicitly enabled on a stream via the `--allow-rollup` CLI option or programmatically via [`StreamConfig.AllowRollup`](https://pkg.go.dev/github.com/nats-io/nats.go#StreamConfig) in the Go client (other languages should be similar once implemented).
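For example, a minimal sketch of creating a rollup-enabled stream with the Go client (the stream name and subjects are made up):

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Rollup publishes are rejected unless the stream opts in.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:        "PROFILES",
		Subjects:    []string{"user.profile.*"},
		AllowRollup: true,
	}); err != nil {
		log.Fatal(err)
	}
}
```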
A client can then publish a message with the `Nats-Rollup` header (exposed as the `MsgRollup` constant in the Go client) set to either `sub` or `all`, which tells the server to delete all prior messages at the subject level or across the entire stream, respectively.
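Continuing with the `js` context from the sketch above, a rollup publish could look like this, using the header constants the Go client exposes (the subject and payload are illustrative):

```go
// Snapshot message that replaces everything on this subject.
msg := nats.NewMsg("user.profile.1")
msg.Data = []byte(`{"id": 1, "name": "Ada"}`)

// nats.MsgRollup is "Nats-Rollup"; "sub" deletes prior messages on
// this subject only, while nats.MsgRollupAll purges the whole stream.
msg.Header.Set(nats.MsgRollup, nats.MsgRollupSubject)

if _, err := js.PublishMsg(msg); err != nil {
	log.Fatal(err)
}
```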
When would you use this? There are two use cases that come to mind.
The first is a pure state-centric stream where each message represents a snapshot of state at the stream or subject level. For example, `user.profile.1` and `user.profile.2` could be subjects whose message data corresponds to the most recent snapshot of each profile. Active consumers would of course just behave as normal and replace their local state with the latest snapshot, but for offline consumers, and for managing stream size, a rollup would result in only the latest snapshot being observed.
All that said, for state-tracking, you should probably just use the KV API.
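For completeness, a tiny sketch of the KV approach, again continuing from the `js` context above (the bucket and key names are made up). KV buckets are backed by JetStream streams, so this gives last-value semantics without managing rollups yourself:

```go
// KV buckets are backed by JetStream streams.
kv, err := js.CreateKeyValue(&nats.KeyValueConfig{Bucket: "profiles"})
if err != nil {
	log.Fatal(err)
}

// Put stores the latest value for the key; old revisions are pruned
// according to the bucket's history setting.
if _, err := kv.Put("user.profile.1", []byte(`{"id": 1, "name": "Ada"}`)); err != nil {
	log.Fatal(err)
}

entry, err := kv.Get("user.profile.1")
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(entry.Value()))
```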
Another related use case is event-sourced entities, where the stream acts as the source of truth for changes modeled as events. In the purest sense of the concept we never want to throw events out; however, we may want to periodically snapshot the state of a given entity in the stream (again using subjects like `user.profile.1` and `user.profile.2`).
This is a contrived example since profiles aren't changed very often, but for the sake of example, let's assume 10 changes were made and therefore 10 events were written to `user.profile.1`. A snapshot message could be asynchronously written to the subject containing the current state (however the profile entity is modeled). Consumers would see something like a `SnapshotCreated` event and know that its value can simply replace the existing local copy. Subsequent events would still be written to the stream and applied incrementally as before.
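A sketch of the consuming side, still using the `js` context from earlier. The `Event-Type` header and `SnapshotCreated` value are hypothetical application-level conventions, not anything NATS defines:

```go
sub, err := js.SubscribeSync("user.profile.1")
if err != nil {
	log.Fatal(err)
}

for {
	msg, err := sub.NextMsg(time.Second)
	if err != nil {
		break // e.g. nats.ErrTimeout once caught up
	}
	// Hypothetical application-level convention for typing events.
	if msg.Header.Get("Event-Type") == "SnapshotCreated" {
		// Replace local state wholesale with the snapshot payload.
	} else {
		// Apply the event incrementally, as before.
	}
	msg.Ack()
}
```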
This basically enables constraining the size of the event history for a given subject. In practice, this should be coupled with backing up the events that get deleted by the rollup. A separate consumer could be responsible for writing events to object storage in batches; once a batch is committed, a rollup can be done. The rollup message could even include a URL to the location of the previously written batch in case a consumer ever needs to replay historical events.
This second use case will likely be solved by tiered storage, which would transparently handle moving older messages to cheaper storage, but in the meantime this could be a solution.