NATS Weekly #14
Published on Feb 21st, 2022
Week of February 14 - 20, 2022
🗞 Announcements, writings, and projects
A short list of announcements, blog posts, project updates, and other news.
Official releases from NATS repos and others in the ecosystem.
- nats-io/nats.deno - v1.6.1
- nats-io/k8s - v0.13.1
- nats-io/natscli - v0.0.29
- nats-io/jsm.go - v0.0.29
- nats-io/nats.ws - v1.7.2
- nats-io/nats.js - v2.6.1
- basis-company/nats.php - v0.6.1
- choria-io/asyncjobs - See video below
Github Discussions from various NATS repositories.
- nats-io/nats-server - 2 Node High Availability cluster for nats stream
- nats-io/nats.go - Get structured data when using SubscribeSync
- nats-io/k8s - Authentication and authorization for JetStream controller
💡 Recently asked questions
Questions sourced from Slack, Twitter, or individuals. Responses and examples are in my own words, unless otherwise noted.
What is the trade-off with pull consumer batch size?
Thanks to R.I. Pienaar and Jean-Noel Moyne who provided an answer to the original question on Slack. This is my attempt to expand with a bit more detail.
The primary API for a pull consumer is the `Fetch` method on a subscription. It takes a batch size, which is the number of messages the client would like to fetch from the server, and, optionally, how long to wait until a batch is available. The `MaxWait` time defaults to the max request time set on JetStream (which itself defaults to 5 seconds).
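As a minimal Go sketch of this API, assuming a running JetStream-enabled server and an existing stream and durable pull consumer (the `orders.*` subject and `worker` consumer name are illustrative, not from the original question):

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Bind a subscription to an existing durable pull consumer.
	sub, err := js.PullSubscribe("orders.*", "worker")
	if err != nil {
		log.Fatal(err)
	}

	// Request up to 50 messages, waiting at most two seconds for the
	// batch. MaxWait here overrides the JetStream default.
	msgs, err := sub.Fetch(50, nats.MaxWait(2*time.Second))
	if err != nil {
		log.Fatal(err)
	}
	for _, msg := range msgs {
		// ... process the message ...
		msg.Ack()
	}
}
```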
To identify the appropriate batch size (and scaling requirements), you can ask yourself a few questions related to the specific workload:
- What is the rate of messages being written to the stream and matched by the consumer? One per second? 100k per second?
- How many messages can be processed per subscription per second? 100 per second? 10k per second?
- What is the max latency desired for any given message to be processed?
The maximum throughput that could be achieved is keeping up with the publishers to the stream (obviously we can't consume more than what is published). To make the math simple, if we have 100 messages per second being published and a subscriber can only process 10 messages per second, we would need 10 subscribers bound to that consumer to scale out the processing (and of course this assumes ordered processing across subscribers is not required).
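The back-of-envelope scaling math above can be written as a small helper (a sketch of the arithmetic only, not part of any NATS API):

```go
package main

import (
	"fmt"
	"math"
)

// subscribersNeeded returns the minimum number of subscribers required to
// keep up with a given publish rate, assuming each subscriber processes
// messages at perSubRate and ordering across subscribers does not matter.
func subscribersNeeded(publishRate, perSubRate float64) int {
	return int(math.Ceil(publishRate / perSubRate))
}

func main() {
	// 100 msgs/sec published, each subscriber handles 10 msgs/sec.
	fmt.Println(subscribersNeeded(100, 10)) // 10
}
```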
How does batch size fit into all of this? Each `Fetch` call results in a request to the server with the desired batch size. Assuming there are always enough messages in the stream to consume, this request won't block and the server will publish the N messages to the client (over a unique inbox subject). Once the full batch has been received by the subscription, the messages can start being processed. Messages are not available to be processed until the batch has been received. This is apparent in the signature of the `Fetch` method, which returns the slice of messages as one full batch.
A related point is that when there are concurrent subscriptions on the same pull consumer (to scale out processing), fetch requests are processed sequentially. This ensures that each batch contains a consecutive set of messages rather than a random subset. The side effect of this is that all subscribers are blocked getting their batches until the request ahead of them has been fulfilled.
The main trade-off here is that decreasing the batch size will decrease the median latency of any given message (no need to wait on other messages to be batched), however overall throughput will decrease (more fetch requests are needed to get messages to subscribers). Conversely, increasing the batch size (up to some upper bound) will increase throughput, but will also increase the median latency.
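As a rough model of the latency side of this trade-off (my own approximation, ignoring network round trips and processing time): at a steady publish rate, the median message waits for about half a batch to accumulate before it is delivered.

```go
package main

import "fmt"

// approxBatchWait estimates the extra latency (in seconds) the median
// message incurs waiting for its batch to fill: at a steady publish rate,
// a typical message waits for roughly half a batch to accumulate.
// This is a back-of-envelope model, not a NATS API.
func approxBatchWait(publishRateMsgsPerSec float64, batchSize int) float64 {
	return float64(batchSize) / (2 * publishRateMsgsPerSec)
}

func main() {
	// At 100 msgs/sec published:
	fmt.Println(approxBatchWait(100, 10))  // 0.05 (50ms median wait)
	fmt.Println(approxBatchWait(100, 100)) // 0.5  (500ms median wait)
}
```

Under this model, a 10x larger batch means roughly 10x more median waiting per message, which is why smaller batches favor latency.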
If throughput is the main concern (keeping up with the publish rate), then adding more subscribers would help scale out the processing. If latency is the main concern, you may want to focus on optimizing the processing logic.
Hopefully this gave a bit of intuition, but ultimately you will need to test your workload since there are variables like message size that could impact memory pressure if the batch size is too large, etc. Check out Jean-Noel's video on Benchmarking NATS Core and JetStream using the NATS CLI.
What does "Allow message roll-ups" mean when creating a stream?
I went into a good amount of detail for this feature in a previous newsletter Q&A. The prompt when creating a stream (if using the CLI) determines whether this feature is enabled, i.e. whether the server will respect the roll-up header if provided by the client. The roll-up feature itself is used for summarizing multiple messages into one.
One example is a bank ledger where all the credits and debits for the past month are summed together to be the new starting balance for the next month. Of course all the historical transactions need to be persisted, but that could be done as part of the roll-up operation. Read the messages on the subject, write them to cold storage, compute the new balance, and then publish a message with the roll-up header.
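The final publish step could look something like the sketch below, using nats.go's roll-up header constants and a hypothetical `ledger.balance.<account>` subject (the archiving and balance computation are elided; all names are illustrative):

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Assumes a JetStream-enabled server and a stream covering
	// "ledger.balance.>" that was created with roll-ups allowed.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// After archiving the month's transactions and computing the new
	// balance, publish one summary message that rolls up (replaces)
	// all prior messages on this subject.
	msg := nats.NewMsg("ledger.balance.acct123")
	msg.Header.Set(nats.MsgRollup, nats.MsgRollupSubject) // "Nats-Rollup: sub"
	msg.Data = []byte(`{"balance": 1042.17}`)

	if _, err := js.PublishMsg(msg); err != nil {
		log.Fatal(err)
	}
}
```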
Are multiple subject filters supported for a consumer?
Currently, this is not supported. However, depending on your use case, you may be able to get by using a different approach, either multiple consumers or one consumer with a broader subject filter (refer to the link for more detail).