
Why NATS.io JetStream is so well suited to AI at the edge

Jean-Noël Moyne
Oct 25, 2023

AI at the edge is a pretty hot topic right now, and attending an event centered around it yesterday reminded me why NATS.io JetStream is so uniquely well suited to AI at the edge use cases.

The whole point of 'AI at the edge' is to push AI processing (not the training of models, but the actual use of models) to the edge, because many (most?) edge devices are partially connected or have limited bandwidth, and you can't just transmit tons of raw data from the edge up to the cloud and use trained AI models there to identify things (opportunities, threats, ...) in that raw data.

Pushing AI to the edge means pushing the processing of that voluminous raw data to where the data is generated, so that the processing can happen continuously and those opportunities or threats can be identified in real time, as they happen.

What AI at the edge means is the ability to go from large amounts of low-value data (e.g. video frames from a real-time camera feed, streaming sensor data, etc.) where portions of that data can be lost (losing a few video frames is fine, there are always new ones coming in) to a much smaller amount of data (i.e. the events the AI model has identified from the raw video or sensor feed) that is of much higher importance, and that you therefore do not want to lose on the way from the edge device to wherever that important event is needed (i.e. in the cloud).

Do you want your AI at the edge developer to have to worry about all the things you have to do to transmit those important events back to the cloud (e.g. make REST calls, develop something on top of MQTT or WebSockets, handle all the failure scenarios like the edge device losing or having poor network connectivity, persist messages locally to a file so that even if the device gets stopped or rebooted while disconnected the data is still safely transmitted when the device and connectivity come back up)?

The answer is (IMHO) a resounding NO: you want the developer to spend their time implementing and training those AI models and focusing on identifying the important events that the business cares about, rather than on writing infrastructure code.
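To make that contrast concrete, here is a minimal sketch of what publishing such an event can look like with the NATS Go client (github.com/nats-io/nats.go). The URL, subject and payload are made up for illustration; the point is that reconnection handling and persistence come from the client library and JetStream rather than from your own code:

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the local NATS server (e.g. a leaf node running on the edge device).
	// The client keeps retrying and reconnecting on its own, no custom retry code needed.
	nc, err := nats.Connect("nats://localhost:4222",
		nats.RetryOnFailedConnect(true),
		nats.MaxReconnects(-1), // keep reconnecting forever
		nats.ReconnectWait(2*time.Second),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Publish a (hypothetical) detection event. JetStream only acknowledges it once
	// it has been persisted, so the event survives restarts and disconnections.
	// Assumes a stream covering "detections.>" already exists (see the stream sketch
	// further down).
	if _, err := js.Publish("detections.camera1",
		[]byte(`{"object":"person","confidence":0.97}`)); err != nil {
		log.Fatal(err)
	}
}
```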

And NATS.io JetStream is uniquely positioned as the best kind of infrastructure to safely and reliably transmit events generated by AI at the edge because:

  • It is a message 'streaming' platform that includes its own distributed 'shared-nothing' persistence, so that data is safely transmitted even in the face of nodes being shut down and restarted (see the first sketch after this list).

  • Unlike other message streaming platforms, NATS has a built-in, extremely flexible and powerful deployment architecture that extends to the edge, including the partially connected edge. You can create clusters, super-clusters of clusters, and (crucially for extending the streaming service to the edge) extend connectivity through "Leaf Node" servers that can safely store and forward messages (see the sample configuration after this list). Leaf Nodes are unique to NATS, you will not find them in other 'streaming' platforms, and they can even be daisy-chained.

  • Message brokering, persistence, clustering, super-clustering and leaf nodes: everything is included in a single 15 MB binary (and no JVM needed)! Which means it can run on very small / low-power hosts.

  • To deploy that infrastructure globally (and all the way out to the edge) you do not need to create many independent clusters and then connect them through separate 'connector' or 'mirror maker' type processes: mirroring and sourcing of streams between clusters, or between leaf nodes and clusters, is also built in.

  • The NATS server can use WebSocket as a transport (meaning it runs its own protocol on top of either TLS or (secure) WebSockets), which matters because the path from the edge to the cloud is often full of proxies where HTTPS and WebSocket are the only things allowed out.

  • The NATS server can also act as an MQTT server: simply point your MQTT client devices at the NATS server (which can be a leaf node) and that's it: no need to deploy an MQTT broker plus a separate MQTT connector.

  • Let's not forget security: multi-tenancy and an extensive security model that allows for delegated administration and can integrate with your existing IdP are also completely built-in.

  • And finally, streaming messaging is only one of the features of NATS.io JetStream: from KV and object store to service-mesh functionality, it does a lot more than just stream logs from point A to B.
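As a rough sketch of the persistence and mirroring points above, here is what creating a file-backed stream on the edge and mirroring it in the central cluster can look like with the Go client. The stream names, subjects and URLs are invented for illustration; in a real deployment where the leaf node runs its own JetStream domain you would also tell the mirror which domain to pull from:

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Hypothetical URLs: the local leaf node on the edge device and the central cluster.
	jsEdge := jetStreamContext("nats://localhost:4222")
	jsCentral := jetStreamContext("nats://hub.example.com:4222")

	// Edge: a file-backed stream capturing every detection event under "detections.>",
	// so events survive restarts and disconnections of the device.
	if _, err := jsEdge.AddStream(&nats.StreamConfig{
		Name:     "DETECTIONS",
		Subjects: []string{"detections.>"},
		Storage:  nats.FileStorage,
		MaxAge:   24 * time.Hour, // keep at most a day of events locally
	}); err != nil {
		log.Fatal(err)
	}

	// Central: a stream that mirrors the edge stream; the NATS servers do the
	// store-and-forward work, no separate connector or mirror-maker process.
	if _, err := jsCentral.AddStream(&nats.StreamConfig{
		Name:   "DETECTIONS_CENTRAL",
		Mirror: &nats.StreamSource{Name: "DETECTIONS"},
	}); err != nil {
		log.Fatal(err)
	}
}

func jetStreamContext(url string) nats.JetStreamContext {
	nc, err := nats.Connect(url)
	if err != nil {
		log.Fatal(err)
	}
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}
	return js
}
```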
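And to illustrate the leaf node, WebSocket and MQTT points, here is a (hypothetical) nats-server configuration for an edge leaf node; the hostnames, ports and paths are made up, and TLS and credential details are omitted for brevity:

```
# edge-leaf.conf (illustrative)
server_name: edge-device-1

# Local JetStream persistence on the device, in its own JetStream domain.
jetstream {
  store_dir: /data/jetstream
  domain: edge
}

# Connect this server as a leaf node to the central cluster (a wss:// URL can be
# used instead when only HTTPS/WebSocket traffic is allowed out).
leafnodes {
  remotes = [
    { url: "tls://hub.example.com:7422" }
  ]
}

# Accept NATS clients over WebSocket.
websocket {
  port: 8443
  no_tls: true  # for illustration only, use TLS in production
}

# Accept MQTT devices directly, no separate broker or connector needed.
mqtt {
  port: 1883
}
```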

So, if you are doing AI at the edge, do yourself (and your developers) a favor: take a look at NATS.io and consider using it as your communication infrastructure.