NATS and Synadia: Pioneering the Edge-Native Future
Derek Collison highlighted how NATS and Synadia are leading a paradigm shift in distributed systems by focusing first on connectivity, then data, and finally workloads—the opposite of traditional approaches. NATS and Synadia enable seamless communication across multiple regions, clouds, and edge locations without complex infrastructure like DNS tricks or load balancers.
This architecture has proven particularly valuable for AI applications, from data collection to inference and agentic systems, as well as for managing massive fleets of connected devices spanning factories, vehicles, and consumer products. As we transition from cloud-native to edge-native computing, NATS is positioned to help organizations deliver innovation in days rather than months.
“As much as the rules shifted and changed as we made a shift from data center to cloud and cloud native, there'll be an even bigger transition to edge native.”
— Derek Collison, Creator, NATS
Go Deeper
Full Transcript
Derek Collison:
Thanks, Nate. Welcome to Rethink 2025. We're happy to have everyone here for this year's event. When we started Synadia over seven years ago, we bet on several things. We bet on distributed systems continuing to evolve and have more moving parts.
We bet on the parts continually changing to be very dynamic and agile, where things come and go and move around. And we bet that the systems will be stretched out across regions, across cloud providers, and out to edge locations. And we bet on the fact that folks in this new era would prioritize decreasing latency to access data and services. Our approach is very different, as Justina pointed out. We started with intelligent connectivity, then moved to data, and then workloads.
This is the exact opposite of most approaches, which start with workloads, add in networking to access some data, and then layer security on top. For connectivity, we introduced a few powerful concepts: location-independent, end-to-end communications, and pull and push patterns. For data, we built on the connectivity layer, providing synchronous and asynchronous replication and materialized views. And then finally, we looked at workloads built on top of the previous two layers.
They're federation free. They have no requirement for perimeter-based security models. So this is not a competitor to things like Kubernetes or OpenStack or Docker or any of that stuff. And it's important to note that what Synadia is doing is changing the how, not the what. Meaning that systems are still being designed with microservices, key-value stores, and object stores, but the how, the Synadia advantage here, is what is really different.
Synadia's tech stack, based on the NATS ecosystem, enables some very powerful use cases. For example, Synadia Cloud, our multi-tenant SaaS, does over a hundred billion messages per day, and it's probably the cheapest way to access multiple regions, multiple cloud providers, and extend out to the edge. For financial systems within the EU, where the requirements call for single-region but multi-cloud deployments, Synadia was able to provide access to Azure, GCP, and AWS in single regions for many of our financial customers. This highlights the flexibility of the system topology that Synadia and NATS can offer. Global microservices at scale: any region, any cloud, any edge.
The key here is that no DNS tricks are needed, no load balancers or GSLBs, no API gateways or WAFs, no service meshes required. And then fleet management. Leaf nodes are a superpower within the NATS ecosystem. They provide a logical and secure separation between organizations, companies, partners, etcetera. The extensive server topologies for cloud and edge that Synadia allows provide powerful constructs for secure access in fleet management situations.
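Leaf nodes are configured on the server side. As a hedged sketch of the topology described here (the hostnames, ports, and file names are hypothetical, not from the talk), a hub cluster in the cloud accepts leaf connections, and each edge server dials out to it:

```
# hub.conf — central server that accepts leaf node connections
port: 4222
leafnodes {
  port: 7422
}

# leaf.conf — edge server that extends the hub locally
port: 4222
leafnodes {
  remotes = [
    { url: "nats-leaf://hub.example.com:7422" }
  ]
}
```

Clients at the edge connect to their local leaf on port 4222 and communicate with the rest of the system as if they were attached to the hub, which is what provides the logical separation between organizations and partners.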
We have digital twins and mirrors. We have source muxing and demuxing, which is a way of saying we can collect data from all of the fleet items and aggregate it within the cloud for quick observability. We can also demux those streams. In other words, we can pull individual items back out of those muxed streams and key-value stores on the cloud side.
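The mux/demux idea can be illustrated without a server. In this small sketch (the `fleet.<region>.<device>.telemetry` subject scheme is a hypothetical example, not from the talk), telemetry from many fleet items arrives muxed onto one stream of subjects, and we demultiplex it by subject token on the cloud side:

```python
def demux(messages):
    """Group muxed (subject, payload) pairs back out by device token."""
    by_device = {}
    for subject, payload in messages:
        tokens = subject.split(".")
        # Expect subjects of the form fleet.<region>.<device>.telemetry
        if len(tokens) != 4 or tokens[0] != "fleet":
            continue
        _, region, device, _ = tokens
        by_device.setdefault(device, []).append((region, payload))
    return by_device

muxed = [
    ("fleet.us-east.dev42.telemetry", b"temp=70"),
    ("fleet.eu-west.dev42.telemetry", b"temp=68"),
    ("fleet.us-east.dev07.telemetry", b"temp=71"),
]
print(demux(muxed))
```

In a real deployment the grouping is done by NATS subject filtering rather than application code, but the subject-token structure is the same.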
And this can be applicable to cars, factories, distribution centers, remote charging stations, satellites, and even medical devices. There is ongoing work at Synadia on massive observer populations. For example, key-value stores with over a hundred thousand or even a million observers are something unique to what Synadia can do. Traditional consumers are very heavyweight and can bog systems down. Yet Synadia's approach and innovation in this area, things like republish and direct gets with global sequencing, allow this type of observability. Synadia allows companies to deliver innovation faster across cloud and edge.
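As a hedged sketch of what that configuration can look like (the stream names and subjects are hypothetical), a JetStream stream that aggregates several edge streams can enable direct gets and republish every stored message over plain core NATS, so huge observer populations subscribe to the republished subject instead of each creating a heavyweight consumer:

```json
{
  "name": "FLEET_AGG",
  "storage": "file",
  "retention": "limits",
  "sources": [
    { "name": "FLEET_EU" },
    { "name": "FLEET_US" }
  ],
  "allow_direct": true,
  "republish": {
    "src": "fleet.>",
    "dest": "repub.fleet.>"
  }
}
```

Observers then hold a lightweight core subscription on `repub.fleet.>` and use direct gets against the stream only when they need a specific key or sequence.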
Let's talk a little bit about the elephant in the room these days: AI. First, in India, we started with a lot of AI startups doing AI data generation and collection from the edge, meaning that they didn't want the applications generating the data to be concerned with getting that data to the cloud for training or quality assurance purposes. With Synadia and the NATS ecosystem, data is stored locally, and then the NATS system takes over and mirrors it into the cloud. It self-heals and puts itself back together, such that the applications don't have to worry about it and can get back to what they're designed to do.
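A hedged sketch of that edge-to-cloud pattern (the domain and stream names are hypothetical): the edge leaf node runs JetStream in its own domain and stores data locally, and a cloud-side stream mirrors it asynchronously, catching up on its own whenever the link comes back:

```json
{
  "name": "TRAINING_DATA",
  "storage": "file",
  "mirror": {
    "name": "EDGE_DATA",
    "external": {
      "api": "$JS.edge.API"
    }
  }
}
```

The application at the edge only ever writes to its local `EDGE_DATA` stream; the mirror is what "puts itself back together" on the cloud side.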
We then started seeing customers of ours use east-west signaling for things like anomalies, where they wanted to know if an anomaly in one location was being replicated in other locations. Then things started to get really interesting. About eighteen months to two years ago, we came to believe that the inference ecosystem would develop into its own tech stack and that Synadia could help. From prompt augmentation to RAG and RAG plus, we felt that location-independent access to this data was extremely critical, especially not knowing where things might be, as well as the ability for us to do push versus pull. What I mean by that is: if prompt augmentation, let's say, was pulling data all the time to augment a prompt, and all of a sudden the inference layer was running at a million requests per second, the RAG layer would also have to support a million request-replies per second.
With Synadia, we allow systems to be either push or pull. That's one of our superpowers at the connectivity layer. Meaning that I could simply say: whenever this data changes, just let all of the inference endpoints know so that they don't even have to ask. They actually have the data locally.
When we moved on from prompt augmentation and looked at model traversal, we believed that there would be multiple models with traversal and flow between them. For example, with a connected car, we might be talking to an LLM within the cloud, which might access multiple models for different types of operations; those might be on the vehicle, on cell towers, or in the cloud. You don't necessarily know where they are. And now in 2025, agentic AI and the promise of what it is going to bring has similar challenges that Synadia is extremely well suited to solve for the largest AI companies in the world. The same things.
Right? Location independence, end-to-end communications, agile and dynamic systems that are secure across all topologies. We also started to see a manufacturing renaissance, and we'll hear more about this later in the conference. Combining information flows with low-level device information from MQTT devices, for example, allows manufacturing to join low-level data processing at the device level with higher-level information processing. Not to mention that the physical embodiment of AI, which is rapidly progressing beyond anything I thought would happen within the next couple of years, is, again, keeping the renaissance moving forward.
And finally, fleet management and edge. As we discussed earlier, from factories to cars to cafes to toothbrushes: everything securely connected, everything dynamic and agile, everything observable, from tens to hundreds of thousands to millions of managed items within a fleet. In closing, no time in history is like the present. The amount of progress and innovation is at an all-time high. The technology ecosystem is moving at a faster pace than I've ever seen in my career.
As much as the rules shifted and changed as we made a shift from data center to cloud and cloud native, there'll be an even bigger transition to edge native. Longtime NATS users will want to explore workloads and connectors, and make sure you're getting the most out of microservices, key-value stores, streams, and object stores. For new users, now is a great time to explore what Synadia has to offer and design a modern, edge-aware system that innovates and can be delivered in days, not months or years. There is a massive and dramatic shift in distributed systems design caused by edge and now AI, and Synadia's technologies can help companies and individuals deliver innovation faster. Thanks again for joining us.
I truly hope you enjoy the conference.