Seamlessly Integrate NATS with External Systems

Bridge NATS and your existing infrastructure using Synadia Connectors to create sources and sinks that can run anywhere

Built on the new NATS Execution Engine (allowing connectors to run anywhere: cloud, on-prem, or edge), Connectors have a clear, simple responsibility: moving data into or out of NATS.

Synadia Connectors support flexible runtimes, empowering you to integrate with any external system, including legacy databases or specialized applications, regardless of the programming language. Lightweight transformations ensure you can shape messages efficiently without the complexity of full data pipelines.

With built-in tools for filtering and mapping, Connectors simplify integration workflows—enabling you to seamlessly bridge your NATS ecosystem and existing infrastructure.

"When we talk about connectors, one immediate question that you get is how fast is it? How much data can we put through it per second?

Or can it deal with this specific system that we can only address using this programming language?

We came up with the idea of runtimes. Now a runtime is basically an abstraction.

It's actually the thing that will run your connector. And it can be written in any language, as long as it conforms to the connector specification. As long as it conforms to that, you can manage whatever runtime you have, and whatever connectors are using that runtime, through the connector tooling, which is really, really great."

— Daan Gerits, Synadia

Go Deeper

Full Transcript

Daan Gerits:

Thank you very much, Nate. Hello, everyone. My name is Daan, and today we're going to talk about connectors and what they mean in the context of NATS. Now, before we get started, there is one question that comes back every single time, and that's: how do I connect NATS to the rest of my organization? How do I make it talk to that database that is sitting down there, or to that legacy system that we can only address through one specific programming language?

How do we deal with that in the context of NATS? Up until recently, we didn't really have a clear answer for that. Basically, you would be on your own and write a little application to communicate and to send the messages over. Now, come to think of it, we already have something to run applications. Yeah.

That's right. We have that thing that Jordan presented: workloads. So what we actually did is build the Connectors framework on top of workloads. And this allowed us to run your connector anywhere, whether it's in the cloud, on-prem, or on the edge. It doesn't really matter.

That's the beauty of using workloads for this. On the other hand, there are some things that you want connectors to do and not to do. Well, what is a connector? It's a very good question. For us, it's very, very straightforward what it should be.

A connector should have one responsibility: getting data into NATS or getting data out of NATS. That's the whole point. Yes, you can have little transformations sitting there on a per-message basis, but that's pretty much it. We did not want to build full data pipelines, or even allow you to build full data pipelines, as part of your connector.

And the reason why we don't want to do that is because we have something bigger in mind. NATS is built as a system that relies on messages. So what we actually want to do is give you the power to pass these messages between different components. So when you look at a data pipeline, you would actually see something like this. At one stage, you're getting data in from a system that you already have, and you put it on NATS.

Now, it could very well be that there is a little workload running somewhere that will take those messages off, do transformations or whatever it needs to do, and put the result of those transformations back onto NATS. And this goes on and on until at some point you want to write that information back to an existing system. And for that, you use a connector again. I will show you this in a demo later on, but there is something that we have to talk about first. When we talk about connectors, one immediate question that you get is: how fast is it?

How much data can we put through it per second? Or can it deal with this specific system that we can only address using this programming language? Now that was a tricky one. It took us a while, but we came up with the idea of runtimes. Now a runtime is basically an abstraction.

It's actually the thing that will run your connector. And it can be written in any language, as long as it conforms to the connector specification. As long as it conforms to that, you can manage whatever runtime you have, and whatever connectors are using that runtime, through the connector tooling, which is really, really great. The next great thing is that these runtimes are actually running as workloads inside Synadia, inside NATS, and that's great as well. Keep in mind, workloads can run anywhere, as I said previously.

So not only in Synadia; it can run on your computer, it can run on that Raspberry Pi that you have at home. Doesn't really matter. So these runtimes are a secret superpower of sorts that allows us to do all of these things. Now, it's very easy to talk a lot about these things, but you especially want to see what it actually works like, or what it feels like. And, yeah, we're getting to that demo.

So I've set up a little demo that will read data from MongoDB. It will do change data capture: listen for the changes that are happening on a sensor readings collection, and use a connector to send that data onto a subject on NATS. Once that data has been written, I will use a second connector to listen to those messages. And every time that I find an outlier, I'm going to write that outlier back to MongoDB.

And to make things a little bit more visual, we created a little dashboard that will show you exactly what is going on and what is going through. So without further ado, let's jump into the demo. As part of this demo, we will be reading information from MongoDB and sending it to NATS. For doing that, I created a MongoDB instance. And as you can see over here, I have sensor readings.

Now, if I do a quick refresh, you can see that there are already documents in there with sensor data, in this case temperature data. Now let's create that connector, and we do so by pressing the create connector button and selecting the connector type. The kind of connector that we want to create is called an inlet. It's called so because it is taking data from an external data source and putting it into NATS. So in our case, we will be reading from a MongoDB change stream.

And this MongoDB change stream needs to be configured, obviously, so we will need to provide it with a URL at the very least. Now, only providing a URL means that everything is being captured: every single change to that MongoDB instance. Well, that's a bit too much, and we want to scope things down to a specific database, and a collection within that database, just to narrow it down even further. The second part is that we need to write those messages somewhere, and we will be writing them to the NATS demo instance, one that you can actually connect to right now to start playing around and experimenting with NATS.

We will also be sending it to a specific subject. And in this case, it's going to be the Rethink Connect sensor data subject. This will show up in just a moment when we start deploying our connector. Last but not least, we will be giving our connector a specific name, and we will also give it a description so we can tell them apart once we have a whole bunch of connectors out there. Okay.

Let's deploy this connector. There we go. And as you can see, we see messages basically flowing through. Now what you can immediately tell is that these messages don't really look like the messages that we have inside of MongoDB. Right?

Instead, we get a whole lot of extra information that we're actually not that interested in for this specific use case. So what we want to do is make our messages cleaner, and we're going to do so by using a transformer. Now, a transformer is a piece of logic that we can add to a connector for transforming a message as it flows through the connector itself. There are two types of transformers that we can add to a connector. The first one is a mapping, which is a little script in which we say how the message should be mapped. Or we can use a service transformer, which allows us to call out to a different service on NATS: send our data there, do something with it, get the response back, and use that in the further processing of our connector.

So in this case, we're going to use a mapping, and I've already written my mapping, which looks like this. Basically, I take the full document and I replace everything that was in there, putting everything at the root level of my message. So let's save this and simply stop our connector and start our connector again. There we go.
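The exact mapping isn't reproduced in this transcript, but as a rough sketch in Bloblang (the mapping language this particular runtime uses, per the Q&A): MongoDB change events carry the changed record in a `fullDocument` field, so promoting it to the root of the message is a single line:

```
root = this.fullDocument
```

Assigning to `root` replaces the entire outgoing message, which matches the described result: the document's own fields end up at the root level of the NATS message, without the change-stream envelope.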

Now if we take a look, we can already see that this message looks a lot better. And this actually concludes the first step of our demo. Now what do we actually want to accomplish? Well, I explained earlier what the goal was of this demo, and it was reading sensor data, but then also detecting the outliers. Well, we are reading the sensor data, but we're not detecting the outliers just yet.

In order to do that, we need to build a new connector. So let's do just that. Press on the create connector button. And in this case, we're going to select an outlet, because we will be reading data from NATS and sending it to MongoDB if it adheres to specific criteria that we set up front. So we have the outlet.

The first thing that we need to do is define where the messages need to come from. And, obviously, we will be reading from the demo NATS environment, and we will be reading from the sensor data subject like we did previously. Now, I will also add a queue, just to make sure that if we have multiple instances of our connector, things don't get messed up: you won't get multiple instances processing the same message. Now, we will be writing to MongoDB, so I select MongoDB, and then I need to provide a little bit of information about MongoDB again, the first being the URL that we need to connect to it. Once we have the URL, again, we need to specify which database we want to write to, and then which collection in that database we want to write to as well.

Now, the document map is something specific where we basically say how a NATS message needs to be transformed into a document. The messages already look pretty okay, so we can just pass them on as they are. There are some other options that we can select here as well, like the operation that we want to perform. In this case, we're just going to rely on insert one, which is basically inserting a record for every message that is being passed. We will give it a specific name as well, so let's call it analyze sensor data, and we will give it a description like so.
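As a sketch of the pass-through case described here (assuming the runtime's Bloblang mapping language), a document map that forwards each NATS message unchanged as the document to insert is just:

```
root = this
```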

Now we have our connector, but it's not finished just yet, because if you would start it like this, it would just get the information from NATS core and send it to MongoDB one-to-one, which isn't what we want. We actually want to do filtering. We want to make sure that only messages with a temperature higher than 35 degrees centigrade will be sent through. So what we do is add a new mapping, and this mapping will look like the following. In this case, we will check if the value is higher than 35.

And if it is, then we will just pass on the message. If it isn't, we will drop a tombstone, which is as much as saying: we're not interested in this message anymore, and you shouldn't be processing it any further in this connector. So by doing so, we now have the filtering in place for getting only the outliers and sending them off to MongoDB. And this means that we can now deploy our connector. So there we go.
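A hedged sketch of such a filter in Bloblang (not the exact mapping from the demo; the field name `value` is an assumption here): `deleted()` marks the message as a tombstone, so the connector drops it instead of writing it to MongoDB.

```
# Pass outliers through; tombstone everything else.
root = if this.value > 35 {
  this
} else {
  deleted()
}
```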

It is deployed. And now, if we take a look at our temperatures dashboard, it might take a while before the outliers start coming in, but you can already see: these are the outliers that are detected. The redder the squares are, the more times an outlier was detected for that specific device and that specific temperature sensor. So we have the data, the sensor readings that are in MongoDB. They're being sent to NATS.

And from NATS, they're being sent to MongoDB again, but only if they match, if they are greater than a specific temperature. You might have a lot of questions at this point, and I'm here to answer pretty much all of them if you want. But I also want to point you to where you can find more information. First of all, there is a documentation website.

If you go to docs.synadia.com/connect, you can find more information on what is already available, which components are already available in Synadia Connect. The other thing that you might want to do is get the CLI that you can play around with, and then also get an SDK and start building either your own runtimes or other solutions with it. Then, last but not least, there is the Connectors channel on the NATS Slack. Jump in.

Post a message. I'm usually around there. So if you have any questions, reach out, and I'll be happy to talk to you about pretty much anything. Thank you very much, and enjoy RethinkConn 2025.

Nate Emerson:

Alright. Well, thank you so much to Daan. Derek's dropping some links there in the chat if you'd like to check out those resources on Connect. "Can you send out an email with a combined list of all of the project and product URLs mentioned in these talks?" That's a great call.

Yeah.

Daan Gerits:

Yeah. Sure thing. Please send out...

Nate Emerson:

a follow-up there. That should help. And here is Daan for some live Q&A. Thank you so much for that presentation. The demo was fantastic.

Really enjoyed that. First off, pulling up the live Q&A here. So the first question we have is: what language is used when filtering the messages?

Daan Gerits:

Yeah. So the language that is used is actually dictated by the runtime that's running underneath. This one in particular is using a language called Bloblang, which is a purpose-built mapping language that makes it rather easy to manipulate data. It came from a project called Benthos that we were very big fans of.

Nate Emerson:

Yeah. And there was another question there, actually, asking: is the underlying runtime Redpanda Connect slash Benthos?

Daan Gerits:

It's actually Wombat. Wombat is a fork that I made a while back, just before Redpanda acquired Benthos, with the whole idea that I wanted to have something independent and give people the ability to experience Benthos like we used to before. So, yeah, it's running on top of Wombat.

Nate Emerson:

And then looking at...

Daan Gerits:

One thing to add, Nate. So this is one specific runtime. You're perfectly free to build whatever runtime you see fit, in whatever language you see fit. If you want to build one in Rust, please do so. That's going to be pretty fun.

Or Java, or whatever. As I said in the presentation, as long as you conform to what that specification looks like, then you're fine.