Using NATS with ACI BASE24-eps card payments engine
Adrian Florea shares how ACI Worldwide leverages NATS for a mission critical payment system in the retail domain with strict non-functional requirements (NFRs).
“Let's say I benchmarked some of the traditional alternatives to NATS without giving names. Obviously one thing that stands out with NATS, and everybody said it, the simplicity. We at ACI love simple, scalable, resilient things. When we saw one binary that can do it all, we said, ‘yep, that's the one.’”
— Adrian Florea, ACI Worldwide
Full Transcript
Adrian Florea:
Hello! I'm Adrian Florea, a Senior Architect at ACI Worldwide. Today I'm going to talk briefly about using NATS for a banking domain application for card payments.
This will show an example of NATS usage for a mission-critical payment system, specifically in the retail payments domain. This domain is known for very strict non-functional requirements (NFRs) like low latency, high availability, and high volume.
As a bit of background, I'll talk about ACI BASE24-eps. This is a card payments switch and authorization engine that can also handle non-card transactions.
The profile of the application is high volume, currently around 2,000 TPS, and very low latency: we have a maximum of 200 milliseconds of latency per transaction, and that includes the messaging subsystem, database access, and hardware security devices. It's fault tolerant and highly available, with near-zero RTO and RPO, and horizontally scalable. And it is a C++ application that's currently deployed as a module in a monolith.
Containerized deployments are in the works. The system is generally based on asynchronous service invocations, with a few synchronous use cases.
In 2022, as part of a modernization effort, I ran a series of messaging system PoCs. NATS was part of that, and we had positive results from those PoCs. As a follow-up, in 2024 and this year we're adopting NATS for several ACI products. It's mainly used as an internal messaging system for BASE24-eps for cards, which means on the path to and from our external-facing Router I6S and among our internal services. All usage is based on BASE24-eps symbolic destinations that are mapped to NATS constructs like subjects, streams, and consumers. We use this for ISO payment messages coming from external sources like networks, ATMs, PoS terminals, and so on. It's also used for internal service calls, for command messages within our processes, and those are based on subject filtering.
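To make that mapping a little more concrete, here is a minimal sketch using the NATS C client (nats.c): one wildcard subscription filters ATM-originated ISO traffic while a publisher targets a concrete subject. The subject names (`b24.iso.atm.*` and so on) are illustrative placeholders, not ACI's actual naming scheme.

```c
// Minimal sketch of subject-based messaging with the NATS C client (nats.c).
// The "b24.iso.atm.*" subject hierarchy is a hypothetical example of how a
// symbolic destination could map to NATS subjects, not ACI's actual naming.
#include <stdio.h>
#include <nats/nats.h>

static void onIsoMsg(natsConnection *nc, natsSubscription *sub,
                     natsMsg *msg, void *closure)
{
    // A real handler would parse the ISO payment message; here we just log it.
    printf("Received on '%s': %.*s\n",
           natsMsg_GetSubject(msg),
           natsMsg_GetDataLength(msg),
           natsMsg_GetData(msg));
    natsMsg_Destroy(msg);
}

int main(void)
{
    natsConnection   *conn = NULL;
    natsSubscription *sub  = NULL;
    natsStatus        s;

    s = natsConnection_ConnectTo(&conn, NATS_DEFAULT_URL);

    // Subject filtering: one wildcard subscription covers every ATM-originated
    // ISO message, regardless of the concrete destination token.
    if (s == NATS_OK)
        s = natsConnection_Subscribe(&sub, conn, "b24.iso.atm.*", onIsoMsg, NULL);

    // Publish to one concrete subject that a symbolic destination maps to.
    if (s == NATS_OK)
        s = natsConnection_PublishString(conn, "b24.iso.atm.term0001", "0200 ...");

    if (s != NATS_OK)
        fprintf(stderr, "NATS error: %s\n", natsStatus_GetText(s));

    nats_Sleep(500); // give the async subscription a moment to deliver

    natsSubscription_Destroy(sub);
    natsConnection_Destroy(conn);
    nats_Close();
    return (s == NATS_OK) ? 0 : 1;
}
```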
We use NATS for alert or event logging messages. We also use the KV Store to maintain application context. We use our own congestion control algorithm, called Network Overload Management, which leverages stream characteristics like pending message counts and max messages. From that we derived a configurable, application-level selective message rejection algorithm, and we use a predefined subject hierarchy.
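As a rough illustration of the JetStream pieces mentioned here, the sketch below (again with the C client) creates a stream capped with MaxMsgs and a DiscardNew policy, the kind of limit a congestion-control scheme can lean on, plus a KV bucket for application context. Stream, bucket, and key names are hypothetical, and the actual Network Overload Management algorithm is ACI's own and is not shown.

```c
// Sketch of a capped memory stream and a KV bucket using the NATS C client.
// Names such as "B24_ISO" and "b24-context" are hypothetical examples.
#include <stdio.h>
#include <nats/nats.h>

int main(void)
{
    natsConnection *conn = NULL;
    jsCtx          *js   = NULL;
    jsStreamInfo   *si   = NULL;
    kvStore        *kv   = NULL;
    jsErrCode       jerr = 0;
    natsStatus      s;

    s = natsConnection_ConnectTo(&conn, NATS_DEFAULT_URL);
    if (s == NATS_OK)
        s = natsConnection_JetStream(&js, conn, NULL);

    // Memory stream capped at 100k messages. With DiscardNew, publishes beyond
    // the cap are rejected, which the application can treat as a congestion
    // signal and use to selectively reject lower-priority traffic.
    if (s == NATS_OK)
    {
        jsStreamConfig cfg;
        const char    *subjects[] = { "b24.iso.>" };

        jsStreamConfig_Init(&cfg);
        cfg.Name        = "B24_ISO";
        cfg.Subjects    = subjects;
        cfg.SubjectsLen = 1;
        cfg.MaxMsgs     = 100000;
        cfg.Discard     = js_DiscardNew;
        cfg.Storage     = js_MemoryStorage;
        s = js_AddStream(&si, js, &cfg, NULL, &jerr);
    }

    // KV bucket used to keep shared application context.
    if (s == NATS_OK)
    {
        kvConfig kvc;

        kvConfig_Init(&kvc);
        kvc.Bucket  = "b24-context";
        kvc.History = 1;
        s = js_CreateKeyValue(&kv, js, &kvc);
    }
    if (s == NATS_OK)
    {
        uint64_t rev = 0;
        s = kvStore_PutString(&rev, kv, "station.0001.state", "IN_SERVICE");
    }

    if (s != NATS_OK)
        fprintf(stderr, "NATS error: %s (jerr=%d)\n", natsStatus_GetText(s), (int) jerr);

    jsStreamInfo_Destroy(si);
    kvStore_Destroy(kv);
    jsCtx_Destroy(js);
    natsConnection_Destroy(conn);
    nats_Close();
    return (s == NATS_OK) ? 0 : 1;
}
```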
Here are some observations from implementing NATS with BASE24-eps. We observed low latency on both the pub-sub and request-reply paths, and simplicity is a major advantage. We knew that NATS is very easy to install and operate, and we confirmed it this time. This presents a very good opportunity to reduce the Total Cost of Operation (TCO).
We like the very good C API. NATS fits our distributed, horizontally scalable model, since it's so easy to deploy clusters, and NATS has excellent support for memory and persistent streams via JetStream.
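For the synchronous use cases, the request-reply path looks roughly like this with the C API. The subject and payload are illustrative placeholders, and the 200 ms timeout simply echoes the per-transaction latency budget mentioned earlier rather than any actual ACI configuration.

```c
// Minimal sketch of a synchronous request-reply call over Core NATS using the
// C client. "b24.svc.auth" and the payloads are hypothetical placeholders.
#include <stdio.h>
#include <nats/nats.h>

int main(void)
{
    natsConnection *conn  = NULL;
    natsMsg        *reply = NULL;
    natsStatus      s;

    s = natsConnection_ConnectTo(&conn, NATS_DEFAULT_URL);

    // Send a command message and wait up to 200 ms for a responder.
    if (s == NATS_OK)
        s = natsConnection_RequestString(&reply, conn, "b24.svc.auth",
                                         "AUTH-REQ 0100 ...", 200);

    if (s == NATS_OK)
    {
        printf("Reply: %.*s\n", natsMsg_GetDataLength(reply), natsMsg_GetData(reply));
        natsMsg_Destroy(reply);
    }
    else
        fprintf(stderr, "Request failed: %s\n", natsStatus_GetText(s));

    natsConnection_Destroy(conn);
    nats_Close();
    return (s == NATS_OK) ? 0 : 1;
}
```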
We plan to start offering this as the main messaging option for public and private cloud deployments, basically any Kubernetes-based ecosystem, as a first phase.
This last diagram shows a basic message flow from an external acquirer endpoint interacting with NATS. The central piece here is the NATS infrastructure. To the right we have the card switch application. It shows the use of subject-based messaging, streams, and the KV Store.
Thank you.
Nate Emerson:
Thank you so much, Adrian.
Adrian Florea:
Sure. Thank you for having me.
Nate Emerson:
First off, you did some benchmarking before choosing NATS. What other systems did you evaluate? And what were the high level takeaways from that PoC that you were doing?
Adrian Florea:
Let's say I benchmarked some of the traditional alternatives to NATS, without necessarily giving names here. But obviously one thing that stands out with NATS, and everybody said it, is the simplicity. We at ACI love simple, scalable, resilient things. When we saw one binary that can do it all, we said, "yep, that's the one."
In terms of the benchmark itself, which was a couple of years ago, I designed a prototyping workflow for several types of payments and ran the same prototype against a few messaging solutions. The main proving points for us were:
- Low latency - one of the main considerations,
- The fact that it is distributed,
- And especially for card payments, the KV.
Recently, as we've gotten to work more with the KV Store, it has proven to be one of the main capabilities that is not so easy to achieve with other messaging systems (and it's a great plus).
Nate Emerson:
Yeah, absolutely. We're curious to hear, with the NATS 2.11 release coming up and also some of the Orbit client extensions, what parts of the new server update or other upcoming features are you most excited about for your systems?
Adrian Florea:
Yeah, probably the features related to TTL (Time to Live) of messages. And I think there is something related to selective delete tags, or something like that. Those are the ones I'm looking forward to the most. I've just briefly scanned through the features. But yeah, we are trying to move along and get into load and stress testing with NATS.
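For context, per-message TTLs in NATS 2.11 are attached as a "Nats-TTL" header on the published message; a rough sketch with the C client follows. This assumes a 2.11+ server and a stream (not shown) that covers the subject and has per-message TTLs enabled, and the subject, payload, and TTL value are placeholders rather than anything from ACI's setup.

```c
// Sketch: attach a per-message TTL via the "Nats-TTL" header before publishing
// to JetStream (assumes NATS 2.11+ and a stream with message TTLs enabled).
#include <stdio.h>
#include <string.h>
#include <nats/nats.h>

static natsStatus publishWithTTL(jsCtx *js, const char *subject,
                                 const char *payload, const char *ttl)
{
    natsMsg   *msg  = NULL;
    jsPubAck  *pa   = NULL;
    jsErrCode  jerr = 0;
    natsStatus s;

    s = natsMsg_Create(&msg, subject, NULL, payload, (int) strlen(payload));
    if (s == NATS_OK)
        s = natsMsgHeader_Set(msg, "Nats-TTL", ttl); // e.g. "30s"
    if (s == NATS_OK)
        s = js_PublishMsg(&pa, js, msg, NULL, &jerr);

    jsPubAck_Destroy(pa);
    natsMsg_Destroy(msg);
    if (s != NATS_OK)
        fprintf(stderr, "publish failed: %s (jerr=%d)\n",
                natsStatus_GetText(s), (int) jerr);
    return s;
}

int main(void)
{
    natsConnection *conn = NULL;
    jsCtx          *js   = NULL;
    natsStatus      s;

    s = natsConnection_ConnectTo(&conn, NATS_DEFAULT_URL);
    if (s == NATS_OK)
        s = natsConnection_JetStream(&js, conn, NULL);
    if (s == NATS_OK)
        s = publishWithTTL(js, "b24.alert.term0001", "terminal offline", "30s");

    jsCtx_Destroy(js);
    natsConnection_Destroy(conn);
    nats_Close();
    return (s == NATS_OK) ? 0 : 1;
}
```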
Nate Emerson:
Yeah, I think a lot of people are looking forward to how much that will help with the management of KVs in particular and cleaning things up.
Adrian Florea:
KVs are awesome. The fact that it's distributed and so simple is hard to beat.
Nate Emerson:
Yes, absolutely. Any final thoughts or anything else you wanted to share?
Adrian Florea:
Just that, for us, it aligns very well with our philosophy for a modernized payment solution: speed, performance, simplicity, horizontal scalability. That's what we need.
Nate Emerson:
Yeah, those are some common refrains, for sure. And I think the simplicity in particular, just how much that smooths out the architectural needs and the operational costs, especially on Core NATS. It's just very simple and straightforward to get going.
Well, thank you so much, Adrian, for joining us today.
Adrian Florea:
Thanks, all.