
14 factors of adaptive edge (part one)

Synadia
Jun 20, 2022

How the next generation of distributed, multi-cloud and edge-autonomous systems is forcing the evolution of message brokers and streaming data systems into adaptive connection fabrics that are lightweight, secure, distributed and environment-agnostic.

We have entered a new era of distributed applications and infrastructure driven by performance, security and resilience on the Edge. Makers of distributed applications require new capabilities that allow applications to continue functioning effectively, even under poor or intermittent connectivity. As performance expectations and service level agreements approach near real-time for a growing array of functionality, applications require greater persistence and autonomy across far more widely distributed arrays of nodes; the Edge becomes the compute fabric, the memory and the application tier, all rolled into one. The Adaptive Edge is the new construct that fulfills these more exacting requirements. Just as web applications forced a rapid evolution of technology architectures in the early Cloud Era, Edge Applications are forcing a similarly rapid evolution in the Edge Era. This paper attempts to define a set of design patterns and principles for the applications and architectures on the Adaptive Edge.

14 Factors

Message brokers and event streaming software are critical infrastructure for modern software applications and technology stacks. These systems make it possible for applications, systems, and services to exchange information, even if they are written in different languages or deployed on different platforms or environments. Message brokers emerged as software and hardware became more distributed and composed of discrete services. Event streaming software, or event buses, emerged from the message broker world to handle massive volumes of streaming information and ensure that data could be found by the users who need it.

Message brokers and event streaming systems both relied on one-to-many architectures. Over time, as systems grew still more distributed, the publish-subscribe (pub/sub) model became more common for both, allowing end users or services to subscribe to and store the streams of data or messages they needed to consume.
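
To make the pub/sub model concrete, here is a minimal sketch using the NATS Go client (NATS is the messaging system maintained by Synadia). The broker choice, subject name and payload are illustrative assumptions rather than anything prescribed by this post; any pub/sub system follows the same shape.

```go
// A minimal publish-subscribe sketch using the NATS Go client.
// The subject name and payload are illustrative placeholders.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a locally running server (nats.DefaultURL is nats://127.0.0.1:4222).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// Any number of services can subscribe to the same subject (one-to-many).
	if _, err := nc.Subscribe("orders.created", func(m *nats.Msg) {
		fmt.Printf("received: %s\n", string(m.Data))
	}); err != nil {
		log.Fatal(err)
	}

	// The publisher does not need to know who, or how many, are listening.
	if err := nc.Publish("orders.created", []byte(`{"id": 42}`)); err != nil {
		log.Fatal(err)
	}

	// Give the asynchronous subscriber a moment to process before exiting.
	time.Sleep(100 * time.Millisecond)
}
```

The decoupling shown here is what the many-to-many shift builds on: publishers and subscribers only agree on a subject, not on each other's location or cardinality.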

While the core functionality provided by legacy message brokers and event buses remains crucial, current systems can no longer handle the requirements of modern distributed applications. These applications are rapidly moving to the “Edge” and require local, adaptive processing and persistence capabilities. Those capabilities are either not readily available or are complicated and expensive to provide with current message broker or streaming data technologies. This is largely because the paradigm for moving and exchanging data and messages has shifted from one-to-many to many-to-many.

Because legacy messaging and eventing systems are not designed to live on the edge, they often suffer from high latency and inconsistent delivery of key data to or between edge clients. Because the edge is where data is generated, by distributed applications interacting with users and sensors, the question becomes whether to move that data back to the cloud for processing or to process it on the edge with distributed business logic. By extension, this raises the requirement of managing and scaling data aggregation on the edge so that distributed applications can make decisions based on the latest information and are not forced to “trombone” large volumes of data back to cloud data centers.

Further, many-to-many architectures require the ability to continuously shape and reshape the network topology, taking into account intermittent connectivity, concerns about data transit and storage costs, and the limitations of cloud providers' SLAs and capabilities. Enterprises want not only the ability to run applications on the edge but also to allow multiple clients on the edge to continue acting in concert until connectivity is re-established.

Applications such as IoT or mobile applications today need the ability either to run as a full messaging and connection server on a small form-factor device or to function merely as a simple client, bridging the nodes of the application back to more centralized compute and storage. Even architects of more traditional applications that are not required to run on the edge are seeking out adaptive capabilities that require a more flexible connection fabric for messaging and streaming; for example, companies are looking to stand up the same application in multiple clouds while maintaining centralized awareness and management of processes and data. These cloud-agnostic designs require a unified service that can provide messaging and eventing within one cloud or across multiple clouds without depending on a hosted service in any single cloud.
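
As one illustration of the “full server on a small form factor” end of that spectrum, the sketch below embeds a NATS server directly inside a Go application, so a single process can act as the local connection fabric on an edge device while application code talks to it like any client. The use of NATS and the specific options shown are assumptions chosen for the sketch, not requirements of the pattern.

```go
// A sketch of embedding a messaging server inside a small-footprint
// application. The host, port and subject are illustrative placeholders.
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats-server/v2/server"
	"github.com/nats-io/nats.go"
)

func main() {
	// Start an in-process server so the edge node keeps a working
	// connection fabric even when upstream connectivity drops.
	opts := &server.Options{Host: "127.0.0.1", Port: 4222}
	ns, err := server.NewServer(opts)
	if err != nil {
		log.Fatal(err)
	}
	go ns.Start()
	if !ns.ReadyForConnections(5 * time.Second) {
		log.Fatal("embedded server did not start in time")
	}

	// Local application code connects to the embedded server like any client.
	nc, err := nats.Connect(ns.ClientURL())
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	if err := nc.Publish("edge.telemetry", []byte("ok")); err != nil {
		log.Fatal(err)
	}
	nc.Flush()
}
```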

Another emergent use case is building an internal SaaS capability: an organization runs an on-premises version of an external SaaS service that mirrors all of its capabilities and benefits from the external service's product and feature improvements, without exposure to the public Internet. (This configuration is increasingly popular in regulated industries and in at-scale organizations seeking to reduce costs.)

All of these emerging use cases are versions of a similar architecture, what we are calling the Adaptive Edge. The Adaptive Edge Architecture is a flexible deployment topology overlaid atop legacy architectures based on physical data centers and cloud computing. The Adaptive Edge can either integrate with these constructs or function as an independent construct that supports one-to-many, many-to-many, and combinations of the two, along with sharding of the connection fabric into local clusters that can continue to deliver back-end processing and application functionality even with intermittent or interrupted connectivity.

The Adaptive Edge also requires not only persistence but also a lightweight attached data store (such as a key/value or object store) that can make persistence intelligent, reducing data round trips and better enabling edge application functionality. A crucial linchpin of this Adaptive Edge Architecture is a multi-tenant security model that makes it easy to create tightly partitioned namespaces and sub-groups for data connections, messaging, eventing and other actions. These capabilities make the Adaptive Edge a more flexible paradigm: a general-purpose, massively distributed control plane for the next generation of multi-cloud and edge-native applications.
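
As a hedged illustration of such a lightweight attached data store, the sketch below uses the key/value API the NATS Go client exposes on top of JetStream, assuming a server with JetStream enabled; the bucket and key names are placeholders, and any comparable embedded key/value or object store would serve the same role.

```go
// A sketch of a lightweight key/value store attached to the messaging
// layer, using the JetStream KV API. Bucket and key names are placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Create (or reuse) a KV bucket that edge applications can read and
	// write locally, avoiding a round trip to a central database per lookup.
	kv, err := js.CreateKeyValue(&nats.KeyValueConfig{Bucket: "edge-config"})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := kv.Put("sensor.42.threshold", []byte("0.75")); err != nil {
		log.Fatal(err)
	}

	entry, err := kv.Get("sensor.42.threshold")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", entry.Key(), string(entry.Value()))
}
```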

Designing Systems for the Adaptive Edge

A common pattern has emerged among distributed systems: a central group of applications services, and receives data from, edge nodes. Usually there is telemetry coming from the edge. Sometimes edge nodes have their own services for command and control and for access to local data. In essence, this is a federated edge, loosely coupled and malleable enough to support existing services and create new ones. Often identified as an IoT artifact, this pattern also works quite well for other use cases that benefit from an adaptive edge layer and flexible service and message delivery.
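
A minimal sketch of that federated pattern, again using the NATS Go client purely as an illustrative stand-in: a central service aggregates telemetry from every edge node through a wildcard subscription, while command and control reaches an individual node via request-reply. The subject hierarchy is an assumption chosen for the example.

```go
// A sketch of the federated-edge pattern: wildcard subscriptions for
// telemetry aggregation, request-reply for command and control.
// Subjects such as "telemetry.<site>.<device>" are illustrative.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// Central side: one subscription receives telemetry from every site and device.
	nc.Subscribe("telemetry.>", func(m *nats.Msg) {
		fmt.Printf("telemetry on %s: %s\n", m.Subject, string(m.Data))
	})

	// Edge side: a device answers command-and-control requests addressed to it.
	nc.Subscribe("cmd.device42", func(m *nats.Msg) {
		m.Respond([]byte("rebooting"))
	})

	// An edge device publishes telemetry; the central service issues a command.
	nc.Publish("telemetry.plant7.device42", []byte(`{"temp": 71.3}`))

	reply, err := nc.Request("cmd.device42", []byte("reboot"), 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("device replied: %s\n", string(reply.Data))
}
```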

Enterprises are executing this architecture by integrating multiple technology systems and hacking them into the Adaptive Edge pattern. These designs require large amounts of resources, staff time, and infrastructure to become operational. Yet, in the end, hacked-together Adaptive Edge architectures yield systems that are fragile, difficult to monitor, and challenging to secure.

To build successfully for the Adaptive Edge, a different approach is required: one that creates a federation of equally important and capable nodes or servers, which can reside in any form factor and are just as comfortable on a small Windows PC as on a cloud server. For a truly Adaptive Edge, communication systems must support, and even improve on, incumbent paradigms for both one-to-one and one-to-many communication models. But Adaptive Edge systems must also function as a superset of both, behaving more like a fabric or mesh that enables almost any shape of communications network topology. These communication fabrics behave more like a graph, allowing connections to form, self-organize and exist as part of the greater whole, or to remain independent subsets that reconnect to the fabric opportunistically, whenever connectivity allows.

Discover more in Part 2...