Suppose you operate popup clinics in rural villages and remote locations where there is no internet. You need to capture and share data across the clinic to provide vital healthcare, but if the apps you use require an internet connection to work, they can’t operate in these areas.
Or perhaps you’re an oil and gas operator that needs to analyze critical warning data from a pressure sensor on a platform in the North Sea. If the data must be processed in cloud data centers, it has to travel vast distances, at great expense, over unreliable networks. The resulting latency means that by the time a result is sent back to the platform, it may be too late to take any action.
These kinds of use cases represent a growing class of apps that require 100% uptime and real-time speed, guaranteed — regardless of where they are operating in the world.
A fundamental challenge in meeting these requirements remains the network — there are still huge swaths of the globe with little or no internet — meaning apps that depend on connectivity cannot operate in those areas.
Emerging advances in network technology are closing those gaps, but no matter the coverage, reliability or speed of a network, it will inevitably suffer slowness and outages that affect the applications that rely on it, resulting in a poor user experience and business downtime.
The Responsible Development Choice
How do you guarantee availability and ultra-low latency for apps, especially when operating in internet dead zones? The answer lies in understanding the limitations of network connectivity and architecting around them.
The responsible development choice is to architect and build applications that:
- Can still operate when network connectivity is interrupted or unavailable.
- Can make the most efficient use of network connectivity when it is available, because it can be fleeting and may not always be fast.
To do this, you must bring data processing and compute infrastructure to the near side of the network — that is, to the literal edge, such as in the popup clinic van or on the oil platform, reducing dependencies on distant cloud data centers.
Taking It to the Edge
A cloud computing architecture assumes that data storage and processing are hosted in the cloud: application services and the database run in the cloud and are accessed from edge devices via REST calls.
The cloud architecture depends on the internet for apps to operate properly. If there is any network slowness or interruption, the apps will slow or stop.
Edge computing architectures bring data processing to the edge, close to applications, which makes them faster because data doesn’t have to travel all the way to the cloud and back. And it makes them more reliable because local data processing means they can operate even without the internet.
It’s not about getting rid of the cloud; you still need that eventual aggregation point. It’s about extending the cloud to the near side of the network. Edge architectures use the network for synchronization, where data is synced across the application ecosystem when connectivity is available.
And it’s important to note that by “sync” we mean something more than just using the network to replicate data. It’s also about using precious, fleeting bandwidth as efficiently as possible when it’s available.
Sync technology provides cross-record compression, delta compression, batching, filtering, restartability and more. Because of these efficiencies, it pushes less data over the wire, which is critical on slow, unreliable or shared-bandwidth networks.
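To make these efficiency gains concrete, here is a minimal sketch. This is not Couchbase’s actual wire protocol; the document shapes and numbers are invented for illustration. It compares re-sending full documents one at a time with sending only the changed fields, batched and compressed together:

```python
import json
import zlib

# Hypothetical product docs: 1,000 records, one small field changed per doc.
docs = [{"id": i, "name": f"product-{i}", "price": 10.0, "qty": 100} for i in range(1000)]
updated = [dict(d, qty=d["qty"] - 1) for d in docs]

# Naive replication: re-send every full document in its own request.
full_bytes = sum(len(json.dumps(d).encode()) for d in updated)

# Delta sync: send only the changed fields, batched into one payload
# and compressed across records.
deltas = [{"id": d["id"], "qty": d["qty"]} for d in updated]
batch = json.dumps(deltas).encode()
delta_bytes = len(zlib.compress(batch))

print(f"full replication: {full_bytes} B, batched deltas: {delta_bytes} B")
assert delta_bytes < full_bytes / 3
```

Batching also helps compression itself: repeated keys and similar values across records compress far better in one payload than they would document by document.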
Simply put, an edge architecture allows you to:
- Capture, store and process data where it happens, providing availability and speed.
- Sync data securely and efficiently throughout the app ecosystem as connectivity allows, providing consistency.
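The two points above can be sketched as a “local-first” store. This is a hypothetical in-memory stand-in, not a real Couchbase API: writes and reads happen locally regardless of connectivity, and pending changes sync to the cloud only when the network is up:

```python
class LocalFirstStore:
    """Sketch of a local-first store: writes land locally at memory speed
    and queue for background sync whenever the network is available."""

    def __init__(self):
        self.local = {}    # edge-local storage (always available)
        self.pending = []  # changes awaiting sync to the cloud
        self.cloud = {}    # stand-in for the cloud database

    def write(self, key, doc):
        # Local writes succeed regardless of connectivity.
        self.local[key] = doc
        self.pending.append(key)

    def read(self, key):
        # Reads are served locally: no round trip, no network dependency.
        return self.local[key]

    def sync(self, online):
        # Push pending changes only when connectivity allows.
        if not online:
            return 0
        synced = len(self.pending)
        for key in self.pending:
            self.cloud[key] = self.local[key]
        self.pending.clear()
        return synced

store = LocalFirstStore()
store.write("p1", {"name": "bandage", "qty": 40})
assert store.read("p1")["qty"] == 40  # works offline
assert store.sync(online=False) == 0  # nothing moves without a network
assert store.sync(online=True) == 1   # syncs when connectivity returns
```

Real sync engines add conflict resolution, change feeds and durability, but the availability argument is the same: the app never blocks on the network.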
Now let’s explore how to adopt an edge architecture.
All You Have to Do Is ASC
Over the past couple of years, we’ve seen a wave of next-gen technologies designed to make applications more available in more places and for more users than ever before.
These advances are lowering the barrier to entry, making it easier for organizations to adopt edge architectures that guarantee speed, uptime and efficient bandwidth use for applications, especially those that operate in remote locations and internet dead zones.
To build an edge architecture, you need four fundamental system components:
- A cloud compute environment.
- An edge compute environment.
- A network connecting the cloud and the edge.
- A database that synchronizes from the cloud to the edge.
Here we combine three state-of-the-art technologies to create an edge architecture that can operate at high speed, all the time, anywhere on the planet.
We call it the ASC stack:
- AWS Snowball
- SpaceX Starlink
- Couchbase Capella
What Is AWS Snowball?
AWS Snowball is a service that provides secure, portable, rugged devices (called AWS Snowball Edge devices) that run AWS infrastructure for powering applications at the edge.
The devices are about the size of a suitcase and deliver local computing, data processing and data storage for disconnected environments such as ships, mines, oil platforms, field clinics and remote manufacturing facilities. Wherever AWS infrastructure is required but impractical due to a lack of reliable internet, Snowball provides a portable solution.
Described in simpler terms, Snowball is an “AWS-data-center-in-a-box” that arrives at your door preconfigured with AWS services and ready to go. It supports AWS S3, EC2, Lambda, EBS and more. You plug it in, then access and manage the environment via the AWS Control Plane over local networks.
By providing a portable, familiar, standards-based infrastructure, AWS Snowball makes it easy for anyone to set up and run edge data centers without worrying about internet connectivity.
What Is SpaceX Starlink?
Starlink is a next-gen satellite internet service from SpaceX. It is made up of “constellations” of thousands of small satellites in low Earth orbit, about 340 miles up. This contrasts with traditional large geostationary satellites, which sit in a fixed position about 22,000 miles up.
Because of the shorter physical distance between the customer’s dish and the satellite, Starlink can deliver 20- to 50-millisecond latency on average, which is much faster than traditional satellite internet (which, due to the greater distance, can suffer latencies up to 600 milliseconds or more).
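A quick back-of-envelope calculation shows why orbital altitude dominates satellite latency. It counts only speed-of-light propagation (the signal traverses the altitude four times per round trip: user to satellite to ground station, and back) and ignores routing and processing delays, so real-world figures are higher:

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_ms(altitude_km):
    """Propagation-only round trip: four traversals of the orbital altitude.
    Ignores routing, queuing and processing, so actual latency is higher."""
    return 4 * altitude_km / C_KM_S * 1000

leo = round_trip_ms(547)     # Starlink shell, ~340 miles up
geo = round_trip_ms(35_786)  # geostationary orbit, ~22,000 miles up

print(f"LEO: {leo:.1f} ms, GEO: {geo:.1f} ms")
```

Propagation alone costs a geostationary link hundreds of milliseconds per round trip, while the LEO floor is in the single digits, which is why Starlink’s observed 20- to 50-millisecond latency is physically possible and traditional satellite latency is not.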
The lower orbit and smart network technology allow Starlink to offer performance comparable to terrestrial networks: its “Business” service offers download speeds of up to 350 Mbps and latency of 20-40 ms.
While Starlink provides vital internet connectivity to areas with few or no other options, it is not foolproof. Connections can suffer slowdowns during peak hours when most users in a given cell are likely to be sharing bandwidth, or if the dish experiences interference from nearby household appliances, fluorescent lights or other Wi-Fi networks. And obstructions such as cloud cover, tree branches or thick walls can interrupt the connection.
As such, it’s important to develop apps that can withstand intermittent slowness and interruptions and remain fully available. To do so, you must maximize the efficient use of this precious shared network resource by moving the smallest amount of data possible, in its most compact form.
What Is Couchbase?
Couchbase is a NoSQL cloud database platform with in-memory speed, SQL familiarity and JSON flexibility. It natively supports the edge architecture by providing:
- Couchbase Capella: A fully managed cloud database-as-a-service (DBaaS).
- Capella App Services: Fully managed services for file storage, bidirectional sync, authentication and access control for mobile and edge apps.
- Couchbase Lite: A lightweight embeddable version of the Couchbase database.
Capella App Services synchronizes data between the backend cloud database and edge databases as connectivity allows; during network disruptions, apps continue to operate thanks to local data processing.
With Couchbase, you can create multitier edge architectures to support any speed, availability or low bandwidth requirement.
Testing the Stack
Couchbase Engineering wanted to establish a baseline for how well the ASC stack works as a whole, with each technology enhancing and augmenting the others’ functionality.
To do so, we set up the stack in a remote location in a classic edge architecture:
- AWS Snowball Edge provides computing infrastructure at the edge.
- Couchbase is deployed to the Snowball Edge device for local data storage and processing.
- Couchbase Capella serves as the hosted backend DBaaS in the cloud.
- Starlink provides the network from the Snowball Edge device to Couchbase Capella.
- Couchbase Capella App Services provides secure synchronization between the edge database and the cloud database.
With this basic edge architecture in place, we set out to measure its effectiveness in reducing latency and bandwidth consumption as compared to a cloud architecture where the app reads and writes over Starlink via REST.
We ran four test scenarios:
- Test 1 was to write 1,000 new docs to the Snowball Edge over a wired local area network (LAN) and measure the amount of data transferred and latency per operation.
- Test 2 was to sync the 1,000 new docs from Snowball to Capella over Starlink and measure the amount of data transferred and complete time to transfer.
- Test 3 was to write 1,000 new docs to Capella over Starlink and measure the amount of data transferred and latency per operation.
- Test 4 was to sync the 1,000 new docs from Capella to Snowball over Starlink and measure the amount of data transferred and complete time to transfer.
The tests used a real-world product catalog dataset of 1,000 products, averaging 650 bytes per record.
Apps accessing the Couchbase database running on the local Snowball device showed significantly reduced latency as compared to accessing the cloud database.
For reads and writes, results showed that the edge architecture reduced latency by 98% compared with the cloud architecture.
When comparing the edge architecture and the cloud architecture for bandwidth usage, results showed that the total data volume sent over Starlink decreased substantially on the edge architecture.
Because of efficiencies enabled by synchronization, like cross-record compression, delta compression, batching, filtering, restartability and more, the edge architecture makes the best use of shared bandwidth. That’s critical during peak periods, under heavy cloud cover, or in remote areas like forests or jungles where obstructions may impede speed and throughput.
Syncing updates over the edge architecture reduced the amount of data transferred over Starlink by 42% compared with using REST calls in the cloud architecture.
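As a rough illustration of the mechanism (not a reproduction of the measured result), consider the per-request framing cost of one REST call per document versus a single batched sync session. The overhead figures below are assumptions chosen for illustration, not measurements:

```python
DOCS = 1000
DOC_BYTES = 650      # average record size from the test dataset
HTTP_OVERHEAD = 500  # assumed per-request header/TLS framing cost
SYNC_OVERHEAD = 2000 # assumed one-time handshake for a sync session

# Cloud architecture: one REST call per document over Starlink.
rest_total = DOCS * (DOC_BYTES + HTTP_OVERHEAD)

# Edge architecture: docs written locally, then synced in one batched session.
sync_total = SYNC_OVERHEAD + DOCS * DOC_BYTES

saving = 1 - sync_total / rest_total
print(f"REST: {rest_total} B, sync: {sync_total} B, saving: {saving:.0%}")
```

Even before any compression, eliminating per-document request framing yields a saving in the same ballpark; delta and cross-record compression push the real number further.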
These test results establish a conservative baseline for latency and bandwidth improvements that can be seen when using the ASC stack for a basic edge architecture. In a large production environment, improvements are likely to be more substantial.
The Edge Is Closer Than You Think
Couchbase has a long history of helping customers meet critical requirements for real-time speed and 100% uptime for their applications. And with the ASC stack, Couchbase joins forces with AWS Snowball and SpaceX Starlink to help organizations adopt edge computing faster, easier and in more places than ever before.
And the best part is, the stack is so portable, you can literally take it with you wherever you go!
See for yourself how easy it is to get started. Try Couchbase Capella today for free.