The hype around edge computing is growing, and rightfully so. By bringing compute and storage closer to where data is generated and consumed, such as IoT devices and end-user applications, organizations are able to deliver low-latency, reliable and highly available experiences to even the most bandwidth-hungry, data-intensive applications.
While delivering fast, reliable, immersive and seamless customer experiences is among the key drivers of the technology, another often-understated reason is that edge computing helps organizations adhere to stringent data privacy and governance laws that hold businesses accountable for transferring sensitive information to central cloud servers.
Improved network resiliency and bandwidth costs also incentivize adoption. In short, without breaking the bank, edge computing can enable applications that are compliant, always on and always fast — anywhere in the world.
It’s no surprise that market research firm IDC is projecting edge networks to represent more than 60% of all deployed cloud infrastructures by 2023, and global spending on edge computing will reach $274 billion by 2025.
Plus, with the influx of IoT devices — the State of IoT Spring 2022 report estimates that around 27 billion devices will be connected to the internet by 2025 — enterprises have the opportunity to leverage the technology to innovate at the edge and set themselves apart from competitors.
In this article, I’ll run through the progression of edge computing deployments and discuss ways to develop an edge strategy for the future.
From on-premises servers to the cloud edge
Early instantiations of edge computing deployments were custom hybrid clouds. Supported by a cloud data center, applications and databases ran on on-premises servers that a company was responsible for deploying and managing. In many cases, a batch file transfer system moved data between the on-premises servers and the backing data center.
Between the capital and operational expenditure costs, scaling and managing on-premises data centers can be out of reach for many organizations. Not to mention, there are use cases such as offshore oil rigs and airplanes where setting up on-premises servers simply isn't feasible due to factors such as space and power requirements.
To address concerns around cost and complexity of managing distributed edge infrastructures, it’s important for the next generation of edge computing workloads to leverage the managed edge infrastructure solutions offered by major cloud providers, including AWS Outposts, Google Distributed Cloud, and Azure Private MEC.
Rather than having multiple on-premises servers storing and processing data, these edge infrastructure offerings can do the work. Organizations can save money by decreasing expenses related to managing distributed servers, while benefiting from the low latency offered by edge computing.
Furthermore, offerings such as AWS Wavelength allow edge deployments to make use of the high bandwidth and low latency features of 5G access networks.
Leveraging managed cloud-edge infrastructure and high-bandwidth edge networks solves only part of the problem. A key element of the edge technology stack is the database and data sync layer.
In the example of edge deployments that use antiquated file-based data transfer mechanisms, edge applications run the risk of operating on old data. Therefore, it’s important for organizations to build an edge strategy that takes into account a database suitable for today’s distributed architectures.
Using an edge-ready database to bolster edge strategies
Organizations can store and process data in multiple tiers of a distributed architecture: in central cloud data centers, at cloud-edge locations and on end-user devices. Service performance and availability improve at each successive tier closer to the user.
To that end, embedding a database with the application on the device provides the highest levels of reliability and responsiveness, even when network connectivity is unreliable or nonexistent.
However, there are cases where local data processing isn’t sufficient to derive relevant insights or where devices are incapable of local data storage and processing. In such cases, apps and databases distributed to the cloud-edge can process data from all the downstream edge devices while taking advantage of low latency and high bandwidth pipes of the edge network.
Of course, hosting a database in central cloud data centers is essential for long-term data persistence and aggregation across edge locations. In this multi-tier architecture, processing the bulk of data at the edge minimizes the amount of data backhauled over the internet to central databases.
With the right distributed database, organizations are able to ensure data is consistent and synchronized at every tier. This process isn’t about duplicating or replicating data across each tier; rather, it’s about transferring only the relevant data in a way that isn’t impacted by network disruptions.
Take retail, for example. Only data related to a given store, such as in-store promotions, is transferred down to that store's edge location, and the promotions can be synced down in real time. This ensures each store location is only working with data relevant to it.
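The retail scenario above can be sketched in code. This is a minimal, hypothetical illustration of filtered sync, not any specific vendor's API: documents in the central database are tagged with the store locations they apply to, and a store edge pulls only the documents tagged for it. The `channels` field, store IDs and function names are all assumptions for illustration.

```python
# Hypothetical sketch of filtered sync: each document is tagged with the
# store locations it applies to, and a store edge pulls only documents
# whose channels include its own store ID.

def sync_filter(doc: dict, store_id: str) -> bool:
    """Return True if this document should sync down to the given store."""
    return store_id in doc.get("channels", [])

def pull_changes(change_feed: list, store_id: str) -> list:
    """Replicate only the documents relevant to one store location."""
    return [doc for doc in change_feed if sync_filter(doc, store_id)]

# Example change feed from the central database: two promotions,
# each scoped to specific stores via its "channels" field.
changes = [
    {"id": "promo-101", "type": "promotion", "channels": ["store-ny", "store-sf"]},
    {"id": "promo-102", "type": "promotion", "channels": ["store-la"]},
]

ny_docs = pull_changes(changes, "store-ny")
print([d["id"] for d in ny_docs])  # only the promotion scoped to store-ny
```

The key design point is that the filter runs as part of replication, so irrelevant data never crosses the network to the store, rather than being copied everywhere and hidden after the fact.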
It’s also important to understand that in distributed environments, data governance can become a challenge. At the edge, organizations are often dealing with ephemeral data, and the need to enforce policies around accessing and retaining data at the granularity of an edge location makes things extremely complex.
That’s why organizations planning their edge strategies should consider a data platform that is able to grant access to specific subsets of data only to authorized users and implement data retention standards across tiers and devices, all while ensuring sensitive data never leaves the edge.
An example of this would be a cruise line that grants access to voyage-related data to a sailing ship. At the end of the trip, data access is automatically revoked from cruise line employees, with or without internet connectivity, to ensure data is protected.
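One way such offline revocation can work is to make grants self-expiring: the access window is stored with the grant and evaluated locally on the ship, so no call to a central auth server is needed for access to lapse. The sketch below is a hypothetical illustration of that idea; the field names and dates are assumptions, not a real product's schema.

```python
# Hypothetical sketch: an access grant carries a validity window tied to
# the voyage dates and is checked locally, so access expires at the end
# of the trip even with no internet connectivity.

from datetime import datetime, timezone

def has_access(grant: dict, now: datetime) -> bool:
    """A grant is valid only within its voyage window; no network call needed."""
    return grant["valid_from"] <= now < grant["valid_until"]

grant = {
    "user": "crew-042",
    "scope": "voyage-789-data",
    "valid_from": datetime(2024, 6, 1, tzinfo=timezone.utc),
    "valid_until": datetime(2024, 6, 8, tzinfo=timezone.utc),  # end of trip
}

print(has_access(grant, datetime(2024, 6, 3, tzinfo=timezone.utc)))  # during voyage: True
print(has_access(grant, datetime(2024, 6, 9, tzinfo=timezone.utc)))  # after voyage: False
```

Because the expiry travels with the data rather than living only in a central server, the policy is enforced at the edge location itself, which is exactly the granularity the paragraph above calls for.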
Moving forward, edge first
The right edge strategy empowers organizations to capitalize on the growing ocean of data emanating from edge devices. And with the number of applications at the edge rising, organizations looking to be at the forefront of innovation should expand their central cloud strategies with edge computing.
Priya Rajagopal is the director of product management at Couchbase (NASDAQ: BASE), a provider of a leading modern database for enterprise applications that 30% of the Fortune 100 depend on. With over 20 years of experience in building software solutions, Priya is a co-inventor on 22 technology patents.