It’s been a common problem for years: if you gather large amounts of data from a device or other source and need to process that data instantly, then moving it to a centralized database each time introduces latency.
IoT must deal with this issue time and again. For example, say there is a machine on a factory floor that analyzes the quality of an auto part that it makes. If the part is not up to specification, as determined by an optical scanner, it’s automatically rejected. While this keeps a human from inspecting the part, which would slow down the process, the round trip still takes a great deal of time. The system must transmit the data and image back to the centralized database and compute engine, where a determination is made as to the success of the manufacturing process. Then the results are communicated back to the machine.
The cloud complicates this process even more. Instead of going back to a local data center, the data is sent to a remote server that can be thousands of miles away. To make matters worse, it travels over the open Internet. Still, considering the amount of processing that needs to occur, the cloud may offer the best bang for the buck.
Overcoming the Latency Challenge
To address the latency problem, many suggest “computing at the edge.” It’s not a new concept, but it has recently been modernized. Computing at the edge pushes most of the data processing out to the edge of the network, close to the source. Then it’s a matter of dividing the work between data and processing at the edge, versus data and processing in the centralized system.
The concept is to process, at the edge, the data that needs to quickly return to the device. In this case, it’s the pass/fail data that indicates the success or failure of the physical manufacturing of the auto part. However, the data should also be centrally stored, and, ultimately, all of the data should be sent back to the centralized system, cloud or not, for permanent storage and future processing.
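That split can be sketched in a few lines of Python. This is a minimal illustration, not a real product API: the threshold, the reading format, and the function names are all assumptions. The edge node makes the pass/fail call locally, so the machine gets its answer immediately, while every reading is queued for the eventual batch upload to the central system.

```python
from collections import deque

PASS_THRESHOLD = 0.95  # hypothetical quality-score cutoff for the scanner

upload_queue = deque()  # readings held at the edge until the next sync


def inspect_part(part_id: str, quality_score: float) -> bool:
    """Decide pass/fail locally, with no round trip to the central system."""
    passed = quality_score >= PASS_THRESHOLD
    # Keep the full reading so the centralized system eventually gets everything.
    upload_queue.append({"part_id": part_id,
                         "score": quality_score,
                         "passed": passed})
    return passed


# The machine acts on the result right away...
print(inspect_part("part-001", 0.97))  # True: part is accepted
print(inspect_part("part-002", 0.80))  # False: part is rejected
# ...while the queued readings wait for the batch upload to the cloud.
print(len(upload_queue))  # 2
```

The key design point is that only the time-critical decision happens at the edge; nothing is thrown away, it is merely deferred.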
Edge processing means that we replicate processing and data storage close to the source. But it’s more of a master/slave type of architecture, where the centralized system ultimately becomes the point of storage for all of the data, and the edge processing is merely a node of the centralized system.
To accommodate edge processing, we need to think a bit harder about how to build our IoT systems. That means more money and time must go into the design and development stages. However, the performance that well-designed systems provide in meeting the real-time needs of IoT will more than justify the added complexity.
I suspect that computing-at-the-edge architectures will become more common as IoT grows. We’ll get better at it, and purpose-built technologies will start to appear. Computing at the edge of an IoT architecture is something that should be on your radar if IoT is in your future.
A Few Key Points to Remember
Edge computing is about putting processing and data near the endpoints. This saves the information from having to travel from its point of origin, such as a robot on a factory floor, back to centralized computing platforms, such as a public cloud.
The core benefit of edge computing is reduced latency and, as a result, increased end-to-end performance of the complete system. Moreover, it lets you respond to critical data points more quickly, such as shutting down a jet engine that’s overheating, without having to check in with a central process.
Although this latency reduction can aid all types of systems, it’s mostly applicable to remote data processing, such as IoT devices.
Edge computing is not about snapping off parts of systems and placing them at the edge, but rather about the ability to look at data processing as a set of tiered components that interact with one another, each playing a specific role.
The data that’s processed and stored at the edge typically only resides there temporarily. It’s ultimately moved to centralized processing, such as a public cloud, at certain intervals. That central location’s copy becomes the data of record, or the single source of truth.
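That hand-off can be sketched as a simple batch sync. In this illustrative Python sketch, the store names and the idea of draining the whole buffer at once are assumptions for clarity, not part of any real platform: at each interval the edge buffer is emptied into the central store, and the central copy becomes the data of record.

```python
# Readings accumulated at the edge since the last sync (illustrative data).
edge_buffer = [
    {"part_id": "part-001", "score": 0.97},
    {"part_id": "part-002", "score": 0.80},
]

central_store = []  # stands in for the public cloud's database


def sync_to_central():
    """Drain the edge buffer into the central store.

    After the sync, the central copy is the data of record (the single
    source of truth); the edge holds data only until the next interval.
    """
    central_store.extend(edge_buffer)
    edge_buffer.clear()


sync_to_central()
print(len(central_store))  # 2 -- central now holds the data of record
print(len(edge_buffer))    # 0 -- edge storage was only temporary
```

A production system would add retries and deduplication around the transfer, but the shape is the same: the edge is a cache, not the archive.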
Don’t do edge computing unless you have a specific need for it. Edge computing is a specialized approach to solving specialized problems. Enterprises are often guilty of adopting technology just because it’s mentioned more than once in the tech press, but doing so costs more money and adds risk, and edge computing is no exception.
So, What Does This All Mean?
Edge computing is a tactical way to solve latency issues, built upon many tried-and-true architectures of the past. What’s new is the element of the cloud, and the ability to leverage edge systems as if they were centralized. That cloud element is bringing fresh relevance to edge computing.
This article was written by David Linthicum. David is a cloud computing visionary and pundit. He has written 13 books, published 3,000 articles and presented at over 500 conferences on cloud computing. His views are his own.