Edge computing: What You Need to Know

September 8, 2020

In IT, edge computing is a big buzzword. From a purely business standpoint, it is one of those high-growth markets that companies are willing to invest in now.

Edge computing, itself, is really just a mechanism for deciding where processing happens. The edge is the boundary of your organization's network, the end of your reach. Edge computing puts Internet of Things (IoT) devices, and the processing they do, closer to the locations they serve. The biggest benefits of edge computing are bandwidth savings and reduced latency, the time it takes data to travel.

How edge computing works

Imagine if your company used internet-based security cameras. In a traditional computing model, those cameras would send all of their footage to a centralized location on the network. There, the data would be run through a processing system that looked for scenes that included movement, keeping the motion-activated footage from the cameras and discarding the rest.

However, there’s a problem with that model. Sending everything captured by the cameras to a remote location requires a lot of bandwidth. If you instead moved that processing to the edge of your network, on or near the cameras themselves, only the motion-activated scenes would be transferred to your central server, reducing the bandwidth needed to send the information across the internet.
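The camera scenario above can be sketched in a few lines. This is a deliberately simplified illustration (the names and the frame-differencing approach are assumptions, not anything from a real camera product): each frame is compared to the previous one, and only frames with enough pixel change are queued for upload.

```python
# Minimal sketch of edge-side motion filtering (hypothetical names).
# Frames are flat lists of grayscale pixel values for simplicity; a real
# device would use a camera SDK and a proper motion-detection algorithm.

def frame_diff(prev, curr):
    """Mean absolute pixel difference between two same-sized frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def filter_motion(frames, threshold=10.0):
    """Keep only frames that differ enough from their predecessor.

    Everything else is discarded on the device, so only 'interesting'
    footage ever consumes upstream bandwidth.
    """
    kept = []
    prev = frames[0]
    for curr in frames[1:]:
        if frame_diff(prev, curr) > threshold:
            kept.append(curr)
        prev = curr
    return kept

# Three identical frames (no motion) followed by one changed frame:
static = [50] * 16
moving = [50] * 8 + [200] * 8
uploaded = filter_motion([static, static, static, moving])
# Only the one frame containing motion is sent to the central server.
```

In the traditional model, all four frames would cross the network; here, three are dropped at the source and one is uploaded.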

Now, think about an autonomous car. Its processing power is in the car itself, at the very edge of the network. From a timing standpoint, requesting all the data needed to operate the vehicle from a remote server would be impractical. Reaction times need to be as fast as possible so that the car can stay on the road and avoid accidents. The data processing has to be done very close to where the action is, or the product doesn’t work at all.
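A back-of-envelope calculation shows why. The numbers below are illustrative assumptions, not figures from the article: light in optical fiber travels at roughly 200,000 km/s, so physics alone puts a floor under the round-trip time to a remote server, before any queuing or processing delay is added.

```python
# Back-of-envelope latency floor: round-trip signal time over fiber.
# Illustrative numbers only; real latency is higher (routing, queuing,
# server processing all add to this physical lower bound).

FIBER_SPEED_KM_PER_S = 200_000  # ~2/3 the speed of light in a vacuum

def min_round_trip_ms(distance_km):
    """Physical lower bound on round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

cloud_rtt = min_round_trip_ms(1500)    # a data center 1,500 km away: 15 ms
edge_rtt = min_round_trip_ms(0.001)    # on-board computer ~1 m away
```

Fifteen milliseconds may sound small, but a car moving at highway speed covers tens of centimeters in that time, and that is only the lower bound for one round trip. On-board processing removes the network from the control loop entirely.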

In the case of security cameras or an autonomous car, many of these sensors are out in the world. You have to be able to access them, collect data, and process it in near real time to make them function better. Unlike data centers, which typically need to run in cool, dry environments with little variability in temperature, edge computing devices are meant to operate in real-world conditions, so they are usually more robust against heat and humidity. They also require less power, because they’re not housed in large data centers that consume enormous amounts of electricity.

Disadvantages of edge computing

Edge computing requires a lot of hardware. For example, a large distributor or manufacturer that wanted to use IoT security cameras would need edge devices at every one of its hubs and warehouses. Decentralizing your equipment can present logistical challenges when it comes to management and maintenance.

Another drawback of edge computing is a larger attack surface. Because these internet-facing devices do not sit behind a well-secured corporate network with a strong firewall, they are more vulnerable to being hacked. A malicious user might try to steal your data or simply commandeer your devices.

In the consumer space, a lack of security updates is a problem for products like smart lightbulbs, doorbells, and home sensors. As vulnerabilities are found, these devices often go unpatched, so an attacker could gain unauthorized access to your home network or take over the operation of your lightbulbs or doorbell. You don’t see this issue as much in the corporate world, however, due to additional checks and balances.
