Why autonomous vehicles will rely on edge computing and not the cloud

When driving a vehicle, milliseconds matter. Autonomous vehicles are no different, even if it is an AI doing the driving. AI = data + compute, and you want your compute as close to your data as possible. Enter edge computing.

We all know and love the cloud. What’s not to love about not having to bother with what your own devices can do, and having near-infinite, elastic storage and compute power at your fingertips?

Well, a few things actually. In the end, as the aphorism goes, the cloud is just someone else’s computer. Okay, it may be millions of computers, thoughtfully arranged in clusters in super efficient data centers — but all those are someone else’s computers.

Still, does it matter, if that someone can provide everything you need, probably more efficiently than your own organization could, along with guarantees in terms of security? In many cases, it doesn’t. But it does matter a great deal when it comes to autonomous vehicles.

To understand why, let’s consider the notion of autonomy. Autonomy is defined as ‘independence or freedom, as of the will or one’s actions’. Can you be autonomous, when relying on someone else’s computer? Not really.

Yes, there is redundancy, and yes, there may even be SLAs in place. But when all is said and done, using the cloud means you are connecting to someone else’s computer, usually over the internet. When you are in a moving vehicle, and this vehicle relies on cloud-based compute for its essential functions, what happens if you run into connectivity issues?

This is not the same as a lag in loading your favorite cat pictures. A lag in a moving vehicle scenario is a matter of life and death. So what can be done in situations like these? Enter edge computing.

Edge computing is the notion of having compute as close to the data as possible, in scenarios where data is generated outside of the data center. What this translates to in real life is very small, prefabricated data centers.

Small is a relative term, of course. Is something the size of a container small? Maybe, if you compare it to a data center like the ones cloud providers have. But it’s not something most of us could, or would, have in our homes.

Still, our homes host some of the primary use cases for edge computing. Connected IoT devices and sensors communicating in smart home or smart city scenarios are a good match for edge computing. Fully blown, these scenarios could involve a substantial number of devices, collecting and sharing a substantial amount of data.

In scenarios like this, incurring the cost of a round trip to the cloud does not make sense. Using a small, local data center is much more viable. Of course, this raises the question — how small is small, and how local is local?

A container deployed by your local 5G antenna is relatively small, and relatively local. A couple of computers running controller software in your basement, connecting to your devices over wi-fi, is smaller and more local. Devices that come with their own on-board compute and can connect to each other without a central controller are smaller and more local still.

All of the above can be considered edge computing examples, and can be applied to autonomous vehicles, too — just replace ‘basement’ with ‘trunk’. The smaller you go, the more local you can get, and the more you gain in round-trip times; this is the advantage edge computing provides. The flip side is that the smaller you go, the less compute power you can accommodate, and the more you lose in compute times.
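As a back-of-the-envelope illustration of this trade-off, total response time is roughly the network round trip plus the processing time. The figures below are illustrative assumptions, not measurements; the tier names echo the examples above:

```python
# Toy model: total response time = network round trip + compute time.
# All latency and compute figures are hypothetical, for illustration only.

def response_ms(round_trip_ms: float, compute_ms: float) -> float:
    """Total time from sending sensor data to receiving a result."""
    return round_trip_ms + compute_ms

# Hypothetical tiers, from distant cloud to fully on-board:
tiers = {
    "cloud data center": response_ms(round_trip_ms=80.0, compute_ms=5.0),
    "5G edge container": response_ms(round_trip_ms=10.0, compute_ms=8.0),
    "on-board computer": response_ms(round_trip_ms=0.5, compute_ms=20.0),
}

for name, total in tiers.items():
    print(f"{name}: {total:.1f} ms total")
```

Even with its slower compute, the on-board tier can come out ahead once the network round trip dominates — which is exactly the scenario a moving vehicle with spotty connectivity faces.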

Moore’s law, the empirical rule that transistor counts — and with them compute power — roughly double every two years, has been questioned for a while now, but somehow it still seems to be in effect. As a result, an average mobile phone today has more compute power than entire organizations could muster decades ago. In 1969, the Apollo astronauts had access to only about 72KB of computer memory. By comparison, a 64GB cell phone today carries almost a million times more storage space.
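The ‘almost a million times’ figure is easy to check with quick arithmetic:

```python
# Rough comparison of on-board storage: Apollo-era memory vs. a modern phone.
apollo_memory_bytes = 72 * 1024       # ~72 KB, as cited above
phone_storage_bytes = 64 * 1024 ** 3  # 64 GB

ratio = phone_storage_bytes / apollo_memory_bytes
print(f"A 64GB phone holds ~{ratio:,.0f}x more than 72KB")  # ~932,068x
```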

This is what makes edge computing viable today. The trade-off between compute power and network latency is an essential difference between edge computing and cloud computing. But there is more: although in theory there should not be much difference, in practice standards for edge computing are still in flux.

When we talk about the edge, we must somehow differentiate between data consumers and data producers in the network. As on the internet at large, nodes in edge networks are not symmetrical in capabilities. In edge networks, we have many IoT devices, which act almost exclusively as data producers. Therefore, IoT standards are key for edge networks.

Read the full article on ZDNet
