The Cloud-Edge Debate Replays Inside the Car
by Bernard Murphy on 10-25-2018 at 7:00 am

I think we’re all familiar with the cloud/edge debate on where intelligence should sit. In the beginning the edge devices were going to be dumb nodes with just enough smarts to ship all their data to the cloud, where the real magic would happen: recognizing objects, spotting trends, flagging the need for repair and so on. Then we realized that wasn’t the best strategy: for power, because communication is expensive; for security and privacy, because the attack surface becomes massive; and for connectivity, because when the connection is down, the edge node becomes an expensive paperweight.


It turns out the same debate is playing out inside the smart car, between the sensors at the edge (cameras, radars, ultrasonics, even LIDAR) and the central system, and for somewhat similar reasons. Of course in the car most of the traffic travels over wired connections, most likely automotive Ethernet. But wired or wireless doesn’t change these concerns much, particularly when the edge nodes can generate huge volumes of raw data. If you want to push all of that to a central AI node, the Ethernet will have to support many Gbps. The standard is designed with that in mind, but keep adding sensors around the car and you have to wonder at what point even this standard will break down.
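
A quick back-of-the-envelope calculation shows how fast raw streams add up (a Python sketch; the resolution, frame rate and camera count are illustrative assumptions, not figures from any particular vehicle or standard):

```python
# Back-of-the-envelope raw bandwidth for in-car camera traffic.
# Resolution, frame rate and camera count are illustrative assumptions.

def raw_camera_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in Gbps."""
    return width * height * bits_per_pixel * fps / 1e9

per_camera = raw_camera_gbps(1920, 1080, 24, 30)
print(f"One 1080p camera: {per_camera:.2f} Gbps")        # ~1.49 Gbps
print(f"Eight such cameras: {8 * per_camera:.2f} Gbps")  # ~11.94 Gbps, before any radar/LIDAR traffic
```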

Hence the growing interest in moving more intelligence to the edge. If a camera, for example, can do object recognition and send object lists to the central node rather than a raw data stream, bandwidth needs become significantly less onerous. Just like putting more intelligence in IoT devices, right? Well, the economics may be quite different in a car. First, a dumb camera may be a lot cheaper than a smart camera, so the initial cost of a car outfitted with these smart sensors may go up quite a bit.
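
To make the bandwidth difference concrete, here is a sketch of how compact an object-list message can be next to a raw frame (the record layout below is a hypothetical illustration, not any standard automotive format):

```python
from dataclasses import dataclass
import struct

# A minimal object-list record a smart camera might emit instead of pixels.
# Fields and encoding are hypothetical, for illustration only.
@dataclass
class DetectedObject:
    class_id: int     # e.g. 0 = pedestrian, 1 = vehicle, ...
    x: float          # bounding-box position and size, normalized
    y: float
    w: float
    h: float
    confidence: float

def encode(objects):
    """Pack the object list into a compact binary payload."""
    payload = struct.pack("<I", len(objects))
    for o in objects:
        payload += struct.pack("<Ifffff", o.class_id, o.x, o.y, o.w, o.h, o.confidence)
    return payload

# Even a busy scene with 50 detections is about 1.2 KB per frame,
# versus roughly 6 MB for one uncompressed 24-bit 1080p frame.
scene = [DetectedObject(0, 0.1, 0.2, 0.05, 0.1, 0.93)] * 50
print(len(encode(scene)), "bytes")  # 4 + 50 * 24 = 1204
```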

There’s another consideration. A lot of these sensors sit in the car’s fenders/bumpers. What’s one of the most common bodywork repairs on a car? Replacing the fender. In a traditional car, labor and painting aside, this may cost somewhere in the range of $300 to $700. That goes up to over $1,000 if the fender includes lights and (dumb) sensors. Make those sensors smart and the cost will go up even further. So adding intelligence to sensors in a car isn’t the obvious win it is in IoT and handset devices.

Safety requirements create some new challenges in this “cloud” versus edge trade-off. Assuring an acceptable level of safety requires a lot of infrastructure: duplication and lock-step computing in the hardware, plus significant work in the software. One argument has it that this is best centralized, where it can be most carefully managed and assured, relying on only modest capabilities in edge nodes to avoid the heavy cost of duplicating all that infrastructure.
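
For readers unfamiliar with the term, lock-step computing means running the same computation on redundant units and comparing the results before acting on them. A minimal sketch of the principle follows; real automotive lock-step happens in duplicated hardware, compared cycle by cycle, so this Python version only illustrates the idea:

```python
# Sketch of the lock-step principle: two redundant channels compute the
# same result, and any divergence forces a transition to a safe state.

class LockstepFault(Exception):
    pass

def lockstep(compute_a, compute_b, inputs):
    """Execute two redundant channels and cross-check their outputs."""
    result_a = compute_a(inputs)
    result_b = compute_b(inputs)
    if result_a != result_b:
        # A real system would enter a defined safe state here.
        raise LockstepFault(f"divergence: {result_a} != {result_b}")
    return result_a

# Usage: both channels run the same (hypothetical) braking-distance estimate,
# assuming 7 m/s^2 deceleration.
estimate = lambda speed_mps: round(speed_mps ** 2 / (2 * 7.0), 1)
print(lockstep(estimate, estimate, 25.0))  # 44.6 m
```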

But that’s not ideal either. If everything is centralized, guaranteeing response times for safety-critical functions becomes more challenging, particularly when dealing with huge volumes of raw data traffic. If instead sensors have more local intelligence, you can take all the necessary functional safety steps within the sensor itself, and since you’re communicating much less data to the central node, safety measures in the interconnect become less costly.

In some cases the OEM may want both object lists and raw data. What?! Think about a forward-facing camera. Object recognition in this view is obviously useful (pedestrians, wildlife, etc.) for triggering corrective steering, emergency braking and so on. But it may also be useful to feed the raw view, with objects identified, to the driver’s monitor. Or to feed an enhanced view in poor visibility conditions, which potentially requires more AI horsepower than an edge camera can provide (fusing data from other sensors, for example).
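
One way a sensor vendor might expose that kind of flexibility is a configurable output mode, sketched below (the class, mode names and output format are hypothetical, not any real sensor API):

```python
from enum import Enum

class OutputMode(Enum):
    OBJECTS_ONLY = 1      # low bandwidth, safety-critical path
    OBJECTS_AND_RAW = 2   # adds raw frames for driver display or central fusion

class SmartCamera:
    """Hypothetical camera node serving both consumers: object lists
    for the safety path, raw frames on demand for display or fusion."""

    def __init__(self, mode=OutputMode.OBJECTS_ONLY):
        self.mode = mode

    def frame_output(self, raw_frame):
        objects = self.detect(raw_frame)  # local inference at the edge
        if self.mode is OutputMode.OBJECTS_AND_RAW:
            return {"objects": objects, "raw": raw_frame}
        return {"objects": objects}

    def detect(self, raw_frame):
        # Placeholder for the camera's on-board recognition network.
        return []
```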

Perhaps by now you are thoroughly confused. You should be. This is not a domain where all the requirements are settled and component providers merely have to build to the spec. Like most aspects of autonomy and high automation, advanced vision and machine learning, the guidelines are still being figured out. OEMs are finding their own paths through the possibilities, creating a need for flexible solutions from edge node/sensor providers: dumb, intelligent or a bit of both (to paraphrase Peter Quill in Guardians of the Galaxy). CEVA can help those product companies build in that flexibility. You can learn more HERE.
