10 signs on the neural-net-based ADAS road
by Don Dingee on 06-24-2016 at 12:00 pm

Every day I read something about the coming of fully autonomous vehicles, but it’s not every day we get a technologist’s view of the hurdles in getting there. Chris Rowen, CTO of Cadence’s IP group, gave one of the best presentations I’ve seen on ADAS technology and convolutional neural networks (CNNs) at #53DAC, pointing toward 10 signs on the road ahead.

Rowen’s list, with his talking points and my comments interspersed, cuts to the chase. I really appreciated that this wasn’t a hard-core product pitch but a thoughtful discussion of problems and opportunities.

1) Neural networks will expand rapidly into real-time embedded functions in cars.
Anyone armed with server-class GPUs, gigantic FPGAs, and racks of data-center equipment can research CNNs. Getting that research into a car, with multiple sensors in play including cameras, radar, and LIDAR, requires a new generation of SoCs.

2) Power constraints and extreme throughput needs will drive CNN optimization in both embedded and server platforms.
One of Rowen’s charts positioned the available technologies, clearly showing why mobile SoCs won’t cut it in ADAS applications.

3) Real-time neural networks evolve from object recognition to action recognition.
Bad lighting, faded paint, scratches, bent metal, stickers, “and in the US, bullet holes” – just recognizing a sign accurately is a problem. Add other vehicles, pedestrians, bicycles, deer, rocks, tree branches, footballs, and other objects into a road scene, and layer on detection rates that aren’t even at three-9s (99.9%) yet.

4) Expect a mad – sometimes unguided – scramble for expertise, data, and applications.
CNNs are the perfect university research problem. The contest to top the ImageNet classification benchmark drives researchers, and in just the last two years a half-dozen major CNN architectures have appeared. The state of the art is moving rapidly, and software is ahead of hardware right now.

5) Near-term, >100x energy and >20x bandwidth architecture optimizations are coming.
Rowen’s point is that we can put together a lot of MACs, but feeding them is the problem: 1 teraMAC/sec running AlexNet needs roughly 350 GB/sec of memory bandwidth for floating-point coefficients. Embedded CNN research is driving ways to reduce parameters and streamline coefficients (the fixed-versus-floating-point debate is ongoing), cutting power and using bandwidth more efficiently.
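
To make that bandwidth math concrete, here’s a back-of-envelope sketch. The 1 teraMAC/sec rate and the ~350 GB/sec figure are from the talk; the on-chip reuse factor is my assumption, chosen so the fp32 case lands near the quoted number.

```python
# Back-of-envelope coefficient-bandwidth estimate. The MAC rate is from
# the talk; the reuse factor (weight caching, batching) is an assumption
# picked so fp32 lands near the quoted ~350 GB/sec.
MAC_RATE = 1e12   # 1 teraMAC/sec
REUSE = 11.5      # assumed average reuses per fetched coefficient

def coeff_bandwidth_gb_s(bytes_per_coeff):
    """Coefficient memory traffic if each fetch is reused REUSE times."""
    return MAC_RATE * bytes_per_coeff / REUSE / 1e9

for name, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{name}: ~{coeff_bandwidth_gb_s(nbytes):.0f} GB/sec")
# fp32: ~348 GB/sec, fp16: ~174 GB/sec, int8: ~87 GB/sec
```

Halving or quartering coefficient width attacks the bandwidth problem directly, which is why the fixed-versus-floating-point debate matters so much.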

6) Long-term, 1000 teraMAC (or petaMAC) embedded neural networks are coming.
Embedded, not server class. Remember, the training can be done offline on a much bigger system, but the key to getting to five-9s or better detection rates is bigger and faster embedded CNN architectures.
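
A quick illustration of why the nines matter at camera frame rates – my arithmetic, not a figure from the talk, and it assumes independent per-frame detections:

```python
# Missed detections per hour of driving at a given per-frame detection
# rate, assuming 30 fps and independent frames (illustrative only).
FPS = 30
for rate in (0.999, 0.99999):   # three-9s vs. five-9s
    misses_per_hour = FPS * 3600 * (1 - rate)
    print(f"{rate}: ~{misses_per_hour:.0f} missed detections/hour")
# 0.999: ~108/hour    0.99999: ~1/hour
```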

7) Network optimization evolves from ad-hoc exploration to more automated “synthesis” – a new kind of EDA.
Seems self-serving coming from the company that brought you EDA360, but I agree with the sentiment. Embedded CNNs are an ultimate case of hardware and software co-optimization. The Tensilica “CNN DSP” is a first step in this direction, but looking at Google TensorFlow and startups like KNUPATH, this is far from over. Repeat after me: CNN differentiation and customization enabled by EDA tools.
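
To give a flavor of what such “synthesis” might automate, here’s a toy design-space sweep. Every workload, clock, cost, and accuracy number below is invented for illustration; a real flow would use characterized silicon and measured quantization error.

```python
# Toy "CNN synthesis" sweep: enumerate MAC-array sizes and coefficient
# bit widths, drop configurations that miss a real-time latency budget,
# and keep the Pareto front on (latency, energy, error). All numbers
# are invented for illustration.
from itertools import product

MACS_PER_FRAME = 4.4e9   # assumed multi-camera, AlexNet-class workload
CLOCK_HZ = 600e6         # assumed embedded clock
BUDGET_MS = 33.0         # ~30 fps real-time target

def dominates(a, b):
    """a dominates b: no worse on every objective, better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = []
for n_macs, bits in product([64, 256, 1024], [8, 16, 32]):
    latency_ms = MACS_PER_FRAME / (n_macs * CLOCK_HZ) * 1e3
    if latency_ms > BUDGET_MS:
        continue                                       # can't keep up with the cameras
    energy_mj = MACS_PER_FRAME * (0.05 * bits) * 1e-9  # toy: energy/MAC grows with width
    error_pct = {8: 2.5, 16: 1.2, 32: 1.0}[bits]       # toy quantization penalty
    candidates.append((n_macs, bits, (latency_ms, energy_mj, error_pct)))

pareto = [c for c in candidates
          if not any(dominates(o[2], c[2]) for o in candidates if o is not c)]
for n_macs, bits, (lat, en, err) in sorted(pareto):
    print(f"{n_macs:4d} MACs @ {bits:2d}-bit: {lat:5.2f} ms, {en:.2f} mJ/frame, {err:.1f}% err")
```

Crude as it is, this is the shape of the problem: the knobs are architectural, the constraints come from the car, and hand exploration doesn’t scale – exactly where EDA-style automation earns its keep.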

8) New value chains emerge around IP, tools, and data services, and swing between vertical integration and disintegration.
A good example is what Mobileye is doing, going vertical to provide auto manufacturers with a turnkey ADAS solution. There are also NVIDIA, NXP, and others with ADAS “boxes”, essentially putting a server in the trunk. Right now, vertical integration is in vogue, but as Cadence, CEVA, and others innovate in IP, the pendulum will swing.

9) Players with the most road data win.
Moving from recognition to action, as Rowen pointed out, requires road data – a lot of it. CNNs will be found inside larger algorithms handling higher levels of autonomy, levels 4 and 5 in the SAE definition. He said “virtual driving” will be required to qualify solutions before deployment, and over-the-air updates will be essential as algorithms continue to evolve rapidly.

10) Potential backlash over “rise of the machines”.
The way I see it, we’re still tragically having trouble with shift-into-park in the name of stunning industrial design (looking at you, Fiat Chrysler). Trust is going to be a huge factor, maybe an even bigger one in cars, since people have had total control of driving for over a century.

What do you think of Rowen’s list? One point I see missing is the role of HBM and 2.5D packaging in addressing the memory bandwidth issues – the server GPU guys are already there. I’ll have another perspective on CNN software from another vendor shortly. Are there other points where ADAS innovation is needed?
