
Machine Learning in EDA Flows – Solido DAC Panel
by Tom Simon on 07-12-2017 at 12:00 pm

At DAC this year you could learn a lot about hardware design for AI or Machine Learning (ML) applications. We are all familiar with the massively parallel hardware being developed for autonomous vehicles, cloud computing, search engines and the like. This includes, for instance, hardware from Nvidia and others that enables ML training and ML inference. However, the most interesting wrinkle in this story is how ML is gaining traction in the software tools used for hardware design. Ever since I started working in the EDA field, it has been apparent that the cycle of using current-generation hardware and software to design next-generation hardware is like a dog chasing its tail – always just a bit behind and never going to catch up.

Indeed, the history of EDA is one of using prodigious software and compute resources to eke out the next generation of hardware. Machine Learning is a massive discontinuity that is disrupting many applications – medical, data mining, security, robotics, autonomous vehicles and too many more to name. So now we see that Machine Learning is also delivering a huge discontinuity in the field of electronic design itself – even to the point of allowing the dog to finally catch its tail.

I attended a panel on using ML in semiconductor design hosted by Solido, arguably the company at the forefront of using ML in design. The panel featured presentations by Nvidia, e3datascience and Qualcomm. These names should be enough to tell you that ML is becoming an important and permanent part of chip design.

Ting Ku from Nvidia covered the fundamentals of the field. His main point was to differentiate the terms point automation, machine learning and deep learning. He broke each of them down based on three traits: style (deterministic or statistical), presence of a database, and whether the algorithm has predefined features. See the diagram below from his presentation to understand the differences.


Internally, Nvidia is applying ML to new areas to improve efficiency. One of the more novel examples was using ML to compare data sheets of new products against previous versions to make sure there are no errors. One of Ting's main points was that ML should not just give designers more information; it should add a layer that helps them make decisions. He cited Nvidia's own use of Solido's ML applications to achieve statistical PVT coverage out to 4 sigma with only 300 simulation runs.
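
Solido's actual high-sigma algorithms are proprietary, so the sketch below is only a conceptual illustration of the general idea behind results like that: train a cheap surrogate model on a small batch of real simulations, then use it to decide which of a large pool of candidate samples deserve full simulation. It uses scikit-learn and a synthetic stand-in function (spice_sim) in place of a real SPICE run.

```python
# Conceptual sketch only: Solido's production algorithms are proprietary.
# It shows the general idea of using a cheap ML surrogate to decide which
# of many candidate Monte Carlo/PVT samples are worth a full simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def spice_sim(x):
    # Stand-in for an expensive SPICE run: maps process parameters to a
    # circuit metric (think read margin). Purely synthetic for illustration.
    return 1.0 - 0.3 * x[:, 0] ** 2 + 0.1 * np.sin(5 * x[:, 1]) - 0.2 * x[:, 2]

# 1. Run a small seed batch of "real" simulations.
seed_x = rng.standard_normal((300, 3))
seed_y = spice_sim(seed_x)

# 2. Fit a surrogate model on those results.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(seed_x, seed_y)

# 3. Score a large pool of candidates with the cheap surrogate and send
#    only the predicted worst cases back to the expensive simulator.
pool_x = rng.standard_normal((100_000, 3))
predicted = surrogate.predict(pool_x)
worst_idx = np.argsort(predicted)[:200]      # predicted low-margin tail
verified = spice_sim(pool_x[worst_idx])      # confirm with "real" simulation

print(f"worst verified margin: {verified.min():.3f}")
```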

Eric Hall spoke next. He is presently at e3datascience, but was previously at Broadcom. He provided an introduction to the distinction between classification and regression. Classification is what we most commonly associate with ML – identifying things based on training. Regression is the ability to predict numerical values from inputs, and it is the ML application that can help with power, area and timing trade-offs. It is also extremely useful for the multidimensional analyses that are common in EDA.
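
As a concrete, if simplified, illustration of the distinction Eric drew, the sketch below trains a classifier and a regressor on the same synthetic dataset with scikit-learn. The feature and target names (drive strength, path delay, meets-timing) are hypothetical and not taken from his talk.

```python
# Toy illustration of classification vs. regression. The "features" and
# targets are hypothetical design descriptors, not data from the talk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 3))   # e.g. drive strength, wire load, supply voltage

# Regression target: a continuous value such as path delay (synthetic).
delay = 0.5 * X[:, 0] + 2.0 * X[:, 1] ** 2 - 0.3 * X[:, 2] + rng.normal(0, 0.05, 2000)
# Classification target: a discrete label, e.g. "meets timing" yes/no.
meets_timing = (delay < 1.0).astype(int)

X_tr, X_te, d_tr, d_te, c_tr, c_te = train_test_split(
    X, delay, meets_timing, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, d_tr)   # predicts a number
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)  # predicts a label

print("regression R^2:", round(reg.score(X_te, d_te), 3))
print("classification accuracy:", round(clf.score(X_te, c_te), 3))
```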

Eric's examples focused on finding optimal memory configurations. If you think of the plane of points defined by all the possible area and power combinations, you want to know which of those points are optimal and achievable for a specific application. Within this set of points there will be a power-versus-area trade-off, but by applying ML regression it is possible to identify the best subset of optimal area and power configurations.
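
To make the idea of an optimal subset concrete, the sketch below extracts the Pareto front from a cloud of predicted (area, power) points with a few lines of NumPy. This is a generic illustration, not Solido's or Eric's actual flow, and the candidate configurations are random stand-ins for ML-predicted values.

```python
# Generic Pareto-front extraction over (area, power) points where lower is
# better on both axes. Synthetic data; not taken from the panel.
import numpy as np

def pareto_front(points):
    # Return indices of points not dominated by any other point, assuming
    # distinct area values: sort by area, then keep each point that strictly
    # improves on the best power seen so far.
    order = np.argsort(points[:, 0])
    best_power = np.inf
    keep = []
    for i in order:
        if points[i, 1] < best_power:
            keep.append(i)
            best_power = points[i, 1]
    return np.array(keep)

# Suppose an ML regressor has predicted area and power for 500 candidate
# memory configurations; here random values stand in for those predictions.
rng = np.random.default_rng(2)
predicted = rng.uniform(low=[0.1, 0.5], high=[2.0, 10.0], size=(500, 2))

front = pareto_front(predicted)
print(f"{len(front)} Pareto-optimal configurations out of {len(predicted)}")
```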

Eric talked about his experience creating his own ML-based memory characterization estimator. After this project, he had a chance to speak with Solido’s Jeff Dyck about their FFX regression technology. Eric felt that the Solido solution filled some of the gaps he encountered in his own endeavor. The slide below covers Eric’s experience with the Solido solutions.

The last speaker was Sorin Dobre of Qualcomm. He has been involved in new process bring-up for a long time and is now focusing on 7nm and beyond. He sees bringing up each new node as a challenging undertaking that requires more time with each successive technology. Yet each new node needs to roll out on schedule, or it can imperil leading-edge projects. The underlying reasons are, of course, exploding data size and complexity.

In this environment Sorin sees major opportunities for ML. These include design flow optimization, IT resource allocation optimization, IP characterization and data management. One of the key benefits of using ML is that you can reduce resources – use fewer CPUs running for less time – to get the same or better job done. Some of the specific design tasks he sees benefiting from ML are yield analysis, characterization, timing closure, physical implementation, and functional verification.

With three speakers, each with experience at some of the largest semiconductor companies, talking plainly about the practical benefits of ML, it seems we are about to see some really interesting shifts in design flows as they incorporate ML. I'm not saying that chip design will become like autonomous driving. Nevertheless, ML should make designers' jobs go faster and give them new tools to improve yield, power and performance. It will be interesting to see by next year's DAC in San Francisco just how much further things have come. For more information on the ML tools available now from Solido, visit their website.
