We Need Libraries – Lots of Libraries
by Tom Simon on 05-08-2017 at 12:00 pm

It was inevitable that machine learning (ML) would come to EDA. In fact, it has already been here a while in Solido’s variation tools. Now it has found an even more compelling application – library characterization. Just as ML has radically transformed other computational arenas, it looks like it will be extremely disruptive here as well.

By now we are all familiar with machine learning as it is used in recognition applications. Rather than writing hard-coded software to recognize a specific object, like a stop sign, thousands of examples of stop signs (and not stop signs) are given to a machine learning training application. The output of this training is a data set that the recognition engine can use to identify a stop sign, or whatever object the training was done with. The beauty of it is that no object-specific code needs to be written. Next time, the same ML software can be trained and used to recognize pedestrians, other cars, faces, or just about anything. Interestingly, these algorithms now perform better than humans at picking a face out of a crowd.
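To make the idea concrete, here is a minimal training sketch in Python – synthetic data and scikit-learn, nothing to do with Solido’s tools – showing that the same routine is reused for any object class; only the labeled examples change.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stand-in for image feature vectors: 1,000 examples, 64 features each,
    # labeled 1 (target object) or 0 (anything else).
    X = rng.normal(size=(1000, 64))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # The trained model, not hand-written rules, does the recognizing; retraining
    # with different labeled examples repurposes it for a different object.
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    print("training accuracy:", model.score(X, y))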

So, what about library characterization? Well, instead of recognition, it relies on ML’s ability to create response surface models, which are used for simulating complex multivariable systems. Conceptually it works the same way as recognition: given a ‘sparse’ data set, it can extrapolate with extremely high accuracy to provide results across the full spectrum – much the same as how ML can recognize a face in the rain on a grainy, low-light image, a situation it had not encountered before.
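As a toy illustration of the response surface idea – a made-up one-variable delay model and a Gaussian process regressor from scikit-learn, not Solido’s implementation – the model is fit on a handful of sparse samples and then queried across the whole range:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Hypothetical "true" response: cell delay as a function of supply voltage.
    def true_delay(vdd):
        return 1.0 / (vdd - 0.3) ** 1.3

    # Sparse training samples across the voltage range.
    vdd_sparse = np.array([[0.6], [0.7], [0.8], [0.9], [1.0]])
    delay_sparse = true_delay(vdd_sparse).ravel()

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), normalize_y=True)
    gp.fit(vdd_sparse, delay_sparse)

    # Predict the response at voltages that were never simulated.
    vdd_full = np.linspace(0.6, 1.0, 41).reshape(-1, 1)
    predicted, stddev = gp.predict(vdd_full, return_std=True)
    print("max |error| vs. true curve:",
          np.abs(predicted - true_delay(vdd_full).ravel()).max())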

Moving to the library problem, the ‘sparse’ data comes from representative cells, one for each family of cells. For this reduced set of cells, Solido’s ML Characterizer automatically selects what are called anchor corners – the points where the response surface has distinctive features. The anchor corners of the representative cells are used as training data to create a complete response surface model. With the response surfaces in hand, the ML software can predict cell performance to produce new libraries over a wide range of processes, voltages, temperatures, back biases, etc. Oh, and in case I forgot to mention it, ML does this fast.
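Here is a hedged sketch of that flow under simplifying assumptions (the corner list, the spice_characterize stand-in, and the plain Gaussian process are all hypothetical, and the anchor corners are picked by a simple stride rather than by detecting features in the response surface): characterize a subset of corners, train on them, and predict the rest.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def spice_characterize(corner):
        """Stand-in for a real characterization run at (voltage, temperature, back bias)."""
        v, t, b = corner
        return 1.0 / (v - 0.3) * (1 + 0.002 * t) * (1 - 0.1 * b)

    # Full corner list: voltage x temperature x back bias.
    corners = np.array([(v, t, b)
                        for v in (0.6, 0.7, 0.8, 0.9)
                        for t in (-40, 25, 125)
                        for b in (0.0, 0.1)])

    # Corners actually simulated (a simple subset here; the real tool chooses
    # anchor corners where the response surface has distinctive features).
    anchor_idx = np.arange(0, len(corners), 3)
    anchors = corners[anchor_idx]
    anchor_delays = np.array([spice_characterize(c) for c in anchors])

    kernel = RBF(length_scale=[0.2, 50.0, 0.05])   # per-dimension length scales
    model = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    model.fit(anchors, anchor_delays)

    # Predict the corners that were never simulated and check against the stand-in.
    rest = np.delete(corners, anchor_idx, axis=0)
    predicted = model.predict(rest)
    actual = np.array([spice_characterize(c) for c in rest])
    print("max relative error: %.2f%%" % (100 * np.abs(predicted / actual - 1).max()))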

One example Solido discusses is for a library of 475 cells. To obtain 61 PVTB corners at one threshold, they started by running 36 of the corners with Liberate. The remaining 25 corners were produced with Solido’s ML Characterizer. Completing this task required only 3 hours and 20 minutes running on 50 CPUs. More time could have been saved by pruning the cells used for training.
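For perspective, a quick back-of-the-envelope calculation (my own arithmetic on the numbers above, not a figure from Solido) of how much of the corner list was handled by the ML model rather than by full characterization:

    total_corners = 61
    simulated = 36                         # characterized with Liberate
    predicted = total_corners - simulated  # produced by the ML model
    print(f"{predicted} of {total_corners} corners predicted "
          f"({100 * predicted / total_corners:.0f}% of the corner list)")
    # -> 25 of 61 corners predicted (41% of the corner list)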

To ensure library quality, comparisons have been made with traditional characterization, and the results differ by low single-digit percentages. This puts it in the same league as the level of correlation one would look for when comparing SPICE simulators. Solido also offers a way to tilt the error so it is biased toward pessimism, if desired, to prevent optimistic results from interfering with yield later on.
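One simple way to express that kind of pessimistic tilt – a plain guard band on the model’s own uncertainty, offered as an illustration rather than Solido’s actual mechanism – is to report the predicted delay plus a multiple of the prediction’s standard deviation, so the library errs toward a slower cell rather than a faster one:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Toy data: delay vs. supply voltage, fit with a Gaussian process.
    vdd = np.array([[0.6], [0.7], [0.8], [0.9], [1.0]])
    delay = 1.0 / (vdd.ravel() - 0.3)
    gp = GaussianProcessRegressor(normalize_y=True).fit(vdd, delay)

    # Predict at new voltages and add a guard band of k standard deviations.
    query = np.linspace(0.6, 1.0, 9).reshape(-1, 1)
    mean, std = gp.predict(query, return_std=True)
    pessimistic = mean + 2.0 * std   # slower than predicted, never faster
    print("largest guard band added:", (pessimistic - mean).max())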

The other component of the Solido ML Characterization Suite is the Statistical Characterizer, which is used for producing Monte Carlo-accurate statistical timing models. It can produce SPICE-accurate models on the order of 1,000 times faster than brute force. Its output includes AOCV, LVF and POCV values, and it works well with non-Gaussian distributions (none of us lives in an ideal world, after all).
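To show what non-Gaussian statistical timing looks like in practice, here is a tiny brute-force Monte Carlo reference in Python (a hypothetical skewed delay model, not the Statistical Characterizer itself): sample the process variation, build the delay distribution, and read off separate early and late sigmas of the kind LVF-style tables carry.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical delay model: an exponential response to threshold-voltage
    # variation gives a skewed (non-Gaussian) distribution with a fat slow tail.
    dvth = rng.normal(0.0, 0.01, size=100_000)    # Vth variation in volts
    delay = 100.0 * np.exp(8.0 * dvth)            # delay in picoseconds

    nominal = np.median(delay)
    early_sigma = nominal - np.percentile(delay, 15.87)   # ~1 sigma, fast side
    late_sigma = np.percentile(delay, 84.13) - nominal    # ~1 sigma, slow side
    print(f"nominal {nominal:.1f} ps, early sigma {early_sigma:.2f} ps, "
          f"late sigma {late_sigma:.2f} ps")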

Waiting for updated libraries when new PDKs are released by the foundry can add lengthy delays to projects on new process nodes. Alternatively, design teams can discover late in the development process that libraries for a critical process corner are not available. Waiting for new libraries can be excruciating and expensive – especially if tapeouts are delayed. Finally, being able to run STA on a few additional corners can bring a great deal of peace of mind.

After learning about the ML Characterization Suite from Solido, I thought about the scene in the original Matrix movie when they needed weapons before going back into the Matrix. They were standing in the empty “construct” and then racks of endless weapons appeared out of nowhere for them to use. While perhaps not with quite such dramatic flair, ML characterization will profoundly and permanently change how libraries are created and used for design projects. Solido has more information about the ML Characterization Suite on their website.
