Using Sequential Testing to Shorten Monte Carlo Simulations
by Tom Simon on 12-27-2017 at 7:00 am

When working on an analog design, after the initial design specs have been met, it is useful to determine whether the design still meets spec out to 3 or 4 sigma of process variation. This is a worthwhile checkpoint before going any further. It might not be a coincidence that foundries base their Cpk on 3 sigma. To refresh, Cpk is the ratio of the distance from the process mean to the nearer specification limit (upper or lower) to three standard deviations, so a Cpk of 1 works out to meeting process specs at 3 sigma. Higher Cpk values mean the spec is met out to a higher sigma, which translates into better yields. Still, running Monte Carlo analysis on a design across process variations to validate proper performance out to 3 or 4 sigma can be a daunting task.
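In the standard notation, with USL and LSL the upper and lower specification limits and mu and sigma the process mean and standard deviation, the textbook definition reads:

C_{pk} = \min\left( \frac{USL - \mu}{3\sigma}, \frac{\mu - LSL}{3\sigma} \right)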

At the MunEDA user group meeting in Munich in November, I had the opportunity to hear a presentation on an interesting technique that can potentially reduce the number of Monte Carlo runs needed to qualify or reject a design during variation analysis. The technique is called Sequential Testing. The short version is that it uses the results from a smaller number of samples to determine the likelihood that the final result lies above or below thresholds for acceptance or rejection. Let's break this down a bit.

If you have a jar of 100 randomly mixed black and white marbles and you draw a small number, you will start to get an idea of the composition of the entire jar. Of course, there will be some uncertainty, but if you are willing to accept a range as your answer, you can get a pretty good idea of the percentage of black or white marbles with just a few samples. In essence, we are talking about using a smaller number of samples to get a probability that we meet spec at a specific sigma.
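To make the marble analogy concrete, here is a small Python sketch of my own (not MunEDA's algorithm) that draws a handful of marbles from a made-up jar and reports a rough confidence interval for the white fraction; the jar composition and sample size are invented for the example.

import random
import math

# Hypothetical jar: 70 white and 30 black marbles (values chosen only for illustration).
jar = ["white"] * 70 + ["black"] * 30
random.shuffle(jar)

# Draw a small sample without replacement.
sample_size = 20
sample = jar[:sample_size]
whites = sum(1 for m in sample if m == "white")
p_hat = whites / sample_size

# Normal-approximation 95% confidence interval for the true white fraction.
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
print(f"Estimated white fraction: {p_hat:.2f} "
      f"(95% CI roughly {p_hat - half_width:.2f} to {p_hat + half_width:.2f})")

The wider the range you are willing to accept as your answer, the fewer draws you need before the interval is good enough, which is exactly the trade-off sequential testing exploits.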

This process works better when the design in question is further away, either better or worse, from the target sigma. Any way you look at it, when you can have confidence that a design is either failing or beating its target sigma, you can save a lot of time running Monte Carlo simulations. So, as you might gather, the key is selecting the right levels for acceptance and rejection. These are known as the acceptable quality limit (AQL) and the rejectable quality limit (RQL), respectively. Given that we are chip designers and not statisticians, it's nice that MunEDA offers some help here. The Dynamic Sampling option in their Monte Carlo simulator helps automatically set the percentages for AQL and RQL.
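For readers who want to see the accept/reject mechanics, the textbook version of this idea is Wald's sequential probability ratio test applied to pass/fail samples. The sketch below is only that generic formulation, with placeholder error probabilities; it is not MunEDA's implementation.

import math

def sprt_decision(failures, n, p_aql, p_rql, alpha=0.05, beta=0.05):
    """Classic Wald SPRT for a Bernoulli failure rate.

    p_aql: acceptable failure probability (design passes if the true rate <= this)
    p_rql: rejectable failure probability (design fails if the true rate >= this)
    alpha, beta: allowed error probabilities (placeholder values for illustration)
    Returns 'accept', 'reject', or 'continue'.
    """
    # Log-likelihood ratio of the observed sample under the RQL vs the AQL hypothesis.
    llr = (failures * math.log(p_rql / p_aql)
           + (n - failures) * math.log((1 - p_rql) / (1 - p_aql)))
    if llr >= math.log((1 - beta) / alpha):
        return "reject"      # strong evidence the failure rate is at least RQL
    if llr <= math.log(beta / (1 - alpha)):
        return "accept"      # strong evidence the failure rate is at most AQL
    return "continue"        # not enough evidence yet, keep simulating

In use, you would run one Monte Carlo sample per iteration, update the failure count, and stop as soon as the function returns anything other than "continue".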

So how does this translate into time savings on Monte Carlo analysis? The presentation contained some examples of applying this feature in their tool. If we look at a circuit that has a 3-sigma requirement and run a full Monte Carlo, we expect to need 5,000 runs. However, if the circuit we are analyzing only has a sigma-to-spec robustness of 2.5 sigma, we can expect to learn this after only 192 simulations when we use the sequential testing feature. This works out to an impressive 26x speedup. Though we won't be happy to learn the design fails, at least significant Monte Carlo simulation time is saved.
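As a quick sanity check on the arithmetic, the quoted speedup is simply the ratio of the two run counts:

print(f"Speedup: {5000 / 192:.1f}x")  # prints roughly 26.0x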

The same effect would be observed if the circuit exceeded the target sigma by a comfortable margin. If the circuit yields out to 3.5 sigma, this can be predicted with only 318 runs, still far fewer than 5,000. To use the interface, users specify the desired yield and then choose dynamic as the sampling method. Simulations are run until one of the specs is rejected, or until all specs are accepted.
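To get a feel for why decisions come faster as the true robustness moves away from the target, the sequential test sketched earlier can be exercised on synthetic pass/fail data. The toy below reuses the sprt_decision helper from above with made-up AQL/RQL settings around a 3-sigma target; it will not reproduce the 192 and 318 run counts quoted from MunEDA's presentation, which come from their sigma-to-spec method, but it shows the same early-stopping behavior.

import random
from statistics import NormalDist

# Assumes sprt_decision() from the earlier sketch is already defined.

def runs_to_decision(true_sigma, target_sigma=3.0, max_runs=5000, seed=0):
    """Count synthetic Monte Carlo samples until the sequential test reaches a decision."""
    rng = random.Random(seed)
    p_true = 1 - NormalDist().cdf(true_sigma)    # true one-sided failure rate of the circuit
    p_aql = 1 - NormalDist().cdf(target_sigma)   # acceptable failure rate at the target sigma
    p_rql = 4 * p_aql                            # placeholder rejection threshold
    failures = 0
    for n in range(1, max_runs + 1):
        failures += rng.random() < p_true        # one pass/fail Monte Carlo sample
        verdict = sprt_decision(failures, n, p_aql, p_rql)
        if verdict != "continue":
            return verdict, n
    return "continue", max_runs

for sigma in (2.5, 3.5):
    print(sigma, runs_to_decision(sigma))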

MunEDA offers the sequential testing option in both their WiCkeD Monte Carlo and their BigMC tools. In the WiCkeD tool they offer pass/fail and sigma-to-spec sampling; in BigMC they offer sigma-to-spec sequential testing. Both help with automatic determination of AQL and RQL. BigMC is particularly interesting because it can handle very large netlists, on the order of 100 MB or 500k devices. Overall, MunEDA's prowess in statistical analysis shows through quite clearly. During the user group meeting in Munich there were many papers presented on diverse topics, from flip-flop optimization to using their worst-case analysis to model a MEMS design. For more information on this and the other topics, I suggest looking at their website.
