
  • High-Sigma Monte Carlo Analysis Approaches

    [Figure: solido-1]
    This article reviews the problem of high-sigma analysis, then surveys various approaches: Monte Carlo, extrapolation, linear / Worst-Case Distance, Statistical Blockade, Importance Sampling, and High-Sigma Monte Carlo.

    High-sigma IC components such as bitcells, sense amps, and digital standard cells can tolerate only a few failures in hundreds of millions or billions of instances. To get a feel for the problem, let's simulate 1 million Monte Carlo (MC) samples for a 6-transistor bitcell, measure the read current, and examine the distribution. We use a modern industrial process, with 10 local-variation process variables per device, for a total of 60 process variables. The figure below shows the distribution in QQ-plot form, where each dot is an MC sample point. QQ plots make it easier to see the tails of a distribution: the x-axis is the circuit output, and the y-axis is the cumulative distribution function (CDF) rescaled to normal quantiles, i.e. expressed in sigma. In a circuit whose output responds linearly to the process variables, the QQ curve will be linear – a straight line of dots from the bottom left to the top right. Nonlinear responses give rise to nonlinear QQ curves.

    In the bitcell QQ plot, the bend in the middle of the curve indicates a quadratic response in that region. The sharp dropoff at the bottom left shows that for process points in a certain region, the whole circuit shuts off, giving a read current of 0. The curve's shape clearly indicates that any method assuming a linear response will be extremely inaccurate, and even a quadratic model will struggle.
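
    For readers who want to reproduce this kind of view, the sketch below shows one minimal way to build such a normal QQ plot from MC samples in Python (using numpy, scipy, and matplotlib). The measure_output() function is a made-up stand-in for a SPICE-measured read current, not the actual bitcell model, and the sample count is reduced to keep the example light.

    ```python
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    def measure_output(x):
        # Made-up nonlinear response standing in for a SPICE-measured read current.
        return 1.0 + 0.1 * x[:, 0] - 0.05 * x[:, 1] ** 2

    n_samples, n_vars = 100_000, 60   # 10 local-variation variables per device, 6 devices
    x = rng.standard_normal((n_samples, n_vars))
    y = np.sort(measure_output(x))

    # Normal QQ plot: x-axis is the circuit output; y-axis is the empirical CDF
    # mapped through the inverse normal CDF, i.e. expressed in sigma.
    cdf = (np.arange(1, n_samples + 1) - 0.5) / n_samples
    sigma = stats.norm.ppf(cdf)

    plt.plot(y, sigma, ".", markersize=1)
    plt.xlabel("read current (arbitrary units)")
    plt.ylabel("CDF (sigma)")
    plt.show()
    ```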

    The plot below shows the QQ curve for the delay of a sense amp with 15 devices and 150 process variables. The three vertical "stripes" of points indicate three distinct sets of delay values -- a trimodal distribution. The jumps between the stripes indicate discontinuities: a small step in process-variable space sometimes leads to a giant change in performance. Such strong nonlinearities make linear and quadratic models fail completely; in this case they would entirely miss the mode at the far right, at a delay of about 1.5e-9 s.
    [Figure: solido-2]

    So far, we have shown the results of simulating 1M MC samples. But that can be very expensive, taking hours or days. And 1M MC samples only cover circuits to about 4 sigma. To find, on average, a single failure in a 6-sigma circuit, one would need about 1 billion MC samples. (Here a failure is either failing a spec, or failing to simulate, which also implies failing the spec.) The figure below left illustrates this. As the figure below right illustrates, with fewer MC samples there will typically be no failures at all, so the tail cannot be measured directly.
    [Figure: solido-3]
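
    To make the sample-count arithmetic concrete, the snippet below converts a (single-sided) sigma target into a failure probability and the expected number of MC samples needed to see one failure. This is just standard normal-tail arithmetic, not anything vendor-specific.

    ```python
    from scipy import stats

    def failures_per_sample(sigma):
        # Single-sided tail probability of a standard normal at `sigma`.
        return stats.norm.sf(sigma)

    def samples_for_one_failure(sigma):
        # Expected number of MC samples needed to observe one failure.
        return 1.0 / failures_per_sample(sigma)

    print(samples_for_one_failure(4.0))    # ~3.2e4 samples
    print(samples_for_one_failure(4.75))   # ~1.0e6 samples: 1M MC covers roughly 4-5 sigma
    print(samples_for_one_failure(6.0))    # ~1.0e9 samples: about 1 billion MC for 6 sigma
    ```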

    Since simulating 1B MC samples is not feasible, one common workaround is to simulate 10,000 to 1M MC samples, then extrapolate in QQ space. The figures below show what happens with 1M MC samples for the bitcell and sense amp examples given previously. Clearly, extrapolation fails. The failure of the quadratic fit on the sense amp is almost humorous: the curve starts bending downwards, which is mathematically impossible since a CDF must be monotonically increasing. So much for extrapolation!
    [Figure: solido-4]
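
    For illustration, here is roughly what "extrapolating in QQ space" means in practice: fit a low-order polynomial to the observed QQ curve and evaluate it at the target sigma. The toy output below, with a hard cutoff, is a made-up stand-in for the bitcell's shut-off behavior; the point is only that the fitted polynomial carries no information about what actually happens beyond the simulated range.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Made-up output with a hard shut-off region, loosely mimicking the bitcell.
    x = rng.standard_normal(1_000_000)
    y = np.sort(np.where(x < -3.0, 0.0, 1.0 + 0.1 * x))

    # QQ coordinates: output value vs. sigma.
    sigma = stats.norm.ppf((np.arange(1, y.size + 1) - 0.5) / y.size)

    # Fit a quadratic to the observed QQ curve, then extrapolate to -6 sigma.
    coeffs = np.polyfit(sigma, y, deg=2)
    print(np.polyval(coeffs, -6.0))   # an extrapolated guess, not a measurement
    ```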

    Another idea is to do one-variable-at-a-time perturbation analysis, simulate, construct a linear model, then find the most probable point on that model that causes infeasibility. This is the "Worst-Case Distance" (WCD) method. But recall how strongly nonlinear the QQ plots for the bitcell and sense amp are. A linear approach produces a linear QQ curve, and therefore would simply be wrong: the curve will overshoot or undershoot as it extends from nominal, leading to overly optimistic or pessimistic estimates of yield. Another way to visualize the failure is to consider the images below. The figure below left illustrates the WCD / linear approach, which separates feasible from infeasible points in process space with a line or plane. This means it cannot capture a nonlinear feasibility region, as shown below (middle), or a disjoint nonlinear feasibility region, as shown below (right).
    [Figure: solido-5]
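
    The sketch below illustrates the WCD recipe under strong simplifying assumptions: independent standard-normal process variables, a single lower spec limit, and a hypothetical simulate() standing in for SPICE. It is meant only to show the mechanics (perturb, build a linear model, compute the distance to the failure plane), not a production implementation.

    ```python
    import numpy as np

    def simulate(x):
        # Hypothetical stand-in for a SPICE measurement at process point x.
        return 1.0 + 0.08 * x[0] - 0.03 * x[1] + 0.01 * np.sum(x[2:])

    n_vars = 60
    spec_lower = 0.2                 # the circuit fails if output < spec_lower
    x0 = np.zeros(n_vars)            # nominal process point
    y0 = simulate(x0)

    # One-variable-at-a-time perturbation to build a linear model y ~= y0 + g.x
    delta = 0.1
    g = np.array([(simulate(x0 + delta * np.eye(n_vars)[i]) - y0) / delta
                  for i in range(n_vars)])

    # Worst-case distance: distance in sigma from nominal to the failure
    # hyperplane y0 + g.x = spec_lower, for independent N(0,1) variables.
    wcd = (y0 - spec_lower) / np.linalg.norm(g)
    x_mpp = x0 - wcd * g / np.linalg.norm(g)   # most probable failing point
    print(wcd, simulate(x_mpp))                # simulate(x_mpp) lands right at spec_lower
    ```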

    Another idea for handling high-sigma designs is to do rejection sampling on the MC samples as they appear, with the help of a nonlinear classifier. This is the so-called "Statistical Blockade" (SB). It starts by simulating about 1000 MC samples, then constructs a nonlinear classifier (a support vector machine). From then on, it generates MC samples one at a time. If the classifier is "sure" that a sample is feasible (i.e., not in the extreme ~2% of output values), it treats it as such without simulating; if the classifier is not sure, the sample is simulated to find out. The figure below illustrates. Unfortunately, if the classifier is even a couple of percent inaccurate on incoming samples, it will inadvertently "classify away" samples that should have been simulated. For example, it would (in all probability) miss the far-right tail region in the sense amp QQ plot above.

    [Figure: solido-6]
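
    Here is a minimal sketch of the SB idea, assuming scikit-learn's SVC as the nonlinear classifier and a made-up simulate() in place of SPICE. The 2% tail threshold follows the description above; everything else (sample counts, kernel choice) is illustrative only, and real SB implementations use more conservative thresholds.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n_vars = 150

    def simulate(x):
        # Hypothetical stand-in for a SPICE delay measurement (rows = MC samples).
        return 1e-9 * (1.0 + 0.05 * x[:, 0] + 0.02 * x[:, 1] ** 2)

    # Step 1: simulate ~1000 initial MC samples; label the worst ~2% as "tail".
    x_init = rng.standard_normal((1000, n_vars))
    y_init = simulate(x_init)
    tail_threshold = np.quantile(y_init, 0.98)
    labels = (y_init >= tail_threshold).astype(int)
    clf = SVC(kernel="rbf", class_weight="balanced").fit(x_init, labels)

    # Step 2: for further MC samples, simulate only those the classifier flags
    # as possibly in the tail; the rest are "blockaded" (assumed feasible).
    x_stream = rng.standard_normal((100_000, n_vars))
    maybe_tail = clf.predict(x_stream) == 1
    tail_delays = simulate(x_stream[maybe_tail])
    print(maybe_tail.sum(), "of", len(x_stream), "samples actually simulated")
    ```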

    Importance sampling (IS) is another approach for handling high-sigma designs. The general idea is to change the sampling distribution so that more samples fall in the failure region. It typically consists of two steps, as shown in the figure below. The first step finds the new sampling region (yellow), which may be done via uniform sampling, a linear / WCD approach, or a more general optimization approach; this step typically finds a "center", which is simply a new set of mean values for the sampling distribution. The second step repeatedly draws and simulates samples from the new distribution; it calculates yield by assigning each sample a weight based on the point's probability density under the original and new sampling distributions.
    [Figure: solido-7]
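
    Below is a minimal sketch of mean-shifted importance sampling with likelihood-ratio weights. The shift (the "center") is computed here directly from the known gradient of a made-up simulate() function; in a real flow it would come from the step-one search described above.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_vars, n_samples = 60, 100_000
    spec_lower = 0.2

    def simulate(x):
        # Hypothetical stand-in for a SPICE measurement (rows = MC samples).
        return 1.0 + 0.08 * x[:, 0] - 0.03 * x[:, 1] + 0.01 * x[:, 2:].sum(axis=1)

    # Step 1 (assumed done): a "center" for the new sampling distribution,
    # here taken along the gradient of the toy output, toward the failure region.
    g = np.full(n_vars, 0.01)
    g[0], g[1] = 0.08, -0.03
    shift = -7.0 * g / np.linalg.norm(g)

    # Step 2: draw from the shifted distribution, simulate, and weight each
    # sample by p(x)/q(x), the original-vs-shifted probability density ratio.
    x = rng.standard_normal((n_samples, n_vars)) + shift
    fails = simulate(x) < spec_lower
    log_w = stats.norm.logpdf(x).sum(axis=1) - stats.norm.logpdf(x - shift).sum(axis=1)
    weights = np.exp(log_w)

    print("estimated failure probability:", np.mean(weights * fails))
    ```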

    While IS has strong intuitive appeal, it turns out to scale very poorly with the number of process variables, causing inaccuracy. Here's why: step one needs to find the most probable points that cause infeasibility; if it is off even by a bit, the average weight of the infeasible samples will be too low, giving yield estimates that are far too optimistic. For example, in one of our tests, a canonical IS method estimated sigma to be >10 when the true sigma was 6. Reliably finding the most probable points amounts to a global optimization problem, which has complexity exponential in the number of process variables – it can handle 6 or 12 variables (search space of ≈10^6 or 10^12), but not e.g. 60 or 150 variables as in the industrial bitcell and sense amp problems above (search space of 10^60 or 10^150).

    Let us consider a high-sigma approach that reframes the problem, and its associated complexity, by operating on a finite set of MC samples. If we have 1B MC samples, the upper bound on complexity is 10^9. While "just" 10^9 is much better than the 10^150 complexity of IS, it is still too expensive to simulate 1B MC samples. But what if we were sneaky about which MC samples we actually simulated? Let us use an approach that prioritizes simulations towards the most-likely-to-fail cases. It never outright rejects samples in case they turn out to fail; it merely de-prioritizes them. It can learn how to prioritize using modern high-dimensional machine learning, adapting based on feedback from SPICE. We call this approach High-Sigma Monte Carlo (HSMC) because it uses MC samples, yet can handle high-sigma problems. The QQ plots below illustrate HSMC results (in red) on the 60-variable bitcell and the 150-variable sense amp, where HSMC used 100M generated MC samples. In <10K simulations for each problem, HSMC effectively found the tails of each distribution. This means a 10,000x speedup for highly nonlinear, high-dimensional problems. The sense amp delay results are particularly significant: HSMC is the only technique we tested that was able to solve it reliably.
    [Figure: solido-8]
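
    HSMC itself is proprietary (patents pending, as noted below), so the sketch here is only a generic illustration of the reframed problem: pre-generate a finite set of MC samples, then use a machine-learning model, retrained on simulator feedback each round, to decide which samples to simulate next. The model choice (a random forest), the batch sizes, the small candidate count, and the simulate() stand-in are all assumptions for illustration, not Solido's algorithm.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(4)
    n_vars = 150
    n_candidates = 100_000   # kept small here; the runs described above used ~100M samples

    def simulate(x):
        # Hypothetical stand-in for a SPICE measurement (smaller output = closer to failing).
        return 1.0 + 0.05 * x[:, 0] - 0.02 * x[:, 1] ** 2 + 0.01 * x[:, 2:].sum(axis=1)

    candidates = rng.standard_normal((n_candidates, n_vars))          # finite MC sample set
    simulated = list(rng.choice(n_candidates, 500, replace=False))    # initial random batch
    outputs = {i: simulate(candidates[[i]])[0] for i in simulated}

    for _ in range(10):   # a few adaptive rounds of "learn, prioritize, simulate"
        # Train a model on everything simulated so far (feedback from "SPICE").
        model = RandomForestRegressor(n_estimators=50)
        model.fit(candidates[simulated], np.array([outputs[i] for i in simulated]))

        # Rank the remaining candidates by predicted closeness to failure.
        # Nothing is rejected outright; low-priority samples are merely deferred.
        preds = model.predict(candidates)
        preds[simulated] = np.inf             # never re-simulate a sample
        for i in np.argsort(preds)[:500]:     # simulate the top-priority batch
            outputs[i] = simulate(candidates[[i]])[0]
            simulated.append(i)

    print("worst observed outputs:", sorted(outputs.values())[:10])
    ```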

    Because of its speed, accuracy, and scalability, the HSMC approach is quickly finding broad industrial deployment as part of the design / verification flows at several leading semiconductor firms and foundries. As an example, consider a case study at Nvidia: 6-sigma analysis of a flip-flop digital standard cell with 1000 process variables. On this circuit, HSMC found the tails of 5 billion MC samples (6 sigma) in 4000 MC simulations -- a >1,000,000x reduction in the number of simulations. Nvidia reported that, in general, HSMC allowed them to do high-sigma analysis on memory and standard-cell designs in a number of applications where it wasn't previously feasible. The details of the case study are at: http://www.deepchip.com/items/0492-10.html. There are several patents pending regarding HSMC and the surrounding technologies.

    This whitepaper provides more detail on HSMC and other high-sigma approaches:
    http://www.solidodesign.com/technology/high-sigma-verifier/

    --Trent McConaghy, Co-founder and CTO, Solido Design Automation, Inc.