High Yield and Performance – How to Assure?
by Pawan Fangaria on 04-16-2012 at 7:30 am

In today’s era, high-performance mobile devices are asserting their place in every gizmo we play with, and what enables them to work efficiently behind the scenes is large chunks of memory with low power and high speed, packed as densely as possible. Ever-growing requirements on power, performance and area have led us to process nodes like 20nm, but those nodes bring the burgeoning challenge of extreme process variation, which limits yield. There is therefore no escape from detecting the failure rate early in the design cycle to assure high yield.

In the case of memory, there can be billions of bit cells with column selectors and sense amplifiers, and you can imagine the read/write throughput on those cells. Although redundant columns and error-correction mechanisms are provided, they cannot tolerate bit cell failures beyond a certain number. The requirement here is to detect failures out in the range of 6 sigma.
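To get a rough sense of why the bar sits around 6 sigma, here is a back-of-the-envelope Python sketch (the numbers are my own illustration, not from the article): a hypothetical 256 Mb array with independent cell failures and no repair, where the per-cell sigma level translates into an approximate chip yield.

    from math import erf, sqrt

    def tail_prob(sigma):
        # One-sided Gaussian tail probability beyond `sigma` standard deviations.
        return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

    # Hypothetical example: a 256 Mb array, independent cell failures, no
    # redundancy or ECC.  Chip yield ~ (1 - p_cell_fail) ** n_cells.
    n_cells = 256 * 1024 * 1024
    for sigma in (4, 5, 6):
        p = tail_prob(sigma)
        print(f"{sigma}-sigma cell: p_fail ~ {p:.2e}, chip yield ~ {(1.0 - p) ** n_cells:.3f}")
    # 4 and 5 sigma give essentially zero working chips; 6 sigma gives ~77%
    # before redundancy and ECC are even considered.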

So, how do we detect failures at such high precision? Traditional methods are mostly based on Monte Carlo (MC) simulation, an idea first developed by Stanislaw Ulam, John von Neumann and Nicholas Metropolis in the 1940s. To get a feel for this, let’s consider a bit cell of 6 transistors with 5 process variables per device, for a total of 30 process variables. Below is the QQ plot with the distribution of bit cell read current (cell_i) on the x-axis and the cumulative distribution function (CDF) on the y-axis. Each dot on the graph is an MC sample point; 1 million samples were simulated.


QQ plot of bit cell read current with 1M MC samples simulated

The QQ curve is a representation of the output’s response to the process variables. The bend in the middle of the curve indicates a quadratic response in that region, and the sharp drop-off at the bottom left indicates circuit cutoff in that region. Clearly, any method assuming a linear response will be extremely inaccurate.
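For concreteness, here is a minimal Python sketch of how such a QQ plot is built from brute-force MC. The response function is a made-up stand-in (the article does not give the bit-cell netlist), but the sampling of the 30 process variables and the CDF-to-sigma mapping follow the standard recipe.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # 30 process variables: a stand-in for 6 transistors x 5 variables per device.
    N_SAMPLES, N_VARS = 1_000_000, 30
    x = rng.standard_normal((N_SAMPLES, N_VARS))

    # Hypothetical nonlinear response in place of the real bit cell read current:
    # a linear term, a quadratic term (the bend in the QQ curve) and a soft
    # cutoff term (the sharp drop-off at the bottom left).
    cell_i = x[:, 0] - 0.15 * x[:, 1] ** 2 + 0.5 * np.minimum(x[:, 2] + 2.0, 0.0)

    # QQ plot data: sorted output on x, standard-normal quantiles of the
    # empirical CDF positions on y -- every dot is one MC sample.
    cell_i_sorted = np.sort(cell_i)
    cdf = (np.arange(N_SAMPLES) + 0.5) / N_SAMPLES
    sigma_axis = stats.norm.ppf(cdf)
    # plot e.g. with matplotlib: plt.plot(cell_i_sorted, sigma_axis, ".")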

Now consider the QQ plot for delay of a sense amplifier having 125 process variables.


QQ plot of delay of sense amplifier with 1M MC samples simulated

The three stripes represent three distinct sets of delays, revealing discontinuities: a small step in process-variable space sometimes leads to a major change in performance. Such strong nonlinearities make linear and quadratic models fail completely. It must also be noted that the above result took 1M MC samples, which covers the distribution only out to about 4 sigma. For 6 sigma, one would need about 1 billion MC samples, which is not practical.
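The sample-count arithmetic behind these numbers is easy to check, assuming a Gaussian tail:

    from statistics import NormalDist

    def tail(sigma):
        # One-sided tail probability beyond `sigma` standard deviations.
        return 1.0 - NormalDist().cdf(sigma)

    for sigma, n in ((4, 1e6), (6, 1e6), (6, 1e9)):
        print(f"{sigma} sigma, {n:.0e} samples: ~{tail(sigma) * n:.3g} expected tail hits")
    # 4 sigma, 1e6 samples: ~32 hits  -> 1M MC reaches roughly the 4-sigma region
    # 6 sigma, 1e6 samples: ~0.001    -> essentially never observed
    # 6 sigma, 1e9 samples: ~1        -> hence the ~1 billion samples quoted above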

In order to detect rare failures with fewer samples, many variants of the MC method and other analytical methods have been tried, but each of them falls short in robustness, accuracy, practicality or scalability. Some of them can handle only 6 to 12 process variables. A survey of these methods is provided in a white paper by Solido Design Automation.

Solido has developed a new method they call HSMC (High-Sigma Monte Carlo), which is promising: fast, accurate, scalable, verifiable and usable. The method has been implemented as a high-quality tool in the Solido Variation Designer platform.

The HSMC method prioritizes simulations towards the most-likely-to-fail cases through adaptive learning with feedback from SPICE. It never rejects a sample that might cause a failure, which preserves accuracy. The method can produce the extreme tails of the output distributions (as in a QQ plot) using real MC samples and SPICE-accurate results, in hundreds to a few thousand simulations. The flow goes something like this (a rough code sketch of the prioritization idea follows the list) –

  1. Extract 6-sigma corners by simply running HSMC, opening the resulting QQ plot, selecting the point at the 6-sigma mark, and saving it as a corner.
  2. Try bit cell or sense amplifier designs with different sizings. For each candidate design, one only needs to simulate at the corner(s) extracted in the first step. The output performances are at “6-sigma yield”, but with only a handful of simulations.
  3. Finally, verify the yield with another run of HSMC. The flow concludes if there are no significant interactions between process variables and outputs, which is generally the case. Otherwise, re-loop by choosing a new corner, designing against it and verifying.
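Solido does not publish the internals of HSMC, so the Python snippet below is only an illustrative sketch of the general idea described above: draw a large pool of true MC points, rank them with a cheap surrogate fitted to an initial batch of simulations, then spend the SPICE budget on the most-likely-to-fail points first while keeping every simulated result. The spice_sim response, the thresholds and the linear surrogate are all hypothetical, and the adaptive feedback refinement of the real tool is omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    def spice_sim(point):
        # Stand-in for a real SPICE run; a hypothetical nonlinear response.
        return point[0] - 0.15 * point[1] ** 2

    # 1. Draw a large pool of genuine MC sample points (100M in a real flow).
    pool = rng.standard_normal((100_000, 30))

    # 2. Fit a cheap surrogate on an initial simulated batch and use it to
    #    predict which pool points are most likely to fail (lowest output).
    init = pool[:1_000]
    y_init = np.array([spice_sim(p) for p in init])
    coef, *_ = np.linalg.lstsq(init, y_init, rcond=None)
    predicted = pool @ coef

    # 3. Simulate in order of predicted badness, keeping *every* result so no
    #    real failure is thrown away; stop when the simulation budget is spent.
    FAIL_THRESHOLD, BUDGET = -3.5, 5_000
    order = np.argsort(predicted)                     # most-likely-to-fail first
    results = [(i, spice_sim(pool[i])) for i in order[:BUDGET]]
    failures = [(i, y) for i, y in results if y < FAIL_THRESHOLD]
    print(f"{len(failures)} failures found in {BUDGET} prioritized simulations")

The kept results can then be plotted on the same QQ axes as brute-force MC for comparison, which is how the figures below line up HSMC against the 1M-sample MC runs.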

Let’s look at the results of HSMC applied to the same bit cell and sense amplifier designs –


Bit cell_i – 100 failures in first 5000 samples


Sense amp delay – 61 failures in first 9000 samples


QQ plot of cell_i – 1M MC samples and 5500/100M HSMC samples
MC would have taken 100M samples against 5500 with HSMC


QQ plot of sense amp delay – 1M MC samples and 5500/100M HSMC samples

The process is further extended to reconcile global (die-to-die, wafer-to-wafer) and local (within-die) statistical process variation. It is clear that this method is fast, since only a handful of samples need to be simulated; accurate, since no likely failure is rejected; scalable, since it can handle hundreds of process variables; and verifiable and usable.

The details can be found in the white paper, “High-Sigma Monte Carlo for High Yield and Performance Memory Design”, written by Trent McConaghy, Co-founder and CTO, Solido Design Automation, Inc.

By Pawan Kumar Fangaria
EDA/Semiconductor professional and Business consultant
Email:Pawan_fangaria@yahoo.com
