Their designs are primarily based on the following GLOBALFOUNDRIES processes: 22nm fully depleted SOI, 14nm FinFET, and 7nm, which is in development. For a single bit cell in any of these processes, assuring high yield is fairly straightforward: if you want 99.865% reliability, in theory a 3-sigma analysis will suffice. However, a few problems arise for memory chip designers. For one, the distribution curves for semiconductor yields are not ideal bell curves; they have long tails that skew the ideal statistics. The even bigger problem is that chips like those John works on have 300 million bit cells, so even at a 1-in-300-million bit cell failure rate, most chips will contain at least one failing cell.
To get orders of magnitude fewer failures, analysis is needed out to 6 sigma. At that level you can expect roughly 10 failures per 10 billion cells, which is much closer to the desirable range for a device with 300M instances of the cell in question.
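The sigma-to-yield arithmetic above is easy to check. The sketch below is a minimal illustration, not Solido's tooling: the helper names are hypothetical, the 300M cell count comes from the article, and independent cell failures are assumed.

```python
from math import erfc, sqrt

def tail_prob(sigma):
    """One-sided Gaussian tail probability beyond `sigma` standard deviations."""
    return 0.5 * erfc(sigma / sqrt(2))

def chip_yield(cell_fail_prob, n_cells):
    """Probability that every one of n_cells bit cells works (independence assumed)."""
    return (1 - cell_fail_prob) ** n_cells

N = 300_000_000  # bit cells per chip, per the article

p3 = tail_prob(3.0)  # ~1.35e-3, i.e. 99.865% per-cell reliability
p6 = tail_prob(6.0)  # ~9.87e-10, roughly 1 failure per billion cells

# At 3 sigma the per-cell reliability looks fine, but across 300M cells
# the chip yield collapses; 6 sigma brings it back to a workable range.
print(f"3-sigma cell failure prob: {p3:.3e}, chip yield: {chip_yield(p3, N):.3e}")
print(f"6-sigma cell failure prob: {p6:.3e}, chip yield: {chip_yield(p6, N):.1%}")
```

Note that even at 6 sigma, yield loss from bit-cell variation alone is non-trivial at this cell count, which is why designers push analysis this far out.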
Monte Carlo simulation is the favored approach for ensuring yield across the range of variation expected in designs like the Invecas memory chips. With Monte Carlo simulation, huge numbers of simulations are run with varying process corners and variation parameters. The results provide a good look at performance under the bell curve. However, to get a better look at the troublesome long tail, truly enormous numbers of simulations are usually called for. Say you run between 100K and 1M simulations: as the illustration below shows, that only just reaches the tail, where the most important and interesting results sit.
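A plain Monte Carlo loop makes the sample-count problem concrete. This is a toy sketch, not a real SPICE flow: the delay model, sensitivity, and spec limit are all made-up numbers chosen so that failures occur at the 3-sigma point.

```python
import random

random.seed(0)

def simulate_cell(vth_shift):
    """Stand-in for a SPICE run: toy read-delay model (hypothetical numbers)."""
    return 1.0 + 0.08 * vth_shift  # ns: nominal delay plus threshold-voltage sensitivity

SPEC = 1.24  # ns failure threshold -> cell fails when vth_shift exceeds 3 sigma

n_runs = 100_000
failures = sum(1 for _ in range(n_runs)
               if simulate_cell(random.gauss(0, 1)) > SPEC)

print(f"estimated failure rate: {failures / n_runs:.2e}")
# A ~1.35e-3 (3-sigma) event shows up clearly in 100K runs, but a
# 6-sigma (~1e-9) event would need billions of runs to observe even once.
```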

Solido has a clever solution to this problem that gives designers access to simulation results well into the tail without having to run millions or billions of simulations. With Solido’s High-Sigma Monte Carlo, the parameters for a large number of potential simulation runs are generated. A subset of these is run based on a preselection criterion. The results of those simulations are then used intelligently by the Solido software to further select and refine the specific samples that need to be run to selectively populate the tail. A feedback loop verifies that the predicted ordering of the samples is correct.
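Solido's actual algorithm is proprietary, but the general shape of such a flow can be sketched: generate a huge candidate pool cheaply, rank it with an inexpensive predictor, simulate only the predicted-worst sliver, and verify the ordering. Everything here is illustrative; the surrogate (simply |x|) stands in for a model the real tool would fit from pilot simulations, and the delay formula is invented.

```python
import random

random.seed(1)

def expensive_sim(x):
    """Stand-in for a full SPICE run: toy delay vs. parameter deviation (ns)."""
    return 1.0 + 0.08 * abs(x) + 0.005 * x * x

# 1. Generate parameters for a huge number of potential runs (no simulation yet).
pool = [random.gauss(0, 1) for _ in range(1_000_000)]

# 2. Rank candidates by a cheap surrogate for predicted risk. Here that is
#    just |x|; the real tool learns a predictor from a small pilot set.
ranked = sorted(pool, key=lambda x: -abs(x))

# 3. Simulate only the predicted-worst sliver of the pool.
tail_results = [expensive_sim(x) for x in ranked[:1000]]

# 4. Feedback check: the simulated ordering should match the prediction,
#    so no dangerous sample was left unsimulated.
assert all(a >= b for a, b in zip(tail_results, tail_results[1:]))

print(f"worst delay across 1M samples, simulating only 1K: {max(tail_results):.3f} ns")
```

The point of the sketch is the ratio: the extreme tail of a million-sample pool is characterized with only a thousand "expensive" simulations.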
The net result is that the most interesting part of the distribution curve – the tail – is thoroughly explored without having to brute-force simulate every sample. Going back to John’s run on his Invecas example, he generated 623M samples in order to get to 5.5 sigma, yet he only needed to actually simulate fewer than 13K of them to obtain useful results. Had he instead extrapolated from the median of the results, he would have been off by a significant margin.
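Why extrapolation from the body of the distribution misleads is easy to demonstrate. The example below is purely illustrative (not John's data): a mostly-Gaussian population with a small heavy-tailed component, where a Gaussian fit to the body badly underpredicts how much probability the real tail holds.

```python
import random
import statistics
from math import erfc, sqrt

random.seed(2)

# Illustrative long-tailed "process" distribution: 99% standard Gaussian
# plus a 1% wide component. All numbers are made up for demonstration.
def sample():
    return random.gauss(0, 1) if random.random() < 0.99 else random.gauss(0, 3)

data = [sample() for _ in range(1_000_000)]

# Gaussian extrapolation: fit center and spread, then predict the tail.
mu, sd = statistics.median(data), statistics.pstdev(data)
threshold = 4.0
predicted = 0.5 * erfc((threshold - mu) / (sd * sqrt(2)))

# What the long tail actually holds, measured directly.
actual = sum(x > threshold for x in data) / len(data)

print(f"Gaussian extrapolation: {predicted:.1e}   actual tail mass: {actual:.1e}")
# The extrapolated estimate is roughly an order of magnitude too optimistic.
```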

Hearing a discussion of a real-life case is pretty interesting. If you want to see John Barth’s entire talk on the Invecas work, you can find it here on the Solido website.