Statistical Variation of IC Processes
It's no secret in the semiconductor industry that the spread between SPICE process corners like FF and SS grows larger as process nodes shrink. Just look at the following chart, which shows how normalized delays change when moving from 65nm (red line) to 40nm (blue line):
So where does all of this process variation come from? The sources of local random variability are numerous:
- Random Dopant Fluctuation (RDF)
- Line Edge Roughness (LER)
Another way to think about process variation is to recognize that we're dealing with the intrinsic statistical physics of our semiconductor materials.
In addition, there are global statistical variation sources for a process node:
- Die to die
- Wafer to wafer
- Lot to lot
- Fab to fab
The IC manufacturing process is a complex set of steps involving temperatures, times, pressures, cleanliness, and gas compositions. All of this in turn affects transistor characteristics like Vth, Tox, IDon, IDoff, etc. The accumulated device parameter variability can be plotted as a set of distribution curves:
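To make the idea concrete, here's a minimal sketch of how local and global variation sources combine into one distribution for a parameter like Vth. All of the numbers below are hypothetical, illustrative values, not from any real PDK; independent variation sources add in quadrature, and the ±3-sigma points correspond roughly to the SS/FF corners:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative numbers -- not from any real PDK.
VTH_NOM = 0.45        # nominal threshold voltage (V)
SIGMA_LOCAL = 0.015   # local (within-die) sigma (V), e.g. RDF + LER
SIGMA_GLOBAL = 0.020  # global (die/wafer/lot) sigma (V)

# Independent variation sources combine in quadrature.
sigma_total = np.sqrt(SIGMA_LOCAL**2 + SIGMA_GLOBAL**2)

# Sample a population of devices and locate the 3-sigma corners.
vth = rng.normal(VTH_NOM, sigma_total, 100_000)
ss_corner = VTH_NOM + 3 * sigma_total  # slow corner: higher Vth
ff_corner = VTH_NOM - 3 * sigma_total  # fast corner: lower Vth

print(f"sigma_total = {sigma_total*1e3:.1f} mV")
print(f"SS/FF Vth corners = {ss_corner:.3f} V / {ff_corner:.3f} V")
```

With these made-up sigmas the combined spread is 25 mV, which is why the tails of the distribution curves matter so much at smaller nodes.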
We can even look at variability by category, and notice the trends when moving from 65nm down to 28nm:
Using statistical corners for modeling is a conservative approach. Here's an actual flow that accounts for variability when creating SPICE models:
One drawback that you quickly discover with this approach is that the results (blue dots) don't correlate well with process corners (red):
A second drawback is that the results (blue dots) don't correlate well with the design corners (red) either:
Any automatically generated SPICE models based on this first approach would require fixing to agree with measured results. What we really need is fast statistical model generation along with fast statistical model adjustment, without changing our corners.
The traditional method in SPICE for handling variation is to run lots of Monte-Carlo simulations; however, running Monte-Carlo takes too much time. What we really want are faster Monte-Carlo simulations along with faster statistical optimizations.
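To see why Monte-Carlo is expensive, consider a toy version of what each trial does: sample the process parameters, evaluate a circuit metric, and tally the yield. The delay model and all coefficients below are hypothetical stand-ins for a real SPICE netlist evaluation, which is exactly the part that makes each of the thousands of trials slow:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000  # number of Monte-Carlo trials

# Sample process parameters for every trial (illustrative sigmas).
vth = rng.normal(0.45, 0.025, N)  # threshold voltage (V)
tox = rng.normal(2.0, 0.05, N)    # oxide thickness (nm)

# Toy linearized delay model standing in for a full SPICE run;
# the coefficients are made up for illustration only.
delay = 10.0 * (1 + 2.0 * (vth - 0.45) + 0.5 * (tox - 2.0))  # ps

# Yield = fraction of trials meeting a hypothetical timing spec.
spec = 11.0  # ps
yield_frac = np.mean(delay <= spec)
print(f"mean delay = {delay.mean():.2f} ps, yield = {yield_frac:.1%}")
```

In a real flow, each trial is a full SPICE simulation rather than one vectorized line, which is why fast Monte-Carlo techniques matter.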
Machine learning can be applied to this optimization approach because:
- Current statistical models are based on corner specs
- Some specs, especially the design specs, are highly nonlinear
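The nonlinearity point is easy to demonstrate. The actual cascaded machine-learning algorithm is proprietary to the tool, but here's a minimal sketch, with a made-up exponential spec (leakage-like behavior versus a normalized process shift), of why a linear model is not enough and a nonlinear fit is required:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical nonlinear design spec vs. a normalized process shift.
x = rng.uniform(-1, 1, 200)
y = np.exp(1.5 * x) + 0.01 * rng.normal(size=200)  # leakage-like spec

# A linear model misses the curvature ...
lin = np.polyfit(x, y, 1)
err_lin = np.sqrt(np.mean((np.polyval(lin, x) - y) ** 2))

# ... while a nonlinear (cubic) fit captures it far better.
cub = np.polyfit(x, y, 3)
err_cub = np.sqrt(np.mean((np.polyval(cub, x) - y) ** 2))

print(f"linear RMSE = {err_lin:.3f}, cubic RMSE = {err_cub:.3f}")
```

A cascaded ML flow takes this idea further, chaining learned stages so that corner specs and nonlinear design specs are matched together.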
Here's a flow for how optimization is accomplished using cascaded machine learning:
Related blog - Are your transistor models good enough?
The extraction results (blue dots) from this cascaded machine-learning algorithm match the corners quite well (red):
Automatic model generation uses a four-step modeling flow for advanced statistical analysis:
- Origin data
- Corner model
- Mismatch model
- Statistical model
The key techniques that make these four steps efficient are fast Monte-Carlo, a projection-based method, cascaded machine learning, and intelligent optimization. Take a look at how good the mismatch modeling is with this approach (red versus blue lines):
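For readers new to mismatch modeling, the classic starting point is Pelgrom's law: the standard deviation of the Vth difference between a matched device pair scales inversely with the square root of gate area. A quick sketch, using a hypothetical Pelgrom coefficient (real PDKs specify their own):

```python
import math

# Pelgrom's law: sigma(dVth) = A_VT / sqrt(W * L).
# A_VT below is a hypothetical value; real PDKs provide their own.
A_VT = 2.5  # mV*um

def mismatch_sigma_mv(w_um: float, l_um: float) -> float:
    """Standard deviation of Vth mismatch (mV) for a device pair."""
    return A_VT / math.sqrt(w_um * l_um)

# Larger devices match better: quadruple the area, halve the sigma.
print(mismatch_sigma_mv(1.0, 1.0))  # 2.5 mV
print(mismatch_sigma_mv(2.0, 2.0))  # 1.25 mV
```

A mismatch model in the flow above has to reproduce exactly this kind of area scaling across the measured device geometries.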
Look at how tight the data points are inside of each elliptical region for the fast generation and characterization of an 11-corner model:
Related blog - Is that PDK safe to use yet?
What does all of this wonderful technology mean to an IC designer? Well, an IC designer can now perform statistical analysis to identify issues, and fix design correlation without changing corner specs, as shown below:
I hope that you learned something new today about automatic SPICE model generation; the approach from Platform DA in their MeQLab tool looks very promising.