
Thread: Semiconductor Process Variation Wiki

  1. #1
     Daniel Nenni (Admin), Silicon Valley

    "The primary problem today, as we take 40 nm into production, is variability, he says. There is only so much the process engineers can do to reduce process-based variations in critical quantities. We can characterize the variations, and, in fact, we have very good models today. But they are time-consuming models to use. So, most of our customers still dont use statistical-design techniques. That means, unavoidably, that we must leave some performance on the table. Dr. Jack Sun, TSMC Vice President of R&D

    Process variation is the naturally occurring variation in the attributes of transistors (length, width, oxide thickness) when integrated circuits are fabricated. It becomes particularly important at smaller process nodes (<65 nm), as the variation becomes a larger percentage of the full length or width of the device and as feature sizes approach fundamental dimensions such as the size of atoms and the wavelength of the light used to pattern lithography masks.

    Process variation causes measurable and predictable variance in the output performance of all circuits, but particularly analog circuits, due to mismatch. If this variance causes the measured or simulated performance of a particular output metric (bandwidth, gain, rise time, etc.) to fall below or rise above the specification for the particular circuit or device, it reduces the overall yield for that set of devices. As Dr. Mark Liu, TSMC Vice President of Operations, puts it:

    "In this generation (40 nm), what we find is that what's important is the design and layout style, because different products show quite a wide range of yields, and for the low-yielding products it is mainly because of the design-layout dependence. That is what we call Design for Manufacturing; in plain English, it is when the design cannot be completely described by the design rules."

    Transistor-level designs, which include mixed-signal, analog/RF, embedded memory, standard cell, and I/O, are the most susceptible to parametric yield issues caused by process variation. Process variation may occur for many reasons during manufacturing, such as minor changes in humidity or temperature in the clean room while wafers are transported, or non-uniformities introduced during process steps that produce variation in gate oxide, doping, and lithography; the bottom line is that it changes the performance of the transistors.

    The most commonly used technique for estimating the effects of process variation is to run SPICE simulations using the digital process corners provided by the foundry as part of the SPICE models in the process design kit (PDK). This concept is universally familiar to transistor-level designers, and digital corners are generally run for most analog designs as part of the design process.

    [Figure: Solido graphic 1]
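    For readers who want to see what a basic corner sweep looks like in practice, here is a minimal scripted sketch. The PDK library path, corner section names, device model names, and sizes below are hypothetical placeholders, and the example assumes ngspice is available in batch mode; a real flow would use the foundry PDK's documented corner names and whatever simulator your design environment provides.

```python
import subprocess
from pathlib import Path

# Hypothetical PDK model library and corner section names; the real names
# come from the foundry PDK documentation.
PDK_LIB = "models/pdk_models.lib"
CORNERS = ["tt", "ss", "ff", "sf", "fs"]

# Template netlist for the circuit under test; {corner} selects the corner
# section inside the PDK model library. Device models and sizes are made up.
NETLIST_TEMPLATE = """\
* Corner simulation of a simple inverter ({corner})
.lib '{lib}' {corner}
Vdd vdd 0 1.1
Vin in  0 PULSE(0 1.1 0 50p 50p 1n 2n)
Mp  out in vdd vdd pch W=0.4u L=40n
Mn  out in 0   0   nch W=0.2u L=40n
Cl  out 0 5f
.tran 10p 4n
.end
"""

def run_corner(corner: str) -> Path:
    """Write a per-corner netlist and run it with ngspice in batch mode."""
    netlist = Path(f"inv_{corner}.sp")
    netlist.write_text(NETLIST_TEMPLATE.format(corner=corner, lib=PDK_LIB))
    log = Path(f"inv_{corner}.log")
    # -b = batch (non-interactive) run, -o = redirect simulator output.
    subprocess.run(["ngspice", "-b", "-o", str(log), str(netlist)], check=True)
    return log

if __name__ == "__main__":
    for corner in CORNERS:
        print(f"corner {corner}: output written to {run_corner(corner)}")
```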


    Digital process corners are provided by the foundry and are typically determined from Idsat characterization data for N- and P-channel transistors. Plus and minus three sigma points may be selected to represent the Fast and Slow corners for these devices. These corners are provided to represent the process variation that the designer must account for in their designs. This variation can cause significant changes in the duty cycle and slew rate of digital signals, and can sometimes result in catastrophic failure of the entire system.
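    As a minimal sketch of how such plus/minus three sigma points could be computed, assuming per-device Idsat characterization data is already available (the numbers below are synthetic stand-ins, not foundry data):

```python
import numpy as np

# Synthetic stand-in for measured Idsat characterization data (A/um);
# real corner extraction uses silicon data across many wafers and lots.
rng = np.random.default_rng(0)
idsat_n = rng.normal(loc=600e-6, scale=25e-6, size=5000)  # NMOS samples
idsat_p = rng.normal(loc=300e-6, scale=15e-6, size=5000)  # PMOS samples

def three_sigma_points(samples):
    """Return the (slow, typical, fast) Idsat points at -3/0/+3 sigma."""
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return mu - 3 * sigma, mu, mu + 3 * sigma

n_slow, n_typ, n_fast = three_sigma_points(idsat_n)
p_slow, p_typ, p_fast = three_sigma_points(idsat_p)

# A "corner" is then a pairing of these points, e.g. SS = (n_slow, p_slow),
# FF = (n_fast, p_fast), SF = (n_slow, p_fast), FS = (n_fast, p_slow).
print(f"NMOS Idsat slow/typ/fast: {n_slow:.3e} / {n_typ:.3e} / {n_fast:.3e}")
print(f"PMOS Idsat slow/typ/fast: {p_slow:.3e} / {p_typ:.3e} / {p_fast:.3e}")
```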

    However, digital corners have three important characteristics that limit their use as accurate indicators of variation bounds, especially for analog designs:
    • Digital corners capture global variation only and are developed for a digital design context: they are expressed as "slow" and "fast", which is largely meaningless for analog performance metrics.
    • Digital corners do not include local variation (mismatch) effects, which are critical in analog design.
    • Digital corners are not design-specific, which is necessary to determine the impact of variation on different analog circuit and topology types.

    These characteristics limit the accuracy of digital corners, and analog designers are left with considerable guesswork or heuristics as to the true effects of variation on their designs. The industry-standard workaround for this limitation has been to include ample design margins (over-design) to compensate for the unknown effects of process variation. However, this comes at the cost of larger-than-necessary design area and higher-than-necessary power consumption, which increases manufacturing costs and makes products less competitive. The other option is to guess at how much to tighten design margins, which can put design yield at risk (under-design). In some cases, under- and over-design can co-exist for different output parameters of the same circuit, as shown below. The figure shows simulation results for digital corners as well as Monte Carlo simulations, which are representative of the actual variation distribution.

    [Figure: Solido graphic 2]
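    A rough numerical illustration of that comparison, with made-up values for one output metric; a real flow would read both the corner and Monte Carlo results back from the simulator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up results for one output metric (say, amplifier gain in dB):
# five digital-corner simulations and a 1000-point Monte Carlo run.
corner_results = np.array([57.1, 55.4, 58.6, 56.0, 57.9])
mc_results = rng.normal(loc=57.0, scale=1.2, size=1000)
spec_min = 54.0  # assumed lower spec limit

corner_lo, corner_hi = corner_results.min(), corner_results.max()
mc_mean, mc_std = mc_results.mean(), mc_results.std(ddof=1)
mc_lo, mc_hi = mc_mean - 3 * mc_std, mc_mean + 3 * mc_std  # ~3-sigma edges

print(f"corner bounds:     [{corner_lo:.2f}, {corner_hi:.2f}]")
print(f"MC 3-sigma bounds: [{mc_lo:.2f}, {mc_hi:.2f}]")

if corner_lo > mc_lo:
    print("corners are optimistic on the low side -> risk of under-design")
else:
    print("corners are pessimistic on the low side -> likely over-design margin")
if mc_lo < spec_min:
    print("the 3-sigma low edge violates the spec -> yield loss expected")
```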


    To estimate device mismatch and other local process variation effects, the designer may apply a suite of ad hoc design methods that typically give only a very broad indication of whether mismatch is likely to be a problem. These methods often require modification of the schematic and are imprecise estimators. For example, a designer may add a voltage source in series with one device in a current mirror to simulate the effect of an offset voltage.
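    A back-of-the-envelope alternative to such schematic tweaks is a Pelgrom-style estimate of the mismatch magnitude. The sketch below assumes an example mismatch coefficient and bias point (not values from any particular PDK) and propagates the threshold-voltage mismatch to a first-order current-mirror error:

```python
import math

# Pelgrom-style estimate of threshold-voltage mismatch and the resulting
# current-mirror error. A_VT and the bias point are assumed example values.
A_VT = 3.5e-3 * 1e-6      # Vth mismatch coefficient, 3.5 mV*um expressed in V*m
W, L = 2e-6, 0.5e-6        # device width and length in metres
gm_over_id = 10.0          # bias-dependent transconductance efficiency (1/V)

sigma_dvth = A_VT / math.sqrt(W * L)          # sigma of the Vth difference
sigma_di_over_i = gm_over_id * sigma_dvth     # first-order relative current error

print(f"sigma(dVth)           = {sigma_dvth * 1e3:.2f} mV")
print(f"sigma(dI/I)           = {sigma_di_over_i * 100:.2f} %")
print(f"3-sigma current error = {3 * sigma_di_over_i * 100:.2f} %")
```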

    The most reliable and commonly used method for measuring the effects of process variation is Monte Carlo analysis, which simulates a set of random statistical samples based on statistical process models. Since SPICE simulations take time to run (seconds to hours) and the number of design variables is typically high (1000s or more), it is commonly the case that the sample size is too small to make reliable statistical conclusions about design yield. Rather, Monte Carlo analysis is used as a statistical test to suggest that it is likely that the design will not result in catastrophic yield loss. Monte Carlo analysis typically takes hours to days to run, which prohibits its use in a fast, iterative statistical design flow, where the designer tunes the design, then verifies with Monte Carlo analysis, and repeats. For this reason, it is common practice to over-margin in anticipation of local process variation effects rather than to carefully tune the design to consider the actual process variation effects. Monte Carlo is therefore best suited as a rough verification tool that is typically run once at the end of the design cycle.

    [Figure: Solido graphic 3]
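    The sample-size limitation mentioned above can be made concrete with a quick yield estimate and confidence interval on a synthetic Monte Carlo run (all numbers below are invented for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Invented Monte Carlo results for one output metric with a lower spec limit.
n_samples = 200                        # a typically "affordable" MC run size
metric = rng.normal(loc=1.0, scale=0.05, size=n_samples)
spec_min = 0.88

passes = int((metric >= spec_min).sum())
yield_hat = passes / n_samples

if passes == n_samples:
    # Rule of three: with zero observed failures, the 95% upper bound on the
    # failure rate is roughly 3/n, so the true yield could still be this low.
    lo, hi = 1.0 - 3.0 / n_samples, 1.0
else:
    # Normal-approximation 95% confidence interval on the pass fraction.
    se = math.sqrt(yield_hat * (1.0 - yield_hat) / n_samples)
    lo, hi = max(0.0, yield_hat - 1.96 * se), min(1.0, yield_hat + 1.96 * se)

print(f"estimated yield:         {yield_hat:.4f}")
print(f"95% confidence interval: [{lo:.4f}, {hi:.4f}]")
# With only a few hundred samples the interval is far too wide to confirm a
# 3-sigma (99.87%) yield target, which is why MC is only a coarse check here.
```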


    The solution is a fast, iterative statistical design flow that captures all relevant variation effects in a design-specific, corner-based flow representing process variation (global and local) as well as environmental variation (temperature and voltage).
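    Conceptually, one way to build such design-specific corners is to pick, for each output of interest, the Monte Carlo sample closest to its 3-sigma worst case and reuse those samples as corners while tuning. The sketch below illustrates only that idea, with synthetic data in place of real simulations; it is not the algorithm of any particular tool.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a Monte Carlo run: each row is one sample of the
# statistical process parameters, and each output is a "simulated" metric.
n_samples, n_vars = 500, 20
process_samples = rng.standard_normal((n_samples, n_vars))
gain_db = 60.0 + 0.4 * process_samples[:, :5].sum(axis=1)        # worst case: low
bandwidth_hz = 1e9 + 2e7 * process_samples[:, 5:10].sum(axis=1)  # worst case: low
outputs = {"gain_db": gain_db, "bandwidth_hz": bandwidth_hz}

design_corners = {}
for name, values in outputs.items():
    target = values.mean() - 3.0 * values.std(ddof=1)   # 3-sigma worst case
    idx = int(np.argmin(np.abs(values - target)))       # closest real sample
    design_corners[name] = process_samples[idx]
    print(f"{name}: corner is sample #{idx}, value {values[idx]:.4g} "
          f"(3-sigma target {target:.4g})")

# Each entry in design_corners is a concrete set of statistical parameter
# values that can be re-simulated in seconds on every design iteration,
# instead of rerunning a full Monte Carlo analysis after each change.
```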


    Graphical data provided by Solido Design Automation's Variation Designer.


    Sources:
    1. https://www.semiwiki.com/forum/conte...25-solido.html
    2. http://www.solidodesign.com/


  2. #2
     admin (Administrator)
    I don't believe that many senior HW chip project managers feel comfortable with the concept of SSTA reporting. The idea that there is a probability-related number in the sign-off report, versus the traditional "zero is good, negative is bad" concept, is a hard pill for many to swallow.

    Personally, I would prefer to extend the existing OCV (not AOCV) methodology to include proximity information (POCV), at least in clock-tree analysis. The proximity calculation would relate the source and capture cells, and a derating factor would be calculated based on the distance between cells in these two branches, or perhaps the distance between cells at the same "level" of logic.
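    To make the proposal concrete, here is a small sketch of what a distance-dependent derate could look like; the base derate, saturation distance, and exponential form are arbitrary assumptions for illustration, not anything a current STA tool implements:

```python
import math

def proximity_derate(distance_um: float,
                     base_derate: float = 0.08,
                     saturation_um: float = 500.0) -> float:
    """Distance-dependent OCV derate, as sketched in the post above.

    Cells that are physically close see highly correlated variation, so they
    earn a small derate; the derate grows with separation and saturates at
    the plain OCV value for cells that are far apart.
    """
    fraction = 1.0 - math.exp(-distance_um / saturation_um)
    return base_derate * fraction

# Example: derate applied between launch- and capture-clock-branch cells.
for d in (10, 100, 500, 2000):
    print(f"separation {d:5d} um -> derate {proximity_derate(d) * 100:.2f} %")
```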

    However, I can see that some in EDA would be against such an idea, since their tools don't support the concept of distance between objects.

    If we could use standard OCV for the data path and POCV for clock paths, I think the risk of on-chip variation issues impacting synchronous designs could be mitigated without leaving as much performance on the table as the standard OCV/AOCV or SSTA methodologies do... Miken


  3. #3
     admin (Administrator)
    I contend it is important at every technology node. Entropy is the only constant; the only thing that changes as technology scales is the magnitude of what you consider important. The perfect process would produce a result describable by an impulse, and it does not exist in real life. The goal of every process owner and tool owner should be to get as close to that ideal as possible.


  4. #4
     admin (Administrator)
    It is worth noting that, in addition to the issues faced at the finer nodes, process variations are the ultimate limit to the quality of all analog devices. This issue forms one of the major trade-offs in analog design at all nodes. The reliability of foundry PDKs in accurately representing true 3-sigma process corners is a statement about the quality of foundry service; not all models and foundries are alike!


  5. #5
     admin (Administrator)
    Thanks to the need to keep pace with Moore's law, you will incur variation merely by your choice of transistor placement, on top of the variations already quoted, namely L, W, Gox, implants, etc. If you are working in 28 nm HKMG, some of the proximity-induced variation is temporarily relaxed due to the change in the process, but it will be back with a vengeance at the next node. Also, because litho technology has not kept pace with dimensional scaling, you will find the need for double patterning (DPL) to extend current immersion litho, and it is déjà vu for those with experience designing for alternating phase-shift masks.


  6. #6
     admin (Administrator)
    Yield is not the only thing endangered by variation. There is another snake under this rock: reliability risk. It would be wonderful if variation always led to test fails!


  7. #7
     admin (Administrator)
    Mark Rencher • Dan, excellent forum to have an open dialog on relevant issues. I wanted to weigh in on this topic of process variation, as it continues to be relevant, especially with the recent Intel announcement and recall. While publishing the EETimes articles "What's Yield Got to Do with IC Design", there was a consistent misunderstanding about how to account for process variation and what corrective action could or should be taken in a design.

    One of the most important messages conveyed in the EETimes articles was the ability to determine a design's sensitivity to process, voltage, temperature, mismatch, lithography, and reliability variation. It is only when a designer understands the design's sensitivity that an objective methodology can be applied to reduce the sensitivity or increase the design's robustness.
    Because there are different variation sources, different methods are required to identify and reduce the design sensitivity. PVT is different from mismatch, which is different from lithography or reliability. The PVT and reliability methods have been well known since the mid '90s; litho methods since the mid '00s.

    DFY is always a trade-off between pay now and pay later. Pay later, and there is the risk of poor yield and bad press. The recent Intel announcement should be a reminder that, as painful as design for yield is, DFY methods always have a positive ROI over time.

