
  • Replacing the British Museum Algorithm

    In principle, one way to address variation is to run simulations at lots of PVT corners. In practice, most of this simulation is wasted since it adds no new information, and even so, important corners get missed. This is what Sifuei Ku of Microsemi calls the British Museum Algorithm: you walk everywhere, and if you don't walk to the right place, you miss something. We've probably all had that sort of experience in large museums like the Met or the Louvre.

    At DAC, Solido organized a panel session, "Winning the Custom IC Race", with participants from Microsemi, Applied Micro Circuits and Cypress (along with Solido themselves). The session was recorded, and the videos and their transcripts are now available.

    The session kicked off with Amit Gupta, the CEO of Solido, showing the results of a blind survey that they had commissioned earlier in the year. You can look at the transcript for the full results but here are a few of the highlights:
    • 60% of companies use two or more SPICE simulation vendors; only 40% use just one
    • a quarter of respondents were planning to evaluate or add a new simulator in the next year
    • the most important features of a simulator were (in order) accuracy, foundry models, speed and capacity. Not really a big surprise: nobody wants inaccurate results really fast.
    • 73% use Cadence, 52% use Synopsys, 33% use Mentor and there is another 25% for other simulators such as Agilent and Silvaco (of course these add up to more than 100% because most people use 2 or more)
    • the top drivers for variation aware tools were to reduce overdesign (43%), avoid respins (42%), reduce underdesign (39%), and some other factors at much lower percentages
    • when it came to actually using or planning to use variation-aware tools the percentages were 28% already using, 9% planning to, 29% planning to evaluate and 34% with no plans
    • the big requirements for such a tool were SPICE simulation reduction (50%), accuracy (49%), integration (41%) and some other much lower factors

    It only gets worse with more and more advanced process nodes. As Amit said to wrap up:
    In addition to ultra-low power design, where the amount of margin goes down and therefore the amount of variability goes up, we're also seeing variation become more of an issue as we move to smaller nodes: 16 nanometer FinFET transistor design, the FD-SOI transistor design we're starting to see, and multi-patterning and spacer effects at 10 nanometer.

    The first customer up was Sifuei Ku of Microsemi's SoC division (the old Actel), who had needed an alternative to the British Museum Algorithm mentioned above, in the middle of a project. Since they design FPGAs they push transistors to the limit and often design beyond the well-characterized regions.

    Microsemi wanted to supplement their traditional PVT and Monte Carlo (MC) analysis with high-sigma MC. When Sifuei was asked how this compared to their previous software, he had to admit that they didn't have any previous solution, so they evaluated the possibilities. Obviously the first criterion was that the software needed to be able to detect issues. But since they were adopting it in the middle of the execution phase for 28nm arrays, they also needed it to be easy for the CAD people to set up and for the designers to use.

    When they evaluated Solido's Variation Designer, it found some problems. Real problems.
    During the eval cycle it did catch several issues. We had a chip that came back last year with an issue in the pulse generator in the nonvolatile memory section; the designer had eventually figured out what the problem was. During the evaluation we sent the exact circuit to Solido and their software caught it right away, zeroing in on the issue we believed was the problem. If I remember correctly, we started the evaluation on a Thursday, the CAD people had it up and running on Friday, and the designer played with it over the weekend; on his first design, a level shifter, he found the issues that same weekend.

    They also got good results with high-sigma MC. They could never have run something like this before because it would have taken literally billions of simulations, which is obviously intractable. But with Solido most circuits converged in 500 to 1,000 simulations, better than Solido's own claim of convergence in 3,000 to 4,000 simulations.
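
    To see why plain Monte Carlo is intractable at high sigma: a 6-sigma failure has a one-sided probability of roughly 1e-9, so you need on the order of billions of random samples just to observe a handful of failures. Here is a back-of-the-envelope sketch of that arithmetic (the function name and failure-count target are illustrative, not Solido's method):

    ```python
    from statistics import NormalDist

    def mc_samples_needed(sigma, failures=10):
        """Rough count of plain Monte Carlo samples needed to observe
        about `failures` failing samples at a one-sided sigma target."""
        p_fail = NormalDist().cdf(-sigma)  # one-sided tail probability
        return failures / p_fail

    print(f"{mc_samples_needed(3):.2e}")  # a few thousand at 3 sigma
    print(f"{mc_samples_needed(6):.2e}")  # roughly 1e10 at 6 sigma
    ```

    High-sigma tools avoid this wall by steering samples toward the failure region instead of sampling blindly, which is broadly why a few hundred targeted simulations can replace billions of random ones.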

    More recently they evaluated Hierarchical Monte Carlo and ended up buying the tools after a two-week evaluation, run on the full memory arrays.
    Solido was able to construct this memory, this 15.6 million cell array, with all the bit lines and sense amps. What we do is run the Monte Carlo for our chip to 3 sigma, the sense amps to 5.1 sigma, and the bit cells to 6.2 sigma.
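
    The differing per-block sigma targets follow from the cell counts: with millions of identical bit cells, each cell must be far more reliable than the chip-level target. A rough union-bound sketch of that arithmetic (one-sided tails, no redundancy assumed; this is my illustration, not Solido's exact accounting):

    ```python
    from statistics import NormalDist

    def per_cell_sigma(chip_sigma, n_cells):
        """Union bound: if any one of n_cells failing fails the chip,
        the per-cell failure budget is the chip tail probability / n_cells."""
        nd = NormalDist()
        p_chip = nd.cdf(-chip_sigma)  # one-sided chip failure budget
        p_cell = p_chip / n_cells     # failure budget per bit cell
        return -nd.inv_cdf(p_cell)    # sigma level each cell must meet

    print(round(per_cell_sigma(3, 15_600_000), 1))  # ~6.4
    ```

    For 15.6 million cells against a 3-sigma chip budget this lands around 6.4 sigma per cell, in the same ballpark as the 6.2 sigma figure quoted above.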

    Bottom line: they improved the design by 17-55%, with a 1.7-million-fold saving in simulation time (from a theoretical 18 billion simulations down to 8,000). Unfortunately they were outside their budget cycle, so they had to beg their VP for more money; they got it, so the story has a happy ending.

    More from the session in another blog. The videos and transcripts for the whole session are here.