Last edited by Daniel Nenni; 02-06-2011 at 10:40 PM.
Cadence at DesignCon 2011
I met with Rahul Deokar, Product Manager, this morning to review nine slides that tell the story of Giga-gate and GigaHz systems design at Cadence. Their updated P&R system now completes jobs 2X faster for 28nm designs.
Silicon Realization Trends and Challenges:
Silicon Realization is an end-to-end digital flow. The focus is no longer on point tools; instead, Cadence is organized around end-to-end flows. It was nominated as one of the top three tool innovations at DesignCon.
2. Challenges: key Japanese customers confirm the need. How to get faster ARM cores, from 0.8 to 1 GHz.
Low power and mixed signal need more automation (power shutoff, dynamic voltage scaling) and better tool interoperability.
Advanced technology: 3D, 20nm. How to enable mobile video? There are node-migration risks.
3. Traditional tools are breaking, with convergence issues (physical synthesis). The 3D tool flow is very different from previous tool flows. Abstraction models at the chip and package levels are new.
4. Intent-abstraction-convergence. Supports 28nm flow. Faster P&R (2x). New power intent architect (graphical UI), shipped in December.
Abstraction (patented): gate-level netlist analysis with logic and physical views compresses database size by 80%, storing the design more efficiently. A Renesas paper has numbers on database size. Hierarchical modeling of IP supports power budgeting.
Convergence: physical synthesis and ECO flow improvements reduce the rework back into RTL. Promotes an all-Cadence tool flow.
Litho hot spots: how do you fix them? In the past, simulation-based approaches to litho had long run times. InDesign DFM came from Clear Shape a few years ago. DRC+ is a new, pattern-based technique; see the Global Foundries press release. DFM and litho are not an afterthought when using DRC+ and InDesign together.
Q: How does this compare to Mentor's approach of DFM in the loop?
A: No comment on Mentor.
IC (Virtuoso), Package (Allegro) - all play together for 3D design.
Mixed-signal: how do you do timing? Build macros. The tool can now traverse digital logic and do STA on the fly, and mixed-signal optimization is also done on the fly. Analog is still transistor-level optimization.
5. How to design and analyze 3D. A 3D config file ties all the domains together. Beyond just 2- or 3-die stacks, expect 7-die stacks. Concurrent design is now possible. Allegro shows the 3D view for 3D design. The interposer example has multiple routing levels in it. Foundries (TSMC, Global Foundries) have new design rules for 3D.
Thermal plot showing 2D and 3D views of gradients. Thermal results fed back to timing.
Q: How does this compare to Gradient DA or Apache DA?
A: No comment on competition.
6. ARM relationship: ARM used a Cadence flow. Out-of-box scripts; Cadence services can be used to increase CPU core speeds.
8. Customer feedback
Q: To summarize, can I still use OpenDoor partners?
A: Yes. We have a framework with OA and CPF standards.
A 3D white paper is available.
A DRC+ paper with Global Foundries is also available.
I spent Tuesday at DesignCon. I expected more EDA participation. All the big guys were there with skeleton crews, along with a few middle-sized players, but they were outnumbered by all sorts of companies that are peripheral to design. How many companies making various forms of connectors are there?
It turned out one of the most interesting things was a lunch meeting with Tyco Electronics. To be honest, I only went because they provided somewhere to sit and a free lunch. They were announcing some new technology around RJ-45 (the plug of the internet, that you plug wired ethernet into). I had no idea that those sockets contained transformer isolation: tiny wires threaded by hand through little ferrite cores. The threading is done by hundreds of Chinese women and it has a number of big disadvantages. First, the variability is really high from one part to the next, so it needs tuning components on the board. Second, it requires a lot of people and so doesn't scale. When Cisco couldn't ship routers early last year it was because lead times for the RJ-45 connectors had gone out to 8 months, since you can't add and train new manpower that fast. What Tyco has done is find a much more scalable and automated way of making the components. They use a multi-layer printed circuit board. The center layer has a hole containing the ferrite. The patterns on the other layers form the coils. They can manufacture huge boards containing dozens or hundreds of these parts, which then slot into the back of the RJ-45 connector. Cheaper, less variability, more scalable. Nothing not to like. The sandwich was not bad either.
DesignCon was well attended. By the time I arrived, an hour before the exhibition opened, the parking lots for the convention center were full and we had to park over the road in the Great America parking lot. Hopefully that is a leading indicator of how strong electronic design is going to be this year.
Not to put a damper on things related to the electronic industry's economic recovery (which I think is happening), but the full-parking-lot syndrome had something to do with a concurrent conference (Strata, a data-mining event). This got me thinking that the economic drivers could be in Web 3.0, clouds, and all the weather-sounding startups, and how we could harness that for our industry.
My takeaways from this year's DesignCon:
ARM just keeps expanding its reach, and it's getting cheaper all the time - I have acquired a bunch of development boards for free in the last year. The EDA companies seem to be tagging along (particularly Mentor), and maybe the lack of an independent Wind River will assist in their expansion into that area. However, EDA companies are not software companies, so it will be interesting to see what happens when the software guys start working into what is more traditionally EDA space (e.g. FPGAs - of which there were plenty of vendors).
Intel gave an interesting talk on the problems of power supply noise in PCBs. It was interesting in that the circuit topology didn't look any different from what I worked with 25 years ago - with a good chance the snubber was there for secondary breakdown prevention in a BJT (now redundant). Given the vast numbers of transistors now available to power engineers, I would have thought buck converter design would have advanced more. The fact that they also used HSPICE for simulation convinced me that the EDA companies are not really doing much to improve design methodology in this area.
I would have loved to go to one of these conferences ... but it seems that there is nothing new going on, and nothing has changed. Can someone post something interesting that happened at this conference that is not "just more hot air"?
On Monday evening I checked into the HSPICE SIG event hosted by Synopsys and reviewed the invited HSPICE partners:
Most of the EDA vendors were offering tools that did Signal Integrity (SI) analysis at the chip, package or system levels. Solido is focused on variation-aware custom IC design methodology.
Over dinner we heard from six HSPICE users. The moderator asked, "Who used HSPICE 30 years ago?"
I raised my hand, because I was at Wang Laboratories in Massachusetts back in 1981 doing custom chip design and we used HSPICE.
Here's what I gleaned from the speakers:
AMD: “Accurate HSPICE Simulation of the Global Clock Distribution in a High-Performance GPU”
Tony Todesco – 3 billion transistors at 40nm CMOS for the full chip.
Top metal layers are 5X thicker and wider than lower levels, ideal for clock nets.
Use inductance (L) in clock nets. Waveforms show the effect of including L in the clock interconnect (RCLK). Clock analysis is run daily for tuning. Calibre is used to extract the clock netlist, along with StarRC Custom, transistor SPF, and HSPICE.
No reduction was used at all for the clock tree, because they wanted the best accuracy. 15 million elements to simulate: 12 million C, 200K L, 200K K, 2 million R. HSPICE run time was 11 hrs 42 mins using 42 GB on 8 cores of a 3.1GHz AMD Opteron. Early access to HPP (HSPICE Precision Parallel) provides speed-ups.
Waveforms across the chip show a tight distribution across the clock mesh. The goal is to minimize skew: under 6 ps of skew variation.
Want more accurate C extraction.
Want statistical model of clock network.
Want correlated variation of transistors.
ARM: “Accelerating Library Verification Using HSPICE Multi-threading Technology”
Tom Mahatdejkul (does PDK evaluations and SPICE modeling).
Silicon Validation – why does silicon not match design?
PDK is the link between foundry and ARM for design.
Test IP is too big to fit into HSPICE, too many RC values.
Not enough RAM to fit a big design. Solution?
ARM – test chip with test structures. 10K gate NAND2 delay chains.
Time to extract and simulate a test case was 7 days on a single CPU. With MT it is now 3 days.
Using MT: 1.7X faster with 4 CPUs (32nm design).
IO design: 2.5X faster with 5 CPUs (saturation after that). 32nm, small netlist.
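These saturation numbers are roughly what Amdahl's law predicts. As an illustration (my own back-of-the-envelope sketch, not from the talk), the measured 1.7X on 4 CPUs implies only about 55% of the run parallelizes, capping the achievable speedup near 2.2X no matter how many CPUs are added:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: overall speedup on n CPUs when a fraction p of the
    work is perfectly parallel and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(s, n):
    """Invert Amdahl's law: parallel fraction implied by a measured
    speedup s on n CPUs."""
    return (1.0 - 1.0 / s) / (1.0 - 1.0 / n)

p = parallel_fraction(1.7, 4)    # ~0.55 for the 32nm library run
ceiling = 1.0 / (1.0 - p)        # ~2.2X, the limit with unlimited CPUs
```

The same arithmetic applied to the IO-design case (2.5X on 5 CPUs) gives p = 0.75 and a ceiling near 4X, which is consistent with the saturation observed after 5 CPUs.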
Multiprocessing - .alters, MC, .data.
New engine, use: -hpp (HSPICE Precision Parallel technology)
HPP – max improvement of 3X with this new switch. Comparison within 3% of non-HPP results, goal is 1%.
Why not other SPICE tools? Special HSPICE switches for convergence.
Why not Fast SPICE? Don’t want to lose accuracy in timing and leakage currents. Customers won’t argue that HSPICE is the golden reference.
Cisco Systems: “Using HSPICE and Custom Explorer in DDR3 System Design” Jianming Li
Memory expansion system. DDR3 design challenges (CPU, package, trace, DIMM socket, DRAMs)
T line modeling. Perl scripts. ACE in Custom Explorer.
Used 2D field solver to create accurate transmission line values.
Use the W element for transmission line modeling.
HSPICE transient correlated very well with test measurement.
W elements must be broken up into multiple segments for traces above 15 inches.
Field solver inside HSPICE.
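That segmentation rule of thumb is easy to capture in a small helper; `w_element_segments` is a hypothetical name of my own, and the 15-inch per-segment limit is the figure quoted in the talk:

```python
import math

def w_element_segments(trace_length_in, max_segment_in=15.0):
    """Number of cascaded W-element segments needed so that no single
    segment exceeds the per-segment length limit (both in inches)."""
    return max(1, math.ceil(trace_length_in / max_segment_in))
```

A 40-inch backplane trace, for example, would be modeled as three cascaded W elements.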
Juniper Networks: “Simultaneous Switching Noise and Crosstalk Simulation for ASIC Packages” Amir Mohammed
Design and layout of ASIC packages for networking use.
Issues: crosstalk, SSO noise.
Are IBIS models sufficient? No, not really.
Need to model loop inductance.
Package models using s-parameters with IBIS models.
IBIS is static, doesn’t take into account power fluctuations.
2nd choice is to use full transistor models instead.
Package models – RLGC with frequency dependence. W elements are not sufficient for package because of VDD/VSS effects.
Need low frequency (VDD/VSS) and high frequency (interconnect, crosstalk).
PowerSI – Sigrity used. High performance and fast enough, 2.5D field solver (FEM).
HSPICE – uses s-parameters plus transistor models, and warns if a model is not passive. 95K MOSFETs simulated.
How much does noise affect the timing? (This cannot be done with IBIS, because it has no timing effects.) IBIS is good for crosstalk, but not for transistor timing.
NVIDIA: "Explorations with HSPICE StatEye for Statistical Link Budgeting”
Brute force HSPICE isn’t sufficient for link BER analysis, must use statistical approach to get results quickly.
Example PCI Express Link – need to analyze jitter transfer.
Simple syntax in HSPICE to do statistical simulation, output is a 2D BER contour eye.
Inputs – transmit jitter, receive jitter (require massive MC simulations to set these numbers).
HSPICE was distributed across compute farms for MC simulations. The old way took 137 hours; the farm finished in 36 hours.
Probability Density Functions (PDF). Double edge (Rise / Fall) – can see subtle differences.
Reference – full transient analysis.
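The core statistical idea, combining independent jitter sources by convolving their PDFs instead of brute-forcing transients, can be sketched in a few lines. This is my own illustration of the underlying math, not NVIDIA's flow or actual HSPICE StatEye syntax, and the sigma values are made-up numbers:

```python
import numpy as np

t = np.linspace(-50e-12, 50e-12, 2001)   # time axis in seconds
dt = t[1] - t[0]

def jitter_pdf(sigma):
    """Gaussian random-jitter PDF, normalized to unit area on the grid."""
    pdf = np.exp(-t**2 / (2.0 * sigma**2))
    return pdf / (pdf.sum() * dt)

tx = jitter_pdf(2e-12)                    # assumed TX jitter, 2 ps RMS
rx = jitter_pdf(3e-12)                    # assumed RX jitter, 3 ps RMS

# Total edge jitter: convolution of the independent TX and RX PDFs.
total = np.convolve(tx, rx, mode="same") * dt

# RMS of the combined PDF: ~sqrt(2^2 + 3^2) = 3.6 ps.
sigma_total = np.sqrt(np.sum(total * t**2) * dt)
```

Integrating the tails of `total` past a sampling offset gives the bathtub curve a BER contour is built from; the point is that no massive transient run is needed once the component PDFs are known.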
Synopsys: "High-Speed Transceiver and Analog IP Design with HSPICE"
Bob Lefferts (Hillsboro) runs the mixed-signal IP design group: they design IP, tape out, measure silicon, and port old IP to new nodes. 450 design engineers.
PHY – uses 3.5 CPU years of simulation time. Compute farms.
Custom digital is not characterized. They use their own standard cell libraries, but run them in HSPICE. Verilog-A is used for slow digital; serial channels run in HSPICE.
They build an HSPICE model to mimic the transistor models; an exact model is not practical to share as IP.
Because it was the 30th anniversary of HSPICE we enjoyed a special cake and libations.
FYI - photos and video courtesy of Synopsys.
Last edited by Daniel Payne; 03-22-2011 at 12:14 PM. Reason: added new link
At DesignCon this past week it was interesting to note that there were close to a dozen IP vendors represented on the exhibit floor. Particularly well represented were the Non-Volatile Memory (NVM) companies, including Kilopass, NSCore and Novocell. It was also notable that Sidense, who is locked in heated patent litigation with Kilopass and appears to have won the latest skirmish in the ongoing battle, was not there.
Based on the number of players and the current and emerging technologies and techniques for protecting and preserving data, there is growing interest in, and a growing need for, this technology. There seem to be six primary considerations in choosing which vendor supplies your technology.
2. Power requirements
3. Performance Metrics
There are three common approaches to storing and protecting data, each with benefits and drawbacks.
1. Poly fuse, which is typically provided by the foundries
2. Floating gate or charge trapping, where programming is done through carrier injection.
3. Anti-fuse, where the data is stored and protected via hard oxide breakdown of a gate, resulting in the required resistive change.
Kilopass makes the important point that while area, power and performance are key issues, understanding and managing failure risk should be the single most important consideration. If a $.05 piece of IP fails after the part has been shipped to the end user, it means a device costing hundreds or thousands of dollars becomes a worthless piece of junk, resulting in expensive merchandise returns and loss of reputation. For this reason they invest heavily in testing and failure analysis.
NSCore says their PermSRAM® requires just a fraction of the silicon area that eFuse requires and that, because they have a lower programming voltage requirement, a charge pump is not needed, further reducing area. Finally, because data communication is only done on silicon, the contents of the stored data are almost impossible to see from the outside, providing strong protection of customer data.
Novocell, based in Pittsburgh, PA, has developed the first multi-write anti-fuse technology. They claim 100% reliability and 30-year data retention. It can be implemented on standard logic CMOS.
These are not the only vendors, and I do think that over time you will see NVM playing a bigger role in more devices in the ongoing challenge of protecting proprietary ideas and content. The increased demand will most certainly result in faster, safer NVMs.
Okay, I stole the title from EETimes' Mark LaPedus; you can read his write-up HERE. It's actually not bad, even if he refuses to acknowledge my existence. I was the moderator of this panel and not one mention!?!?!?! I thought we were friends, Mark? If SemiWiki does put EETimes out of business you can join us, no problem. You will have to write more detailed articles though. ;-)
It was a good panel and here’s why: NO SLIDES! I had the panelists send me a paragraph, introduced them one at a time, asked if there were questions for that person, then opened it up to the entire panel. There were lots of questions, 1+ hour DISCUSSION, not preaching or selling. This was a real technical panel, an exchange of ideas and opinions, not marketing fluff.
Yes Cadence was the bad guy for keeping a closed PDK but John Stabenow defended the Cadence position quite well. Are you as a CAD Manager or AMS Designer willing to trade design quality of results for an OPEN PDK?
I also understand TSMC’s position. There are dozens of PDKs for dozens of process nodes and the development and support of those PDKs costs millions of dollars that we as electronic gadget customers end up paying for. The TSMC iPDK initiative levels the playing field by providing a platform for the process interface and letting the EDA companies develop competitively on top of that. It not only reduces millions of dollars of costs, it speeds PDK delivery, and encourages the EDA vendors to compete on innovation and tool costs versus proprietary formats (SKILL and PCELLS).
The point I made to Cadence is that change is coming whether they like it or not. Synopsys is not only the #1 EDA vendor by a very large margin, it is also the #1 IP company, and it has the largest AMS design group, with 500+ people worldwide.
I worked for Virage; they used Cadence tools. The story I heard was that after purchasing Virage, Synopsys gave the Virage engineers 30 days to switch from Cadence to Synopsys tools without schedule slips. The Virage guys also told me that the Synopsys AMS design tools are very good and the support is excellent. The product feedback loop in a company using its own tools is incredible.
Bottom line: Synopsys is very focused on AMS design and is a force to be reckoned with. For whatever reason, 20nm will see some of the top semiconductor companies switching from a Cadence closed PDK to a Synopsys iPDK, believe it.