Margin Call
by Bernard Murphy on 06-04-2017 at 7:00 am

A year ago, I wrote about Ansys’ introduction of Big Data methods into the world of power integrity analysis. The motivation behind this advance was introduced in another blog, which questioned how far margin-based approaches to complex multi-dimensional analyses can go. An accurate analysis of power integrity in a complex chip should look at multiple dimensions: a realistic range of use-case simulations, timing, implementation, temperature, noise and many other factors. But that would demand an impossibly complex simulation; instead we pick a primary topic of concern, say IR drop in power rails, simulate a narrow window of activity and represent all the other factors by repeating the analysis at a necessarily limited set of margin corners.
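To make the combinatorics concrete, here is a toy sketch in Python; the factor names and counts are made up, purely to show why exhaustive multi-dimensional coverage is impractical and why margin-based flows fall back on a handful of corners.

```python
# Toy illustration (factor names and counts are invented): why exhaustive
# multi-dimensional coverage explodes, and why margin-based flows fall back
# on a handful of corners instead.
from itertools import product
from math import prod

factors = {
    "use_case":       20,  # distinct activity scenarios
    "timing":          3,  # slow / typical / fast
    "temperature":     3,
    "voltage":         3,
    "implementation":  5,  # metal options, cell variants, ...
}

print("exhaustive combinations:", prod(factors.values()))   # 2700 even for this toy list

# A margin-based flow instead simulates only the primary concern (say, rail
# IR drop) and covers the secondary factors at a few min/max corners:
corners = list(product(["min", "max"], repeat=len(factors)))
print("simple min/max corners :", len(corners))              # 2**5 = 32
```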


That approach ignores potential correlation between these factors. It worked well in simpler designs built in simpler technologies, but it is seriously flawed for multi-billion-gate designs targeted at advanced technologies. Ignoring correlations requires you to design to worst-case margins, increasing area and cost, blocking routing paths and delaying timing closure, while still leaving you exposed: without impossibly over-safe margins you’re still gambling that worse cases don’t lurk in hidden correlations between the corners you analyzed.
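As a purely illustrative example (the model and numbers are invented, not Ansys data), the sketch below contrasts a corner-stacked worst case, where each factor is pushed to its independent extreme, with a sampled joint analysis in which activity and temperature are correlated. Stacking the extremes lands well above anything the correlated distribution actually produces, which is exactly the over-design that margin stacking forces on you.

```python
# Toy illustration (not Ansys data): corner-stacked worst case vs. a sampled
# joint analysis when the underlying factors are correlated.
import random

random.seed(0)

def ir_drop(activity, temperature):
    """Hypothetical IR-drop model in mV: drop grows with switching activity
    and with temperature-driven resistance increase."""
    return 40.0 * activity * (1.0 + 0.002 * (temperature - 25.0))

# Corner-stacked estimate: every factor at its independent worst case.
stacked_worst = ir_drop(activity=1.0, temperature=125.0)

# Joint sampling: high activity heats the die, so realistic operating points
# cluster along a ridge rather than filling the whole corner box.
samples = []
for _ in range(100_000):
    activity = random.betavariate(2, 5)                         # typical activity well below 100%
    temperature = 25.0 + 80.0 * activity + random.gauss(0, 5)   # correlated with activity
    samples.append(ir_drop(activity, temperature))

print(f"corner-stacked worst case : {stacked_worst:.1f} mV")
print(f"sampled 99.99th percentile: {sorted(samples)[int(0.9999 * len(samples))]:.1f} mV")
```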


Ansys’ big data technology (called SeaScape) aims to resolve this problem by getting closer to a true multi-dimensional analysis, tapping existing distributed reserves of simulation, timing, power, physical and integrity data through distributed processing. This breadth of analysis should provide a more realistic view across multiple domains, delivering both efficiency and safety: you don’t overdesign for “unknown unknowns” and you don’t under-design, because you see a much broader range of data. Ansys has had a year since my first blog on the topic, so it seems reasonable to make the call – did they pull it off?
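SeaScape itself is Ansys’ proprietary infrastructure, so the snippet below is only a generic sketch of the underlying pattern as I understand it: fan per-instance checks out across many workers (map), then collect the violators (reduce). The check_instance() model and the data layout are hypothetical, not the product’s API.

```python
# Generic sketch of the distributed "map then reduce" pattern described above --
# not SeaScape's actual API.  check_instance() and the instance records are
# hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor

def check_instance(instance):
    """Hypothetical per-instance check: returns (name, worst IR drop in mV)."""
    drop = instance["rail_resistance"] * instance["peak_current"] * 1000.0
    return instance["name"], drop

def worst_drops(instances, limit_mv=30.0, workers=8):
    # Map: distribute per-instance analysis across worker processes.
    # Reduce: keep only the instances that violate the drop limit.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(check_instance, instances, chunksize=1000)
        return [(name, drop) for name, drop in results if drop > limit_mv]
```

The point of the pattern is elasticity: adding machines adds throughput, which matches the customer comments on scaling quoted below.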

It’s always difficult to get direct customer quotes on EDA technology, so I must talk here in general terms, but I believe there will be some joint presentation activity at DAC, so look out for that. The technology first appears in RedHawk-SC and has been proven in production with at least two of the top 10 design companies that I know of, building the biggest and most advanced designs around today. I was told that 16 of those designs are already in silicon and around twice that many have taped out.

Off-the-record customer views on the value-add are pretty clear. The most immediately obvious advantage is in run-times. Since much of the processing is distributed, they can get results on a block within an hour and a (huge) full-chip overnight. It becomes practical to re-run integrity analysis on every P&R update. They can run four corners simultaneously for static IR, EM and dynamic voltage drop (DVD) transients. They can profile huge RTL FSDBs and parallel-solve for multiple modes to find the vectors with the best activities for EM and IR stressing. And that provides the confidence to be more aggressive in reducing over-design, which in turn accelerates closure (fewer blockages). This customer also commented on the elasticity of this approach to analysis. Previously, running faster was capped by the capabilities of the biggest systems they could use for analysis. Now, since analysis is distributed, they found it much easier to scale up by simply adding access to more systems.
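On the vector-selection point, here is a minimal, hypothetical sketch of the idea: given per-cycle toggle counts already extracted from the waveform database (FSDB parsing is omitted entirely), rank candidate vectors by their peak windowed switching activity so the most stressful stimulus is used for EM and IR analysis. None of this is the RedHawk-SC implementation.

```python
# Illustrative sketch only: rank candidate stimulus vectors by peak switching
# activity so the most stressful windows drive EM/IR analysis.  Assumes
# per-cycle toggle counts have already been extracted into
# {vector_name: [toggles_per_cycle, ...]}; FSDB reading is out of scope.

def peak_window_activity(toggles, window=10):
    """Highest average toggle count over any sliding window of `window` cycles."""
    best = 0.0
    for start in range(0, max(1, len(toggles) - window + 1)):
        best = max(best, sum(toggles[start:start + window]) / window)
    return best

def rank_vectors(profiles, window=10, top_n=5):
    """Return the names of the top_n vectors with the highest peak activity."""
    scored = [(peak_window_activity(t, window), name) for name, t in profiles.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_n]]
```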

Faster is always good, but what about the impact on the final design? One very compelling customer example looked at die-size reduction. In that case they removed M2 over the standard cell rows, then added it back only where this more refined analysis showed it was needed to meet power integrity margins. They found that they could reduce the overall die size by ~5% by freeing up more resources for signal routing, which also reduced P&R block size by 10%. That’s an easily understood and significant advantage, enabled by big data analytics.

All this is great for teams building multi-billion-gate chips at 16nm or 7nm, but I was interested to hear that both customers also saw significant value in analyzing and optimizing blocks of between 1M and 8M gates in around 50 minutes, which they found helped them close faster and more completely on physical units than was possible before. So the technology should also have value for less challenging designs.

Given this, my call is that Ansys delivered on the promise. But don’t take my word for it. Check out what they will be presenting at DAC. You can learn more about SeaScape HERE.
