Innovation in a Commodity Market
by Bernard Murphy on 05-29-2018 at 7:00 am

Logic simulation is a victim of its own success. It has been around for at least 40 years, has evolved through multiple language standards and has seen significant advances in performance and major innovations in testbench standards. All that standardization and performance improvement has been great for customers but can present more of a challenge for suppliers. How do you continue to differentiate when seemingly everything is locked down by those standards? Some may be excited by the potential for freeware alternatives; however, serious product companies continue to depend on a track record of reliability and support, while also expecting continuing improvements. For them and for their suppliers, where do opportunities for evolution remain?

Performance will always be hot. Progress has been made on a number of fronts, from parallelism in the main engine (e.g. Xcelium) to co-modeling with virtual prototyping on one side (for CPU+SW) and emulation on the other (for simulation acceleration). However, I was struck by a couple of points Cadence raised in an SoC verification tutorial at DVCon 2018, which I would summarize as: raw simulator performance only delivers if you use it effectively. Some of this comes down to algorithms, especially in testbenches. It’s easy to write correct but inefficient code; we’ve all done it. Being intelligent about limiting complex calculations, using faster algorithms and choosing better data structures are all performance optimizations under our control. Coding for multi-core is another area where we really shouldn’t assume the tools will rescue us from ourselves. (You can check out the tutorial when the DVCon proceedings are posted.)
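To make the “correct but inefficient” point concrete, here is a minimal sketch (the class and member names are invented for illustration, not taken from the Cadence tutorial): the same scoreboard lookup written as a linear scan of a queue and as an associative-array lookup. Both are functionally identical; only the second keeps the testbench out of the simulation profile once transaction counts get large.

```systemverilog
// Hypothetical scoreboard sketch: same check, two data structures.
class txn;
  int id;
  byte data[];
  function bit compare(txn other);
    return (other != null) && (other.id == id) && (other.data == data);
  endfunction
endclass

class scoreboard;
  txn expected_q[$];        // slow path: queue scanned on every compare
  txn expected_by_id[int];  // fast path: associative array keyed by id

  function void add_expected(txn t);
    expected_q.push_back(t);
    expected_by_id[t.id] = t;
  endfunction

  // O(n) per call: fine in a short test, painful at millions of transactions.
  function bit check_slow(txn t);
    foreach (expected_q[i])
      if (expected_q[i].id == t.id)
        return expected_q[i].compare(t);
    return 0;
  endfunction

  // O(1) amortized per call: same result, far less simulator time.
  function bit check_fast(txn t);
    if (expected_by_id.exists(t.id))
      return expected_by_id[t.id].compare(t);
    return 0;
  endfunction
endclass
```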

We can also optimize what we have to repeat on each run. I’ve written before about incremental elaboration – rebuilding the simulation run-time image as quickly as possible after design changes. Incremental compile is easy, but elaboration (where modules and connections are instantiated) has always been the bottleneck. Incremental elaboration allows large chunks of the elaborated image to remain untouched while rebuilding just those parts that must change. Save/Restart is another widely used feature to minimize rework, since getting through setup can often take 80% of the run-time. However, this capability has historically been limited to understanding only the simulation model state. Now that test environments read and write files and work with external code (C/C++/SystemC), that narrow view restricts checkpointing to “clean” states, which can be very limiting. The obvious refinement is to save the total state of the run, including file read and write pointers and the state of that external code. Which you now can.
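As a minimal sketch of why external state complicates checkpointing (the file name and format below are hypothetical), consider a stimulus reader whose complete state includes a file read position as well as the design signals. A checkpoint that captures only the simulation model would resume at the right simulation time but re-read the file from the start; a full save/restart also has to capture the file pointer, and by extension the state of any attached C/C++/SystemC code.

```systemverilog
// Hypothetical stimulus reader: its state includes a file read offset
// that lives outside the elaborated design hierarchy.
module stimulus_reader(input logic clk, output logic [31:0] data);
  int fd;
  int status;

  initial begin
    fd = $fopen("stimulus.txt", "r");   // hypothetical stimulus file
    if (fd == 0) $fatal(1, "could not open stimulus.txt");
  end

  always @(posedge clk) begin
    // The read offset advances every cycle; a checkpoint that ignores it
    // would restart the stimulus from the beginning of the file.
    status = $fscanf(fd, "%h\n", data);
    if (status != 1) $finish;
  end
endmodule
```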

An obvious area for continued innovation is AMS support, and one especially interesting domain here is power modeling in mixed-signal environments. This gets a little more complicated than in a purely digital UPF view, since now you have to map between voltage values on the analog side and power states in the UPF, among other things. The basics are covered in the standards (UPF and Verilog-AMS) but there’s plenty of room to shine in implementation. After all, (a) there aren’t many industry-hardened mixed-signal simulators out there and (b) imagine how much power you could waste in mixed-signal circuitry if you don’t get it right. Cadence has notes on a few updates in this domain here, here and here.
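As a purely conceptual sketch of the mapping problem (the thresholds and state names are invented, and in a real flow the simulator derives this from the UPF supply and state definitions rather than from hand-coded RTL), the analog side presents a continuous supply voltage while the power-aware side reasons about discrete power states:

```systemverilog
// Invented example: translating a continuous supply voltage into the
// kind of discrete power states a UPF description talks about.
typedef enum { PWR_OFF, PWR_RETENTION, PWR_ON } power_state_e;

module supply_monitor(input real vdd_core, output power_state_e state);
  always_comb begin
    if      (vdd_core < 0.30) state = PWR_OFF;        // below retention voltage
    else if (vdd_core < 0.85) state = PWR_RETENTION;  // retention-only range
    else                      state = PWR_ON;         // full functional voltage
  end
endmodule
```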

X-propagation is another area related to power. Perhaps you thought this was all wrapped up in formal checks? Formal is indeed helpful for X-prop, but it can only go so far; deep-sequence checks are much more challenging and potentially unreachable in many cases. These issues are particularly acute around functions in (switched) power domains. Missing isolation on the outputs of such a function should be caught in static checks, but checking that isolation remains enabled until the block is fully powered up and ready to communicate ultimately requires dynamic verification.
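The dynamic check described above can be captured as a temporal assertion. The following is a hedged sketch with hypothetical signal names, not a Cadence-specific construct: static checks can confirm that isolation cells exist, while the sequencing requirement is a property over time.

```systemverilog
// Hypothetical isolation-sequencing check for a switched power domain.
module iso_sequencing_check(
  input logic clk,
  input logic pwr_req,    // request to power the domain up
  input logic pwr_good,   // domain supply stable and ready
  input logic iso_en      // isolation enable on the domain's outputs
);
  // From the moment power-up is requested, isolation must stay enabled
  // until (and including the cycle when) pwr_good is first seen.
  property iso_held_through_powerup;
    @(posedge clk) $rose(pwr_req) |-> iso_en throughout pwr_good[->1];
  endproperty

  assert property (iso_held_through_powerup)
    else $error("isolation released before the domain was powered up");
endmodule
```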

However, there’s room to be clever in how this is done. Simulation can be pessimistic (always propagate X when possible) or somewhat more optimistic, propagating only the cases that seem probable. Maybe this seems unnecessary; why not just code X’s into the RTL for unexpected cases? It seems the LRM semantics can be overly optimistic (at least in some cases), whereas handling X-prop in the simulator (with no need to change the RTL) gives you more control over optimism versus pessimism. You can learn more about how Cadence handles X-prop in simulation here.
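Here is a small illustration of that optimism (the module and signal names are invented): with plain RTL semantics an X on the select is treated as false, so a clean value propagates and the unknown is masked, while hand-coding the X case recovers pessimism at the cost of cluttering the RTL – which is the argument for letting the simulator’s X-prop mode control optimism versus pessimism instead.

```systemverilog
// Invented demo of X-optimism in standard RTL semantics.
module x_optimism_demo;
  logic sel;
  logic [7:0] a = 8'hAA, b = 8'h55;
  logic [7:0] y, y_guarded;

  always_comb begin
    if (sel) y = a;   // sel === 1'bx falls through to the else branch
    else     y = b;   // so y gets a clean value and the X disappears
  end

  always_comb begin
    if (sel === 1'bx) y_guarded = 'x;       // explicit pessimism in the RTL
    else              y_guarded = sel ? a : b;
  end

  initial begin
    sel = 1'bx;
    #1 $display("y = %h (X masked), y_guarded = %h (X visible)", y, y_guarded);
  end
endmodule
```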

So yes, the innovation beat goes on, even in simulation, a true veteran of EDA. Which is just as well since it still dominates functional verification and is likely to do so for a long time yet 😎
