Synopsys Opens up on Emulation
by Bernard Murphy on 07-31-2017 at 7:00 am

Synopsys hosted a lunch panel on Tuesday of DAC this year, in which verification leaders from Intel, Qualcomm, Wave Computing, NXP and AMD talked about how they are using Synopsys verification technologies. Panelists covered multiple domains, but the big takeaway for me was their full-throated endorsement of the ZeBu emulation solution. Synopsys has historically been a little shy about sharing their accomplishments in this domain. They weren't shy at this event.


Michael Sanie, VP of marketing for the Verification Group, gave a quick overview of recent accomplishments in verification, first noting that Synopsys has the fastest-growing emulation business in the industry. Of course it's easier to grow fastest when you start small, but the growth rate is notable. And for those of us who thought the emulation business for Synopsys was mostly just Intel, the panelists who followed made it apparent that its growth is broad as well as deep.

Chris Tice, Synopsys VP for verification continuum solutions (previously VP/GM for hardware verification at Cadence) followed, talking about the petacycles challenge of bridging between growing software cycles and hardware, while also dealing with accelerating expectations in integrity, reliability and safety, and of course power for mobile applications. He stressed that fast emulation will be central to satisfying these needs, and that the fast refresh cycle we're seeing in FPGAs is key to delivering those solutions.

In an otherwise emulation-centric discussion, Iredomala Olopade from Intel talked about making static verification more effective for huge designs. He offered what ought to be a widely shared piece of wisdom: to get a truly high signal-to-noise ratio in static analysis you need both a design-centric view and tool expertise. He cited as an example a CDC analysis in which 75k paths were reported, of which only 100 were real (sound familiar?). Applying these principles and working with Synopsys, Intel reduced false violations to only 1% of total violations. Pushing even harder, they got down to even fewer false violations and, interestingly, found 30 new and real violations.
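To put rough numbers on that signal-to-noise improvement, here is a quick back-of-the-envelope calculation (a Python sketch using the figures quoted above; the variable names and staging are my own illustration, not a description of Intel's actual flow):

# Rough signal-to-noise arithmetic for the CDC example above.
# Illustrative only; not Intel's methodology.
reported_paths = 75_000   # violations flagged by the raw CDC run
real_paths = 100          # violations that turned out to be genuine

noise = reported_paths - real_paths
print(f"Initial signal-to-noise: {real_paths / reported_paths:.2%} real "
      f"({noise:,} false positives to wade through)")

# After applying a design-centric view plus tool expertise,
# false violations were reported to be only ~1% of the total.
false_fraction_after = 0.01
total_after = real_paths / (1 - false_fraction_after)
print(f"After cleanup: ~{total_after:.0f} reported paths, "
      f"only ~{total_after - real_paths:.0f} of them false")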

Sanjay Gupta, an engineering director at Qualcomm leading the verification team, brought the discussion back to emulation. Qualcomm of course has huge designs and is dominant in mobile, so they must deal with increasing functionality, very complex low-power strategies and cycle times of 6 months to a year for each SoC. In an earlier design, one missed power error led to a DOA chip, making them paranoid about power verification, to the point that they will now only do power signoff in gate-level simulation (because only then do you model the real PG connectivity). He acknowledged that this is expensive, but working with Synopsys they have been able to deploy an end-to-end solution for power-aware regressions using ZeBu, with improved runtime and performance.

Jon McCallam, global emulation architect lead at NXP, followed with their use of ZeBu in verification for automotive applications. He stressed the importance of having emulation mirror simulation as closely as possible, especially when dealing with limited model accuracy for some IP (analog and other non-synthesizable IP). He also stressed the importance of quick turnaround in classifying problems as hardware versus software and fixing bugs quickly, especially in ECO turns for mask sets. I noticed that NXP depends heavily on the unified flow, from simulation to ZeBu to HAPS to Verdi, both for verification and for driver development. In fact, they were able to prove out and develop firmware, drivers, the Linux clock and power architecture and more before first silicon, and to bring up Linux within 8 days of receiving first silicon.

Alex Starr, a senior fellow at AMD, echoed this “get your software ready pre-silicon” theme. He showed a progression of slides in which they use their VirtualBox virtual platform to model a complete system, including the SoC and the software from the OS up to applications. The SoC is modeled through the SimNow platform, starting with a virtual platform model, which supports getting the main software concepts sorted out. Within SimNow they can then switch the whole design to ZeBu. Or they can switch to modeling just the graphics IP running in ZeBu, with the rest of the SoC running on the virtual platform and communicating with the hardware model through transactors. For really detailed hardware diagnosis, they can drop pieces of the model down to VCS-FGP. And when they get silicon, they plug that in, also under SimNow. Throughout all of this, changes in the SoC model are transparent to software developers and validators, and the software stack remains the same from the SoC point of view. AMD has been developing these platforms for a while, precisely to push this aggressive shift-left (plus the ability to drill down for debug), to get to the point where software comes up at the same time as hardware and the two converge when silicon is ready.
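The key design idea in that flow, keeping the software stack unchanged while the model underneath it swaps between abstraction levels, can be sketched as a simple interface. The following is a hypothetical Python illustration of that pattern only; the class and method names are mine and do not correspond to the actual SimNow or ZeBu APIs.

# Hypothetical sketch of the "swappable SoC model" pattern described above.
# None of these names reflect real SimNow/ZeBu interfaces.
from abc import ABC, abstractmethod

class SocModel(ABC):
    """What the software stack sees; identical regardless of the backend."""
    @abstractmethod
    def read(self, addr: int) -> int: ...
    @abstractmethod
    def write(self, addr: int, value: int) -> None: ...

class VirtualPlatformModel(SocModel):
    """Fast functional model, used to get the main software concepts sorted out."""
    def __init__(self):
        self.mem = {}
    def read(self, addr): return self.mem.get(addr, 0)
    def write(self, addr, value): self.mem[addr] = value

class EmulatedBlockModel(SocModel):
    """Detailed RTL (e.g. the graphics IP) running on the emulator, reached
    through transactors, while the rest of the SoC stays virtual."""
    def __init__(self, transactor):
        self.transactor = transactor      # bridge into the emulator
    def read(self, addr): return self.transactor.read(addr)
    def write(self, addr, value): self.transactor.write(addr, value)

def bring_up_firmware(soc: SocModel) -> int:
    """Software is written against SocModel only, so the same stack runs
    unchanged whether the backend is virtual, emulated, or silicon."""
    soc.write(0x1000, 0x1)    # e.g. release a reset register
    return soc.read(0x1004)   # e.g. poll a status register

Swapping a VirtualPlatformModel for an EmulatedBlockModel leaves bring_up_firmware untouched, which is the transparency to software developers that the panel was describing.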

For the stats geeks, Alex wrapped up by saying that they can now get to under a one-day compile (on ZeBu) for 1.6B gates, and with tuning they find they are approaching the lower end of FPGA prototyping performance, an important endorsement validating that the Verification Continuum can span needs from simulation accuracy to prototype speed.

We’ve had two majors dominating news in this space for a while. Another viewpoint can only make the debate more interesting 😎. You can watch the panel discussion HERE.
