Cadence Explores Smarter Verification
by Bernard Murphy on 07-10-2017 at 7:00 am

Verification as an effectively unbounded problem will always stir debate on ways to improve. A natural response is to put heavy emphasis on making existing methods faster and more seamless. That's certainly part of continuous improvement, but sometimes we also need to step back and ask the bigger questions – what is sufficient and what is the best way to get there? Cadence hosted a panel at DAC this year on that topic, moderated by Ann Mutschler of SemiEngineering. Panelists were Christopher Lawless (director of customer-facing pre-silicon strategies at Intel), Jim Hogan (Vista Ventures), Mike Stellfox (Cadence Fellow) and David Lacey (verification scientist at HP Enterprise). I have used Ann's questions as section heads below.

What are the keys to optimizing verification/validation?
Chris said that the big challenge is verifying the system. We're doing a good job at the unit level, both for software and hardware components, but complexity at the (integrated) system level is growing exponentially. The potential scope of validation at this level is unbounded, driving practical approaches towards customer use-cases. David highlighted the challenges in choosing the right verification approach at any point and in balancing these methods (e.g. prototyping versus simulation). He also raised a common concern – where is his team double-investing, and are there ways to reduce that redundancy?

Jim saw an opportunity to leverage machine learning (ML), citing an example from a materials science company he advises. He sees potential to mine existing verification data to discover and exploit opportunities that may be beyond our patience and schedules to find. Mike echoed and expanded on this, saying that we need to look at data to get smarter, and that we need to exploit big data analytics and visualization techniques along with ML.
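
Neither Jim nor Mike pointed to a specific flow, so treat the following as a minimal sketch of the kind of mining they describe: using regression history to decide which tests are most likely to pay off for a given change. It is a simple heuristic rather than a trained model, and all of the names (rank_tests, regression_history, the test and file names) are hypothetical.

from collections import defaultdict

def rank_tests(regression_history, changed_files, top_n=10):
    """Rank regression tests by how often they have exposed failures
    when the given files were modified (hypothetical data model).

    regression_history: iterable of (test_name, failed, files_in_change)
    changed_files: set of file paths touched by the current change
    """
    score = defaultdict(float)
    runs = defaultdict(int)
    for test, failed, files in regression_history:
        runs[test] += 1
        overlap = len(changed_files & set(files))
        if failed and overlap:
            score[test] += overlap          # reward tests that caught bugs near this code
    # Normalize by run count so long-lived tests don't dominate; only tests
    # that ever caught a failure near this code are ranked.
    ranked = sorted(score, key=lambda t: score[t] / runs[t], reverse=True)
    return ranked[:top_n]

# Hypothetical usage
history = [
    ("uart_stress",   True,  ["rtl/uart_tx.v", "rtl/uart_rx.v"]),
    ("dma_smoke",     False, ["rtl/dma_ctrl.v"]),
    ("uart_loopback", True,  ["rtl/uart_rx.v"]),
]
print(rank_tests(history, {"rtl/uart_rx.v"}))

A real deployment would fold in far more signal (coverage, code churn, failure signatures), but even this illustrates the point: the data already sitting in regression databases is worth mining.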

Of the verification we are doing today, what’s smart and what’s not?
David said a big focus in his team is getting maximum value out of the licenses they have. Coverage ranking (of test suites) is one approach; cross-coverage ranking may be another. Mike said that formal is taking off, with interest in using these methods much earlier for IP, and that creates a need to better understand the coverage contribution of formal and how it can be combined with simulation-based verification. He added that at the system level, portable stimulus (PS) is taking off, more automation is appearing and it is becoming more common to leverage PS across multiple platforms. Chris was concerned about effectiveness across the verification continuum and wants to move more testing earlier in that continuum. He still sees a need for hardware platforms (emulation and prototyping) but wants them applied more effectively.
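
Coverage ranking itself is straightforward to picture: order tests by the incremental coverage each one adds, so redundant tests fall to the bottom of the list. Below is a minimal greedy sketch, assuming each test's hit coverage bins have already been extracted; the test and bin names are hypothetical.

def rank_by_incremental_coverage(test_coverage):
    """Greedy ranking: repeatedly pick the test that adds the most
    not-yet-covered bins. test_coverage maps test name -> set of bins hit."""
    remaining = dict(test_coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining[best] - covered
        if not gain:                 # every remaining test is fully redundant
            break
        order.append((best, len(gain)))
        covered |= gain
        del remaining[best]
    return order, covered

# Hypothetical usage with three tests and their functional-coverage bins
suite = {
    "boot_basic": {"rst", "boot", "irq0"},
    "irq_storm":  {"irq0", "irq1", "irq2"},
    "boot_redo":  {"rst", "boot"},          # adds nothing once boot_basic runs
}
print(rank_by_incremental_coverage(suite))

Tests that add no new bins ("boot_redo" above) are exactly the redundancy David is trying to squeeze out of his license usage.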

What metrics are useful today?
Chris emphasized that synthetic testing will only take you so far in finding the bugs that may emerge in real OS/application testing. Real workloads are the important benchmark; the challenge is knowing how to use them effectively. He would like to see ML methods extract sufficient coverage metrics from customer use-cases ahead of time and propagate appropriate derived metrics across all design constituencies, so that each group understands the impact on its objectives.

Mike felt that, while it may seem like blasphemy, over-verifying has become a real concern. Metrics should guide testing to be sufficient for target applications. David wants to free up staff earlier so they can move on to other projects. In his view, conventional coverage models are becoming unmanageable. We need analytics to optimize coverage models to address high-value needs.
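
None of the panelists spelled out what optimizing a coverage model looks like in practice; one plausible reading is to weight coverage bins by how often real workloads actually exercise them and flag the rest for review. A hedged sketch with hypothetical bin names and hit counts:

def prune_coverage_model(bins, workload_hits, min_weight=0.01):
    """Split coverage bins into high-value bins (exercised often in real
    workloads) and candidates for waiving or deprioritizing.

    bins: iterable of bin names in the coverage model
    workload_hits: dict bin name -> hit count seen in customer use-case traces
    """
    total = sum(workload_hits.values()) or 1
    keep, waive = [], []
    for b in bins:
        weight = workload_hits.get(b, 0) / total
        (keep if weight >= min_weight else waive).append((b, round(weight, 4)))
    return keep, waive

# Hypothetical usage
model_bins = ["burst_len_1", "burst_len_4", "burst_len_16", "burst_len_256"]
hits = {"burst_len_1": 9000, "burst_len_4": 950, "burst_len_16": 50}
keep, waive = prune_coverage_model(model_bins, hits)
print("high-value:", keep)
print("review/waive:", waive)

Bins that fall below the threshold are only candidates for waiving; rare but critical cases still need a human decision, which connects to the question of field escapes at the end of this post.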

Is ML driving need for new engineers?
Jim, always with the long view, said that the biggest engineering department in schools now is CS, and that in 10 years it will be cognitive science. He believes we are on the cusp of cognitive methods which will touch most domains. Chris wouldn't commit to specific approaches but said Intel is exploring ways to better understand and predict. He said that they have many projects stacked up and need to become more efficient, for example in dealing with diverse IoT devices.

David commented that they are not hiring in ML, but they are asking engineers to find new ways to optimize – more immediately, to grow experience in using multiple engines and to develop confidence that if something is proven on one platform, the effort doesn't need to be repeated on other platforms. Following this theme, Chris said that there continues to be a (team) challenge in sharing data across silos in the continuum. And Mike added that, as valuable as UVM is for simulation-centric work, verification teams need to start thinking about building content for multiple platforms; portable stimulus is designed to help break down those silos.

Where are biggest time sinks?
David said, unsurprisingly, that debug is still the biggest time sink, though the tools continue to improve and make this easier. But simply organizing all the data – deciding how much to keep, what to aggregate, how to analyze it and how to manage coverage – continues to be a massive problem. This takes time: you must first prioritize, then drill down.

Chris agreed, also noting the challenge of triaging to figure out where issues lie between silos, and the time taken to rewrite tests for different platforms (presumably PS can help with this). Jim wrapped up noting that ML should provide opportunities to help with these problems. ML is already being used in medical applications to find unstructured shapes in MRI data – why shouldn't similar approaches be equally useful in verification?
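
The MRI comparison is only an analogy, but a small example of ML-flavored triage would be grouping failing tests whose log signatures look alike, so a cluster is debugged once rather than failure by failure. A sketch using simple token-set (Jaccard) similarity; the failure signatures are hypothetical.

def cluster_failures(signatures, threshold=0.5):
    """Greedy clustering of failure signatures by Jaccard similarity of
    their tokens; each cluster is likely one underlying bug.
    signatures: dict test name -> one-line failure message."""
    def jaccard(a, b):
        a, b = set(a.split()), set(b.split())
        return len(a & b) / len(a | b)

    clusters = []          # list of (representative signature, [test names])
    for test, sig in signatures.items():
        for rep, members in clusters:
            if jaccard(sig, rep) >= threshold:
                members.append(test)
                break
        else:
            clusters.append((sig, [test]))
    return clusters

# Hypothetical usage
fails = {
    "t_axi_wr_1": "ERROR axi bresp timeout at 1200ns master 0",
    "t_axi_wr_7": "ERROR axi bresp timeout at 3400ns master 2",
    "t_pcie_cfg": "FATAL pcie config read returned UR",
}
for rep, members in cluster_failures(fails):
    print(members, "->", rep)

Richer models would use structured log features rather than raw tokens, but even a crude grouping like this shortens the prioritize-then-drill-down loop David describes.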

My take – progress in areas we already understand, major moves towards realistic use-case-based testing, and clear need for big-data analytics, visualization and ML to conquer the sheer volume of data and ensure that what we can do is best directed to high-value testing. I’ll add one provocative question – since limiting testing to realistic customer use-cases ensures that coverage is incomplete, how then do you handle (hopefully rare) instances where usage in the field strays outside tested bounds? Alternatively, is it possible to quantify the likelihood of escapes in some useful manner? Perhaps more room for improvement.
