
An Advanced-User View of Applied Formal
by Bernard Murphy on 03-08-2018 at 7:00 am

Thanks to my growing involvement in formal (at least in writing about it), I was happy to accept an invite to this year’s Oski DVCon dinner / Formal Leadership Summit. In addition to Oski folks and Brian Bailey (an esteemed colleague at another blog site, to steal a Frank Schirrmeister line), a lively group of formal users attended from companies as diverse as Cisco, NVIDIA, Intel, AMD, Teradyne and Fungible (apologies to any I missed). I find what customers are really doing is an enlightening counterbalance to product company viewpoints, so kudos to Oski for inviting media representatives to the meeting.


Register here for Oski’s upcoming Decoding Formal session Tuesday March 20th 11:30am-5pm Pacific

Kamal Sekhon of Oski kicked off with a joint project they drove with Fungible. Fungible is an interesting company. Barely a year old and headed by a founder of Juniper Networks, it is a startup focused on compute/storage/networking functions for datacenters (details are hazy). They are building their own silicon, which is where formal comes in.

The presentation nicely prompted debate on a number of topics, one being formal-only signoff. Some of what was discussed here inevitably circled around what signoff means. Even for dynamic verification, signoff at RTL can be a qualified signoff these days, since many product teams now require gate-level timing signoff for multiple reasons (to cover power-related transformations at implementation, for example). I think everyone around the table also agreed that they still include formally-proven assertions in their RTL simulation signoffs – just in case.

This naturally led into a discussion on coverage. Within the formal domain, I was interested to hear that not many people yet know about formal core coverage. This is where you look at what part of the cone-of-influence (COI) was actually needed in proving an assertion – generally a smaller space than the full COI. The point being that if you think your assertions are ideal for (formal) signoff but the formal core coverage is low, you probably need to expand the set of assertions you are testing. Mutation was also mentioned and generated interest as a way to prompt further assertion tuning (or constraint de-tuning) to solidify coverage. (Mutation is a technique in which the tool introduces bugs in the RTL to test whether they are detected in proving. If such a bug escapes detection, that's further evidence that your assertions and constraints need work.)
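
To make the mutation idea concrete, here is a minimal sketch of that loop in Python. The mutation patterns and the proofs_still_pass hook are purely illustrative stand-ins (no specific tool flow was named at the dinner); the only point is the logic: inject a bug, re-run the proofs, and count the mutants that escape.

```python
# Minimal sketch of a mutation-coverage loop for assertion quality checking.
# Any mutant whose injected bug is NOT caught by a failing proof ("escapes")
# is evidence that the assertion/constraint set needs work.
import random

# A few simple syntactic mutations (illustrative only)
MUTATIONS = [
    ("&&", "||"),            # corrupt a boolean operator
    ("==", "!="),            # invert an equality check
    ("posedge", "negedge"),  # flip a clock edge
]

def mutate_rtl(src_path: str, dst_path: str) -> bool:
    """Copy the RTL with one injected mutation; return True if one applied."""
    text = open(src_path).read()
    candidates = [m for m in MUTATIONS if m[0] in text]
    if not candidates:
        return False
    old, new = random.choice(candidates)
    open(dst_path, "w").write(text.replace(old, new, 1))
    return True

def proofs_still_pass(rtl_path: str) -> bool:
    """Hypothetical hook: run the formal tool on the mutated design and
    return True if every assertion still proves (i.e. the bug escaped)."""
    raise NotImplementedError("wire this to your formal tool's batch mode")

def count_escapes(original_rtl: str, trials: int = 20) -> int:
    escapes = 0
    for i in range(trials):
        mutant = f"mutant_{i}.v"
        if mutate_rtl(original_rtl, mutant) and proofs_still_pass(mutant):
            escapes += 1  # assertions failed to detect this injected bug
    return escapes
```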

There was a sidebar on how to combine formal and dynamic coverage in signoff. One of the group talked about their work using Cadence vManager. I get to talk to multiple vendors, so I'll have some insight on this topic from the Cadence folks in upcoming blogs. Meantime, some confusion remains about merging versus combining formal and dynamic coverage. The state of the art still seems to be combining (showing side by side) rather than somehow folding the coverage metrics together. Another interesting view at the dinner, related to coverage, is that dynamic coverage is (above some level) in a sense a kind of black-box testing, whereas formal coverage, at least as practiced by many, is a form of white-box testing; perhaps (my guess) this is why they have to be combined rather than merged. This question would perhaps make a good topic for another blog (hint to the formal tool suppliers).
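
As a rough illustration of the merging-versus-combining distinction, "combining" in this sense is little more than lining the two reports up per coverage point. The report format below is invented for the sketch and is not any vendor's merge semantics:

```python
# Minimal sketch of "combining" formal and simulation coverage side by side.
# The input dictionaries map coverage-point names to hit/proven status; the
# field names are illustrative, not any particular tool's report format.

def combine_side_by_side(sim_cov: dict, formal_cov: dict) -> list:
    """Return one row per coverage point with both views shown, without
    attempting to fold the two metrics into a single number."""
    rows = []
    for point in sorted(set(sim_cov) | set(formal_cov)):
        rows.append({
            "point": point,
            "simulation": sim_cov.get(point, "not run"),
            "formal": formal_cov.get(point, "not run"),
        })
    return rows

if __name__ == "__main__":
    sim = {"fifo.overflow": "covered", "arb.grant_onehot": "uncovered"}
    fv = {"arb.grant_onehot": "proven", "fifo.underflow": "bounded"}
    for row in combine_side_by_side(sim, fv):
        print(f"{row['point']:20} sim={row['simulation']:10} formal={row['formal']}")
```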

Another interesting area of debate, always popular with Oski, was around architectural formal verification. Quick recap: you have a verification problem at the system level (not contained inside a component); there are no obvious long-pole candidates to abstract, so instead you abstract everything, replacing components with smaller FSM models to enable verifying control interactions. A typical candidate is cache-coherency verification.
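
As a very rough illustration of the "abstract everything" idea (my own toy example, not a protocol anyone at the dinner described), the sketch below reduces each cache agent to a three-state FSM and exhaustively explores the abstract state space to check a single-writer invariant – the kind of control-interaction question architectural formal targets, here shrunk to a few lines of Python:

```python
# Toy "architectural" model: each cache agent is abstracted to a 3-state FSM
# (I = invalid, S = shared, M = modified). An invented invalidate-on-write
# rule stands in for a real protocol; the check is that no reachable state
# has two agents in M at once (single-writer invariant).

def next_states(agents, actor):
    """Yield successor global states when `actor` issues a read or a write."""
    # Read: actor goes to S; any other agent holding M is downgraded to S.
    yield tuple(
        "S" if i == actor else ("S" if s == "M" else s)
        for i, s in enumerate(agents)
    )
    # Write: actor goes to M; every other agent is invalidated.
    yield tuple("M" if i == actor else "I" for i in range(len(agents)))

def check_single_writer(n_agents=2):
    init = tuple("I" for _ in range(n_agents))
    seen, frontier = {init}, [init]
    while frontier:
        state = frontier.pop()
        assert sum(s == "M" for s in state) <= 1, f"two writers in {state}"
        for actor in range(n_agents):
            for nxt in next_states(state, actor):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return len(seen)

if __name__ == "__main__":
    print("reachable abstract states:", check_single_writer())
```

Real architectural formal work of course runs the abstract component models against the RTL inside a formal tool, but the abstraction step – dropping datapath detail and keeping only the control FSM – is the same in spirit.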

Some of the group are already doing (or have done) this; others had concerns about the validity of the method. There seems to be some sort of quantum barrier to acceptance in this area, not just at the dinner but in general. The folks who get over (or rather through :)) the barrier do it because they have no choice; the problem they face has no reasonable coverage solution in simulation. An interesting perspective for me is that this kind of architectural formal is nowhere near as complex as some of the formal techniques (Murphi, Spin, …) already used in validating, for example, floating-point units; so the issue can't be "too complex to trust". Maybe it's just a question of familiarity (Kamal hinted at this).

One last topic on PSS and formal, on which I won't spend a lot of time since Brian was very passionate at the dinner and I'm sure will expound at length on this topic in his blog. Brian feels that PSS and formal should naturally be very close (at least in the graph-based view of PSS). It wasn't clear that many (or any) of the other diners had a view on this. In fairness, some of them didn't know what PSS was, which in itself may tell us something. One person said that it wasn't clear to him that dynamic and formal needs always overlap. A deadlock check is one example – it makes sense in formal, not so much in simulation. As for me, I'm uncertain. From a perspective of graphs and pruning and so on, it does seem like there should be commonality. But I wonder if the commonality is more like that found in Berkeley ABC – common underlying data structures and engines serving both formal verification and synthesis – rather than a commonality in application-level usage, particularly with other disjoint engines for verification. Just a thought.

Oski always provides entertaining and thought-provoking insights and debates around the formal verification domain. I strongly recommend you attend their next Decoding Formal session (Tuesday, March 20th, 11:30am-5pm Pacific), in which you'll get to hear:

  • A talk from an expert on Meltdown/Spectre, including a discussion on how formal helped find these problems and could have helped prevent them
  • Experiences in building formal specialists and a formal methodology in Cisco
  • Applying formal in support of ISO 26262 needs