Formal Signoff – a Cisco Perspective
by Bernard Murphy on 05-08-2018 at 7:00 am

The second segment of Oski’s most recent “Decoding Formal” event was a talk by Anatoli Sokhatski (formal tech lead at Cisco) on training and methodology development for a structured and scalable approach to formal verification, with particular emphasis on formal signoff.

Anatoli stressed that he and others on the team did not go into this as novices in formal, but they found they needed a more consistent methodology they could carry forward between designs. Formal had been used successfully on previous products; however, that learning hadn’t really been captured before the experts moved on. I’m beginning to think this is a root cause of why many organizations still consider formal to be hard. Imagine if you had to re-create, from scratch on each program, UVM and constrained-random expertise, along with coverage measures, assertions and all the other paraphernalia of modern dynamic verification. You don’t have to do that, because you can start from a lot of process, assets and training built up over many programs. The same approach makes just as much sense for formal.

Anatoli’s talk had three main sections: why formal is so important in networking verification, a few examples of the methods they use to manage complexity in proofs, and the process they developed around formal signoff to ensure this expertise could carry forward to other programs.

Formal is important in many contexts, but it’s always interesting to understand the application-specific reasons that make it important to a given domain. For Anatoli, in networking the problem starts with wide buses, deep pipelines and big memories. This is compounded by protocol challenges – packets spread across many cycles, packet interleaving, and high-priority packets breaking into lower-priority packets. Together these create one of those challenges so well suited to formal – a huge range of combinational possibilities, with some sequential depth but not too much – in proving that there is no possible case in which data could be corrupted.

Anatoli’s group kicked off with an intensive two-week Oski training session (lectures and labs) which gave them a solid grounding in flows, principles for calculating required proof depths (needed when justifying bounded proofs), techniques to overcome complexity (which I’ll briefly mention next), abstraction methods, constraints and managing over-constraints, wrapping up with principles of formal coverage and formal signoff.

Anatoli discussed several concepts which I’ve seen touched on in advanced formal usage, though rarely explained in depth. I’m going to give a quick summary here (thanks to info provided by Roger Sabbagh, VP Applications Engineering at Oski); I hope to do a deeper dive on these in a later blog. All of these are prompted by a basic question: how do you check the correctness/integrity of packets passing through a pipeline when, of course, those packets can contain arbitrary data?

The starting point (at least for me) is something called the floating pulse method. This is a technique in which the formal tool can assert a single pulse at some arbitrary time after reset, and on that pulse do something special. The special thing done in this context is to tag a word in an incoming packet with an easily recognized ID, a process widely known as coloring. That color can then be spotted at a later/deeper cycle, allowing for various kinds of check.

So for example, Anatoli said they applied this method to check the time between the floating pulse on an incoming packet and when the forwarded colored data appeared at the output; this should fall within a required maximum number of cycles. The clever trick here is that, thanks to the floating pulse method, the formal tool effectively tracks any element of any incoming packet, thereby verifying that this maximum number of cycles holds for all possible packet sequences.
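
To make this a little more concrete, here is my own rough sketch of how a floating-pulse latency check might look in SVA. None of it is taken from the talk: the module, signal and parameter names are invented, and matching the tagged word by value is a simplification (a real flow would more likely track position or a dedicated tag).

```systemverilog
// Minimal sketch of a floating-pulse latency check on a simple valid/data stream.
// All names and parameters are illustrative, not from the Cisco/Oski work.
module floating_pulse_latency_check #(
  parameter int W       = 64,   // word width (assumed)
  parameter int MAX_LAT = 16    // required maximum latency in cycles (assumed)
) (
  input  logic         clk,
  input  logic         rst_n,
  input  logic         in_valid,
  input  logic [W-1:0] in_data,
  input  logic         out_valid,
  input  logic [W-1:0] out_data
);
  logic         pulse;        // left undriven: a formal tool treats it as a free variable
  logic         seen_pulse;   // remembers that the single pulse has already fired
  logic [W-1:0] tagged_word;  // the input word "colored" by the pulse

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)     seen_pulse <= 1'b0;
    else if (pulse) seen_pulse <= 1'b1;

  always_ff @(posedge clk)
    if (pulse) tagged_word <= in_data;

  // Constrain the free signal to a single pulse, aligned with a valid input word
  asm_single_pulse: assume property (@(posedge clk) disable iff (!rst_n)
    pulse |-> (in_valid && !seen_pulse));

  // The tagged word must reach the output within MAX_LAT cycles. Because the tool is
  // free to fire the pulse on any word of any packet, a proof covers every element of
  // every possible packet sequence. (Matching by value ignores aliasing with other
  // words that happen to carry the same data; a simplification for this sketch.)
  ast_max_latency: assert property (@(posedge clk) disable iff (!rst_n)
    pulse |-> ##[1:MAX_LAT] (out_valid && out_data == tagged_word));
endmodule
```

Since tagged_word is only ever written on the pulse, the assertion compares against a value that stays stable for the whole proof window.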

Anatoli talked about other checks, but I’ll mention just one that I particularly like, not least because it has confused me for a long time. This is Wolper coloring, and it is used (at least here) for data-integrity checks. The same general approach is followed as above, but in this case two consecutive words are colored, presumably differently, in the incoming packet. The check at the output is then that the same words are seen consecutively, in the correct sequence, with no additional colored words around them. This confirms that no words were dropped, no words were replicated, no words were swapped and nothing was inserted between words. In other words, data integrity was preserved. Again, pretty clever.
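
Again purely as my own illustration, a Wolper-style check along these lines might look something like the sketch below, using the low two data bits as the “color” and assuming (to keep it short) that the two tagged words arrive back-to-back.

```systemverilog
// Rough sketch of a Wolper-style data-integrity check; every name and detail here
// (the 2-bit color encoding, the back-to-back assumption) is my own simplification.
module wolper_color_check #(parameter int W = 64) (
  input  logic         clk,
  input  logic         rst_n,
  input  logic         in_valid,
  input  logic [W-1:0] in_data,
  input  logic         out_valid,
  input  logic [W-1:0] out_data
);
  logic       pulse;           // free variable: the tool chooses when to color a word pair
  logic       seen_pulse;      // the pulse may fire at most once per trace
  logic       second;          // high on the cycle after the pulse, i.e. the second colored word
  logic [1:0] last_out_color;  // color of the most recent valid output word

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) begin
      seen_pulse <= 1'b0;
      second     <= 1'b0;
    end else begin
      if (pulse) seen_pulse <= 1'b1;
      second <= pulse;
    end

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)         last_out_color <= 2'b00;
    else if (out_valid) last_out_color <= out_data[1:0];

  // A single pulse, aligned to two back-to-back valid input words (deliberately
  // over-constrained to keep the sketch short)
  asm_pulse: assume property (@(posedge clk) disable iff (!rst_n)
    pulse |-> (in_valid && !seen_pulse) ##1 in_valid);

  // Color the input stream: 01 on the first tagged word, 10 on the second, 00 elsewhere
  asm_color: assume property (@(posedge clk) disable iff (!rst_n)
    in_valid |-> (in_data[1:0] == (pulse ? 2'b01 : (second ? 2'b10 : 2'b00))));

  // Whenever the second colored word appears at the output, the previous valid output
  // word must have been the first colored word: nothing dropped, reordered or inserted
  // within the pair. A complete flow would also check that each colored word reaches
  // the output exactly once (the latency check above covers part of that).
  ast_in_order: assert property (@(posedge clk) disable iff (!rst_n)
    (out_valid && out_data[1:0] == 2'b10) |-> (last_out_color == 2'b01));
endmodule
```

As I understand it, the data-independence argument behind Wolper coloring is what justifies restricting the payload this aggressively: if the design never makes decisions based on payload contents, a proof over the tiny colored alphabet generalizes to arbitrary data.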

The third part of Anatoli’s presentation was on how they set up a repeatable and scalable flow, particularly for formal signoff, though I imagine elements of this approach would be useful even for basic property proving. The first step towards not having to reinvent the wheel each time in a process must be documentation and templates. He detailed their approach:

  • A document describing phases & milestones

    • FV Planning, Basic Function Complete, Main Function Complete, Block Ready for Speculative Freeze, FV Complete
  • A detailed description/definition of each phase; for example, for Main Function Complete:

    • Checkers implemented for all non-error cases
    • Constraints implemented for all non-error cases
    • Over-constraints applied to disable error cases
  • Defined exit criteria for each phase, for example for FV Complete:

    • Final FV Environment Specification review
    • Final FV Test Plan review
    • Checklist approved

Templates include a checklist for each milestone (e.g. review required proof depth per assertion, review cover properties) and a documentation template (e.g. interface components, E2E checkers, testbench configurations). Finally, they capture all of this in their in-house testplan tracking tool (similar, I would guess, to Cadence vPlanner/vManager). This may all seem like a bunch of bureaucratic overhead, but anyone who has had to launch similar tasks more than twice, or been tasked with training new hires on verification setup, will instantly understand the value of this process.

Naturally they support all of their regression testing through in-house Makefile flows, which they use to run both simulation and formal testing (with different goal scripts per task). One aspect I found interesting is that they control proof parameters through Makefile macros, which simplifies setting up case-split runs (though no doubt they may need to experiment with this to ensure they cover all the parameters on which they might want to split).
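
Purely as a hypothetical illustration of what that could look like (the macro name and the interface here are my inventions, not Cisco’s), a define passed down from the Makefile might switch between complementary over-constraints, so each formal run proves one slice of the input space:

```systemverilog
// Hypothetical case-split selected from the build flow, e.g. via +define+SHORT_PKTS_ONLY
// (an invented macro name) on the formal compile line.
module case_split_constraints (
  input logic       clk,
  input logic       rst_n,
  input logic       in_sop,      // start-of-packet (illustrative interface)
  input logic [7:0] in_pkt_len   // packet length in words (illustrative)
);
`ifdef SHORT_PKTS_ONLY
  // Slice 1: restrict this run's proofs to short packets
  asm_split: assume property (@(posedge clk) disable iff (!rst_n)
    in_sop |-> in_pkt_len <= 8'd16);
`else
  // Complementary slice: long packets only
  asm_split: assume property (@(posedge clk) disable iff (!rst_n)
    in_sop |-> in_pkt_len > 8'd16);
`endif
endmodule
```

Run once with the define and once without, and the two proofs together cover the full range of packet lengths.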

Anatoli made a final point on interface checkers. Even though they are aiming for formal signoff, they still want to roll interface checks (assertions and constraints) up to simulation, so they capture these in a separate file which can easily be reused in sim regressions. It’s a reminder that even the more advanced users of formal methods (running E2E checks) still want to sanity-check against simulation. You can watch the video HERE.
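
As a postscript, here is my guess at the general shape of such a shared interface-checker file; the compile-time switch and the handshake property are generic examples of the pattern, not Cisco’s actual code.

```systemverilog
// Sketch of an interface checker kept in its own file so the same properties serve as
// formal environment constraints and as simulation checks. Names are illustrative.
module pkt_if_checker (
  input logic clk,
  input logic rst_n,
  input logic valid,
  input logic ready
);
  // Generic handshake rule: once asserted, valid must hold until the word is accepted
  property p_valid_held;
    @(posedge clk) disable iff (!rst_n)
    (valid && !ready) |=> valid;
  endproperty

`ifdef ASSUME_INPUT_IF
  // On the block's input interface in formal: an environment constraint
  asm_valid_held: assume property (p_valid_held);
`else
  // Reused in simulation regressions (or on output interfaces in formal): a check
  ast_valid_held: assert property (p_valid_held);
`endif
endmodule
```

A file like this would typically be pulled into both the formal testbench and the simulation environment with a SystemVerilog bind, which helps keep the constraints and the checks from drifting apart.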
