  • The Wolper Method

    If you read around topics in advanced formal verification you're likely to run into something called Wolper coloring, or what Vigyan Singhal (Chief Oski at Oski) calls the Wolper method. Many domains have specialized techniques, but what's surprising in this instance is the seeming absence of helpful on-line explanations (though there are plenty of resources which cite and use the method without explanation, as if we should already know what it is). The original source is a paper by Pierre Wolper, which may be a little heavy going for some (me too), so I asked Vigyan for help, which he happily provided, along with some interesting background. What follows is my attempt at an explanation for those of us who aren't CS theoreticians.

    Let's start with the problem the method aims to address. When you want to verify the correctness of data transport logic (in network switches or on-chip interconnects or memory subsystems, for example), checking the protocol is one part of the job, with well-understood dynamic and formal approaches to verification. Checking the integrity of the payloads flowing through the network (can they be corrupted in some way?) is a different task. At first glance, this could be extremely difficult. In simulation you can't practically check all possible data values in potentially long sequences, and yet it may be far from obvious which corner cases would provide good coverage. And formal methods, even using clever techniques, normally have problems with very long sequences.

    Wolper's contribution was to discover and prove that, as long as control behavior in the logic is independent of payload data, it isn't necessary to test all possible payload values or long sequences. In fact it is sufficient in formal proofs to use one or two bits in a (payload data) sequence, from which you can provably infer behavior for sequences of arbitrary length. I'm not going to attempt a proof of this; you can read the paper or take it on trust. Instead I'll give a little background on how this technique found its way into formal verification for hardware, along with a couple of examples of its application.

    Vigyan told me that when he was working on his doctorate at UC Berkeley, Prof. Bob Brayton directed his formal group to study certain papers he would recommend each week, looking for possible applications in formal methods. Based on the Wolper paper they developed a formal proof approach to checking properties of a sequence of items transported through a design, where the design is making decisions about routing, merging and other transport-related activities. So that's where it all started in our world.

    Quite generally, they found that using this Wolper technique they could formally detect any possibility that a design could drop, duplicate, corrupt or reorder data, for any possible data sequence. Again, the naive formal approach would check all possibilities out to some sequential depth and then run out of gas. But thanks to Wolper's insight, they could prove correct behavior using a few specific sequences composed of just a few bits of data, and from that infer correct behavior over sequences of arbitrary length.

    In the simplest case, you look at just a stream of single bits coming into the router. You constrain exactly two consecutive bits in the stream to 1 and all others to 0; importantly, the position of those two consecutive bits in the stream should be unconstrained. It's easy to get tripped up here by a simulation mindset (I did at first). Don't think of how to set up specific sequences. Think instead of what will be constrained in the proof: two consecutive bits somewhere (unconstrained) set to 1 and all others set to zero. The formal engine will take care of searching for any possible counter-example to this pattern arriving undisturbed at the output.
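
    To make the constraint concrete, here is a minimal Python sketch (my own illustration, not Oski's tooling; colored_streams is a hypothetical helper). It simply enumerates every input stream the constraint permits: all zeros, except one pair of consecutive 1s at an unconstrained position. A formal tool would express the same thing as an assumption rather than an enumeration.

        # Wolper-colored input streams: exactly two consecutive 1s somewhere,
        # zeros everywhere else; the position of the pair is unconstrained.
        def colored_streams(length):
            """Yield every legal colored stream of the given length."""
            for pos in range(length - 1):          # unconstrained pair position
                stream = [0] * length
                stream[pos] = stream[pos + 1] = 1  # the two colored bits
                yield stream

        # Example: the six legal colored streams of length 7
        for s in colored_streams(7):
            print(s)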

    This kind of constraint is what is often referred to as Wolper coloring. You can then add an assertion to check transmission at the output of the design using a small state machine. That state machine will accept any sequence of zeros, followed by two consecutive 1s, followed by any sequence of zeros. But it will error on a lone 1 (a bit dropped) or a 101 sequence (perhaps an erroneous data insertion or reordering). If the assertion triggers, you have a bug in the transport logic. And if you don't get an assertion trigger you know, thanks to Wolper, that the transport logic is bug-free for sequences of any length.
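
    Below is a hedged Python sketch of that output checker (again my own illustration; check_output is a hypothetical name). It models a complete, fully-drained output sequence: zeros, then exactly two consecutive 1s, then zeros; a lone 1 or a 101 pattern is rejected, which in a formal proof would correspond to the assertion firing.

        def check_output(stream):
            """Return True only if the stream is zeros, then '11', then zeros."""
            state = "BEFORE"                 # BEFORE -> FIRST_ONE -> AFTER
            for bit in stream:
                if state == "BEFORE":
                    state = "FIRST_ONE" if bit else "BEFORE"
                elif state == "FIRST_ONE":
                    if not bit:
                        return False         # lone 1: a colored bit was dropped
                    state = "AFTER"
                else:                        # state == "AFTER"
                    if bit:
                        return False         # 1 after the pair: insertion, duplication or reorder
            return state == "AFTER"          # the pair must actually have come out

        assert check_output([0, 0, 1, 1, 0])        # clean transport
        assert not check_output([0, 1, 0, 1, 0])    # 101: reorder or insertion
        assert not check_output([0, 1, 0, 0, 0])    # lone 1: dropped bit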

    [Figure: state machine for the Wolper coloring check]


    You can continue to refine this to handle more complex transport. If bytes are merged into words at the output, you have to adjust for two streams flowing into one. If the design is required to retransmit when there is no response after some time, the check has to allow for that possibility (a sketch of that variant follows below). And so on.
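
    As one illustration of such a refinement (an assumption on my part about how the retry case might be handled, not something taken from Vigyan), the checker below tolerates the colored pair legitimately appearing up to a bounded number of times at the output, while still rejecting a broken pair.

        def check_output_with_retry(stream, max_sends=2):
            """Accept zeros plus up to max_sends intact '11' pairs; reject lone 1s."""
            pairs_seen = 0
            i = 0
            while i < len(stream):
                if stream[i] == 0:
                    i += 1
                    continue
                # any 1 must be the start of an intact '11' pair
                if i + 1 >= len(stream) or stream[i + 1] != 1:
                    return False                     # dropped or corrupted colored bit
                pairs_seen += 1
                i += 2
            return 1 <= pairs_seen <= max_sends      # original send plus bounded retries

        assert check_output_with_retry([0, 1, 1, 0, 1, 1, 0])      # one retransmission
        assert not check_output_with_retry([0, 1, 1, 0, 1, 0, 0])  # corrupted retry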

    All of these variations make it rather challenging to produce a canned application (app) to do Wolper checking. Each variant to handle product differentiation, merging options, better error handling, more complex routing, etc., requires modifications to the proof. Perhaps it is best to think of this as a powerful technique for validating transport correctness, to be used by full property-checking experts. Where, naturally, Oski would be happy to advise 😎

    BTW, I have also seen this method referred to as a data-abstraction technique. You are probably familiar with data abstractions when handling memories (reducing a large memory to a single word, byte or bit to simplify a proof). Think of the Wolper method as a way to do a similar thing with data streams: reducing an arbitrary-length stream to just a few bits in the stream.
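
    For comparison, here is a minimal sketch of that memory-abstraction idea (my illustration, with hypothetical names). Instead of modeling every location, you track one arbitrarily chosen watched address and check that a read from it returns the last value written; because the watched address is left unconstrained in a proof, correctness for that one address covers every location.

        class AbstractMemoryChecker:
            """Track a single watched address instead of the whole memory."""
            def __init__(self, watched_addr):
                self.watched_addr = watched_addr   # left unconstrained in a proof
                self.expected = None               # last value written to that address

            def write(self, addr, data):
                if addr == self.watched_addr:
                    self.expected = data

            def check_read(self, addr, data):
                """Return False if a read from the watched address is corrupted."""
                if addr == self.watched_addr and self.expected is not None:
                    return data == self.expected
                return True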