
  • A random walk down OS-VVM

    Unlike one prevailing theory of financial markets, digital designs definitely don’t function or evolve randomly. But many engineers have bought into the theory that designs can be tested completely at random. Certainly there is value in randomness: it exercises combinations of inputs, including unexpected ones a designer wouldn’t think to try but a test engineer without a priori bias would.

    What randomness misses is sequencing. Few designs today are entirely synchronous, especially with tens upon tens of IP blocks running in different clock and power domains. Add in computational blocks executing code, and coverage from a completely random testbench becomes very suspect.

    To combat that, best brains concocted “constrained random testing”. In short, in CR one creates a randomization constraint model and a functional coverage model, then hopes there is enough simulation horsepower available to create a randomized set of stimulus that stays within the constraints yet hits the coverage holes.


    Many observers have noticed it takes simulators a very long time to solve the problem – as Chris Wilson put it in “Bugs Are Easy”:

    … uniformly randomizing across a constrained input space is an NP-hard problem.
    Without jetting off into a discussion of computer science, this translates to: it is REALLY difficult to make things REALLY random given the constraints. What you usually get on a model beyond trivial complexity is an approximation of randomness, and the resulting coverage can hit a wall well below 100%. While sets of researchers have scurried off trying to compute sorta-random solutions within sorta-accurate constraint models with sorta-complete coverage faster, a better approach may be simply to change methodology.

    Jim Lewis of SynthWorks has been leading an effort to create Open Source VHDL Verification Methodology, or OS-VVM for short. Rather than completely discard randomness, OS-VVM applies it in a different way. One still builds a functional coverage model, same as before, but uses “intelligent coverage” – randomizing across holes in the coverage – to generate N unique test cases in N randomizations. The coverage can be refined, as little or as much as desired, using directed, algorithmic, file-based, or further randomization methods.
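    As a rough sketch, here is what an intelligent-coverage loop looks like using OS-VVM’s coverage package (the bin range and the stimulus step are placeholders for illustration; the CovPType methods shown are from the package):

```vhdl
library osvvm;
use osvvm.CoveragePkg.all;

architecture test of tb is
  shared variable CovBin : CovPType;  -- functional coverage model object
begin
  stim : process
    variable Data : integer;
  begin
    -- functional coverage model: 16 bins over the input range 0..15
    CovBin.AddBins(GenBin(0, 15));

    -- intelligent coverage: randomize only across the coverage holes,
    -- so 16 bins are closed in exactly 16 randomizations
    while not CovBin.IsCovered loop
      Data := CovBin.RandCovPoint;  -- pick a value from an uncovered bin
      -- drive Data onto the DUT here; directed, algorithmic, or
      -- file-based refinement can replace or supplement this step
      CovBin.ICover(Data);          -- record the bin as hit
    end loop;

    CovBin.WriteBin;                -- report the coverage results
    wait;
  end process;
end architecture;
```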

    The benefits of OS-VVM, according to Lewis, start with time. By not taking time to build a randomization constraint model and focusing instead on functional coverage, the modeling effort is cut roughly in half. Without the redundant stimuli a solver generates while rattling around within the constraints, simulation time can be cut to 1/5th that of a CR methodology.

    The other benefit of OS-VVM is ease of use. It works in any VHDL environment, with readable code that both verification and RTL engineers can understand. It is free and open source (I couldn’t readily find details on what license agreement they are using, if any), with two distinct packages – CoveragePkg, which supports generation of bins for one-dimensional and cross coverage, and RandomPkg, which, as the name suggests, provides randomization routines.
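    For flavor, a minimal sketch of RandomPkg in use (the loop bounds and value range are placeholders; RandomPType, InitSeed, and RandInt are names from the package):

```vhdl
library osvvm;
use osvvm.RandomPkg.all;

architecture rand_test of tb is
begin
  stim : process
    variable RV   : RandomPType;  -- protected-type randomization object
    variable Addr : integer;
  begin
    RV.InitSeed(RV'instance_name);  -- repeatable, instance-unique seed
    for i in 1 to 10 loop
      Addr := RV.RandInt(0, 255);   -- uniform random integer in [0, 255]
      -- use Addr as stimulus here
    end loop;
    wait;
  end process;
end architecture;
```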

    If you happen to be at @50thDAC in Austin next month, you may want to register for a live, in-person technical session by Aldec – who has multiple folks participating actively in OS-VVM, an all-volunteer community project – to get more insight. To reserve your seat, pre-register for:

    Session 6: VHDL 2008 and Beyond: OS-VVM Continues to Grow

