
  • Hybrid Emulation

    Hybrid emulation runs part of the system in the emulator and part in a virtual prototype. Typically a model of the processor(s) runs in the virtual platform, while the rest of the design is modeled by running its RTL on the emulator. I talked to Tom Borgstrom at Synopsys about what technology they have in this space.
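    The partitioning can be pictured as a fast, functional CPU model whose memory-mapped accesses are forwarded over a transaction-level bridge to the design running on the emulator. This is a minimal conceptual sketch in Python; all class and method names here are illustrative, not the Synopsys API.

    ```python
    class EmulatedDesign:
        """Stand-in for the RTL on the emulator: the virtual platform
        only sees it through read/write transactions (a transactor
        would forward these in a real setup)."""
        def __init__(self):
            self.registers = {}

        def write(self, addr, data):
            self.registers[addr] = data

        def read(self, addr):
            return self.registers.get(addr, 0)


    class VirtualCPU:
        """Fast instruction-level processor model running in the
        virtual platform; device accesses cross the bridge."""
        def __init__(self, design):
            self.design = design

        def execute(self, program):
            results = []
            for op, addr, data in program:
                if op == "store":
                    self.design.write(addr, data)   # transaction out to emulator
                elif op == "load":
                    results.append(self.design.read(addr))  # transaction back
            return results
    ```

    The point of the split is that the CPU side runs at software speed while only the traffic that actually touches the RTL pays the cost of crossing into the emulator.
    
    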

    Emulation has changed qualitatively with the recent improvements in performance, since this hybrid environment now runs fast enough to form the basis for early software development. Unlike other modeling approaches, which are slow and hard to keep synchronized with the evolving design, the hybrid approach runs the actual RTL, so the model is accurate and up to date by construction.


    There are three main uses for hybrid emulation:
    1. Architectural validation with accurate models: the high-speed processor models (which are essentially just-in-time compilers) are functionally accurate but not cycle-accurate, so you can't get reliable performance data from them. Running the RTL in a software simulator would be accurate but far too slow; in the hybrid setup the blocks being validated run as cycle-accurate RTL on the emulator while the processor model stays fast.
    2. Early software development: often transaction-level models are not available, are too slow, or don't exactly match the RTL for the block. But a hybrid approach, with the processor in a virtual platform and all or most of the rest of the design in the emulator, is fast enough for productive software development.
    3. Hardware verification running an actual software load. This is, in some ways, the same as item #2 above, but with the focus shifted to running stable software on a hardware design that is still in development.


    One challenge with the hybrid emulation approach is that it generates enormous amounts of data very fast. As a result, typically only a window of the last million cycles or so is recorded, to be used for root-cause investigation when a bug is encountered. But running software on the emulator and doing something like a system boot, or bringing up a WiFi connection, may take billions of instructions. The point at which the bug was injected by an error and the point at which it is first observed may be too far apart for the million-cycle window to be useful. Instead, ZeBu post-run debug works like this: every so often (say every few seconds of wall-clock time) a DUT checkpoint is made, and all the inputs to the system are captured. The system can then be rerun from any one of those DUT checkpoints, and the rerun is completely deterministic, so the bug occurs exactly as before. This gives access to billions of cycles of history without actually having to record them.
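    The checkpoint-and-replay idea described above can be sketched in a few lines. This is a conceptual illustration only, with hypothetical names; the real ZeBu mechanism operates on hardware state, not Python objects. The key requirement is that the step function is deterministic, so replaying the recorded inputs from a checkpoint reproduces the original run exactly.

    ```python
    import copy

    class ReplayDebugger:
        """Sketch of checkpoint-based post-run debug: record inputs
        always, snapshot state periodically, and reconstruct any
        window of the run on demand by deterministic replay."""

        def __init__(self, step_fn, init_state, checkpoint_interval):
            self.step_fn = step_fn                  # deterministic: (state, input) -> state
            self.state = copy.deepcopy(init_state)
            self.interval = checkpoint_interval
            self.checkpoints = {0: copy.deepcopy(init_state)}
            self.inputs = []                        # all inputs are captured
            self.cycle = 0

        def run(self, stimulus):
            for inp in stimulus:
                self.inputs.append(inp)
                self.state = self.step_fn(self.state, inp)
                self.cycle += 1
                if self.cycle % self.interval == 0:
                    self.checkpoints[self.cycle] = copy.deepcopy(self.state)

        def replay_window(self, start, end):
            """Rerun from the nearest checkpoint at or before `start`,
            keeping a detailed trace only for cycles in [start, end)."""
            base = max(c for c in self.checkpoints if c <= start)
            state = copy.deepcopy(self.checkpoints[base])
            trace = []
            for cyc in range(base, end):
                state = self.step_fn(state, self.inputs[cyc])
                if cyc >= start:
                    trace.append((cyc + 1, copy.deepcopy(state)))
            return trace
    ```

    Storage grows with the number of checkpoints and the input log, not with the full cycle count, which is why billions of cycles become debuggable without being recorded in detail.
    
    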

    [Diagram: hybrid emulation with the virtual platform and the ZeBu emulator]

    Synopsys has a nice case study of this approach with Ricoh. They needed a complete model for software development, but they only had an RTL version of the image processing engine. Developing a SystemC model of the engine would have taken several months of time and effort, and co-simulating the RTL with the virtual prototype was too slow for the software developers. So they went with hybrid emulation, running the image processor on the ZeBu emulator alongside the virtual platform (see the diagram above).

    The performance was high enough for software development. As Ricoh's management put it:
    "The resulting system was quite impressive and our software developers accepted using it without any resistance."

    To learn more, get the (free) book Better Software. Faster! here.
    There is also a Synopsys webinar on hybrid emulation here.

    More articles by Paul McLellan…