
  • Virtual Reality

    In the world of hardware emulators, virtualization is a hot and sometimes contentious topic. It's hot because emulators are expensive, creating a lot of pressure to maximize return on that investment through multi-user sharing and 24x7 operation. And of course in this cloud-centric world it doesn't hurt to promote cloud-like access, availability and scalability. The topic is contentious because vendor solutions differ in some respects and, naturally, their champions are eager to promote those differences as clear indication of the superiority of their solution.
    Largely thanks to contending claims, I was finding it difficult to parse what virtualization really means in emulation, so I asked Frank Schirrmeister (Cadence) for his clarification. I should stress that I have previously talked with Jean Marie Brunet (Mentor) and Lauro Rizzatti (Mentor consultant and previously with Eve), so I think I'm building this blog on reasonably balanced input (though sadly not including input from Synopsys, who generally prefer not to participate in discussions in this area).

    There's little debate about the purpose of virtualization: global/remote access, maximized continuous utilization and 24x7 operation. There also seems to be agreement that hardware emulation is naturally moving towards becoming another datacenter resource, alongside other special-purpose accelerators. Indeed, the newer models are designed to fit datacenter footprints and power expectations (though there is hot debate on the power topic).

    Most of the debate is around implementation, particularly regarding purely "software" (RTL plus maybe C/C++) verification versus hybrid setups where part of the environment connects to real hardware, such as external systems connecting through PCIe or HDMI interfaces. Pure software is appealing because it offers easy job relocation, which helps the emulator OS pack jobs for maximum utilization and therefore also helps with scalability (add another server, get more capacity).
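    The packing argument above can be illustrated with a toy scheduler. This is purely an illustrative sketch, not any vendor's actual algorithm; the job names, gate counts and per-board capacity are all hypothetical. The point is simply that when jobs are relocatable, the scheduler is free to place them wherever they fit, so capacity can be used densely.

```python
# Illustrative sketch (hypothetical jobs and capacities, not a vendor
# scheduler): greedy first-fit-decreasing packing of relocatable
# emulation jobs onto emulator boards.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    gates_m: int  # capacity required, in millions of gates

BOARD_CAPACITY_M = 100  # hypothetical capacity of one board

def pack(jobs):
    """Place each job on the first board with room for it, largest
    jobs first. Relocatable (pure-software) jobs can run on any
    board, which is what makes this dense packing possible."""
    boards = []  # each board is a list of jobs
    for job in sorted(jobs, key=lambda j: j.gates_m, reverse=True):
        for board in boards:
            if sum(j.gates_m for j in board) + job.gates_m <= BOARD_CAPACITY_M:
                board.append(job)
                break
        else:
            boards.append([job])  # no existing board fits; add one
    return boards

jobs = [Job("soc_regress", 60), Job("gpu_block", 45), Job("cpu_smoke", 30),
        Job("io_subsys", 25), Job("ddr_test", 40)]
boards = pack(jobs)
print(len(boards), "boards used")  # prints: 2 boards used
```

    A job tied to specific external hardware could not be moved this freely, which is exactly the tension the next paragraph takes up.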

    In contrast, hybrid (ICE) modeling requires external hardware and cabling connected to the emulator, specific to a particular verification task, and that would seem to undermine the ability to relocate jobs or scale, and therefore undermine the whole concept of virtualization. In fact, this problem has been largely addressed in some platforms. You still need the external hardware and cabling of course, but the internal connectivity between those interfaces and the jobs running on the emulator has been virtualized. Since many design teams want to ICE-model with a common set of interfaces (PCIe, USB, HDMI, SAS, Ethernet, JTAG), these resources can also be shared and jobs continue to be relocatable, scalable and fully virtualized.
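    The resource-sharing idea can be sketched as a simple allocator. Again this is a hypothetical illustration, not any vendor's API: physical interfaces sit in a shared pool, a job is virtually switched onto the ones it needs, and when it finishes (or relocates) the interfaces return to the pool for other jobs.

```python
# Illustrative sketch (hypothetical, not a real emulator API): a shared
# pool of physical ICE interfaces virtually switched to whichever job
# needs them, so jobs stay relocatable even with real hardware attached.

class InterfacePool:
    def __init__(self, counts):
        self.free = dict(counts)   # e.g. {"PCIe": 2, "USB": 1}
        self.assigned = {}         # job name -> list of interface types

    def acquire(self, job, wanted):
        """Grant the job its interfaces only if all are available."""
        if any(self.free.get(t, 0) < wanted.count(t) for t in set(wanted)):
            return False
        for t in wanted:
            self.free[t] -= 1
        self.assigned[job] = list(wanted)
        return True

    def release(self, job):
        """On completion or relocation, return interfaces to the pool."""
        for t in self.assigned.pop(job, []):
            self.free[t] += 1

pool = InterfacePool({"PCIe": 2, "USB": 1, "HDMI": 1})
assert pool.acquire("jobA", ["PCIe", "USB"])
assert not pool.acquire("jobB", ["USB"])  # USB is in use by jobA
pool.release("jobA")
assert pool.acquire("jobB", ["USB"])      # freed, so jobB can take it
```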

    Naturally, external ICE components can also be virtualized, running on the software host, on the emulator, or some combination of the two. One appeal here is that there is no need for any external hardware (beyond the emulation servers), which could be attractive for deployment in general-purpose datacenters. A more compelling reason is to connect with expert 3rd-party software-based systems to model high volumes and varying styles of traffic which would be difficult to reproduce in a local hardware system. One obvious example is in modeling network traffic across many protocols, varying rates and SDN. This is an area where solutions need to connect to testing systems from experts like Ixia.
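    To make the traffic-modeling point concrete, here is a deliberately simplified sketch of the kind of generator a software-based test system might use: a weighted mix of protocols at randomized frame sizes, trivially reproducible from a seed. The protocol mix and size range are hypothetical; real systems from vendors like Ixia model far richer behavior.

```python
# Illustrative sketch (hypothetical mix, not a real traffic-generator
# API): software-generated packet streams of the kind a virtual test
# system might inject into an emulated design.

import random

def traffic(seed, n, mix=(("TCP", 0.6), ("UDP", 0.3), ("ICMP", 0.1))):
    """Yield n (protocol, size_bytes) packets drawn from a weighted
    protocol mix, reproducibly from a seed."""
    rng = random.Random(seed)
    protos = [p for p, _ in mix]
    weights = [w for _, w in mix]
    for _ in range(n):
        proto = rng.choices(protos, weights=weights)[0]
        size = rng.randint(64, 1500)  # typical Ethernet frame sizes
        yield proto, size

pkts = list(traffic(seed=1, n=1000))
```

    Varying the mix, rates and seed in software is exactly the kind of flexibility that is awkward to stage with bench hardware.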


    You might wonder, then, if the logical endpoint of emulator evolution is for all external interfaces to be virtualized. I'm not wholly convinced. Models, no matter how well they are built, are never going to be as accurate as the real thing, in real-time and asynchronous behavior and especially in modeling fully realistic traffic. Yet virtual models unquestionably have value in some contexts. I'm inclined to think that the tradeoff between virtualized modeling and ICE modeling is too complex to reduce to a simple ranking. For some applications, software models will be ideal, especially when there is external expertise in the loop. For others, only early testing against a real hardware system will give the level of confidence required, especially in the late stages of design. Bottom line, we probably need both and always will.

    So that's my take on virtual reality. You can learn more about the vendor positions HERE and HERE.