
  • Analysis and Signoff for Restructuring

    For the devices we build today, design and implementation are unavoidably entangled. Design for low power, test, reuse and optimized layout is no longer possible without taking implementation factors into account in design, and vice-versa. But design teams can't afford to iterate indefinitely between these phases, so they increasingly adapt design to implementation either by fiat (design components and architecture are constructed to serve a range of implementation needs, and implementation must work within those constraints) or through restructuring, where design hierarchy is adjusted in a bridging step to best meet the needs of power, test, layout and other factors.


    This raises an obvious question in the latter case – "OK, I'm willing to restructure my RTL, but how do I decide what will be a better structure?" This requires analysis and metrics, but it's not easy to standardize automation in this domain. SoC architectures and use-cases vary sufficiently that any canned approach which might work well on one family of SoCs would probably not work well on another. But what is possible is to provide a Tcl-based analytics platform, with some supporting capabilities, on which a design team can quickly build their own custom analytics and metrics.

    A side note here to head off possible confusion. Many of you can probably construct ways this analysis could be scripted in synthesis or a layout tool. But if this has to be done in RTL chip assembly with a minimal learning curve, and the restructuring changes have to be reflected back in the assembly RTL for functional verification, and if you're also going to use a tool to do the restructuring, then why not use the same platform to do the analysis? This is what DeFacto provides in the checking capabilities they include in their STAR platform.


    This always centers around some form of structural analysis but generally with more targeted (and more implementation-centric) objectives than in global hookup verification. One objective may be to minimize inter-block wires to limit top-level routing. Or you may want to look at clock trees or reset trees to understand muxing and gating implications versus partitioning. And you want to be certain you know which clock is which by having the analysis do some (simulation-free) logic state probing to see how those signals propagate. This logic-state probing is also useful in looking at how configuration signals affect these trees. Again, you will script your analysis on top of these primitives as part of inferring a partitioning strategy.
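>
    To make the inter-block wiring objective concrete, here is a minimal sketch of the kind of metric a team might script. This is illustrative Python on an invented toy netlist, not the STAR Tcl API: it counts the nets that cross a block boundary under a candidate instance-to-block assignment, a simple proxy for top-level routing load.

    ```python
    def inter_block_nets(nets, assign):
        """Count nets spanning more than one block under a given
        instance-to-block assignment (proxy for top-level routing)."""
        crossing = 0
        for net, pins in nets.items():
            blocks = {assign[p] for p in pins}
            if len(blocks) > 1:
                crossing += 1
        return crossing

    # Toy netlist: each net lists the leaf instances it connects.
    nets = {
        "clk_root": ["cpu.core0", "cpu.core1", "dsp.mac"],
        "bus_req":  ["cpu.core0", "noc.router"],
        "bus_ack":  ["noc.router", "cpu.core0"],
        "dsp_out":  ["dsp.mac", "dsp.acc"],
    }

    # Two candidate partitionings of the same instances.
    flat  = {"cpu.core0": "top", "cpu.core1": "top",
             "dsp.mac": "top", "dsp.acc": "top", "noc.router": "top"}
    split = {"cpu.core0": "cpu_blk", "cpu.core1": "cpu_blk",
             "dsp.mac": "dsp_blk", "dsp.acc": "dsp_blk",
             "noc.router": "noc_blk"}

    print(inter_block_nets(nets, flat))   # 0: everything in one block
    print(inter_block_nets(nets, split))  # 3: clk_root, bus_req, bus_ack cross
    ```

    In a real flow this kind of loop would run over the platform's structural primitives rather than a hand-built dictionary, but the shape of the custom metric is the same.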

    Another driver of course is partitioning for low power. I mentioned in my last blog the potential advantages of merging common power domains which are logically separate in the hierarchy. Another interesting consideration is in daisy-chaining power switch enables to mitigate inrush current when these switches turn on. How you should best sequence these will depend in part on floorplan, and that depends on partitioning, hence the need to analyze and plan your options. In general, since repartitioning may impact the design UPF, you will want to factor that impact into your partition choices.
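>
    Why daisy-chaining helps can be seen with a toy inrush model (an assumption for illustration, not how any signoff tool models this): each switch segment draws a decaying current after its enable asserts, so enabling all segments at once stacks the peaks while staggered enables let earlier segments decay first.

    ```python
    import math

    def peak_current(enable_times, i0=1.0, tau=5.0, t_end=100.0, dt=0.1):
        """Peak total current when each segment draws i0*exp(-(t-te)/tau)
        after its enable time te (all units arbitrary)."""
        peak, t = 0.0, 0.0
        while t <= t_end:
            total = sum(i0 * math.exp(-(t - te) / tau)
                        for te in enable_times if t >= te)
            peak = max(peak, total)
            t += dt
        return peak

    simultaneous = peak_current([0, 0, 0, 0])     # all four at once
    staggered    = peak_current([0, 20, 40, 60])  # daisy-chained enables
    print(round(simultaneous, 2))  # 4.0: four peaks coincide
    print(round(staggered, 2))     # ~1.02: earlier segments have decayed
    ```

    The interesting design question the blog raises is that the best stagger depends on floorplan distances, which in turn depend on the partitioning being evaluated.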


    Similar concerns apply to test. I talked previously about partitioning to optimize MBIST controller usage. Naturally a similar consideration relates to scan-chain partitioning (collisions between test and power/layout partitioning assumptions don't help shift-left objectives). There are sometimes considerations around partitioning memories. Could a big memory be split into smaller memories, possibly simplifying the MBIST strategy? This, by the way, is a great example of why canned solutions are not easy to build: it is a question requiring architecture, applications and design input.
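>
    The memory-splitting question comes down to address decoding: the top address bits select a bank, and each smaller bank can then be served by its own simpler MBIST arrangement. A hypothetical sketch (the bank count and depth are invented for illustration):

    ```python
    BANKS = 4
    DEPTH = 1 << 16          # one logical 64K-word memory
    BANK_DEPTH = DEPTH // BANKS

    def decode(addr):
        """Map a flat address to (bank, offset) for a 4-bank split."""
        return addr // BANK_DEPTH, addr % BANK_DEPTH

    print(decode(0))        # (0, 0): first word, first bank
    print(decode(0xFFFF))   # (3, 16383): last word, last bank
    ```

    Whether the extra decode logic and bank boundaries are worth the simpler per-bank test strategy is exactly the architecture/applications/design trade-off the article points to.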


    Finally, the tool provides support for an area I think is still at an early stage in adoption but is quite interesting – complexity metrics. This idea started in software, particularly with the McCabe cyclomatic complexity metric, as a way to quantify the testability of a piece of software. Similar ideas have been floated from time to time in an attempt to relate RTL complexity to routability (think of a possible next level beyond Rent's rule). You might also imagine possible connections to safety and possibly security. DeFacto have wisely chosen not to dictate a direction here but rather to provide extraction of a range of metrics on which you can build your own experiments. One customer they mentioned has been actively using this approach to drive partitioning for synthesis and physical design.
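>
    For readers unfamiliar with the software original: McCabe cyclomatic complexity is M = E - N + 2P over a control-flow graph (E edges, N nodes, P connected components). A minimal sketch on a hypothetical FSM-like graph for a small RTL block (not STAR's own extraction):

    ```python
    def cyclomatic_complexity(edges, nodes, components=1):
        """McCabe metric M = E - N + 2P for a control-flow graph."""
        return len(edges) - len(nodes) + 2 * components

    # IDLE -> RUN -> DONE, with a retry self-loop on RUN
    # and an abort path RUN -> IDLE.
    nodes = ["IDLE", "RUN", "DONE"]
    edges = [("IDLE", "RUN"), ("RUN", "RUN"),
             ("RUN", "IDLE"), ("RUN", "DONE")]

    print(cyclomatic_complexity(edges, nodes))  # 3: E=4, N=3, P=1
    ```

    How well such a count predicts routability or verification effort for RTL is exactly the open question the article describes, which is why exposing raw metrics for experimentation is the sensible choice.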

    In all these cases, the goal is to iterate quickly through analysis of options and subsequently through restructuring experiments. The STAR platform provides the foundation for you to build your own analytics and metrics highlighting your primary concerns in power, test and layout, to translate those into a new structure, and then to confirm through those same analytics and metrics that you met your objective. You can learn more about STAR-Checker HERE.