
Finding under- and over-designed NoC links
by Don Dingee on 11-24-2015 at 12:00 pm

When it comes to predicting SoC performance in the early stages of development, most designers rely on simulation. For network-on-chip (NoC) design, two important factors suggest that simulation by itself may no longer be sufficient to deliver an optimized design.

The first factor is use cases. I think I’ve told the story before of the early days of PowerPC, when the PowerQUICC team at Motorola Semiconductor asked us poor slobs at the Motorola Computer Group why the heck we were trying to use those two particular peripherals simultaneously. Oh, I dunno, maybe because a customer asked us why it didn’t work, and there was nothing in the fine manual (see: RTFM) saying it wouldn’t work.

A chip has to know its limitations. What goes wrong in that scenario is that the design team only imagined and tested certain use cases, and we (well, our customer) found one that didn’t work. In defense of the designers, there is a significant degree of difficulty in modeling on-chip traffic and setting up test cases. The last few percentage points of coverage get tough to achieve, and if simulation or real-world test cases take forever to run, setting up one more obscure case nobody will ever think to use gets prohibitive.

The other factor is that everyone breathes a huge sigh of relief when the simulation or test works, but rarely asks why. Is it barely passing? Is it passing at an acceptable margin? Is it passing by 2x or 3x? Simulation is great at reporting stuff that is broken or under-designed, but not so good at pointing out when a system may be over-designed in certain areas.

Both experience and simulation can be expensive, but so is failed or over-designed silicon. This is another area where NoCs are not “just software”, and some intelligent exploration of the topology and links can make a huge difference. Design teams can certainly simulate the NoC within the context of the SoC design, but it might take a while and burn valuable brain cells that could be spent doing other stuff.

Sonics has come up with a better solution: static performance analysis, introduced in the latest release of SonicsStudio, the GUI-based design tool for their NoC products. Let's zoom into portions of a screenshot of the tool to look at the highlights.


On the left is a simple topology: 4 initiators, 3 targets, a few router bubbles for sharing, and the links themselves. Peak bandwidth is defined by link width and clock. Buffering and router combinations factor into configured capacity. Requested capacity comes from the traffic model, moderated by a credit-based flow control system that prevents traffic from piling up.
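
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python of the peak-bandwidth relationship just described. The function name and the numbers are mine for illustration only; they are not SonicsStudio's data model.

```python
# Illustrative only: peak link bandwidth is set by link width and clock.
def peak_bandwidth_gbps(width_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth in Gb/s = link width (bits) x clock (MHz) / 1000."""
    return width_bits * clock_mhz / 1000.0

print(peak_bandwidth_gbps(64, 800))   # 51.2 Gb/s for a 64-bit link at 800 MHz
print(peak_bandwidth_gbps(32, 800))   # 25.6 Gb/s for a 32-bit link at the same clock
```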

There is still a step in imagining the use cases (entered in a table or extracted from simulation stimuli), but once the model is dialed in, SonicsStudio quickly and automatically calculates what the NoC link performance looks like. It reports, link by link, what percentage of bandwidth is used (both configured and requested), and identifies any limits as peak, flow control, or downstream. The connections are hyperlinked, so jumping in to reconfigure buffers, clocks, or even link widths is easy.
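
For a feel of what such a per-link report might contain, here is a minimal sketch: requested versus configured bandwidth and a coarse limit classification. The field names, thresholds, and classification rule are assumptions for illustration (credit-based flow-control limits are omitted), not the tool's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    configured_gbps: float   # capacity after width/clock/buffering choices
    requested_gbps: float    # demand from the traffic model
    downstream_gbps: float   # capacity of the next hop

def report(link: Link) -> str:
    util = 100.0 * link.requested_gbps / link.configured_gbps
    if link.requested_gbps > link.configured_gbps:
        limit = "peak"         # the link itself cannot carry the demand
    elif link.requested_gbps > link.downstream_gbps:
        limit = "downstream"   # the next hop throttles this link
    else:
        limit = "none"
    return f"{link.name}: {util:.0f}% of configured bandwidth, limit={limit}"

# Hypothetical link: 40 Gb/s requested on a 51.2 Gb/s link feeding a 25.6 Gb/s hop
print(report(Link("cpu->router0", 51.2, 40.0, 25.6)))
# cpu->router0: 78% of configured bandwidth, limit=downstream
```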


This static performance analysis run is much faster than a corresponding simulation and provides more robust reporting. The nice part about this tool is that it shows you where the problem actually is. When links combine through a router, the symptom may show up on the main link, but insufficient bandwidth elsewhere on a single link could be the cause. (In this scenario, somebody goofed and made the peripheral link narrower than it should have been. Even though it “works”, it sets off a lot of red in the table.)
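
A toy example of how a mis-sized branch drags down a shared path, under assumed numbers of my own: two flows merge at a router, and the under-sized peripheral branch caps what the shared link can actually deliver, even though the shared link itself looks healthy.

```python
# Illustrative only: each flow is capped by its exit branch, so the shared link
# carries less than requested when one branch is too narrow.
def deliverable_gbps(demand_gbps: dict, branch_capacity_gbps: dict) -> float:
    return sum(min(demand_gbps[f], branch_capacity_gbps[f]) for f in demand_gbps)

demand   = {"to_mem": 20.0, "to_periph": 6.4}
branches = {"to_mem": 25.6, "to_periph": 1.6}   # peripheral link under-sized by mistake
print(deliverable_gbps(demand, branches))        # 21.6 Gb/s delivered of 26.4 requested
```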

Also evident are optimization steps, particularly clock speed and buffers. Quick what-ifs show whether a NoC link can be cut back and still perform, saving real estate and power, or whether a link is just barely hanging in there and may need a slightly faster clock or a couple more buffers.
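
In the spirit of those what-ifs, here is a tiny margin check; the helper and the numbers are hypothetical, meant only to show how a clock change moves a link between comfortable, adequate, and marginal.

```python
# Illustrative only: margin = peak bandwidth / requested bandwidth (>1.0 means headroom).
def margin(requested_gbps: float, width_bits: int, clock_mhz: float) -> float:
    peak = width_bits * clock_mhz / 1000.0
    return peak / requested_gbps

print(round(margin(10.0, 32, 800), 2))   # 2.56x: candidate for a slower clock or narrower link
print(round(margin(10.0, 32, 400), 2))   # 1.28x: still passes after cutting the clock in half
print(round(margin(24.0, 32, 800), 2))   # 1.07x: barely hanging in there, may need a faster clock
```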

Once the NoC exploration phase using static performance analysis in SonicsStudio is done, the resulting design can be put back into detailed simulation for a final pass. Iterations are fewer, confidence is higher, and understanding of what is really happening in the NoC increases. (And, someone will be RTFMing your oversight later if you don’t use this tool.) This represents a major improvement to designer productivity with Sonics NoC technology.

The full screenshot is attached below for reference.

Full press release:

Sonics Upgrades SoC Development Environment and Flagship NoC to Improve Chip Architecture Optimization and SoC Resiliency
