
  • Arteris Unveils Solution for Heterogeneous Cache Coherent SoCs

    Designing SoCs for markets like automotive and mobile electronics requires taking advantage of every opportunity for optimization. One way to do this is to build a cache coherent system to boost speed and reduce power. Recently, NXP decided to do exactly that on their automotive MCU-based SoCs by using Arteris' just-announced Ncore cache coherent interconnect. Let's dig into the Arteris announcement to see what advantages NXP gained by going this route.

    In all likelihood, many of the blocks they were using already had a cache coherent interface, for instance something like AMBA ACE from ARM. Nevertheless, not every IP block used in the design necessarily uses ACE - it could be a heterogeneous cache coherency environment. Even ARM appreciates that this can happen. ARM's Charlene Marini, Vice President of Segment Marketing, states in the Arteris Ncore announcement that their collaboration with Arteris "will further drive innovation in heterogeneous cache coherent systems."

    Ncore can be configured so that agents support a variety of cache coherency protocols. Ncore can also be configured to handle different numbers of ports on an individual IP, and it supports different cache sizes on different agents. For IP that does not support local caching, Ncore allows designers to add fully coherent proxy caches for these IP blocks, thus improving performance for more of the SoC's blocks that access memory.
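To make the proxy-cache idea concrete, here is a minimal toy model of the behavior described above: the interconnect caches lines on behalf of an IP block that has no cache of its own, so repeated reads avoid DRAM, while invalidations from coherency traffic keep the copy correct. The class and method names are hypothetical illustrations, not Arteris APIs.

```python
# Toy proxy cache: holds coherent copies of memory lines on behalf of a
# non-caching IP block. Names are illustrative, not from the Ncore product.

class ProxyCache:
    def __init__(self, memory):
        self.memory = memory        # backing DRAM model (dict: addr -> value)
        self.lines = {}             # cached copies held for the IP block
        self.dram_reads = 0         # count of actual DRAM fetches

    def read(self, addr):
        if addr not in self.lines:  # miss: fetch once, keep a coherent copy
            self.lines[addr] = self.memory[addr]
            self.dram_reads += 1
        return self.lines[addr]     # hit: served without touching DRAM

    def invalidate(self, addr):
        """Coherency traffic from another agent evicts the proxy's copy."""
        self.lines.pop(addr, None)


dram = {0x40: 7}
proxy = ProxyCache(dram)
first = proxy.read(0x40)            # miss: goes to DRAM
second = proxy.read(0x40)           # hit: served by the proxy cache
print(proxy.dram_reads)             # prints 1, not 2
```

The point of the sketch is simply that the IP block itself never had to implement caching or a coherency protocol; the interconnect does it on the block's behalf.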

    Beyond the obvious benefit to non-cache IP - less time spent on fetches for cached data - there are several less obvious advantages. The bridge agents used to interface to non-coherent IP blocks can prefetch on initial reads to lower DRAM fetch overhead. Writes can also be gathered to make saving data to DRAM more efficient.
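The write-gathering idea can be sketched as follows: individual byte writes landing in the same cache line are merged into a single full-line burst instead of many partial DRAM transactions. The buffer class and the 64-byte line size are assumptions for illustration, not details from the Ncore announcement.

```python
# Sketch of write gathering: adjacent small writes are merged per cache
# line, so a flush emits one burst per line rather than one transaction
# per write. A hypothetical 64-byte line size is assumed.

LINE = 64

class WriteGatherBuffer:
    def __init__(self):
        self.pending = {}                       # line addr -> {offset: byte}

    def write(self, addr, byte):
        line, off = (addr // LINE) * LINE, addr % LINE
        self.pending.setdefault(line, {})[off] = byte

    def flush(self):
        """Emit one DRAM burst per gathered line, then clear the buffer."""
        bursts = list(self.pending.items())
        self.pending.clear()
        return bursts


buf = WriteGatherBuffer()
for a in range(0x100, 0x108):                   # eight byte writes, one line
    buf.write(a, 0xAB)
bursts = buf.flush()
print(len(bursts))                              # prints 1: one burst, not 8
```

A real bridge agent would also track timeouts and partial-line byte enables, but the efficiency argument is the same: fewer, larger DRAM transactions.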



    One of Ncore's significant innovations is the addition of multiple snoop filters, each of which can be individually configured to the optimal size. This is more efficient than relying on a single, larger snoop filter.
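A rough sketch of why per-agent snoop filters help: each filter tracks which lines its agent may hold, and each can be sized to match that agent's cache, so a write only snoops agents that could actually have a copy. The class, capacities, and eviction policy below are illustrative assumptions, not Ncore implementation details.

```python
# Toy model of per-agent snoop filters: each filter is sized for its
# agent's cache and tracks which lines that agent may hold, so coherency
# snoops are sent only where they might matter.

class SnoopFilter:
    def __init__(self, capacity):
        self.capacity = capacity    # configured per agent, as in Ncore
        self.lines = set()          # line addresses the agent may cache

    def track(self, addr):
        if len(self.lines) >= self.capacity:
            self.lines.pop()        # simplified eviction policy
        self.lines.add(addr)

    def may_hold(self, addr):
        return addr in self.lines


def agents_to_snoop(filters, addr):
    """Return only the agents whose filter says they might hold addr."""
    return [name for name, f in filters.items() if f.may_hold(addr)]


filters = {
    "cpu": SnoopFilter(capacity=1024),  # large filter for a large cache
    "gpu": SnoopFilter(capacity=256),
    "dsp": SnoopFilter(capacity=64),    # small filter for a small cache
}
filters["cpu"].track(0x1000)
filters["dsp"].track(0x2000)

snooped = agents_to_snoop(filters, 0x1000)
print(snooped)                          # prints ['cpu']: GPU and DSP skip it
```

With one monolithic filter, every entry would have to be sized for the worst-case agent; splitting the filters lets each one match its agent's actual cache.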

    Ncore is area-efficient because it is distributed and requires fewer wires in routing channels. Building it on top of Arteris' FlexNoC transport IP makes timing closure easier and optimizes the allocation of interconnect resources. FlexNoC also permits mixing agents with different clock and power domains, which can further reduce system power consumption.

    Anyone who has used FlexNoC knows it has a sophisticated user interface that assists SoC architects in planning the sizing and placement of data communication resources while taking timing, bandwidth, distance, area, and power into consideration. FlexNoC is so efficient in using routing channels that communication and repeater logic can go in the channels alongside signal lines. This is due in part to careful layer assignment for the data lines and communication modules.

    Ncore extends and coexists with FlexNoC and uses an integrated design planning platform, making the SoC architect's job easier and lending more predictability to the overall process.



    Arteris has broken new ground with Ncore, but history has favored companies that focus on solving customer design problems instead of just providing products. It is also good to see that they have been working with customers and had successes to report at the Ncore product announcement. More information and product details are available on the Arteris website.