
  • Enterprise SSD SoCs Call for a Different Interconnect Approach

    The move to SSD storage for enterprise use brings with it the need for difficult-to-design, enterprise-capable SSD controller SoCs. The benefits of SSDs in hyperscale data centers are clear. With no moving parts, SSDs offer higher reliability. They have a smaller footprint, use less power, and deliver much better performance. SSDs also scale better, a big plus where storage needs run into the petabyte range.

    Nevertheless, SSDs require more complex and sophisticated controllers. Unlike early SSD implementations that used SATA, SAS, or Fibre Channel to connect to their hosts, enterprise SSDs use the NVMe protocol to connect directly over PCIe. NVMe was developed specifically for SSD memory and takes advantage of its low latency, high speed, and parallelism. The table below, from Wikipedia, shows the comparison.

    [Image: NVMe interface comparison table (nvme-min.jpg)]
    Enterprise SSD controllers connect to many banks of NAND memory and handle low-level operations such as wear leveling and error correction, both of which have special requirements in this application. The SSD controller must offer low latency, extremely high bandwidth, low power, and internal and external error correction.
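Wear leveling, for instance, can be as simple in concept as steering each new write to the least-worn erase block. A minimal Python sketch of that idea follows; the class, thresholds, and pool structure here are illustrative only, not any vendor's actual firmware.

```python
# Conceptual sketch of wear leveling: track erase counts per NAND
# block and always allocate the free block that has been erased the
# fewest times, keeping wear roughly uniform across the device.
class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks     # erases per physical block
        self.free_blocks = set(range(num_blocks))

    def allocate(self):
        # Pick the least-worn free block for the next write.
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def erase(self, block):
        # Erasing wears the block; return it to the free pool.
        self.erase_counts[block] += 1
        self.free_blocks.add(block)

wl = WearLeveler(4)
first = wl.allocate()    # all counts equal, any block qualifies
wl.erase(first)          # its erase count is now 1
second = wl.allocate()   # a different, less-worn block is chosen
```

A production controller layers hot/cold data separation and static wear leveling on top of this, but the allocation principle is the same.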

    A large number of unique IP blocks must be integrated to deliver a competitive SSD controller SoC. Here is a short list of commonly used IP: ARM R5/7 cores, PCIe, DDR3/4, NVMe, DMA, RAM, SRAM, RAID, NAND, GPIO, ECC, and others. Operating these IP blocks in parallel creates a significant design problem for IP interconnection and internal data movement.

    [Image: SSD controller SoC block diagram (ssd-soc-blocks-min.jpg)]
    Designing the interconnections between all the functional units has become one of the most critical aspects of these designs. With larger IP blocks, wider buses, and growing interconnect demand carried over wires that do not scale with transistor sizes, the design effort and chip resources consumed by on-chip interconnect are becoming a heavy burden for design teams.

    Buses and crossbars are running out of steam in these newer designs. For example, an AMBA 4 AXI interface requires 272 wires for 64 bits of data, and 408 wires for 128 bits. The other problem is that many of these wires sit idle much of the time: a four-cycle burst write transaction uses the 56-wire write address bus in only 25% of the cycles.
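The arithmetic behind those figures is easy to check: the jump from 272 to 408 wires is exactly the extra read-data and write-data bits plus the wider write strobe. A quick sketch anchored to the article's own numbers; the growth formula is an assumption, and real counts also depend on ID width and optional signals.

```python
# Rough estimate of AMBA 4 AXI interface width versus data-path width,
# anchored to the article's figure of 272 wires at 64 data bits.
# Assumption: widening the data path adds bits on the read-data and
# write-data channels plus one write-strobe bit per extra byte.
def axi_wires(data_bits, base_bits=64, base_wires=272):
    extra = data_bits - base_bits
    return base_wires + 2 * extra + extra // 8   # R data + W data + WSTRB

print(axi_wires(64))    # 272
print(axi_wires(128))   # 408

# Utilization of the write address channel during a 4-beat burst:
# one address cycle serves four data beats.
beats = 4
print(f"AW channel utilization: {1 / beats:.0%}")   # 25%
```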

    Networks-on-Chip (NoCs) dramatically reduce the difficulties that large bus structures would create. Arteris, a leading provider of NoC IP, has just published a white paper on using its FlexNoC to implement enterprise SSD controllers. The biggest advantages come from simultaneously reducing the widths of the block interconnections and tailoring them to the predicted traffic. It is well understood that the earlier in the design process an issue is addressed, the easier it is to deal with its downstream effects. Instead of waiting for the place-and-route stage to grapple with interconnect across the chip, FlexNoC planning and implementation start at RTL, making the process more efficient.

    FlexNoC works by converting a wide variety of IP protocols at their sources into protocol-agnostic serialized packet data and routing it to its target, where it is reassembled upon delivery. The NoC requires RTL elements to operate, but the overall area of the NoC IP and interconnect wires is significantly less than that of equivalent bus or crossbar structures. Because NoC data can be pipelined and buffered, it is actually faster than high-drive-strength buses. The NoC RTL can be synthesized and placed so that it conforms to predefined routing channels.
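The packetization idea can be illustrated in a few lines: a wide transaction is split into narrow flits at the source, carried over a small set of wires, and reassembled at the target. This is a conceptual sketch only; FlexNoC's actual packet format and flit widths are Arteris's own.

```python
# Conceptual sketch of NoC serialization: split a wide bus word into
# narrow flits, carry them over a narrow link, reassemble on delivery.
def to_flits(word, width=256, flit=32):
    mask = (1 << flit) - 1
    return [(word >> (i * flit)) & mask for i in range(width // flit)]

def from_flits(flits, flit=32):
    word = 0
    for i, f in enumerate(flits):
        word |= f << (i * flit)
    return word

payload = (1 << 255) | 0xDEADBEEF            # an arbitrary 256-bit value
flits = to_flits(payload)                    # eight 32-bit flits
assert len(flits) == 8                       # 32 link wires, not 256
assert from_flits(flits) == payload          # lossless reassembly
```

The trade is wires for cycles: eight transfers over a 32-bit link replace one transfer over a 256-bit bus, which is why pipelining and buffering matter to keep throughput up.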

    The overall effect is less routing congestion, leading to a smoother back-end implementation flow. The resulting design benefits from lower latency and more robust data integrity thanks to FlexNoC's built-in error correction.

    To gain a deeper understanding of the benefits of using FlexNoC, I suggest reviewing the Arteris white paper located here. A number of additional benefits and implementation details are covered in this and the other available Arteris downloads.

    More articles from Tom...