SSD Storage Chips: Basic Interconnect Considerations
by Majeed Ahmad on 07-31-2015 at 4:00 pm

The joint development of 3D XPoint memory technology by Intel and Micron has once more turned the spotlight on data centers and on chips for solid-state drives (SSDs). The two semiconductor industry giants claim that 3D XPoint memory is 1,000 times faster than NAND Flash, the underlying storage medium in SSDs. Such developments underscore the tectonic shift in the IT industry from HDDs to SSDs.

The switch from rotating storage to solid-state storage is inevitable, and controller chips for SSDs are far more complex to develop than controller chips for HDDs. On top of that, design intricacies related to Flash memory add to the complexity of an SSD controller chip. According to Kurt Shuler, vice president of marketing at Arteris Inc., Flash is like a helicopter: it destroys itself as it operates.

That’s especially true for an enterprise SSD controller chip, in which a “mother” controller talks to multiple “daughter” controllers, each attached to specific banks of Flash. Unlike a consumer SSD, which maxes out at around 1 TB, enterprise storage devices built on such controller chips scale to 10 TB and beyond.


SSD marks an inflection point as data centers move away from HDD storage

Because enterprise SSD solutions need to be highly scalable, SSD controller chips use a cascading-slices approach to accommodate more storage. That leads to more IP block functions than even the most complicated application processor designs use. Not surprisingly, the huge chip size and design complexity are major considerations for chipmakers offering SSD controller system-on-chip (SoC) solutions.
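To make the scaling idea concrete, here is a minimal sketch, in C, of why capacity grows by adding slices rather than by enlarging a single controller. All names and sizes below (DAUGHTERS_PER_MOTHER, BANKS_PER_DAUGHTER, GIB_PER_BANK) are invented for illustration and do not describe any actual product.

    /* Illustrative sketch only: capacity of a "mother/daughter" cascaded
     * controller topology. All fan-outs and bank sizes are assumptions. */
    #include <stdio.h>

    #define DAUGHTERS_PER_MOTHER 8    /* assumed fan-out of the mother controller */
    #define BANKS_PER_DAUGHTER   16   /* assumed Flash banks per daughter slice   */
    #define GIB_PER_BANK         128  /* assumed capacity of one Flash bank       */

    int main(void)
    {
        /* The mother controller only sees daughters, so capacity scales
         * by adding slices, not by growing one monolithic controller.   */
        unsigned long long gib = (unsigned long long)DAUGHTERS_PER_MOTHER
                               * BANKS_PER_DAUGHTER * GIB_PER_BANK;
        printf("%d slices -> %llu GiB (about %llu TiB)\n",
               DAUGHTERS_PER_MOTHER, gib, gib / 1024);
        return 0;
    }

With these assumed numbers, eight slices already reach about 16 TiB, consistent with the “10 TB and beyond” scale mentioned above.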

SSD Design Challenges

Arteris’ Shuler says that data protection is the number one design goal for storage chips. Data protection ensures that there is no data loss, by detecting errors and correcting them. It is extremely important because enterprise SSD companies know they have to offer the same reliability as HDDs, even though they are using an inherently self-eating technology like Flash.

“Data protection can add data bits and logic for error-correcting code (ECC) and hardware duplication,” Shuler said. “In fact, the ECC engine is often the largest IP block on the die.” It is these interconnect reliability features that help make Flash viable in a market that has so far been ruled by HDDs.
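For a feel of what an ECC engine does, here is a minimal sketch of Hamming(7,4), the textbook single-error-correcting code. Production SSD controllers use far stronger BCH or LDPC codes over much larger blocks; nothing below is Arteris’ or any vendor’s implementation.

    /* Minimal single-error-correcting Hamming(7,4) sketch: 4 data bits
     * are protected by 3 parity bits; any one flipped bit is repaired. */
    #include <stdio.h>
    #include <stdint.h>

    /* Encode 4 data bits into a 7-bit codeword (bit i = position i+1). */
    static uint8_t hamming74_encode(uint8_t d)
    {
        uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;   /* covers codeword positions 1,3,5,7 */
        uint8_t p2 = d1 ^ d3 ^ d4;   /* covers codeword positions 2,3,6,7 */
        uint8_t p3 = d2 ^ d3 ^ d4;   /* covers codeword positions 4,5,6,7 */
        return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
               (d2 << 4) | (d3 << 5) | (d4 << 6);
    }

    /* Correct up to one flipped bit and return the 4 data bits. */
    static uint8_t hamming74_decode(uint8_t c)
    {
        uint8_t s1 = ((c >> 0) ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;
        uint8_t s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;
        uint8_t s3 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;
        uint8_t syndrome = s1 | (s2 << 1) | (s3 << 2);   /* 0 = no error */
        if (syndrome)
            c ^= 1 << (syndrome - 1);        /* flip the faulty position */
        return ((c >> 2) & 1) | (((c >> 4) & 1) << 1) |
               (((c >> 5) & 1) << 2) | (((c >> 6) & 1) << 3);
    }

    int main(void)
    {
        uint8_t word = 0xB;                     /* 4-bit payload: 1011 */
        uint8_t code = hamming74_encode(word);
        code ^= 1 << 5;                         /* simulate one Flash bit flip */
        printf("recovered 0x%X (sent 0x%X)\n", hamming74_decode(code), word);
        return 0;
    }

The size disparity Shuler mentions follows from scale: correcting multi-bit errors across whole Flash pages takes vastly more logic than this toy 7-bit code.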


The high-level design of an enterprise SSD chip

Second, storage chips require extremely complex power management and dynamic voltage and frequency scaling (DVFS). Power consumption is the biggest concern in data centers, where low-power operation matters for both the compute and the air-conditioning side: around 33 percent of data center power goes to cooling.
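A minimal sketch of the DVFS idea follows, assuming a firmware-managed table of operating points (the table values are invented for illustration): pick the lowest voltage/frequency pair that still meets the current load. Because dynamic power scales roughly with f·V², dropping both together pays off disproportionately.

    /* Illustrative DVFS sketch: choose the cheapest operating point
     * whose frequency still covers the demanded load. Table values
     * are assumptions, not any product's characterization data.      */
    #include <stdio.h>

    struct opp { unsigned mhz; unsigned millivolts; };

    /* Assumed operating-performance-point table, slowest first. */
    static const struct opp opps[] = {
        { 200,  800 }, { 400, 900 }, { 800, 1000 }, { 1200, 1100 },
    };

    static const struct opp *pick_opp(unsigned demand_mhz)
    {
        for (unsigned i = 0; i < sizeof opps / sizeof opps[0]; i++)
            if (opps[i].mhz >= demand_mhz)
                return &opps[i];               /* first point that suffices */
        return &opps[sizeof opps / sizeof opps[0] - 1];  /* saturate at max */
    }

    int main(void)
    {
        const struct opp *p = pick_opp(600);   /* e.g. a moderate I/O load */
        printf("run at %u MHz / %u mV\n", p->mhz, p->millivolts);
        return 0;
    }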

Third, storage chips have different quality-of-service (QoS) requirements. SSD chips are huge, and they call for bandwidth-balancing QoS schemes rather than the DRAM-centric, end-to-end QoS schemes used in mobile application processors. On top of that, there are numerous clock domain crossing, clock propagation and clock balancing issues.
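To illustrate what bandwidth balancing means in practice, here is a minimal sketch of a smooth weighted round-robin arbiter, a classic way to grant interconnect slots in proportion to configured weights. The requester names and weights are assumptions for illustration, not how FlexNoC implements QoS.

    /* Smooth weighted round-robin: each requester earns credits equal to
     * its weight every slot; the richest requester wins and pays back the
     * total weight, yielding a proportional, starvation-free schedule.   */
    #include <stdio.h>

    struct requester { const char *name; int weight; int credits; };

    static struct requester reqs[] = {
        { "host-DMA",   4, 0 },   /* assumed: host traffic gets most slots */
        { "flash-ch-0", 2, 0 },
        { "scrubber",   1, 0 },   /* background ECC scrub gets the least   */
    };
    #define NREQS (sizeof reqs / sizeof reqs[0])

    static int total_weight(void)
    {
        int t = 0;
        for (unsigned i = 0; i < NREQS; i++)
            t += reqs[i].weight;
        return t;
    }

    static struct requester *grant_slot(void)
    {
        struct requester *best = &reqs[0];
        for (unsigned i = 0; i < NREQS; i++) {
            reqs[i].credits += reqs[i].weight;
            if (reqs[i].credits > best->credits)
                best = &reqs[i];
        }
        best->credits -= total_weight();       /* winner pays for the slot */
        return best;
    }

    int main(void)
    {
        for (int slot = 0; slot < 7; slot++)   /* one full cycle: 4 + 2 + 1 */
            printf("slot %d -> %s\n", slot, grant_slot()->name);
        return 0;
    }

Over any window of seven slots, host-DMA wins four, flash-ch-0 two and the scrubber one, so bandwidth is balanced without starving anyone.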

Interconnect Spaghetti in Storage Chips

Enterprise SSD chip architectures are even more complex, power-wise, than, for instance, the TI CC26xx IoT chip design that the Dallas, Texas–based semiconductor supplier recently announced. There are more independent engines on an enterprise SSD controller than in a smartphone application processor, and the connections between them can overwhelm the layout team with a tangled mess of metal-line spaghetti.

Here, a specialized interconnect technology like FlexNoC enables fewer wires, less logic and a distributed interconnect design. That helps chip designers avoid the routing congestion, and the resulting timing closure problems, that the enterprise SSD industry struggles with when using older interconnect technologies like ARM’s NIC-400.

Let’s take the core design issue of data protection as an example of how effective use of interconnect technology can drastically enhance the scalability of enterprise SSD systems. Storage chips are so big, and their interconnects so complex, that the interconnects themselves have to be protected. An SoC interconnect implements ECC, parity and hardware duplication to protect the data paths in a storage chip.
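One way to picture in-flight data-path protection is per-word parity: the sending side of a link tags each word with a check bit, and the receiving side recomputes it and flags corruption. A minimal sketch follows, with an invented flit format; real interconnect protection (and ECC, which also corrects) is more elaborate.

    /* Illustrative sketch of parity protection on an interconnect link:
     * detect (not correct) a single-bit upset on a transported word.    */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Even parity over 32 data bits: XOR-fold the word down to one bit. */
    static uint8_t parity32(uint32_t w)
    {
        w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
        return (uint8_t)(w & 1);
    }

    struct flit { uint32_t data; uint8_t parity; };  /* word + check bit */

    static struct flit send_word(uint32_t data)
    {
        struct flit f = { data, parity32(data) };
        return f;
    }

    static bool receive_word(struct flit f, uint32_t *out)
    {
        if (parity32(f.data) != f.parity)
            return false;                 /* corruption detected in flight */
        *out = f.data;
        return true;
    }

    int main(void)
    {
        struct flit f = send_word(0xCAFEF00Du);
        f.data ^= 1u << 9;                /* simulate a bit upset on the wire */
        uint32_t word;
        printf(receive_word(f, &word) ? "ok\n" : "parity error: retry or raise IRQ\n");
        return 0;
    }

Hardware duplication takes the complementary approach: compute or transport the value twice and compare, trading area for detection of logic faults that parity on wires cannot see.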

Shuler claims that Arteris’ FlexNoC Resilience Package creates physical awareness earlier in the chip design process and facilitates data protection tasks such as ECC, parity, hardware duplication and BIST. “The FlexNoC interconnect IP automatically ensures data safety when dealing with asynchronous clock domains and power domains.”


Clock tree power and unit-level clock gating in a storage SoC interconnect

Next, let’s look at how the low-latency requirement is balanced against the extremely high bandwidth demanded in data centers, another example of the critical importance of the interconnect in large, powerful storage chips. Ultra-low latency is especially crucial for communication to and from the low-latency peripheral port (LLPP) of the ARM Cortex-R5.

Implementing this communication in the ARM world is challenging because the ARM AMBA AXI protocol does not allow a single burst to cross a 4 KB address boundary. Enterprise SSD chips, on the other hand, require huge block transfers in their logical block addressing (LBA) schemes. “The FlexNoC interconnect IP bridges the gap between the ARM architecture and enterprise SSD controller architecture,” Shuler said.
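Concretely, the 4 KB rule means a large LBA block transfer has to be chopped into boundary-legal bursts. A minimal sketch of that splitting logic, with invented addresses and helper names, might look like this:

    /* Illustrative sketch: split a large transfer into AXI-legal bursts
     * that never cross a 4 KB address boundary. Sizes are assumptions.  */
    #include <stdio.h>
    #include <stdint.h>

    #define AXI_BOUNDARY 4096u   /* an AXI burst must not cross a 4 KB line */

    /* Stand-in for issuing one burst; here we just print what would go out. */
    static void issue_burst(uint64_t addr, uint32_t bytes)
    {
        printf("burst: addr=0x%08llx len=%u bytes\n",
               (unsigned long long)addr, bytes);
    }

    /* Split an arbitrarily large transfer into boundary-legal bursts. */
    static void dma_transfer(uint64_t addr, uint64_t bytes)
    {
        while (bytes) {
            /* Distance to the next 4 KB boundary caps this burst's length. */
            uint32_t room  = AXI_BOUNDARY - (uint32_t)(addr % AXI_BOUNDARY);
            uint32_t chunk = bytes < room ? (uint32_t)bytes : room;
            issue_burst(addr, chunk);
            addr  += chunk;
            bytes -= chunk;
        }
    }

    int main(void)
    {
        /* e.g. a 64 KiB LBA block starting 1 KiB into a 4 KB line: a 3 KiB
         * head burst, fifteen aligned 4 KiB bursts, then a 1 KiB tail.    */
        dma_transfer(0x0000F400u, 64 * 1024);
        return 0;
    }

Bridging IP that performs this splitting (and reassembly) in hardware is what lets a burst-limited ARM fabric carry the huge block transfers an enterprise SSD controller needs.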

Also read:

Is Interconnect Ready for Post-Mobile SoCs?

Arteris Flexes Networking Muscle in TI’s Multi-Standard IoT Chip

Arteris Sees Computational Consolidation Amid ADAS Gold Rush
