2.5D supply chain takes HBM over the wall
by Don Dingee on 04-11-2016 at 4:00 pm

SoC designers have hit the memory wall head-on. Although most SoCs address a relatively small memory capacity compared with PC and server chips, memory bandwidth and power consumption are struggling to keep up with processing and content expectations. A recent webinar looks at HBM as a possible solution.

Stacked die came to prominence early in SoC design. For example, Apple and Samsung implemented SDRAM for the S5L8900 processor as package-on-package in the original iPhone design. Reducing trace length and capacitance simplified driver requirements, keeping power consumption low while delivering significantly improved bandwidth.

HBM uses a similar concept, stacking unpackaged DRAM die with a through-silicon via (TSV) interconnect. Since the initial success of 2.5D HBM technology in the AMD Fiji design, other opportunities to enhance memory subsystem performance with HBM have drawn rapid attention. A team of HBM supply chain vendors has aligned around a formula for SoC solutions.

One of the pluses of HBM technology is its multi-sourced memory supply; however, DRAM is just one part of the entire solution. SK Hynix has teamed with four partners to bring fast, validated HBM solutions together: Northwest Logic for an HBM memory controller, Amkor Technology for the 2.5D interposer and assembly, Avery Design Systems for HBM memory models and testbenches, and eSilicon for an HBM PHY and overall ASIC design.


Putting these vendors together in one recent webinar resulted in a fast-paced introduction to many of the facets involved in an HBM solution. Some quick highlights:

Kevin Tran of SK Hynix says HBM is effectively another level of cache, intended for the relatively small capacities typical of mobile SoCs, GPUs, and HPC nodes. HBM1 provides 128 GB/sec at 3.3W, 60% less power than GDDR5 (the arithmetic behind that bandwidth figure is sketched after these highlights).

Paul Silvestri of Amkor lays out the challenges of 2.5D interposers and gives a primer on microbumping. Amkor has tuned its methodology to achieve better than 98% yield on its 300mm K4 line in Korea.

Bill Isaacson of eSilicon describes the MoZAIC (modular Z-axis IC) program that brings everything together. eSilicon has also created high-performance HBM2 PHY IP supporting eight channels, with an IEEE 1500 wrapper for test and microbump repair (a fascinating concept if you haven't seen it).

Brian Dallenbach of Northwest Logic looks at their HBM memory controller IP. The base controller implements pseudo channels, look-ahead and pre-charge, bank refreshes, and more. Add-on cores provide options such as multiple ports, ECC, and advanced test, which help customize configurations (an illustrative configuration sketch follows these highlights).

Chris Browy of Avery Design Systems describes their sophisticated HBM memory models and verification IP, including an HBM protocol analyzer that runs some 225 checks. Avery can gather and characterize over 60 performance metrics in its testbench; for example, it tackles 38 command combinations for latency alone.
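
For a sense of where the 128 GB/sec figure in the SK Hynix highlight comes from, here is a back-of-the-envelope sketch. It assumes the JEDEC-defined 1024-bit interface per HBM stack (eight 128-bit channels) and nominal per-pin data rates of 1 Gbps for HBM1 and 2 Gbps for HBM2; those interface details are general HBM facts, not figures taken from the webinar itself.

    # Back-of-the-envelope HBM bandwidth per stack (nominal figures assumed).
    BITS_PER_STACK = 8 * 128  # eight 128-bit channels = 1024-bit interface

    def stack_bandwidth_gbytes(per_pin_gbps: float) -> float:
        """Peak bandwidth in GB/s for one HBM stack at a given per-pin data rate."""
        return BITS_PER_STACK * per_pin_gbps / 8  # bits per second -> bytes per second

    print(stack_bandwidth_gbytes(1.0))  # HBM1 at 1 Gbps/pin -> 128.0 GB/s
    print(stack_bandwidth_gbytes(2.0))  # HBM2 at 2 Gbps/pin -> 256.0 GB/s

From the quoted figures, 3.3W at 128 GB/sec also works out to roughly 26 picojoules per byte moved.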
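
To make the base-plus-add-on-cores idea in the Northwest Logic highlight concrete, here is a purely illustrative configuration sketch. The field names are hypothetical and do not describe Northwest Logic's actual IP interface; they are only a way to picture how optional cores tailor a controller build.

    from dataclasses import dataclass

    # Hypothetical illustration of a base HBM controller plus optional add-on cores.
    # Field names are invented for this sketch; they do not describe any vendor's API.
    @dataclass
    class HbmControllerConfig:
        # Base controller features (always present)
        pseudo_channels: int = 2          # pseudo channels per physical channel
        lookahead_precharge: bool = True  # schedule precharge ahead of demand
        per_bank_refresh: bool = True     # refresh banks individually to hide overhead

        # Optional add-on cores that customize a configuration
        host_ports: int = 1               # multi-port add-on raises this above 1
        ecc_enabled: bool = False         # ECC add-on core
        advanced_test: bool = False       # test/diagnostic add-on core

    # Example: a multi-port configuration with ECC for an HPC-style SoC
    hpc_cfg = HbmControllerConfig(host_ports=4, ecc_enabled=True)
    print(hpc_cfg)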

This webinar was a serious eye-opener for me on how much goes into the design of a high-performance memory subsystem, and how an ecosystem has rallied to provide all the capability required to deliver a 2.5D HBM solution. The level of specialization in controller, PHY, and verification IP is noteworthy. This isn't just theory; Northwest Logic indicated they currently have seven customer designs underway with their controller IP.

A replay of the archived webinar and a white paper are available for download on the eSilicon website (both in the open, which is refreshing since the material is very educational):

Start Your HBM/2.5D Design Today

eSilicon has delivered HBM in 28nm HPC, is working on 14nm LPP for delivery shortly, and expects 16nm FF+ by year end. The technology is just getting started with 2.5D implementations; when companies realize the full potential of reliable vertical stacking of HBM die in 3D, the memory footprint will shrink dramatically.

The memory wall is coming down very quickly with HBM.
