
  • How HBM Will Change SOC Design

    High Bandwidth Memory (HBM) promises to do for electronic product design what high-rise buildings did for cities. Up until now, electronic circuits have suffered from the equivalent of suburban sprawl. HBM is a radical transformation of memory architecture that will have huge ripple effects on how SOC-based electronics are designed and assembled.

    Instead of laying memory out horizontally, HBM vertically sandwiches memory silicon to create stacks of memory chips that are connected using through-silicon vias (TSVs). The JEDEC JESD235 standard for HBM was adopted in October 2013. The goals of the standard were to increase bandwidth, reduce area, lower energy, and increase functionality.

    [Image: HBM 2.5D chip]


    We've been hearing about stacking die and using TSVs for a while, but they were primarily the domain of large FPGA or GPU companies. 2015 was a significant year for this technology, as it started to be incorporated much more readily into new designs. Still, the main application areas are high performance computing, graphics, and networking. Nevertheless, because of its power and area advantages, HBM will also be used in applications like laptops and mobile. It's not just for the data center.


    To put the significance of HBM in perspective, consider that a single stack of HBM offers bandwidth of 128-256 gigabytes per second. One stack is approximately 5mm by 7mm. More than one stack can be used in a single package, where each stack interfaces directly with an SOC. Typical arrangements use an interposer to combine multiple HBM stacks, each with its own controller, and a large SOC, such as a GPU, into a single package. A rough sizing sketch follows below.
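
    As a quick sanity check, here is a minimal Python sketch that tallies aggregate bandwidth and memory silicon footprint for a few stack counts sharing one interposer. It uses only the round per-stack numbers quoted above (128-256 GB/s, roughly 5mm x 7mm); these are illustrative figures from the article, not datasheet values.

        # Rough sizing sketch using the per-stack figures quoted above:
        # 128-256 GB/s of bandwidth and a ~5 mm x 7 mm footprint per stack.
        # These are the article's round numbers, not datasheet values.

        STACK_BW_GBPS = (128, 256)   # HBM1 vs. HBM2 per-stack bandwidth, GB/s
        STACK_AREA_MM2 = 5 * 7       # approximate footprint per stack, mm^2

        def package_bandwidth(num_stacks, per_stack_gbps):
            """Aggregate bandwidth when several stacks share one interposer."""
            return num_stacks * per_stack_gbps

        for n in (1, 2, 4):
            low = package_bandwidth(n, STACK_BW_GBPS[0])
            high = package_bandwidth(n, STACK_BW_GBPS[1])
            print(f"{n} stack(s): {low}-{high} GB/s, "
                  f"~{n * STACK_AREA_MM2} mm^2 of memory silicon")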

    [Image: HBM 2.5D stack-up]

    In HBM, each die in a stack has two fully independent memory channels of 1 to 32 gigabits each. A stack consists of four die. When four die are stacked with, for example, 4 gigabits per channel, they provide 4 gigabytes (8 channels x 4 Gb) of storage. Each channel is completely independent and brings all of its signals to the bottom of the stack through TSVs. This amounts to 193 signals per channel, of which 128 are data.
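
    The capacity and signal-count arithmetic above can be restated in a short Python sketch. It simply re-derives the article's example (four die, two channels per die, 4 Gb per channel, 193 signals per channel); the numbers are taken directly from the text, not from the JESD235 specification itself.

        # Back-of-the-envelope stack capacity from the per-channel numbers above.
        # Values (4 die, 2 channels/die, 4 Gb/channel) are the example in the text.

        DIE_PER_STACK = 4
        CHANNELS_PER_DIE = 2
        GBIT_PER_CHANNEL = 4          # channel density can range from 1 to 32 Gb

        channels = DIE_PER_STACK * CHANNELS_PER_DIE       # 8 channels
        capacity_gbit = channels * GBIT_PER_CHANNEL       # 32 Gb
        capacity_gbyte = capacity_gbit / 8                # 4 GB

        # Per-channel signal count quoted in the text: 193 signals, 128 of them data.
        SIGNALS_PER_CHANNEL = 193
        total_tsv_signals = channels * SIGNALS_PER_CHANNEL   # routed through TSVs

        print(capacity_gbyte, total_tsv_signals)   # -> 4.0 GB, 1544 signals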

    For an HBM stack offering 128GB/sec, the controller need only be clocked at 500MHz (eight 128-bit channels transferring data on both clock edges), making timing closure relatively straightforward. Power savings are around 60% for HBM1 compared to GDDR5. Additionally, GDDR5 requires high-drive-strength PCB buses that consume board real estate and add design complexity. If 4 HBM2 stacks (5mmx7mm each) are used in a design, they would offer 1TB/sec of bandwidth. Reaching the same bandwidth with DDR4 would require 40 modules. The HBM stacks above could easily be added to a single 50mm-square SiP.
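
    To show where those figures come from, the sketch below works through the 500MHz bandwidth math and the DDR4 comparison. The HBM numbers are the ones used in this article (8 channels, 128 data bits, double data rate at a 500MHz clock); the DDR4 per-module figure assumes a 64-bit DDR4-3200 module at about 25.6 GB/s, which is my assumption rather than something stated in the article.

        # Why a 500 MHz controller clock is enough for 128 GB/s per stack:
        # 8 channels x 128 data bits = 1024 bits, toggling at double data rate.
        # HBM numbers are from the article; the DDR4 figure assumes a 64-bit
        # DDR4-3200 module (~25.6 GB/s), which is an assumption on my part.

        CHANNELS = 8
        DATA_BITS = 128
        CLOCK_MHZ = 500
        DDR_FACTOR = 2                        # two transfers per clock

        bits_per_sec = CHANNELS * DATA_BITS * CLOCK_MHZ * 1e6 * DDR_FACTOR
        stack_gbps = bits_per_sec / 8 / 1e9   # -> 128 GB/s per HBM1 stack

        hbm2_stack_gbps = 256
        four_stack_gbps = 4 * hbm2_stack_gbps         # -> 1024 GB/s, i.e. ~1 TB/s

        ddr4_module_gbps = 25.6                       # assumed DDR4-3200, 64-bit
        modules_needed = four_stack_gbps / ddr4_module_gbps   # -> 40 modules

        print(stack_gbps, four_stack_gbps, modules_needed)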

    eSilicon has been working with HBM since 2011. In 2014 they started with limited volume using HBM1 at 28nm. In 2015 they taped out 7 test chips. They are continuing support for HBM2 with a 28nm test chip that taped out in December 2015, but their HBM capabilities are rapidly moving to 14nm and 16nm. Part of eSilicon's business model is to provide one-stop shopping for HBM-based designs by bringing together design, test, and manufacturing to deliver a final yielded product.

    [Image: eSilicon HBM test chip, 2015]


    To help designers understand the benefits and design process for HBM, eSilicon recently hosted a seminar that brought together SK Hynix, Amkor Technology, Northwest Logic, and Avery to speak on each step of delivering an HBM-based product. For those who were not able to attend the seminar in Mountain View, eSilicon is hosting a webinar broadcast of the event, scheduled for March 29, 2016 at 8AM and again at 6PM PDT. A lot of useful information was presented regarding the supply chain and design considerations for the memory die, PHY layer, HBM controller, and 2.5D design choices. The seminar is well worth watching if you care about higher bandwidth, lower power, and smaller area, among other things.