High Performance Ecosystem for 14nm-FinFET ASICs with 2.5D Integrated HBM2 Memory
by Mitch Heins on 02-07-2018 at 10:00 am

High Bandwidth Memory (HBM) systems have been successfully used for some time now in the network switching and high-performance computing (HPC) spaces. Now, adding fuel to the HBM fire, there is another market that shares similar system requirements with HPC: Artificial Intelligence (AI), especially AI systems doing real-time image recognition. I traded notes with Mike Gianfagna at eSilicon to get more information, and he pointed me to a webinar that eSilicon had recently presented (link below) in conjunction with Samsung, Rambus, Northwest Logic and the ASE Group.

I reviewed the webinar recording to which Mike had referred me and learned a great deal more about HBM-based systems. According to Lisa Minwell of eSilicon, both networking and AI applications typically have large ASIC die, greater than 400mm², containing high-performance cores, up to 1 gigabit of configurable multi-port embedded memories, and high-bandwidth wide-word interfaces to HBM2 stacked memories, all integrated in a 2.5D system-in-package (SiP).
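
To put the "wide-word" part in perspective, here is a minimal back-of-the-envelope sketch of the peak bandwidth of a single HBM2 stack. The figures used (a 1024-bit interface at up to 2.0 Gb/s per pin) are the public JEDEC HBM2 numbers, not data from the webinar, and the function name and speed grade are illustrative; actual SiP bandwidth depends on the configuration chosen.

```python
# Back-of-the-envelope peak bandwidth for one HBM2 stack.
# Assumptions (public JEDEC HBM2 figures, not from the webinar):
#   - 1024-bit-wide interface per stack (8 channels x 128 bits)
#   - up to 2.0 Gb/s per data pin

def peak_bandwidth_gb_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width * per-pin rate) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8.0

# One HBM2 stack at the top speed grade:
print(peak_bandwidth_gb_per_s(1024, 2.0))  # 256.0 GB/s

# A 1024-bit bus is only practical over a silicon interposer,
# which is one reason these systems are built as 2.5D SiPs.
```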

These SiPs use cutting-edge technology and as a result are complex and require an ecosystem of partners to ensure successful design, implementation and test. And that, as it turns out, was exactly what the webinar was about.

The webinar had a ridiculously long title, something like "FinFET ASICs for Networking, Data Center, AI, and 5G using 14nm 2.5D HBM2 and SERDES". I suspect that was more for Google's search engine, so I include it here as well. True to form though, the webinar did in fact cover all those topics and in pretty good depth, much more than I can summarize here. As mentioned, the webinar included panelists from the companies listed above and covered the following areas:

  • HBM2 memories – Samsung Electronics
  • 14nm FinFET silicon – Samsung Foundry
  • 2.5D packaging, interposer assembly/test and micro-bump road maps – the ASE Group
  • ASIC design services, configurable memory compilers and PHY IP – eSilicon
  • High-speed SERDES IP – Rambus
  • HBM2 memory controller IP – Northwest Logic

Each company gave a brief overview of their offerings along with road map data for their part in the overall solution. There was a ton of excellent data in the webinar that simply would not fit in this space. If you are interested in road map data for any of these areas, please make sure to follow the link below to watch the webinar. The recording is indexed by subject matter so that you can quickly go to the section of your interest.

One thing all the members made sure to point out was that this wasn’t “futures” work. The work they were doing with HBM2 was being used in real products with significant performance, power and area improvements for their customers. Note the ~2.5X improvement in overall system performance gained over DDR architectures when using HBM2. HBM3 (generation 3), due out sometime toward the end of the decade, is supposed to have 2X more performance than HBM2.
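
For context on where a gain like that can come from, here is a rough comparison of peak per-interface bandwidth, using public figures for a 64-bit DDR4-3200 channel versus a 1024-bit HBM2 stack. These numbers are my own assumptions rather than data from the webinar, and the ~2.5X quoted above is a system-level figure, so treat this sketch as context, not a derivation of that number.

```python
# Rough peak-bandwidth comparison, DDR vs. HBM2, using public figures
# (assumptions, not data from the webinar):
#   DDR4-3200: 64-bit channel at 3.2 Gb/s per pin
#   HBM2:      1024-bit stack at 2.0 Gb/s per pin

ddr4_gbs = 64 * 3.2 / 8.0      # 25.6 GB/s per DDR4-3200 channel
hbm2_gbs = 1024 * 2.0 / 8.0    # 256.0 GB/s per HBM2 stack

print(f"DDR4-3200 channel: {ddr4_gbs:5.1f} GB/s")
print(f"HBM2 stack:        {hbm2_gbs:5.1f} GB/s")
print(f"Per-interface ratio: {hbm2_gbs / ddr4_gbs:.0f}x")

# If HBM3 delivers the promised 2X over HBM2, a single stack
# would land around 512 GB/s by the same arithmetic.
```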

One of the interesting parts of this type of design is that you are dealing with multiple components from different companies. The tricky part, of course, is where to look when things don’t work as planned. This is where the ecosystem partners were all quick to jump in and assure their listeners that they were all there to work out any issues that come up. And… given their previous history of working with each other, the message was clear that they had figured out how to do this in an efficient manner.

The other thing that came across from the webinar is that none of the systems that were discussed were exactly alike. In fact, just the opposite was true. While they all shared common characteristics, each design had been customized in some way, and it was evident that each of the ecosystem partners was prepared to help their customer through this customization process, whether that meant changing the amount or speed of the HBM2 stack, customizing the memory mapping for the stack, creating unique multi-port embedded memories for the ASIC, customizing a set of SERDES, or creating a customized interposer.

And that is what makes their joint solution so compelling. It is the ability to customize a design using production-proven 14nm FinFET technology with silicon-verified IP blocks that have been validated against each other. That’s hard to do when all the pieces are coming from different places. If you are doing networking, HPC or AI applications you may want to check out this webinar at the link below!

See Also: (in the order in which they presented)
Webinar Link
Samsung HBM2 website
Samsung Foundry website
the ASE Group website
eSilicon website
Rambus website
Northwest Logic website
