
  • Deep Learning and Cloud Computing Make 7nm Real

    The challenges of 7nm are well documented. Lithography artifacts drive exploding design rule complexity, mask costs and cycle times. Noise and crosstalk get harder to deal with, as does timing closure. The types of applications that demand 7nm performance often introduce HBM memory stacks and 2.5D packaging, which creates an additional long list of challenges. So, who is using this difficult, expensive technology and why?

    A lot of the action centers around cloud data center build-out and artificial intelligence (AI) applications – especially the deep learning aspect of AI. TSMC is teaming with ARM and Cadence to build advanced data center chips. Overall, TSMC has an aggressive stance regarding 7nm deployment. GLOBALFOUNDRIES has announced 7nm support for, among other things, data center and machine learning applications; details here. AMD launched a 7nm GPU with dedicated AI circuitry. Intel plans to make 7nm chips this year as well. If you’re wondering what Intel’s take is on AI and deep learning, you can find out here. I could keep going, but you get the picture.

    It appears that a new, highly connected and automated world is being enabled, in part, by 7nm technology. There are two drivers at play that are quite literally changing our world. Many will cite the substantial cloud computing build-out as one driver. Thanks to the massive, global footprint of companies like Amazon, Microsoft and Google, compute capability is starting to look like a power utility: if you need more, you just pay more per month and it’s instantly available.

    The build-out is NOT the driver, however. It is rather the result of the REAL driver – massive data availability. Thanks to a new, highly connected, always-on environment, we are generating data at an unprecedented rate. Two years ago, Forbes proclaimed that “more data has been created in the past two years than in the entire previous history of the human race.” There are other mind-blowing facts to ponder; you can check them out here. So, it’s the demand to process all this data that triggers cloud build-out; that’s the core driver.

    The second driver is really the result of the first – how to make sense of all this data. Neural nets, the foundation for deep learning, have been around since the 1950s. We finally have the data to analyze, but there’s a catch: running these algorithms on traditional computers isn’t practical; it’s WAY too slow. These applications have a huge appetite for extreme throughput and fast memory. Enter 7nm with its power/performance advantages and HBM stacks. Problem solved.

    There is a lot of work going on in this area, and it’s not just at the foundries. There’s an ASIC side of this movement as well. Companies like eSilicon have been working on 2.5D integration since 2011, so they know quite a bit about how to integrate HBM memory stacks. They’re also doing a lot of FinFET design these days, with a focus down to 7nm. They’ve recently announced quite a list of IP targeted at TSMC’s 7nm process. Here it is:

    [Image: eSilicon 7nm IP list]

    Check out the whole 7nm IP story. If you’re thinking of jumping into the cloud or AI market with custom silicon, I would absolutely give eSilicon a call.