The question was also asked in the final chapter of our book Fabless: The Transformation of the Semiconductor Industry. More than a dozen CEOs and industry luminaries commented on it.
Bottom line: absolutely. Semiconductors have changed the world and will continue to do so, in many more ways and in much higher volumes than before.
Note the chart about halfway through, labeled Purley, where they talk about an all-new memory architecture.
Lower in cost than DRAM, 500X faster than NAND and persistent! Could Intel finally have mastered Phase Change? Micron’s gotta be involved with this…right?
If so, this could have lots of interesting implications, not only for the big data centers but also for the mobile space. You could have a fully functional laptop on your phone. The bigger picture is that new memory architectures lead to massive levels of innovation.
If I can put it in one word, it's SoC. More and more semiconductor building blocks and functionality will be integrated onto a single die, and that will continue to transform the chip landscape over the next 10 years.
Other revolutionary changes will occur, but most likely on the fringes of the SoC juggernaut, serving as peripherals to SoC technology.
It would be wonderful to see a successor technology to FinFET that would return us to the economic benefit of Moore's Law where each successive generation has a lower cost per transistor.
No amount of hoping or assuming will fix a fundamental economics problem -- the rapidly rising entry cost of each new process node causes an equally rapid drop in the number of designs that can afford it, especially since big-volume markets like mobile APs now also tend to have short lifetimes (about one year).
Maybe at 10nm (or 7nm) only Apple and Samsung can make money out of this, because nobody else will sell enough high-end phones (high tens of millions) to pay for the chips that go into them. Or maybe the market stops throwing away phones every year and replacing them with a new, faster, shinier one with a new chip, so chip lifetimes (and volume and revenue) go up to two years. Or maybe the lower-cost suppliers like Mediatek/Xiaomi take market share away and push volumes up and prices down compared to Apple/Samsung. In any case, unless you're going to sell more than $1B worth of a chip, you're not going to be in the AP market any more.
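To make the economics concrete, here is a back-of-envelope break-even sketch in Scala. Every figure in it (NRE, unit cost, selling price) is an illustrative assumption, not a sourced number; the point is only the shape of the arithmetic.

```scala
// Back-of-envelope break-even sketch for an advanced-node application processor.
// Every figure below is an illustrative assumption, not a sourced number.
object ApBreakEven extends App {
  val nre           = 300e6 // assumed all-in NRE for a 10nm SoC: masks, IP, tools, design team
  val unitCost      = 20.0  // assumed manufactured cost per chip, USD
  val sellingPrice  = 30.0  // assumed average selling price per chip, USD
  val marginPerUnit = sellingPrice - unitCost

  // Units needed just to pay back the NRE
  val breakEvenUnits = nre / marginPerUnit
  println(f"Break-even volume: ${breakEvenUnits / 1e6}%.0f million units")

  // With a one-year product lifetime, all of that volume has to ship in ~12 months
  val revenueAtBreakEven = breakEvenUnits * sellingPrice
  println(f"Revenue at break-even: $$${revenueAtBreakEven / 1e9}%.1fB")
}
```

With these assumed numbers you need ~30 million units and ~$0.9B of revenue inside a one-year product window just to get back to zero, which is roughly the scale of the claim above.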
This has never been the case before. Still think 10nm isn't going to be an exception?
Of course everyone hopes this will all change at 7nm (too late for 10nm now...) when the EUV fairy finally waves her magic wand and the costs drop back again -- assuming that EUV doesn't need double-patterning by then (which it probably will) and that the ASML scanners and all the other EUV kit won't be ridiculously expensive (which they probably will be)...
I agree that the issues you have outlined for 10nm are serious. It reminds me of how TI, the company that practically invented the application processor, left the mobile SoC market a while ago because Apple and Samsung were making their own application processors.
Still, my argument is that the semiconductor industry is known for its tradition of innovation and solution-seeking, and I am sure it will get around the challenges facing the 10nm node right now.
I certainly hope a solution will be found. But all the previous "brick walls"/challenges have been technical, and were overcome with new technical solutions. The 10nm-and-beyond challenge is economic right now, until something like EUV gets us back on the cost-down curve and brings spiralling NRE under control.
And right now there is *nothing* which anyone thinks can solve this problem and realistically be available in time for 10nm mass production, given the time to get any new solution (EUV, ebeam, DSA...) into mass production.
IanD - even without a gate-cost advantage, 10nm has lots of advantages, some directly monetary, some not. Maybe some form of structured ASIC for 10nm+ would be the way to use those advantages?
Looking at some research on structured ASICs: "an average of 1.76x/1.41x increase in area/delay compared to ASICs" [1], this certainly seems possible.
Anyway, this kinda looks like an area of expertise for FPGA companies, which might be another explanation for why Intel bought Altera.
The problem with a structured ASIC -- or any form of reconfigurable logic, no matter how efficient -- is that it eats up the area and power savings of going to the next process node. The end result is a more expensive (and higher-power) chip than an ASIC in the older node. Since the main reason for going to the next process node is power saving, why do this?
Let's say that for a specific design you can only afford 28nm NRE. If you go for an optimal 14nm/10nm structured ASIC, you might end up with power consumption equivalent to 14nm/20nm, cost per gate equal to 28nm (or maybe lower), and the tools you'll get from a sASIC company will improve NRE and time-to-market over a 28nm ASIC. So there might be an interesting design point there.
Nope, it doesn't work like that. Cost per gate is the same or a little higher at 14nm/10nm compared to 28nm, and a structured ASIC needs more gates, so a 14nm/10nm sASIC will have a higher unit cost than a 28nm ASIC and similar power consumption. Design time for a 14nm/10nm sASIC is also probably similar to a 28nm ASIC: double patterning and much more OPC cancel out the fact that "only" the metal layers are being customised -- and these are now the slowest and most expensive part of the process. NRE will be similar or maybe even higher for the same reason.
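To see why, here is a rough sketch using the 1.76x gate/area overhead quoted from [1] and assuming, per the point above, that cost per gate is flat from 28nm to 10nm. The absolute numbers are invented purely for illustration.

```scala
// Rough unit-cost comparison: 28nm ASIC vs. a hypothetical 14nm/10nm structured ASIC.
// The 1.76x overhead is the figure quoted from [1]; the flat cost-per-gate premise
// comes from the discussion above, and the absolute numbers are made up.
object SAsicVsAsic extends App {
  val designGates   = 100e6 // logic gates in the design (assumed)
  val costPerGate28 = 1.0   // normalized cost per gate at 28nm (baseline)
  val costPerGate10 = 1.0   // cost per gate assumed flat (or slightly worse) at 10nm
  val sAsicOverhead = 1.76  // gate/area overhead of structured ASIC vs. ASIC [1]

  val asic28UnitCost  = designGates * costPerGate28
  val sAsic10UnitCost = designGates * sAsicOverhead * costPerGate10

  println(f"Relative unit cost, 28nm ASIC:  ${asic28UnitCost / asic28UnitCost}%.2f")
  println(f"Relative unit cost, 10nm sASIC: ${sAsic10UnitCost / asic28UnitCost}%.2f")
  // => ~1.76x the unit cost: with flat cost per gate, the gate overhead is never paid back.
}
```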
Yes, that makes sense. Structured ASIC might not make sense.
So, another solution? Since a lot of the NRE is the cost and complexity of tools (e.g. OPC), and we know the software industry is pretty good at commoditization (through business models, new tools, etc.), it might be solvable, and maybe we'll come back in a few years and find lower NRE?
A case in point is IBM's new offering; they talk about half the price.
Also, there might be a move to HLS (for example, Chisel looks nice), which again would reduce NRE.
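For a flavor of what that looks like: strictly speaking, Chisel is a hardware construction language embedded in Scala rather than classic HLS, but the productivity argument is similar. A minimal sketch, assuming the chisel3 library:

```scala
import chisel3._

// A minimal Chisel module: an 8-bit counter with an enable input.
// Because the design is ordinary Scala, it can be parameterized and
// unit-tested with software tooling -- part of the claimed NRE saving.
class Counter8 extends Module {
  val io = IO(new Bundle {
    val enable = Input(Bool())
    val count  = Output(UInt(8.W))
  })

  val reg = RegInit(0.U(8.W)) // counter register, reset to zero
  when(io.enable) {
    reg := reg + 1.U          // increment only while enabled
  }
  io.count := reg
}
```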