Can Intel recover even part of their past dominance?

Beyond a given point, lowering voltage further increases power consumption per operation.

If you look at power-delay product (PDP), this tells you the energy needed to do something. Dynamic power (CV^2) drops with lower voltage (leakage less so), but so does clock speed -- more rapidly as you get closer to the threshold voltage. For a given gate type (e.g. ELVT, ULVT, LVT, SVT), clock speed, and activity percentage, there is a supply voltage where PDP reaches a minimum, and this is where power consumption is also a minimum: as VDD drops you have to run slower but with more parallel circuits, which works for many things but not all. And if you're really bothered about power efficiency, you also need to vary VDD with process corner and temperature, and also with circuit activity and clock speed.
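
As a concrete illustration, here's a toy model of that energy minimum: dynamic CV^2 energy plus leakage integrated over a cycle time that stretches (per an alpha-power-law delay model) as VDD approaches threshold. Every constant below is invented for illustration, not taken from any real process; the point is only the shape of the curve and its interior minimum.

```python
# Toy model: energy per operation vs. supply voltage.
# All constants are invented for illustration, not real process data.

ALPHA = 1.5     # alpha-power-law delay exponent (assumed)
VTH = 0.30      # threshold voltage, volts (assumed)
C = 1e-15       # switched capacitance per operation, farads (assumed)
I_LEAK = 1e-7   # leakage current, amps (assumed constant; really varies with VDD)
T_NOM = 1e-9    # cycle time at VDD = 1.0 V, seconds (assumed)

def delay(vdd):
    """Relative gate delay; diverges as VDD approaches VTH."""
    return vdd / (vdd - VTH) ** ALPHA

def cycle_time(vdd):
    return T_NOM * delay(vdd) / delay(1.0)

def energy_per_op(vdd):
    e_dyn = C * vdd ** 2                      # falls as VDD^2
    e_leak = I_LEAK * vdd * cycle_time(vdd)   # rises as cycles stretch out
    return e_dyn + e_leak

vdds = [0.35 + 0.01 * i for i in range(66)]   # sweep 0.35 V .. 1.00 V
v_opt = min(vdds, key=energy_per_op)
print(f"minimum-energy VDD with these toy numbers: {v_opt:.2f} V")
```

With these made-up constants the minimum lands around 0.45-0.46V; below that, the leakage term (current burned over ever-longer cycles) dominates and energy per operation climbs again.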

For the circuits we've looked at in N3 and N2, which are relatively high activity (e.g. DSP, FEC...), the lowest PDP is usually with ELVT, but it has never been as low as 0.225V. For lower-activity circuits where ELVT leakage is too high compared to dynamic power, ULVT can be better. But there's no single "best" answer (transistor type, voltage, frequency); it all depends on what the circuits are doing... ;-)
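
To make the activity tradeoff concrete, here's a deliberately crude comparison (all numbers invented, not library data). Both flavors are assumed to just meet the same clock period, with the lower-Vth ELVT getting there at a lower VDD while leaking roughly 5x more:

```python
# Toy ELVT vs ULVT energy comparison at a fixed clock target.
# Assumption: each flavor runs at the lowest VDD that meets the period,
# so ELVT pays less dynamic energy but leaks more. Invented numbers.

PERIOD = 2e-9   # target clock period, seconds (assumed)
C = 1e-15       # switched capacitance per cycle, farads (assumed)

DEVICES = {
    # lowest VDD meeting PERIOD, and leakage current at that VDD
    "ELVT": {"vdd": 0.45, "i_leak": 1.0e-7},
    "ULVT": {"vdd": 0.55, "i_leak": 2.0e-8},
}

def energy_per_cycle(dev, activity):
    e_dyn = activity * C * dev["vdd"] ** 2        # paid only when switching
    e_leak = dev["i_leak"] * dev["vdd"] * PERIOD  # paid every cycle regardless
    return e_dyn + e_leak

for activity in (1.0, 0.05):  # busy DSP datapath vs. mostly-idle control logic
    winner = min(DEVICES, key=lambda n: energy_per_cycle(DEVICES[n], activity))
    print(f"activity {activity:>4}: {winner} wins on energy per cycle")
```

At high activity the dynamic saving from ELVT's lower VDD outweighs its leakage; at low activity the always-on leakage dominates and ULVT wins.
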
Electricity accounts for 80% of BTC mining cost.
You can try a better solution and there's a lot of money to be made.
Intel was once a wannabe player.
 
Which is true -- I was trying to correct the misapprehension that lower voltage is always better for efficiency/energy use, because it's not. However, the minimum-PDP VDD is well below anything used by chips like CPUs and GPUs today, where lower voltage does always improve efficiency. Depending on process corner (and circuit, and clock speed, and activity, and transistor type, and phase of the moon...) it's usually around 0.4V or a bit lower, in the depths far below where CPUs lurk... ;-)
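
In symbols (a simplified model that treats leakage current as constant): energy per operation is the dynamic term plus leakage integrated over the cycle time, and the optimum sits where the dynamic term's fall (as VDD drops) is exactly offset by the leakage term's rise:

$$E(V) = a\,C\,V^2 + I_{\mathrm{leak}}\,V\,t_d(V), \qquad \left.\frac{dE}{dV}\right|_{V=V_{\mathrm{opt}}} = 0$$

Here $a$ is the activity factor and $t_d(V)$ is the cycle time, which grows steeply as $V$ approaches threshold, so the leakage term takes over below $V_{\mathrm{opt}}$.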

But you can't just take a chip designed for "normal-voltage" operation and drop the supply voltage massively, because it won't work, at least not reliably. If you want to operate down in this region you need to use special libraries, extra tool precautions, and new timing checks, because gate delay variation and sensitivity to supply voltage drops get rapidly worse. TSMC enforces special rules for ULV operation, and the voltage where these kick in varies with transistor type (ELVT, ULVT, LVT, SVT) -- which then causes bigger issues when mixing transistor types (e.g. uncorrelated Vth), because the delay tracking between types gets worse and worse.
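
To put a number on the supply-sensitivity point, here's the same toy alpha-power-law delay model as above (constants invented, not process data): a fixed 10 mV droop costs a couple of percent of gate delay at nominal voltages but tens of percent down near threshold, which is why the extra timing checks become mandatory.

```python
# How much a fixed 10 mV supply droop moves gate delay at different
# VDDs, using an alpha-power-law delay model. Illustrative values only.

ALPHA = 1.5   # velocity-saturation exponent (assumed)
VTH = 0.30    # threshold voltage, volts (assumed)

def delay(vdd):
    """Relative gate delay; diverges as VDD approaches VTH."""
    return vdd / (vdd - VTH) ** ALPHA

for vdd in (0.90, 0.70, 0.50, 0.40, 0.35):
    slowdown = delay(vdd - 0.010) / delay(vdd) - 1.0
    print(f"VDD {vdd:.2f} V: a 10 mV droop slows gates by {slowdown * 100:.1f}%")
```

With these numbers the same droop that costs ~1-2% of delay at 0.9V costs ~35% at 0.35V.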

All this imposes some penalties on design which reduce performance and increase area (as does going slower and more parallel), so you don't want to do this for a chip which spends most of its time (and dissipates most of its power) at higher VDD (e.g. 0.5V and above), like a CPU. However, if you have a chip which has one job to do, where power consumption is all-important, and you're willing to use adaptive supply voltage, it's a price worth paying -- we've been doing this for some time now; the typical power saving is similar to a complete process node step, and the worst-case saving is closer to two process nodes... :-)
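
For readers unfamiliar with adaptive supply voltage: the general idea is a closed loop around an on-die delay monitor, nudging VDD until timing slack is just adequate for the current corner and temperature. The sketch below shows the generic technique only -- it is not the specific implementation referred to above, and the monitor, thresholds, and step size are all invented.

```python
# Generic adaptive-voltage-scaling (AVS) loop sketch. A delay monitor
# (e.g. a critical-path-replica ring oscillator) is sampled periodically
# and the regulator nudges VDD to hold a small timing-slack target.
# All names and numbers are hypothetical.

VDD_MIN, VDD_MAX, STEP = 0.35, 0.80, 0.005   # volts (assumed rail limits)
TARGET_SLACK = 0.05                          # 5% timing-slack target (assumed)

def avs_step(vdd: float, measured_slack: float) -> float:
    """One regulator update: raise VDD when slack is thin, lower it when fat."""
    if measured_slack < TARGET_SLACK:
        vdd += STEP   # slow silicon / hot die: add voltage to stay safe
    elif measured_slack > 2 * TARGET_SLACK:
        vdd -= STEP   # fast corner / cool die: shed voltage to save power
    return min(max(vdd, VDD_MIN), VDD_MAX)

# Example: slack readings drifting as the die heats up, then recovering.
vdd = 0.50
for slack in (0.20, 0.15, 0.08, 0.03, 0.02, 0.06, 0.12):
    vdd = avs_step(vdd, slack)
    print(f"slack {slack:.2f} -> VDD {vdd:.3f} V")
```

This is how the design reclaims the margin a fixed-voltage chip must carry for its worst-case corner, which is why the worst-case saving can exceed the typical one.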
 
If Intel designs an SRAM that scales, they could regain the lead. But SRAM is like other memory: scaling seems to have ended. SRAM can be 90% of the area of a logic chip.

This leads to an observation: who knows memory better than Samsung? Maybe Samsung is a dark horse in the battle to scale SRAM.
 
Here is the story:

HP had a big R&D group which spun out as Agilent Technologies, which became Avago. This was Hock Tan's doing. Avago had an IP group that had the lead in SerDes and other IP, so based on that IP Avago did custom ASICs. This was back when IBM, LSI Logic, VLSI Technology, NEC and other Japanese semiconductor companies owned the ASIC market. IBM really was a force of nature back then. Avago bought LSI Logic, GlobalFoundries bought IBM's semiconductor business, and there was other consolidation. Avago became Broadcom, again Hock Tan's doing, and the ASIC business grew. Last I heard it was $30B+ of BRCM revenue.

Avago did Google's first TPUs, but Google built up internal teams and now does most of its own design; Avago still handles some of the backend work. I worked for an EDA company that was inside Google for 16nm, 7nm, 5nm, and 3nm. The TSMC N2 TPU is now in process. They wrote some very big checks and are a coveted EDA/IP customer. Broadcom, on the other hand, has always been cheap on EDA tools. I worked on a couple of projects with them back in the 1990s and it was rough. From what I hear, Hock Tan has continued that tradition of sharp penciling.
The analyst Beth Kindig just published a piece saying that Broadcom sells these TPUs to Google for $13,000 apiece (not bad for just doing some of the backend work). There is also an order backlog of $73B over the next few quarters (not just Google).
 