Oops -- corrected... I think you meant Logic is shrinking faster than Caches, no?
Because no other chip's power consumption accounts for 80% of total cost. Adding lots of latches/D-types to shorten pipelines (e.g. to "double frequency") does put clock rates up, but usually increases power per gate transition/operation, because the D-types don't contribute any useful function, they just take power (both to propagate data and for clocking). Yes, this increases OPS/mm2 and clock rate, but usually total power for a given function also increases -- this is what Intel found out the hard way with NetBurst... ;-)
We've done exactly this comparison many times in DSP design, and the conclusion is always that it's better to have more gate depth between latches, fewer latches, and a slower clock, so long as you can afford the extra silicon area: more parallel paths running more slowly decrease power but increase area for the same task.
This may not work for bitcoin miners, because they also have to worry about die size/cost and squeezing more MIPS out of each mm2, and in that case extra pipelining might help meet the requirement.
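As a rough first-order sanity check of that pipelining-vs-parallelism argument -- with made-up capacitances, voltages, frequencies, and activity factor, not numbers from this thread -- a P ~ a*C*V^2*f comparison at equal throughput looks like this:

```python
# First-order dynamic power: P ~ activity * C * V^2 * f.
# All capacitances, voltages, frequencies, and the activity factor below are
# made-up illustrative values, not data from this discussion.

def block_power_uw(c_logic_pf, c_latch_pf, v_volts, f_mhz, activity=0.2):
    """Dynamic power of one block in microwatts."""
    c_total_f = (c_logic_pf + c_latch_pf) * 1e-12
    return activity * c_total_f * v_volts**2 * (f_mhz * 1e6) * 1e6

# Baseline block: 10 pF of logic, 3 pF of latch/clock load, 0.7 V, 500 MHz.
baseline = block_power_uw(10.0, 3.0, 0.7, 500)

# Option A: pipeline deeper (assume latch/clock load doubles) and clock at
# 1 GHz for 2x throughput at the same voltage.
deeper_pipeline = block_power_uw(10.0, 6.0, 0.7, 1000)

# Option B: two parallel copies of the baseline block at 500 MHz (2x area)
# for the same 2x throughput; optionally drop the supply since timing has slack.
parallel_same_v = 2 * block_power_uw(10.0, 3.0, 0.7, 500)
parallel_low_v = 2 * block_power_uw(10.0, 3.0, 0.6, 500)

for name, p in [("baseline, 1x throughput", baseline),
                ("deeper pipeline @ 1 GHz", deeper_pipeline),
                ("2x parallel @ 500 MHz", parallel_same_v),
                ("2x parallel @ 500 MHz, 0.6 V", parallel_low_v)]:
    print(f"{name:30s} {p:8.1f} uW")
```

With these illustrative numbers the deeper pipeline burns the most power for the same throughput, and the parallel/slower option wins once you accept the area hit, which is the point being made above.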
> Because no other chip's power consumption accounts for 80% of total cost.
You're talking about architecture changes here, all of which are perfectly valid but are independent of the tradeoff between PDP and voltage. That suggests to me that you're well familiar with optimizing architectures (gate-level design), but not with optimizing library conditions (transistor-level design and the choice of library operating conditions, i.e. building custom gate libraries).
Let's check 5nm as an example. If latches/DFFs account for 30% of power:
Doubling the latches takes the power to 1.3X.
Lowering the voltage from 0.4 V to 0.3 V takes power to (0.3/0.4)^2 = 0.5625.
Since Vth is around 0.2 V, the speed will be about the same.
A dynamic latch is very small, maybe 5% area.
1.3 * 0.5625 * 1.05 = 0.77.
You can reduce costs by ~20%, which could double your profit.
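The same arithmetic written out as a small script; everything in it (the 30% latch share, the 0.4 V and 0.3 V supplies, the unchanged-speed assumption, the 5% area adder) is taken from the assumptions above rather than measured data:

```python
# The post's cost arithmetic, written out. All inputs are the post's assumptions.

latch_power_share = 0.30                         # fraction of total power in latches/DFFs
power_after_doubling = 1.0 + latch_power_share   # doubling latches -> 1.3x power
v_scaling = (0.3 / 0.4) ** 2                     # dynamic power ~ V^2 -> 0.5625
area_adder = 1.05                                # dynamic latches assumed to add ~5% area

relative_cost = power_after_doubling * v_scaling * area_adder
print(f"power after doubling latches: {power_after_doubling:.2f}x")
print(f"V^2 scaling, 0.4 V -> 0.3 V:  {v_scaling:.4f}x")
print(f"combined power x area factor: {relative_cost:.2f}x")
# ~0.77x, i.e. roughly the ~20% cost reduction claimed above, assuming (as the post
# does) that speed is unchanged and that power dominates total cost.
```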
> Good discussion. I'm going to run spice today.
I might be telling my granny to suck eggs, but make sure you use realistic values for gate type (not just inverters), fanout (to match the real circuit, typically 3 or so), tracking load, and interconnect resistance (advanced technologies are often interconnect-dominated) -- otherwise you can get unrealistic results... ;-)
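As a toy illustration of how much the characterization condition moves the answer -- using generic logical-effort-style coefficients and an assumed wire load, not real library numbers -- compare an FO1 inverter against a NAND2 under more realistic loading:

```python
# Toy logical-effort delay model: d = p + g * h (in units of tau), where h is the
# electrical fanout plus any wire load expressed as equivalent gate loads.
# The coefficients below are generic textbook-style values, not real library data.

def gate_delay_tau(parasitic, logical_effort, fanout, wire_load=0.0):
    """Normalized gate delay: intrinsic parasitic plus effort times total load."""
    return parasitic + logical_effort * (fanout + wire_load)

# A fanout-of-1 inverter -- a common but optimistic characterization condition.
fo1_inverter = gate_delay_tau(parasitic=1.0, logical_effort=1.0, fanout=1)

# A 2-input NAND driving a fanout of 3 plus wire load worth roughly one more gate.
fo3_nand_wire = gate_delay_tau(parasitic=2.0, logical_effort=4 / 3, fanout=3, wire_load=1.0)

print(f"FO1 inverter:          {fo1_inverter:.2f} tau")
print(f"FO3 NAND2 + wire load: {fo3_nand_wire:.2f} tau")
print(f"ratio: {fo3_nand_wire / fo1_inverter:.1f}x slower than the idealized condition")
```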
> Looking back at how Intel lost its dominance, it's almost as if someone came from the future to sabotage the company:
I think you misinterpret decisions required by failure as poor strategy. No company, including Intel, can do whatever it puts its mind to.
> Intel cannot do graphics; they are too slow and failed (Larrabee wasn't cancelled, it failed).
I don't know where you got this, but they can do graphics, and their new IP is way better than Larrabee; they are beating AMD with their products in many cases.
> Optane is 5x slower than DRAM; no one wanted it, and Intel delayed it too long. It was cancelled when they realized there were no sales.
On this one Intel shot itself in the foot by restricting Optane to Intel products only. The tech is very good and would have been a money-printing machine in AI, but it no longer exists.
> Side note: what if during that time, Intel had decided to outsource manufacturing (like Nvidia, Apple, Broadcom, Qualcomm, AMD, IBM) and put the resources into CPUs and accelerators?
Same result, tbh. It's not just foundry; design fumbled as well. You can see it with MTL/ARL: their GPU designs are bad in perf/mm2, but the architecture is good.
> Intel purchased multiple AI companies... and derailed them so they disappeared.
Yeah, only one acquisition was successful (Movidius); that's why I don't like Intel buying SambaNova.
> I think you misinterpret decisions required by failure as poor strategy. No company, including Intel, can do whatever it puts its mind to.
They have enough money to take a look at every technology, but not enough money to invest in every technology.
As I have said before, Intel was a leader in mobile strategy, GPU strategy, and AI accelerator strategy. Cost-effective execution was the weakness.
> Private Girls From Your City - No Verify - Anonymous Sex Dating
> https://privateladyescorts.com
> Private Lady In Your Town - Anonymous Adult Dating - No Verify
Powered by Nvidia?
> Powered by Nvidia?
Did you try to click through?
> Did you try to click through?
I don't click shady links.
It leads to Amazon. D'oh!
> I might be telling my granny to suck eggs
I needed to look that one up. I learned "horses for courses" from you a few years back; only somebody in the land of steeplechases would come up with that. Your technical tidbits are appreciated. You just came up with a new concern that I didn't worry about in the past... the resistance of the low-level interconnect. I typically just slap down a horizontal M4 preroute with an M5 hit point, then autoroute and just do Cx below M4. Our tools (we compete with VXL) allow the user to fetch the RCx below M4, but not by default.
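For a rough feel of when the resistance below M4 starts to matter, here is a back-of-the-envelope Elmore-delay comparison of C-only versus RC extraction of a short lower-metal route; the driver resistance, wire R/C per micron, length, and load are purely illustrative assumptions, not numbers from any real PDK or tool:

```python
# Elmore-delay comparison of a lower-metal route extracted as C-only versus RC.
# All element values below are made-up illustrative assumptions.

r_driver = 5e3       # driver output resistance, ohms (assumed)
r_per_um = 20.0      # wire resistance, ohms per um (assumed)
c_per_um = 0.2e-15   # wire capacitance, F per um (assumed)
length_um = 50.0
c_load = 2e-15       # receiver input capacitance, F (assumed)

r_wire = r_per_um * length_um
c_wire = c_per_um * length_um

t_c_only = r_driver * (c_wire + c_load)            # ignore wire resistance (Cx only)
t_rc = t_c_only + r_wire * (c_wire / 2 + c_load)   # add the distributed-wire Elmore term

print(f"C-only estimate: {t_c_only * 1e12:5.1f} ps")
print(f"RC estimate:     {t_rc * 1e12:5.1f} ps  ({(t_rc / t_c_only - 1) * 100:.0f}% slower)")
```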
Intel cannot do graphics; they are too slow and failed (Larrabee wasn't cancelled, it failed).
Optane is 5x slower than DRAM; no one wanted it, and Intel delayed it too long. It was cancelled when they realized there were no sales.
Intel could not do mobile; they are too expensive and slow. They did many leading-edge parts and lost billions.
Intel decided to outsource manufacturing because internal was 2x the price and not a node ahead. 2027 will tell us if anything changed.
Pat decided to do foundry (knowing that Intel loses tons on manufacturing), but Intel was arrogant and told customers what they should do, and no one signed up but the USG (LBT fixed that).
Intel purchased multiple AI companies... and derailed them so they disappeared.
LBT talked about cancelling 14A since no one was committing to it and it costs billions per year to keep development going. We will see if it changes.
Side note: what if during that time, Intel had decided to outsource manufacturing (like Nvidia, Apple, Broadcom, Qualcomm, AMD, IBM) and put the resources into CPUs and accelerators?
Intel is a very smart compute company that was very slow, expensive, and customer-unfriendly. LBT can make them a very smart compute company that prioritizes its strengths, not its weaknesses. Not to regain 1995... but to succeed in 2028.
> Intel cannot do graphics; they are too slow and failed (Larrabee wasn't cancelled, it failed).
Really? Intel's integrated CPU graphics are arguably the world's highest-volume production graphics. Datacenter and supercomputing GPUs were delivered, but Intel wasn't serious enough in time for the emergence of transformer neural networks, and Nvidia (and arguably AMD) had substantial technology leadership. I think your judgments about what Intel can and can't do, or any other company's abilities, are biased. It's just a matter of corporate leadership and hiring the right people.
> Optane is 5x slower than DRAM; no one wanted it, and Intel delayed it too long. It was cancelled when they realized there were no sales.
Optane is slower than DRAM, but the reasons Optane failed are not as simple as your post suggests.
> Intel could not do mobile; they are too expensive and slow. They did many leading-edge parts and lost billions.
I'm not convinced of the slowness assertion, but Intel's fab processes were aimed at high-performance CPUs with a power-be-damned objective to get the highest clock speeds. On the too-expensive point, I think that's fair and accurate.
> Intel purchased multiple AI companies... and derailed them so they disappeared.
I agree. Google has proven that dedicated AI processors are practical, but owning the entire software stack was apparently too daunting for Intel. Amazon is also proving that dedicated AI chips are practical when you control the entire stack, but Google and Amazon are not merchant chip vendors. Nvidia and AMD took a more evolutionary GPU approach, and it appears Intel did not have the investment endurance to go down that path, so they kept trying the dedicated AI design approach (Nervana and Habana). Dedicated accelerators are always more complex to bring to market than more general-purpose approaches. I'm also having difficulty getting excited about SambaNova for inference. AI technology looks like it's changing too quickly for narrow dedicated processors from merchant vendors. In Google's and Amazon's cases, because they own the basic AI research and the entire software stack, they can see the software requirements coming more than one generation away.
