Lip-Bu Tan Update on 18A and 14A

2) the 10nm fiasco was blamed on EUV, but that wasn't the major issue or the main reason for the 10nm delay (which is best described as a disaster, followed by a failed rescue, followed by another failed rescue)
As I, very much not a fab guy, understand it, Intel's 10 nm was very roughly equivalent to TSMC's N7 in what it targeted, but a bit more aggressive.

I strongly suspect Intel's legendary high-level bad management not only seeped down to its crown jewel of fab technology, but was responsible for those multiple failed rescues. Did the company even internally acknowledge it was a failure until the single-SKU Cannon Lake launch, with its biggest piece of silicon, the iGPU, fused off, and essentially no laptops using it for sale? BK was purged from Intel three months later.

So neither company planned on using EUV for the first iteration of this general node, but TSMC's success allowed it to ease into EUV use with N7+ for "up to four of its critical layers." This was not much of an option for Intel until the base node worked well enough (and even then, there was a terrible mistake made with a clock tree circuit for higher-end Intel 7+ Raptor Lake SKUs).
 
Intel did try its hand in the then-hot discrete graphics chip market with the i740 project, which launched in early 1998 and quickly failed, but was followed by other projects based on it, if I read Wikipedia's history article correctly. They count 12 generations through Intel Xe, but it looks like after gen 4 they went from embedding the GPU in the Northbridge, which probably counts as an iGPU, to iGPUs in the CPU.

So that's two strategies, but without Intel trying to move to the GPGPU-type market until Larrabee? Then they launched a "real" GPGPU project in 2017, but it was led by a guy who had been let go from AMD. One of BK's last major hires, I assume, but then again, who that was any good would sign on to work on this project at Intel? It was a rolling disaster until pretty recently.

One minor addition: Intel did make another "serious" attempt to grow the iGPU into something more substantial in 2015 -- the Broadwell-C (and later Skylake-C) chips had beefy iGPUs with an L4 eDRAM cache to reduce their bandwidth requirements (e.g., the desktop i7-5775C). They seem to have given up on eDRAM after this (which could be an interesting tech today for AI), and they went back to smaller-die iGPUs for a while. (Right after this, they launched a SKU or two with an AMD GPU integrated with an Intel CPU before switching back to in-house iGPUs exclusively again.)

All that to say, Intel's GPU division seems to have been consistently chaotic in terms of strategy, execution, and executive support. They always kept a torch lit, but it was pretty dim at times.

I hope they don't give up now, because between Battlemage and Panther Lake - they seem to have a decent engine to work with.
 
10nm failure was cobalt, that's why it was walked back in subsequent nodes.

I was always curious - was the Cobalt implementation literally just brittle and would fail at "random" times (making it look viable when it wasn't), or was it consistently bad and something they should have given up earlier?
 
10nm failure was cobalt, that's why it was walked back in subsequent nodes.
You are correct.

Some issues:
1) At least as of three years ago, there were LTD process integration people at Intel who said the process was fine and that it was manufacturing and product mistakes. To this day, there is disagreement within Intel on why things failed.
2) The first rescue mission made some changes to get a less marginal process, but then the process files and design rules were off, so the products had validation issues and they had to tweak both the process and the product.
3) Ultimately this led to the decision to outsource to TSMC circa 2020 and then Pat's decision to charge product groups the TSMC market price, which is obviously way lower than Intel's cost on all processes. We will see if LBT and 18A change this.
 
No matter who leads Intel, Bob Swan, Pat Gelsinger, or Lip-Bu Tan, they all face the same fundamental challenge: under the IDM business model, Intel's Capex and R&D spending are allocated to both the manufacturing (foundry) and product divisions. The money is spread too far, too thin, and across too many directions as the company tries to compete in too many markets. Without changing the IDM model itself, any Intel CEO can only delay or disguise a structural problem that eventually becomes a crisis.

On the manufacturing side, whether serving internal or external customers, Intel Foundry cannot compete or move quickly enough with the level of Capex and R&D it has. It is outgunned by TSMC while stretching its own capital to an unsustainable level.

On the product and design side, Intel struggles to keep up with fabless, Capex-light competitors such as Nvidia, AMD, Broadcom, Qualcomm, MediaTek, Apple, Microsoft, Google, and Amazon, all of whom can develop new products faster and capture emerging opportunities more effectively.

Under the IDM business model, Intel’s revival or even its survival depends on increasing both Capex and R&D spending. But Intel has very little room left to push those investments further.

If we measure Intel’s R&D effectiveness by comparing the current year’s net profit to the previous year’s R&D spending, it becomes obvious that Intel cannot continue on its current trajectory.
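
As a rough worked example of that metric (a minimal Python sketch; the figures are approximate ballpark numbers used purely for illustration, not exact filing data):

# "R&D effectiveness" as described above: the current year's net profit
# divided by the previous year's R&D spending.
def rnd_effectiveness(net_profit_current_year, rnd_prior_year):
    return net_profit_current_year / rnd_prior_year

# Illustrative, approximate figures in billions of USD (assumptions, not exact filings):
intel_rnd_2023 = 16.0    # Intel R&D spend, FY2023 (approx.)
intel_net_2024 = -18.8   # Intel net income, FY2024 (approx., a net loss)

print(rnd_effectiveness(intel_net_2024, intel_rnd_2023))  # roughly -1.2, i.e. about $1.20 lost per prior-year R&D dollar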



Comparison of Capex Spending:


[Attachment charts 4052-4054: Capex spending comparison. Note: Nvidia's fiscal year ends in January each year.]


Comparison of R&D Spending:


[Attachment charts 4049-4051: R&D spending comparison. Note: Nvidia's fiscal year ends in January each year.]
thanks
 
You are correct.

Some issues:
1) At least as of three years ago, there were LTD process integration people at Intel who said the process was fine and that it was manufacturing and product mistakes. To this day, there is disagreement within Intel on why things failed.
I have heard the same; no one knows exactly why it happened, but I think we all know the guy who tried to cover it up and was successful for two years.
2) The first rescue mission made some changes to get a less marginal process, but then the process files and design rules were off, so the products had validation issues and they had to tweak both the process and the product.
3) Ultimately this led to the decision to outsource to TSMC circa 2020 and then Pat's decision to charge product groups the TSMC market price, which is obviously way lower than Intel's cost on all processes. We will see if LBT and 18A change this.
I doubt Intel 4/3 is such a bad node; it would not surprise me if it were profitable even at TSMC pricing. As for 18A, Zinsner has said Intel 7 is so costly that 18A is at a similar cost, and if the ASP difference is 3X as he says, then the cost of Intel 7 is simple to calculate if you know N7 pricing, and Intel has given us that info.

- I7 pricing is N7 pricing
- 18A ASP is 3X I7 ASP
- 18A cost is I7 cost
If I have to guess, the cost of Intel 7 is maybe 1.5-1.66X the N7 price; if only IFS reported its gross margin, we could extrapolate. A rough sketch of that arithmetic is below.
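
A minimal sketch of that arithmetic, using the assumptions stated above (Intel 7 priced at N7 levels, 18A ASP at 3X Intel 7 ASP, 18A cost equal to Intel 7 cost, and the guessed Intel 7 cost of 1.5-1.66X the N7 price); the N7 price itself is just a normalized placeholder, not a real quote:

# All inputs below are the assumptions from the post above; n7_price is normalized.
n7_price = 1.0

intel7_asp = n7_price                  # assumption: Intel 7 priced at N7 levels
intel7_cost_low = 1.5 * n7_price       # guessed Intel 7 cost range
intel7_cost_high = 1.66 * n7_price

asp_18a = 3 * intel7_asp               # assumption: 18A ASP is 3X Intel 7 ASP
cost_18a_low, cost_18a_high = intel7_cost_low, intel7_cost_high  # assumption: 18A cost ~= Intel 7 cost

def gross_margin(asp, cost):
    return 1 - cost / asp

print(gross_margin(intel7_asp, intel7_cost_low))   # Intel 7: about -50%
print(gross_margin(intel7_asp, intel7_cost_high))  # Intel 7: about -66%
print(gross_margin(asp_18a, cost_18a_low))         # 18A: about +50%
print(gross_margin(asp_18a, cost_18a_high))        # 18A: about +45%

Under those assumptions, Intel 7 would be sold well below cost while 18A would land in roughly the 45-50% gross margin range, which is consistent with the "Intel 7 is so costly" framing above.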
 
I hope they don't give up now, because between Battlemage and Panther Lake - they seem to have a decent engine to work with.
My impression is that GPU is hard for them while serving Nvidia as one of their masters. They have a decent low-end GPU, which Linus praises. No one makes a low-end graphics chip anymore; Intel is serving a niche there.

Intel's capital structure is trying to serve SoftBank, Nvidia, Trump; good lord, so many masters. And it will get worse before it gets better. They have to survive this way for years. They might be able to split in the 2030s.
 
My impression is that GPU is hard for them while serving Nvidia as one of their masters. They have a decent low-end GPU, which Linus praises. No one makes a low-end graphics chip anymore; Intel is serving a niche there.
Not really; IIRC Nvidia didn't get voting power. Also, Intel GPU IP is here to stay because they are closer to Nvidia than AMD as a software and hardware package on client.

edit: It’s a private placement, not a public-market nibble, and Intel’s earlier filing emphasized Nvidia wasn’t getting special governance or information rights beyond what shareholders already get.
So they have decent sway; I do wonder why there was no clause covering anti-competitive practices.
 
Not really; IIRC Nvidia didn't get voting power. Also, Intel GPU IP is here to stay because they are closer to Nvidia than AMD as a software and hardware package on client.

'Hammer Lake' is supposed to have an Nvidia GPU; I think Nvidia's partnership sort of implies "if you want us to use your fabs for GPUs in the future, don't attack us too hard on the GPU front". Not that Intel is a threat right now to Nvidia, but it could imply pressure in the future.

I do think the Intel iGPU is here to stay, but some higher end aspirations may have "political" connotations as long as the fab is part of Intel.
 
'Hammer Lake' is supposed to have an Nvidia GPU; I think Nvidia's partnership sort of implies "if you want us to use your fabs for GPUs in the future, don't attack us too hard on the GPU front". Not that Intel is a threat right now to Nvidia, but it could imply pressure in the future.
That is Serpent Lake; a fitting name if you look at how Nvidia operates. I might say Hammer Lake has an Intel iGPU.
I do think the Intel iGPU is here to stay, but some higher end aspirations may have "political" connotations as long as the fab is part of Intel.
Yeah
 
I have heard the same; no one knows exactly why it happened, but I think we all know the guy who tried to cover it up and was successful for two years.
I can think of two people....

But the bigger point is that Intel's corporate culture and structure allowed such a thing to happen, something that was increasingly visible outside the company and blatant with Cannon Lake's "introduction."

An IDM completely failing with a new node and not treating it as an existential crisis for the whole company is unforgivable, once it went from "OK, we're having problems like we did with 14 nm initially" to "This is bad."

It also emphasizes the importance of the trustworthiness we hear about at TSMC. The impression I get is that if they have a problem, they'll work with their customers to deal with it as best they all can. A study of N3(B), with TSMC and some of its customers, including Apple, stepping back to e.g. N5-level SRAM density for other N3 nodes, should be good evidence of this.
 