
Lip-Bu Tan Update on 18A and 14A

Love the enthusiasm on 14A. If LBT is as savvy and pragmatic a businessman as this forum would lead me to believe, it's a big deal that he's stating this publicly after the initial “expectation reset” he did when he joined Intel.
 
LBT is actually giving customers what they want. Intel was previously starting negotiations with "we will let you work with IFS, if you behave properly."
That said, the Intel challenge is not process capability. It is financial success in foundry and being able to deliver what customers need. This includes the Intel product group as well.

@Daniel Nenni (and others) Question: Is Intel better at external foundry business than GF today? Not just most advanced node ... foundry business. Are they better than Samsung today?
 
Intel is also killing its open developer ecosystem under the new head of DCAI, which will bite them in the long run if they don't stop. Part of the reason Nvidia is where it is today is the dev ecosystem: you can prototype your code and run it everywhere. Intel has a great software ecosystem and tooling, but they are scrapping many software projects, some good ones even.
 
Intel is also killing its open developer ecosystem under the new head of DCAI
From this article, which started with discussions of Intel sharply curtailing its open-source software efforts and losing many software engineers:
[During the first week of October 2025] at the Intel Tech Tour in Arizona, statements were made that Intel will look at limiting or refraining from open-source contributions if, in effect, they mean potentially helping its competitors, or rather at how to position open source more favorably for Intel against its competition. Kevork Kechichian, Intel's new EVP and GM of the Data Center Group, said during his keynote for the upcoming Xeon 6+ Clearwater Forest announcement that the company needs to figure out how to make open source more of an advantage to Intel and not its competitors.

Kevork, when talking about providing customers with end-to-end solutions, "an era where platforms are super important," and about Intel investing in open-source infrastructure, stated:

"We need to find a balance where we use [our open-source software] as an advantage to Intel and not let everyone else take it and run with it."
At first I questioned to myself what I had just heard... A few minutes later in his presentation, as part of making a successful data center business at Intel, Kevork added:

"We are very proud of our open-source contributions. We are going to keep on doing that. However, like I mentioned, I want to make sure that it gives us an edge against everyone else."

[...]

I talked with several Intel representatives at the Arizona event about being shocked by these comments. Eight days later, and less than 24 hours prior to the embargo lift, I received a follow-up statement from Intel on the matter:

"Intel remains deeply committed to open source. We’re sharpening our focus on where and how we contribute—ensuring our efforts not only reinforce the communities we've supported for decades but also highlight the unique strengths of Intel. Open source is a strategic focus designed to deliver greater value to our customers, partners, and the broader ecosystem."
 
Intel's current capital expenditure (CapEx) plan does not include investments in 14A capacity for third-party clients. Hence, even if Intel lands an order from a major customer (think Apple, AMD, Nvidia, or Qualcomm), it will have to invest in additional capacity, which will delay Intel Foundry reaching the breakeven point.

 


No matter who leads Intel, Bob Swan, Pat Gelsinger, or Lip-Bu Tan, they all face the same fundamental challenge: under the IDM business model, Intel's Capex and R&D spending are allocated to both the manufacturing (foundry) and product divisions. The money is spread too far, too thin, and in too many directions as the company tries to compete in too many markets. Without changing the IDM model itself, any Intel CEO can only delay or disguise a structural problem that eventually becomes a crisis.

On the manufacturing side, whether serving internal or external customers, Intel Foundry cannot compete or move quickly enough with the level of Capex and R&D it has. It is outgunned by TSMC while stretching its own capital to an unsustainable level.

On the product and design side, Intel struggles to keep up with fabless, Capex-light competitors such as Nvidia, AMD, Broadcom, Qualcomm, MediaTek, Apple, Microsoft, Google, and Amazon, all of whom can develop new products faster and capture emerging opportunities more effectively.

Under the IDM business model, Intel’s revival or even its survival depends on increasing both Capex and R&D spending. But Intel has very little room left to push those investments further.

If we measure Intel’s R&D effectiveness by comparing the current year’s net profit to the previous year’s R&D spending, it becomes obvious that Intel cannot continue on its current trajectory.
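For readers who want to compute that metric themselves, here is a minimal sketch (Python, using made-up placeholder figures rather than the actual financials from the charts below) of dividing each year's net profit by the prior year's R&D spend:

```python
# "R&D effectiveness" as used in this post: net profit in year N divided by
# R&D spending in year N-1. All figures below are placeholders for
# illustration only, NOT the actual company financials from the charts.

rd_spend = {2021: 15.0, 2022: 16.0, 2023: 17.0}   # R&D spend by year, $B (placeholder)
net_profit = {2022: 8.0, 2023: 2.0, 2024: -1.0}   # net profit by year, $B (placeholder)

def rd_effectiveness(profit_by_year, rd_by_year):
    """Return {year: profit in `year` / R&D spend in `year - 1`}."""
    return {
        year: profit / rd_by_year[year - 1]
        for year, profit in profit_by_year.items()
        if year - 1 in rd_by_year
    }

for year, ratio in sorted(rd_effectiveness(net_profit, rd_spend).items()):
    note = " (under $1 of profit per prior-year R&D dollar)" if ratio < 1 else ""
    print(f"{year}: {ratio:.2f}{note}")
```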



Comparison of Capex Spending:


[Three attached charts: CapEx spending by year]

Nvidia's fiscal year ends in January each year.


Comparison of R&D Spending:


[Three attached charts: R&D spending by year]

Nvidia's fiscal year ends in January each year.
 
I certainly am not going to argue with numbers.

However, when Intel was in the process lead, few questioned them being an IDM.
 

Intel is in the process lead today with 18A in HVM and 14A fast following. Unfortunately, for the foundry business it is not all about the simulated PPA of the PDK. It is more about the ecosystem and the TSMC ecosystem cannot be replicated in a matter of months or years. It will take tens of years.

Thankfully the NOT TSMC market is all about wafer price, capacity insurance, and now build-in-America politics, so the ecosystem will follow. Customers can prepay to support the required CapEx. Intel has fab shells being built in AZ and OH. Those can be populated in less time than a fresh design start takes to tape out.
 
Is the rate of CAPEX necessary for newer nodes rising more slowly as scaling slows, is it flat, or is it accelerating?

i.e., the CapEx spend % increase required to go from traditional full nodes (180nm to 130nm to 90nm): how does it compare to 3nm to 2nm to 1.4nm or similar "modern full nodes"?

That would lend weight/flavor to hist78's argument in either direction. (I agree with the synopsis of Intel running out of financial room over time - Intel even predicted this themselves with that 2011 deck talking about the increasing business required to sustain future nodes.)
 

The “Not TSMC” foundry market does give Intel some potential breathing room. But that market is not reserved for Intel alone.

On mature nodes, many established foundries are already competing, while Intel has limited or no capacity and capability to serve those segments.

On leading edge nodes, even if we ignore competition from Samsung and the emerging efforts of Rapidus or SMIC, Intel still lacks the cost structure, operational efficiency, return on investment (ROI), and financial strength needed to compete effectively.

Furthermore, while we analyze the potential of the “Not TSMC” foundry market, Intel is facing an even larger and more dangerous threat, one that endangers its bread-and-butter Product division and, ultimately, Intel's long-term viability.

I call this the “Not Intel” market.

Companies such as Apple, Nvidia, AMD, Qualcomm, MediaTek, Broadcom, Marvell, Google, Microsoft, Amazon, Meta, Samsung, Tesla, Realtek, and many others now design their own semiconductor products for internal and/or external customers. Even worse for Intel, many of these companies surpass it in market capitalization, revenue, profit, and R&D spending. They are capturing markets that Intel's Product division once dominated, as well as markets where Intel has no presence at all.

This “Not Intel” competition is eroding Intel’s product business. Without a strong Intel Product division to generate the R&D funding and Capex needed to support both Intel Foundry and Intel’s own products, Intel and Intel Foundry are fighting for survival. This is exactly what we are witnessing today.
 
I certainly am not going to argue with numbers.

However, when Intel was in the process lead, few questioned them being an IDM.

Many people, including bloggers here on SemiWiki and myself, have criticized Intel’s IDM business model for a long time.

Intel’s leadership has used various methods to delay or obscure this fundamental business model problem for years, until it could no longer be hidden. Even worse, some senior Intel leaders convinced themselves that the IDM model was invincible.

For example, in 2012:

"SAN FRANCISCO – It’s the beginning of the end for the fabless model according to Mark Bohr, the man I think of as Mr. Process Technology at Intel.

Bohr claims TSMC's recent announcement it will serve just one flavor of 20 nm process technology is an admission of failure. The Taiwan fab giant apparently cannot make at its next major node the kind of 3-D transistors needed to mitigate leakage current, Bohr said.

“Qualcomm won’t be able to use that [20 nm] process,” Bohr told me in an impromptu discussion at yesterday’s press event where Intel announced its Ivy Bridge CPUs made in its tri-gate 22 nm process. “The foundry model is collapsing,” he told me.

Of course Intel would like the world to believe that only it can create the complex semiconductor technology the world needs. Not TSMC that serves competitors like Qualcomm or GlobalFoundries that makes chips for Intel’s archrival AMD."




"This is complete nonsense. This is not a David versus Goliath situation, this is hundreds of Davids versus Goliath. This is crowd sourcing, not unlike Twitter and Facebook where millions of people around the world collaborated and toppled ruthless dictators. This is the entire fabless semiconductor ecosystem (Synopsys, Cadence, Mentor, ARM, TSMC, UMC, GlobalFoundries, QCOM, BRCM, NVDA, AMD, and hundreds of other companies) against Intel. Hundreds of billions of dollars in total R&D versus Intel’s billions."

 
I certainly am not going to argue with numbers.

However, when Intel was in the process lead, few questioned them being an IDM.
This is the fundamental problem. Times change; people need to adapt. Intel did not (well, they did in 2020, but Pat reversed the decision).

When I started (late 80s), great compute companies had to have fabs: IBM, DEC, AMD, Intel, and others. Their technologies helped differentiate, and the margins paid for the R&D. IBM and DEC were ahead of Intel.

If you have scale, a technical lead, high margins, and no serious competitors, it works. They all learned their lesson, TSMC delivered, and the rest is history. Some smart man wrote a book, "Fabless."

Today, all great compute companies outsource manufacturing. Advanced technology, tons of flexibility.

We will soon learn: "sometimes you are so busy seeing if you CAN do a new technology, you didn't stop to ask whether you SHOULD do a new technology." Intel CAN do 18A and 14A.


Side note: I love the numbers in the charts. Intel outspent ALL other companies in R&D in 2020 (TSMC, AMD, Nvidia combined). Intel didn't spend too little... Thanks @hist78
 
The “Not TSMC” foundry market does give Intel some potential breathing room. But that market is not reserved for Intel alone.

On mature nodes, many established foundries are already competing, while Intel has limited or no capacity and capability to serve those segments.
Intel doesn't compete on mature nodes; they compete on the leading edge. The only mature nodes they have are Intel 16nm and UMC 12nm.
This is the fundamental problem. Times change; people need to adapt. Intel did not (well, they did in 2020, but Pat reversed the decision).
I still believe the issue was Intel screwing up 10nm, losing the architecture lead, stagnating, and not starting on GPUs sooner, rather than being an IDM, which Pat didn't have anything to do with because he was not there. If Foundry were not worth saving, LBT would have given up; he doesn't look to me like the kind of guy who would allow running a loss-leading business.
 

I combined the analysis for current year net profit over previous year R&D spending into one table and also added AMD data.

From 2014 to 2024, for every dollar Intel invested in R&D, it never generated two dollars or more in net profit. This stands in sharp contrast to TSMC and Nvidia. In fact, from 2015 to 2017 and again from 2022 to 2024 (and very likely in 2025 as well), Intel generated less than one dollar of net profit for every dollar it spent on R&D.

Why? I don’t believe that tens of thousands of Intel’s R&D engineers and managers are consistently lazy, incompetent, or uneducated.


[Attached table: current-year net profit divided by previous-year R&D spending for Intel, TSMC, Nvidia, and AMD]
 
Intel spent more than any company... regardless of profit.

The issue historically is that Intel is slow to deliver and has high R&D costs:

Intel has high fab process development costs... tons of people and equipment.
Intel puts 100s of people on a new product idea when maybe 20 will do.
Intel is slow to deliver. All of the graphics and AI products started as brilliant ideas. But if you don't deliver for 4 years, they are now out of date.

From what I hear, LBT has fixed the culture issues around all of these and will get to a reasonable metric. LBT believes (and I definitely do) that fewer people is better and leads to faster results. The bureaucracy examples at Intel were hilarious.

But now Intel has the consequences: Intel is not perceived as the tech leader, Intel is not the most valuable company, Intel is not the highest-volume fab company, Intel is not seen as an innovator. Deliver, and these issues go away.
 
Intel fumbled the ball related to EUV.
Perhaps. But some thoughts:

1) Intel looked at EUV before most other companies. They had it in development for a long time.
2) The 10nm fiasco was blamed on EUV, but that wasn't the major issue or the main reason for the 10nm delay (which is best described as a disaster, followed by a failed rescue, followed by another failed rescue).
3) After Intel implemented EUV on Intel 3 and Intel 4, those technologies have lower margins than Intel 10/7 has today.
 
I still believe the issue was Intel screwing up 10nm, losing the architecture lead, stagnating, and not starting on GPUs sooner, rather than being an IDM, which Pat didn't have anything to do with because he was not there.
Pat did agree to reverse a possible design-only course Swan opened, and so tried to rescue the company as an IDM+Foundry. Besides financial forecasting missing the shift to AI datacenter spending, his biggest failure was not landing any 18A whales, right? Something he bet the company on.

For GPGPUs, I start with the paragon of Nvidia, which under one man's leadership decided on an adequate at minimum total ecosystem strategy with the start of the CUDA software project in 2004. And they stuck to that strategy and executed well enough for two decades and eventually reaped wild rewards with the AI boom.

Intel did try its hand in the then-hot discrete graphics chip market with the i740 project, which launched in early 1998 and quickly failed, but it was followed by other projects based on it, if I read Wikipedia's history article correctly. They count 12 generations through Intel Xe, but it looks like after gen 4 they went from embedding in the Northbridge, which probably counts as an iGPU, to iGPUs in the CPU.

I'm not directly familiar with this because my requirements have been well satisfied by an OK 2D (i)GPU for decades. But I can't forget Larrabee, which started in one form in 2005 per an IEEE article and was canceled before launch after much hype; Gelsinger played a major role in it, and many think he got fired for that the year before. For GPGPU-type usage without any graphics features, Xeon Phi followed; it shipped product but failed and was discontinued in 2020, per Wikipedia.

So that's two strategies, but without Intel trying to move to the GPGPU-type market until Larrabee? Then they launched a "real" GPGPU project in 2017, but it was led by a guy who had been let go from AMD. One of BK's last major hires, I assume, but then again, who that was any good would sign on to work on this project at Intel? It was a rolling disaster until pretty recently.

Enough so Lip-Bu Tan has admitted Intel has lost its chance at the current AI training market, and will now try for the inference market.
 