
Intel 14A risk production in 2028, HVM in 2029: Lip-Bu Tan at CISCO AI Summit 3 Feb 2026

user nl

Well-known member
Nice to see LBT speak about IF (Intel Foundry) at the CISCO AI Summit yesterday; he starts around 48 minutes in:


It seems there is no more uncertainty about 14A: it will ramp.
 
So they said risk production was 2027 at IFS Connect 2025, and now it's 2028. Are they moving timelines?
That's what LBT says at 51:40

LBT clearly has two major concerns regarding 14A: the variability of the yield and the IP availability. It seems they simply need more time to get those into better shape. He only wants early customers to commit 5 or 10% of a major product to 14A, so that he can earn those companies' trust.

So perhaps Apple, NVIDIA, or Microsoft will commit to some small wafer orders to be manufactured in 2029. He will not give the names of the intended customers that he hopes will sign purchase agreements in H2 2026.

I think this may be quite smart (humble) timing by INTEL: these customers have their main orders being made at TSMC in 2028, and they trust those will come. INTEL would then deliver some small overflow capacity, say 5-10%, in 2029. That way these customers do not risk major parts of their production if INTEL fails. Makes sense for both sides.

It will be interesting to see if TSMC now goes all in on A14, scheduled for HVM in 2028. They have started building the new fab for A14:
https://en.eeworld.com.cn/news/manufacture/eic712929.html#:~:text=Furthermore, TSMC notified its customers,platform officially put into operation.

 
I think this may be quite smart (humble) timing by INTEL: these customers have their main orders being made at TSMC in 2028, and they trust those will come. INTEL would then deliver some small overflow capacity, say 5-10%, in 2029. That way these customers do not risk major parts of their production if INTEL fails. Makes sense for both sides.
The problem is that small volume won't justify 14A; it needs big volume. Also, I think the 2028 risk production is for external customers and 2027 is for internal.
 
That's what LBT says at 51:40

LBT clearly has two major concerns regarding 14A: the variability of the yield and the IP availability. It seems they simply need more time to get those into better shape. He only wants early customers to commit 5 or 10% of a major product to 14A, so that he can earn those companies' trust.

So perhaps Apple, NVIDIA, or Microsoft will commit to some small wafer orders to be manufactured in 2029. He will not give the names of the intended customers that he hopes will sign purchase agreements in H2 2026.

I think this may be quite smart (humble) timing by INTEL: these customers have their main orders being made at TSMC in 2028, and they trust those will come. INTEL would then deliver some small overflow capacity, say 5-10%, in 2029. That way these customers do not risk major parts of their production if INTEL fails. Makes sense for both sides.

It will be interesting to see if TSMC now goes all in on A14, scheduled for HVM in 2028. They have started building the new fab for A14:
https://en.eeworld.com.cn/news/manufacture/eic712929.html#:~:text=Furthermore, TSMC notified its customers,platform officially put into operation.

I hate the way they always pick a comparison that makes something look artificially good -- why isn't the comparison A14 vs. N2P, which is the real alternative?

(N2P claims 5% more speed and 10% lower power than N2, meaning A14 would presumably give only 5~10% more speed and 10~20% lower power than N2P -- rather less attractive...)
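The arithmetic above can be sketched quickly: when two nodes are each quoted against a common baseline (here N2), their relative gain is the ratio of the two factors. A minimal sketch, where the N2P-vs-N2 figures (5% speed, 10% power) come from the post and the A14-vs-N2 figures are assumed round numbers for illustration only:

```python
def chain(gain_a_vs_base: float, gain_b_vs_base: float) -> float:
    """Relative factor of node A vs node B, given each expressed vs a common base."""
    return gain_a_vs_base / gain_b_vs_base

speed_a14_vs_n2 = 1.15   # ASSUMED for illustration: +15% speed vs N2
speed_n2p_vs_n2 = 1.05   # from the post: +5% speed vs N2
power_a14_vs_n2 = 0.75   # ASSUMED for illustration: -25% power vs N2
power_n2p_vs_n2 = 0.90   # from the post: -10% power vs N2

speed_a14_vs_n2p = chain(speed_a14_vs_n2, speed_n2p_vs_n2)  # ~1.095
power_a14_vs_n2p = chain(power_a14_vs_n2, power_n2p_vs_n2)  # ~0.833

print(f"A14 vs N2P: {speed_a14_vs_n2p - 1:+.1%} speed, "
      f"{power_a14_vs_n2p - 1:+.1%} power")
```

With these assumed inputs, the headline "+15% / -25% vs N2" shrinks to roughly "+9.5% / -16.7% vs N2P", which is the poster's point about comparisons against the older baseline looking artificially good.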
 
In spite of the labels, for real applications isn't Intel 14A (with BSPD) really competing against TSMC A16 (N2P+SPR, maybe a year earlier) or A14+SPR (maybe a year later), not A14 (FSPD, about same time) ?

IMHO nobody is going to look at porting/switching between BSPD and FSPD, they're just too different from so many points of view (tools, IP/libraries, packaging, cooling, cost...) -- they'll choose whichever best suits their application, then a vendor. Which for FSPD means TSMC (N2 or A14), while for BSPD they also have the Intel option...
 
The "start risk production" date is for PR. Intel revenue doesn't show up until about a year after they claim risk production:

No 14A products out in 2027, and most likely no significant volume out in 2028. Reminder: 18A does not have significant volume in 2026.

Intel does not currently plan any significant external wafer volume in total through 2027.

I would be interested to see whether the 14A PDK has changed or will change much. The plan is to make it less capex intensive. The spreadsheet for IFS keeps getting worse.
 
I've seen an interesting tweet here that says Intel's 16A / 14A are useless for mobile applications because of the compulsory BPD:
The reason TSMC emphasized that A16 is HPC-exclusive is that implementing BPD requires a specific process step that must be performed during the wafer flip, and that step significantly degrades the heat dissipation (thermal spreading) performance of backside power delivery. As a result, chips fabricated on the A16 node inevitably become HPC chips that mandatorily require liquid cooling.

 
I've seen an interesting tweet here that says Intel's 16A / 14A are useless for mobile applications because of the compulsory BPD:


If it didn't come directly from TSMC***, that could have been lifted from what I posted here earlier today -- after being simplified by about 100x... ;-)


*** it didn't -- it came from an earlier post I made in Semiwiki on the same subject. It's all getting a bit circular... ;-)
 
If it didn't come directly from TSMC***, that could have been lifted from what I posted here earlier today -- after being simplified by about 100x... ;-)


*** it didn't -- it came from an earlier post I made in Semiwiki on the same subject. It's all getting a bit circular... ;-)
And the post greatly exaggerates what I wrote; it makes out that BSPD (especially Intel's) is pretty much unusable except for HPC. The reality is that it's *recommended for/targeted at* HPC because there the advantages are larger and the negatives (thermal, cost) can be dealt with. FSPD remains the best choice for most other applications, especially ones that are lower-power, harder to cool, or more cost-sensitive.

That doesn't mean you *can't* use BSPD (from Intel or TSMC), but as a customer why would you when the advantages are relatively small and the disadvantages relatively large? It would mean that your products are likely to be uncompetitive with those from others who use TSMC FSPD... :-(

Unless you desperately *want/need* to use Intel*** not TSMC, and they only have BSPD... ;-)

*** for reasons like local supply or national security or keeping Intel/USA semis alive...
 
It would mean that your products are likely to be uncompetitive with those from others who use TSMC FSPD... :-(

Unless you desperately *want/need* to use Intel*** not TSMC, and they only have BSPD... ;-)

*** for reasons like local supply or national security or keeping Intel/USA semis alive
Having uncompetitive products sounds like a show-stopper to me. If you want local supply, the obvious solution would be to just use TSMC's US facilities.
 
Nice to see LBT speak about IF (Intel Foundry) at the CISCO AI Summit yesterday; he starts around 48 minutes in:


It seems there is no more uncertainty about 14A: it will ramp.

Because it’s an almost six‑hour conference video, I used Google NotebookLM to create the following briefing report:

Cisco AI Summit: Defining the Future of the AI Economy

This briefing document synthesizes the key themes, technological insights, and strategic perspectives presented at the second annual Cisco AI Summit. The event featured leaders from OpenAI, Microsoft, AWS, Anthropic, Intel, World Labs, and global policy experts to discuss the shift from experimental AI to agentic and physical implementation.

--------------------------------------------------------------------------------


Executive Summary

The primary consensus among industry leaders is that 2026 will be the definitive turning point for AI, transitioning from intelligent chatbots to "agentic applications"—autonomous systems capable of executing complex tasks, using software like humans, and interacting with the physical world.

Critical Takeaways:

The Rise of Agents: AI is shifting from a transactional tool to a "teammate." Future software will be rewritten specifically for AI agents, potentially moving beyond the "prompt box" to persistent, autonomous execution.

Infrastructure as the Primary Bottleneck: Physical constraints—specifically power, memory, and high-performance silicon—are the most immediate threats to scaling. Memory supply is predicted to remain tight until 2028.

Trust and Security as Prerequisites: Security is no longer a post-adoption feature but a requirement for deployment. If users do not trust the data privacy or deterministic guardrails of a model, they will not adopt it.

Geopolitical Competition: A "two-horse race" exists between the U.S. and China. While the U.S. leads in frontier models, China is rapidly closing the gap through infrastructure optimization, open-source aggression, and massive engineering talent.

The Productivity Imperative: AI is viewed as a necessary intervention for global population decline. In countries like Japan, AI and robotics are essential to maintain quality of life as the workforce shrinks.

--------------------------------------------------------------------------------


1. The Technological Shift: From Chatbots to Agentic and Physical AI

The industry is moving through distinct phases of AI evolution, summarized by Cisco's Jeetu Patel as the transition from "magic" chatbots (2022) to experimental agents (2025) and realized ROI (2026).

Agentic AI and Knowledge Work

Teammate vs. Tool: Sam Altman (OpenAI) noted that newer models (like Codex) feel less like a tool and more like a collaborator.

Autonomous Software Use: The future of knowledge work involves agents having "full use" of computers and browsers to edit documents, manage communications (e.g., Slack), and build software.

Rewriting Software: Current software is designed for humans. Altman predicts software will be rewritten to be equally usable by humans and AIs, or even optimized primarily for agents.

Spatial Intelligence and "World Models"

Dr. Fei-Fei Li (World Labs) argued that language is a new form of intelligence compared to perception, which is half a billion years old.

Spatial Intelligence: The next frontier is the ability for AI to understand, reason with, and navigate the 3D/4D physical world.

Marble Model: World Labs’ "Marble" model takes multimodal inputs to generate navigable, geometrically consistent 3D worlds for robotics, gaming, and healthcare simulation.

Generalized Robotics: While "squareish" robots (cars) operate in 2D, the goal is 3D robots that can interact with the world without breaking things. This is a high-dimensional problem currently limited by a lack of 3D training data.

--------------------------------------------------------------------------------


2. Infrastructure: The Physical Constraints of Progress

Despite the digital nature of AI, its growth is tethered to physical limitations in power, manufacturing, and hardware.

The Memory and Silicon Gap

Intel’s Lip-Bu Tan identified memory as the single most severe constraint, stating there will be "no relief until 2028" due to the massive requirements of next-generation chips.




Saudi Arabia’s Infrastructure Strategy

Tarek Amin (HUMAIN) detailed a massive regional push in Saudi Arabia to become an AI infrastructure hub:

Power Abundance: The Kingdom offers north of 15GW of power capacity, with costs 20–30% lower than global averages.

Execution Speed: Through public-private partnerships, the Saudi Ministry of Energy identified 211 sites with existing substation capacity in just six weeks.

Agentic OS: HUMAIN is developing "HUMAIN OS," a customized Linux distro where applications are "second-class citizens" and intent-driven agents handle the actual work.

--------------------------------------------------------------------------------


3. Trust, Security, and "Agentic Ops"

A recurring theme was the "trust deficit." Cisco leaders argued that trust in data, models, and agents is the primary barrier to enterprise deployment.

Security as a Prerequisite: Unlike previous tech cycles where productivity and security were trade-offs, AI adoption requires security to be built in from the start.

Agentic Ops: To manage the complexity of AI-driven infrastructure, Cisco and others are building "Agentic Ops" to detect and remediate issues proactively before they cause outages.

Determinism vs. Probabilism: Aaron Levie (Box) noted that as non-deterministic agents proliferate, the value of the System of Record (the deterministic traffic cop) increases to manage permissions and prevent agents from accessing sensitive data.

--------------------------------------------------------------------------------


4. The Economic Impact: ROI and Workplace Transformation

While experimentation is rampant, companies are struggling to articulate uniform ROI.

ROI Success Criteria

Matt Garman (AWS) observed that many "proof of concepts" fail to move to production because they lack success criteria.

Direct vs. Indirect Savings: In healthcare, "ambient listening" AI may not save money by reducing headcount, but it provides ROI by reducing doctor/nurse attrition and burnout.

Alpha through Re-engineering: Aaron Levie argued that "Alpha" (competitive advantage) belongs to companies willing to re-engineer their business processes to support agents, rather than forcing agents to adapt to messy human workflows.

The AI Coding Revolution

Coding is currently the only field showing "true full utilization" of AI capabilities.

100% AI Code: Cisco reported that 70% of its AI products use AI-generated code. Anthropic and Box are both moving toward 100% AI-written products for certain segments.

The Shift in CS Education: Kevin Scott (Microsoft) expressed a desire for computer science to move away from "vocational programming" (writing characters) back toward "algorithmic thinking" and problem decomposition.

--------------------------------------------------------------------------------


5. Geopolitics and Global Competition

The summit highlighted the high stakes of the AI race between the U.S. and China.

The U.S.-China Race

China’s Optimization: Because China is constrained by U.S. chip export policies, they have become "masters of optimization," finding ways to run advanced models on older hardware (e.g., DeepSeek).

Manufacturing Edge: Anne Neuberger noted that China’s manufacturing data and expertise give them a potential lead in embodied AI (robotics).

Values-Driven Tech: Marc Andreessen argued that it is critical for the world to run on American AI rather than Chinese AI, as American models reflect values like IP respect and privacy, whereas Chinese models are specifically tested for alignment with "Marxism and Xi Jinping thought."

Open Source as a Strategic Tool

Price Floor: Open source AI (like Llama or DeepSeek) acts as an "asteroid strike" on proprietary profits, forcing the price of AI down to the cost of inference.

U.S. Leadership: Sam Altman expressed concern that the U.S. lacks a substantial lead in open source, which is increasingly important for users who want to run private models locally.

--------------------------------------------------------------------------------


Conclusion: The "Non-Zero Sum" Future


The summit concluded with a call for a shift in the human approach to technology. Kevin Scott (Microsoft) and Chuck Robbins (Cisco) emphasized that AI should be viewed as a tool to turn "zero-sum" problems (scarcity and population decline) into "non-zero-sum" opportunities for abundance and productivity. Success will be defined not by the technology itself, but by the "design craft, taste, and point of view" of the humans who deploy it.
 