
Intel Foundry reportedly secures contract to build Microsoft's Maia 2 next-gen AI processor on 18A/18A-P node

Fred Chen

Moderator
When Intel and Microsoft announced their plan to build a 'custom processor' on Intel's 18A fabrication process in early 2024, neither company hinted at the purpose of that silicon, leaving plenty of room for guesswork and interpretation by industry observers. Today, SemiAccurate broke the silence about Intel Foundry's 18A customers, reporting that Intel Foundry (IF) is on track to produce an AI processor for Microsoft on 18A or 18A-P.

So far, Intel Foundry has officially landed only one major external customer for its 18A manufacturing technology: Microsoft. But while we tend to think of Microsoft as a cloud and software giant, the company has quite a potent hardware development (or at least hardware definition) team that builds custom silicon for a variety of data center applications, including Cobalt CPUs, DPUs, and Maia AI accelerators, to name a few. As it turns out, one of Microsoft's next-generation AI processors will reportedly be made by Intel Foundry.

If true, the deal would give Microsoft access to a US-based chip supply chain that isn't as exposed to the capacity constraints that we see with both chip manufacturing and advanced packaging at TSMC. Additionally, the deal could be seen as favorable to Microsoft in other ways, given the US government's investment in Intel.

For lack of details, we can only speculate about which of the next-generation Maia processors will be produced by IF, but this would be a big development for Intel. Since we are dealing with data center-grade silicon, we are talking about processors with a fairly large die size. Hence, if they are on track to be produced at Intel Foundry, then the company's 18A (or 18A-P, with 8% higher performance) fabrication process is projected to be good enough not only for Intel itself (which is on track to ramp its Xeon 6+ 'Clearwater Forest' in 2026), but also for its foundry customers. The contract could be a sign of good yields on Intel's node: yield has an outsized impact on a large processor like Maia, so Microsoft would likely have opted for a product based on smaller dies if Intel's node had yield issues.

Microsoft's original Maia 100 processor is a massive 820 mm^2 piece of silicon that packs 105 billion transistors, making it larger than Nvidia's H100 (814 mm^2) or B200/B300 compute chiplets (750 mm^2). While the lion's share of Microsoft's Azure offerings for AI run on Nvidia's AI accelerators, the company is investing a lot to co-optimize its hardware and software to achieve higher performance while increasing efficiency and thus lowering the total cost of ownership. As such, Maia is an important project for Microsoft.

Assuming that Microsoft's AI processors to be made by Intel Foundry continue to use near-reticle-sized compute dies, Intel's 18A manufacturing process must be on track to achieve a defect density low enough to ramp such chips with decent yields. Of course, Microsoft could partition its next-gen AI processor into several smaller compute chiplets linked through Intel's EMIB or Foveros technologies, but that could hurt performance and efficiency, so we are most likely talking about a big die (or dies) close to the reticle limit of EUV tools, which is 858 mm^2.
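The yield argument above can be made concrete with a first-order Poisson yield model, Y = exp(-D0 * A), which is a standard back-of-the-envelope tool for exactly this die-size-versus-chiplet tradeoff. The defect densities below are hypothetical round numbers chosen for illustration; Intel does not publish D0 figures for 18A.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """First-order Poisson yield model: Y = exp(-D0 * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Hypothetical defect densities (defects/cm^2) -- real 18A numbers are not public.
for d0 in (0.1, 0.3, 0.5):
    mono = poisson_yield(d0, 820)         # near-reticle monolithic die (Maia 100 size)
    chiplet = poisson_yield(d0, 820 / 4)  # one of four 205 mm^2 chiplets instead
    print(f"D0={d0}: monolithic die {mono:.1%}, single chiplet {chiplet:.1%}")
```

The point the sketch makes is simply that yield falls off exponentially with die area, which is why a near-reticle design is only viable once the defect density is already low.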

To de-risk such a large component, Intel and Microsoft are almost certainly running DTCO loops, where Intel tunes transistor and metal stack parameters for Maia's workloads and performance targets. In addition, Microsoft could embed spare compute arrays or redundant MAC blocks into the next-gen Maia layout to enable post-manufacturing fusing or repair, which is what companies like Nvidia do with their designs.
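The repair idea can also be quantified: if defects land on identical compute blocks independently, the fraction of dies salvageable by fusing off bad blocks is a binomial tail probability. All numbers below (block count, block area, defect density, spare count) are hypothetical, chosen only to show how much a handful of spares recovers.

```python
import math

def unit_yield(d0: float, unit_area_mm2: float) -> float:
    """Poisson yield of one compute block of the given area."""
    return math.exp(-d0 * unit_area_mm2 / 100.0)

def repaired_yield(d0: float, unit_area_mm2: float, units: int, spares: int) -> float:
    """Probability that at most `spares` of `units` identical blocks are
    defective (binomial tail), i.e. the die is repairable by fusing."""
    p_good = unit_yield(d0, unit_area_mm2)
    p_bad = 1.0 - p_good
    return sum(math.comb(units, k) * p_bad**k * p_good**(units - k)
               for k in range(spares + 1))

# Hypothetical layout: 128 MAC arrays of 4 mm^2 each, D0 = 0.5 defects/cm^2.
print(f"no spares: {repaired_yield(0.5, 4.0, 128, 0):.1%}")
print(f"8 spares:  {repaired_yield(0.5, 4.0, 128, 8):.1%}")
```

With these made-up numbers, a compute fabric that would yield poorly as a perfect-or-scrap design becomes highly salvageable with a few spare blocks, which is why redundancy is standard practice on large accelerators.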

Meanwhile, the big question is what exactly Intel Foundry will produce for Microsoft, and when. Based on the latest rumors, Microsoft is currently working on a next-generation processor codenamed Braga (Maia 200?) that will use TSMC's 3nm node and HBM4 memory, supposedly due in 2026, as well as Clea (Maia 300?) due later.

 
On the other hand: https://www.reuters.com/business/microsofts-next-gen-ai-chip-production-delayed-2026-information-reports-2025-06-27/

June 27 (Reuters) - Microsoft's (MSFT.O) next-generation Maia AI chip is facing a delay of at least six months, pushing its mass production to 2026 from 2025, The Information reported on Friday, citing three people involved in the effort.

When the chip, code-named Braga, goes into production, it is expected to fall well short of the performance of Nvidia's (NVDA.O) Blackwell chip that was released late last year, the report said.

Microsoft had hoped to use the Braga chip in its data centers this year, the report said, adding that unanticipated changes to its design, staffing constraints and high turnover were contributing to the delay.

Microsoft did not immediately respond to a Reuters request for comment.

Like its Big Tech peers, Microsoft has focused heavily on developing custom processors for artificial intelligence operations and general purpose applications, a move that would help reduce the tech giant's reliance on pricey Nvidia chips.

Cloud rivals Amazon (AMZN.O) and Alphabet's (GOOGL.O) Google have both raced to develop chips in-house, customized for their specific needs with the goal of improving performance and reducing costs.

Microsoft had introduced the Maia chip in November 2023, but has lagged its peers in ramping it up to scale.

Google, meanwhile, has seen success with its custom AI chips - called Tensor Processing Units - and in April unveiled its seventh-generation AI chip designed to speed the performance of AI applications.

Amazon in December also unveiled its next-generation AI chip Trainium3 that is set to be released late this year.
 
Whoa, that does not sound like a good combination. Anyone here with insights about this situation?

I think we all knew Microsoft would be a likely Intel Foundry customer. The question is: Is the delay due to 18A or Microsoft? Reports cited staffing issues, design changes, and employee turnover. I hope there is a spirited Q&A on the investor call this coming Thursday. Lip-Bu needs to bring his A game, absolutely!

Chatter at the latest conferences and Silicon Valley rumors suggest Intel 18A/18A-P is doing quite well and customers are lining up, so the NOT TSMC market is alive and well. From ChatGPT:

[attached screenshot of ChatGPT's answer]
 
I asked ChatGPT:

Who coined the term NOT TSMC Market?

Daniel Nenni, founder and editor of SemiWiki.com, coined the term "NOT TSMC Market Segment" to describe the portion of the semiconductor foundry market consisting of companies and customers seeking alternatives to TSMC for manufacturing, often due to factors like risk diversification, geopolitical concerns, or capacity constraints.

Key Details on the Term and Its Origin
  • First Usage: Nenni introduced the concept in a May 20, 2025, SemiWiki article titled "Intel Foundry is a Low Risk Alternative to TSMC". He explicitly states: "As I have said before, there is a foundry market segment that I call the 'NOT TSMC' market, companies who want an alternative to TSMC." This phrasing indicates it may have been used informally in prior discussions (e.g., forums or events), but this article marks its clearest public attribution.
  • Context: The term highlights opportunities for competitors like Intel Foundry, Samsung, or GlobalFoundries in non-TSMC-dominated segments. Nenni notes challenges, such as Samsung's struggles at advanced nodes (e.g., 3nm and 2nm), which caused the "NOT TSMC market [to come] crashing down." He positions Intel as a "low-risk" option, emphasizing integrated design-foundry models and nodes like Intel 14A.
Nenni, a veteran semiconductor analyst with decades in EDA and IP, frequently uses SemiWiki to break down foundry dynamics. If you're referencing a specific article or want analysis of related terms (e.g., how this ties to TSMC's non-AI segments), let me know for more details!

:cool:
 