
Intel Foundry is way behind TSMC, but the goal is #2 by 2030

Without High NA EUV
Without Dry Photoresist
Without Pattern Sharpening

Relying only on double / quad patterning -- we know the story of Intel 14nm++++++.
Good luck; we have seen that road before and it doesn't seem to work (one example is Intel 14nm / 10nm, another is N3B).
Is the number of pluses in 14nm correct? I thought there were only 5 pluses.
 
">20% density gain ... full-node scaling" = trigger warning for me :)
That's the reality nowadays -- the basic pitches that set cell size (M0 and CPP) are almost identical for N3/N2/A16/A14; the gate density increases come mainly from other layout/library tweaks usually referred to as DTCO, with labels such as FlexFin and NanoFlex and NanoFlex Pro (and BSPD, and COAG, and SDB, and...) -- or other "special" design rules only allowed in very specific layout regions, like those TSMC introduced in N2 to lower access resistance and parasitic capacitance in "digital-only" areas. Plus the fact that nanosheet gives more drive current in a given area than finFET, so minimum-size gates are faster and high-drive gates are smaller.

In other words "full-node scaling" is largely a fiction nowadays, it doesn't really mean scaling any more at all -- it means the next node with a different set of design rules and new DTCO enhancements, as opposed to a "half-node" which means the same process tweaked to improve PPA slightly (e.g. 10%)... :-(
 
Is the number of pluses in 14nm correct? I thought there were only 5 pluses.
I think there are more in this case.

+++++++ is Intel Style

But all of those "+" are what TSMC is doing:

N5 > N5++ > N5+++ > N5++++ > N5+++++ > N5++++++ > N5+++++++ > N5++++++++ > ....... > N5++++++++++++++
N5 > N4 > N4P > N4X > N3B > N3E > N3P > N3X > ..... > A16 (who knows, it might still be 0.021um^2)

All are FinFET, all are 0.021um^2 SRAM; I've just lost count of how many +s that is, 8 / 9 / 10.
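
For what it's worth, here's the arithmetic behind the "all are 0.021um^2 SRAM" point -- a constant bitcell area means the raw bit density stays flat no matter what the node is called (array-only numbers; usable macro density is lower once periphery and redundancy are included):

```python
# Constant bitcell area = constant raw SRAM density, whatever the node is called.
# Raw array density only -- real macros come out lower once periphery/redundancy are added.

def raw_sram_density_mbit_per_mm2(bitcell_um2):
    bits_per_mm2 = 1e6 / bitcell_um2   # 1 mm^2 = 1e6 um^2
    return bits_per_mm2 / 1e6          # report in Mbit/mm^2

print(raw_sram_density_mbit_per_mm2(0.021))  # ~47.6 Mbit/mm^2 for every variant keeping the 0.021um^2 cell
```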
 

I don't think what TSMC is doing is quite equivalent to Intel's 14++++ execution. Each + for Intel 14nm actually decreased transistor density for logic, in order to improve performance characteristics of the transistors. (Easy example - look up Coffee Lake's density on 14++ vs. Kaby Lake's 14+).

In contrast, TSMC's various processes are still improving logic density (in general), while having side variants focused on performance and/or density. When you go from N5 to N3 to N2 to A16 - you're still getting denser logic.

However, the "side processes" -- N4 to N4P to N4X may have some +/- on density, though I think they're focused more on cost/performance than outright performance, unlike 14++++.

Some articles on Logic (vs. SRAM) density changes between TSMC nodes:
https://semiwiki.com/wikis/industry-wikis/tsmc-n3-process-node-3nm-wiki/ (Compares N3 to N5)
https://semiwiki.com/wikis/industry-wikis/tsmc-n2-process-technology-wiki/ (Compares N2 to N3, scroll down to bottom)
 
You mean add another not-really-big-enough-volume (and late to market!) process to the one they've already got -- plus you need different libraries and IP, which means more effort and investment... :-(

It works for TSMC because they have both the handful of huge-volume HPC customers for BSPD (e.g. A16) and a massive number of small-to-pretty-damn-big customers for FSPD (e.g. N2); the production volumes and revenue for both variants are easily big enough to justify the investment and IP generation by the (also massive) TSMC ecosystem.

I just don't see this working for Intel -- their least bad option is to do what they're doing now, which is to focus all their efforts on BSPD to try and grab a few of the HPC customers for 14A, always assuming they have enough fab capacity to meet demand. They can't compete against TSMC for the FSPD market (where they would also be very late to market, and with higher costs and lower yields), which means supporting a large number of diverse customers (which they're not set up to do) with a wide and deep IP ecosystem (which they don't have).

They're never going to win here, it would only increase their costs further and divert effort from BSPD.
You don't need BSPDN in the first place, it's garbage.
 
But it's not just about cost: the amount of extra resource -- meaning, engineers! -- needed to support both BSPD and FSPD processes is huge because so many things are different. For starters, the layouts are completely different, so all IP (both internally and externally sourced) has to be rebuilt from scratch, and it's not just a few layout tweaks, it's a major rethink -- plus the extraction is different, thermal properties are very different, libraries (standard cell and SRAM) have to be redone from scratch, tool costs double, and customer support effort doubles. You also have to duplicate all the process qualification/reliability analysis because the processes are fundamentally different physically, and this alone is a massive effort and takes a lot of time and wafers to do.

Intel are already likely to be stretched in all these areas just to support BSPD, because traditionally they only had to support internal design teams (so crappy documentation is "OK"); much more effort/resource is needed to properly support external customers -- been there in the past, got the T-shirt. Suggesting that they could easily do all this again for FSPD is not credible; they'd end up with terrible support for both processes instead of barely adequate support for one -- TSMC have been doing all this for years on multiple processes, but that doesn't mean Intel can do the same...

It doesn't matter how much Intel might *want* to support both; the question is whether they *can* support both -- and I don't think they can, at least not today.

There's also the question of why they would realistically want to do this, because all the things that FSPD customers are looking for -- fast TTM, strong IP ecosystem, low cost, high yield, high density, quick TAT -- are the things that TSMC is *very* good at (which is why everyone uses them) and Intel is historically bad at (and still not competitive today). Fighting an opponent on a battleground where they're strong and you're weak is never going to end well... :-(

People who don't understand the huge differences between the two processes are grossly underestimating the cost and difficulty of supporting both, see post from @MKWVentures above... ;-)
So we don't need Intel, and the US doesn't need semiconductor sovereignty.

All we need is TSMC's US factory.
 