Updating our current logic density benchmarking methodologies

How do we measure the maximum library density for nodes that support mixed row heights?

  • Based on the density of the highest-density library that is usable in a standalone fashion
    Votes: 3 (75.0%)
  • The geo mean of the libraries that make up the highest-density configuration
    Votes: 1 (25.0%)

  Total voters: 4
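
For what it's worth, here's a minimal sketch (in Python) of how the two poll options would differ numerically. The density figures below are made up purely for illustration -- they are not published numbers for any real node or library.

```python
import math

# Hypothetical transistor densities (MTr/mm^2) for the two libraries that make
# up a mixed-row-height configuration. Illustrative values only.
library_densities = {
    "high-density (HD) library": 290.0,
    "high-performance (HP) library": 180.0,
}

# Poll option 1: quote the density of the densest library that is usable
# in a standalone fashion.
standalone_max = max(library_densities.values())

# Poll option 2: quote the geometric mean of the libraries that make up the
# highest-density configuration.
geo_mean = math.prod(library_densities.values()) ** (1.0 / len(library_densities))

print(f"Standalone max : {standalone_max:.1f} MTr/mm^2")
print(f"Geometric mean : {geo_mean:.1f} MTr/mm^2")
```

Since the geometric mean of two positive numbers always lies between them, the two methodologies diverge most when the HD and HP libraries are far apart in density.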
Well, they do have a track record of talking down their competition. Rather like how Steve Jobs swore that humans only needed 72 dpi screens -- even though Windows had supported high-resolution displays for years -- right up until Apple shipped Retina. FWIW, Intel claims it reduces cost.

You have pretty much unobstructed access to power with the Intel approach (which is not the one IMEC has published), with very low-resistance vias. Any source/drain can connect without fuss, from the look of it. The supply connections look very regular, leaving freedom for the signal lines. Hard to see why that makes things more difficult for analog, or for digital.

Emphasis on working with the tools vendors. Intel have likely learned that lesson.

I agree, it has risks. I see those mostly around the integrity and reliability of the very thin remaining silicon layer. Heat removal may be a concern too, though there will be a lot of copper on the backside and only a short distance to the heat-removal solution.

Overall I think it is a smart bet for Intel. One of the few things that could conceivably put them out front again, after years. Shows their engineers - and their managers - still have gumption.

Don't see where "talking down the competition" comes from -- we have no intention of using Intel until their level of support for foundry customers and IP availability is comparable with TSMC's, which I suspect will never happen, so there's no incentive for TSMC to talk down BSP. In fact the opposite is true: they've been clear about its advantages, but also about the disadvantages in the short term, especially cost and IP availability, which they see as hugely important -- the TSMC IP ecosystem is one reason they are so successful.

Nobody is arguing about the technical advantages of BSP, they're crystal clear -- what I'm talking about here is the commercial and risk issues of switching to it, especially IP support. If you've never done any tricky N3 layout (e.g. high-speed analogue) you won't have any idea just how much time and effort it actually takes, not just to find the best solution -- which is not always obvious because of the many restrictive and interacting rules -- but to get a DRC-clean layout; IIRC the manual is over 1600 pages long, and I expect N2 will be even worse. IP suppliers are not going to invest in doing this -- and especially in the BSP learning curve -- unless they're convinced there will be enough customers for their IP, which I think is by no means clear for N2, though it will almost certainly be the case for the following node.

There's also the question of what benefits it delivers for the customer, and again it's not clear that N2 with BSP is worth it for many ASIC customers. We are a TSMC customer, and knowing what our products are and what BSP would do for them, TSMC's advice was that BSP was unlikely to be appropriate for us in N2 -- though it is for things like CPUs and HPC, which have very different priorities, so you're right to say that it *is* the right decision for Intel, because this is where most of their products are.

The same can be seen for the basic process, including density -- Intel has always made choices which prioritise speed over density and cost (including yield!), because it's the right thing to do for high-margin high-power high-speed CPUs. TSMC have put more priority on density, cost and yield because this is what the majority of their customer base wants -- though recent processes have also had a lot of options (both raw process and DTCO) targeted at HPC and CPU applications, so this isn't as true as it once was.

So BSP is undoubtedly the right choice now for Intel, and -- assuming no killer problems emerge! -- will be for mainstream TSMC processes in future. But I still think that N2 with BSP will be a bit of a "dip-your-toes-in-the-water" process for TSMC, to see how much traction it actually gets from their customers, and that most will wait until the following node to go down the BSP route.

It's not all about the raw technology and its advantages, in real life other things are as or more important... ;-)
 