April 19 (Reuters) - Alphabet's Google (GOOGL.O) is in talks with Marvell Technology (MRVL.O) to develop two new chips aimed at running AI models more efficiently, The Information reported on Sunday, citing two people with knowledge of the discussions.
One of the chips is a memory processing unit designed to work with Google's tensor processing unit (TPU) and the other chip is a new TPU built specifically for running AI models, the report said.
Google has been pushing to make its TPUs a viable alternative to Nvidia's dominant GPUs. TPU sales have become a key driver of growth in Google's cloud revenue as it aims to show investors that its AI investments are generating returns.
Reuters could not immediately verify the report. Google and Marvell did not immediately respond to requests for comment.
The companies aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production, according to the report.
Google has been using ASIC providers since day one.
Two questions: Why don't they just do it themselves? Why would Google switch providers after working with AVGO for so many years?
IP was a big reason why companies chose ASIC providers. Avago used to have the best Serdes so Google worked with them for many years. Price is also a big reason for changing ASIC providers but it has to be a big difference since trust is also a big thing.
Maybe Google is looking for supply chain resilience? Google could also be using Marvell as a pricing leverage for Broadcom? The ASIC business is a difficult one.
Thousands of CEOs admit AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago
In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age: following the advent of transistors, microprocessors, integrated circuits, and memory chips in the 1960s, economists and companies expected these new technologies to disrupt workplaces and produce a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% over 1948–1973 to 1.1% after 1973...
We found that industries in states that were more exposed to AI experienced faster productivity growth beginning in 2021 – before ChatGPT reached the public – driven by enterprise tools already embedded in professional workflows, including GitHub Copilot for software development, Jasper for marketing and content writing, and Microsoft’s GPT-3-powered business applications. In 2024, for example, industries whose AI exposure was one standard deviation higher saw gains of 10% in productivity, 3.9% in jobs, and 4.8% in wages relative to comparable industries in the same state.
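To make "one standard deviation higher exposure" concrete: figures like these come from regressing outcome growth on standardized AI-exposure scores. Here is a minimal sketch with synthetic data; the study's actual specification (a state-by-industry panel with controls) is more involved:

```python
# Minimal sketch of a "per 1 SD of AI exposure" estimate.
# All data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                    # synthetic state-industry cells
exposure = rng.gamma(2.0, 1.0, n)          # raw AI-exposure scores
z = (exposure - exposure.mean()) / exposure.std()  # standardize to SD units

# Assume a true effect of ~10% productivity growth per 1 SD of exposure,
# matching the figure quoted above, plus noise.
prod_growth = 0.10 * z + rng.normal(0, 0.05, n)

# OLS slope = estimated effect of a one-standard-deviation increase.
slope, intercept = np.polyfit(z, prod_growth, 1)
print(f"estimated effect per 1 SD of AI exposure: {slope:.3f}")
```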
I think most experts now say that both the minicomputer/PC and the internet-driven productivity rises happened after the initial commercial booms, for various reasons.
Perplexity answer below about computer / PC and Solow Productivity Paradox:
The productivity impact of the PC is one of the most studied and debated questions in economics. Here’s the full arc:
The Paradox Phase (1970s–early 1990s)
The PC era began with a striking puzzle. As computing capacity in the U.S. increased a hundredfold during the 1970s and 1980s, labor productivity growth actually slowed — from over 3% annually in the 1960s down to roughly 1% in the 1980s. This prompted Robert Solow’s famous 1987 quip: “You can see the computer age everywhere but in the productivity statistics” — which economists named the Solow Productivity Paradox.[wikipedia +1]
Why the Gains Were Hard to See
Several explanations emerged:
• Measurement failure — Much computer investment flowed into services (finance, insurance, legal) where output is notoriously hard to measure, so gains were real but invisible in national statistics[stlouisfed]
• Learning lags — Studies showed it took 2–5 years for IT investments to meaningfully impact organizations, creating a “productivity J-curve”[wikipedia]
• Capital substitution — Most firms were substituting cheap computers for expensive labor and older capital, which shows up as capital deepening rather than total factor productivity (TFP) growth (made precise in the identity after this list)
• Small share of GDP — Computers were a modest fraction of total investment, so even enormous growth in computer capital translated to small aggregate output effects
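The capital-deepening distinction above can be made precise with the textbook growth-accounting identity (a standard decomposition, not something from the cited sources):

```latex
% Textbook growth accounting with Cobb-Douglas output Y = A K^{\alpha} L^{1-\alpha}:
% labor-productivity growth splits into capital deepening plus TFP growth.
\[
  \Delta \ln\!\left(\frac{Y}{L}\right)
  = \alpha\,\Delta \ln\!\left(\frac{K}{L}\right)
  + \Delta \ln A
\]
% Substituting cheap computers for labor raises K/L (the first term)
% without necessarily raising A, so the gains register as capital
% deepening rather than TFP growth.
```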
The Payoff Arrives (mid-1990s onward)
The paradox largely resolved in the 1990s. In computer-intensive manufacturing sectors, labor productivity growth jumped to 5.7% per year from 1990 to 1996, versus just 2.6% in non-computer-using manufacturing sectors. The computer-producing sector alone — less than 3% of private GDP — was responsible for one-third of total U.S. TFP growth in the 1980s, as Moore’s Law drove costs down 17%+ per year.[issues]
NBER research also found that PC adoption had modest positive effects on wages and employment overall, though it restructured jobs significantly — raising skill requirements and shifting demand toward college-educated workers.[nber]
The Bottom Line
The PC’s productivity impact was large but delayed by roughly 15–20 years, requiring organizational restructuring, workforce upskilling, and sufficient diffusion before aggregate statistics reflected the gains. The electricity dynamo analogy is apt — electrification similarly took decades to show up in productivity data, as firms had to redesign factories around the new technology. This historical pattern is now the primary framework economists use when debating whether generative AI will follow the same trajectory.
I would agree on all three reasons:
* develop and leverage serdes knowledge more broadly, which in turn
* enables pricing leverage
* and a broader, more resilient supply chain.
We’re in a world where scale-up, scale-out, and all-over point-to-point connectivity seem to be as important as the core AI processors, memory, and the storage hierarchy. And they all have to work together cohesively under the model/software and application stack for extreme co-optimization.
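On the connectivity point, a toy calculation (my own illustration, nothing from the announcements) of why direct point-to-point, all-to-all links beat a torus on hop count, and hence latency:

```python
# Toy comparison of average hop count: 3D torus vs. direct all-to-all.
# My own illustration, not a description of any vendor's actual fabric.
from itertools import product

def torus_avg_hops(k):
    """Mean shortest-path hops between distinct nodes of a k x k x k torus."""
    nodes = list(product(range(k), repeat=3))
    def dist(a, b):
        # Per-dimension distance with wraparound, summed over 3 dims.
        return sum(min(abs(x - y), k - abs(x - y)) for x, y in zip(a, b))
    total = pairs = 0
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            total += dist(a, b)
            pairs += 1
    return total / pairs

k = 4  # 64 nodes
print(f"3D torus ({k**3} nodes): avg {torus_avg_hops(k):.2f} hops")
print("all-to-all: 1 hop between any pair")
# Collectives pay that multi-hop cost on a torus, which is one reason
# denser point-to-point links cut latency.
```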
Along with its internal ASIC development team, Google now works with at least four external ASIC partners: Broadcom, MediaTek, Intel, and a new addition - Marvell. Given Google’s scale, volume, and diverse needs, that’s a reasonable approach.
Each partner is working on different units. Marvell should be working on the memory processing unit. I don't think they are competing with the others (at least not significantly).
Memory processing is one of the more difficult parts of the design, and it is what makes Nvidia GPUs better than ASICs.
The video and associated announcements for TPU 8t and TPU 8i are big - even bigger than the Marvell news, in my book.
* Splitting out separate chips for training vs. inference, and adding support for 4-bit data types for inference.
* Going for far denser connectivity for inference with TPU 8i to enable all-to-all, vs. the original hyper-torus (50% lower latency).
* Huge increase in onboard SRAM to create a much larger KV cache and reduce idle time (3x the previous generation; see the sizing sketch after this list).
* 80% cheaper to produce low-latency tokens vs. the previous generation.
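On the SRAM/KV-cache bullet: a back-of-the-envelope sketch of why on-chip SRAM capacity gates how many tokens of KV cache stay resident on-die. All model dimensions below are hypothetical, not TPU or model specs:

```python
# Back-of-the-envelope KV-cache sizing. Model dimensions are hypothetical.

def kv_bytes_per_token(n_layers, n_kv_heads, head_dim, dtype_bytes):
    # Each layer stores one K and one V vector per KV head per token.
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

# Hypothetical 70B-class model with grouped-query attention, 8-bit KV cache.
per_tok = kv_bytes_per_token(n_layers=80, n_kv_heads=8, head_dim=128,
                             dtype_bytes=1)
print(f"KV cache per token: {per_tok / 1024:.0f} KiB")

sram_budget = 1 * 2**30   # assume 1 GiB of on-chip SRAM for KV cache
print(f"tokens resident in SRAM: {sram_budget // per_tok:,}")
# Tripling SRAM triples resident tokens, so more requests' context stays
# on-die instead of stalling on off-chip memory -- the "reduce idle time"
# claim in the bullet above.
```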
Basically all the big AI players except AMD have announced specialized chips to reduce inference latency, plus enhancements for disaggregated inference to produce lower-TCO, lower-power tokens at lower latency (see the sketch after the list below). Get ready to see continued data-center-level co-optimization through the whole system/software stack, models, and hardware from all the solutions below.
* NVIDIA GPU/LPU (Groq)
* Amazon / Cerebras (Trainium 3 w CSE-3)
* Intel / SambaNova (Intel GPU w RPU)
* Google TPU 8t w/ TPU 8i, and others.
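For readers new to disaggregated inference: the compute-bound prefill phase and the memory-bandwidth-bound decode phase run on separate hardware pools, with the KV cache handed off between them. A toy sketch of the control flow (no real model or vendor API):

```python
# Toy sketch of disaggregated inference: prefill and decode run on
# separate worker pools and hand off the KV cache. No real model here,
# just the control flow the vendors above are optimizing.

def prefill_worker(prompt_tokens):
    """Compute-bound phase: process the whole prompt at once and
    build the KV cache (stubbed as a list of per-token entries)."""
    return [f"kv({tok})" for tok in prompt_tokens]

def decode_worker(kv_cache, max_new_tokens):
    """Memory-bandwidth-bound phase: generate one token at a time,
    appending to the KV cache received from the prefill pool."""
    output = []
    for step in range(max_new_tokens):
        token = f"tok{step}"        # stand-in for a real sampling step
        kv_cache.append(f"kv({token})")
        output.append(token)
    return output

# Handoff: in a real system the KV cache moves over the fabric between
# pools, which is why interconnect bandwidth and latency dominate the
# TCO math for low-latency tokens.
kv = prefill_worker(["the", "quick", "brown", "fox"])
print(decode_worker(kv, max_new_tokens=3))
```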
Wonder if we'll see something interesting at AMD AI Dev Day at the end of this month.
I think for all of the hype about Google volumes, compared to merchant chip vendor volumes Google isn't driving big unit volume numbers. Every annual unit volume estimate I've seen for TPUs is less than five million units. More like three million units. Even with Axion, I doubt Google gets much above five million units per year. Their pipeline also looks too shallow to justify a leading-edge analog design team and keep it busy enough. And the IP argument is a good one, but is Google really using IP that only Broadcom or Marvell would uniquely have?
As for why Google would switch, their internal chip designs are a lot more important to their financial situation than they were even five years ago. What they tolerated from Broadcom in 2021 may not be what they think they have to tolerate in 2026 and beyond. Just Broadcom's demeanor in press announcements may be enough to annoy the egos at Google.
Broadcom for its part LOVES a good margin and often seeks to work their leverage in search of it. Marvell may have offered better terms to work with a big name in this space.
Both these vendors offer the required IP, both with lots of designs in the field, so the IP is known good. These are the usual reasons to buy IP rather than roll your own.
I agree. Especially for Microsoft and Google Axions; but Google does its own proprietary networking for TPUs, so the IP opportunities with TPUs look smaller.
Google & Broadcom recently announced a long-term collaboration on TPUs through 2031.
Based on announcements in April 2026, Broadcom and Google have entered into a long-term agreement extending their partnership for developing custom Tensor Processing Units (TPUs) and networking components through 2031. This deal secures Broadcom as the primary silicon implementation partner for Google’s AI infrastructure for the next five years.