
Alphabet hits $4 trillion valuation as AI refocus lifts sentiment

Daniel Nenni

Admin
Staff member

Jan 12 (Reuters) - Alphabet hit a $4 trillion market valuation on Monday, as the Google parent's sharpened artificial intelligence focus allayed doubts about its strategy and thrust it back to the forefront of the high-stakes AI race.

The tech giant on Wednesday surpassed Apple in market capitalization for the first time since 2019, becoming the second most valuable company in the world.

The milestones mark a remarkable change in investor sentiment for Alphabet, with its stock surging about 65% in 2025, outperforming its peers on Wall Street's elite group of stocks, the so-called Magnificent Seven.

The stock has gained another 6% so far this year, and was last up 1.1%.

The shift was fueled by the company quelling concerns that it had let an early AI advantage slip, both by turning a once-overlooked cloud unit into a major growth engine and by drawing a rare tech investment from Warren Buffett's Berkshire Hathaway.

Its new Gemini 3 model has also drawn strong reviews, intensifying pressure on OpenAI after GPT-5 left some users underwhelmed.

A Reuters report said that Samsung Electronics plans to double the number of its mobile devices with AI features powered by Google's Gemini this year.

Google Cloud's revenue jumped 34% in the third quarter, with its backlog of sales contracts not yet recognized as revenue rising to $155 billion.

Renting out to outside customers the self-developed AI chips that were once reserved for internal use has also enabled the unit's breakneck pace of growth.

In a sign of the rising demand, The Information reported that Meta Platforms was in talks to spend billions of dollars on Alphabet's chips for use in its data centers starting in 2027.

Meanwhile, the company's dominant revenue generator - the advertising business - has largely held steady in the face of economic uncertainty and intense competition.

Alphabet is the fourth company to hit the $4 trillion milestone after Nvidia, Microsoft and Apple.

The stock has also benefited from a U.S. judge's ruling in September against breaking up the company, allowing it to retain control of its Chrome browser and Android mobile operating system.

(Reporting by Zaheer Kachwala, Shashwat Chauhan and Johann M Cherian in Bengaluru; Editing by Sriraj Kalluvila)

 
Thank you AI. It is hard to imagine China or anyone else catching up to the US in AI. We now have two multi-trillion-dollar AI companies with leading-edge silicon. Google has hundreds of chip designers and they are hiring big time. Look for a new Google EDA AI group like the ones the other big semiconductor companies have in the works.

I would, however, like to know why it took Reuters 4 people from India to write this puff piece? AI could have done a better job in a matter of seconds. Big changes are coming, absolutely. :ROFLMAO:
 
It will be interesting to see how this evolves.
Apple may start by using Google's Cloud -- but I'd guess Apple will want to control their data (used for AI training), costs and avoid lock-in over time. So they will use Google's technology & chips (and software) and build their own Data Centers. From public announcements, it sounds like Anthropic will build their Google-based Data Centers using that model also.
 
I keep reading about other companies using Google chips in their datacenters, but I continue to be a skeptic. It might be practical for other companies to buy Google's Axion CPU, because Axion uses all standard interfaces like PCIe, Ethernet, and DDR, but for TPUs, I don't see how it's practical. TPUs only connect to Google proprietary networks, so you need their network chips and specialized software stacks, which probably means you also need to run a Google server OS, which is a proprietary version of Linux. Even assuming Google agreed to license all of this stuff, Google would have to have an external support group, which for so much proprietary technology is going to be expensive. And annoying, because once you have external customers they'll want a voice in future hardware and software features, and then Google starts looking like Intel or AMD, but without so much product revenue.

So I'm not buying this notion that companies like Apple will be licensing TPUs or even Axions, no less proprietary networks. If anyone wants Google technology it only looks practical by using Google Cloud.
 
Exactly. Also, why would Apple buy an inferior core design from Google? That wouldn't make sense.
 
I keep reading about other companies using Google chips in their datacenters, but I continue to be a skeptic. It might be practical for other companies to buy Google's Axion CPU, because Axion uses all standard interfaces like PCIe, Ethernet, and DDR, but for TPUs, I don't see how it's practical.
Has Axion gotten any traction ? (compared to TPUs?)
 
The core used by Google's Axion is the Arm Neoverse V2, whereas Apple uses its own custom CPU core in its M4/M5 processors.
The Neoverse cores are typically used in CPUs and ASICs where per core performance is not absolutely critical, and to minimize development costs. For example, every superNIC I'm aware of, with the exception of the Fungible thing Microsoft acquired, uses Neoverse cores. (Fungible uses MIPS.)

The AWS Graviton CPU uses Neoverse cores. Graviton's value proposition to AWS is lots of cores, lots of memory channels, and lots of PCIe for no core development cost, low power (compared to x86), and low cost compared to merchant CPUs. Up to 192 cores per chip, a massive L3 cache, and 96 lanes of PCIe 6.

As I'm sure you know, the Apple M-series is a different animal. The M-series uses a custom core, not an Arm core, and it's not a server chip. For anything other than maximizing gaming performance, especially on laptops, I think the M-series is overall the best client CPU available. For gaming with over-clocking, obviously not, but that's not my thing.
 
Has Axion gotten any traction ? (compared to TPUs?)
Within Google Cloud, yes. Axion is a close analog to the AWS Graviton, and used mostly in cloud infrastructure applications, like storage and database services, supporting video services (ad servers), and like Graviton is on AWS, Axion vCPUs can be leased on Google Cloud. Microsoft has a similar strategy with their Azure Cobalt 100 CPUs. All three cloud CPUs simply displace merchant server CPUs, and I'd make a rough guess at >50% less cost per unit, and the power savings is probably quite substantial.
 
I should also reiterate my opinion that I doubt any of the custom cloud chips, CPUs, TPUs, or networking, will be sold as merchant products (meaning to third parties). If you want to use Google, Microsoft, or AWS chips you have to use them through their designers' cloud services.
 
From CNBC: (among other sources)
  1. Broadcom said in a September call that it had signed a customer that had placed a $10 billion order for custom chips.
  2. On Thursday, CEO Hock Tan revealed that the mystery customer was Anthropic.
  3. Anthropic placed an order for Google’s AI chips, Tan said. Broadcom makes custom chips, including Google’s, which some experts say are more efficient for certain AI algorithms than Nvidia’s chips.
  4. Tan on Thursday also said Anthropic had placed an additional $11 billion order with Broadcom in the company’s latest quarter.
 
I know. So, how is Anthropic going to deploy TPUs without detailed hardware and software specifications, licensed software, and networking chips and boards, all of which have to come from Google? Something smells funny in these announcements.
 
I keep reading about other companies using Google chips in their datacenters, but I continue to be a skeptic. It might be practical for other companies to buy Google's Axion CPU, because Axion uses all standard interfaces like PCIe, Ethernet, and DDR, but for TPUs, I don't see how it's practical. TPUs only connect to Google proprietary networks, so you need their network chips and specialized software stacks, which probably means you also need to run a Google server OS, which is a proprietary version of Linux.

Are the businesses using it "for their own stuff", or is it really just co-locating Google's Cloud in their own datacenter?

i.e. a business that wants to use the software stack, but for some reason has a requirement to keep all data "local"
 
Funny, I've been asking around my "usual suspects" about the possibility of TPU system co-location. One sent me: :ROFLMAO: Another sent me a quote from their Gemini AI facility:

Google's Tensor Processing Units (TPUs) are deployed within Google's own highly optimized data centers and are available to external customers via the Google Cloud platform, not through a general third-party co-location model where customers bring their own hardware.

There are rumors though:


My skeptical usual suspect mentioned above responded to this article by saying Google would have to sell Meta systems, not just chips. He can't imagine chip sales. Frankly, I can't either. Getting large scale-up / scale-out systems with thousands of nodes to work reliably is not easy. And then Google creates a new generation of the TPU and its interconnects, and then what? Meta starts all over again? If I were Meta I'd just use Google Cloud. The incremental R&D cost for Meta to make systems out of Google hardware has got to be more than the possible negotiated mark-up on rented hardware from Google Cloud. How could it not be more? And it would be bad form for Meta to poach Google engineers to do the work. :)

(Note the edit from less to more. Senility must be setting in.)
 
I know. So, how is Anthropic going to deploy TPUs without detailed hardware and software specifications, licensed software, and networking chips and boards, all of which have to come from Google? Something smells funny in these announcements.
Semianalysis did a report on this a couple of months ago.
https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-swing-at-the
I don't think it answers all the questions - but it provides some interesting background and direction. Below is a section:
- - - - - - - - - -
Beyond renting capacity in Google datacenters through GCP, Anthropic will deploy TPUs in its own facilities, positioning Google to compete directly with Nvidia as a true merchant hardware vendor.

In terms of the split of the 1M TPUs:

  1. The first phase of the deal covers 400k TPUv7 Ironwoods, worth ~$10 billion in finished racks that Broadcom will sell directly to Anthropic. Anthropic is the fourth customer referenced in Broadcom’s most recent earnings call. Fluidstack, a gold-rated ClusterMax Neocloud provider, will handle on-site setup, cabling, burn-in, acceptance testing, and remote hands work as Anthropic offloads managing physical servers. DC infrastructure will be supplied by TeraWulf (WULF) and Cipher Mining (CIFR).
 
Yeah, I read that Semianalysis report (I'm on their distribution list), and pretty much everything it said made no sense to me at all. It was clear the people who wrote it had never worked on complex scale-out systems and proprietary networks before. The only part of the Semianalysis article that made any sense at all was that the initial $10B order from Broadcom was not for just chips, but finished systems. Even the finished-systems part makes little sense in the context of Google's proprietary networks, though. I'm just not buying it. And Anthropic using Fluidstack to build the systems, when Google isn't even listed on Fluidstack's website as a partner (but Nvidia is), makes the story look unlikely.

But the best part is what Google posted on their own website:


SUNNYVALE, Calif. and SAN FRANCISCO, Oct. 23, 2025 /PRNewswire/ -- Anthropic today announced a landmark expansion of its use of Google Cloud's TPU chips, providing the company with access to the capacity and computing resources required to train and serve the next generations of Claude models. In total, Anthropic will have access to well over a gigawatt of capacity coming online in 2026.

This represents the largest expansion of Anthropic's TPU usage to date. Anthropic will have access to up to one million TPU chips, as well as additional Google Cloud services, which will empower its research and development teams with leading AI-optimized infrastructure for years to come. Anthropic chose TPUs due to their price-performance and efficiency, and the company's existing experience in training and serving its models with TPUs.

Anthropic and Google Cloud initially announced a strategic partnership in 2023, with Anthropic using Google Cloud's AI infrastructure to train its models and making them available to businesses through Google Cloud's Vertex AI platform and through Google Cloud Marketplace. Today, thousands of businesses utilize Anthropic's Claude models on Google Cloud, including Figma, Palo Alto Networks, Cursor, and others.

"Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI," said Krishna Rao, CFO of Anthropic. "Our customers—from Fortune 500 companies to AI-native startups—depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand while keeping our models at the cutting edge of the industry."

"Anthropic's choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years," said Thomas Kurian, CEO at Google Cloud. "We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood."
Not one mention of Fluidstack or selling TPUs as a merchant vendor. Only Google Cloud, which makes perfect sense. So I'm confused and skeptical.
 