Otellini was just dumb. Minicomputers were also cheaper per unit than mainframes, and microcomputers were cheaper than minicomputers. You can't compare units directly, because a mobile chip costs quite a bit less than a desktop CPU. An A18 chip is reported to sell for $45. An A-20 chip is reported to sell for $280. An Intel Core i9-14900KS lists for $730. From what I have heard, that is why Otellini walked away from the iPhone deal.
As the price and form factor go down, the volume goes up. A lot. I think the smartphone market moves something like 4x more units than the PC market, and that's before you even count tablets.
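To make the "can't compare units directly" point concrete, here is a back-of-the-envelope sketch in Python. The unit volumes are my own illustrative assumptions (roughly the 4:1 smartphone-to-PC ratio mentioned above), and the prices are the reported/list figures quoted above, with the caveat that $730 is a top-end desktop SKU and most PC CPUs sell for far less. Plug in your own numbers.

    # Back-of-the-envelope: unit volume vs. dollar volume.
    # All figures are illustrative assumptions, not sourced data:
    #   - unit volumes roughly reflect the "~4x more smartphones than PCs" claim
    #   - prices are the reported/list figures quoted above ($45 mobile SoC,
    #     $730 for a flagship desktop CPU; typical PC CPUs cost far less)

    markets = [
        # (segment, assumed annual units in millions, assumed price per chip in USD)
        ("smartphone SoCs", 1200, 45),
        ("PC CPUs",          300, 730),
    ]

    for segment, units_millions, price in markets:
        revenue_billions = units_millions * price / 1000
        print(f"{segment:15s} {units_millions:>5}M units x ${price:>3} "
              f"= ${revenue_billions:6.1f}B of silicon revenue")

The takeaway is that raw unit counts and dollar sizes can tell very different stories, which is exactly why per-chip price matters so much in these comparisons.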
As time passes, the smaller solution also usually grows upward and threatens at least the bottom end of the bigger solution's market. So you now have ARM on laptops and beyond.
He is right that inference workloads will dominate training. But I suspect the ideal inference architecture will definitely not be a GPU. Lip-Bu Tan seems to think that inference and agentic AI are going to be much bigger than the training market Nvidia currently dominates, and he is taking steps to get into that market. You'll have to forgive me for assuming he knows what he is doing; his track record is hard to argue with.
