
Exclusive: Jensen Huang’s Remark Sparks Storage Rally—Phison CEO Responds from CES

karin623

Member
A single remark from Jensen Huang was enough to jolt the global storage market. During his CES keynote, the Nvidia CEO described a new AI storage architecture that could become “the largest storage market in the world,” triggering a sharp rally across memory stocks. As traders rushed to interpret the implications, a photo circulating online appeared to show a Phison flash controller inside Nvidia’s next-generation Vera Rubin server—fueling speculation that a major supply-chain shift was underway.

Speaking directly from CES, Phison CEO K.S. Pua offered an exclusive and more nuanced response, cutting through the market noise. This piece unpacks what Huang’s comment really means, where the true storage opportunity lies, and why flash memory is becoming unavoidable in the future of AI—even if the biggest winners are not who the market initially expects.

 
Great report.

I really do not understand why Micron does not have grander expansion plans. Are they not as competitive as Samsung and Hynix? Why isn't memory manufacturing in America bigger? Without memory there will be no need for logic.
 
It looks like the context is AI SSDs:
Yup, data center disaggregated inference can benefit tremendously from a new tier of fast, high-capacity shared KV-cache storage instead of leveraging the traditional CPU-oriented storage hierarchy/tiering (see the sketch after the list below).


We’re seeing GPU-centric hardware turn into application-specific accelerators for transformer-based LLMs, built from specialized, co-optimized components:
* Context/prefill hardware - Rubin CPX
* Shared KV cache storage (inference context memory)
* Decode - Groq (coming)?
* Processor interconnect/networking/switching
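
To make the tiering idea concrete, here's a minimal sketch of a shared KV-cache store that spills from GPU HBM to host DRAM to NVMe flash. This is my own illustration, not Phison's or NVIDIA's design; the class name, tier capacities, and the simple LRU demotion policy are all illustrative assumptions.

```python
from collections import OrderedDict

class TieredKVCache:
    """Hypothetical LRU-style KV-cache tiering: GPU HBM -> host DRAM -> NVMe flash."""

    def __init__(self, hbm_blocks=1024, dram_blocks=8192):
        self.hbm = OrderedDict()    # fastest tier: GPU HBM
        self.dram = OrderedDict()   # middle tier: host DRAM
        self.flash = {}             # capacity tier: NVMe flash (treated as unbounded here)
        self.hbm_cap = hbm_blocks
        self.dram_cap = dram_blocks

    def put(self, block_id, kv_block):
        """Insert a KV block into HBM, demoting least-recently-used blocks down the tiers."""
        self.hbm[block_id] = kv_block
        self.hbm.move_to_end(block_id)
        if len(self.hbm) > self.hbm_cap:         # HBM full: demote oldest block to DRAM
            victim, data = self.hbm.popitem(last=False)
            self.dram[victim] = data
        if len(self.dram) > self.dram_cap:       # DRAM full: demote oldest block to flash
            victim, data = self.dram.popitem(last=False)
            self.flash[victim] = data            # a real system would issue an NVMe write here

    def get(self, block_id):
        """Fetch a KV block, promoting it back to HBM on a hit in a lower tier."""
        for tier in (self.hbm, self.dram, self.flash):
            if block_id in tier:
                data = tier.pop(block_id)
                self.put(block_id, data)         # promote on access
                return data
        return None                               # miss: the prefill must be recomputed
```

The flash tier is what would turn KV-cache reuse from a per-GPU optimization into shared, rack-scale inference context memory, which is the storage opportunity Huang seems to be gesturing at.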

Also, it's probably worth reading Jensen's post-CES keynote Q&A with analysts to get more of the context.

 
I really do not understand why Micron does not have grander expansion plans.

Perhaps they don't believe the demand is sustainable themselves?

edit: I am hearing some rumors regarding NCNR processes being installed. Guess there indeed is some doubt regarding 2026 demand...
 
Great report.

I really do not understand why Micron does not have grander expansion plans. Are they not as competitive as Samsung and Hynix? Why isn't memory manufacturing in America bigger? Without memory there will be no need for logic.

An article on LinkedIn says they are breaking ground this week on a $100Bn expansion in Upstate New York.
 
Breaking ground doesn't mean HVM

Building semiconductor manufacturing in the US is an insurance policy. The focus has been on logic since the majority of leading-edge silicon comes from Taiwan. What about memory? The majority comes from Korea. How about a memory insurance policy? Today a very small percentage of Micron's manufacturing is US based. Why is the Micron CEO not helping with the new White House Ballroom? Micron got $6B+ in CHIPS Act money. How about doubling that and getting a stake à la Intel? Give Korea a run for their money.

Sanjay Mehrotra is a legend in the memory business (co-founder of SanDisk). Micron made the AI pivot; it is time to build up memory manufacturing in the US, absolutely. Let's hope the megafab in NY goes as planned.
 
Building semiconductor manufacturing in the US is an insurance policy. The focus has been on logic since the majority of leading-edge silicon comes from Taiwan. What about memory? The majority comes from Korea. How about a memory insurance policy? Today a very small percentage of Micron's manufacturing is US based. Why is the Micron CEO not helping with the new White House Ballroom? Micron got $6B+ in CHIPS Act money. How about doubling that and getting a stake à la Intel? Give Korea a run for their money.

Sanjay Mehrotra is a legend in the memory business (co-founder of SanDisk). Micron made the AI pivot; it is time to build up memory manufacturing in the US, absolutely. Let's hope the megafab in NY goes as planned.
Building commodity products in the US? I'd like to see how that works without heavy subsidy or tariff protection.

Of course, if the current supply/demand balance becomes the new norm, then I didn't say anything!
 
What Huang said basically lit a fuse under the whole storage sector. He's pointing out that AI is about to need massive, fast data pipelines, not just GPUs. That's why flash and controllers suddenly look so important. The Phison angle just adds fuel, but the real story is that AI servers are becoming storage-heavy machines. Even if Nvidia builds the brains, someone still has to feed them data fast enough, and that's where this market is headed.
 
Building commodity products in the US? I'd like to see how that works without heavy subsidy or tariff protection.
Of course, if the current supply/demand balance becomes the new norm, then I didn't say anything!

I'm hoping the new megafabs will be mostly automated so the cost will not be huge, but it will of course be more costly than in Korea, since the Korean government supports semiconductor manufacturing the way the Taiwanese government does.
 
What Huang said basically lit a fuse under the whole storage sector. He’s pointing out that AI is about to need massive, fast data pipelines, not just GPUs.
I think there's even a more general message - transformer-based inference needs a whole different way of looking at memory and storage. Both Google and NVIDIA are putting an exclamation point on that in different ways: NVIDIA with this stuff and Groq licensing, Google via the paper below.

" Thus far, no GPU/TPU was designed solely for LLM inference. Because Prefill is similar to training whereas Decode differs significantly, two challenges make GPUs/TPUs inefficient for Decode."

Challenges and Research Directions for Large Language Model Inference Hardware
Xiaoyu Ma and David Patterson, Google
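
To see why Prefill resembles training while Decode does not, here's a rough back-of-envelope sketch. The numbers and the function name are my own illustration, not from the Ma/Patterson paper: for a d x d weight matrix, a step over T tokens does about 2*T*d^2 FLOPs against roughly d^2 weight reads, so arithmetic intensity grows with T.

```python
def arithmetic_intensity(tokens_per_step, d_model=8192, bytes_per_weight=2):
    """FLOPs per byte of weight traffic for one d_model x d_model matmul."""
    flops = 2 * tokens_per_step * d_model * d_model      # multiply-accumulates
    weight_bytes = d_model * d_model * bytes_per_weight  # weights read once per step
    return flops / weight_bytes

# Prefill: thousands of prompt tokens amortize each weight read.
print(f"prefill (4096 tokens): {arithmetic_intensity(4096):,.0f} FLOPs/byte")
# Decode: one new token per step, so every weight byte buys ~1 FLOP.
print(f"decode  (1 token):     {arithmetic_intensity(1):,.0f} FLOPs/byte")
```

Prefill batches thousands of tokens per weight read and stays compute-bound; decode generates one token at a time and ends up bound by memory bandwidth, which is exactly the gap that KV-cache tiering and decode-specialized hardware are trying to close.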

 