
What percentage of the data center market will be AI/ML

Arthur Hanson

Well-known member
How long will it be until AI/ML will dominate the data center using AI/ML optimized software? Will datacenters be the optimum way to utilize AI/ML resources? If not, what will the alternative be?
 
That's a challenging question, especially since many software applications running in data centers appear to be moving to a mixed model, with CPUs handling the core data and legacy app functionality but with agentic/AI front-ends. It also seems like GPUs are increasingly used for workloads that look like traditional CPU territory, especially read-heavy SQL analytics and vector-ish operations, but they are not replacing CPUs for OLTP or generic app logic.
GPUs are creeping in on
• Analytical SQL (OLAP): GPU-accelerated databases (HeavyDB, Kinetica, SQream, Brytlyt, Microsoft GPU SQL prototypes, etc.) show roughly 5–25× speedups on scan/aggregate/join-heavy queries versus CPU engines on similar cost hardware.
• GPU layers on top of CPU DBs: Systems like SiriusDB and others bolt a GPU execution engine under DuckDB/Postgres-style SQL, offloading big parallel scans, joins, and aggregates to GPUs while leaving the rest to the CPU engine.
• Vector/AI features inside RDBMS: Oracle Database 23ai, SQL Server 2025 previews, and cloud warehouses now use GPUs for vector embedding generation, vector index build, and some search paths; the rest of the SQL engine stays CPU.
These are all “SQL workloads,” but specifically analytics that are highly parallel—column scans, group-bys, hash joins, vector ops.
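To see why these particular SQL operations benefit from GPUs, here is a minimal pure-Python sketch of a columnar group-by and a hash join (hypothetical toy tables, not any real GPU engine's code): each row's work is independent, which is exactly the shape that maps onto thousands of GPU threads.

```python
# Toy illustration of why scan/aggregate/join-heavy OLAP queries parallelize
# well. The tables and column names are made up for the example.
from collections import defaultdict

# Columnar storage: one list per column (stand-in for GPU device arrays).
order_region = ["east", "west", "east", "south", "west", "east"]
order_amount = [120.0,   80.0,  200.0,   50.0,   90.0,   30.0]

def group_by_sum(keys, values):
    """SELECT key, SUM(value) ... GROUP BY key. Each (key, value) pair is
    independent, so a GPU engine can aggregate partitions in parallel and
    merge partial sums; here we simply do it serially."""
    acc = defaultdict(float)
    for k, v in zip(keys, values):
        acc[k] += v
    return dict(acc)

def hash_join(build_keys, probe_keys):
    """Hash join: build a hash table on the smaller side, then probe with
    the other. The probe phase is one independent lookup per row --
    embarrassingly parallel, hence the big GPU speedups on join-heavy
    queries. Returns (probe_index, build_index) pairs."""
    table = {k: i for i, k in enumerate(build_keys)}
    return [(i, table[k]) for i, k in enumerate(probe_keys) if k in table]

print(group_by_sum(order_region, order_amount))
# Contrast with OLTP: there is no shared mutable state and no branching per
# row beyond the hash lookup, so the work divides cleanly across threads.
```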
CPUs still dominate
• OLTP / transactional SQL: High‑churn, row‑oriented workloads (banking, ERP, web apps) still run on CPU-only engines; the fine-grained locking, branching, and latency sensitivity don’t map well to GPUs.
• General application logic: Orchestration, query planning, connection handling, stored procedures, business rules are CPU-heavy and stay that way. GPUs just act as accelerators for tight numeric kernels.
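For contrast, here is an equally minimal sketch (hypothetical accounts table, not a real database engine) of the OLTP pattern described above: a short, branchy, latency-sensitive transaction with fine-grained per-row locking. Thousands of divergent little control flows like this map poorly onto GPU warps, which want every thread executing the same instruction.

```python
# Toy OLTP-style transaction. Names and business rule are invented for the
# example; the point is the shape of the work, not the banking logic.
import threading

accounts = {"alice": 100, "bob": 50}
locks = {name: threading.Lock() for name in accounts}  # per-row locks

def transfer(src, dst, amount):
    """Classic transactional pattern: acquire two row locks in a fixed
    global order (to avoid deadlock), check a business rule, branch on the
    result, then mutate. Serial, data-dependent control flow -- not a wide
    data-parallel kernel."""
    first, second = sorted([src, dst])
    with locks[first], locks[second]:
        if accounts[src] < amount:  # data-dependent branch
            return False            # transaction "rolled back"
        accounts[src] -= amount
        accounts[dst] += amount
        return True

print(transfer("alice", "bob", 30), accounts)
```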
 
How long will it be until AI/ML will dominate the data center using AI/ML optimized software? Will datacenters be the optimum way to utilize AI/ML resources? If not, what will the alternative be?

I think a lot of AI will run on-premises. Individual companies (banks, semiconductor companies, etc.) will do localized training, AI/ML development, and operations.

Remember, OpenAI, xAI, Google, etc. are building HUGE LLMs used for generative AI. Individual companies, not so much. If the individual company is already in the cloud, then that is a different story, but those might not be AI datacenters per se.
 