
Just How Bad Would an AI Bubble Be?

XYang2023

Well-known member
The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.

If there is any field in which the rise of AI is already said to be rendering humans obsolete—in which the dawn of superintelligence is already upon us—it is coding. This makes the results of a recent study genuinely astonishing.

In the study, published in July, the think tank Model Evaluation & Threat Research randomly assigned a group of experienced software developers to perform coding tasks with or without AI tools. It was the most rigorous test to date of how AI would perform in the real world. Because coding is one of the skills that existing models have largely mastered, just about everyone involved expected AI to generate huge productivity gains. In a pre-experiment survey of experts, the mean prediction was that AI would speed developers’ work by nearly 40 percent. Afterward, the study participants estimated that AI had made them 20 percent faster. In fact, the study found the opposite: using AI made the developers about 19 percent slower.

 
It's different from the dot-com crash because back then the US economy had multiple strong sectors. Nowadays it's seven tech companies hoovering up all the profits, and they're all in on AI, which has yet to really be monetized for anything other than displacing workers (not a productivity gain but a financial one) and making the internet even worse for your mental health. Whether it pops is irrelevant to 95% of the population, because for them it will be bad either way.
 
The Atlantic article linked above was written by a "staff writer", which means he doesn't know what he's talking about, and the article is meant to be sensational and catch eyes and clicks. "The entire US economy" is not being propped up by the promise of AI. More like the stocks of some tech companies like Meta and Microsoft are being propped up by the hope of new applications and revenue streams, but the notion that the entire US economy is being propped up by current LLM and agent technology is just silly. If anything, the entire US economy is being propped up by deficit spending by the federal, state, and local governments much more than AI speculation and spending.

This is the research article referred to in The Atlantic article:

https://arxiv.org/pdf/2507.09089
Four authors who work for a non-profit founded by one of them, Beth Barnes, a former OpenAI employee. The paper reads well, until I looked at the details. First of all, much of the code being discussed seems to be written in Python, because that's the only programming language I see discussed in the article. I checked some of the repositories listed in the appendix, and Rust was listed, but Rust is so different from, and more complex to use than, Python that I am skeptical the results for Rust (or Go, C, or C++) would be comparable to the results for Python.

Python is a simplistic interpreted language with automatic memory management and only partial support (I'm being generous) for parallelism. It is simplistically object-oriented, but simple enough to use that it is taught in high schools (and even earlier in many private schools). Open source Python code is widely available for use as a basis for modification, or just to download as code snippets or subroutines. Using AI for code generation in Rust, Go, C, C++, or assembly is at another level altogether, and it can be very useful for generating examples for specific problems.
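To make the parallelism point concrete, here is a minimal sketch (illustrative only, not from the METR paper) of why CPU-bound Python code usually needs separate processes rather than threads under the standard CPython GIL:

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count(n):
    # CPU-bound busy loop: no I/O, so the standard CPython GIL serializes it across threads.
    total = 0
    for i in range(n):
        total += i
    return total

def timed(executor_cls, label):
    # Run the same four tasks under the given executor and report wall-clock time.
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        list(ex.map(count, [5_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed(ThreadPoolExecutor, "threads (GIL-bound, little speedup)")
    timed(ProcessPoolExecutor, "processes (true parallelism, extra startup overhead)")

On a stock CPython build the thread pool gains little over running the tasks serially, while the process pool can use multiple cores at the cost of process startup and pickling overhead.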

I haven't talked to any currently active software engineers who think AI is especially helpful for debugging, unless the problem is in language syntax or simple mistakes in variable usage (e.g., static versus stack allocation). There's no enthusiasm for AI debugging of the subtle problems in very large programs that drive software engineers nuts.

This paper smacks of attention-getting for the authors and their research organization. METR is referred to as a non-profit, but that isn't surprising given the obscure organization of OpenAI itself.
 
Bad article but good topic.

Just from my point of view, today AI is an excellent research tool for people who are already experts in the field and can filter out the AI hallucinations. Companies are deploying it internally, data-mining terabytes of company data to help employees ramp up and be more productive, which will boost profits and reduce headcount. Customer service is where I am seeing it on the personal side. My car dealership has gone AI for appointments and such.

For semiconductor design, AI will help with the employment gap we are currently experiencing. Fewer people will be needed for sure. EDA companies are already using Generative AI to help customers be more productive by leveraging 30 years of design data. Some companies will need fewer engineers; others will use the same amount of engineering resources but be more productive.

This is all Generative AI. Agentic AI is next, where we will move from creating content to taking action. We are already seeing this in EDA with some start-up companies, so stay tuned.

AI has been compared to the California gold rush days, which I can see given AI company valuations. It is not a dot-com type of thing, but there are some seriously overvalued companies that will come crashing down.
 
First of all, much of the code being discussed seems to be written in Python, because that's the only programming language I see discussed in the article. ... Python is a simplistic interpreted language with automatic memory management and only partial support (I'm being generous) for parallelism.

I think the article made some valid points. IMO, using gen AI well requires an understanding of software engineering, which is not trivial.

I think using Python is also appropriate. Although Python is simple, using it well is not trivial. Google, OpenAI, Anthropic, etc. all use Python extensively. Most machine learning research activities and projects are based in Python.
 
We're going to have to agree to disagree. Using AI to assist with programming such a simplistic language is inefficient, which is why productivity went down. Python is popular with the AI applications crowd because the target users are often not computer scientists; they are scientists and experts in other fields, like pharma, biology, and medicine, so using more complex languages is a non-starter. This is why SQL is used as the query language even for NoSQL databases. If Python really was the primary subject of the paper, that was silly.
 
I tend to disagree.

I use C++, Java, and Python. On the surface, Python seems simple, but it can be quite complex. The engineers at Google are certainly computer scientists.


The threading mechanism in recent Python releases is becoming more like those in other languages; the experimental free-threaded (no-GIL) build introduced with Python 3.13 lets threads run in parallel.
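As a rough illustration (my own sketch, not from any linked article), the following starts plain threading.Thread workers and reports whether the interpreter is a free-threaded build; on a standard GIL build the threads still execute the CPU-bound loop one at a time:

import sysconfig
import threading

def busy(n):
    # CPU-bound loop; it only runs in parallel on a free-threaded (no-GIL) build.
    while n:
        n -= 1

# Py_GIL_DISABLED is 1 on free-threaded builds (PEP 703); None or 0 otherwise.
print("free-threaded build:", bool(sysconfig.get_config_var("Py_GIL_DISABLED")))

threads = [threading.Thread(target=busy, args=(10_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("done")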

Ideally, Python should adopt an approach similar to Mojo's to improve performance.
 
The reason Python is suited for gen AI is its extensive ecosystem which helps with verification. Without verification, gen AI is less useful.
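For example (purely illustrative; the slugify function and its expected outputs are made up for this sketch, not taken from the study), a human-written unittest file can act as the verification gate for a model-generated function:

import unittest

def slugify(title):
    # The kind of small function one might ask a model to generate:
    # lowercase, keep alphanumerics, collapse everything else into single dashes.
    out = []
    prev_dash = False
    for ch in title.lower():
        if ch.isalnum():
            out.append(ch)
            prev_dash = False
        elif not prev_dash and out:
            out.append("-")
            prev_dash = True
    return "".join(out).rstrip("-")

class TestSlugify(unittest.TestCase):
    # Human-written checks that decide whether the generated code is accepted.
    def test_basic(self):
        self.assertEqual(slugify("Just How Bad Would an AI Bubble Be?"),
                         "just-how-bad-would-an-ai-bubble-be")

    def test_punctuation_collapsed(self):
        self.assertEqual(slugify("AI -- bubble!!"), "ai-bubble")

if __name__ == "__main__":
    unittest.main()

The same idea scales up: linters, type checkers, and existing test suites in the Python ecosystem give fast, automatic feedback on whatever the model produces.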
 
Start-ups and meme stocks will all crash and it won't matter. VCs lose money.

But if we enter a period of digestion and then 0-10% growth, the impact will be large.

Listen for terms like "digestion" and "we are evaluating our growth plans" from hyperscalers.

That said, Apple, Nvidia, TSMC, and Broadcom will survive ... they are great companies and can adjust to whatever the new market is.

Intel might do great since they have no significant AI product sales ...
 
Intel might do great since they have no significant AI product sales ...
Intel has a fairly comprehensive product suite for running AI locally. Personally, I prefer this approach, as I'm not comfortable with others indexing my code.

 
This is the research article referred to in The Atlantic article:

https://arxiv.org/pdf/2507.09089
I don't think their methodology is the best, given the current state of AI coding tools and the best practices for using them. Effective use of AI tools (beyond intelligent auto-complete) requires significant adjustments to the workflow and to the codebase (especially in terms of documentation). In my experience, good ways to achieve a speed-up have been:

1. Building a small project from scratch, where code quality is not very important. If Claude Code can write all the code and get it to work, it's almost always faster. The speedup is greater when working with a language the developer is not fluent in (but that has lots of reasonable open-source code, so LLMs can be good at it).
2. Using AI autocomplete in any project where the codebase is not too idiosyncratic. As long as Copilot doesn't mess up too much, this is just plain faster.
3. Incremental work in a project that is set up for using AI tools (existing code and development practices are documented with AI understanding in mind, and parts of the workflow are automated with e.g. Claude Code custom commands; a small illustrative automation sketch follows after this list). Developers who have experience with the tools and the workflow can go faster, with most code being written by AI and reviewed by a human. It's important not to trade too much speed for technical debt, as cleaning stuff up later is not very exciting.
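As a concrete illustration of the automation mentioned in point 3 (the script and the specific pytest/ruff commands are my own assumptions about a typical Python project, not something from the study), a small gate script that runs the test suite and a linter before AI-written changes are accepted:

import subprocess
import sys

# Hypothetical checks for a typical Python project; adjust to the real toolchain.
CHECKS = [
    ["python", "-m", "pytest", "-q"],        # the existing test suite must pass
    ["python", "-m", "ruff", "check", "."],  # lint for obvious slop
]

def main():
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("check failed; review the AI-generated changes before merging")
            return result.returncode
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())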

With the tasks in the paper, I think that in some cases my approach 2 could have been the way to get some speedup, but I'm not sure whether the developers who took part in the study tried that. Cursor has AI autocomplete and it's good, but that's not the primary way I have seen people use it. Delegating bigger chunks to AI in a project that is not set up for AI work can generate a lot of slop quickly, and that might take a while to clean up and debug.
This paper smacks of attention-getting for the authors and their research organization. METR is referred to as a non-profit, but that isn't surprising given the obscure organization of OpenAI itself.
I wouldn't go so far as to say that they were just trying to get attention. I think it's reasonable to assume they had good intentions, but I agree that the study has limited real-world relevance. It's still an interesting result and a base to build on for better studies. AI coding tools and practices are still in their infancy, and I'm quite sure we will see significant progress in the coming years. I always remind myself that just two years ago none of the approaches I listed above would have worked -- even AI autocomplete was mostly getting in the way.
 
I think the crypto bubble enabled AI, because companies started or expanded development of hardware and software infrastructure that was later expanded or used directly for AI (deep learning).

OpenCL and CUDA, or more generally unified shaders, have been in GPUs since 2007/2008, but GPUs themselves were still mainly used and developed for gaming, and companies needed to see a revenue stream from compute before committing serious resources to dedicated architectures such as Pascal (GP100).

Similarly today, even if AI falls 90% overnight, we will still have a lot of experience in how to build hardware and algorithms that improve many other things.

BTW: Isn't bitcoin around an all-time high?
 
BTW: Isn't bitcoin around an all-time high?

And it still has no utility.


Last one holding the bag!
 
Glass half full on the AI bubble:

It's funding a bunch of advanced packaging and memory technologies that will still be available after the bubble bursts.

Re: the article, I can't add to what's been said above. It's a great topic, but the 'staff writer' has no understanding of economics, let alone the use cases of AI besides the only one that's always hyped (coding).
 