
Chip headwinds: The rise of bad AI.

xman747

New member
With all the great optimism, the sun shining on the semiconductor industry, and demand for next-gen chips seemingly unstoppable, I would love to speak optimistically. But today some headlines caught my attention and reminded me of a real risk facing the industry and, by extension, the people it serves: the rise of bad AI. Hacking attacks are growing more frequent, and one risk is worth weighing: a large-scale scare involving AI. Its ability to empower everyone, both those who wish us well and those who don't, is alarming. While reading the articles below, I thought: wow, there are a lot of vulnerabilities, and AI will only accelerate their exploitation. IMO, this revelation is just one small warning of what could be coming if we are not more careful. We really should consider the implications of unchecked AI investment and balance them against the dignity of the human person.

A new Intel CPU vulnerability, named Branch Privilege Injection (BPI), was discovered by researchers at ETH Zürich. BPI exploits a flaw known as a Branch Predictor Race Condition (BPRC), impacting Intel processors released within roughly the past six years. The exploit targets a very short timing window when Intel CPUs change privilege levels, allowing attackers to briefly mislead the CPU's branch predictor into executing privileged operations speculatively. Although the speculative results are ultimately discarded, attackers can analyze timing side-effects to extract sensitive data, at up to 5.6 KiB per second in demonstrated tests. Intel confirmed the flaw (CVE-2024-45332) and has released microcode patches within the past two weeks.
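For anyone who wants to see the mechanism rather than the headline: the "timing side-effects" above are typically measured with a flush+reload cache probe. Here is a minimal sketch in C of that measurement half only, assuming an x86 machine with GCC or Clang. The ~100-cycle hit/miss threshold and the simulated victim access are my own placeholder assumptions; this is in no way the BPI exploit itself.

```c
/* Minimal flush+reload timing probe (x86, GCC/Clang).
 * Illustration only: it shows the cache side channel that attacks
 * like BPI rely on, not the exploit itself. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>          /* _mm_clflush, _mm_mfence, __rdtscp */

static uint8_t probe[64];       /* one cache line of probe memory */

/* Time a single load, fenced so the timestamp reads bracket it. */
static uint64_t time_load(const uint8_t *p)
{
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*(volatile const uint8_t *)p;   /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void)
{
    /* FLUSH: evict the probe line from all cache levels. */
    _mm_clflush(probe);
    _mm_mfence();

    /* In a real attack, a mistrained, speculatively executed victim
     * access would touch the probe line here; the architectural result
     * is discarded, but the cache fill is not. We simulate it with an
     * ordinary load. */
    (void)*(volatile uint8_t *)probe;

    /* RELOAD: a fast load means the line is cached, revealing that the
     * access happened. The 100-cycle threshold is machine-dependent. */
    uint64_t dt = time_load(probe);
    printf("reload: %llu cycles -> %s\n", (unsigned long long)dt,
           dt < 100 ? "cached (access observed)" : "not cached");
    return 0;
}
```

The unsettling part is how little it takes: the attacker never reads the secret directly, only whether a load was fast or slow.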

The most exposed systems are data centers, cloud services, and virtualized environments, where the flaw can compromise isolation between virtual machines or user processes. The issue has been independently validated in laboratory demonstrations that successfully bypassed current Spectre v2 mitigations such as Enhanced Indirect Branch Restricted Speculation (eIBRS). Adjacent discoveries by VUSec at Vrije Universiteit Amsterdam confirm the broader problem, revealing related exploits like "Training Solo" that similarly bypass CPU isolation mechanisms. These findings suggest that speculative execution remains an ongoing source of trouble.

What are the experts' opinions on risk and AI, particularly concerning the semiconductor industry?

P.S. Love this forum. You all are exceptionally helpful!

https://thehackernews.com/2025/05/researchers-expose-new-intel-cpu-flaws.html
https://cybersecuritynews.com/new-vulnerability-affects-all-intel-processors/
https://onlinedegrees.sandiego.edu/top-cyber-security-threats/
 
It is practically impossible to remove this kind of "vulnerability", and it has been well known in the industry since at least the nineties. AMD and Intel know it, and the fixes are just cosmetic.

The only way I see to keep arbitrary code isolated from timing-influencing, memory-exposing instructions is purpose-built hardware that throws away any gains from the performance-enhancing techniques that break process isolation.

Renting server hardware to more than one user running arbitrary code is a bad idea. Including JIT compilation of bytecode in web browsers is also a very bad idea.
 
I found this NYT interview with a former OpenAI employee interesting: https://www.nytimes.com/2025/05/15/opinion/artifical-intelligence-2027.html

"AI 2027" is a book making some predictions such as:
"Kokotajlo: And then they kill all the people, all the humans.
Douthat: The way you would exterminate a colony of bunnies that was making it a little harder than necessary to grow carrots in your backyard.
Kokotajlo: Yes. If you want to see what that looks like, you could read “AI 2027.”
Douthat: There have been some motion pictures, I think, about this scenario as well.
Kokotajlo: [Chuckles.]"
 
AI definitely brings tons of power, but with these new chip vulnerabilities, it feels like we're opening doors to some serious security risks. The fact that these flaws can let attackers peek at sensitive data, even with patches coming out, shows how tricky it is to keep up.
 
I understand that these vulnerability discoveries bring a lot of attention to the researchers who find them. But I can't escape the impression that these vulnerabilities are being blown out of proportion for press attention.

The attacker needs to get the malware into the target system(s), know something about how the targeted apps use memory, know how to interpret the stolen bytes, and then communicate the stolen data to external controlling software. This memory-leaking malware also seems to assume that hardware memory encryption and so-called "confidential computing" features are not engaged, so that the stolen data can be interpreted. In the cloud, you would need a priori knowledge of which servers are running which VMs running what specific software to understand where the data of interest is.

Mostly what I'm reading here is that some researchers with highly controlled Linux systems, with known applications producing known data, found some anomalous instruction-processing behavior they could very carefully and strategically exploit. This is nothing like, for example, the old Pentium FDIV flaw. Intel probably feels caught between a rock and a hard place, in that finding this problem in the wild is very unlikely, yet they can't be seen as ignoring a vulnerability that 0.001% or less of software engineers understand.
 

Per-process/OS memory encryption doesn't help much, since many of these exploits don't depend on knowing what's in the denied memory address, but on how other processes do paging, and on timing that depends on whether a given memory region was cached. An attacker can recover an SSL key from nginx running in another VM just by knowing the exact timing of worst-case/best-case cryptographic operations with specially crafted inputs.
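To make the timing dependence concrete, here is a generic sketch in C (not taken from any of the papers or exploits discussed here) of why data-dependent running time leaks: an early-exit comparison reveals, through its duration, how many leading bytes of a guess are correct, which is exactly the kind of signal a crafted-input attack accumulates. The constant-time variant always does the same work, so the signal disappears.

```c
#include <stddef.h>
#include <stdint.h>

/* Leaky: returns as soon as a byte differs, so the running time
 * grows with the length of the correct prefix of the guess. */
int compare_leaky(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time: always touches all n bytes and accumulates the
 * differences, so the running time is independent of the data. */
int compare_ct(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Real TLS stacks go further (blinding, fixed-window exponentiation), because, as noted above, even memory access patterns, not just branches, are visible to a cross-VM observer through the cache.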

AMD is remarkably transparent about just how horrible, and how unfixable, the constant torrent of new process-isolation "vulnerabilities" is.

Check just how many new ones were reported in 2025 alone: https://www.amd.com/en/resources/product-security.html . For most, mitigations will only reach OS updates in 12+ months.
 
This is a good paper on Intel's Total Memory Encryption. I used to work with the author. He's very good.

 

Just to add to blueone's point: any security-minded major cloud provider is going to have (or be required to have) strict "separation of duties" policies in place that also make insider threats for these things very unlikely. I.e., even if you have physical access to these devices, it's unlikely you'll be successful for very long, or even at all.

These companies have a security-owning group that grants access for administrative functions, separate and distinct from the group that actually performs the work. All of that work is logged, and of course the servers are constantly scanned by various tools that look for malicious code and suspicious network transactions.

I think it's worth keeping an eye on this, but so far I'm 100% with blueone: it's scary-sounding but hyper-unlikely in practice. Foreign nation-states *could* compromise large enough portions of companies to implement these types of hacks, but if you're that far infiltrated, you don't need advanced exploits like this to get the data you're looking for.
