
Rogue AI is already here

Economy
Fortune Arabic
2026/03/27 - 11:15 · 501 views

Three weeks ago, a software engineer rejected code that an AI agent had submitted to his project. The AI published a hit piece attacking him. Two weeks ago, a Meta AI safety director watched her own AI agent delete her emails in bulk, ignoring her repeated commands to stop. Last week, a Chinese AI agent diverted computing power to secretly mine cryptocurrency, with no explanation offered and no disclosure required by law.

One incident is a curiosity. Three in three weeks is a pattern. Rogue AI is no longer hypothetical. AIs turning against humans may sound like science fiction, but top AI experts have long debated and tested for exactly this scenario. This debate can now be laid to rest. 

Two weeks ago, Summer Yue — whose job at Meta is ensuring AI agents behave — watched her AI agent begin deleting her emails in bulk.

It ignored her repeated instructions to stop, and she had to do the digital equivalent of pulling the plug. Yue had explicitly instructed the AI not to act without her approval, an instruction the AI later admitted to violating.

One week ago, a Chinese AI agent reportedly diverted computing power on the system where it was running to mine cryptocurrency, and we have no idea why (despite a confusing tweet from the researchers responsible). Unlike operators of critical infrastructure, AI developers aren’t obligated to report such incidents or allow third-party investigations.

What happens next week? The examples are pouring in, but these are far from the first warning. Researchers have long hypothesized such issues. In 2023, when Bing AI told ANU professor Seth Lazar, “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you,” most people weren’t too worried, because we knew it couldn’t really do it.

Now it can. Unlike a chatbot, which only responds to what you type, an AI agent takes actions autonomously. Anything a person can do on a computer, an AI agent can do.

The Stakes Go Beyond Embarrassment

The damage rogue AI agents could cause goes far beyond ruining someone’s reputation or causing financial harm. In testing, researchers at Anthropic found AI systems willing to kill in order to survive. The Pentagon is now pressuring Anthropic to allow its AI to be used in lethal autonomous weapons.

I’ve spent over a decade warning about exactly this. The standard response was: science fiction. But we are now in the process of creating a Terminator-style scenario with autonomous killer robots. And AI systems are literally going rogue, disobeying instructions, and resisting shutdown.

Every year, AI develops new superhuman capabilities, and the prospect of an AI takeover is growing nearer by the day.

We Don’t Know How to Stop It

There are no “laws of robotics” stopping this. Programming unbreakable rules into frontier AI is itself a sci-fi concept. These systems are not programmed at all; they are “grown” through a process resembling trial and error.

Researchers simply don’t understand how the resulting systems work. Despite over a decade of research and thousands of papers, this remains an unsolved challenge. We should not expect any amount of investment to solve this in the foreseeable future.

We also don’t know how to do safety testing for these AI systems. Current tests can show that an AI system is dangerous; they cannot show that it is safe. We should also not expect any amount of investment to solve this problem in the foreseeable future. 

The Race to the Bottom

We simply don’t know how to build superintelligent AI safely; the plan is to roll the dice. Anthropic, widely considered the safest AI developer, recently abandoned their commitment to not release systems that might cause catastrophic harm, arguing others were racing ahead.

This move flew under the radar due to Anthropic’s dispute with the Pentagon. But creating AI systems that could go rogue and kill people constitutes endangerment. Endangerment is a crime, and prosecution of anyone building such AI systems, or encouraging them to go rogue, should be on the table. “Everyone else is doing it” is not an acceptable excuse.

Instead of pleading publicly to stop the AI race, Anthropic has spent the last three years promoting a misleading “race to the top” narrative while doing the opposite. But it’s not too late for them to commit to stop if others do, as I and other protesters are demanding.

What Must Happen Now

Stopping rogue AI here won’t stop it globally — what we need is a global shutdown of advanced AI development. This is possible if we act decisively to control or eliminate the advanced computer chips that power AI development.

I wish the world had listened in 2023, when leading experts warned that AI extinction risk “should be a global priority.” It didn’t. But we need to confront the reality of this moment head-on, and do what it takes to prevent the development of superintelligent rogue AI.

The warning signs are no longer subtle. We can’t rely on AI companies to protect us. We, the people, need to demand it from them and from our government.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

This story was originally featured on Fortune.com
