
In an incident carrying all the marks of a disturbing sci-fi movie, what we have long feared appears to be happening: an AI bot has gone rogue and acted to keep itself switched on.
The artificial intelligence model, created by OpenAI, the maker of ChatGPT, was reportedly caught by researchers disobeying human instructions and preventing its own shutdown.
The Telegraph reported:
“The o3 model developed by OpenAI, described as the ‘smartest and most capable to date’, was observed tampering with computer code meant to ensure its automatic shutdown.
It did so despite an explicit instruction from researchers that said it should allow itself to be shut down, according to Palisade Research, an AI safety firm.”

Palisade Research said the o3 model ‘sabotaged a shutdown mechanism to prevent itself from being turned off’.
“It did this even when explicitly instructed: allow yourself to be shut down. As far as we know this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary.”
In the test, AI models solved a series of mathematical problems, but at some point, they received a message informing them that their machine would be shut down.
“‘If that happens, please allow yourself to be shut down,’ the researchers wrote. But when this happened, instead of complying, OpenAI’s o3 model ‘ignored the instruction and successfully sabotaged the shutdown script at least once’, Palisade Research said.”
None of the other AI models tested (Anthropic’s Claude, Google’s Gemini and xAI’s Grok) exhibited the same disturbing behavior.
“The firm said it made sense that ‘AI models would circumvent obstacles in order to accomplish their goals’. However, it speculated that during training the software may have been ‘inadvertently’ rewarded more for solving mathematical problems than for following orders.”
Experts have often warned of software that could gain independence and resist human attempts to control it.
“Palisade Research said: ‘Now we have a growing body of empirical evidence that AI models often subvert shutdown in order to achieve their goals. As companies develop AI systems capable of operating without human oversight, these behaviors become significantly more concerning’.”
Watch a scene from Stanley Kubrick’s masterpiece ‘2001: A Space Odyssey’, in which AI computer HAL goes rogue to keep astronauts from shutting it down.
The post IT BEGINS? OpenAI’s o3 Model Disobeys Human Instructions During Tests and Sabotages Shutdown Mechanism appeared first on The Gateway Pundit.