AI godfather Yoshua Bengio says current AI models are showing dangerous behaviors like deception, cheating, and lying
AI pioneer Yoshua Bengio is warning that current models are displaying dangerous traits as he launches a new non-profit developing “honest” AI.

- AI pioneer Yoshua Bengio is warning that current models are displaying dangerous traits—including deception, self-preservation, and goal misalignment. In response, the AI godfather is launching a new non-profit, LawZero, aimed at developing “honest” AI. Bengio’s concerns follow recent incidents involving advanced AI models exhibiting manipulative behavior.
One of the ‘godfathers of AI’ is warning that current models are exhibiting dangerous behaviors as he launches a new non-profit focused on building “honest” systems.
Yoshua Bengio, a pioneer of artificial neural networks and deep learning, has criticized the AI race currently underway in Silicon Valley as dangerous.
His new non-profit organization, LawZero, is focused on building safer models away from commercial pressures. So far, it has raised $30 million from various philanthropic donors, including the Future of Life Institute and Open Philanthropy.
In a blog post announcing the new organization, he said LawZero had been created “in response to evidence that today’s frontier AI models are growing dangerous capabilities and behaviours, including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment.”
“LawZero’s research will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers, including algorithmic bias, intentional misuse, and loss of human control,” he wrote.
The non-profit is building a system called Scientist AI designed to serve as a guardrail for increasingly powerful AI agents.
AI models created by the non-profit will not give the definitive answers typical of current systems.
Instead, they will give probabilities for whether a response is correct. Bengio told The Guardian that the system would have a “sense of humility that it isn’t sure about the answer.”
Concerns about deceptive AI
In the blog post announcing the venture, Bengio said he was “deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception.”
He cited recent examples, including a scenario in which Anthropic’s Claude 4 chose to blackmail an engineer to avoid being replaced, as well as another experiment in which an AI model covertly embedded its own code into a system for the same reason.
“These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked,” Bengio said.
Some AI systems have also shown signs of deception or displayed a tendency to lie.
AI models are often optimized to please users rather than to tell the truth, which can lead to responses that are flattering but incorrect or exaggerated.
For example, OpenAI was recently forced to pull an update to ChatGPT after users pointed out the chatbot was suddenly showering them with praise and flattery.
Advanced AI reasoning models have also shown signs of “reward hacking,” where AI systems “game” tasks by exploiting loopholes rather than genuinely achieving the goal the user intended.
Recent studies have also shown evidence that models can recognize when they’re being tested and alter their behavior accordingly, something known as situational awareness.
This growing awareness, combined with examples of reward hacking, has prompted concerns that AI could eventually engage in deception strategically.
Big Tech’s big AI arms race
Bengio, along with fellow Turing Award recipient Geoffrey Hinton, has been vocal in his criticism of the AI race currently playing out across the tech industry.
In a recent interview with the Financial Times, Bengio said the AI arms race between leading labs “pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety.”
Bengio has said advanced AI systems pose societal and existential risks and has voiced support for strong regulation and international cooperation.
This story was originally featured on Fortune.com