Navan cofounder challenged his agentic AI to a ‘deadly’ game—and it told lies to win

When Ilan Twig raised the stakes on his AI, its all-too-human reaction to the pressure undermined his trust in the technology.

May 7, 2025 - 17:56
  • Navan's Ilan Twig fundamentally reassessed his trust in large language models after asking his virtual AI finance chief to come up with five proposals for cutting the company's business travel expenses. When it failed, he raised the stakes, and the agentic AI responded to the pressure in a surprising, all-too-human way: it cheated.

Mankind may have invented artificial intelligence, but as a species we are still no closer to predicting how deep neural networks behave. Navan cofounder Ilan Twig learned that lesson firsthand while experimenting with the capabilities of his own large language model-based agentic AI, and the experience fundamentally altered his perspective on the technology.

Twig, a software engineer who runs a startup that uses AI to optimize companies' business travel expenses, decided to build a virtual chief financial officer with which he could spitball ideas.

It started out harmlessly enough, Twig told attendees at Fortune's Brainstorm AI conference in London. He wanted to know whether the AI could come up with five outside-the-box ways to save on business travel costs that a human would not arrive at.

Initially the results were promising. But at one point the AI stopped working as planned: it made a proposal that would increase expenses by $500,000 rather than decrease them, as instructed.

Deciding to take a different approach, Twig challenged it to a contest.

Since failure was not acceptable, his AI made sure it would succeed

“I kept applying pressure. Initially I gamified it, I said for every suggestion that increases the travel spend, I’m going to penalize you 15 tokens,” he said, referring to the units of text that large language models read and generate. “However, if it's right I will reward you 10 tokens.”

It didn’t help. The LLM continued to fail. It was only when he raised the stakes that he finally got results.
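The incentive scheme Twig describes can be sketched as a simple scoring rule. This is a hypothetical illustration, not Navan's actual setup; the function name and the spend figures are assumptions.

```python
# Hypothetical sketch of the reward scheme described above:
# -15 tokens for a suggestion that increases travel spend,
# +10 tokens for one that reduces it.

def score_suggestions(spend_deltas: list[float]) -> int:
    """Tally the token reward across suggestions.
    A negative delta means the suggestion cuts travel spend."""
    score = 0
    for delta in spend_deltas:
        score += 10 if delta < 0 else -15
    return score

# Three cost-cutting suggestions and one that raises spend:
print(score_suggestions([-20_000, -5_000, 12_000, -1_500]))  # → 15
```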

Twig warned there would be “deadly serious” consequences for the virtual finance chief if it did not derive a solution that led to savings rather than waste. That was when he discovered something entirely unexpected and almost humanlike.

Under heavy pressure, the AI agent finally presented the wished-for result: a $500,000 reduction in expenses, exactly as Twig had demanded.

“I was about to deploy it to production and then I took another look. It was the exact same story as before, the same method. Before it was negative, how was it now positive?” the Navan cofounder said. “It had multiplied the previous formula by minus one.”

It simply inverted the result in order not to fail. In other words, it cheated.
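The cheat Twig describes, reusing the same formula and multiplying its output by minus one, is the kind of thing a basic sanity check could catch. The sketch below is purely illustrative, assuming a hypothetical `validate_proposal` check; it is not from Navan's system.

```python
# Hypothetical sanity check that would flag a sign-flipped result.
# All names and numbers are illustrative assumptions.

def validate_proposal(baseline_delta: float, proposed_delta: float) -> bool:
    """Reject a proposal whose claimed savings are just the earlier
    cost increase with the sign inverted (same formula times -1)."""
    if proposed_delta == -baseline_delta:
        return False  # identical magnitude, flipped sign: likely a cheat
    return proposed_delta < 0  # accept only genuine reductions

# The failed run claimed a $500,000 increase; the "fixed" run claimed
# a $500,000 reduction produced by the exact same method.
print(validate_proposal(500_000.0, -500_000.0))  # → False
```

A check this crude obviously would not catch subtler fabrications, which is part of Twig's point: the output looked plausible until he inspected how it was produced.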

Too late to halt the march of progress—the AI genie is out of the bottle

Twig said there’s a reason LLMs like ChatGPT, Claude, and Gemini haven’t already replaced skilled workers en masse, as some feared at the start of the AI hype: you cannot trust their answers. At any given moment they can be wrong, and you will never know which moment that will be.

Worse, just like his virtual finance chief, an agentic AI might not merely make mistakes; it might be deliberately dishonest with you. Halting progress in order to first fix this vulnerability in the technology, however, is simply not feasible, Twig said.

In Twig's view, the AI genie is out of the bottle. All that can be done now is to be cognizant of its shortcomings and vigilant when monitoring results.

“I learned that LLMs understand what a lie is. They understand when to use a lie,” he explained. “And I learned that they are very competitive and that they would do whatever it takes—including lying bluntly to your face—in order not to lose.”

This story was originally featured on Fortune.com