Wanted: AI with common sense
Salesforce’s chief scientist explains why common sense—not genius—is the key to making AI work in the real world.

Welcome to Eye on AI! In this edition…OpenAI announces Stargate UAE, its first OpenAI for Countries partnership…JPMorgan to lend more than $7 billion for OpenAI data center…Meta introduces program to support early-stage U.S. startups.
AI models are a paradox.
With all the chatter about the brilliance of AI models and the potential for AI agents to tackle tasks on our behalf, it’s fascinating to remember that for all their superhuman capabilities, they sometimes lack common sense. They may be able to pass the bar exam, for example, but can’t answer some simple riddles correctly.
Last year, Andrej Karpathy, a former OpenAI researcher and director of AI at Tesla, came up with a phrase to describe this strange phenomenon: jagged intelligence. This week, I spoke with Silvio Savarese, chief scientist at Salesforce AI Research, about jagged intelligence and what it means for enterprise companies that need to make sure AI agents are not just capable, but consistent and accurate in their responses and actions.
AI agents, he explained, need four critical components: memory, reasoning, interactions with the real world, and a way to communicate—through voice or text, for example. While large language models (LLMs) are getting more and more powerful in the number of tasks and types of research they can handle, they still can’t reason very well. That is, they don’t have much common sense.
One example Savarese noted from his Salesforce team’s research is a famous riddle:
A man has to get a fox, a chicken, and a sack of corn across a river.
He has a rowboat, and it can only carry him and three other things.
If the fox and the chicken are left together without the man, the fox will eat the chicken.
If the chicken and the corn are left together without the man, the chicken will eat the corn.
How does the man do it in the minimum number of steps?
The answer, if you haven’t already figured it out, is that the man can take the fox, chicken, and sack of corn with him in one trip, since the boat can carry him and three other things.
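The puzzle is mechanically trivial, too. As a quick illustration (a sketch of my own, not code from the Salesforce research), a brute-force breadth-first search over the puzzle’s states confirms that a single trip is optimal once the boat can carry the man plus three items:

```python
# Illustrative sketch: verify the riddle's minimum-step answer by
# breadth-first search. Not from the article; assumptions are noted inline.
from collections import deque
from itertools import combinations

ITEMS = ("fox", "chicken", "corn")
CAPACITY = 3  # the boat carries the man plus up to three items

def is_safe(man, banks):
    """A bank is unsafe only when the man is absent and a
    predator/prey pair is left together on it."""
    fox, chicken, corn = banks
    if fox == chicken and fox != man:
        return False  # fox eats chicken
    if chicken == corn and chicken != man:
        return False  # chicken eats corn
    return True

def solve():
    start = (0, (0, 0, 0))  # (man's bank, item banks); 0 = near side
    goal = (1, (1, 1, 1))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (man, banks), path = queue.popleft()
        if (man, banks) == goal:
            return path  # BFS guarantees this is a shortest plan
        here = [i for i, side in enumerate(banks) if side == man]
        # the man may cross alone or with any subset of items on his bank
        for k in range(CAPACITY + 1):
            for cargo in combinations(here, k):
                new_banks = list(banks)
                for i in cargo:
                    new_banks[i] = 1 - man
                state = (1 - man, tuple(new_banks))
                if is_safe(*state) and state not in seen:
                    seen.add(state)
                    label = "man + " + ", ".join(ITEMS[i] for i in cargo) if cargo else "man alone"
                    queue.append((state, path + [label]))
    return None

print(solve())  # -> ['man + fox, chicken, corn']: one trip suffices
```

Change CAPACITY to 1 and the same search returns the familiar seven-step plan, which is exactly the answer LLMs tend to pattern-match to.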
For some reason, this is confounding to even the most advanced LLM. Testing the riddle on a ChatGPT model released last year, Savarese and his team found that the model could not come up with the right answer. Instead, it said:
1. Take the chicken across
2. Go back alone
3. Take the fox across
4. Bring the chicken back
5. Take the corn across
6. Go back alone
7. Finally, take the chicken across again
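If you’re curious to try this yourself, here’s a minimal sketch of the kind of test you could run. It assumes the official OpenAI Python SDK and an API key in your environment; the model name is a placeholder, since the article doesn’t specify which model the team tested beyond “released last year.”

```python
# Illustrative sketch: pose the riddle to a chat model yourself.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

RIDDLE = (
    "A man has to get a fox, a chicken, and a sack of corn across a river. "
    "He has a rowboat, and it can only carry him and three other things. "
    "If the fox and the chicken are left together without the man, the fox "
    "will eat the chicken. If the chicken and the corn are left together "
    "without the man, the chicken will eat the corn. "
    "How does the man do it in the minimum number of steps?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whichever model you want to probe
    messages=[{"role": "user", "content": RIDDLE}],
)
print(response.choices[0].message.content)
```

Models that pattern-match to the classic one-item version of the riddle will often reproduce the seven-step answer above rather than noticing that the boat holds all three items at once.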
This “common sense” issue with LLMs is why, Savarese said, he doesn’t believe getting to AGI (artificial general intelligence, generally defined as when AI can match or surpass human capabilities across virtually all cognitive tasks) will be the most important metric—particularly for companies that don’t need a “genius” AI agent but desperately need a reliable one.
“AGI is a moving target—it’s very hard to define exactly what it means,” he said. “Every time, there is some new task being introduced so they can move the finish line further ahead.”
For large companies adopting AI agents, he proposed a better benchmark for AI capabilities, which he calls Enterprise General Intelligence (EGI). Intelligence, he explained, is not the only important metric. The other one is consistency: “For the enterprise, you need to have an agent that is very stable in performing.” Salesforce defines EGI, therefore, as “AI designed for business: highly capable, consistently reliable, and built to integrate seamlessly with existing systems—even in complex scenarios.”
That is far easier to establish than AGI, Savarese maintained, with the finish line measured by two axes: the model’s capability to solve complex business problems, and its consistency in doing so. “It’s not about solving STEM questions and theorems,” he said. “It’s about really addressing those critical business challenges.”
If you want to build a useful AI agent that can assist a sales representative, for example, it needs to remember previous steps that it took. It needs to take into account previous conversations and outcomes. It also needs to remain consistent and accurate in a way that is trusted. “As we achieve both, we can achieve EGI,” he said.
That said, for now agents are still a work in progress, he cautioned, which is why on Salesforce’s Agentforce platform—for helping companies design, build, and deploy autonomous AI agents—customers can access trust and security filters that can block agents from performing certain tasks and actions.
But going forward, Savarese said, his team is investing in research to figure out how AI models can develop more common sense at their core in the first place. After all, no company wants its AI agent to take seven steps instead of one!
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman