Meta has an AI for brain typing, but it’s stuck in the lab


Feb 7, 2025 - 12:35

Back in 2017, Facebook unveiled plans for a brain-reading hat that you could use to text just by thinking. “We’re working on a system that will let you type straight from your brain,” CEO Mark Zuckerberg shared in a post that year.

Now the company, since renamed Meta, has actually done it. Except it weighs half a ton, costs $2 million, and won't ever leave the lab.

Still, it’s pretty cool that neuroscience and AI researchers working for Meta have managed to analyze people’s brains as they type and determine what keys they are pressing, just from their thoughts.

The research, described today in two papers posted to the preprint server arXiv, is particularly impressive because the subjects' thoughts were measured from outside their skulls with a magnetic scanner and then processed by a deep neural network.

“As we’ve seen time and again, deep neural networks can uncover remarkable insights when paired with robust data,” says Sumner Norman, founder of Forest Neurotech, who wasn’t involved in the research but credits Meta with going “to great lengths to collect high quality data.”

According to Jean-Rémi King, leader of Meta’s “Brain & AI” research team, the system is able to determine what letter a skilled typist has pressed as much as 80% of the time, an accuracy high enough to reconstruct full sentences from the brain signals.

Facebook’s original quest for a consumer brain-reading cap or headband ran into technical obstacles and, after four years, the company scrapped the idea.

But Meta never stopped supporting basic research on neuroscience, something it now sees as an important pathway to more powerful AIs which learn and reason like humans. King says his group, based in Paris, is specifically tasked with figuring out “the principles of intelligence” from the human brain.

“Trying to understand the precise architecture or principles of the human brain could be a way to inform the development of machine intelligence,” says King. “That’s the path.”

The new system is definitely not a commercial product—nor on the way to becoming one. The magnetoencephalography scanner used in the new research collects magnetic signals produced in the cortex as brain neurons fire. But it is large and expensive, and it needs to be operated in a shielded room, since the Earth's magnetic field is a trillion times stronger than the one in your brain.

Norman likens the device to “an MRI machine tipped on its side and suspended above the user’s head.”

What’s more, says King, the second a subject moves their head the signal is lost. “Our effort is not at all towards products,” he says. “In fact, my message is always to say I don’t think there is a path for products because it’s too difficult.”

The typing project was carried out with 35 volunteers at a research site in Spain, the Basque Center on Cognition, Brain and Language. Each spent around 20 hours inside the scanner typing phrases like “el procesador ejecuta la instrucción” (the processor executes the instruction) while their brain signals were fed into a deep-learning system which Meta is calling Brain2Qwerty, in a reference to the layout of letters on a keyboard.

The job of that deep-learning system is to figure out which brain signals mean someone is typing an "a," which mean "z," and so on. Eventually, after it sees an individual volunteer type several thousand characters, the model can guess which key people were actually pressing.
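The papers describe Brain2Qwerty as a deep neural network, and its actual architecture and training are far more involved than anything shown here. But the shape of the decoding problem it solves can be sketched in miniature: learn a per-key signal pattern from labeled examples, then classify each new reading as the nearest known pattern. Everything below is invented for illustration—the synthetic "signals," the ten-key alphabet, and the nearest-template classifier are stand-ins, not Meta's method.

```python
import random

# Toy stand-in for the decoding task: each "key press" yields a noisy
# feature vector (ten numbers standing in for processed MEG features),
# and the decoder learns one average template per key from examples.
KEYS = "abcdefghij"
random.seed(0)

def fake_signal(key, noise=0.3):
    """Synthetic sensor reading: a key-specific pattern plus Gaussian noise."""
    base = [1.0 if k == key else 0.0 for k in KEYS]
    return [b + random.gauss(0, noise) for b in base]

# "Training": average 50 example signals per key into a template.
templates = {
    k: [sum(col) / 50 for col in zip(*(fake_signal(k) for _ in range(50)))]
    for k in KEYS
}

def decode(signal):
    """Guess the key whose template is closest (squared Euclidean distance)."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(signal, template))
    return min(templates, key=lambda k: dist(templates[k]))

# Decode a short "typed" word from fresh noisy signals.
decoded = "".join(decode(fake_signal(ch)) for ch in "badge")
print(decoded)
```

A real decoder faces signals that are vastly noisier and higher-dimensional than this, which is why a deep network—and roughly 20 hours of per-subject training data—is needed rather than a simple template match.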

In the first preprint, Meta researchers report that the average error rate was about 32%—or nearly one out of three letters wrong. Still, according to Meta, its results are the most accurate yet for brain-typing using a full-alphabet keyboard and signals collected outside the skull.
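The papers' exact scoring procedure isn't detailed here, but character error rate in typing-decoder work is conventionally computed as edit distance (minimum insertions, deletions, and substitutions) between the decoded text and what was actually typed, divided by the length of the true text. A minimal version, assuming that conventional definition:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance: fewest insertions, deletions, substitutions."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def char_error_rate(ref, hyp):
    """Edit distance normalized by the length of the reference text."""
    return edit_distance(ref, hyp) / len(ref)

# One wrong letter out of ten gives a character error rate of 0.1.
print(char_error_rate("brainwaves", "brainwoves"))
```

On this metric, a 32% error rate means roughly one character in three has to be fixed—too rough for free-form dictation, but enough structure for language models to help reconstruct full sentences.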

Research on brain-reading has been advancing quickly, although the most effective approaches use electrodes implanted into the brain, or directly on its surface. These are known as “invasive” brain computer interfaces. Although they require brain surgery, they can very accurately gather electrical information from small groups of neurons.

In 2023, for instance, a person who lost his voice from ALS was able to speak via brain-reading software connected to a voice synthesizer, and did so at nearly a normal rate. Neuralink, founded by Elon Musk, is testing its own brain implant that gives paralyzed people control over a cursor.

Meta says its own efforts remain oriented towards basic research into the nature of intelligence.

And that is where the big magnetic scanner can help. Even though it isn’t practical for patients, and doesn’t measure individual neurons, it is able to look at the whole brain, broadly, and all at once. 

In a second preprint, also posted today and based on the same typing data, the Meta team says it used this broader view to amass evidence that the brain produces language information in a top-down fashion, with an initial signal for a sentence kicking off separate signals for words, then syllables, and, finally, typed letters.

“The core claim is that the brain structures language production hierarchically,” says Norman. That’s not a new idea, but he says Meta’s report highlights “how these different levels interact as a system.”

Those types of insights could eventually shape the design of artificial intelligence systems. Some of these, like chatbots, already rely extensively on language in order to process information and reason, just as people do.

“Language has become a foundation of AI,” King says. “So the computational principles that allow the brain, or any system, to acquire such ability is the key motivation behind this work.”