
Legilimency for Muggles! New language decoder can read your private thoughts non-invasively

US scientists develop a model that can decipher thoughts as continuous language, a potential tool for people with communication disorders. Findings published in Nature Neuroscience.


New Delhi: Remember that time when a friend cracked a tasteless joke? Or your most embarrassing moment? Now imagine if someone could actually tune into your thoughts at that exact moment. 

Researchers from the University of Texas, USA, have developed a new decoder that can reconstruct a person’s thoughts as continuous language, a breakthrough they believe could help advance brain-computer interfaces for people with communication disorders. Their findings, which have been peer-reviewed, were published Monday in the journal Nature Neuroscience.

According to the study, the decoder uses non-invasive brain recordings made using functional magnetic resonance imaging (fMRI) — a technique that measures the small changes in blood flow that occur with brain activity.

This marks a significant advancement from both existing technology and other known research work in the field. While researchers across the globe have been working on developing brain-computer interfaces that can decode brain signals, especially to restore communication in people who have lost the ability to speak, such decoders require invasive neurosurgery. 

Non-invasive brain recordings can capture many kinds of linguistic information, but previous attempts to decode this information have been limited to identifying one output from among a small set of possibilities.

The practical applications of this research include the development of brain-computer interfaces that can help people with communication disabilities, such as those with locked-in syndrome or amyotrophic lateral sclerosis (ALS). These interfaces could allow people to communicate using their thoughts, bypassing the need for physical movement, according to the research team. 

The research, however, has its limitations. For one, fMRI scanners are bulky and expensive. For another, the translations are not exact, but an approximation of the real thoughts. 




Translating thoughts

As part of the research, the team scanned the brains of three participants while they listened to several hours of podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard.

To use fMRI for this purpose, the team had to overcome a major obstacle: the blood-oxygen-level-dependent (BOLD) signal that fMRI measures, the standard proxy for brain activity in humans, is slow. According to the researchers, a single impulse of neural activity causes BOLD to rise and fall over approximately 10 seconds. For comparison, the study notes, native English speakers produce more than two words per second, which means that from each image the fMRI captures, the researchers had to decode upwards of 20 words.
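The mismatch the researchers describe follows from simple arithmetic. Using the two figures quoted in the study (a roughly 10-second BOLD response and a speech rate above two words per second), a short illustrative script:

```python
# Rough arithmetic behind the decoding challenge: figures are the ones
# quoted in the study; the script itself is only an illustration.

bold_window_s = 10      # approximate rise-and-fall time of one BOLD response
speech_rate_wps = 2     # native English speakers exceed ~2 words/second

words_per_window = bold_window_s * speech_rate_wps
print(f"Each BOLD impulse can span about {words_per_window} words")
```

So a single fMRI image can reflect 20 or more words at once, which is why the decoder cannot simply map one image to one word.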

Decoding continuous language thus becomes a challenge, as there are many more words than there are images. 

To cope, the decoder generated candidate word sequences, scored how well each candidate matched the brain signal the fMRI detected, and selected the phrase most likely to fit the context.
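The general "propose candidates, score them against the recorded signal, keep the best" loop can be sketched as below. This is not the study's implementation: the real system pairs a language model that proposes continuations with an encoding model that predicts BOLD responses from text, whereas the `score_against_brain` function here is a made-up, toy stand-in for that encoding model, and both function names are hypothetical.

```python
import numpy as np

def score_against_brain(candidate: str, recorded: np.ndarray) -> float:
    """Hypothetical stand-in for the study's encoding model. The real
    system would predict a BOLD response for the candidate text and
    compare it with the recorded signal; here we derive a deterministic
    toy 'predicted response' from the text and use correlation as the
    match score."""
    seed = sum(ord(ch) for ch in candidate)        # deterministic toy seed
    rng = np.random.default_rng(seed)
    predicted = rng.standard_normal(recorded.shape)  # fake predicted BOLD
    return float(np.corrcoef(predicted, recorded)[0, 1])

def decode_step(candidates: list[str], recorded: np.ndarray) -> str:
    """Keep the candidate continuation that best matches the brain signal."""
    return max(candidates, key=lambda c: score_against_brain(c, recorded))

# Toy usage: a fake recorded signal and three candidate continuations.
recorded = np.zeros(16)
recorded[:8] = 1.0
best = decode_step(
    ["she was coming back", "the weather was cold", "he opened the door"],
    recorded,
)
print("Selected continuation:", best)
```

The key design point the sketch preserves is that the text itself is never read out directly from the scan; the system only ranks proposed phrases by how well they explain the recorded activity.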

In addition to the words, the decoder could also translate a person’s thoughts as they watched a silent movie, creating an accurate description of what they were seeing.

But there are restrictions. For one thing, the fMRI scanner the decoder depends on is bulky and expensive, whereas practical use would demand mobile, compact equipment.

Moreover, training the decoder is a tedious process, and it has to be tailored to each individual who uses it, according to the research. Indeed, when the team tried to use a decoder trained on one person to read the thoughts of another, it failed.

This shows that every individual has a unique way of processing language, researchers said. 

The results are not exact, but an approximation. For example, when a participant was asked to imagine telling a story, what the decoder captured was: “To see her, for some reason, I thought she would come to me and say she misses me”, as opposed to the participant’s original thought: “Look for a message from my wife saying that she had changed her mind and that she was coming back.”

(Edited by Uttara Ramaswamy)


