
Research scientists at the University of Texas at Austin have developed a brain decoding technology that combines an fMRI scanner with artificial intelligence similar to well-known AI systems such as ChatGPT or Bard. The technology can spell out our thoughts in text form, but more importantly it may allow patients who cannot otherwise communicate, such as those with significant paralysis, to convey their thoughts. Unlike previous attempts to achieve this, the technique is completely non-invasive and does not require surgical implants. Training the AI system requires a participant to spend hours in the scanner listening to podcasts. Once trained, the AI can approximate that person's thoughts in text form, as long as the thinking occurs inside the scanner.

Artificial intelligence can spot patterns and make inferences where we cannot. This is the mechanism behind its occasional tendency to astound us, and reading someone's mind is a pretty neat trick. This latest technology can do just that, with a few caveats. If successful, such approaches could give patients with no other means of communication a way to convey their thoughts to the outside world, and unlike earlier efforts toward this goal, the method requires no invasive implants.

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said Alex Huth, a researcher involved in the study. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

The system requires participants to help train the AI by lying in an fMRI scanner for several hours while listening to podcasts. This allows the AI to learn the fMRI signatures of certain thoughts. After training, if a participant wishes to have their thoughts 'read' by the system, they can return to the scanner and the technology will analyze their brain activity.
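To make the training step concrete, here is a minimal sketch of the general idea: a linear "encoding model" is fit to predict each voxel's fMRI response from language features of the words the participant heard. The ridge regression, the stand-in random data, and all array shapes below are illustrative assumptions rather than the authors' actual pipeline.

```python
# Illustrative sketch only: fit a linear encoding model that predicts
# fMRI voxel responses from language features of the podcast transcript.
# The feature extractor, shapes, and ridge penalty are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data gathered in the scanner:
#   X: one row per fMRI time point, columns are language-model features
#      of the words heard around that time point.
#   Y: the measured BOLD response at each voxel for the same time points.
n_timepoints, n_features, n_voxels = 5000, 768, 2000
X = rng.standard_normal((n_timepoints, n_features))          # stand-in features
Y = X @ rng.standard_normal((n_features, n_voxels)) * 0.1    # stand-in responses
Y += rng.standard_normal((n_timepoints, n_voxels))           # measurement noise

# Ridge regression gives one linear readout per voxel.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(X, Y)

# The fitted model can now predict what the brain response *should*
# look like for any candidate phrase, which is what makes decoding
# by comparison possible (see the next sketch).
```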

The system does not provide a word-perfect transcript, but rather approximates the thought. The researchers provided the following example: “I don’t have my driver’s license yet” was the original thought, and the translation was “She has not even started to learn to drive yet.”
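That gist-level output is consistent with decoding by comparison rather than transcription: a language model proposes candidate phrasings, and the decoder keeps whichever candidate the encoding model says best explains the recorded brain activity. The sketch below, which reuses encoding_model and the stand-in data from the previous example, is a hedged illustration of that scoring step, not the published pipeline.

```python
# Illustrative continuation of the sketch above: score candidate
# sentences by how well the encoding model's predicted response
# matches the response actually recorded, and keep the best one.
def score(candidate_features: np.ndarray, observed: np.ndarray) -> float:
    """Higher is better: negative squared error between the response
    the encoding model predicts for this candidate and the observed
    fMRI response at one time point."""
    predicted = encoding_model.predict(candidate_features[None, :])[0]
    return -float(np.sum((predicted - observed) ** 2))

# Hypothetical candidates proposed by a language model for one time window;
# the random feature vectors stand in for real language-model embeddings.
candidates = {
    "I don't have my driver's license yet": rng.standard_normal(n_features),
    "She has not even started to learn to drive yet": rng.standard_normal(n_features),
    "The weather was pleasant that afternoon": rng.standard_normal(n_features),
}
observed_response = Y[0]  # one recorded fMRI time point, for illustration

best = max(candidates, key=lambda s: score(candidates[s], observed_response))
print(best)  # the decoder outputs whichever paraphrase fits best
```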

So how long will it take for such a system to end up in a dystopian police state where our thoughts are no longer private? The researchers have considered this possibility, but note that the system does not work on unwilling participants: people have to actively cooperate in training it, and they can easily confuse it by thinking about other things.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said Jerry Tang, another researcher involved in the study. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

The system may not always require an fMRI scanner, as more portable imaging modalities may work too. “Functional near-infrared spectroscopy (fNIRS) measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” said Huth. “So, our exact kind of approach should translate to fNIRS, although the resolution with fNIRS would be lower.”
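Since both modalities track the same hemodynamic signal, moving to fNIRS mainly means coarser spatial sampling. The toy sketch below pictures that resolution loss under the simplifying assumption that each fNIRS channel averages the signal from a large block of fMRI voxels; the channel count and voxel grouping are made up for illustration.

```python
# Toy illustration of Huth's point: an fNIRS channel sees roughly the
# same blood-flow signal as fMRI, but averaged over a much larger
# patch of cortex. Channel counts and groupings here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_voxels, n_channels = 200, 10000, 50

fmri = rng.standard_normal((n_timepoints, n_voxels))  # stand-in BOLD data

# Average blocks of neighboring voxels into a handful of fNIRS channels.
fnirs_like = fmri.reshape(n_timepoints, n_channels, -1).mean(axis=2)

print(fmri.shape, "->", fnirs_like.shape)  # (200, 10000) -> (200, 50)
# The decoding approach is unchanged; only the spatial detail drops.
```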

Study in journal Nature Neuroscience: Semantic reconstruction of continuous language from non-invasive brain recordings

Via: University of Texas at Austin


