Is an AI capable of “reading” your mind?

Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings 

Have you ever wondered what someone else is thinking? Have you ever caught yourself saying, “I should have known better”? That feeling is hindsight bias, which colors how we evaluate past events. Now imagine a non-invasive brain recording that could study thoughts like these. Research in this direction may illuminate how the brain represents and processes information: the study “Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings” built a brain-computer interface that decodes continuous language from non-invasive recordings.

Scientists are developing a brain-computer interface (BCI) that decodes continuous language from non-invasive brain recordings. This technology could help us understand how the brain processes language and inform novel treatments for language disorders (Tang et al., 2023).

Brain-Computer Interface Innovation

Until recently, brain-computer interfaces (BCIs) could recognize only a limited selection of words or phrases. In contrast, this non-invasive decoder reconstructs continuous language from cortical semantic representations recorded with functional magnetic resonance imaging (fMRI). The decoder generates intelligible word sequences from brain data (Tang et al., 2023) and can decode perceived, imagined, and silent speech by analyzing the cortical semantic representations that are active during thinking and speaking.

Testing Decoder Across Cortex

Researchers tested the decoder across the cortex and found that several distinct brain regions carry enough semantic information to reconstruct continuous language. This suggests that the brain processes language in a distributed way, which may change how we think about language and communication. The decoder was also tested for its dependence on the subject, and the results showed that subject cooperation is required both to train and to operate the decoder. This underscores the need for human-machine collaboration and the limits of AI without human input. By critically examining earlier outcomes in hindsight and applying new technologies and methods, the researchers improved our understanding of how the brain processes language and how that knowledge might improve communication and collaboration technology.

Encoding and Word Rate Model Performance

The study used fMRI data to build encoding and word rate models, which were tested on their ability to predict brain responses and word rate vectors for a perceived speech test story. The encoding model was evaluated by predicting brain responses to the test story and measuring the linear correlation between the predicted and actual single-trial responses (Caucheteux & King, 2022). Most cortical regions outside the core sensory and motor systems responded as the model predicted, and fitting the model on each person’s own training data improved it. The encoding model can thus analyze fMRI data and predict brain responses to speech stimuli.
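The paper’s actual pipeline extracts semantic features with a language model and fits regularized voxelwise regression on real fMRI recordings. The sketch below is only a minimal toy version of that encoding-model idea, with random stand-in features and made-up dimensions: fit one linear model per voxel, then score it by the linear correlation between predicted and actual held-out responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: stimulus features (e.g., semantic
# embeddings of story words) and fMRI responses for a handful of voxels.
n_train, n_test, n_feat, n_vox = 200, 50, 32, 10
X_train = rng.normal(size=(n_train, n_feat))
W_true = rng.normal(size=(n_feat, n_vox))          # hidden "true" mapping
Y_train = X_train @ W_true + rng.normal(scale=0.5, size=(n_train, n_vox))
X_test = rng.normal(size=(n_test, n_feat))
Y_test = X_test @ W_true + rng.normal(scale=0.5, size=(n_test, n_vox))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: one linear model per voxel."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

def voxelwise_corr(Y_pred, Y_true):
    """Linear correlation between predicted and actual responses, per voxel."""
    yp = Y_pred - Y_pred.mean(axis=0)
    yt = Y_true - Y_true.mean(axis=0)
    return (yp * yt).sum(axis=0) / (
        np.linalg.norm(yp, axis=0) * np.linalg.norm(yt, axis=0))

W = fit_ridge(X_train, Y_train)
scores = voxelwise_corr(X_test @ W, Y_test)
print(scores.round(2))  # one correlation score per voxel
```

In this toy setting the held-out correlations are high because the data really are linear in the features; with real fMRI data the same score simply identifies which voxels the encoding model explains at all.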

The study’s results show the predictive power of the encoding and word rate models: the encoding model can predict brain responses to speech stimuli in most cortical regions, while the word rate model can predict word rate vectors, especially when fitted to the auditory cortex. These findings shed light on how different brain regions process speech and lay the groundwork for fMRI-based speech decoding research.

In retrospect, the study’s approach and findings demonstrate the potential of fMRI data for decoding speech stimuli and for understanding the brain processes behind speech perception (Tang et al., 2023). These models could next be tested in more diverse speech settings and with larger samples. The findings may also inform voice prostheses and brain-decoding communication devices.

Speech Identification Performance

Language decoders trained on fMRI responses recorded while subjects listened to narrative stories were tested on both perceived and imagined speech. For the perceived speech test, the decoders were evaluated on single-trial fMRI responses and identified speech significantly more accurately than chance, showing that fMRI language decoders recognized speech patterns (Tang et al., 2023).
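An above-chance identification test of this kind can be sketched in a few lines. The toy version below is not the paper’s method: every number and representation is made up, and each “decoded” vector is just a noisy copy of the true stimulus embedding hidden among random distractors. Identification then means picking the candidate most similar to the decoder output and comparing accuracy with the chance rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy identification test: on each trial the decoder output is a noisy
# copy of the true stimulus embedding, hidden among random distractors.
n_trials, n_candidates, dim = 100, 10, 64
candidates = rng.normal(size=(n_trials, n_candidates, dim))
true_idx = rng.integers(n_candidates, size=n_trials)
decoded = (candidates[np.arange(n_trials), true_idx]
           + rng.normal(scale=0.7, size=(n_trials, dim)))

def identify(decoded, candidates):
    """Pick the candidate most similar (cosine) to each decoded vector."""
    d = decoded / np.linalg.norm(decoded, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=2, keepdims=True)
    return np.einsum('td,tcd->tc', d, c).argmax(axis=1)

accuracy = (identify(decoded, candidates) == true_idx).mean()
print(f"identification accuracy: {accuracy:.2f} (chance: {1 / n_candidates})")
```

Accuracy well above 1/n_candidates is the signature of a decoder that has captured real stimulus information, which is the logic behind the study’s above-chance results.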

The second phase tested the language decoders’ ability to distinguish imagined speech. Here, the decoders identified the imagined stimulus in every experiment, showing that they accurately recognized imagined speech patterns.

The study’s findings are significant for brain-computer interface (BCI) development. BCIs use machine learning to translate brain signals into actions such as typing or directing a robotic arm, and they depend on the language decoder’s accuracy in recognizing intended speech. The work suggests that training language decoders on fMRI data can significantly improve speech recognition BCIs. More broadly, the study shows how fMRI data can train language decoders for speech recognition: fMRI-based language decoders can increase BCI accuracy and enable new assistive technology for people with speech impairments.

Decoder Predictions’ Behavior

Researchers also evaluated decoder predictions of perceived speech behaviorally: they chose four 80-second segments from a perceived speech test story and, for each segment, wrote four multiple-choice questions based on the stimulus words. The questions were created without reference to the decoder predictions.

One hundred people were recruited to evaluate the decoder predictions: a control group read the stimulus words, while an experimental group read the decoded words. Both groups then answered the multiple-choice questions about the four perceived speech test story segments. The experimental group, which had read only the decoded words, scored significantly higher than chance on 9 of 16 questions.
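“Significantly higher than chance” on a multiple-choice question is typically established with a one-sided binomial test. The snippet below shows that calculation on entirely hypothetical numbers (the per-question response counts are not reported in the text above), using only the Python standard library.

```python
from math import comb

def p_above_chance(correct, total, chance):
    """One-sided binomial test: P(X >= correct) if answers were random."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical numbers: 50 readers of the decoded words each answer one
# 4-option question, and 30 of them answer correctly (chance rate 0.25).
p = p_above_chance(correct=30, total=50, chance=0.25)
print(f"p = {p:.2e}")
```

With 30 of 50 correct against an expected 12.5, the p-value is far below any conventional threshold, which is the kind of evidence behind the 9-of-16 result.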

The experimental group even scored higher than the control group, which had access only to the stimulus words, suggesting that decoder predictions of perceived speech can be accurate. By validating decoder predictions behaviorally, the work could improve speech recognition technology for hearing-impaired and speech-disabled people.

Cross-Cortical Decoding

To decode speech perception brain activity across the cortex, the researchers partitioned it into the speech network, the parietal-temporal-occipital association region, and the prefrontal cortex (PFC), all areas on which speech perception and processing depend. They then examined the decoding performance time course for a perceived speech test story in each of the three regions, measuring how accurately each region’s decoder predicted the speech stimulus over time. In all three regions, the decoder predictions were significantly more similar to the stimulus words than expected by chance, so the decoder accurately decoded speech perception brain activity in each region. These discoveries are essential for understanding how the brain processes and perceives speech.
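One simple way to picture a decoding performance time course is a sliding-window similarity between decoder output and stimulus. The paper uses proper language similarity metrics; the toy below substitutes a plain word-overlap score, and both word sequences are invented for illustration.

```python
# Toy windowed similarity: fraction of stimulus words recovered by the
# decoder within each sliding window (both sequences are made up).
stimulus = "i looked out the window and saw the storm coming in fast".split()
decoded = "i stared out a window and watched a storm rolling in quickly".split()

def window_overlap(decoded, stimulus, width=5):
    """Word-overlap score per window position along the story."""
    scores = []
    for start in range(len(stimulus) - width + 1):
        ref = set(stimulus[start:start + width])
        hyp = set(decoded[start:start + width])
        scores.append(len(ref & hyp) / width)
    return scores

scores = window_overlap(decoded, stimulus)
print([round(s, 2) for s in scores])
```

Plotting such a score over time for each region, and comparing it against scores for shuffled or random word sequences, gives the kind of time course and chance baseline the region-wise analysis describes.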

Conclusion

This study demonstrated that non-invasive language BCIs are possible. Functional magnetic resonance imaging (fMRI) records cortical semantic representations, which the decoder turns into continuous language output: intelligible word sequences that conveyed perceived and imagined speech. Tests on silent videos and behavioral experiments confirmed the decoder’s predictions.

Literature

Tang, J., LeBel, A., Jain, S. et al. Semantic reconstruction of continuous language from non-invasive brain recordings. Nat Neurosci 26, 858–866 (2023). https://doi.org/10.1038/s41593-023-01304-9

Caucheteux, C. & King, J.-R. Brains and algorithms partially converge in natural language processing. Commun Biol 5, 134 (2022). https://doi.org/10.1038/s42003-022-03036-1