Researchers have created a tool that can convert thoughts of speech into spoken words in near real time.

They hope the brain-computer interface will eventually enable people who cannot speak to talk, even though it is still in the experimental stage.

The device was tested on a 47-year-old quadriplegic woman who had been unable to talk for 18 years following a stroke, according to a recent study. As part of a clinical trial, doctors surgically implanted it in her brain.


According to Gopala Anumanchipalli, a co-author of the study published Monday in the journal Nature Neuroscience, it "converts her intent to speak into fluent sentences."

Other speech-related brain-computer interfaces, or BCIs, usually have a slight lag between the thought of a sentence and its computerized verbalization. Studies suggest these delays can disrupt the natural flow of conversation, leading to misunderstandings and frustration.

Jonathan Brumberg of the University of Kansas’ Speech and Applied Neuroscience Lab, who did not participate in the study, described this as “a pretty big advance in our field.”


Using electrodes, a team in California captured the woman's brain activity as she silently attempted sentences. To produce speech in the voice she once had, the scientists built a synthesizer from recordings of her voice made before her injury. They then trained an AI model that converts her neural activity into sound units.


According to Anumanchipalli of the University of California, Berkeley, it functions similarly to current systems used for real-time meeting or phone conversation transcription.


The implant itself rests on the brain's speech center, listening in and translating those signals into speech fragments that form sentences. Anumanchipalli described it as a "streaming approach," in which the system captures and decodes speech in 80-millisecond segments, each roughly half a syllable.

Anumanchipalli stated, “It’s not waiting for a sentence to finish.” “It is processing it in real time.”
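The streaming idea described above can be illustrated with a toy sketch: rather than buffering a whole sentence, the decoder emits one unit per fixed 80-millisecond window as soon as that window is complete. This is purely illustrative; the sampling rate, the chunking scheme, and the `decode_chunk` function are hypothetical stand-ins, not the study's actual model.

```python
# Toy sketch of chunked streaming decoding (assumptions labeled below).

SAMPLE_RATE_HZ = 1000          # assumed neural sampling rate, for illustration
CHUNK_MS = 80                  # segment length described in the article
CHUNK_SAMPLES = SAMPLE_RATE_HZ * CHUNK_MS // 1000  # samples per 80 ms window

def decode_chunk(samples):
    """Hypothetical placeholder for a trained model that maps neural
    activity to a sound unit (here, just the mean as a stand-in)."""
    return sum(samples) / len(samples)

def stream_decode(signal):
    """Yield one decoded unit per complete 80 ms chunk, as soon as the
    chunk arrives -- no waiting for a sentence boundary."""
    for start in range(0, len(signal) - CHUNK_SAMPLES + 1, CHUNK_SAMPLES):
        yield decode_chunk(signal[start:start + CHUNK_SAMPLES])

# Example: 400 ms of dummy data produces 5 decoded units.
units = list(stream_decode([0.0] * 400))
print(len(units))  # 5
```

The key design point the article describes is latency: each 80-ms window is decoded the moment it is filled, so output begins after the first half-syllable rather than after the full sentence.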

According to Brumberg, rapid speech decoding could allow the system to keep pace with the quick tempo of natural speech. He added that using voice samples "would be a significant advance in the naturalness of speech."

Despite receiving some funding from the National Institutes of Health, Anumanchipalli said the work was unaffected by recent NIH research budget cuts. Although further study is required before the device is suitable for widespread use, he said patients could have access to it within ten years with "sustained investments."
