Thanks to artificial intelligence, implants give speech to paralyzed patients

As a result of neurological disease, some paralyzed patients lose the ability to express themselves. Today, brain-machine interfaces powered by artificial intelligence are able to restore speech to patients who lost it, sometimes many years ago.

A considerable advance in giving a voice back to those who have lost it

Now 68, Pat Bennett was a human resources director, a former equestrian and an avid daily jogger. But in 2012, she was diagnosed with amyotrophic lateral sclerosis (Charcot’s disease). In her case, the degeneration of the neurons did not primarily affect her arms, legs or hands, but her brainstem, and therefore her ability to speak. Her brain can still formulate the instructions that generate the sounds of speech; her muscles can no longer execute those commands.

In March 2022, researchers at Stanford University implanted four small silicon arrays of 64 microelectrodes each. Penetrating only 1.5 millimeters into the cerebral cortex, they record the electrical signals produced by the areas of the brain linked to the production of language. These signals are carried outside the skull through a bundle of cables and processed by an artificial intelligence algorithm. Over four months, the machine “learned” to interpret them: it associates the signals with phonemes, the sounds that combine to form the words of a language, and processes them with the help of a language model.
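
The study’s actual decoder is a trained neural network, not published here, but the pipeline described above (electrode signals, then phoneme probabilities, then language-model decoding) can be sketched in miniature. Everything below is a hypothetical placeholder: the class names, the random weights standing in for a trained model, and the toy lexicon.

```python
import numpy as np

# Hypothetical dimensions: 4 arrays x 64 microelectrodes = 256 channels;
# roughly 40 phonemes cover English speech.
N_CHANNELS = 256
N_PHONEMES = 40

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class PhonemeDecoder:
    """Placeholder linear decoder standing in for the trained network
    that maps neural activity in each time bin to phoneme probabilities."""
    def __init__(self, rng):
        self.W = rng.normal(scale=0.1, size=(N_CHANNELS, N_PHONEMES))

    def decode(self, neural_window):
        # neural_window: (time_bins, N_CHANNELS) features from the electrodes
        return softmax(neural_window @ self.W)  # (time_bins, N_PHONEMES)

def pick_word(phoneme_probs, lexicon):
    """Toy stand-in for the language model: score each candidate word by
    the product of the probabilities of its phonemes, in order."""
    scores = {}
    for word, phoneme_ids in lexicon.items():
        steps = min(len(phoneme_ids), phoneme_probs.shape[0])
        scores[word] = float(np.prod(
            [phoneme_probs[t, phoneme_ids[t]] for t in range(steps)]))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
decoder = PhonemeDecoder(rng)
window = rng.normal(size=(5, N_CHANNELS))                     # fake neural data
lexicon = {"hello": [10, 3, 22, 8], "help": [10, 3, 22, 17]}  # fake lexicon
print(pick_word(decoder.decode(window), lexicon))
```

A real system would replace the linear map with a recurrent network trained on months of recordings, and the toy word scoring with a language model searching over whole sentences.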

When sentences were limited to a vocabulary of 50 words, the system’s error rate was 9.1%. When the vocabulary was expanded to 125,000 words (enough to express almost anything), the error rate rose to 23.8%. That is far from perfect, but it is a giant leap beyond previously available techniques.
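
The 9.1% and 23.8% figures are word error rates, the standard metric in speech decoding: the edit distance (substitutions, insertions, deletions) between the decoded sentence and the intended one, divided by the number of intended words. A minimal implementation, with an invented example sentence:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming (Levenshtein) table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in a ten-word sentence gives 10%, in the same range
# as the 9.1% reported for the 50-word vocabulary.
ref = "i would like to drink a glass of water please"
hyp = "i would like to drink a cup of water please"
print(f"{word_error_rate(ref, hyp):.1%}")  # 10.0%
```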

“We can now imagine a future in which we restore fluid conversation with a person suffering from paralysis of language,” Frank Willett, a Stanford professor and co-author of the study, said in a press briefing. With her brain-computer interface (BCI), Pat Bennett speaks via a screen at a rate of more than 60 words per minute. That is still far from the 150 to 200 words per minute of ordinary conversation, but already three times faster than the previous record, set in 2021 by the same team. “This is a scientific proof of concept, not a real device that people can use in everyday life,” Willett said. “But it’s a big step toward restoring rapid communication for paralyzed people who cannot speak.”

Another device, with an avatar that reproduces facial expressions

In the second experiment, conducted by Edward Chang’s team at the University of California, San Francisco, the device relies on a strip of electrodes placed on the surface of the cortex. Its performance is comparable to that of the Stanford system, with a median of 78 words per minute, five times faster than before.

This is a huge leap for the patient, paralyzed since a brainstem hemorrhage, who until now communicated at a maximum of 14 words per minute using a head-movement tracking technique.

In this experiment too, the error rate rises to around 25% when the patient uses a vocabulary of several tens of thousands of words.

The particularity of Professor Chang’s device is that it analyzes the signals emitted not only in the areas directly linked to language but also, more broadly, across the sensorimotor cortex, which activates the facial and oral muscles that produce sounds.
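
This two-stage idea, decoding the intended movements of the vocal-tract articulators from sensorimotor cortex and only then converting them into sound, can be sketched as follows. The electrode count, the linear maps standing in for trained networks, and every feature dimension here are illustrative placeholders, not the real system.

```python
import numpy as np

ARTICULATORS = ["lips", "jaw", "tongue", "larynx"]  # simplified set
N_CHANNELS = 253   # illustrative channel count for the cortical strip

rng = np.random.default_rng(1)
# Stage 1: placeholder map from cortical activity to articulator
# kinematics (the real system uses a trained neural network).
W_kinematics = rng.normal(size=(N_CHANNELS, len(ARTICULATORS)))
# Stage 2: placeholder map from articulator kinematics to acoustic
# features that a vocoder could turn into audible speech.
W_acoustics = rng.normal(size=(len(ARTICULATORS), 32))

ecog = rng.normal(size=(100, N_CHANNELS))   # 100 time bins of fake signals
kinematics = ecog @ W_kinematics            # (100, 4) articulator traces
acoustics = kinematics @ W_acoustics        # (100, 32) acoustic features
print(kinematics.shape, acoustics.shape)    # (100, 4) (100, 32)
```

Decoding movements rather than sounds directly is also what allows the same signals to drive the facial expressions of the avatar described below.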

“Five or six years ago, we really started to understand the electrical networks that command the movements of the lips, the jaw and the tongue, allowing us to produce the specific sounds of each consonant, vowel and word,” explained Professor Chang.

His team’s brain-machine interface produces language as text, but also through a synthesized voice and an avatar that reproduces the patient’s facial expressions while speaking. Because “the voice and our expressions are also part of our identity,” according to Professor Chang.

The team is now aiming for a wireless version of the device, which would have “profound consequences for the independence and social interactions” of patients, according to David Moses, study co-author and professor of neurosurgery at the University of California, San Francisco.