Scientists at the University of California, San Francisco have created a breakthrough AI system that takes the neural activity recorded while someone speaks and converts it into text.

The scientists tested the technology on volunteers who were already being monitored for seizures and so had electrode arrays implanted in their brains. The arrays let the scientists track brain activity while the volunteers repeated a set of sentences. The scientists then transferred the recordings to their AI program, which used an algorithm to convert the data — the volunteers’ brain activity while speaking — into sequences of numbers.
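The article doesn’t describe the exact algorithm, but the general idea of turning raw electrode recordings into numbers can be sketched as follows. This is a hypothetical simplification: it just averages the signal strength on each electrode over short time windows, producing one vector of numbers per window.

```python
import numpy as np

def neural_to_features(recordings, window=50):
    """Turn raw electrode recordings (electrodes x samples) into a
    sequence of numeric feature vectors, one per time window.
    A hypothetical stand-in for the encoding step in the article."""
    n_electrodes, n_samples = recordings.shape
    n_windows = n_samples // window
    # drop any leftover samples so the data splits evenly into windows
    trimmed = recordings[:, :n_windows * window]
    chunks = trimmed.reshape(n_electrodes, n_windows, window)
    # RMS amplitude per window: one number per electrode per time step
    features = np.sqrt((chunks ** 2).mean(axis=2))
    return features.T  # shape: (windows, electrodes)

# simulated recording: 4 electrodes, 500 voltage samples
rng = np.random.default_rng(0)
recordings = rng.normal(size=(4, 500))
features = neural_to_features(recordings)
print(features.shape)  # (10, 4): ten time steps, four electrodes each
```

In the real system the numbers would capture much richer structure than average signal strength, but the shape of the output is the same idea: a time series of number lists that the next stage of the AI can work with.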

Scientists then fed the numbers into another part of the AI system, which converted them into words. It wasn’t fully accurate at first, but by repeatedly comparing its output with the sentences the volunteers had actually spoken, the system gradually learned to reproduce them more accurately.
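The learn-by-comparison idea can be illustrated with toy data. Everything here — the word list, the noise model, the nearest-prototype decoder — is a hypothetical stand-in, not the network the researchers actually used, but it shows how seeing more repetitions of each spoken word lets a decoder make fewer mistakes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: each word has a hidden "true" neural pattern, and every
# spoken repetition is that pattern plus random noise.
words = ["hello", "world", "thanks"]
true_patterns = {w: rng.normal(size=8) for w in words}

def record(word):
    """Simulate the brain activity of speaking one word once."""
    return true_patterns[word] + rng.normal(scale=0.5, size=8)

def train(n_reps):
    """Average the repetitions seen so far into one prototype per word."""
    return {w: np.mean([record(w) for _ in range(n_reps)], axis=0)
            for w in words}

def decode(protos, activity):
    """Guess the word whose learned prototype is closest to the activity."""
    return min(protos, key=lambda w: np.linalg.norm(protos[w] - activity))

def accuracy(protos, trials=50):
    hits = sum(decode(protos, record(w)) == w
               for w in words for _ in range(trials))
    return hits / (len(words) * trials)

# More repetitions to compare against -> fewer decoding mistakes.
results = {n: accuracy(train(n)) for n in (1, 5, 20)}
print(results)
```

The real system compares whole sentences rather than single words and adjusts a neural network rather than averaging patterns, but the feedback loop is the same: compare the guess to the truth, and use the mismatch to do better next time.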

This is an improvement over previous methods, which required millions of hours’ worth of data for the AI to “learn” to convert brain activity to text correctly. With only a few repetitions of a number of sentences, the new program achieved a high accuracy rate.

The scientists who created the program, including Joseph Makin, believe it’s only the beginning. Makin expects that in time their AI will be able to interpret brain activity without a person needing to verbalize their thoughts at all.

This would make it possible to create an accurate speech prosthesis for people who are unable to speak. Currently, voice prostheses use a valve implanted between the trachea and esophagus. They work by funneling air from the lungs through the valve to the mouth. They’re not an option in some situations, such as when a person has facial paralysis. This new AI system would bypass the need to use the mouth or lungs to produce speech.