15 December 2021 - Anne-Katrin Wehrmann

When Thoughts become Spoken Words


Speech neuroprosthetics can make the words you think audible

Tanja Schultz is the Head of the Cognitive Systems Lab (CSL) © WFB/Jörg Sarbach

Some people lose the ability to speak because an illness deprives them of control over their muscles. There is now hope for those affected: a team at the University of Bremen has succeeded in transforming the brain signals involved in imagining words into sounds that can be heard through a loudspeaker.

Many people still remember the renowned English astrophysicist Stephen Hawking sitting in his wheelchair and using a computer with a voice synthesiser to communicate with the world around him. Hawking was diagnosed with motor neurone disease as a young man, and when he died in 2018, he had been unable to speak unaided for 33 years. In his later years, he could only control the voice synthesiser by twitching the muscles in his cheeks: a slow and extremely laborious process. It is people like him, who suffer from neuromuscular diseases, who will enjoy a notable improvement in their quality of life in the future, thanks to the speech neuroprosthetics developed at the University of Bremen. "The aim of our system is to make it possible for these people to hold conversations naturally", says Tanja Schultz, Head of the Cognitive Systems Lab (CSL).

"Groundbreaking research success"

The first major step along this route has already been taken – it's a "groundbreaking research success", states the 57-year-old professor of computer science emphatically. "We've made it possible for our test subjects to hear themselves speak, even though they only imagined the words". The paper on this topic, which is soon to be published in an internationally renowned scientific journal, is based on a study involving an epilepsy patient who had had deep electrodes implanted in her brain for medical investigations. "When people speak, the impulse for the speech itself comes from the brain", explains Miguel Angrick, lead study author. "The implanted electrodes enable us to represent these processes by recording the corresponding brain signals."

International cooperation

As part of his PhD thesis, the now-graduated 30-year-old developed an algorithm that converts these language-specific neural processes directly into audible speech. To achieve this, 25 participants were first asked to read a text out loud. The members of the international project team, which also includes researchers from Virginia Commonwealth University in the USA and Maastricht University in the Netherlands, then recorded the resulting acoustic signals together with the brain signals underlying them. The Bremen computer scientists then aligned these two sets of signals, enabling them to identify the specific neural activity on which each sound was based.
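To make this pairing step concrete, here is a minimal Python sketch of how simultaneously recorded audio and brain signals could be cut into time-aligned windows for learning. All names, sampling rates, window lengths and channel counts are illustrative assumptions, not details taken from the study.

```python
import numpy as np

AUDIO_SR = 16_000   # audio sampling rate in Hz (assumed)
EEG_SR = 1_000      # neural sampling rate in Hz (assumed)
WIN_MS = 50         # shared analysis window length in milliseconds

def windows(signal: np.ndarray, sr: int) -> np.ndarray:
    """Cut a 1-D audio signal or a 2-D (channels x time) neural signal
    into non-overlapping windows of WIN_MS milliseconds."""
    step = int(sr * WIN_MS / 1000)
    n = signal.shape[-1] // step
    return np.stack([signal[..., i * step:(i + 1) * step] for i in range(n)])

def paired_training_data(audio: np.ndarray, eeg: np.ndarray):
    """Return time-aligned (neural window, audio window) pairs, so that a
    model can learn which neural activity underlies which sound."""
    a, e = windows(audio, AUDIO_SR), windows(eeg, EEG_SR)
    n = min(len(a), len(e))   # trim to the shorter of the two recordings
    return list(zip(e[:n], a[:n]))

# Example with random stand-in data: 3 seconds of audio plus 64-channel iEEG.
pairs = paired_training_data(
    np.random.randn(3 * AUDIO_SR),
    np.random.randn(64, 3 * EEG_SR),
)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 60 (64, 50) (800,)
```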

Miguel Angrick and Tanja Schultz have achieved a groundbreaking research success. © WFB/Jörg Sarbach

A scarce pool of data

However, implanting electrodes in human beings purely for research purposes is prohibited, which makes the development of effective speech neuroprosthetics more difficult. "This is why our research is based on data that has to be collected for medical purposes anyway", explains Schultz. "We work together with people who suffer from severe epileptic seizures. Their electrodes are placed in the areas that the neurosurgeons suspect are the source of their problems, not where we would like them to be for our work". In the case of the patient mentioned above, whose data the researchers were able to use for their current study, these two requirements happened to coincide: her electrodes covered parts of the brain that are especially important for the generation of speech.

Similar processes for audible and imagined speech

As part of the study, the patient first read texts out loud. Using a machine-learning process, the system then identified and learned how the spoken words corresponded to her neural activity. "In the second step, this learning process was repeated, first with whispered speech and then with imagined speech", reports Angrick. "For the very first time, this enabled our system to generate synthesised speech in real time, without a discernible delay: in other words, an audible signal based on another signal." Although the system learned these interrelationships exclusively from audible speech, it was also able to output audible sounds for whispered and imagined speech: "This leads us to conclude that the fundamental processes in the brain for audible and imagined speech are comparable."
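The real-time loop Angrick describes can be pictured roughly as follows: each incoming window of neural activity is decoded into a short audio frame and played back immediately. The random linear "decoder" below is a stand-in for the trained model; everything here is a conceptual sketch, not the published system.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS, WIN, FRAME = 64, 50, 800   # illustrative sizes, as above

# Stand-in for a trained decoder: a random linear map from one window of
# neural activity to one frame of audio samples. The real system learns
# this mapping from the read-aloud recordings described earlier.
W = rng.normal(scale=0.01, size=(N_CHANNELS * WIN, FRAME))

def decode(neural_window: np.ndarray) -> np.ndarray:
    """Turn one (channels x time) window of brain activity into audio."""
    return neural_window.reshape(-1) @ W

def run_closed_loop(neural_stream, play) -> None:
    """Decode each neural window as soon as it arrives and play it back
    at once, so the user hears themselves 'speak' without a delay."""
    for window in neural_stream:
        play(decode(window))

# Simulated stream of ten neural windows; 'play' just reports frame sizes.
stream = (rng.normal(size=(N_CHANNELS, WIN)) for _ in range(10))
run_closed_loop(stream, lambda frame: print(f"played {frame.size} samples"))
```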

The aim is not to create a mind-reading machine

It is important to the researchers to stress that their speech neuroprosthesis should only output the words the patient actually wants to say. "When fleeting thoughts rush through the brain, we don't assume that they are planned and articulated, and that's also not something our approach can capture", clarifies Tanja Schultz. "And that's not something we want to do either", adds Miguel Angrick. "Building a mind-reading machine isn't our goal. We're creating a prosthesis, and that's something people will have to learn how to use."

Miguel Angrick is lead author of the widely acclaimed study. © WFB/Jörg Sarbach

Testing continues

According to Schultz, the current study represents a huge developmental leap forward, although there is still a long way to go. "The output quality isn't quite as convincing as we'd like. I'd still like to try out a couple more algorithms to improve the way speech is synthesised." In addition, Schultz and her team want to show that, with some training, prosthesis users will be able to improve the audio quality themselves.

In the meantime, former PhD student Miguel Angrick has just signed a contract with the prestigious Johns Hopkins University in the USA. He will soon start working there on a research project with "locked-in" patients, whose almost total paralysis means that they can no longer speak. His former boss is optimistic: "I'm sure that the research group there will be able to implant the first speech neuroprostheses in patients fairly soon", says Tanja Schultz. "I estimate that this should happen in the next three to five years."
