Researchers translate thoughts into Pink Floyd song

From song to earworm and back to song: the scientific breakthrough could be described in much the same way. Researchers have reconstructed real music from mere thoughts. The fact that brain waves can be converted into audio waves using AI could give many people a voice – quite literally.

“Hey, teacher, leave them kids alone!” Well, can you hear it already? With “Another Brick in the Wall”, the British rock band Pink Floyd created an earworm that has been haunting countless minds for more than four decades – quite literally. But today, at the latest when the slightly eerie children’s choir sets in, the question should really be: Well, can you think it already?

Researchers at the University of California, Berkeley, appear to have demonstrated exactly that. Using artificial intelligence, they converted the brain waves of subjects who heard the song into audio waves. Put more simply: they translated a thought back into real sound – a living earworm, so to speak.

Researchers translate Pink Floyd song from test subjects’ thoughts

“We don’t need no thought control,” goes the chorus of the 1979 rock classic. And indeed, this was never about control: a person hears a song, a computer watches the person listening – and translates their brain activity back into music. What, like so many other things, once sounded like science fiction is now simply science – real, applied science.

According to a study published on Tuesday in the journal “PLoS Biology”, a team of researchers succeeded in analyzing the brain activity of test subjects listening to the well-known song “Another Brick in the Wall” and in using AI to translate the neuronal patterns back into an audio track.
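How such a translation can work in principle can be sketched in a few lines of code. The following is a minimal, purely illustrative example rather than the study’s actual pipeline: a regularized linear regression learns to predict an audio spectrogram from time-lagged electrode activity, here using synthetic stand-in data. All names and numbers (64 electrodes, 32 frequency bins, the add_lags helper) are assumptions for the sketch; a real system would also have to turn the predicted spectrogram back into audible sound.

```python
# Minimal, hypothetical sketch: decode an audio spectrogram from neural activity.
# All data here is simulated; the real study used intracranial recordings and
# more elaborate models. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32

# Stand-in for recorded brain activity: one feature per electrode and time step.
neural = rng.standard_normal((n_samples, n_electrodes))

# Stand-in for the song's spectrogram, built as a noisy mixture of the neural
# features so that the decoder has a real relationship to recover.
mixing = rng.standard_normal((n_electrodes, n_freq_bins))
spectrogram = neural @ mixing + 0.5 * rng.standard_normal((n_samples, n_freq_bins))

def add_lags(x, lags=3):
    # Stack the signal with short time shifts so the decoder can use activity
    # from just before each audio frame, not only the simultaneous sample.
    return np.hstack([np.roll(x, lag, axis=0) for lag in range(lags)])

X = add_lags(neural)
X_train, X_test, y_train, y_test = train_test_split(X, spectrogram, shuffle=False)

# Ridge regression: one linear readout per spectrogram frequency bin.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
pred = decoder.predict(X_test)

# Evaluate by correlating predicted and actual spectrograms, bin by bin.
corrs = [np.corrcoef(pred[:, i], y_test[:, i])[0, 1] for i in range(n_freq_bins)]
print(f"mean decoding correlation: {np.mean(corrs):.2f}")
```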

The result is a scientific breakthrough that for many people could be the first step towards a new, more self-determined life.

Admittedly, the mental rendition of Pink Floyd’s worldwide hit is certainly not a finely mixed treat for the ears. The AI-reconstructed music sounds muffled at best – a bit like an alien humming the song from the bottom of a lake.

The research results also made the difference between music and language visible: when the subjects listened to music, it was mainly electrodes over the right hemisphere of the brain that lit up; with language, it is the other way around. This could also explain why people who can no longer speak clearly after a stroke can sometimes still sing clearly.

Scientifically, there is much to be said for Pink Floyd – and non-scientifically, too

Now “you can actually listen to the brain and recreate the music that the person heard,” Gerwin Schalk, whose laboratory in Shanghai, China, collected data for the study, told the “New York Times” (NYT).

The scientists had previously succeeded in extracting individual words from such electrical signals – even when the participants remained silent. Study leader Robert Knight of the University of California, Berkeley, had one of his doctoral students, Ludovic Bellier, try the same thing with music – reportedly because Bellier had played in a band himself.

And why this song? The combination of 41 seconds of lyrics and around two and a half minutes of varied instrumental passages is particularly well suited to observing how the brain reacts to words and melodies. “The less scientific reason might be that we just really like Pink Floyd,” study author Bellier told “Scientific American”. Not to mention, as Schalk told the NYT: “If they had said, ‘I can’t listen to this crap,’ the data would have been awful.”

Mind reading only with surgery?

The big catch: for the whole thing to work, the subjects had to have electrodes implanted on the surface of their brains. That is why all 29 subjects in the study were epilepsy patients, who had had a network of needle-like electrodes implanted as part of their treatment.

This is also why the reconstruction sounds so muffled: the researchers could only analyze the parts of the brain covered by the electrodes. More electrodes, better sound – or so the assumption goes. In the future, they hope to manage without surgical interventions altogether, for example by placing significantly more sensitive electrodes on the scalp.

Colleagues at the University of Texas at Austin already managed something similar this year: using MRI scans and AI, they were able to convert thoughts into continuous text. The exact words did not come out, but the gist of the sentences did – and all of it without any surgical intervention.

So far, however, such methods of translating brain waves into speech are hardly practical: producing a single letter takes around 20 seconds.

Digital voice prostheses: AI could give a voice to thousands

The Berkeley researchers hope that in the future they will be able to reconstruct not only music but also language – with all its nuances and emotions – in a similar way. That way, they could quite literally give a voice to people who can no longer communicate.

According to Schalk, so-called “prosody” has been a hurdle so far: language consists not just of words, but also of rhythm, pauses, intonation, stress and emotion. It is their interplay that ultimately distinguishes human speech from mechanical gibberish. “Instead of robotically saying, ‘I. Love. You,’ you can scream, ‘I love you!’” Knight explains.

By better understanding how the brain processes music, the researchers hope to close this gap. A new generation of “speech prostheses” would then translate not only bare thoughts into words but also convey the emotions that go with them. The chances have probably never been better.

Sources: “PLoS Biology”; “New York Times”; “The Guardian”; “Scientific American”
