1 minute read

Neuroscientists use AI to recreate music from brain activity

It has long been a goal of neuroscientists to generate words from brain activity alone. People who have lost the ability to speak, through illness or accident, could then communicate through a computer.

Scientists have now moved a step closer to this goal. A study analysed data recorded from electrodes placed on the brain while participants listened to music. Using machine learning, the researchers were able to reconstruct a recognisable, if somewhat garbled, version of the music from the brain recordings alone.

Although the study related to music, the research is particularly relevant to human speech. This is because speech contains melodic nuances, including tempo, stress, accents and intonation, known as "prosody", which carry meaning beyond the content of the words alone. If such features of speech can be recreated, the speech becomes much more natural and meaningful.

When considering patents in areas such as this, patent attorneys need to navigate various exclusions from patentability. In Europe, certain methods relating to therapy are not patentable, whereas no such exclusion applies to devices. Other exclusions apply to computer-related inventions, but these only apply to the extent that an invention relates to the excluded area as such. These matters are best considered on a case-by-case basis, and at Marks & Clerk we have specialists who can guide you through all these areas.

To turn brain activity data into musical sound, the researchers trained an artificial intelligence model to decode data captured from thousands of electrodes that were attached to the participants as they listened to a Pink Floyd song while undergoing surgery.
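The study's actual pipeline is far more sophisticated, but the core idea of decoding sound from electrode recordings can be illustrated with a toy sketch. The example below is an assumption-laden simplification: it simulates electrode activity with a hidden linear relationship to a spectrogram, fits a ridge regression (one common decoding approach, not necessarily the one used in the study), and reconstructs held-out spectrogram frames from the simulated brain signals alone.

```python
import numpy as np

# Toy illustration only -- simulated data, not the study's real method or dataset.
rng = np.random.default_rng(0)

n_frames, n_electrodes, n_freq_bins = 200, 32, 16

# Assume a hidden linear mapping from electrode activity to spectrogram bins.
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_frames, n_electrodes))  # simulated electrode features
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_frames, n_freq_bins))

# Train on the first 150 frames, hold out the last 50 for reconstruction.
train, test = slice(0, 150), slice(150, 200)

# Ridge regression in closed form: W = (X^T X + lam*I)^-1 X^T Y
lam = 1.0
X, Y = neural[train], spectrogram[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

# Reconstruct the held-out spectrogram frames from neural activity alone.
reconstruction = neural[test] @ W

# How well does the reconstruction match the true spectrogram?
corr = np.corrcoef(spectrogram[test].ravel(), reconstruction.ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

In a real decoding study, the final step would be to convert the predicted spectrogram back into audio, which is where the "garbled but distinctive" character of the reconstructed music comes from.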


medical technologies, artificial intelligence