[Reading Science] AI Eavesdrops on Human Brain to Play Music
After Vision, Hearing Reconstructed from Brain Activity for the First Time
Artificial intelligence (AI) is accelerating the era in which human brainwaves can be read by machines. Recently, scientists succeeded in reproducing the exact song a person heard solely by measuring and analyzing their brainwaves.
A research team from the University of California, Berkeley, published a paper detailing these experimental results on the 14th (local time) in the academic journal PLOS Biology.
The team conducted the study on 29 epilepsy patients, attaching a brainwave measurement device about the size of a postage stamp to the surface of each patient's brain. They played the patients the song "Another Brick in the Wall, Part 1," released in 1979 by the famous rock band Pink Floyd, while measuring changes in brainwaves in the areas of the brain responsible for processing musical elements such as lyrics, rhythm, harmony, and pitch. The team then used AI to analyze these brainwave changes and succeeded in translating them back into actual music. As the technology advances, people who have lost the ability to speak through accident or disease may one day be able to sing as well as speak via brainwave analysis and reconstruction.
Neuroscientists have attempted so-called "brain eavesdropping" for decades, detecting and reconstructing what people see, hear, and think from changes in their brainwaves. This study appears to be the first successful reconstruction of what participants heard from brain activity recorded with an implanted brainwave measurement device. Previous experiments had measured the brainwaves of people engaged in visual tasks, such as viewing faces, landscapes, or animated videos, and reconstructed the images with AI; this is the first time an auditory experience has been reproduced.
Why did the research team choose Pink Floyd, and this particular song? Because, they explained, it combines complex chords, a variety of instruments, and distinct rhythms. One AI model analyzed the electrical changes in the different parts of the brain responding to the song's acoustics, rhythm, tone, and pitch variations. A second AI model then used the results of that analysis to estimate the sounds the participants had heard. The results were excellent. "The reconstructed melody was almost undamaged, and although the lyrics were slightly distorted, it was enough to identify what was heard," the team explained.
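The decoding step described above, predicting the sound a listener heard from recorded neural signals, is commonly framed as a regression problem: a model learns a mapping from electrode activity to the audio spectrogram. The sketch below illustrates that idea with ridge regression on synthetic data; the dimensions, noise level, and linear mixing are illustrative assumptions, not the team's actual method or data.

```python
# Illustrative sketch of spectrogram decoding from neural signals.
# All data here is synthetic; real studies use intracranial recordings.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: time windows, electrodes, spectrogram frequency bins.
n_windows, n_electrodes, n_bins = 500, 64, 32

# Pretend the "true" spectrogram drives neural activity linearly, plus noise.
true_spec = rng.random((n_windows, n_bins))
mixing = rng.normal(size=(n_bins, n_electrodes))
neural = true_spec @ mixing + 0.1 * rng.normal(size=(n_windows, n_electrodes))

# Split into training and held-out test windows.
split = 400
X_train, X_test = neural[:split], neural[split:]
Y_train, Y_test = true_spec[:split], true_spec[split:]

# Fit a ridge-regression decoder: neural features -> spectrogram bins.
lam = 1.0
W = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_electrodes),
    X_train.T @ Y_train,
)
Y_pred = X_test @ W

# How well does the decoded spectrogram match the actual one?
corr = np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1]
print(f"decoding correlation: {corr:.2f}")
```

In a real experiment the decoded spectrogram would then be converted back into audible sound, which corresponds to the second model the article describes.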
The research team also identified which parts of the brain handle specific sounds and musical elements. When voices or synthesizers come in, the brain's general voice-processing area, the superior temporal gyrus located just behind the ear, responds. A different area, however, is responsible for processing continuous buzzing sounds.
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.