Music
University of Tennessee's "HarmonyCloak" to Shield Music from AI
2024-12-10
As generative artificial intelligence continues to advance, researchers at the University of Tennessee, Knoxville have developed HarmonyCloak, a tool that protects music from being learned and mimicked by AI models. The work matters because AI models like ChatGPT, Copilot, and MusicLM pull data from across the internet to generate various forms of content, including music. Assistant professor Jian Liu was concerned that these models might use copyrighted sources to create new content, to the detriment of musicians, so he collaborated with doctoral student Syed Irfan Ali Meerza and assistant professor Lichao Sun of Lehigh University to create HarmonyCloak.

Protecting Musicians' Intellectual Property

HarmonyCloak embeds low-pressure sounds into a song or composition. These sounds are imperceptible to the human ear but make it difficult for AI models to learn from the music, preventing them from mimicking songs and violating copyrights. In July, Tennessee's Ensuring Likeness Voice and Image Security Act took effect to protect against deepfakes and unauthorized uses of artists' voices and likenesses. The ELVIS Act focuses on protecting artists up front, while HarmonyCloak provides an additional layer of protection for the music itself.

Liu had two main goals for HarmonyCloak. First, the added notes had to have minimal perceptible impact on the listener. Second, those subtle perturbations had to make the music unlearnable by generative AI. The team developed the tool through trial and error, starting with audible perturbations and then experimenting with high and low frequencies to make the notes inaudible. Each extreme of the frequency spectrum affected different musical components, and they found that low-pressure notes, which the human ear doesn't usually pick up, were the most effective. The tool inserts these perturbations into the song to cloak it, leaving the music nearly indistinguishable from the original. When used to train an AI model, however, the difference is stark: a model trained on the uncloaked song produces a mimic that sounds extremely similar to the original, while a model trained on the cloaked version produces discordant sounds.

Generative AI learns by closing a "knowledge gap" between what it already knows and the new data it is shown. HarmonyCloak tricks the model into believing that gap has already been closed, so it learns nothing from the song, an approach that has not been seen before. HarmonyCloak functions similarly to anti-AI tools like Glaze and Nightshade, developed by Ben Zhao and his research team at the University of Chicago, but it is designed specifically for music.

For their research paper, the team had around 50 people listen to cloaked and uncloaked songs at random. Only one participant, an audiophile, reported hearing the perturbations. The rest heard no added noise, were satisfied with the quality of the music, and couldn't distinguish between the two versions.

The group will continue refining HarmonyCloak by involving musicians, audiophiles, and students at UT's College of Music in their next experiments. Liu hopes to use HarmonyCloak to teach students about ways to prevent AI from infringing on copyright. This is just the beginning, and he wants to send a signal to high-tech companies that they need to consider the rights of data owners and the intellectual property of artists and musicians.

Keenan Thomas is a higher education reporter. Email keenan.thomas@knoxnews.com. On X, formerly known as Twitter: @specialk2real.
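The "knowledge gap" idea described above corresponds, at a high level, to what the machine-learning literature calls error-minimizing or "unlearnable" noise. Below is a minimal, hypothetical sketch in Python/PyTorch of that general idea, not the authors' actual HarmonyCloak implementation: the names TinyPredictor and cloak, the surrogate model, and the eps amplitude budget are all illustrative stand-ins, and the real tool reportedly hides its perturbations below what the ear can detect rather than using a simple amplitude clamp.

```python
# Hypothetical sketch of error-minimizing ("unlearnable") noise for audio.
# This is NOT the HarmonyCloak code; it only illustrates the general idea
# described in the article: add a small, bounded perturbation to a clip so
# that a model training on it sees almost no error, i.e. "nothing to learn."
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyPredictor(nn.Module):
    """Toy audio reconstruction model standing in for a generative music model."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):  # x: (batch, 1, samples)
        return self.net(x)

def cloak(clip, model, eps=1e-3, steps=200, lr=1e-4):
    """Optimize a bounded perturbation delta (|delta| <= eps) that MINIMIZES
    the surrogate model's loss on the clip, so training on the cloaked audio
    yields almost no learning signal."""
    model.eval()
    for p in model.parameters():      # freeze the surrogate; only delta is optimized
        p.requires_grad_(False)
    delta = torch.zeros_like(clip, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = clip + delta
        loss = nn.functional.mse_loss(model(cloaked), cloaked)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the noise under an (assumed) audibility budget
    return (clip + delta).detach()

if __name__ == "__main__":
    model = TinyPredictor()
    clip = 0.1 * torch.randn(1, 1, 16000)   # one second of placeholder 16 kHz audio
    cloaked_clip = cloak(clip, model)
    print("max perturbation:", (cloaked_clip - clip).abs().max().item())
```

The key design point the sketch tries to show is the direction of the optimization: the noise minimizes the model's training error rather than maximizing it, which is what makes the cloaked song look "already learned" and leaves the model with no gap to close.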