
Researchers at Binghamton University, in collaboration with the startup Cauth AI, have developed a tool called My Music My Choice (MMMC) to protect artists’ voices from AI cloning.
The Problem: AI models can now clone a voice from just a few seconds of audio, fueling a surge of deepfake songs online. These clones raise intellectual-property concerns, cost artists revenue, and take an emotional toll.
How the Tool Works: MMMC adds tiny, imperceptible perturbations to a song’s audio waveform. The vocal sounds completely normal to human ears, but the subtle alterations make the protected track register as an entirely different voice to an AI model, so any attempt to clone it yields only distorted noise. Artists can apply the protection before releasing a track.
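To make the mechanism concrete, here is a toy sketch of the general idea behind imperceptible adversarial audio perturbation: a small change to the waveform, clamped so it stays below an audibility budget, optimized so that a voice encoder’s representation of the track drifts away from the original voice. This is an illustration under stated assumptions, not MMMC’s actual algorithm; the DummyVoiceEncoder, the budget eps, and the optimization loop are hypothetical stand-ins.

```python
# Toy sketch (hypothetical): optimize a bounded perturbation of a waveform so a
# stand-in voice encoder's embedding moves away from the original voice, while
# every sample changes by at most eps (the "imperceptibility" budget).
import torch
import torch.nn as nn


class DummyVoiceEncoder(nn.Module):
    """Stand-in for a real speaker/voice encoder; an assumption, not the paper's model."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 64),
        )

    def forward(self, wav):
        # wav: (batch, samples) -> (batch, 64) voice embedding
        return self.net(wav.unsqueeze(1))


def protect(wav, encoder, eps=0.002, steps=100, lr=1e-3):
    """Return wav + delta, with delta optimized to push the encoder's embedding
    away from the original voice while |delta| <= eps per sample."""
    for p in encoder.parameters():          # freeze the encoder; only delta is trained
        p.requires_grad_(False)
    with torch.no_grad():
        original = encoder(wav)             # embedding of the clean vocal
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder(wav + delta)
        loss = -(emb - original).norm(dim=-1).mean()  # maximize embedding distance
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():               # project back into the audibility budget
            delta.clamp_(-eps, eps)
    return (wav + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    wav = torch.randn(1, 16000) * 0.1       # 1 second of placeholder audio at 16 kHz
    encoder = DummyVoiceEncoder()
    protected = protect(wav, encoder)
    print("max per-sample change:", (protected - wav).abs().max().item())
```

In this sketch, the eps clamp is what stands in for imperceptibility: every sample of the protected track differs from the original by at most a tiny fixed amount, yet the encoder sees a substantially different embedding, mirroring the described effect of the track sounding unchanged to humans but like a different voice to an AI.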
Testing: The tool was evaluated on 150 music tracks spanning multiple genres, with plans to expand to larger datasets and to benchmark it against similar protection methods.
In short, it’s a kind of “audio watermarking” designed to confuse AI cloning systems while leaving the listening experience unchanged for humans.
The paper was presented at the NeurIPS 2025 Workshop on AI for Music.