A new “machine unlearning” technique could offer a powerful defense against the growing threat of audio deepfakes. Developed by researchers at Sungkyunkwan University, the method lets an AI text-to-speech model selectively “forget” how to mimic specific voices, neutralizing its ability to produce convincing deepfakes of those voices and reducing the risk of voice identity theft and fraud. The unlearning process introduces a small performance trade-off, but the technology represents a significant step toward protecting individuals from unauthorized voice cloning. Lead researcher Jong Hwan Ko and his team aim to refine the technique for real-world use, addressing the urgent need for safeguards against malicious audio manipulation. The findings were first reported by MIT Technology Review.
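The article does not detail how the Sungkyunkwan method works, but the general idea behind machine unlearning can be sketched with a toy model. The sketch below is a generic, hypothetical illustration (not the researchers' technique): a tiny linear "TTS" model is trained on data from two synthetic speakers, then speaker A is unlearned by taking gradient *ascent* steps on A's loss while continuing ordinary descent on speaker B's data, so the model loses A's voice while retaining B's. All names and numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a TTS model: a linear map from "text features" to
# "acoustic features". Each speaker's voice is a distinct target map.
# Hypothetical illustration of unlearning in general, NOT the method
# described in the article.
d = 8
W_a = rng.normal(size=(d, d))   # voice of speaker A (to be forgotten)
W_b = rng.normal(size=(d, d))   # voice of speaker B (to be retained)

def make_batch(W, n=64):
    X = rng.normal(size=(n, d))
    return X, X @ W

def mse(W, X, Y):
    E = X @ W - Y
    return float(np.mean(E * E))

def grad(W, X, Y):
    # Gradient of the mean-squared error with respect to W.
    return 2.0 / len(X) * X.T @ (X @ W - Y)

# 1) Train one shared model on both speakers.
W = np.zeros((d, d))
Xa, Ya = make_batch(W_a)
Xb, Yb = make_batch(W_b)
for _ in range(300):
    W -= 0.05 * (grad(W, Xa, Ya) + grad(W, Xb, Yb))

loss_a_before, loss_b_before = mse(W, Xa, Ya), mse(W, Xb, Yb)

# 2) Unlearn speaker A: ascend on A's loss (move AWAY from A's voice),
# while still descending on B's data to limit the performance trade-off
# on retained voices.
for _ in range(200):
    W += 0.01 * grad(W, Xa, Ya)   # forget A
    W -= 0.05 * grad(W, Xb, Yb)   # keep B

loss_a_after, loss_b_after = mse(W, Xa, Ya), mse(W, Xb, Yb)
```

After unlearning, the model's error on speaker A rises sharply (it can no longer imitate that voice) while its error on speaker B stays low, mirroring the "small performance trade-off" the article mentions; real systems apply the same forget/retain balancing to large neural TTS models rather than a linear map.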