Photo by Timo Altay on Pexels
Reality Defender, a company specializing in deepfake detection, has reportedly bypassed the safeguards OpenAI built into Sora, its text-to-video AI model, to prevent impersonation. The bypass was achieved within 24 hours of Sora’s release, raising concerns about the effectiveness of the current protections. The exploit targeted OpenAI’s ‘cameos’ feature, which is meant to give users control over their likeness in generated videos and to ensure consent.

Ben Colman, CEO of Reality Defender, argues that the incident exposes the false sense of security offered by safeguards on platforms like Sora, given how accessible tools for bypassing authentication measures have become. The story is gaining traction on online forums, including Reddit’s Artificial Intelligence community, where it has sparked debate about the evolving challenges of deepfake detection and prevention.

The original discussion can be found at https://old.reddit.com/r/artificial/comments/1obpjcl/openais_sora_underscores_the_growing_threat_of/
