Google is bolstering Gemini’s AI fake-detection capabilities, enabling users to question the origins of visual content. Initially, users can ask Gemini whether an image was produced or altered using Google AI tools. This functionality is slated to broaden to video and audio, with future integration into Google Search.

The current image verification relies on SynthID, Google’s proprietary invisible AI watermarking technology. However, plans are in place to incorporate C2PA content credentials, allowing Gemini to identify content created by a more diverse array of AI tools, potentially including OpenAI’s Sora. In addition, images generated by Google’s Nano Banana Pro model will now carry embedded C2PA metadata. The timing aligns with TikTok’s recent announcement that it will leverage C2PA metadata as part of its invisible watermarking initiative for AI-generated content.

While this manual verification option within Gemini is a valuable tool, automated identification of AI-generated content by social media platforms remains paramount in the fight against misinformation.
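To make the C2PA mechanism concrete: content credentials are embedded in a file as a JUMBF box whose manifest store is labeled "c2pa". The sketch below is a naive heuristic that merely scans a file's raw bytes for those labels; the function names are hypothetical, and a real verifier (such as the C2PA reference SDK or `c2patool`) would parse the manifest and cryptographically validate its signature rather than pattern-match bytes. This proves nothing about authenticity and can misfire, so treat it purely as an illustration of where the metadata lives.

```python
from pathlib import Path


def looks_like_c2pa(data: bytes) -> bool:
    """Heuristic check: does this byte stream appear to contain a
    C2PA manifest? Looks for the JUMBF superbox type ('jumb') and
    the C2PA manifest-store label ('c2pa'). Not a real verifier."""
    return b"jumb" in data and b"c2pa" in data


def file_looks_like_c2pa(path: str) -> bool:
    # Read the whole file; fine for images, wasteful for large video.
    return looks_like_c2pa(Path(path).read_bytes())
```

A signed manifest can also be stripped or re-encoded away, which is exactly why invisible watermarks like SynthID and platform-side detection complement metadata-based credentials.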
