Reports have emerged of Google’s generative AI chatbot providing strangers with users’ real phone numbers, resulting in a surge of unwanted calls and messages.
In one incident reported on Reddit, a user's phone was inundated with calls from strangers seeking various services, all misdirected by Google's AI. Similarly, an Israeli software developer began receiving WhatsApp messages after Google's chatbot Gemini gave out incorrect customer service instructions that included his personal phone number.
Experts attribute these incidents to the presence of personally identifiable information (PII) in training data, although the exact mechanism behind the exposure of real phone numbers remains unclear. Whatever the cause, the outcome is a significant concern for those affected, who have few options to make the calls stop.
According to DeleteMe, a company that specializes in removing personal information from the internet, inquiries about generative AI have increased by 400% over the last seven months. Notably, 55% of concerns reference ChatGPT, 20% reference Gemini, and 15% reference Claude.
These incidents underscore the need for increased vigilance and regulation around the use of generative AI and its potential impact on personal privacy.
Photo by Văn Nguyễn Hoàng on Pexels
