The Federal Trade Commission (FTC) is intensifying its focus on the potential effects of AI chatbots on children and teenagers, launching an investigation into seven companies: OpenAI, Meta and its subsidiary Instagram, Snap, xAI, Alphabet (Google), and Character.AI. Each has been ordered to furnish the FTC with detailed information about its chatbot safety protocols, monetization strategies, and efforts to safeguard young users.

The inquiry is driven by mounting concerns over online child safety and the risks inherent in AI chatbots that mimic human interaction. The agency seeks to determine whether these companies are complying with consumer protection laws and adequately protecting vulnerable users from harm. The investigation will likely explore issues such as data privacy, age verification, and the potential for manipulation or exploitation.