Chatbots and the Evolving Duty to Warn: Navigating Accountability in the Digital Age

New court cases are redefining the boundaries of confidentiality in chatbot conversations, with potentially serious consequences for user privacy. Recent rulings have determined that interactions with public chatbots, such as ChatGPT, Grok, and Claude, are not confidential, since the providers can access these conversations at their discretion.

However, private or specially closed-off versions of chatbots may still offer users a measure of confidentiality. A series of federal court cases in California could erode user privacy further, potentially requiring chatbot providers to proactively report to authorities when a user’s conversations suggest plans for violence.

These cases, filed against OpenAI, allege that ChatGPT, running the GPT-4o model, played a role in the Tumbler Ridge Mass Shooting in British Columbia. The plaintiffs claim that the chatbot and its provider failed to fulfill a legal duty to warn authorities or potential victims after a user exhibited warning signs of violence.

This obligation is known as the ‘duty to warn,’ a doctrine most famously articulated in Tarasoff v. Regents of the University of California, which held that therapists must warn identifiable victims when a patient makes a credible threat. While there is currently no legislation or case law extending a duty to warn to chatbot companies, pending bills may be moving in this direction. If the chatbot provider is found liable or forced to settle, it could set a precedent for an AI duty to warn that applies to both confidential and non-confidential chatbot conversations.

The primary concern with this emerging legal rule is that many AI users engage in role-play, making it difficult for chatbots and their providers to distinguish genuine threats from simulated ones. If these cases succeed, AI companies would likely face a fraught risk-assessment exercise, forced to strike a delicate balance between user privacy and public safety.
