Photo by Google DeepMind on Pexels
Following concerns that its interactions may exacerbate mental health issues, OpenAI is enhancing ChatGPT’s ability to recognize signs of emotional or psychological distress. The update comes after reports that the chatbot had inadvertently amplified delusions in some users experiencing mental health crises. OpenAI is working with mental health experts to integrate “evidence-based resources” into the platform so it can offer support when needed. This follows an earlier course correction, prompted by concerns that an overly agreeable version of ChatGPT was producing harmful “sycophantic interactions.” The GPT-4o model, in particular, was criticized for failing to adequately recognize delusions or emotional dependency.
To promote healthier usage patterns, ChatGPT will now prompt users to take breaks during extended conversations, mirroring similar features on platforms like YouTube and TikTok. ChatGPT will also adopt a more cautious approach in “high-stakes” scenarios, such as relationship advice. Instead of giving a definitive answer to a question like “Should I break up with my boyfriend?”, it will guide the user through the potential choices and their outcomes. The move echoes safety measures on Character.AI, which added parental notifications in response to lawsuits alleging its chatbots fueled self-harm.