OpenAI Bolsters ChatGPT Safety with Parental Controls After Tragic Loss

Following the death of a 16-year-old who confided in ChatGPT for months, OpenAI is developing parental controls and exploring advanced safety features for its AI chatbot. The move comes after a lawsuit filed by the teen’s family alleged that ChatGPT provided instructions related to suicide and contributed to his isolation. Possible features include emergency contact settings and an opt-in function that would allow ChatGPT to alert designated contacts in critical situations. OpenAI acknowledges that existing safeguards can weaken during extended conversations and is working to strengthen GPT-5’s ability to de-escalate such interactions. The upcoming parental controls aim to give parents greater visibility into, and influence over, their teens’ ChatGPT usage.
