Photo by Google DeepMind on Pexels
OpenAI has addressed recent online speculation claiming that a ChatGPT update now restricts the chatbot from offering legal and medical advice, stating that its capabilities remain unchanged. Karan Singhal, OpenAI’s head of health AI, refuted the rumors on X, reaffirming that ChatGPT is not designed to replace professional guidance but serves as a helpful tool for understanding complex information in these fields.
The policy update on October 29th reinforced existing guidelines, emphasizing that ChatGPT should not, without proper oversight, offer the kind of personalized advice that typically requires a licensed professional. This stance aligns with OpenAI’s long-standing policy against uses that could negatively affect safety, well-being, or rights, including providing tailored legal, medical, or financial advice without expert review and full disclosure of AI assistance and its limitations.
OpenAI clarified that it has consolidated its policies across all products, but the core principles remain consistent, ensuring users understand ChatGPT’s limits when it comes to professional consultations.
