AI Therapy: ChatGPT Use Sparks Betrayal and Ethical Concerns Among Patients

The growing use of ChatGPT by therapists has ignited a debate over patient trust and data security in the mental health field. Clients increasingly report feeling betrayed upon discovering that their therapists are using the chatbot to formulate responses, a discovery that can fracture the vital therapeutic relationship. In one such instance, a patient named Declan inadvertently saw his therapist using ChatGPT during a virtual session.

While some research indicates that AI can generate ostensibly helpful responses, patient satisfaction tends to drop when AI involvement is suspected. Experts stress the need for openness and explicit disclosure when integrating AI into therapeutic practice, since a perceived lack of authenticity on the therapist's part can breed mistrust and anxiety among patients.

Beyond these ethical considerations, data privacy is a significant concern. General-purpose AI chatbots such as ChatGPT are typically not HIPAA-compliant, exposing sensitive patient data to potential breaches. Although specialized AI tools tailored for therapists are beginning to emerge, many professionals remain wary of recording entire sessions because of the privacy risks.

Relying on large language models (LLMs) also carries a risk of harm if the AI reinforces biases or offers unqualified validation. The American Counseling Association advises against using AI for mental health diagnoses, and a practicing psychiatrist noted that while ChatGPT can simulate therapeutic responses, it often lacks the capacity for in-depth analysis and for constructing comprehensive narratives.

As therapists explore AI-driven tools, they must weigh the potential advantages against the fundamental needs and well-being of their patients. The integration of AI into therapy raises critical questions about data privacy, authenticity, and the future of the therapist-patient relationship.

This article is based on reporting from The Algorithm, an AI-focused newsletter published by MIT Technology Review. Reports of patients’ experiences have appeared online, including on platforms such as Reddit, where individuals like ‘Hope’ have expressed profound feelings of betrayal after receiving messages from therapists suspected of using AI.