Photo by cottonbro studio on Pexels
The increasing use of AI, particularly models like ChatGPT, by therapists during sessions is raising significant ethical questions and threatening patient trust. A recent report details instances where therapists have secretly employed these tools, prompting concerns about transparency and the potential impact on therapeutic relationships. One patient recounted witnessing their therapist inputting their thoughts into ChatGPT in real-time, followed by the therapist relaying the AI-generated responses.
Laurie Clarke, who first reported on the issue, stresses the critical need for transparency. She argues that therapists must disclose their use of AI to patients to maintain trust. While AI offers potential advantages, such as streamlining administrative tasks like note-taking, its application for treatment advice without proper oversight or disclosure poses risks. Professional organizations caution against using AI for diagnostic purposes, and some states are considering stricter regulations. The debate also includes concerns that tech companies may overstate AI's therapeutic capabilities and that people may form unhealthy attachments to these AI products. Experts emphasize that therapists should challenge and understand their patients, which ChatGPT does not do.