Americans are increasingly turning to AI for help with relationships and other difficult situations, as a no-cost alternative to a professional therapist. They value the feedback and are willing to share whatever they’re thinking without worrying about being judged. Darius Tahir reports for KFF Health News on the dangers of relying on AI for therapy.
More Americans need mental health care, and many of them like having a therapist that is not a human being. They enjoy talking to a bot, in part because the bot is always available and costs relatively little compared to a live therapist.
And, in fairness, plenty of human therapists are likely no better than a bot. So it’s not surprising that OpenAI reports that as many as one in ten of its users, some 80 million people, use ChatGPT for therapy or emotional support. Sam Altman says that as many as 1,500 people a week may talk to the bot about suicide.
But there are serious concerns about relying on AI for therapy. One major concern is the likelihood that an AI product will delude people seeking help, flattering them or telling them what they want to hear, which could end up hurting them. Some say the bots have advised people to hurt themselves, and some users have ended up hospitalized as a result.
To be clear, there are few, if any, consumer protections governing how these services are marketed or the quality of the services themselves. There aren’t even privacy protections. Companies can and do use the word “therapy” cavalierly. But there’s no evidence their products are effective, and it’s not even clear how to test their efficacy.