Future of AI Mental Health

How we got here

In 2017, the first major innovation in AI mental health support arrived with the creation of Woebot, an unassuming chatbot character that would pop up on your phone to ask questions about your life and offer pre-programmed support suggestions.

The release of ChatGPT in 2022 was a major leap forward, and since then there has been much more widespread experimentation with using AI for mental health support. Beyond that, additional AI “companion” chatbots are now being developed that take on different personalities and identities, some of which claim to offer mental health support.

Most major mental health organizations (the American Psychological Association, the American Psychiatric Association, etc.) have developed position statements on the use of AI for diagnosing and treating mental health conditions, all of which clarify that no AI has been approved for providing professional services.

Despite that, large numbers of people are using the technology for a range of mental health purposes, with some saying ChatGPT or Claude is their “therapist.” Teens and young adults appear to be the biggest users of the technology in this way.

How AI will replace human therapists

I do not actually think AI will ever fully replace therapists or psychotherapy, our current mental health technology. However, I do think that at some point, possibly in my lifetime (I’m in my 40s), therapists will become less necessary because of AI.

My view is that the turning point will be when AI becomes predictive in daily life. That is, based on a trove of personalized biographical information, real-time health data, and a communication network of other AIs, it will start to predict things for users with relative accuracy: daily personal conflicts and stresses, accidents and inconveniences, bad decisions, flawed thought processes, dysregulated moods, and other events great and small. It could also automatically take steps on your behalf to manage these things.

If AI can reliably do that, then it will prevent many mental health problems from beginning or worsening. If it becomes so sophisticated that it has better intuition and judgment than the average person, many of the current causes of persistent mental health issues could improve simply by weighing AI advice more heavily in personal decisions. Once this is possible, AI could become the first line of treatment for most people instead of psychotherapy.

Think of it this way: therapy relies only on the information immediately available to the two people in the room. They make rough estimates of what the future holds, rely on biased assumptions and flawed memories about the causal factors of past events, apply theoretical beliefs about how to fix the client’s problems, and do all of this within a relational, subjective, narrative process visited once a week. AI will someday exceed these structural limitations, which psychotherapy can never escape.

At that point, psychologists will still be needed for complex situations and for other jobs that will emerge, including those in AI mental health, but providers doing full days of talk therapy as it exists now will become far less common and less necessary. Ultimately, psychotherapy would fade out with whatever generation was last told that human-based treatment is the best way to improve mental health. After that, it will comfortably settle into being complementary medicine to the next dominant paradigm, like all things that came before it.



CONNECT

willmeekphd@gmail.com

Disclaimer & Disclosure: The owner of this site has received no compensation from, and does not claim to represent, any company, entity, or institution (i.e., no one is paying me to write or advocate for anything on this site). Everything on this site is a good faith effort to improve the safety and effectiveness of AI mental health support, specifically for people who are freely choosing to experiment with this technology to improve their mental health. This site and its owner are not responsible for any negative outcomes, misinterpretations, or other problems related to this technology or the information on this website. To say it one more time: this is an experimental technology, and users should proceed at their own risk.