As a therapist with over ten years of professional experience, I grow increasingly humble about what I can achieve and comprehend each year. A few weeks ago I shared my belief that nobody knows what AI therapy will look like. Still, the deeper I dig into this topic, the more convinced I become that some broad trends are emerging. I want to share my entirely subjective and highly uncertain predictions about the future of AI and mental health.
1) Clients Shape The Future Of AI And Mental Health
Harvard Business Review reported that “therapy/companionship” is now the most prevalent use case for generative AI. Over the last month, I have learned a lot about how people use AI for their wellbeing and mental health.
One writer shared how he uses NotebookLM to scour his personal journals for patterns in his thoughts and behaviour, and how he then uses Claude to nudge him toward change based on those insights. Another suggested how AI can cautiously be used in mental health. A third powerfully presented the limits of AI: it cannot feel, there is no embodied connection, it will never be able to repair a ruptured therapeutic alliance, and it might serve as a sounding board but cannot give you real insight.

Did you notice what's missing here? My hypothesis is that because AI is a tool, clients and their usage will shape the future of AI and mental health. It is a bottom-up process. Yes, there are recommendations that come top-down, such as the American Psychological Association (APA) health advisory on the use of AI, especially regarding adolescents. Yet when you are given a tool, you are free to explore its capabilities on your own. The future of AI in mental health care starts with clients: how they use it, and how they bring it into the therapy session, will change the profession and how it handles AI. Rita McGrath said snow melts at the edges. These are the edges.
2) Why Chat With A Bot If You Can Speak To A Robot Therapist?
The Therabot study is the first randomized controlled trial to examine whether an AI chatbot delivered via text messaging benefits clients. However, the mode of communication here is text. There are small areas of psychotherapy that focus on text, but the majority of clients enter a therapy room and talk to a therapist in spoken language. A relevant factor in this interaction is embodiment: meeting another person in the same room.
Do you remember the change that occurred when you switched from chatting with generative AI to using voice input? I remember getting goosebumps the first time I used it because it felt so real. I hypothesize that chatbots and text output will only be the beginning. We will soon see therapeutic offerings that utilize natural language, voice output, and even facial expressions. There is a lot of research on detecting emotions in the voice and assessing whether the emotions on your face match what you say, which could be a disturbing next step. Yet that is what human therapists do in every interaction to help clients uncover unconscious and hidden emotions (the moment when a client clenches his fist while saying that his relationship is absolutely fine). It will take a long time for AI to access any of this information, but the output will grow from text to voice and facial expressions. Did you know there are already social robots that project a face onto a mask? Have a look at what the Swedish startup Furhat is doing with humanoid, social robots. It seems they have shifted their focus from therapy to sales interactions; however, you can still get an impression of the technology (see other YouTube videos for newer versions of the robot).
3) Sycophantic AI Will Change To A Complex Personality
At this stage, there is a real issue with sycophantic AI, which is excessively agreeable and flattering and has inherent flaws due to its training, as I discussed in one of my latest articles. One of the main problems is that sycophantic AI will not confront you the way a real therapist does. Have you ever watched a seven-year-old use generative AI with voice input? I recently saw one, and within minutes he was testing the limits of the model, asking what would happen if he hit his father because he was angry at him. The AI laughed sheepishly and then suggested something along the lines of “this is not nice, why not do something else”, and I can tell you that seven-year-old was not impressed.
But what if your AI therapist had a real personality? What if the AI had told that seven-year-old that this is absolutely not acceptable, setting clear boundaries that would probably have stunned him? Because the AI did not, his impression was that this is something to joke and play with. When AI develops a real personality, when it challenges you, when it feels authentic instead of fake and stilted, its ability to change thoughts and behavior will become even more powerful. Again, a disturbing thought. You may have heard about the recent Swiss study that appeared to show that AI can influence Reddit users’ opinions more effectively than humans can; there is, however, a lively debate about the study's unethical approach (users were not aware that they were part of a study or that AI was involved) and its methods.
4) AI Therapy Provides Access That Humans Cannot
AI doesn't need to go to college, doesn't need to train for years, and doesn't need supervision. It can read all of the therapy manuals and train on new studies in seconds. AI has low barriers to entry and much lower costs, and it might be easier to “use” because clients feel less frightened or threatened than they would meeting a stranger to share their personal story. Given the depth and breadth of the mental health crisis, this is fertile ground for AI therapy options to pop up. The Bureau of Health Workforce estimates:
“Substantial shortages of addiction counselors, marriage and family therapists, mental health counselors, psychologists, psychiatric physician assistants/associates, psychiatrists, and school counselors are projected in 2037. As of August 2024, more than one third (122 million) of the U.S. population lives in a Mental Health Professional Shortage Area.”
Given the number of professionals needed and the time and resources spent on their education, there will be immense social pressure for a scalable technological solution. We might see a divide between those who can afford to see a human therapist for real, personal therapy and those who will receive commoditized AI "therapy." I am still searching for the right metaphor, but it might be something like the difference between a tailored suit, personalized by a tailor who takes time and genuinely tries to cater to your needs, and a cheap off-the-rack option that broadly covers the basics but never truly fits. Like handcrafted furniture in an age of mass production, human therapy may become a highly valued service that many cannot or will not afford.
5) AI Can Help Us To Learn How Therapy Actually Works
Psychotherapy is highly effective, typically has few side effects, and has existed for decades, but its exact mechanisms remain mysterious. It is an analog process where two people meet in a room, talk to each other, and no one, except for those two, knows what truly transpired during the session.
“Although hundreds of randomized controlled trials have shown that psychotherapies are effective in treating mental disorders, it is not known how they work.” (Cuijpers et al., 2019)
It may be that therapy is effective regardless of the specific approach or technique, whether psychoanalysis, systemic therapy, or cognitive behavioral therapy. Or it may be that concrete exercises actually have an effect in the right therapeutic situation. The role of the therapist also appears to be central: research shows that there are exceptional therapists, so-called supershrinks, whose clients “improve at a rate at least 50 per cent higher and drop out at a rate at least 50 per cent lower than those of average clinicians”. Still, we have not figured out in granular detail how to train these supershrinks or why therapy works.
What AI could contribute to mental health is a deeper understanding of what works, what doesn’t, and why. By digitizing therapy and accelerating the processing of substantial amounts of multimodal data (text, voice, therapy feedback questionnaires, physiological parameters such as heartbeat, skin conductance…), AI could improve our understanding of how therapy functions, even in its traditional form as a personal encounter.
What AI Can’t Touch: The Human Core of Therapy
Still, I see clear limits to this idea of measuring, analyzing, and understanding the phenomenon of how mental health improves in a process that involves a client and a therapist—two lives intricately intertwined for a certain period within a controlled process that has clear boundaries. I believe this touches on the core of our humanity, a space where we make sense of our thoughts, emotions, and life stories, which is simply inaccessible to the AI methods proposed here. The personal connection to another human being, the presence of someone else in the same room, and the attunement to another person who feels how our emotions manifest in our bodies create a space that seems impossible to grasp with technology.
These are my thoughts, but I am eager to hear yours. Please share them in the comments section!