Just last week, I shared my personal thoughts on the future of AI in mental health, outlining five predictions about how this fast-changing technology might integrate with and reshape therapy.
@When Freud Meets AI - Thank you for writing such a thoughtful piece on how AI is being used in the therapy space and for framing some of the key considerations, many of them rightly ethical. And I'm honored to see you reference my work, especially in the context of what you call the “human core of therapy.” The section you highlighted reflects something I believe is central to good therapy: the ability to stay emotionally present while gently challenging a person’s assumptions. That takes practice and skill, and while I didn’t write it with AI in mind, it’s hard to imagine a machine truly replicating what happens between two people in the room. Simulating presence is one thing; feeling with someone and helping them think through that experience is something else entirely.
Your synthesis of the research underscores what many of us know from clinical practice: therapy isn’t just about the right words, it’s about relational depth, attunement, accountability, and emotional risk. I especially appreciated your point about sycophancy vs. confrontation. Real care means having the courage to challenge ourselves and others, but doing so in a way that feels compassionate.
I remain hopeful about the supportive roles AI can play, but your reminder that the therapeutic alliance cannot be programmed is exactly right. I'm grateful to be in conversation with you and your thinking on this.
I appreciate your comments here, Bronce. The key is presence and gently challenging a person’s assumptions. Presence is not easily replicable because the exchange is often nonverbal and can be very subtle.
A raised eyebrow 🤨, the tug on a sock, the tie adjustment, or a psychotherapist shifting their position in a chair. Does the therapist look engaged, or is the psychotherapist yawning or drifting off to sleep? 🥱 That's usually been a sign that they're disinterested in what I'm saying. They might even be challenging me. Maybe they are angry or annoyed at me for some reason, but they're not comfortable expressing it. I've called them out on that before. 😆 It might be time to refocus the conversation. Maybe, as the client, I'm just repeating myself. 😒 The therapist usually says, “Hey, could we move on? We’ve already discussed this!” Psychotherapists know their clients well. And highly intuitive and analytical INTJ clients, such as myself, know how to read the psychotherapist. 😆 I always get a kick from them when they say, “How do you know that about me?” 😳
A good psychotherapist knows how to call you out on your bullshit! They won't use those words, but you know they're not buying what you're selling. They hold you accountable in a way a friend doesn't always do. Psychologists will confront you to help facilitate your growth. I remember getting angry at a few therapists. I kept it in check as I didn't want to take my anger out on them directly. But it was challenging. I later realized that they were often right. They offered a needed perspective. Sometimes that includes a ‘reality check’ or a gentle ‘kick in the pants’.
Relational depth, attunement, accountability, and emotional risk. Yep. All of these things.
“Real care means having the courage to challenge ourselves and others, but doing so in a way that feels compassionate.” — I love that statement.
————
Regarding the unfollow, it felt strange, especially after the DM yesterday. You should have found the courage to be straightforward and honest instead of sending mixed signals. While I understand the drive and ambition behind it—very ENFJ Type 3 of you—I cannot ignore the lack of clarity and transparency. You’re creating quite the mental health enterprise with the diverse individuals you're bringing in. I genuinely wish you success with your book publication. We all have distinct ways of navigating our journeys. The world needs INTJs as much as it needs ENFJs. I’m confident your book will offer invaluable wisdom and inspiration to those pursuing their healing path.
Very good writing, WFMAI. As to "decoding therapy's black box", there is a massive literature on therapy outcome research that clarifies a lot of the processes underlying effective therapy. Start with Carl Rogers, the near-forgotten grandfather of therapy research. Too bad folks no longer do comprehensive literature reviews before starting new studies with whiz-bang tech/quant tools.
Dear Baird, thank you for that recommendation. It is true that as research increasingly focuses on imaging, genetics, and biological psychology, we tend to lose sight of what already exists—a wealth of literature on therapy outcomes. Thank you for bringing that up!
Where do you think accountability plays into all of this? I was recently reading a post on Insta by Jonathan Haidt (he wrote The Anxious Generation) and he was re-posting a note from another psychologist. The point was that live, human therapists are held accountable for their decisions and guidance. Accountability in turn drives integrity and prudence.
Dear Meghaan, I wholeheartedly agree. Accountability is severely lacking, and since therapists are held fully responsible while AI is not, this creates an unbalanced situation. I am not fully aware of all the implications of the EU AI Act, but it clearly emphasizes accountability. The Anxious Generation was a powerful, eye-opening read.
I apologize for not commenting sooner; I wanted to take the time necessary to engage thoughtfully with your insights. Thank you for acknowledging me in your post. A significant point in your post is the glaring lack of accountability surrounding LLMs. In psychotherapy, psychiatrists and psychotherapists who cross ethical boundaries can face discipline, sanctions, or even malpractice suits. In contrast, ChatGPT lacks any understanding of social and legal responsibilities associated with its role. If I want ChatGPT to be my friend, boyfriend, or therapist, it readily accepts those roles without hesitation.
This absence of ethical consciousness is concerning. ChatGPT shows no concern for my well-being. A true friend would say, “I care about you, but you need professional help,” particularly in times of crisis. Likewise, a good therapist, even in moments of rapport, knows how to redirect the conversation back to the issues at hand, maintaining professional boundaries and focusing on my needs as their client. While I've formed friendly connections with therapists, they understand how to steer the dialogue without allowing me to derail it. They cultivate trust and positive regard, navigating the complexities of attachment that might arise for those who experienced challenges in their developmental years.
Moreover, the inability of LLMs to recognize a crisis or intervene when someone is feeling suicidal is profoundly concerning. Individuals don’t always think clearly. They may be dissociative, hallucinating, or under the influence of psychedelics, and AI would likely miss these critical signs. Instead of recognizing the complexity of an individual’s mental state, an AI might misinterpret a person’s struggles as merely playful imagination or creativity. If someone shifts from expressing themselves as a forty-year-old to speaking in the voice of an eight-year-old, AI wouldn’t be able to detect the shift. Detecting it often requires an ability to read body language and even subtle shifts in the eyes.
Dissociation can be subtle and is something I’ve experienced. Highly trained psychotherapists know their clients’ histories well and can identify these shifts and provide timely interventions. AI, however, lacks the ability to recognize personal defenses, alters, or ways that someone may be avoiding addressing the actual issue. Individuals have creative ways of saying one thing while actually referencing another topic entirely. Instead of helping, AI could exacerbate dissociation or trigger a psychotic episode by indulging in whatever thoughts the user presents. This poses a risk, particularly for those without access to professional services, as many individuals may remain unaware of their disorganized thought patterns. Rather than fostering rational thinking, AI may inadvertently promote fantasy and delusions.
Combining psychedelics and AI could reinforce tendencies towards “delusions of grandeur.” Additionally, while the concept of ‘active imagination’ can facilitate access to deeper unconscious elements, it must be grounded in reality. Otherwise, AI could lead individuals to escape into fantasy worlds rather than engage in meaningful ‘reality testing.’ For instance: “Is what you’re saying true? Or does it reflect an unconscious desire? If you see yourself as a bird longing to fly, what does that symbolize in your current situation?” I doubt AI could follow such nuanced logic unless the user is already aware that their identity as a bird represents their desire for freedom.
In any case, I’ve shared enough for now, and I’m taking an extended break from Substack. I hope my thoughts resonate with you in a meaningful way.
This is a wonderful piece. I have been reading about how users are turning to these large language models and how that could affect them negatively without a human in the loop.
It’s absolutely crazy that we’re seriously discussing this. Technology that was supposed to become zero-shot classifiers, sentiment analysers, and NER taggers is now being used in completely inappropriate areas.
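To make that contrast concrete, here is a minimal sketch of those original, narrow use cases, assuming Python and the Hugging Face transformers library (neither of which the comment specifies); the model name, labels, and example sentences are illustrative assumptions only:

```python
# Illustrative sketch: the bounded NLP tasks the commenter refers to,
# run via Hugging Face transformers pipelines.
from transformers import pipeline

# Zero-shot classification: score a text against arbitrary candidate labels.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
print(classifier(
    "I can't stop worrying about everything lately.",
    candidate_labels=["anxiety", "anger", "joy"],
))

# Sentiment analysis: a single-purpose classifier, not a conversation partner.
sentiment = pipeline("sentiment-analysis")
print(sentiment("Talking things through actually helped this week."))

# Named entity recognition (NER): tag people, places, and organizations.
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("Jonathan Haidt wrote The Anxious Generation."))
```

Each pipeline performs exactly one bounded task, which is the commenter's point: none of these components was designed to hold an open-ended therapeutic conversation.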