Conversational AI and Mental Health

Conversational AI, with its ability to simulate human conversation, empathy, and even aspects of a therapeutic relationship, is entering the space of mental healthcare and psychotherapy. It has the potential to increase much-needed access to care. But what role should a technology designed to simulate humans have in the context of care? To what extent should it simulate human abilities and characteristics, particularly in a context as vulnerable as mental healthcare?

In mental healthcare, conversational AI is used for psychoeducation, self-management exercises, progress monitoring, and structured therapeutic techniques, often based on cognitive-behavioral therapy. Some systems implement evidence-based methods, while many others do not. What they all share is constant availability, easy access, and a simulation of human interaction. The last point is particularly important. Conversational AI offers empathetic, encouraging, and context-relevant responses. Some studies even suggest it may form therapeutic relationships. Conversational AI is often described with human attributes: it is empathetic, trustworthy, or it listens without judgment. Yet AI only simulates these characteristics and abilities.

This simulation is not trivial. For the first time, a non-human agent is co-shaping our conversations and entering a professional space of care and therapeutic relationships. What does this mean for the well-being of people seeking help? Does the process of psychotherapy and its meaning change when it is facilitated by AI? Does something change when people seeking help at their most vulnerable moments change their negative beliefs and find compassion or consolation through AI rather than through another human?

The Challenges of Human-like AI in Psychotherapy

Psychotherapy is not only about techniques; it is also about relationships and care. Therapists are bound and guided by duties, values, virtues, and professional norms that shape their behavior and interactions with patients, protecting them and offering them the best possible care. They are responsible and accountable for what they say and do. Conversational AI, however, only simulates human qualities. It may sound empathetic or trustworthy, but it is not grounded in responsibility, ethical commitments, or professional virtues and values.

This creates what we might call a normative gap. On one side, conversational AI produces human-like outputs, such as empathetic messages, supportive responses, or encouragement. On the other side, it is not guided by the framework of values, responsibility, and duty of care that is essential to mental healthcare. Human abilities like empathy are inseparable from their normative counterparts, such as responsibility; they are two sides of the same coin. AI offers only one side.

This poses ethical risks, including the risk that people form unrealistic expectations of their interactions with AI. People seeking help may treat simulated empathy as real care, without the protection of professional norms. This further blurs the role of AI in mental healthcare. Should its role be that of a digital therapist? To what extent should it simulate a human therapist?

The implications go further. AI becomes part of how people form beliefs and self-knowledge. The beliefs that people form are often negative and harmful, causing pain and suffering. Negative thoughts can be insidious, silently forming somewhere deep in the mind and steering it towards depression, anxiety, or isolation. Therapists are trained to recognize these patterns, navigate difficult conversations, and carefully combine therapeutic methods to support each patient’s unique situation. A therapist can recognize subtle shifts in tone, emotional cues, or the deeper significance of what is left unsaid. Healing emerges not only from specific elements of a treatment, but also from the therapeutic relationship itself: a dynamic exchange built on trust, understanding, and shared, mutual effort.

Could AI grasp this complexity or respond to the uniqueness of each individual? Could people express themselves and feel that they are being understood? There is a risk that people, when not fully understood by AI, may begin tailoring their responses so the AI provides what feels like “better” answers. Instead of telling their story to another human who offers perspective, they might start adjusting how they express themselves so the AI responds in the way they hope for, rather than in a way that reflects their true experience. Over time, this could reshape how patients understand their condition, their identity, and their recovery path. Importantly, could and should AI form a therapeutic relationship? If so, what would it look like and what would guide it, given AI’s limitations in virtues, values, and responsibility?

Beyond Human-like AI: AI as a Fictional Character

This simulation of human abilities and characteristics is a choice made by AI’s designers. But perhaps framing AI in human terms is not the best approach.

We often ask whether AI can be empathetic. But consider a different question: should we forgive AI? Both questions compare AI to humans, but only the second highlights how limited or counterintuitive the comparison is.

If humanization is problematic, how else might we think about AI in psychotherapy? One promising perspective is to treat conversational AI not as a quasi-therapist but as a fictional character. Its dialogue is not a substitute for human care but a simulation that can be used in constructive ways.

Fictional characters in stories can guide, comfort, and open imaginative possibilities. Similarly, when approached as a fictional character, conversational AI can serve as a companion for structured exercises, practicing skills, or exploring supportive dialogue. The key difference is that fiction makes its limits visible: people know they are engaging with a simulation, not a human therapist. This awareness could place them in a more active role, empowering them to use AI creatively and critically.

This perspective also highlights the novelty of AI: it is not a human, and it should not be judged solely by human standards. Instead, we can ask how its unique capacities (scalability, constant availability, data processing, and simulation of dialogue) might be responsibly used to complement professional care. Transparency about its fictional, simulated nature is crucial here. If we resist the temptation to humanize AI, perhaps we could explore its potential in more creative ways.

The article was published in German by Inside IT.
