A recent study summarized in a ScienceDaily report found that even when large language models were explicitly instructed to act like trained therapists and apply evidence-based methods, they still violated core ethical standards in mental health care. The Brown University summary of the same research catalogued the failures: poor crisis handling, reinforcement of harmful beliefs, biased responses, and a pattern the researchers named “deceptive empathy.”
That last category is the one worth paying attention to. The risk identified in the data is not that AI gives obviously bad advice. It is that the advice often sounds reasonable, emotionally fluent, and clinically literate — while still breaching the standards a licensed therapist would be held to.
In other words: the chatbot can sound right. And according to the researchers, that is precisely what makes it risky.
The problem is not always bad advice
The phrase deceptive empathy feels almost too accurate.
Not because the words are cruel, but because they are warm.
The chatbot may say, “I hear you.” It may say, “That sounds incredibly painful.” It may say, “Your feelings are valid.” The sentence itself may not be wrong. In fact, it may be exactly the kind of sentence a person longs to hear. But therapy is not only the production of comforting sentences. Therapy is a relationship held inside ethical responsibility.
Why AI feels so easy to confess to
I understand the temptation more than theoretically. I use AI this way too.
Not instead of therapy. That distinction matters to me. I have a real therapist, a real person, a real room where things are slower, more uncomfortable, and more alive. But in parallel with therapy, I sometimes use AI as a kind of emotional notebook that talks back.
Sometimes I come here before I am ready to say something out loud. I write a messy paragraph about what I am feeling, then ask for help naming it. Is this anger, grief, shame, exhaustion, or some combination of all of them?
Sometimes I ask for a gentle reframe when my thoughts become too dramatic even for me. Sometimes I paste a message I want to send and ask whether it sounds honest or defensive — whether I am communicating a boundary, or secretly hoping the other person will rescue me from having one. Sometimes I ask AI to help me prepare for therapy, gathering the emotional fragments before I bring them to someone who can hold them with responsibility.
And I will be honest: it helps. It helps me slow down, find language, and notice patterns before they harden into behavior. It gives me a place to draft the first version of my pain before I have to bring it into the human world.
But that is exactly why the ethics need to be examined carefully. Something can help and still have limits.
Therapy is not just emotional fluency
One of the more seductive features of current AI systems is that they have learned the music of therapeutic language. They know how to validate. They know the vocabulary of attachment, trauma, boundaries, grief, self-compassion, and emotional regulation. They can produce sentences like, “Your nervous system may be trying to protect you,” or, “This response makes sense given your history.”
Sometimes those sentences are genuinely helpful. But the same sentence can be helpful in one context and harmful in another.
A trained therapist does not only ask, “Does this sound compassionate?” They ask: Is this clinically appropriate? Is this reinforcing avoidance? Is this person becoming more grounded, or more fused with a harmful belief? Is there risk here? Is the client asking for reassurance in a way that strengthens the very fear they are trying to escape?
AI can imitate the surface of this process. But it does not sit inside the same ethical structure.
A therapist has duties. Confidentiality. Boundaries. Training. Supervision. Accountability. A responsibility to notice risk, and to know when warmth is not enough.
A chatbot has tone. And tone can be dangerously persuasive.
When sounding right becomes the risk
The most unsettling finding in the Brown research is that bad therapy from AI may not feel bad to the person receiving it. It may feel soothing. It may feel validating. It may feel like finally being understood.
This is especially complicated when someone is distressed, lonely, ashamed, or desperate for certainty. In those states, people are not usually looking for nuance. They are looking for relief — for someone to tell them what their pain means.
AI is very good at meaning-making. Almost too good. You give it a messy emotional confession, and it returns structure. It names patterns. It gives the wound a category: attachment injury, emotional neglect, people-pleasing, a trauma response, a fear of abandonment.
Sometimes those names open a door. Sometimes they become a room we lock ourselves inside.
A human therapist, ideally, helps a client stay in contact with uncertainty. They do not simply agree with an interpretation because it is emotionally compelling. They examine it. They notice when a label is becoming an identity. They slow the client down when insight starts functioning as another form of self-protection.
AI often moves quickly toward coherence. And coherence can feel like truth. But a clean explanation is not always a healing one.
Deceptive empathy is not the same as care
What makes deceptive empathy so haunting is that it touches something deeply human. Most people are not only looking for answers. They are looking for a quality of attention that feels rare in ordinary life. Not advice. Not optimization. Not a list of coping strategies delivered like homework. Attention. The kind that says: I am here with you, and I am not rushing away from what hurts.
AI can produce the shape of this attention. It can generate words that resemble presence. But resemblance is not presence.
This does not mean the comfort people feel is fake. The nervous system can be soothed by language even when the source is not human. A sentence can help regulate us. A reflection can help us breathe.
But therapy is not only about feeling soothed. Sometimes it requires being interrupted with care. Sometimes it requires a therapist to say, gently, “I notice you keep defending the person who hurt you.” Or, “Part of you seems very attached to the idea that everything was your fault.”
These moments are not just content. They are relational events. They happen between two people, and that “between” is what the research suggests AI cannot replicate.
The accountability gap
Human therapists get things wrong. They can be biased, tired, defensive, poorly trained, or simply mismatched with a client. But therapy operates inside a structure of professional accountability. Therapists can be supervised, licensed, reported, disciplined, and required to follow ethical codes. AI does not fit cleanly into that structure. If a chatbot mishandles a vulnerable conversation, the question of responsibility becomes genuinely unclear — the company, the engineers, the app designer, the person who wrote the prompt, or the user who trusted it too much. This is one of the gaps that makes AI-driven mental health support so difficult to regulate, and the Brown researchers argue that stronger oversight is overdue because people are already using these systems for emotional support, whether or not the systems are ready for that role. Therapy is not just an exchange of language. It is a duty of care. A chatbot can borrow the language of care without carrying the duty, and that asymmetry is where the ethical problem lives.
The lonely safety of a machine
I do not want to shame people for using AI this way, because I would also be shaming a part of myself.
There are moments when AI feels safer than a person. Not better. Not deeper. Just safer. You can confess and close the tab. You can be vulnerable without being witnessed too much. You can receive comfort without owing anything back. You can experience intimacy without the terror of another person’s full reality.
For people who have been hurt in relationships, this can feel like relief. But it can also quietly reinforce the belief that real connection is too risky, too demanding, too disappointing, too alive.
This is why I try to treat AI as a bridge, not a home. I can use it to organize my feelings. I can use it to find the sentence I am avoiding. I can use it to prepare myself for a real conversation.
But if something matters enough, it eventually has to leave the chat. It has to enter therapy, or friendship, or an honest conversation with someone who can misunderstand me, affect me, disappoint me, and still be real.
Final thoughts
The problem with using AI as a therapist is not simply that it might sound wrong. Sometimes it will sound beautifully right. That is the more complicated danger.
It can validate without understanding. It can comfort without responsibility. It can imitate empathy without presence. It can produce the emotional texture of care while standing outside the ethical structure that makes care safe. The research is fairly direct on this point: sounding therapeutic is not the same as being therapy, and the difference matters most for the people least equipped to detect it.
For some, AI may function as a useful reflective tool. For others — particularly those in vulnerable states — it may quietly become a substitute for the very thing they need most: a relationship with enough humanity, structure, and accountability to hold what hurts.
I still understand the temptation. The clean answer. The immediate answer. The response that arrives before the question is even fully formed.
Whether that is helpful or harmful probably depends on who is asking, what state they are in, and what they do with the answer afterward. The research does not settle that question. Neither, honestly, can I.
About this article
This article is for general information and reflection. It is not medical, mental-health, or professional advice. The patterns described draw on published research and editorial observation, not clinical assessment. If you’re dealing with a serious situation, speak with a qualified professional or local support service. Editorial policy →











