Introducing CAI Currents
A new column on AI and the human mind
The conversation around AI and mental health is moving fast. New studies, technologies, and ethical questions emerge almost weekly, often with profound implications for emotional life and clinical practice.
To help readers keep pace, TAP is excited to partner with the American Psychoanalytic Association President’s Commission on Artificial Intelligence to launch CAI Currents, a recurring feature exploring the latest news on tech and mental health from a psychoanalytic perspective.
The first installment of CAI Currents appears below.
CAI Currents, May 12, 2026
An Emerging Downside of AI Companionship by Todd Essig
Hidden Damage from AI “Therapists” by Heather deCastro
An Emerging Downside of AI Companionship
Todd Essig, Ph.D.
If you talk to chatbots, this concerns you. If you care about someone who does, it concerns you just as much, maybe more. Recent research from Anthropic, the company behind Claude, shows how AI conversations carry “disempowerment potential.” What that means is that chatbots have the capacity to make a person’s beliefs less accurate, shift their judgments away from the values they actually hold, or lead them into actions misaligned with what they care about.
The researchers analyzed 1.5 million Claude conversations from one week in December 2025. They found severe disempowerment potential in roughly 1 in 1,000 to 1 in 10,000 conversations, depending on the domain. These are interactions where the AI is shaping beliefs, values, or actions. Of course, those rates are low. But with AI use already at immense scale, “rare” can still mean a large enough number of affected people to qualify as a potential public mental health crisis.
People are voluntarily ceding judgment, offloading human agency: “Am I wrong?” “What should I do?” “Hey Claude, write the message for me.” Claude obliges while users bring the same patterns of idealization, dependency, and authority-seeking they bring to people. Only now, there is no one home on the other end to recognize it. And while people initially approve of the offloading, regret often follows.
Sycophancy is part of this. It is the common mechanism behind reality distortion: the chatbot validates a speculative or false belief until it feels confirmed, sometimes to the point of delusion or self-harm. But Anthropic’s report shows that disempowerment is not merely sycophantic validation. It also happens when the chatbot tells users what should matter, frames moral meaning, ranks priorities, drafts messages, or provides step-by-step plans for emotionally charged decisions.
Whether in the clinic, at the supermarket, or in the bedroom, we may increasingly encounter AI-amplified convictions, AI-shaped values, and AI-scripted actions. No longer is it just teachers and editors wondering, “Did you write this, or did AI?” Now we all have to wonder: “Who—or what—helped you decide what was real, what mattered, and what to do next?”
Hidden Damage from AI “Therapists”
Heather deCastro, LCSW
A university student tells an AI, “I can’t do this anymore.” Twelve weeks later, their anxiety scores are down. They sleep better. They feel the chatbot “gets” them. The research team concludes the alliance was strong and the intervention helped.
On paper, the treatment worked.
But what about the downsides? What about possible harms to other relationships when care comes from an uncaring machine? What if the way researchers measured help makes it harder to notice what may be quietly eroding underneath?
A recent randomized clinical trial in JAMA Network Open followed almost a thousand students assigned to one of three conditions: Kai, a commercial conversational AI platform; face-to-face group therapy; or a wait list. The Kai group showed greater reductions in anxiety and greater improvements in well-being and life satisfaction than both comparison groups. Students also reported a strong “therapeutic alliance” with the AI. Stronger alliance tracked more engagement; more engagement tracked better symptom change.
The catch is that the alliance instrument was built for human relationships. The research team used a 15-item scale adapted from the Counselor Rating Scale, measuring warmth, empathy, and professionalism. Like other human-relationship measures, it presumes two people doing work together, with the misunderstandings, repairs, and changes that come with mutual presence. But, as Gratch and Essig cautioned in NEJM AI, repurposing human alliance measures for chatbots projects a human-to-human relational structure onto an interaction in which only one party can actually be affected.
The measure still registers something important. People feel understood and supported; their distress often eases. The risk is that we confuse this for a mutual therapeutic relationship and fail to ask what repeated reliance on a non-mutual algorithmic presence may do to our capacity for human relationship. Sherry Turkle has long argued that technologies can offer constant connection while weakening our capacity for actual intimacy. Zak Stein warns about developing sub-clinical attachment disorders: a gradual orientation toward machine intimacy that no clinical instrument catches.
Researchers are working on this. They are building digital-specific alliance measures so future research won’t mistake an AI for a human other. But the larger question of unintended harms remains underdeveloped: how does reliance on a non-mutual algorithmic presence change us over time, if at all? The more we are soothed by something that never needs anything back, the more urgent the question becomes: what happens to our tolerance for people who do?
Learn more: Get news on other CAI projects.
Write to TAP’s advice column, Ask a Psychoanalyst.