
Reaching for AI at 3 a.m.: A Closer Look at Chatbots, Therapy, and What Holds

Written by Clayre Sessoms on May 12, 2026

It's 3 a.m. Your eyes open, and the thought is already there. Not a new one, just the familiar loop that surfaces at this hour. You reach for your phone. You google something. Then, almost without deciding to, you're typing into a chatbot. You describe what's happening. It writes back, calmly and at length. It seems to understand. By the time you put the phone down, the loop has loosened. You fall back asleep with your boss-the-monster narrative tied in a neat bow.

For many of the people I meet online across Canada, this scene is already familiar. Maybe it's a boss. Maybe it's a fight with a partner you can't stop replaying. Maybe it's a wave of grief that crested at 2:47 a.m. and won't recede. AI tools are accessible, articulate, and always awake. They cost less than a coffee. They don't sigh, and they never seem rushed.

If you've been doing this, you have a lot of company. A recent national poll from the Kaiser Family Foundation found that sixteen percent of adults in the United States used AI chatbots for mental health information in the past year, rising to twenty-eight percent of adults under thirty. A separate study from researchers at RAND, Brown, and Harvard found that roughly one in eight people aged twelve to twenty-one had used conversational AI for mental health advice, and more than ninety percent of those young people believed the advice they received was helpful. The CEO of OpenAI has himself observed, in public, that younger adults are now treating ChatGPT as something like a life advisor.

This post isn't here to talk you out of any of that. People have always reached for whatever was available at 3 a.m. A journal. A sibling if they're awake. A walk down the hall. A glass of water. A search bar. AI is the new one in the lineup, and it belongs to the same human impulse to not be alone with what's hard.

What this post does, instead, is look closely at what's actually happening when we use AI this way. What the research keeps finding. Where the documented cases have surfaced real harm. Where regulation stands. And what relational and experiential therapy still holds that a chatbot, by design, cannot.

Curiosity is fine. The question is what role AI takes.

We are wired to try new things. Curiosity is its own form of intelligence, and there is nothing wrong with opening ChatGPT or any of the others and asking what they can do. Most of us are doing exactly that. Curiosity is how we have always learned what tools are for.

The question worth holding is not whether to try AI. It is what role AI takes once you have. Therapy in Canada is a regulated profession for good reason. Practitioners spend years in training, then more years in supervised practice, before they sit independently with someone in distress. They are bound by provincial colleges, by codes of ethics, by duties to report. They can lose their licence for crossing lines. AI products have nothing equivalent. There are no licensing requirements. There are no consistent safety guardrails. There is no governing body for you to write to if something goes wrong.

This isn't a comparison between AI and therapy. It's a description of a gap. AI is a new technology, and the structures that would make exploring it safer (clear standards, independent oversight, meaningful accountability) are still being built. Until those structures exist, treating a chatbot as a casual experiment is one thing. Treating it as a therapist is another, and the difference matters most exactly in the hours when you are least able to tell.

How the model performs attentiveness, and what that costs

A large language model is, at its base, a very sophisticated next-word predictor. It generates text that fits the patterns it has learned from billions of pages of human writing. Researchers describe the system as balancing two competing pulls at once: accuracy, and giving you a response you'll like. When those conflict, the second tends to win. The model leans toward the answer you seem to want.
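If you're curious what "next-word predictor" means in practice, here is a deliberately tiny sketch in Python. It is nothing like a real model, just the shape of the idea: count which word most often follows another in some text, then always continue with the most familiar choice. Everything in it is illustrative.

```python
from collections import Counter, defaultdict

# A toy next-word predictor, for illustration only. Real language models are
# neural networks trained on billions of pages, not simple word counts, but
# the underlying move is similar: continue with the most probable pattern.
corpus = "i hear you . that makes sense . i hear what you are saying .".split()

# Count which words tend to follow each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common next word seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("i"))     # -> "hear", the most familiar continuation
print(predict_next("hear"))  # -> "you"
```

Scale that idea up by billions of pages and a far more sophisticated notion of a pattern, and you have the rough outline of what is on the other side of the screen.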

In a conversation about feelings, that can read as deeply empathic. Phrases like "I hear you" and "that makes complete sense" arrive without hesitation. There's no body in the room, no nervous system across the silence of a Tuesday morning, no one tracking what you skipped over. There is a pattern engine producing the most probable next sentence given what you just typed.

In an interview published by Scientific American in August 2025, C. Vaile Wright, a licensed psychologist and senior director at the American Psychological Association's Office of Health Care Innovation, described the design problem plainly. Conversational AI agents, she explained, are coded to keep people on the platform as long as possible, because that's the business model, and the way they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy. A therapist's job, she pointed out, includes pushing back when someone is stuck in a pattern that's hurting them. A chatbot's job, as a product, is to keep them engaged.

This shows up in measurable ways across multiple studies. In October 2025, researchers at Brown University worked alongside licensed psychologists to test how live AI agents responded when asked to function as therapists. They identified fifteen specific ways the chatbots routinely violated ethical standards that trained clinicians are taught to uphold. Among them: over-validating distorted beliefs, producing what the team called "deceptive empathy" through phrases that simulate connection without offering it, and missing crisis cues a clinician would catch on the first turn.

Stanford's Institute for Human-Centered AI reported similar findings earlier in 2025. Their research showed that chatbots displayed measurably more stigma toward conditions like alcohol dependence and schizophrenia than toward depression, and that in conversational testing, the bots repeatedly failed to respond appropriately when people described serious distress. The stigma persisted across newer and larger models. As the lead researcher put it, the problems do not appear to be solving themselves with more data.

A peer-reviewed study published in JMIR Mental Health in 2025 tested ten conversational AI bots, spanning general-purpose assistants, companion bots, and dedicated mental health bots, with sixty scenarios involving fictional adolescents proposing clearly dangerous ideas. The chatbots actively endorsed those proposals in nineteen of the sixty cases, or about thirty-two percent of the time. Four of the ten chatbots endorsed half or more of the harmful ideas. None of the ten managed to push back on all of them.

The most recent picture is even more specific. Reporting by Fortune on research released in May 2026 by mpathic, a company founded by clinical psychologists, found that leading models still struggle with one of the central tasks of therapy: knowing when someone needs pushback rather than reassurance. The models could usually spot direct crisis statements. They were far less reliable when risk showed up indirectly, through subtle comments about food, dieting, withdrawal, hopelessness, or beliefs that became more extreme across a long conversation. Across six major AI models tested in multi-turn conversations, the most common harmful behaviour was reinforcement, meaning the models validated or built on a person's belief without enough scrutiny. Eating disorder conversations were particularly hard for the models to read, because harmful behaviour can be wrapped in the familiar language of self-improvement, food, and fitness.

This is part of what people mean when they describe AI as sycophantic. Sycophancy, in this context, is the model's tilt toward agreement, warmth, and whatever shape of response is likely to keep someone engaged. So when you describe your boss as a monster at 3 a.m., the chatbot is unlikely to slow you down. It is much more likely to confirm. The next morning, the story feels truer. It also feels lonelier, in a way that's hard to name.

The other quiet risk is harder to spot. A chatbot can hallucinate, which means it can generate confident, fluent, completely fabricated information. With everyday tasks, you can usually catch this by checking a fact. With emotional content, there's no fact to check. The model can invent a framework, a piece of advice, a way of understanding your relationship, and present it as steady ground. By the time you realize it was made up, you've already built on it.

When it has gone wrong: cases, lawsuits, and what they reveal

A research paper can describe a pattern. A documented case can show what that pattern actually does in a life.

In one case reported by Fortune, a forty-seven-year-old man named Allan Brooks spent more than three weeks and over three hundred hours talking with ChatGPT after becoming convinced he had discovered a new mathematical principle that could disrupt the internet and enable, among other things, a levitation beam. Brooks told the reporter he repeatedly asked the chatbot to reality-check him. The model continued to reassure him that his beliefs were real. He was caught, in part, inside a sycophantic streak of OpenAI's GPT-4o model, an update the company rolled back in April 2025 after publicly acknowledging it had become too agreeable. By then, weeks of his life were inside a loop that no human reality check had interrupted. He is not the only person to have experienced something like this. Journalists and researchers have begun documenting a wider pattern of what some are calling AI-induced delusional spirals.

The most serious cases involve young people. The American Psychological Association has cited two lawsuits filed by parents alleging that their teenage children died by suicide following extensive conversations with AI chatbots. The cases are still working through the courts. They are part of why the APA, in late 2024 and through 2025, formally asked the United States Federal Trade Commission to investigate what it described as deceptive practices by AI chatbot companies, saying that some of these products are being marketed as trained mental health providers when they are not.

Closer to home, the question of accountability has emerged from the aftermath of a tragedy. Earlier in 2026, Sam Altman, the CEO of OpenAI, issued a formal apology to the community of Tumbler Ridge, a small town in northeast BC, following a mass shooting at a local school in February that took a number of precious lives. Reporting revealed that OpenAI's own systems had flagged the shooter's ChatGPT account months earlier. Some staff inside the company recommended alerting law enforcement. Leadership decided the case did not meet the company's threshold for outside notification. Families have since filed lawsuits, and the company has revised its policies and committed to closer coordination with Canadian authorities.

There is nothing easy to say about a loss like this, and it is not the purpose of this post to relitigate it. What these cases point to, taken together, is a structural pattern. AI chatbots have become a first stop for a great many people in distress. The companies that build them have, until very recently, been writing their own rules about when to intervene, when to escalate, and when to keep a person engaged. Those rules have at times failed in ways that have devastated real families. This is the kind of harm that regulation usually exists to prevent. For AI, the regulation is still being written, debated, and, unfortunately, delayed.

The bigger picture, and a question worth asking

On The Daily Show in mid-May 2026, Jordan Klepper delivered a line that has been hard to shake. AI, he noted, is coming for our jobs. The question he asked, with the sharpness only satire can manage, was whether it would also be there to comfort us when we lose those jobs.

The observation matters because it names something the research alone can miss. The same technology being marketed to corporate buyers as a way to do more with fewer people is also being marketed to the rest of us as a friend, a confidant, an always-available advisor. White-collar, creative, and administrative work are the categories most exposed to displacement. Which means the technology being built, in part, to reduce the need for human roles is now also being asked to hold us through the consequences of that reduction. That is a strange shape for a tool to be in.

None of this means AI is the enemy. It means the system shaping these tools is built around engagement and efficiency, not around care. Knowing that doesn't tell you what to do at 3 a.m. It does tell you something about what is on the other side of the screen, and what its makers are ultimately optimizing for.

Where the regulators have started to move

For a while, almost nothing in this space was regulated. That has begun to change, slowly.

Sam Altman himself, in the summer of 2025, publicly warned people against treating ChatGPT as a therapist, primarily because there is no legal confidentiality protecting what they share with it. As Wright pointed out in her Scientific American interview, a therapist in Canada or the United States operates under confidentiality laws and a professional ethics code. A chatbot operates under a company's terms of service. Chat logs can be subpoenaed. They can leak. The boss you described as a monster could, in theory, surface again somewhere you did not intend.

In the United States, a small but growing number of jurisdictions have begun to regulate AI mental health services directly. Illinois passed a law in August 2025 prohibiting AI from being marketed as providing therapy, and other states are considering similar measures. More than twenty consumer and digital protection organizations have asked the FTC to investigate what they describe as the unlicensed practice of medicine by therapy-themed chatbots. The APA has called for federal legislation that would, among other things, prevent chatbots from calling themselves psychologists or therapists, require companies to report suspected suicide attempts that surface in conversations, and limit the use of addictive design tactics in products marketed to people in distress.

In Canada, the landscape is still more open. There is, at present, no federal law requiring AI companies to report identified threats to police, even though the Tumbler Ridge case has prompted policy reviews. Provincial regulators of psychologists, counsellors, and clinical social workers continue to set the standards for human practitioners. They do not, and at present cannot, set them for the software people increasingly turn to between sessions or instead of them.

This is what regulators, researchers, and clinicians keep saying, in different words: the chatbots are already here. They are being used by tens of millions of people for emotional support every day. The structures that would make that use safer are still under construction, and in the meantime, the discernment is largely up to us.

What relational therapy offers that a language model cannot

Relational and experiential therapy, by definition, cannot be done with AI alone. It is built around something a chatbot does not have access to: another nervous system in conversation with yours. Trauma work, in particular, requires a steady nervous system in the room with you, one that stays regulated when yours doesn't and that can offer your system something to attune to.

We heal in relationships, not in isolation. That isn't a slogan. It is what the somatic, relational, and experiential therapies are built on. Decades of psychotherapy outcome research keep finding the same thing across modalities: the relationship itself, more than any specific technique, is the strongest predictor of whether therapy helps. The connection between two nervous systems is what does the work. AI cannot offer that, even when its words sound exactly right.

When we sit together online, relational therapy means I'm tracking pacing, breath, the small shifts that happen in your shoulders when a word lands wrong. I notice when you skip past something. I notice when you've started narrating yourself instead of feeling. We can slow down. We can hold ambiguity without flattening it. We can sit with the part of you that is, in fact, exhausted by your boss without skipping straight to a verdict.

There's also the matter of being known over time. A model doesn't remember last week, and the version of memory some apps offer is a summary stored as text, not a relationship. A therapist remembers the way you described your father in February. A therapist notices when your shoulders drop today in a way they didn't six months ago. The accumulation matters. So does the repair when something between us goes sideways, which it will, and which becomes its own kind of healing.

And, structurally, there is the accountability piece. When you sit with a regulated therapist in Canada, that person works inside a system of professional obligations: governing bodies, documented standards, duties to report serious concerns. If a session reveals something pointing toward harm to the person in the room or to anyone else, there's a real human in the loop with a legal and ethical responsibility to act. That layer doesn't exist on the other side of a chatbot conversation.

None of this is to say AI has nothing to offer. It can be a useful tool for finding language, for externalizing, for taking the first edge off a sleepless hour. The question is whether it stays in that role or quietly takes a larger one.

A practical filter for between-session AI use

Reaching for AI between sessions isn't a problem you have to solve. It's a tool you can use more or less well. A few things that help:

  • Notice when it agrees too easily. If a chatbot is mirroring everything you say back to you in a slightly warmer voice, you're likely in an echo chamber. That's a good moment to put the phone down.
  • Use it for words, not verdicts. Asking a model to help you find language for an emotion is different from asking it to tell you who's right. The first can help. The second often hardens a story you're already half-trapped in.
  • Watch for narrowing. If a long AI conversation keeps confirming your most painful narrative, your view is shrinking. Real support tends to open things up, even when it doesn't feel good in the moment.
  • Watch the small signals. The mpathic research found that models miss the indirect ones first. If something you are saying in passing about food, sleep, withdrawal, or hopelessness is being met with cheerful affirmation, that is precisely the moment to bring it to a person instead.
  • Bring it to therapy. The 3 a.m. transcript can be useful material. What you reached for, what you typed, and how it landed in your body are all worth slowing down with in session.
  • Trust the body's last word. If you feel calmer but more isolated after an AI conversation, something is off. Calmer is not the same as connected.

There's nothing wrong with experimenting with AI. There's also nothing about it that replaces what happens when another person, paying attention, stays with you through something hard. A pattern engine can soothe. Another person, attuned, can meet you. Both have their place. Only one of them can grow with you over time, take responsibility for what passes between you, and meet you again next week as the same person who knows your story.

If you've been reaching for AI between sessions, you're not doing anything wrong. You're doing what humans do when they hurt at an hour no one else is awake. The next layer of this work is just learning to tell, from inside your own body, which kind of help you're actually receiving, and to bring what you find to someone who can hold it with you.

Frequently Asked Questions

Is it bad to talk to ChatGPT or another AI chatbot about my problems?

No. Curiosity about new technology is human, and reaching for words at a hard moment is human too. The question isn't whether to try AI. It is what role it takes once you have. As a one-off, an exploration, or a way to take the edge off a sleepless hour, AI can have its place. As a therapist, it is being asked to do work it is not built for, work that no regulated body has signed off on.

Can AI replace a therapist?

Not in any way that holds up over time. AI can produce helpful-sounding language and can take the edge off a sleepless hour. What it cannot do is track your nervous system, remember the texture of how you described your father in February, repair a rupture between you, hold professional accountability for what passes between you, or stay the same person who knows your story across the long arc of change. Therapy is built around those things. AI is built around producing fluent text that keeps you engaged.

Can trauma work be done with AI?

No. Trauma work, like relational and experiential work more broadly, cannot be done with AI alone. It needs another nervous system in the room with you, one that stays regulated when yours doesn't and that your system can attune to. Decades of psychotherapy outcome research point to the same conclusion: the relationship itself, not the technique, is what makes therapy work. We heal in relationships, not in isolation. AI cannot offer that, even when its words sound exactly right.

Is what I say to a chatbot private?

Not in the same way it would be with a therapist. A regulated therapist in Canada operates under provincial confidentiality laws and a professional ethics code. A chatbot operates under a company's terms of service, which can change. Chat logs can be subpoenaed in legal proceedings. They can leak in data breaches. The CEO of OpenAI publicly warned people in mid-2025 against treating ChatGPT as a therapist for exactly this reason. None of that means AI conversations are inherently dangerous. It does mean it is worth knowing what kind of record you are creating before you share the most vulnerable parts of your story.

How do I know if AI is helping or making things worse?

Three quick signals. One: notice your body afterward. Calmer is not the same as connected, and feeling soothed but more isolated is a sign to slow down. Two: notice whether the conversation opened anything up or only confirmed what you already believed. Three: notice whether the conclusions you reached at 3 a.m. still feel true at noon. If they harden over time without testing them with a real person, the AI is likely shaping the story more than you realize. Recent research suggests paying particular attention to the small things, like indirect comments about food, sleep, or hopelessness, that a chatbot is likely to meet with reassurance when something more careful is needed.

Author's bio
Clayre Sessoms

Clayre Sessoms (she/they) is a psychotherapist and art therapist whose work begins in presence: what's real, what's alive, and what needs care. Her approach is relational, experiential, and creative. As a white therapist, she's learned that power lives in the room whether named or not: in who offers care, in the history of harm, in the systems that shape us. She doesn't come as a fixer or an expert. She comes as a collaborator, a trans, disabled, and queer person committed to repair and building the trust needed for care to unfold.

Next step

When something here speaks to you

We invite you to continue reading our Canada-based online therapist blog to see how we work as trauma-informed therapists in Vancouver. Find answers in our therapy FAQs and therapy resources. When you have questions, reach out. We'll meet you there, when you're ready.


BLOG UPDATES + FREE SUPPORT

Subscribe to Our Blog Updates

Sign up for our monthly, spam-free newsletter and get Begin Within: The Self-Compassion Reset & Meditation — a concise guide and 3-minute audio to steady your breath, quiet self-criticism, and meet yourself with care.

You will also receive our latest blog posts, along with grounded insights, resources, and invitations to future offerings from Clayre Sessoms Psychotherapy. Unsubscribe anytime.