
From Crutch to Coach: What 14,000 Conversations Tell Us About AI and Mental Health

Dr. Max Major

19 May 2026


Content

  • What we set out to understand
  • What we found
  • Why the mechanism question matters
  • How this shapes Nova
  • The question worth asking

The question that follows AI in mental health almost everywhere is: does it create dependency? It's a reasonable concern. If someone reaches for an AI whenever things feel hard, and nothing grows from it, you haven't really helped them – you've given them somewhere comfortable to go.

But I think dependency is the wrong frame for the problem. The more useful question is: what kind of engagement is happening, and is it building anything? That's what our new research sets out to examine (currently in preprint: [link]). And what it found changes how I think about what good AI mental health support should look like.

What we set out to understand

Access to mental health support is a structural problem that AI has a genuine role in addressing. Most people who need support can't get it quickly enough, consistently enough, or at a cost that works for them. That gap is real, and it's one of the reasons organisations are investing in AI tools for their people's mental health. But investing responsibly means asking harder questions than "does it help?" The one that matters most, and gets asked least, is: how does it help – and is the way it helps building lasting capacity, or just managing the moment?

We analysed more than 14,000 real-world conversations from Nova, Unmind's AI mental health coach, collected between January and August 2025, looking at patterns in how people engage over time.

The analysis was calibrated with expert human raters, grounding the findings in clinical judgement rather than automated metrics alone. The question we were asking: is Nova functioning more like a crutch (reactive, no developmental arc) or more like a coach, where consistent engagement builds something over time?
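Calibrating an automated analysis against expert human raters is typically done by measuring chance-corrected agreement between the two sets of labels. As a minimal sketch of that idea – the labels, label names, and values below are hypothetical, not taken from the study – Cohen's kappa between a classifier and a clinical rater can be computed like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two label sequences, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random
    expected = sum((freq_a[l] / n) * (freq_b[l] / n)
                   for l in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical labels: automated classifier vs a human rater, each tagging
# conversations as coach-like ("C") or crutch-like ("X").
auto  = ["C", "C", "X", "C", "X", "C", "C", "X", "C", "C"]
human = ["C", "C", "X", "C", "C", "C", "C", "X", "C", "X"]
print(round(cohens_kappa(auto, human), 2))  # → 0.52
```

Values above roughly 0.6 are conventionally read as substantial agreement; grounding an automated metric this way is what lets findings rest on clinical judgement rather than the classifier alone.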

What we found

People who return to Nova tend to come back to the same topics. At first glance, that might look like the crutch pattern – returning to the same problem, not moving past it. But when we looked more closely at the nature of those repeat conversations, something different emerged.

Repeat users don't repeat themselves. They go deeper. The same themes reappear with greater nuance, more self-awareness, more evidence of skill being applied. That pattern – consistent return to familiar territory, but with increasing depth rather than repetitive surface-level exchange – is the hallmark of coaching-style engagement. It's precisely what you'd want to see if a tool is building something rather than simply managing the moment. And it's the opposite of what dependency looks like.
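One way to operationalise "increasing depth rather than repetitive surface-level exchange" is to score each session on a depth scale and track how that score changes across a user's repeat visits to the same theme. A toy sketch, with entirely hypothetical users, topics, and ratings:

```python
from statistics import mean

# Hypothetical per-session depth ratings (1-5, e.g. assigned by raters) for
# returning users revisiting the same topic across successive conversations.
sessions = {
    ("user_1", "work stress"): [2, 3, 3, 4],
    ("user_2", "sleep"):       [1, 2, 4],
    ("user_3", "anxiety"):     [3, 3, 2, 4, 5],
}

def depth_trend(scores):
    """Average change in depth per repeat visit: > 0 means conversations deepen."""
    deltas = [later - earlier for earlier, later in zip(scores, scores[1:])]
    return mean(deltas)

for (user, topic), scores in sessions.items():
    print(user, topic, round(depth_trend(scores), 2))
```

A positive trend across returning users is the coaching signature described above; a flat or negative trend on the same recurring topics would look more like the crutch pattern.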

Repeat users didn't repeat themselves. They went deeper – returning to the same themes with greater nuance, self-awareness, and evidence of applied skill. That's what coaching looks like.

Dr Max Major
Principal Clinical Psychologist, AI

Why the mechanism question matters

Knowing that a tool helps is one thing. Understanding how it helps is another, and in some ways more actionable.

If an AI tool works by providing a release valve – somewhere to put difficult feelings so they feel more manageable in the moment – that's genuinely valuable. But it's contingent on the tool being available. If, on the other hand, the tool works by building the kind of reflective capacity that people carry with them after the conversation ends, that's a different kind of value. It persists.

For anyone responsible for supporting people's mental health at scale, that distinction matters. The coaching literature is clear: depth-based, skill-building engagement is what produces lasting change. Whether an AI tool generates that kind of engagement, or something shallower, is a meaningful question to be able to answer.

How this shapes Nova

These findings reflect how we've approached Nova's design from the start. The goal has never been to provide support in the narrow sense – something to reach for only when things are hard and set down when they're not. It's to build the kind of inner resource and self-awareness that makes someone better placed to cope, with or without the tool.

In practice, that means Nova asks questions that develop insight rather than simply providing answers. It maintains continuity across conversations so that themes can be explored with real depth, rather than starting from scratch each time. And it's designed to recognise when someone is ready to go further, not just to meet them where they are.

The research gives us more granular evidence that this design philosophy translates into how people actually engage – and that at scale, the engagement looks like coaching rather than dependency.

The question worth asking

We're sharing this research openly because the questions it raises matter beyond Unmind. As AI becomes a more significant part of how organisations support their people, the standard it's held to should include whether it leaves people more capable than it found them.

It's also, we'd argue, the right question for anyone evaluating AI mental health tools to be asking of the vendors they consider. Not just "does your data show it helps?" but "how does it help – and does the mechanism of that help build lasting capacity or simply manage the moment?"

More than 14,000 conversations suggest that leaving people more capable is achievable. What this research adds is evidence that it's not just achievable – it's measurable.

Read the full paper

Available as a preprint here. The paper has been submitted to Scientific Reports and is currently in peer review. Authored by Dr Max Major, Viktoria M Ivan, Dr Katie M White & Marcos Economides.

For more on Nova and how Unmind approaches AI-assisted mental health support, visit unmind.com

About the Author


Dr. Max Major, Principal Clinical Psychologist, AI

Dr Max Major, PhD, leads the clinical safety, ethics, and governance of all AI innovation and serves as the clinical architect behind Nova, Unmind's AI mental health agent. A registered Clinical Psychologist with a decade of experience across New Zealand and the NHS, he brings deep expertise in clinical AI, digital transformation, and the neuroscience of decision-making.