
A Call for Regulation: Building Safer AI for Mental Health

Raya Moshiri

15 July 2025


Contents

  • Call for Regulation
  • The illusion of support
  • What safety should actually look like
  • How Unmind does it differently
  • Why now?

Call for Regulation

Mental health support is changing.

Generative AI tools are being introduced into digital wellbeing platforms, mental health apps, and workplace ecosystems, often pitched as scalable, accessible, and emotionally intelligent. They offer reflection. Guided exercises. Human-sounding responses. For many organizations, this feels like progress.

But for the people using them – especially during a difficult moment – the stakes are high.

What happens when a chatbot misunderstands the tone of someone’s message?
When it suggests a breathing exercise, but the person is actually in crisis?
When it stores sensitive personal data with no clarity on where it’s going?

In mental health, these aren’t edge cases. They’re increasingly common, because traditional care so often falls short – long wait times, high costs, and hit-or-miss quality leave people turning to AI as a faster, more accessible way to get support.

The illusion of support

Many generative chatbots are designed to sound reassuring – warm tones, validating responses, even subtle callbacks to earlier comments. This creates the sense of being seen or understood. And when someone is vulnerable, that experience can feel comforting.

But here’s the risk: when a tool mimics care without offering real support, it doesn’t just miss the mark; it can deepen the problem. The illusion of care can feel real, even comforting, until it breaks. When expectations clash with reality, users may be left feeling more alone, more confused, and questioning whether their experience ever really mattered.

Without clinical grounding, these tools may:

  • Offer vague or generalized advice that lacks psychological relevance
  • Miss cues around suicidality, anxiety, or trauma
  • Reinforce unhelpful narratives by validating everything uncritically
  • Fail to offer an appropriate next step – especially in urgent moments

The result? A tool that looks like it’s helping, but leaves people to navigate emotional complexity alone.

What safety should actually look like

AI isn’t inherently harmful or helpful – in mental health, what matters most is how it’s designed, deployed, and safeguarded.

We believe any mental health chatbot or generative tool must meet a clear set of criteria:

  • Ethical integrity – designed with transparency, boundaries, and respect
  • Evidence-based content – aligned with psychological research and clinical best practice
  • Conversational accuracy – coherent, context-aware, and free of bias or misinformation
  • Defined limitations – with built-in escalation routes and clear handoffs to human care
  • Inclusivity and accessibility – usable by people with diverse backgrounds, languages, and lived experiences

These aren’t nice-to-haves; they’re non-negotiables. Mental wellbeing doesn’t benefit from tools that are “close enough.” It requires systems that are built to be trustworthy from the start.
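To make one of these criteria concrete – defined limitations with built-in escalation routes – here is a minimal sketch in Python of what that structure can look like. It is an illustration only, not how Unmind or any particular product implements it: the keyword list, function names, and risk levels are assumptions made for the example, and a real system would rely on clinically validated risk models and human oversight rather than keyword matching. The point it shows is that every message is checked before any generated reply is shown, and anything flagged is routed to human care instead of being answered by the model.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    CRISIS = "crisis"


# Illustrative signals only – a real system would use clinically validated
# risk models and human review, not simple keyword matching.
CRISIS_SIGNALS = ("suicid", "kill myself", "end my life", "self-harm")


@dataclass
class Routing:
    level: RiskLevel
    next_step: str           # what the product does next
    allow_model_reply: bool  # whether a generated reply is appropriate at all


def assess_risk(message: str) -> RiskLevel:
    """Crude stand-in for a clinical risk classifier."""
    text = message.lower()
    if any(signal in text for signal in CRISIS_SIGNALS):
        return RiskLevel.CRISIS
    return RiskLevel.LOW


def route_message(message: str) -> Routing:
    """Decide the next step before any generative reply is produced."""
    if assess_risk(message) is RiskLevel.CRISIS:
        # Defined limitation: the chatbot does not try to handle a crisis itself.
        return Routing(RiskLevel.CRISIS,
                       "show crisis resources and offer a handoff to human care",
                       allow_model_reply=False)
    return Routing(RiskLevel.LOW,
                   "continue with guided, evidence-based content",
                   allow_model_reply=True)


if __name__ == "__main__":
    print(route_message("I haven't been sleeping and feel on edge"))
    print(route_message("I want to end my life"))
```

The value of this shape is structural: the limitation is an explicit branch that sits in front of the model, rather than a hope about how the model will respond.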

How Unmind does it differently

At Unmind, we build AI tools like Nova to complement human care – not replace it. Nova isn’t designed to simulate therapy, and it doesn’t presume, pressure, or pretend to know more than it can. Instead, it offers thoughtful guidance, helping people pause, reflect, and engage with clinically designed tools across the Unmind platform. These include guided journaling, science-backed courses, and practices for stress, sleep, and self-awareness, each developed with input from psychologists, researchers, and individuals with lived experience.

If someone needs more support, Nova helps them take that next step, whether that’s reaching out to a therapist, speaking with their manager, or accessing a structured resource. Every element of Nova’s design is thoughtfully crafted, tested, and refined to ensure safety and impact. This isn’t just about functionality. It’s about values. Safety and transparency are baked into the design, not layered on after.

Why now?

There’s no universal regulation yet for AI in mental health. No clear line that separates innovation from risk. And that’s exactly why this conversation matters now.

The tools we choose – and how we build them – will shape how people engage with mental health support in the years to come. If we get this wrong, we risk damaging the very trust we’ve worked so hard to earn. But if we get it right, we can scale support responsibly, through thoughtful design, ethical guardrails, and alignment with human care.

Because mental wellbeing isn’t something to experiment with. It’s not a playground for new tech. It’s a deeply human space, one that deserves the same standards of safety, ethics, and empathy as any clinical setting.

We don’t need more tools that sound supportive. We need tools that truly are.

Want to talk about how we build AI with care, clarity, and evidence? Let’s connect.