AI

Advancing mental health through ethical innovation

Welcome to the future of mental health support

At Unmind, we’re dedicated to nurturing and celebrating mental health and wellbeing. Our commitment to safety and responsibility shapes every aspect of our AI technologies, ensuring they align with our purpose and vision to empower people in all aspects of their lives.

Nova mobile conversation

Our principles

These principles ensure our AI aligns with our purpose and vision, providing safe, secure, and human-centric solutions.

Learn more about our AI Guardrails
Human-centric

We put people first, designing AI systems around real human needs. We celebrate the diversity of human experiences and proactively identify and mitigate bias, so the experience is inclusive, equitable, and fair for everyone.

Safe

Our AI systems are built to enhance mental wellbeing. They follow strict safety guidelines, are developed in collaboration with clinicians and psychological experts, and align with WHO guidance and international regulations.

Powered by science

Our in-house science team ensures that everything we do is evidence-based and continuously updated with the latest scientific findings to deliver effective solutions.

Always learning

We are committed to continuous improvement, evolving based on user feedback, usage data, and ethical considerations, with rigorous testing and ongoing monitoring to prevent potential harm and misuse.

Transparent & accountable

We maintain open communication, informing users about the nature, capabilities, and limitations of our AI systems and providing clear guidelines to build understanding and trust, supported by reliable, responsive technology.

Secure & private

We never share user or client information without consent. We uphold strict privacy standards to keep personal and sensitive data secure, comply with GDPR and HIPAA, and adhere to the ISO 27001 and Cyber Essentials standards.

Private, safe, and secure for employees everywhere

With strict confidentiality and top-tier security, our use of AI follows rigorous security practices and AI ethics guidelines for a private, secure experience.

  • ISO 27001 certified
  • Cyber Essentials certified
  • GDPR compliant
  • HIPAA compliant
  • AA accessibility
  • US AI Bill of Rights
  • WHO guidance on Ethics & Governance of AI for Health
  • EU AI Act

FAQs

How does the AI Coach work?

Our AI Coach, Nova, was built by Unmind researchers and clinical psychologists. Through robust guardrails and rigorously engineered prompts, Nova uses GPT-4o, a large language model from OpenAI, to give users an engaging, ethical, and friendly coaching experience. Nova understands natural language, so when you talk to it, it responds conversationally, much as a person would.
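To illustrate the general pattern, prompt-level guardrails like these are typically implemented by fixing coaching and safety instructions in the model's system prompt, so they apply to every turn of a conversation. The sketch below is purely illustrative; the prompt text and function are hypothetical examples of the technique, not Unmind's actual implementation:

```python
# Illustrative only: a simplified example of prompt-level guardrails.
# The prompt wording and function below are hypothetical, not Unmind's.

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a supportive, friendly wellbeing coach. "
    "Never give medical advice, diagnoses, assessments, or treatment. "
    "If a user appears to be in crisis, gently direct them to professional support."
)

def build_messages(user_text: str) -> list[dict]:
    """Assemble the message list sent to a chat model such as GPT-4o,
    pinning the guardrail instructions in the system role so they
    constrain every response the model produces."""
    return [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```

The resulting message list would then be passed to the model provider's chat API, where the system message shapes every reply the model generates.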

How does Nova learn?

The Unmind product and Clinical Psychology teams keep Nova up to date and ensure it gives appropriate responses by setting guardrails within the prompt. We do this based on clinical best practice, user research and feedback, and review of anonymized chats, to give users a highly personalized and safe experience. AI models are known to sometimes state things as if they were true when they are not. We guard against this by explicitly instructing Nova in its prompt not to give fact-based responses.

How do you ensure the advice is appropriate?

Nova is there to provide support and can direct users to appropriate resources as determined by our Psychology team. It's important to note, however, that it will not give medical advice, mental health diagnoses, assessments, or treatment.

  • We have a dedicated team that manually tests the responses.
  • We have guardrails in place to ensure the responses are appropriate.
  • We have a set of testing tools that are run each time we make an update.
  • OpenAI has a set of safeguards and safety mitigations that prevent their models from being misused; any usage of their models requires adherence to these policies, including our use for Nova.

What happens with the data, and where does the data go? Is it anonymized?

Conversations with Nova are strictly confidential. We use robust data security measures, including strong encryption, to protect conversation histories. To improve Nova’s functionality, a select group of authorized experts may review samples of anonymized conversation data, ensuring user privacy. Data processed by OpenAI, our third-party provider, is anonymized and automatically deleted after 30 days, with no use for model training. Your privacy is always our priority.

How is Unmind keeping ethics and safety in mind?

Unmind is committed to advancing the safe, ethical, and responsible use of AI.

We believe in the power of AI to make mental health and wellbeing support more accessible, fair, and inclusive, reflecting the diversity of human experiences and cultures. Our rigorous, evidence-based approach is supported by our in-house science team and enhanced by continuous feedback and improvement processes. Unmind emphasizes safety by implementing strict protocols to prevent any potential harm our AI systems might cause, including psychological and societal impacts. We adopt a science-driven, human-centric strategy, ensuring our practices are grounded in rigorous research and evidence.

We value transparency and accountability, continuously engaging with stakeholders and incorporating feedback to enhance our AI solutions. We ensure privacy and confidentiality, rigorously uphold data protection standards, and maintain a commitment to continuous improvement through a feedback-informed cycle of measuring, understanding, and acting.

Where can I find more information on Nova and Unmind's use of AI?

You can find a full FAQ on Nova and Unmind's use of AI in our Help Center, our Trust Center, and on our AI Guardrails page.

Nova supercharges OpenAI’s powerful model with Unmind’s expert-crafted guardrails, delivering tailored mental health support that’s as safe as it is personal—all aligned with WHO’s trusted safety standards. Empowering wellbeing, one conversation at a time.

Dr. Acacia Parks, PhD, MBA
Director of Science