Unmind logo

How your organization can deliver AI for mental health safely and ethically

Dr. Ramesh
Dr. Ramesh Perera-Delcourt

10 December 2024

An illustration of a hand creating a smiley face shape on a bright yellow background

Content

  • Why AI hesitancy may leave employees behind
  • The science of building trustworthy AI
  • How transparency can build trust
  • How Nova, Unmind’s AI coach, is built with users in mind
  • Organizations can lead in AI by building trust
  • Takeaways

As digital technologies continue to revolutionize healthcare delivery, AI-driven solutions are increasingly being integrated into mental health services. With an explosion in tools available for organizations, there are also concerns about the ethics and safety involved in AI tools. 

In our recent survey of over 3,000 HR leaders, we found that: 

  • 42% were worried about AI's lack of empathy,
  • 39% raised ethical issues about decision-making, 
  • 37% were concerned about privacy and data security.

AI's potential in mental health care is vast, from offering personalized care to improving access and efficiency. However, the technology's application in this field must be carefully managed and address these concerns so HR leaders can be confident in the AI solutions they are adopting. 

In this article, we explore these challenges and identify ways that AI developers and organizations can build trust with users.

Why AI hesitancy may leave employees behind

Referred to as ‘digital exclusion’, there’s a risk that mental health support from AI may not be accessible to all employees. 

Some may feel hesitant to share their thoughts due to privacy concerns or a preference for human support – a recent YouGov poll found that two-thirds of Americans and four-fifths of Britons feel uncomfortable sharing their mental health concerns with an AI chatbot.

Individuals who lack the skills or willingness to navigate AI-based tools may find themselves unable to benefit from these advancements. If groups of employees are excluded from the support on offer, this not only impacts them, but reduces the impact of AI solutions for the organization as a whole. 

The science of building trustworthy AI

In the emerging field of AI-user interaction, a key finding has been that users’ response to AI differs when they know they are interacting with AI. 

In an experiment, chatbot users, unaware of whether they were interacting with ChatGPT or a human, rated chatbots at a similar level of authenticity as humans. However, when users are aware that they are using AI, they rate responses as less trustworthy compared to human interactions. This shows the importance of being transparent with users on the source of their information as it affects how it is perceived. 

In several studies, users rate AI as less authentic and believe AI to be more logical and ‘colder’ when it comes to moral dilemmas compared to humans. As AI takes up a bigger role in our daily lives, it makes the development of AI ethical standards, compliance and social responsibility as important as the performance of technology itself in order to benefit and engage users. 

“With the initial generation of AI in mental health, there is concern of lines of therapeutic compassion and empathy being mimicked by AI. We need to be careful with calling AI a companion or friend, because it is not. Although these therapeutic characteristics can be taught to sound human and are clinically validated, it is not a person on the other end. This is incredibly important to ensure the safety of vulnerable populations who need to be aware of the distinction.”

Default description for the image
Catherine Rusch, MSW
Health & Performance Consultant at HUB International

How transparency can build trust

One way of combatting digital exclusion is to be transparent with users. Providing clear information and being honest about the benefits and limitations of AI can go a long way to address concerns and support adoption. 

It’s also important that this information is accessible. In a review of studies on communication of AI and user trust, both a lack of information and overly technical, confusing information limited user trust and engagement in AI. 

Organizations and service providers can build trust and increase engagement with users by: 

  • Being honest about the solution and its evidence base. How likely is it that the solution is going to be helpful and what do they know about how effective it is? 
  • Translate the technical side of the product. Give buyers and users a clear understanding of how the AI service works. 
  • Help users make informed choices. With access to accurate information about the risks and benefits of specific solutions, individuals can exercise their autonomy effectively (including about how their data will be used).

“In addition to being clear about the safety guardrails put in place, it’s important to have a clear AI roadmap and strategy, making sure employees understand the organizations’ policies around it.”

Default description for the image
Catherine Rusch, MSW
Health & Performance Consultant at HUB International

For leaders in healthcare and HR, addressing ethical issues around AI will be vital. In order to establish the trust of the people they serve in the solutions they are putting forward, buyers of AI solutions need to be transparent and confident in the transparency of the solutions’ developers. 

Leaders should therefore prioritize AI tools that have been rigorously developed with clinical input and according to clear responsible innovation principles.

How Nova, Unmind’s AI coach, is built with users in mind

By developing Nova hand-in-hand with our clinical psychologists, Unmind’s AI coach has been informed by ethical and clinical best practices from the very start. 

  • Clinical guardrails. Nova provides responses which are guided by prompts and guardrails crafted by Unmind's Clinical Psychologists. These are based on clinical best practices, user research, and feedback. 
  • Continuous testing. Before any changes are deployed, Nova's prompts undergo rigorous testing. This includes predefined scenarios, unscripted exploratory tests, and evaluation criteria to ensure the prompts are safe and effective. 
  • Clarity on workings. With these criteria and guardrails in place, we can transparently communicate how Nova works to our users. For full details, see our user help page

With this development in place, Nova can address concerns around ethics and privacy by illustrating the steps that have been taken to prioritize quality and safety for users. 

Organizations can lead in AI by building trust

AI in mental health care presents both opportunities and challenges. While AI can enhance service delivery and user experience, there are a number of ethical issues to address in order to ensure it is delivered safely and in a way that people trust and therefore will engage with. 

Leaders must be proactive in assessing the credibility and quality of AI offerings to minimize risks to users and ensure that AI-driven mental health solutions are both effective and safe. 

Takeaways

The integration of AI into mental health care is an ongoing development that requires careful consideration of both its benefits and potential risks. By staying informed and doing due diligence, leaders can navigate these challenges and leverage AI to improve the use of mental health tools in the workplace while safeguarding the well-being of their employees. As AI mental health tools continue to evolve, prioritizing transparency will be key to ensuring these technologies fulfil their promise.

To find out more about using AI for mental health support within your business, check out our free guide, The AI Advantage for Mental Health: Your guide to employee care at scale.