Unmind logo
Science & Research

Life, the universe and AI: An astrophysicist’s take on the future of workplace mental health support

Default description for the image
Dr. Jazz Croft

Dr Gareth

Content

  • What were you working on before joining Unmind?
  • How did you support AI work at Unmind?
  • What’s the importance of using AI for workplace wellbeing?
  • How can mental health services innovate using AI?
  • How can we ensure we’re using AI for mental health safely?
  • What did you learn from your time at Unmind?

Dr. Gareth has been on quite a journey, from observing the cosmos to exploring the future of mental health care. We were delighted that the astrophysics and data expert could join us for a work placement in Unmind’s Data team to support AI development.

We sat down for a chat with Gareth to hear about his experience at Unmind – covering everything from working with a range of our experts, to the future of AI in mental health, and a special ‘behind-the-scenes’ look at Nova, Unmind’s AI wellbeing coach.

What were you working on before joining Unmind?

I recently completed my PhD in Astronomy and Astrophysics at KU Leuven in Belgium. There, I spent a lot of my time looking at and working with data, namely at observations of massive stars. It was fascinating work, and I really wanted to see what insights in other fields I could leverage out of the tools and methods I learned in Astronomy.

How did you support AI work at Unmind?

I helped with the ongoing development of Nova, Unmind’s AI coach, to optimize its response to user requests. This included the technical work involved in making sure Nova provides the best content to users, can give empathetic and validating responses, and signpost to mental health support when necessary. 

To make this happen, I worked with the Data team to improve methods of testing how Nova performs in specific scenarios. This means we can rigorously test the safety of Nova and validate how useful updates are before they are rolled out to users. 

To do this, I helped build an AI tool that assesses how Nova responds to particular scenarios and summarises its performance for a human evaluator to assess and confirm that it behaves as we would expect. 

What’s the importance of using AI for workplace wellbeing?

As AI becomes more advanced, it’s becoming easier to build personalized services and applications that can understand the individual needs of the user. For workplace wellbeing, this means that AI can offer reliable support, and be readily available for conversations for employees at a large scale. 

There are lots of exciting opportunities for leveraging AI in mental health. Chatbots are the most obvious application of AI, and using them in the context of mental health is a great fit. They’re scalable, personalized, and available 24/7.

As AI's context retention and reasoning capabilities keep improving, chatbots are increasingly able to feel naturalistic. It’s become easier to have a genuine conversation about your mental health. One recent development is the ability to use voice as an interface – it takes this to another level.

How can mental health services innovate using AI?

Experiment! It’s surprisingly easy to get started. Foundational technologies are starting to mature now, so there’s a lot of help out there to try new things. That includes open-source tools and platforms that make experimenting with AI fairly straightforward.

There are also more hidden AI applications that can benefit end users. For example, AI is used in Unmind’s Talk platform to improve the Talk Matching experience. Users now benefit from AI’s ability to provide them with a curated list of practitioners. It’s a more personalized way to deliver easily accessible mental health support.

How can we ensure we’re using AI for mental health safely?

There always needs to be great care in how AI is applied, especially regarding data privacy. 

Very high standards of data protection are essential, especially where it includes the discussion of sensitive and high-risk topics. Safety is also a top priority to ensure output is correct and accurate so that users can trust the content they’re given. This caution should be applied to all AI-related development, but particularly for mental health where many users will be vulnerable or experiencing difficulties.

What did you learn from your time at Unmind?

Working with a range of experts meant I could quickly expand my technical knowledge beyond my expertise in Astronomy and Astrophysics. This helped me deliver impactful work in a short amount of time. 

I also learned a lot about integrating and analyzing AI in real-world scenarios. This included working with Dr. Max Major, Senior Clinical Psychologist, to guide the development of Nova using feedback, which further validated the project thanks to his clinical knowledge. 

It was great to work with a team that took issues around AI so seriously, including ethical data use and privacy. Everyone demonstrated how hugely important this is – particularly for mental health and wellbeing support. 

Thanks, Gareth, it’s been a pleasure, safe onward travels!  

Find out more about AI within the Unmind platform below.