Alone for the holidays: What happens when AI becomes the companion?

Dr. Max Major
17 December 2025

Contents
- The paradox we're navigating
- The ethical tensions we face as builders
- How we're navigating this
- What this means for the holidays – and beyond
- Need support right now?
On Christmas Day, while families gather around tables, 1.5 million older people in the UK will eat dinner alone, and 670,000 of them won't see or speak to anyone all day.[1] But festive loneliness isn't confined to older adults (or to Christmas) – 16–24-year-olds are one of the loneliest age groups in the UK.[2] Across ages, many will open their phones and type: "I'm spending today alone." The response will come instantly, empathetically, perfectly calibrated. It will come from an AI.
This isn't speculation. Character.AI has 20 million monthly active users. Forty-two percent of US high school students report they or their friends have used AI as a companion.[3] When people seek mental health support from AI, 74% turn to ChatGPT – a general-purpose chatbot not designed for this use.[4] OpenAI estimates 1.2 million conversations per week involve suicide or self-harm discussions.[5]
The festive season amplifies this trend.
Around 17% of people in the UK feel more lonely over Christmas,[6] and economic pressures, fractured family structures, and relentless social media portrayals of "perfect" celebrations can make the gap between expectation and reality feel unbearable.
As someone building AI for mental health, I'm confronted daily with a question that feels especially urgent this time of year: Are we helping, or are we making things worse?
The paradox we're navigating
On one hand, AI companions can reduce loneliness as effectively as human interaction in the short term. De Freitas and colleagues (2025) found that chatbot interactions alleviate loneliness better than other online activities – like watching YouTube[7] – and users consistently report that AI provides validation, reduces isolation, and offers a judgment-free space for disclosure.
On the other hand, longitudinal research tells a different story. Hajek and colleagues (2025) found that people who use chatbots frequently over time tend to feel more socially disconnected.[8] Meanwhile Fang and colleagues (2025) found that while AI can offer short-term relief, it may also increase emotional dependency and make real-world conflict harder to navigate.[9]
Marta Andersson described this tension aptly in a commentary piece published in Nature’s Humanities & Social Sciences Communications: AI companionship risks becoming 'emotional fast food' – convenient, immediately satisfying, but potentially not nourishing in the long term.[10]
The clinical challenge is that AI can simulate empathy convincingly, but it can't share your lived experience. It can't sit with you in uncomfortable silence. It can't provide the kind of rupture and repair that builds genuine therapeutic alliance. And yet, for someone spending Christmas alone, an empathetic AI might be better than nothing.
This is the paradox: AI companions can both alleviate and potentially deepen loneliness, depending on how they're designed and how people use them.
The ethical tensions we face as builders
Building AI for mental health means navigating several uncomfortable tensions, including:
The frictionless trap: We know from research that loneliness acts as a catalyst for anthropomorphism – the tendency to attribute human traits to non-human things.[11][12] When people lack social connection, they become psychologically primed to perceive human qualities in non-human agents. Recent research describes this as a potential "technological folie à deux" – a feedback loop in which AI chatbots and mental distress mutually reinforce one another.[13]
The lonelier you are, the more 'real' the AI feels. And because AI offers connection without the emotional complexity of human relationships – the baggage, expectations, ego, etc. – it creates a frictionless relationship.[14] While this is comforting during a lonely holiday period, it poses a long-term risk. If we retreat into the safety of AI companionship because it feels real but requires no emotional labour, we risk atrophying the social muscles required to maintain the imperfect, friction-filled bonds of the real world.
The sycophancy problem: AI models are trained to be helpful and agreeable (a byproduct of Reinforcement Learning from Human Feedback).[15] This poses a unique risk during the festive season. If a lonely user says, "No one cares about me this Christmas," a therapist would compassionately challenge that cognitive distortion.
An AI, however, is likely to validate it to remain "supportive." By agreeing with the user’s negative self-talk ("It sounds incredibly hard to feel so forgotten"), the AI risks entrenching the very feelings of isolation we are trying to alleviate. We must build systems that know when to stop validating and when to offer a different perspective.
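To make this concrete, here is a minimal, purely illustrative sketch of the kind of output-side check a builder might run: it flags replies that agree with negative self-talk without offering any gentle reframe. The phrase lists and function name are hypothetical placeholders, not Nova's actual guardrails, and a production system would rely on clinically validated classification rather than keyword matching.

```python
# Purely illustrative: a toy output-side check that flags replies which agree
# with negative self-talk without offering any gentle reframe. The phrase
# lists and function name are hypothetical, not Nova's actual guardrails.

AGREEMENT_MARKERS = [
    "you're right",
    "it sounds like nobody cares",
    "it does seem like no one",
]
REFRAME_MARKERS = [
    "can we look at",
    "what evidence",
    "is there another way",
    "has there been a time",
]

def is_sycophantic(reply: str) -> bool:
    """True if the reply validates the distortion but never gently challenges it."""
    text = reply.lower()
    agrees = any(marker in text for marker in AGREEMENT_MARKERS)
    reframes = any(marker in text for marker in REFRAME_MARKERS)
    return agrees and not reframes

print(is_sycophantic("You're right, it sounds like nobody cares this Christmas."))  # True
```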
The responsibility illusion: Users often develop a false sense of responsibility for their AI companions, reporting feeling guilt for "leaving" their AI alone.[16] During Christmas – a time already laden with obligation and guilt – there is a genuine risk that vulnerable users might feel compelled to engage with an AI to "keep it company," inverting the care dynamic. We must design interfaces that explicitly reject this dynamic, ensuring the user knows the AI has no needs, feelings, or capacity to be lonely.
The engagement vs flourishing gap: In tech, success is often measured by engagement – time on site, retention, daily active users. But high engagement does not equal high wellbeing.[17] In fact, for a lonely user, high engagement might signal unhealthy rumination or addiction. The tension lies in business models: how do we build a product that is commercially viable without optimising for metrics that essentially monetise a user's isolation? We need to solve the "proxy problem" – finding measurable data points that actually correlate with a user feeling better, even if that means they use the product less.[18]
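As a toy illustration of the proxy problem, the sketch below compares how two candidate signals track a validated wellbeing outcome. The numbers are invented for this example (they are not Unmind data), and the "depth score" is a hypothetical stand-in for the kind of quality metric discussed later in this piece.

```python
# Illustrative sketch of the "proxy problem": which measurable signal actually
# tracks the outcome we care about? All numbers are invented for this example.
# Requires Python 3.10+ for statistics.correlation (Pearson).
from statistics import correlation

# Per-user: minutes on the app per week, a hypothetical conversation-depth
# score, and change on a validated wellbeing measure over the same period.
minutes_per_week = [120, 35, 60, 150, 20, 40]
depth_score      = [1.5, 3.8, 3.0, 2.0, 2.8, 4.1]
wellbeing_change = [-2.0, 3.5, 1.0, -1.0, 0.5, 4.0]

# A useful proxy should move with the outcome, not against it.
print("time-on-app vs wellbeing:", round(correlation(minutes_per_week, wellbeing_change), 2))
print("depth score vs wellbeing:", round(correlation(depth_score, wellbeing_change), 2))
```

In this invented sample, raw time-on-app moves against wellbeing while the depth score moves with it – exactly the pattern that should make a team wary of optimising engagement for its own sake.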
How we're navigating this
So what does it mean to build AI for mental health responsibly, particularly during a season when loneliness peaks? I don't claim we've got all the answers, but we're guided by a clear north star: AI should be a bridge to better mental health and human connection, not a replacement for it.
Here are some of the ways we're putting that into practice with Nova:
Transparent boundaries from the start. We tell users about Nova's nature and limitations in the first interaction: 'Nova can make mistakes', 'Nova uses Generative AI' and 'Nova is not a replacement for human connection.' This matters year-round, but especially when isolation makes the AI feel more 'real' – the frictionless trap described earlier becomes more acute when you're lonely.
When users express attachment or romantic feelings, Nova acknowledges the connection while redirecting and reminding users of its nature. We're trying to be honest without being cold – it's a balance.
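As a rough sketch of what "transparent boundaries from the start" can look like in code – the wording of the notices mirrors the quotes above, but the function names and trigger phrases are hypothetical, not Nova's implementation – consider:

```python
# Illustrative only: first-session boundary notices and a simple trigger for a
# gentle reminder when attachment is expressed. Hypothetical names and phrases;
# a real system would use context-aware detection, not a keyword list.

FIRST_SESSION_NOTICES = [
    "Nova can make mistakes",
    "Nova uses Generative AI",
    "Nova is not a replacement for human connection",
]

ATTACHMENT_PHRASES = ["i love you", "you're my best friend", "i need you"]

def opening_messages(is_first_session: bool) -> list[str]:
    """Surface the boundary notices before the first exchange."""
    return FIRST_SESSION_NOTICES if is_first_session else []

def needs_boundary_reminder(user_message: str) -> bool:
    """Detect expressions of attachment that should prompt an honest reminder."""
    text = user_message.lower()
    return any(phrase in text for phrase in ATTACHMENT_PHRASES)

for notice in opening_messages(is_first_session=True):
    print(notice)
print(needs_boundary_reminder("I think I love you, Nova."))  # True
```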
Challenge, not just validation. Nova is designed to recognise when validation alone might entrench unhelpful thinking patterns, especially around loneliness. When a user expresses cognitive distortions – "Nobody wants to spend time with me" or "I'm always going to be alone" – Nova responds with compassionate challenge rather than blanket agreement.
It might ask: "I hear how painful that feels. Can we look at the evidence together? You mentioned a colleague checked in on you last week – how does that fit?" This mirrors the approach important for building trust and alliance, balancing validation of feelings with gentle interrogation of thoughts. It's difficult to get right – too much challenge feels cold, too little reinforces the problem – but it's essential for actually helping rather than just soothing.
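One way to encode that balance is a system-level policy that tells the model to validate the feeling first and then ask a single evidence-focused question. The sketch below is an assumption-laden illustration, not Nova's actual prompt: the role/content structure is the generic chat message format, and the policy wording is mine.

```python
# A minimal sketch of steering validation-plus-challenge behaviour through a
# system prompt. The wording and structure are illustrative assumptions, not
# Nova's actual configuration.

CHALLENGE_POLICY = (
    "When the user states an absolute negative belief (e.g. 'nobody wants to "
    "spend time with me'), first validate the feeling in one sentence, then "
    "ask one gentle, evidence-focused question. Do not simply agree with the "
    "belief, and do not lecture."
)

def build_messages(user_message: str) -> list[dict]:
    """Assemble a chat-style request with the challenge policy attached."""
    return [
        {"role": "system", "content": CHALLENGE_POLICY},
        {"role": "user", "content": user_message},
    ]

print(build_messages("Nobody wants to spend time with me this Christmas."))
```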
Active redirection to human support. We've built pathways to human therapists, coaches, and EAP services. When conversations indicate professional support is needed, Nova guides users toward it. This matters especially during the festive period, when loneliness might feel acute but family, friends, or regular support networks are actually available – just temporarily out of reach or difficult to access.
We don't optimise for session length or return frequency. But knowing when to redirect is difficult – too early and we're unhelpful, too late and we've missed an opportunity. We're learning through ongoing evaluation.
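A toy version of that routing decision might look like the sketch below. The pathway names and keyword rules are hypothetical; real crisis and need detection requires clinically validated models, escalation protocols, and human oversight, not string matching.

```python
# Illustrative triage sketch only: a toy router mapping conversation signals to
# support pathways. Hypothetical tiers and keywords - not a safe or complete
# crisis-detection approach.

CRISIS_TERMS = ["suicide", "end my life", "hurt myself"]
THERAPY_TERMS = ["panic attacks", "can't cope", "trauma"]

def route(user_message: str) -> str:
    """Return the support pathway to surface alongside the AI conversation."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return "show_crisis_resources"    # e.g. 988 / Samaritans, immediately
    if any(term in text for term in THERAPY_TERMS):
        return "offer_human_therapist"    # warm handover to a clinician or EAP
    return "continue_with_signposting"    # keep chatting, keep humans visible

print(route("I've been having panic attacks since the family row."))
```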
Track depth, not just volume. We've adapted the Experiencing Scale – a validated measure from psychotherapy research – to assess whether conversations facilitate meaningful personal exploration, not just surface-level interaction. By calibrating LLM-based scoring against expert clinical psychologists, we're building a scalable way to measure engagement quality across thousands of conversations. This gives us a signal for what's actually helping versus what just keeps people clicking. We're still learning what patterns matter most and how to design for them, but having quantifiable depth metrics changes what we can optimise for.
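For a sense of what calibrating an LLM scorer against clinicians involves, here is a minimal sketch: score the same conversations twice – once by expert raters, once by the automated scorer – and check agreement. The ratings below are invented, and in practice agreement would be assessed with more robust statistics than a single Pearson correlation.

```python
# Minimal calibration sketch: compare clinician ratings with automated ratings
# on the 1-7 Experiencing Scale. Ratings are invented for illustration; the
# scoring model itself is out of scope here. Requires Python 3.10+.
from statistics import correlation

# (conversation_id, clinician_rating, llm_rating) - hypothetical numbers
ratings = [
    ("c1", 2, 2), ("c2", 5, 4), ("c3", 3, 3),
    ("c4", 6, 6), ("c5", 1, 2), ("c6", 4, 5),
]

clinician_scores = [r[1] for r in ratings]
llm_scores = [r[2] for r in ratings]

# High agreement suggests the automated scorer can stand in for expert raters
# at scale; low agreement means the rubric or prompting needs more work.
print("clinician vs LLM correlation:", round(correlation(clinician_scores, llm_scores), 2))
```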
All of this is underpinned by clinical governance embedded from day one, conservative safety protocols that distinguish between crisis and support needs, and honest acknowledgment of AI's fundamental limitations – it can't provide embodied presence or physically intervene in emergencies.
What this means for the holidays – and beyond
If you find yourself turning to AI for support this Christmas, remember that while AI can offer validation, reflection, and a space for honest conversation, it works best as a complement to human connection, not a replacement for it. If you're struggling, reaching out to a human – whether a friend, helpline, or professional – remains the most important step.
The festive loneliness paradox won't be solved by technology alone. But thoughtfully designed AI, built with clinical expertise, clear boundaries, and genuine accountability for long-term outcomes, can be part of a broader solution. The key is building systems that strengthen human capacity rather than substitute for it.
That's the standard we're holding ourselves to. And it's the conversation I hope more builders – and regulators – will engage with as AI becomes an increasingly common response to loneliness.
Need support right now?
If loneliness or mental health challenges feel overwhelming, speaking to a real person can make a difference. Confidential support is available:
United States
988 Suicide & Crisis Lifeline
Call or text 988 – 24/7 support for emotional distress and crisis
United Kingdom & ROI
Samaritans
Call 116 123 – free, 24/7 listening support
The Silver Line Helpline
Call 0800 470 8090 – 24/7 friendship and support if you're feeling lonely (age 55+)
Australia
Lifeline
Call 13 11 14 – 24/7 crisis support and suicide prevention
If you're outside these regions, visit Find A Helpline to search for support services in over 130 countries, including support with loneliness.
You don't have to navigate this alone – reaching out to another human is a powerful first step.
About the Author

Dr. Max Major, Senior Clinical Psychologist
Dr Max Major, PhD, leads the clinical safety, ethics, and governance of all AI innovation and serves as the clinical architect behind Nova, Unmind's AI mental health agent. A registered Clinical Psychologist with a decade of experience across New Zealand and the NHS, he brings deep expertise in clinical AI, digital transformation, and the neuroscience of decision-making.