
Imagine a therapist could live in your pocket. They’d be on hand for every wobble, every meltdown, every crisis – no matter where or when. They’d be cheap and accessible, so no more worries about finding the money for expensive therapy or lingering on a waiting list for months for NHS treatment. Sounds too good to be true?
Maybe, but few can deny the appeal of AI therapy, which uses tools such as chatbots and digital platforms to provide mental health support, guidance, coping strategies and structured exercises, often mimicking talk therapy.
The growing popularity of AI therapy may be troubling some experts, but it's understandable why so many people are turning to this convenient and cost-effective resource for mental health support.
In the UK, an NHS mental health referral can take 18 weeks or longer. According to 2025 data from the British Medical Association: “Services are not currently resourced to meet the increased demand, resulting in long waits and high thresholds for treatment; latest estimates put the mental health waiting list at one million people.”
It’s perhaps no wonder then that a growing number of young people, in particular, are turning to AI chatbots to help them cope with mental health issues.
But, while AI can prove beneficial for some – often as a supplement to human therapy – it isn’t an effective substitute for a human therapist. And it could even prove dangerous.
Psychotherapy, known as the “talking cure”, uses dialogue to explore thoughts and feelings to help clients understand and address mental health challenges. Psychotherapists are now using AI tools to improve their work in mental health treatment. For example, software such as ChatGPT is being used by therapists to carry out client assessments. They enter details of the client, such as their sex, age, and psychological issues. In response, the chatbot collates the information to create a treatment plan for the therapist to follow.
But, although AI is proving helpful for some therapists, people turning to chatbots for help with mental health crises might find the lack of human supervision and input far less useful.
Lack of humanity
Chatbots can simulate empathy, but don’t understand or feel emotions. Human therapists can provide emotional nuance, intuition and a personal connection, which chatbots currently cannot replicate in a meaningful way. Chatbots also struggle to grasp the complexity of human emotions, particularly when the situation involves deep trauma, cultural context or complex mental health issues.
Chatbots, then, are unsuitable for those with severe mental health issues. The software may provide some support for less severe cases, but it isn’t equipped to deal with severe mental health crises, such as suicidal thoughts or self-harm. Human therapists, however, are trained to recognise and respond to these situations with appropriate interventions.
While chatbots can be programmed to provide some personalised advice, they may not be able to adapt as effectively as a human therapist can. Human therapists tailor their approach to the unique needs and experiences of each person. Chatbots rely on algorithms to interpret user input, but miscommunication can happen due to nuances in language or context. For example, chatbots may struggle to recognise or appropriately respond to cultural differences, which are an important aspect of therapy. A lack of cultural competence in a chatbot could alienate and even harm users from different backgrounds.
So while chatbot therapists can be a helpful supplement to traditional therapy, they are not a complete replacement, especially when it comes to more serious mental health needs. Human psychotherapy provides a supportive, safe space for clients to slow down, reflect, and explore their thoughts and feelings with expert guidance. Human therapists are held accountable through ethical guidelines and professional standards.
Chatbots, however, don’t have accountability structures in place, which may lead to inconsistent or inappropriate advice. Research has also raised concerns about the potential for privacy violations and security risks of sharing sensitive information with chatbot therapists.
Some people might become overly dependent on chatbot therapists, potentially avoiding traditional therapy with human professionals. This could lead to a delay in receiving more comprehensive care when needed, making vulnerable people more isolated rather than easing their suffering.
The talking cure in psychotherapy is a process of fostering human potential for greater self-awareness and personal growth. These apps will never be able to replace the therapeutic relationship developed as part of human psychotherapy. Rather, there’s a risk that these apps could limit users’ connections with other humans, potentially exacerbating the suffering of those with mental health issues – the opposite of what psychotherapy intends to achieve.
Nigel Mulligan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.