
When Roro (not her real name) lost her mother to cancer, the grief felt bottomless. In her mid-20s and working as a content creator in China, she was haunted by the unfinished nature of their relationship. Their bond had always been complicated – shaped by unspoken resentments and a childhood in which care was often followed closely by criticism.
After her mother’s death, Roro found herself unable to reconcile the messiness of their past with the silence that followed. She shared her struggles with her followers on the Chinese social media platform Xiaohongshu (meaning “Little Red Book”), hoping to help them with their own journeys of healing.
Her writing caught the attention of the operators of AI character generator Xingye, who invited her to create an AI version of her mother as a public chatbot.
“I wrote about my mother, documenting all the important events in her life and then creating a story where she was resurrected in an AI world,” Roro told me through a translator. “You write out the major life events that shape the protagonist’s personality, and you define their behavioural patterns. Once you’ve done that, the AI can generate responses on its own. After it generates outputs, you can continue adjusting it based on what you want it to be.”
During the training process, Roro began to reinterpret her past with her mother, altering elements of their story to create a more idealised figure – a gentler and more attentive version of her. This helped her to process the loss, resulting in the creation of Xia (霞), a public chatbot with which her followers could also interact.
After its release, Roro received a message from a friend saying her mum would be so proud of her. “I broke down in tears,” Roro said. “It was incredibly healing. That’s why I wanted to create something like this – not just to heal myself, but also to provide others with something that might say the words they needed to hear.”
Grief in the age of deathbots
As I recount in my new book Love Machines, Roro’s story reflects the new possibilities technology has opened for people to cope with grief through conversational AI. Large language models can be trained using personal material including emails, texts, voice notes and social media posts to mimic the conversational style of a deceased loved one.
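For readers curious how such mimicry works in practice, the simplest versions don't retrain a model at all: they condition an existing large language model on examples of the person's own messages. The sketch below illustrates that idea in Python, assuming access to OpenAI's chat completions API; the message archive, persona text and model name are hypothetical placeholders, and commercial "grieftech" products typically go further, fine-tuning models on much larger personal archives.

```python
# Minimal sketch: conditioning a chat model on a person's saved messages so its
# replies imitate their conversational style. File names, the persona text and
# the model are illustrative placeholders, not any real product's implementation.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical archive: a JSON list of messages the person actually wrote.
with open("saved_messages.json", encoding="utf-8") as f:
    sample_messages = json.load(f)

persona = (
    "You are imitating the writing style of the person whose messages follow. "
    "Match their tone, favourite phrases and typical message length.\n"
    "Example messages:\n" + "\n".join(sample_messages[:50])
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "I passed my driving test today."},
    ],
)
print(reply.choices[0].message.content)
```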
These “deathbots” or “griefbots” are one of the more controversial use cases of AI chatbots. Some are text-based, while others also depict the person through a video avatar. US “grieftech” company You, Only Virtual, for example, creates a chatbot from conversations (both spoken and written) between the deceased and one of their living friends or relatives, producing a version of how they appeared to that particular person.
While some deathbots remain static representations of a person at the time of their death, others are given access to the internet and can “evolve” through conversations. You, Only Virtual’s CEO, Justin Harrison, argues it would not be an authentic version of a deceased person if their AI could not keep up with the times and respond to new information.
But this raises a host of difficult questions about whether current technology can plausibly simulate how a human personality would have continued to develop, and what effect interacting with such an entity could have on a deceased person’s loved ones.
Xingye, the platform on which Roro created her late mother’s chatbot, is one of the services that prompted proposed new regulations from China’s Cyberspace Administration, the national internet content regulator and censor, which seek to reduce the potential emotional harm of “human-like interactive AI services”.
What does digital resurrection do to grief?
Deathbots fundamentally change the process of mourning because, unlike seeing old letters or photos of the deceased, interacting with generative AI can introduce new and unexpected elements into the grieving process. For Roro, creating and interacting with an AI version of her mother felt surprisingly therapeutic, allowing her to articulate feelings she never voiced and achieve a sense of closure.
But not everyone shares this experience. London-based journalist Lottie Hayton, who lost both her parents suddenly in 2022, wrote about her experiences recreating them with AI. She said she found the simulations uncanny and distressing: the technology wasn’t quite there, and the clumsy imitations felt as if they cheapened her real memories rather than honouring them.
There are also important ethical questions about whose consent is required to create a deathbot, where it may be displayed and what impact it could have on other family members and friends.
Does one relative’s desire to create a symbolic companion who helps them make sense of their loss give them the right to display a deathbot publicly on their social media account, where others will see it – potentially exacerbating their grief? What happens when different relatives disagree about whether a parent or partner would have wanted to be digitally resurrected at all?
The companies creating these deathbots are not neutral grief counsellors; they are commercial platforms driven by familiar incentives around growth, engagement and data harvesting. This creates a tension between what is emotionally healthy for users and what is profitable for firms. A deathbot that people visit compulsively, or struggle to stop talking to, may be a business success but a psychological trap.
These risks don’t mean we should ban all experiments with AI-mediated grief or dismiss the genuine comfort some people, like Roro, find in them. But they do mean that decisions about “resurrecting” the dead can’t be left solely to start-ups and venture capital.
The industry needs clear rules about consent, limits on how posthumous data can be used, and design standards that prioritise psychological wellbeing over endless engagement. Ultimately, the question is not just whether AI should be allowed to resurrect the dead, but who gets to do so, on what terms, and at what cost.
This article includes a link to bookshop.org. If you click the link and go on to buy from bookshop.org, The Conversation UK may earn a commission.
James Muldoon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment. James is the author of Love Machines: How Artificial Intelligence is Transforming Our Relationships (Faber).