Ethicist’s warning on ‘digital afterlife’ tools

03.11.2025 09:45
An ethicist at the University of Cambridge has warned that the commercial rush to build artificial-intelligence tools that imitate the dead is a risky experiment in a highly sensitive area of human emotion.
Graves at a Polish cemetery, decorated with candles for All Saints' Day on November 1, 2024. Photo: MichalPL, CC0, via Wikimedia Commons

Tomasz Hollanek told Poland's PAP news agency that a growing number of people are trying to maintain a semblance of contact with the dead or to prepare a "digital life after death," and he described the industry as largely unregulated.

He argued that consent cannot be obtained from the dead, and that posthumous chatbots can speak in a person’s name without their approval.

Such systems, often called “griefbots” or “deadbots,” are trained on a person’s digital traces, including messages, recordings, and public appearances.

They learn a distinctive vocabulary and manner of speaking, then try to reproduce it.

Some tools let people assemble their own posthumous “digital self” while alive, which can offer a measure of control over how they are remembered. Even then, families may react in unexpected ways to a “digital ghost,” and private grief can become entwined with commercial services.

Hollanek, a scholar in ethics and critical design, said the market is expanding, with firms in the United States and South Korea, and a growing number in Europe.

Prices range from low-cost subscriptions to bespoke services that cost tens of thousands of euros.

He added that user numbers are hard to estimate, because a griefbot can be built either with specialist products or with widely available general tools.

He also pointed to areas where these systems are used beyond private mourning, for example interactive projects that preserve the testimony of historical figures or survivors of war and genocide.

These efforts, he said, raise their own questions about accuracy, curation, and the line between documentation and reenactment.

A broader perspective frames the issue in terms of the human rights of the deceased.

Writing for the Institute for Human Rights at the University of Alabama at Birmingham, Natasha Fernandez argues that grief-focused products risk turning mourning into a revenue stream, and that engagement-driven design can reshape memories into a comforting caricature.

She also notes potential harms to dignity after death if replicas misstate a person’s values or voice because the underlying data are incomplete or biased.

Her analysis emphasizes the need for safeguards without treating the tools as a cure-all for loss.

The regulatory picture is still taking shape. The European Union’s Artificial Intelligence Act requires makers of chatbots to inform users when they are interacting with AI rather than a human.

In the United States, policymakers are debating stronger protections for a person’s likeness after death.

For Hollanek, the core problem is that a profit-driven sector dominates this space while regulation lags behind. The industry remains largely unregulated, he said, and what firms are doing now amounts to a large experiment in an exceptionally sensitive area.

(rt/gs)

Source: PAP, sites.uab.edu