“Digital Afterlife”: The business of virtual survival

As of: 05/09/2023, 12:58 p.m.

Thanks to modern technology, people can live on virtually after their death, for example as an avatar. But does that help the bereaved deal with their grief? What about the ethical issues, and how safe is the data?

A number of online providers now promise to create a digital likeness of a person that will outlive them and later remain as a lasting legacy for their bereaved relatives. With the help of artificial intelligence (AI), these companies create chatbots from the data of a deceased person, for example, which can then write or even speak like that person. Other companies produce virtual avatars, i.e. 3D representations, with which the bereaved can then also interact in virtual space.
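How such a chatbot might work can be sketched only roughly, since none of the companies mentioned discloses its methods. The following minimal Python example is purely an illustration of the general idea: archived messages of a person are used as style examples in the prompt of an off-the-shelf language model, so that its replies imitate that person's way of writing. The model choice, the sample messages and the prompt format are assumptions, not a description of any real product.

```python
# Minimal sketch of a "persona chatbot": condition a generic language model
# on archived messages so its replies imitate the deceased person's style.
# Not the method of any company named in the article.
from transformers import pipeline

# Hypothetical archive of messages written by the deceased person.
archived_messages = [
    "Good morning dear, don't forget your umbrella today.",
    "I made your favourite soup, come by on Sunday.",
    "Don't worry so much, it will all work out.",
]

generator = pipeline("text-generation", model="gpt2")  # placeholder model

def reply_in_persona(user_message: str) -> str:
    # Show the model how the person used to write, then ask for a reply.
    examples = "\n".join(archived_messages)
    prompt = (
        "The following messages were written by Grandma:\n"
        f"{examples}\n"
        f"Someone writes to Grandma: {user_message}\n"
        "Grandma replies:"
    )
    output = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    # Return only the newly generated continuation, not the prompt itself.
    return output[len(prompt):].strip()

print(reply_in_persona("I miss you. How are you?"))
```

The example also makes the article's point tangible: the more archived data is available to put into such a prompt (or into fine-tuning), the more convincingly the imitation works.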

Avatars as conversation partners

You, Only Virtual (YOV) is one such US start-up. In a YouTube video, its founder Justin Harrison advertises: “Everyone should take a moment and think about what it would be like to lose someone you love and care about.” YOV is a communication platform that uses AI to create avatars of loved ones, with which relatives can keep communicating after the person has died.

The more data there is, the more realistically artificial intelligence can imitate the deceased. This can lead to extraordinary situations: in South Korea, a mother had an avatar created of her deceased seven-year-old daughter in order to meet her again and say goodbye. During the virtual encounter, the girl then symbolically died a second time. The mother says this helped her in the grieving process.


Does it make saying goodbye harder?

Skeptics, however, warn that the bereaved could get stuck in a kind of loop, repeatedly contacting the avatar of the deceased and never fully accepting that the person is not coming back.

Jessica Heesen researches the ethics, law and security of digital afterlives at the University of Tübingen and confirms this: “Those we have interviewed so far in our research project say the same.” However, the culture around this could also change. “Just as we go to the grave and talk to the deceased, talk to the gravestone, in the future we might talk to an avatar that even answers in the deceased person's own voice. Everyone has to decide that for themselves.”

Planning virtual survival before death

Meanwhile, a growing number of people, especially in the USA, Asia and Great Britain, are already planning their virtual afterlife while still alive. They have themselves filmed and answer questions about their lives; the AI later uses this material to equip their avatar. One popular application, for example, is having the voice of a deceased grandfather read bedtime stories to the grandchildren.

But attempts are already being made to use AI to produce new answers, that is, to generate statements that are not contained in the original recordings. US companies such as Dadbot and StoryFile are experimenting with this.
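The difference between playing back recorded answers and generating new ones can be sketched as follows. The example below is a simplified illustration, not the implementation of Dadbot or StoryFile: it matches a new question against pre-recorded question-answer pairs and only falls back to a stubbed generative step when no recording is similar enough. The sample data, the similarity threshold and the helper names are invented for the sketch.

```python
# Sketch: answer from pre-recorded Q&A pairs if possible, otherwise
# hand off to a generative model that produces statements never recorded.
# Data, threshold and function names are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

recorded_qa = {
    "Where did you grow up?": "I grew up on a small farm near the coast.",
    "What was your first job?": "I delivered newspapers before school.",
}

questions = list(recorded_qa)
vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(questions)

def generate_new_statement(question: str) -> str:
    # Placeholder for a generative model producing an answer that was
    # never recorded; this is the step the article flags as sensitive.
    return "[generated answer not contained in the original recordings]"

def answer(new_question: str, threshold: float = 0.3) -> str:
    scores = cosine_similarity(
        vectorizer.transform([new_question]), question_matrix
    )[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        # A sufficiently similar question was recorded: play back the original answer.
        return recorded_qa[questions[best]]
    return generate_new_statement(new_question)

print(answer("Where did you spend your childhood?"))
```

The ethically relevant line runs exactly through the fallback branch: as long as only recorded answers are replayed, the deceased only ever “says” what they actually said.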

Avatars can be manipulated

The possibility that the data and avatars could be misused is rarely considered. AI ethicist Heesen sees one problem, for example, in the fact that it has not yet been regulated who may create avatars of the deceased.

In this way, avatars could also be created after a person has died, without their consent. “They can then perhaps present a completely different, falsified picture. Or avatars can be hijacked by people with manipulative intentions, and then the deceased grandfather suddenly says that he sided with the National Socialists. Or he does things we don't want, or spreads untruths about relatives. It's all possible.”

This can lead to very distressing situations for the bereaved, for example when avatars of the deceased are used in pornography. Ethicist Heesen calls for clear legal rules on who is authorized to decide about avatars of the deceased and when an avatar may be switched off. And in the future we may have to state in our wills that no avatar of us may be created.

Labeling requirements as regulation

The problem with chatbots or avatars of the deceased is similar to the one we already face with deepfakes, that is, deceptively realistic manipulated images or video recordings created by an AI. Regulating generative artificial intelligence and deepfakes poses major problems, says Heesen.

A first step would be labeling requirements: “So that we know exactly which images are artificially created and which are not. But of course only those who are willing will label such images. Those acting with bad intentions won't do that.” For this reason, Heesen believes that post-mortem personality rights should also be adapted to these specific problem cases.
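One conceivable technical realization of such a labeling requirement is to embed a provenance note directly in the metadata of generated files. The short sketch below writes such a note into a PNG image's text chunks; this is a generic illustration, not a scheme prescribed by any current regulation, and, as Heesen's objection implies, the label only helps when the creator chooses to set it and leaves it intact.

```python
# Sketch of voluntary labeling: store a provenance note in PNG metadata.
# Generic text chunks only; not an officially mandated labeling scheme,
# and trivially removable by anyone acting in bad faith.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(path_in: str, path_out: str) -> None:
    image = Image.open(path_in)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("provenance", "synthetic avatar content")
    image.save(path_out, pnginfo=metadata)

def read_label(path: str) -> dict:
    # PNG text chunks of the file; empty if it was never labeled.
    return dict(Image.open(path).text)
```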

Data of the deceased commercialized

There are already regulatory proposals for the use of AI at EU level, but they do not explicitly deal with the digital survival of the deceased. Nevertheless, regulation could also help with these applications, because one thing is clear: it is primarily the large data companies that benefit from “digital afterlife” applications. “This is a means of ultimately commercializing the data of the deceased as well,” says Heesen.

And in the future there will be even more data from the deceased. “We all live on the internet to a large extent, and our data is still there even after we die. If you want to make optimal use of it, you do so by reviving this data. And these interaction relationships are then also used to collect data from the living.”
