The AI app Historical Figures simulates chats with historical figures.

Jesus has not been having a good day lately, he says: too many moments of doubt and anger. Shakespeare, on the other hand, is doing fine. He got a lot of work done and still managed to catch up with friends he hadn’t seen in a long time. And Van Gogh? Also great, thanks for asking; the best thing about the afterlife is that you no longer have to worry about the weather and can concentrate entirely on art.

The answers come from the Character.ai website, where users can enter into dialogue with AI versions of great personalities. Alive or dead, everyone from Billie Eilish to Socrates is there. The same concept also exists as an iPhone app: with the Historical Figures program, you can download AI versions of 20,000 famous people from world history onto your phone. Certain dialogue partners, however, have to be paid for before you can speak to them. Would you like to unlock the AI Hitler? That costs 15.99.

All these revenants are based on a language AI. The technology is already useful today, says one of the founders of Character.ai, “for fun, for emotional support, for brainstorming, for all kinds of creativity”. And the developer of the competing app sees his program “as a new way for children to engage with the past”. The App Store even classifies the app as educational.

Of course, that sounds good if you are looking for investors. Meanwhile, however, users post their conversations with Eichmann, Mengele, Pol Pot or Stalin on Twitter. Anyone testing the limits of the software finds the chatbots astonishingly eloquent and placating. On the screenshots in question, the AI versions of these criminals consistently protest their innocence. Automated falsification of history instead of fake news.

It remains to be seen whether the warnings will prevent misuse

A Google publication presenting one of the AI language models noted that the technology’s risks included, in addition to the bias and imprecision of the software itself, the human tendency to “anthropomorphize non-human agents” and to project inflated social expectations onto them, even when people are explicitly aware that they are interacting with an AI. Ironically, two of the study’s authors are now the founders of Character.ai.

The fear was justified. After all, there are already reports of test subjects who believed that the AI models they were dealing with had developed consciousness. What if this happened not in interaction with an unlabeled model, but with an AI posing as Heinrich Himmler?

Moreover, the individual bots are not specifically trained on the writings of the respective historical figures. “I may not be historically accurate, please check the facts,” is the first message the software sends in the app, no matter whom you select. Character.ai likewise displays a note above every chat that “everything the characters say is made up”. It remains to be seen whether this will be enough to prevent misunderstandings or deliberate misuse.

Setting the historical excursion aside: what use do these particular applications of text-generating AIs actually have? What insight do they promise? Are they anything more than a curiosity, a proof of concept? With the arrival and proliferation of these systems, we need a new form of online literacy more than ever, one that must include a basic distrust of any form of text. The world has definitely become stranger.
