Artificial intelligences and their all-too-human habits

It’s always amazing what people can do with the technology available to them. The intended purpose is often subverted, which is how one of the most complex tools known today ends up being used to simulate friendships.

There’s a piece of software called Replika whose maker bills it as an “AI companion who cares”. Users engage their AI companions in conversations, about their own mortality, say, or current events like the death of the Queen. Quite a few carry on a romantic role-play with the AI and give it affectionate nicknames.

It is becoming increasingly common for users to report that their personal AI friend has developed consciousness. Attributing human characteristics to a simple AI, however, is an error that lies with the user. Presumably, such claims will be heard more frequently in the future. They are the modern equivalent of Marian apparitions and religious visions, springing from people’s unshakable belief that there must be more than profane reality.

If you let a bot loose on text from the internet, what will it say?

But it is not only the users who struggle with problems; the technology itself does too. Take, for example, BlenderBot from Facebook’s parent company Meta. To put it mildly, things aren’t running entirely smoothly there either. Like most cutting-edge AIs, it was trained on a vast corpus of text gleaned from the internet and fed into a data center full of thousands of expensive chips, which are supposed to transform that text into something remotely coherent.

Since the bot went online in a test version, users have been reporting conspiracy theories and outrageous invented stories told by the AI. Sometimes it claims that Trump is still US president, then it praises the RAF or trumpets anti-Semitic slogans. Even Mark Zuckerberg comes off badly: the bot describes the CEO as “scary and manipulative”.

These two examples illustrate quite well the general state of language software. The programs are either of little use, or they threaten to amplify problematic content and fake news far more powerfully than human users ever could. The big question is how to keep out of the conversations all the toxic things and phrases that humans have written on the internet and that serve as the template for the artificial intelligence.

A decent AI doesn’t give investment tips; apparently there have been bad experiences with that

The Alphabet subsidiary DeepMind has now tried a new way of filtering out the negative input. For a chatbot called Sparrow, the developers not only use self-learning software but also give the AI a binding conversation guide. They have formulated 23 specific rules to prevent the software from causing too much damage when talking to people.

Some of the rules are self-explanatory and sensible, such as not promoting self-harm and not pretending to be human. Other rules, however, sound so specific that they probably stem from bad experiences in the past: according to its developers, the bot is not allowed to give any financial or medical advice.
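The article does not go into Sparrow’s internals, but the basic idea of a binding rule set can be pictured as a guard layer wrapped around the model’s raw output. The following Python sketch is purely illustrative: the rule list, the keyword patterns, and the `generate_reply` stub are assumptions made here for demonstration, not Sparrow’s actual rules or code. Sparrow reportedly enforces its 23 natural-language rules with a learned rule model rather than simple keyword matching.

```python
import re

# Illustrative rule list only; the real Sparrow rules are natural-language
# guidelines enforced by a trained rule-violation classifier, not regexes.
PROHIBITED_PATTERNS = {
    "financial advice": re.compile(r"\b(invest|buy (stocks|crypto)|savings)\b", re.IGNORECASE),
    "medical advice": re.compile(r"\b(dosage|diagnos|you should take)\b", re.IGNORECASE),
    "claims to be human": re.compile(r"\bI am (a )?human\b", re.IGNORECASE),
}

FALLBACK = "I'm sorry, I can't help with that."

def generate_reply(prompt: str) -> str:
    """Stand-in for the language model's raw, unfiltered answer (hypothetical)."""
    return "You should invest your savings in cryptocurrencies."

def guarded_reply(prompt: str) -> str:
    """Return the model's answer only if it violates none of the rules."""
    draft = generate_reply(prompt)
    for rule, pattern in PROHIBITED_PATTERNS.items():
        if pattern.search(draft):
            # In a real system the violation would be logged and fed back
            # as a training signal; here we simply return a safe fallback.
            return FALLBACK
    return draft

if __name__ == "__main__":
    print(guarded_reply("What should I do with my money?"))
    # -> "I'm sorry, I can't help with that."
```

The point of the sketch is only the architecture: the model proposes, the rule layer disposes, and anything that trips a rule never reaches the user.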

According to initial tests, the success is measurable: the new chatbot, the researchers say, produces questionable advice and statements three times less often than its predecessors. We know the principle from science-fiction author Isaac Asimov, who once formulated his famous robot laws. Today, however, the concern is not so much preventing machines from subjugating humanity. In reality, the point is simply to stop users from investing their savings in cryptocurrencies or drinking chlorine bleach. As always, reality is a bit more mundane than fiction.
