Blenderbot 3: Trump fan, anti-Semite, climate change denier – it was clear

In the film “Her,” a man falls in love with the voice of his computer operating system – until he realizes that the woman, so educated and understanding, serves thousands of other users besides him and has herself fallen in love with another operating system. Today’s artificial intelligence cannot hold conversations as profound as those in the film, but it is getting better at it all the time. Good enough, in fact, that some people are firmly convinced a consciousness is hidden in the bits and bytes. Like the Google employee, since fired, who insisted that the company’s chat program had become aware.

Now another automated chat program, a so-called chatbot named Blenderbot 3, has caused a stir. On Twitter, where word salvos are fired off first and the thinking begins only afterwards, users are upset that the bot voices anti-Semitic attitudes, denies Trump’s responsibility for the storming of the Capitol and denies climate change. They are right about that – but it was to be expected, and its creators expected nothing else.

The makers of the Blenderbot are in a bind. If you really want to know what happens when you unleash your bot on the Americans (it is not available in other countries), then you simply have to do it. The problems only become apparent when the software chats with users in large numbers.

So what had to happen, happened. When software learns from conversations and from the Internet in a country as divided as the USA, it picks up not only useful things but also extreme opinions and the stupid gossip that many people don’t dare to trumpet to the world.

As if only uneducated kids spout nonsense

Meta’s response to the expected fiasco isn’t particularly satisfying, and that is the real problem. Much has been done, we are told, to ensure safety. Registration is only possible for those over 18 – as if only uneducated kids talked nonsense. Meta also points out that the bot may make false or even insulting claims, and users are asked not to tempt the bot into inappropriate answers. Yet the bot exists precisely to see how people try to tempt it into sending hate messages – with the aim of preventing this in the future.

But that certainly will not stop an angry citizen or a troll, paid or simply convinced, from dumping their garbage. What Meta and the other tech companies have to do is create transparency: how exactly do they intend to keep their artificial intelligences from going off the rails? That researchers outside the company are to be given access to the data is a step in the right direction. The public has a right to know about systems that hold such explosive power.

After all, too much damage has already been done where the division of society is concerned, and social networks have played no small part in it. Meta claims the bot is for research purposes only. That is commendable, especially since the research is meant to find out how to prevent artificial intelligence from spreading even more untruths and hatred into the world. But when a company like Meta presents innovations of this scope, caution is always warranted. At Meta, profit has ultimately always triumphed over concerns. And the problem remains that a commercially oriented company is placing itself above truth and morality.

For all the necessary criticism, however, one thing must not be overlooked: that software like the Blenderbot can chat about almost any topic at all is a tremendous achievement – one that took an equally tremendous effort. Only someone with enormous resources can manage it: money, data and excellent, which is to say immensely expensive, staff. That automatically excludes companies that cannot afford it. And that, too, is a problem.

