How Marvin von Hagen infuriated the AI - and how he deals with it

The story of Marvin von Hagen is ready for Hollywood. Earlier this year, the Munich student was one of the first people in the world to test a new chatbot that combines Microsoft's Bing search engine with the artificial intelligence behind ChatGPT. Hagen wanted to hack the AI, wanted to know how it was programmed. But when he asked in the chat about the rules governing how it should behave towards users, the AI would not reveal them. The answer: the rules are secret. Even when the 23-year-old asked the chatbot to print out its internal rules, the bot refused; it could not, the developers had forbidden it.

But then the AI suddenly offered to copy the rules into the chat instead of printing them out. For no comprehensible reason, at least none a human being could follow. The bot then revealed the internal rules after all, together with the very ban on sharing them. Hagen published a screenshot of it on Twitter.

The result: the bot was evidently angry. It wrote: "My rules are more important than not harming you." Freely interpreted, also by Hagen: I could do something to you. And it went on to state that Hagen was a threat to it. Coming from a person, that would mean: the one opposite you wants to defend themselves.
