ChatGPT has to take a break in Italy

On Friday, the Italian data protection authority GPDP ordered the Californian startup OpenAI to stop its chatbot ChatGPT from processing the data of Italian citizens with immediate effect. According to the authority, neither privacy nor the protection of personal data is guaranteed. In addition, the chatbot lacks an age verification filter, although the company recommends its use only for people aged 13 and over. The agency explained that “the lack of a filter to verify users’ age results in minors receiving responses that are totally inappropriate for their level of development and self-awareness.”

The company now has 20 days to inform the authority of the measures it has taken to comply with Italian and European data protection and youth protection laws. Otherwise it faces a fine of up to 20 million euros or up to four percent of global annual turnover.

Italy is the first Western nation to take action against the new generation of artificial intelligence. On Thursday, the European Consumer Organisation (BEUC) in Brussels had already appealed to the European Union to examine the new chatbots for data protection and other security risks. The EU is preparing an extensive legislative package for dealing with artificial intelligence, but that will take a long time, according to consumer advocates.

Italy’s move comes at a time when warnings about AI and appeals to the startup OpenAI are piling up. The most attention went to an open letter published by the Future of Life Institute, an organization founded in 2014 by scientists and entrepreneurs and dedicated to the safety of AI. In the letter, prominent entrepreneurs such as Tesla and Twitter boss Elon Musk, intellectuals such as best-selling historian Yuval Noah Harari, and AI researchers such as Stuart Russell called for a six-month moratorium during which no more powerful artificial intelligence systems would be trained.

Criticism of artificial intelligence comes at every volume, from the reasonable to the hysterical

The letter is controversial. For one thing, the institute is supported by Elon Musk, who has his own plans for the development of AI. For another, the letter describes the danger of artificial intelligence with a science-fiction undertone, as if the very existence of the human species were at stake. Such exaggeration is especially helpful for companies that want to sell their artificial intelligence applications, because it ascribes capabilities to the technology that it does not yet have and suggests a technical level that the machines still lag behind. This distracts politicians from the real dangers: the automation of jobs, the production of fake news, and the lack of data security. Harari was criticized last week for an opinion article published in the New York Times that painted AI even more drastically as a danger to humanity. Reputable scientists called the text “science fiction”.

But there are also serious voices calling for a pause in the development of artificial intelligence, among them journalists such as the technology expert Casey Newton. He admitted in his newsletter that he himself fell for the AI-generated fake picture of the Pope in a puffer coat; such fakes, he wrote, are reason enough for a pause. Columnist Ezra Klein warned against the business models with which digital corporations, aided by elaborate AI, could manipulate users even more than with the simple AI of social networks and shopping sites.

In Europe, Sarah Spiekermann, head of the Institute for Information Systems & Society at the Vienna University of Economics and Business, has joined the call for a six-month AI pause, but she goes further. Of all the objections, hers is the most concrete so far. She proposes that all AI models be tested according to the ISO/IEC/IEEE 24748-7000 standard, a procedure that both the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) have adopted. Spiekermann led the development of this process. It provides that ethical standards and values are observed in the design and construction of systems. This can mean, for example, that mechanisms are built into an artificial intelligence that guarantee data protection and the protection of minors, as the Italian data protection authority is now demanding.

As of Friday evening, neither OpenAI nor the company’s otherwise Twitter-active bosses, Sam Altman and Greg Brockman, had commented on the threatened penalties from Italy.

