Italy: Data Protection Authority blocks ChatGPT

Many are impressed by how well the software can imitate human speech. But there are also concerns. Now the application can no longer be used in Italy, mainly because, according to the country's data protection authority, children are not sufficiently protected.

Italy’s data protection authority has temporarily blocked the popular chatbot ChatGPT. It points out that the operator, OpenAI, does not provide sufficient information about how it uses data. There are also no filters to prevent children from being shown “absolutely inappropriate” content.

An investigation has been launched, officials said. As a precaution, the processing of data from users in Italy has also been banned, meaning ChatGPT can no longer be used in the country.

OpenAI now has 20 days to present measures addressing the allegations. Otherwise, it faces a fine of up to 20 million euros or four percent of the company’s worldwide annual turnover.

The data protection authority also points to a recently disclosed data breach, in which some ChatGPT users were shown information from other users’ profiles. According to OpenAI, the problem was caused by a software bug.

Risk of the software “hallucinating facts”

ChatGPT is based on software that has captured massive amounts of text. On this basis, it can formulate sentences that are hardly distinguishable from those of a human being. The program estimates which words are most likely to come next in a sentence. This approach carries the risk of the software “hallucinating facts,” as OpenAI calls it: presenting incorrect information as if it were correct.
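The next-word estimation described above can be illustrated with a deliberately simple sketch. This toy bigram model counts, in a small sample text, which word tends to follow which; it is only a didactic stand-in and bears no resemblance to OpenAI's actual neural-network models, which learn such probabilities from vast corpora.

```python
from collections import Counter, defaultdict

# Toy illustration only: real systems like ChatGPT use large neural
# networks trained on massive text collections, not bigram counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    """Estimate the probability of each possible next word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In the sample text, "the" is followed by "cat" twice and by
# "mat" and "fish" once each.
print(next_word_probabilities("the"))
```

A model like this always produces the statistically most plausible continuation, whether or not it is factually true, which is exactly the mechanism behind "hallucinated" facts.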

The Italian data protection authority also sees a fundamental problem in how ChatGPT was trained: there is no legal basis for the mass collection and storage of personal data to train its algorithms. The agency took similar action against another chatbot, Replika, in February. That case, too, centered on the protection of children.

ChatGPT has been the subject of intense public debate in recent months. Many observers are impressed by how well the software can imitate human speech. At the same time, there are concerns that such artificial-intelligence technology could be misused, for example to spread false information.
