OpenAI: ChatGPT invents date of birth, Noyb files complaint

When people cannot answer a question, they say “I don’t know.” Some may try to cobble an answer together instead, but then at least they can be corrected. Not so with ChatGPT: the language model tends to invent supposed facts, and apparently those false claims cannot be corrected. At least that is what OpenAI, the company behind ChatGPT, says. And this intransigence could get the company into serious trouble.

Together with a man from Austria who wishes to remain anonymous, the data protection organization Noyb has lodged a complaint with the Austrian data protection authority. They accuse OpenAI of violating the European General Data Protection Regulation (GDPR). Specifically, the case concerns six digits: the complainant’s date of birth, about which there is no information online. Nevertheless, ChatGPT attempts to answer questions about it and randomly spits out false dates.

OpenAI has stated that it cannot prevent ChatGPT from displaying incorrect birth dates. Its terms of use state: “Given the technical complexity of our models, we may not be able to correct the inaccuracy in every case.” Although it is possible to block personal data, doing so affects all queries about the person concerned. OpenAI argues that because the complainant is a public figure, such blanket blocking would violate its freedom of expression and the public’s right to information.

Chatbots could violate EU law

Noyb sums it up in its complaint as follows: “ChatGPT cannot correct information, cannot selectively block information, and each affected person simply has to live with this situation.” OpenAI appears to take the view that it can spread false information and, unlike media companies, is not liable for it.

“If a system cannot provide accurate and transparent results, it must not be used to create personal data,” says Maartje de Graaf, data protection lawyer at Noyb. OpenAI and other companies are apparently currently unable to bring their chatbots into line with EU law. “The technology must follow legal requirements, not the other way around.”

The GDPR provides for fines of up to four percent of annual global turnover for violations. What may be more threatening for OpenAI: data protection authorities can order companies to disclose and adapt their data processing. The complaint could therefore force OpenAI to let some light into its black box. So far, the company has not revealed what data its models were trained on or how the answers are generated.

Language models with a tendency to hallucinate can cause far more mischief than false birth dates. Last year, an Australian mayor sued OpenAI after ChatGPT falsely claimed he had been sentenced to prison for bribery. He recently withdrew the lawsuit; the current model, GPT-4, has stopped defaming him. Perhaps OpenAI will manage to teach GPT-5 a new answer instead of inventing a birthday: “I have no idea.”
