Study: Not enough measures against deepfakes


AI software can generate increasingly realistic photos

© Peter Steffen/dpa

A new study by the Mozilla Foundation warns of significant deficiencies in the labeling of AI-generated images, videos and audio files.

The expert study warns of significant deficits in today's methods of labeling images, videos and audio recordings that were created with the help of artificial intelligence.

Markings that are visible to users can be misleading and hard to spot, and at the same time are relatively easy to remove, the Mozilla Foundation emphasized in a report published on Monday. Technical watermarks that are automatically embedded in AI-generated content are the more robust solution. Here too, however, there is a caveat: the embedded information first has to be read out by software on users' devices and then displayed to them.
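The trade-off the report describes, that a machine-readable watermark is harder to strip than a visible label but stays invisible until software decodes it, can be illustrated with a toy sketch. The following Python example is not any real standard such as C2PA or any product's actual scheme; the functions embed_tag and read_tag are hypothetical names, and the "image" is just a list of pixel bytes. It hides a short provenance tag in the least significant bits of those bytes:

```python
# Toy illustration of an invisible watermark: hide a short provenance
# tag in the least significant bits of an image's pixel bytes.
# Real watermarking systems use far more robust, signed schemes.

def embed_tag(pixels: list[int], tag: bytes) -> list[int]:
    """Write each bit of `tag` (MSB first) into the lowest bit of
    successive pixel bytes, leaving the visible image nearly unchanged."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to carry the tag")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def read_tag(pixels: list[int], length: int) -> bytes:
    """Recover `length` bytes from the lowest bits of the pixel bytes."""
    tag = bytearray()
    for i in range(length):
        byte = 0
        for pixel in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (pixel & 1)
        tag.append(byte)
    return bytes(tag)

# Demo: a fake 256-byte 'image' carrying the tag "AI".
image = list(range(256))
marked = embed_tag(image, b"AI")
assert read_tag(marked, 2) == b"AI"
```

Even this toy shows the report's caveat: the tag is worthless unless the viewer's software knows to look for it, and a naive scheme like this would not survive recompression or cropping, which is why production watermarking systems are considerably more elaborate.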

Protection against malicious use not built in from the start

Labeling requirements for such content often miss their mark, warned Mozilla researcher Ramak Molavi Vasse'i, since so-called deepfakes are usually created precisely in order to mislead. If the industry does not make progress with self-regulation, more pressure will be needed. A fundamental problem, the researcher argued, is that a technology is being put into circulation without measures against malicious use having been designed and built in from the outset. On top of that, several competing labeling procedures coexist instead of a single uniform standard.

AI software can generate increasingly realistic photos from text prompts, and more and more video is now being produced the same way; ChatGPT developer OpenAI recently presented its video software Sora. In Sora's case, the Mozilla researcher criticized the visible marking that indicates an AI origin as small and often hard to spot.

Great concern ahead of elections

Especially this year, with the US presidential election and the European Parliament election, there is great concern that AI-generated fakes could be used to influence the outcomes. In the USA, shortly before the Democratic primary in the state of New Hampshire, automated calls featuring a deceptively realistic imitation of President Joe Biden's voice urged people to stay away from the vote.

Such audio fakes are considered a particular problem because there are fewer ways to detect them through errors or telltale markers. Such recordings have already been used for the so-called grandparent scam, in which criminals try to extract money by posing as relatives.

dpa
