“Made with AI”… Meta will identify content generated by artificial intelligence on its social networks

Such content should be easier to recognize, thanks to the label “Made with AI”. The American giant Meta will very soon begin identifying audio, images and videos generated by artificial intelligence (AI) on its social networks, according to a blog post published Friday. “We plan to start labeling content in May 2024,” said Monika Bickert, vice president in charge of content policies at the parent company of Facebook, Instagram and Threads. The label will be applied to “a greater number of video, audio and image content” than previously.

This content will be flagged by the platform if it detects “industry-standard AI image indicators” or if “people indicate that they are uploading AI-generated content,” she pointed out.
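By way of illustration only (the article does not describe Meta’s actual detection pipeline), such “industry-standard indicators” generally take the form of provenance metadata embedded in the file, for example the IPTC digital source type value “trainedAlgorithmicMedia” that some AI image generators write into XMP metadata. A minimal sketch of what checking for that marker could look like, assuming the generator embedded the standard IPTC value and using a hypothetical file name:

```python
# Illustrative sketch only: looks for the IPTC "digital source type" URI that
# some generators embed in image metadata to mark AI-generated content.
# This is an assumption-based example, not Meta's detection method.
from pathlib import Path

# IPTC NewsCodes URI commonly used to mark media produced by a generative model.
AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"


def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded metadata contains the
    'trainedAlgorithmicMedia' marker (a naive byte-level check)."""
    data = Path(image_path).read_bytes()
    return AI_MARKER in data


if __name__ == "__main__":
    # "example.jpg" is a hypothetical local file used for illustration.
    print(looks_ai_generated("example.jpg"))
```

A production system would of course parse the metadata properly and combine several signals (C2PA credentials, invisible watermarks, user disclosure); this sketch only shows the simplest form of the idea.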

“Transparency and more context”

More broadly, the Californian group announced that it will change the way it handles content modified by AI, after consulting its Oversight Board. It now believes that “transparency and more context are now the best way to deal with manipulated content”, “in order to avoid the risk of unnecessarily restricting freedom of expression”. In practice, this means adding “labels and context” to such content, rather than removing it as has been done until now.

Meta nevertheless clarified that it would continue to remove from its platforms any content, whether created by a human or an AI, that violates its rules “against interference in the electoral process, intimidation, harassment, violence (…) or any other policy included in our community standards”. It also relies on its network of “around 100 independent fact-checkers” to identify “false or misleading” AI-generated content.

“Disinformation or misinformation”

The parent company of Facebook announced in February that it intended to label all AI-generated images, a decision taken amid the fight against disinformation. Other tech giants such as Microsoft, Google and OpenAI have made similar commitments.

The rise of generative AI has raised fears that people could use these tools to sow political chaos, notably through disinformation or misinformation, in the run-up to several major elections this year, particularly in the United States.

Beyond these elections, the development of generative AI programs has been accompanied by a flood of degrading content, according to many experts and regulators, such as fake pornographic images (“deepfakes”) of famous women, a phenomenon that also targets private individuals.
