Fake image of Pentagon explosion briefly goes viral

An image almost certainly generated by AI, a Twitter account impersonating the Bloomberg news agency thanks to Elon Musk’s flimsy paid verification system, and Wall Street dipping briefly before recovering: welcome to the nightmare of disinformation. A fake image showing an explosion at the Pentagon briefly went viral on Twitter on Monday, sending markets lower for about ten minutes and reigniting the debate around the risks of artificial intelligence.

The fake photograph, apparently made with a generative AI program (capable of producing text and images from a plain-language prompt), forced the US Department of Defense to respond. “We can confirm that this is false information and that the Pentagon was not attacked today,” a spokesperson said. The fire department serving the area where the building is located (Arlington, Virginia, near Washington) also posted on Twitter that no explosion or incident had taken place at the Pentagon or nearby.

Twitter’s responsibility

Luckily, the deception was easy to spot: the building does not look like the Pentagon, and there are numerous visual glitches in the sidewalk, security fencing, and windows. The image was reportedly first posted by the since-suspended account @BloombergFeed, certified with a blue tick under the new paid system launched by Elon Musk.

That system, however, does not verify that an account is actually affiliated with the organization it claims to represent. Many users, including some with hundreds of thousands of followers, fell for it. The false information was even picked up by the Russian channel RT, which later retracted it.

The image appears to have caused markets to dip for a few minutes, with the S&P 500 losing 0.29% from Friday’s close before recovering. “There was a drop related to this false information when the machines detected it,” noted Pat O’Hare of Briefing.com, referring to automated trading software programmed to react to social media posts.

Concerns ahead of the US presidential election

The incident comes after several fake photographs produced with generative AI circulated widely, demonstrating the capabilities of the technology, such as images of the arrest of former US President Donald Trump or of the Pope in a puffer jacket.

Tools like DALL-E 2, Midjourney and Stable Diffusion allow amateurs to create convincing fake images without needing to master editing software like Photoshop.

But while generative AI makes it easier to create false content, the problem of its dissemination and virality – the most dangerous components of disinformation – falls to the platforms, experts regularly point out.

“Users are using these tools to generate content more efficiently than before (…) but it is still spread via social networks,” Sam Altman, the head of OpenAI (maker of DALL-E and ChatGPT), stressed during a congressional hearing in mid-May. The subject is particularly sensitive as the November 2024 US presidential election approaches, a contest likely to keep fact-checkers busy full-time.

