Fake content created by artificial intelligence: a threat to the 2024 election?

A photo of Donald Trump being arrested, a video showing a dark future in the event of President Joe Biden's re-election, an audio recording of an argument between the two men: these social media posts have one thing in common. They are completely fake.

All were created using artificial intelligence (AI), a rapidly growing technology. Experts fear it will cause a deluge of false information during the 2024 presidential election, likely the first in which its use will be widespread.

A temptation for all sides

Both Democrats and Republicans will be tempted to turn to AI, which is cheap, accessible and largely unregulated, to win over voters or produce campaign material at the snap of a finger.

But experts fear the tool could also be used to wreak havoc in a divided country, where some voters still believe the 2020 election was stolen from former President Donald Trump, despite evidence to the contrary.

In March, fake AI-generated images showing him being arrested by police officers went viral, offering a glimpse of what the 2024 campaign could look like. Last month, in response to Joe Biden's candidacy announcement, the Republican Party released a video, also made with AI, predicting a nightmarish future if he were re-elected. The realistic images, though fake, showed China invading Taiwan and financial markets collapsing.

“New tools to fuel hatred”

And earlier this year, an audio recording in which Donald Trump and Joe Biden hurl insults at each other made the rounds on TikTok. It was, of course, fake, and, again, produced using AI.

For Joe Rospars, founder of the digital agency Blue State, ill-intentioned people now have, with this technology, "new tools to fuel hatred" and "to bamboozle the press and the public". Fighting them "will require vigilance from the media, tech companies and voters themselves," he says. Regardless of the intentions of whoever uses it, the effectiveness of AI is undeniable.

When AFP asked ChatGPT to create a political newsletter in favor of Donald Trump, feeding it false claims that he has spread, the chatbot produced, within seconds, a polished text full of falsehoods. And when it was asked to make the text "more aggressive", it regurgitated those false claims in an even more apocalyptic tone.

Distrust of the media does not help

"Right now, the AI is lying a lot," says Dan Woods, a former official on Joe Biden's 2020 campaign, adding that the country "should prepare for a much more intense disinformation campaign than in 2016."

At the same time, this technology can also help campaigns better understand voters, particularly those who vote rarely or not at all, says Vance Reavie, head of Junction AI. Artificial intelligence makes it possible "to understand precisely what interests them and why, and from there we can determine how to engage them and which policies will interest them," he says.

It could also save campaign teams time when writing speeches, tweets or questionnaires for voters. But "a lot of the generated content will be fake," Vance Reavie notes. Many Americans' mistrust of the mainstream media does not help matters.

Even easier to lie

"What is to be feared is that as it becomes easier to manipulate the media, it will be easier to deny reality," said Hany Farid, a professor at the University of California, Berkeley. "If, for example, a candidate says something inappropriate or illegal, they can simply say that the recording is fake. That is particularly dangerous."

Despite the fears, the technology is already at work. Betsy Hoover of Higher Ground Labs told AFP that her company is developing a project to write and evaluate the effectiveness of fundraising emails using AI.

"Those with bad intentions will use all the tools at their disposal to achieve their goal, and AI is no exception," said the former official on Barack Obama's 2012 campaign. "But I don't think that fear should keep us from taking advantage of AI."
