Elections 2024: How ChatGPT is to be protected from misuse in election campaigns

Strategy against manipulation
How OpenAI wants to prevent the misuse of artificial intelligence in the US election campaign

At the end of March 2023, these images of the alleged arrest of ex-US President Donald Trump caused a stir. But none of it ever happened: the images were generated by artificial intelligence.

© Elliot Higgins via Twitter

Artificial intelligence makes it easier than ever before to manipulate elections. OpenAI, the operator of ChatGPT, has now presented a plan to minimize the risk of misuse. The timing is no coincidence.

This year, elections will be held in more countries than ever before: in over 50 nations, people will vote on their political leadership. At the same time, the risk of manipulation has never been higher: thanks to the revolution in generative artificial intelligence, images, texts and even videos designed to make political opponents look bad can be created in an instant. Now OpenAI, one of the companies at the forefront of AI development, has committed to putting a stop to this.

ChatGPT's operator has long been aware of how great the danger is. "I am nervous about the impact AI is going to have on future elections," OpenAI boss Sam Altman admitted last summer on the short-message service X, formerly Twitter. "Personalized 1:1 persuasion, combined with high-quality generated media, is going to be a powerful force," he predicted. Five months later, the company has now announced exactly which measures it intends to take to prevent this.

Fear of manipulation

Protecting elections from manipulation is a task for everyone, the company emphasizes in its blog post: "We want to make sure our technology is not used in a way that could undermine this process." Several approaches have been developed to prevent misuse of ChatGPT, by far the best-known text-generation AI, and of the image generator Dall-E.

ChatGPT may no longer be used to create political campaigns or write lobbying texts. Building chatbots that impersonate political candidates is also prohibited. To be able to incorporate current political information, the company is also working closely with international news publishers. Information on US voting procedures now reaches the AI directly from the non-partisan National Association of Secretaries of State (NASS).

Since several AI-generated images of politicians went viral last year, Dall-E is also set to receive protective measures. OpenAI is working with the cross-industry Coalition for Content Provenance and Authenticity (C2PA) to embed digital watermarks in images created with Dall-E, so that AI fakes can be identified quickly. A so-called provenance classifier is also intended to help recognize images created with Dall-E even without a watermark. All of these measures are to be implemented in the near future.

Overdue measures

It is actually surprising that the AI industry is only now starting to implement such measures. The experience of recent years on social media has shown the extent to which political actors resort to abuse as soon as new technologies permit it. Anyone offering tools that can create new content in no time should have thought much earlier about possible misuse and appropriate countermeasures.

There are already plenty of examples. Last year, AI-generated images of Donald Trump's alleged arrest made the rounds, and the AfD used AI images of supposed refugees to stir up sentiment. After the founding party conference, images of pizza boxes supposedly left behind caused outrage on social media, even after the pictures had been identified as AI-generated.

Global super election year

The measures could hardly have come any later: with the USA, India, Great Britain, Russia and numerous other countries, almost half of the world's population in over 50 nations will be called to the polls this year. The election in the USA between incumbent President Joe Biden and his likely challenger Donald Trump, in particular, is seen as pivotal, and at the same time as being at enormous risk of manipulation.

It remains to be seen how well OpenAI's plans can actually keep abuse in check. Social media companies such as Meta have been fighting manipulation for years with ever-new methods. Despite measurable successes, they have not yet gotten the biggest danger under control: if voters want to believe false reports, even a fact check often won't stop them.

Sources: OpenAI, Axios, AP, X

