Tech companies are trying to protect the world from harmful AI

Tech companies are promising to protect the world’s election campaigns from harmful artificial intelligence (AI). At the Munich Security Conference this Friday, 20 primarily US companies jointly declared that they would cooperate on the issue. The voluntary “Tech Accord” is a declaration of intent with ideas on how to curb deceptive AI in politics. The focus is on sharing information between companies, detecting and mitigating manipulative AI-generated content, and educating the public.

The signatories include the leading companies that are largely responsible for the rapid development of AI and its applications, among them Adobe, Amazon, Google, IBM, Meta, Snap, Microsoft, TikTok and X. Several younger companies specializing in AI, such as Anthropic and OpenAI, have also joined.

Dana Rao, general counsel of Adobe, sees this breadth as what makes the alliance special: “The entire value chain for content is working together.” That has not happened before. “No one knew about the others,” says the lawyer. That is now set to change.

What it’s about

2024 is a super election year worldwide: several billion people are voting in more than 50 countries. There are presidential elections in the USA and state elections in several German states.

New, so-called generative AI makes it possible to create deceptively real texts, images and, increasingly, videos using simple text commands. There are fears that generative AI will be used to create false images, videos or sound recordings in which politicians can be seen or heard in supposedly compromising situations. For example, the AI could make them appear drunk or corrupt. People could create such content for political reasons or for the sheer joy of chaos.

How AI is used in election campaigns

With the declaration, the companies are also trying to counter the impression that their own technology endangers democracies and the free formation of opinion. After all, it is precisely the technology into whose development these same corporations have invested billions that is under criticism. Just this Thursday, one of the signatories, the Californian company OpenAI, presented software that can transform text commands into astonishingly realistic videos. Without security mechanisms, such technology could plausibly be used to create a video of a politician in a compromising situation that never actually happened.

In the USA, an AI-generated imitation of Joe Biden’s voice recently caused irritation when citizens heard it during automated phone calls. It is still unclear who is behind the forgery. Other politicians are using AI constructively, such as the campaign team of former Pakistani Prime Minister Imran Khan. Because he was in prison, his team used AI to turn notes he had passed to the outside world through his lawyers into speech. This allowed his thoughts to be broadcast in an artificial but fairly real-sounding AI version of his voice.

That’s what the companies promise

The companies not only promise to exchange information with each other in order to better recognize manipulative AI content. They also want to make such content harder to create and to develop technical means of detecting forgeries. For example, monitoring systems on social media platforms should be able to sound an alarm when users upload relevant content, and block it.

Adobe has developed a standard that allows digital watermarks to be embedded in images, video and sound recordings. Meta, Microsoft and Google, among others, have joined the standard. Files carrying it, says Rao, get a small icon. Anyone who clicks on it is taken to a website where the manipulated image is compared with the original. “Then users can form their own opinion,” says Rao. The companies also pledge to respond quickly to campaigns disseminating relevant content and to educate the public about the issue.

“There are so many important elections taking place this year, it is imperative that we do what we can to prevent people from being misled by AI-generated content,” says Nick Clegg, head of global affairs at the Facebook parent company Meta and himself a former British deputy prime minister. The companies emphasize several times that they do not see the fight against manipulative AI as their task alone. Clegg says: “This work is bigger than any one company and will require a huge effort across industry, government and civil society.”
