Deception via video and audio: Deepfakes as an offensive weapon


Exclusive

As of: 06/25/2022, 3:52 p.m.

They imitate faces and voices in a deceptively real way – deepfakes. According to the Senate Chancellery, Berlin’s Governing Mayor Giffey was the victim of such a fake video call. Security authorities have been warning of this for a long time.

After about half an hour, Franziska Giffey became suspicious. Her conversation with Kyiv’s Mayor Vitali Klitschko on Friday afternoon seemed somehow strange, so Berlin’s governing mayor broke off the video call. Her instincts had not deceived her: although the man in the video looked deceptively real, she had not been speaking to the real Klitschko. It was apparently a fake video, a so-called deepfake.

The term combines the English words “deep learning” and “fake”. It refers to videos or audio recordings that have been manipulated using artificial intelligence. Long-dead personalities such as actors or politicians can be brought back to life virtually, for example – but the forgeries can also be misused: to deceive, manipulate and discredit, and as a means of propaganda and disinformation.

Giffey wasn’t the only victim

Berlin Mayor Giffey was not the only victim of the fake Klitschko. In the past few weeks, similar video calls took place with other European mayors – with José Luis Martinez-Almeida, the mayor of Madrid, for example. He too is said to have suspected the deception and broken off the conversation.

Vienna’s Mayor Michael Ludwig was apparently far less suspicious: he is said to have spoken for almost an hour with the person he believed to be Vitali Klitschko. It is still unclear who is behind the sophisticated deception. The Berlin Senate Chancellery immediately informed the police, and the state security department of the State Criminal Police Office (LKA) is now investigating. The case does not come as a complete surprise, however.

Deepfakes could be used for crime more often

German security authorities have been warning for some time that deepfakes could be used more frequently in the future by criminals and foreign intelligence services – for example to steal money, to conduct industrial espionage or to obtain company secrets. There is already talk of a new level of espionage, a “social engineering 2.0”.

In 2019, a British energy company fell victim to such a deepfake attack. Its managing director received calls from the supposed CEO of the German parent company, asking him to transfer around 225,000 euros to a specific account. The investigation later revealed that the criminals had imitated the caller’s voice using artificial intelligence software.

A deepfake has also appeared in the context of the current war in Ukraine. In mid-March, a video allegedly showing Ukrainian President Volodymyr Zelenskyy announcing a capitulation to Russia circulated on YouTube and eventually on Twitter as well. The recording, however, was amateurish and poorly produced and was quickly exposed as a fake.

Better AI makes it easier for attackers

Within German security authorities there is great concern that deepfakes could soon be deployed in much better quality – including by intelligence services, which could use them for targeted disinformation and manipulation or to obtain confidential information.

It is “highly likely that well-equipped intelligence services or other state actors will use deepfakes as new weapons of attack,” says a current brochure from the Federal Office for the Protection of the Constitution (BfV), which warns against industrial espionage in particular. As AI develops, it will become “easier and easier for attackers to imitate people in sound and video recordings almost perfectly”.

“New Possibilities of Manipulation”

It is no longer just about manipulated videos posted online, but “also about live manipulation during a video call,” the constitutional protection officers warn. Such deepfakes open up “completely new possibilities of manipulation and data skimming” for foreign intelligence services, hackers and industrial spies.

Compared to other forms of espionage, “trust does not first have to be built up with the victim, since the impersonated person presumably already enjoys the victim’s trust.” Vitali Klitschko, for example, has repeatedly spoken with German politicians and journalists in recent months and has had a strong media presence. But how can deepfakes be detected?

Pay attention to facial expressions

The image quality of a video is often crucial, experts say. Video calls should therefore not be taken on a mobile phone if possible, but on larger screens, where anomalies are easier to spot. The Office for the Protection of the Constitution also advises paying attention to “natural reactions such as blinking or frowning”: AI software often cannot imitate such facial expressions convincingly.

“Anyone who is suspicious of their contact person in live videos can also ask them to tap their nose or cheek,” advises the Office for the Protection of the Constitution. “To date, even the best AI is unable to render this movement. The image would become visibly distorted.”

In principle, however, it is important to check the source carefully: through which channels the video call was arranged, and how the contact came about. In the current case of the “fake Klitschko,” Berlin, Madrid and Vienna appear to have acted rather carelessly.

The video calls in the past week are said to have been requested and scheduled via the e-mail address [email protected], as the “Bild-Zeitung” reports. However, this address belongs to an ordinary e-mail provider and not to an official government or administrative body, whose addresses end in @gov.ua. There were apparently no inquiries through other channels either, such as a preliminary call to Kyiv.
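Such a plausibility check of the sender address could even be automated. The following minimal sketch in Python is purely illustrative – the domain list and the sample addresses are assumptions for the example, not the actual data from the case – and flags meeting requests whose sender domain does not belong to an official government domain:

    # Minimal sketch: flag meeting requests whose sender address does not
    # come from an official government domain. The domain list and the
    # sample addresses below are illustrative assumptions, not real data.
    OFFICIAL_DOMAINS = {"gov.ua"}  # official Ukrainian addresses end in @gov.ua

    def is_official_sender(address: str) -> bool:
        """Return True only if the sender's domain is an official one."""
        domain = address.rsplit("@", 1)[-1].lower()
        return any(domain == d or domain.endswith("." + d) for d in OFFICIAL_DOMAINS)

    # Requests from ordinary mail providers would be flagged for a manual
    # check, for example a confirmation call to the office in Kyiv.
    print(is_official_sender("press@kmu.gov.ua"))   # True
    print(is_official_sender("mayor@example.com"))  # False

Such a filter cannot prove authenticity, of course – it only catches the most obvious mismatch, which in this case would have been enough to raise the alarm.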
