World Economic Forum: AI in Davos: Is Europe losing touch?

Artificial intelligence is omnipresent in Davos. But assessments of it differ widely, especially in a super election year.

Last year, ChatGPT was the latest hype at the World Economic Forum – and artificial intelligence (AI) was still very abstract for many. Since then, the tech industry has been in a gold rush mood. The applications seem limitless: from AI-generated music to early detection of breast cancer, forecasting extreme weather, optimizing supply chains and analyzing business reports. The sheer, barely controllable range of possibilities puts worry lines on the foreheads of many a politician. The keyword is fake news, especially in the super election year of 2024. And one thing is noticeable on the podiums: European companies must take care not to lose touch.

Overcoming the doomsday mood?

Google, Microsoft and the Facebook parent company Meta have all made rapid advances in artificial intelligence. Fortunately, the time for panic reactions is over, says Nick Clegg, Meta's President of Global Affairs. “I feel like we’ve wasted a lot of energy over the last year or two speculating whether the world will end next Tuesday and whether robots with glowing red eyes will take over.” Yet just a few weeks ago, the UN High Commissioner for Human Rights, Volker Türk, classified AI as a threat to human dignity.

Seize opportunities – and minimize risks

One thing is certain: AI has the potential to change the world. There are still no global rules guaranteeing responsible use of the technology, and such rules will remain unrealistic for the foreseeable future, says Jürgen Müller, Chief Technology Officer of the German software company SAP. International ideas about transparency and privacy differ too widely: while China relies on facial recognition to monitor its population, the EU wants to restrict exactly that.

A few weeks ago, Brussels agreed on rules for the use of artificial intelligence. Certain applications are to be banned, such as biometric systems that infer sexual orientation or religious beliefs. The untargeted scraping of facial images from the internet or from surveillance footage is also not to be permitted.

Some consider the EU regulations too lax; others warn that they threaten to leave Europe behind technologically. Meta is also skeptical: “It’s still a lot of work in progress,” says Clegg. He would, for example, like to see guidelines for labeling AI-generated images – a kind of mandatory watermark that Instagram and Facebook could use to identify manipulated photos.

Danger in a super election year

In its risk survey, the World Economic Forum has classified AI as one of the greatest threats of the coming years. The main concern is false information in a super election year with votes in the USA, Great Britain and India. With artificial intelligence, fake material could reach huge numbers of voters in no time, warns Carolina Klint of the consulting firm Marsh McLennan.

The German federal government has already had a foretaste: in November, a manipulated video of Olaf Scholz circulated in which the Chancellor appeared to announce that the government was seeking to ban the AfD.

Meta manager Clegg believes many warnings are exaggerated. But even the group’s leading AI scientist, Yann LeCun, admits: “Detecting dangerous disinformation is very difficult. We don’t have the ideal technology for it.” Despite all the doom and gloom, he notes, one must remember that if AI is used for cyberattacks, the same technology can be used to detect such attacks and eliminate vulnerabilities.

What AI can do – and what it can’t (yet)

Artificial intelligence can do more than just write texts and gather information. Microsoft boss Satya Nadella reports on a material designed with the help of software that can be used to reduce the lithium content of batteries. Google has developed an AI to identify gene mutations. SAP uses the technology to coordinate supply chains and to record receipts. According to LeCun, Meta now detects 95 percent of all hate posts on Facebook and Instagram – in all languages.

Intel boss Pat Gelsinger expects AI to be available on all platforms and devices in the future. By 2028, 80 percent of all computers could have chips installed that enable the use of artificial intelligence.

But LeCun is also clear about what AI applications cannot yet do: “Contrary to what some claim, we do not yet have a system that would achieve human intelligence.” AI cannot yet remember, think, plan or understand the world. More data and more computing power alone will not change this; as yet unknown scientific breakthroughs are required. “And that won’t happen quickly, but will take years, if not decades.”

AI is still a long way from having the intelligence of a human being – and this must also be taken into account when regulating. “Demanding regulation now out of fear of superhuman intelligence is like demanding regulation of turbojets in 1925,” argues LeCun. “The turbojet had not yet been invented in 1925.”

Where are the big players located?

Google, Microsoft, Meta, Intel and of course ChatGPT – only the Europeans are hardly represented on the AI panels of the World Economic Forum. “The main developments take place in the USA and in China, and then nothing happens for a long time,” admits SAP board member Müller. Germany is often excellent in basic research – but less so when it comes to commercializing technology.

According to a McKinsey study cited in the “Handelsblatt”, the researchers counted 35 larger AI companies in the USA but only three in Europe. The disproportion in investment is just as stark: Europe put 1.7 billion dollars into the future industry last year, the USA 23 billion.

dpa
