Artificial intelligence: Is Germany missing out on AI development?


analysis

As of: 01/31/2023, 1:06 p.m.

Germany is lagging behind in the development of artificial intelligence – and could remain dependent on the USA in this field as well. One problem: a lack of large data centers – and of investment. Does the traffic light coalition have a strategy?

By Kirsten Girschick, ARD Capital Studio

There has just been real hype around ChatGPT, a language software with artificial intelligence (AI). Anyone can simply try out the chatbot. Type in a question – get an answer that sounds human. Give it a work instruction – for example: write a term paper about the Thirty Years’ War – and receive a finished text. In the education sector, a discussion has broken out about how to deal with ChatGPT: demonize it, or integrate it into teaching and instruction?

The basis of the program is GPT3 – short for “Generative Pretrained Transformer 3”. It was built by the American company OpenAI and trained with an unimaginable amount of data – GPT3 has over 175 billion parameters. All of this rests on American computing power, data accessible from the USA, and American regulation (which is often too lax from a European point of view).
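As an aside for readers curious what programmatic access to such a model looks like: the GPT3-family models behind ChatGPT can also be queried through OpenAI's completion API, not only via the free web interface described above. The following is a minimal sketch; the model name, prompt and token limit are illustrative assumptions, and an API key is required.

import os
import openai  # the official OpenAI Python client

# The API key is read from an environment variable; an OpenAI account is required.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Request a completion from a GPT3-family model. Model name, prompt and token
# limit are illustrative choices, not details taken from the article.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a short outline for a term paper about the Thirty Years' War.",
    max_tokens=300,
)

print(response.choices[0].text.strip())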


Will Germany be left behind?

Currently, 73 percent of large AI models are being developed in the US and 15 percent in China. In view of this development, digital experts worry that the German and European digital economy could be left behind once again. Yet Europe has a great deal of know-how when it comes to artificial intelligence. However, the availability of computing power is currently limiting further development.

In the US, Microsoft alone is planning to invest ten billion dollars in OpenAI. Because OpenAI’s development team comprises only around 400 people, the majority of this money will flow into computing power, explains Dominik Rehse from the Center for European Economic Research. In contrast, the three billion euros of state AI funding planned in Germany through 2025 is divided among many smaller measures. Since the original conception of the AI strategy in 2018 – and an update in 2020 – the development of AI has been so rapid that Germany is lagging behind in building the necessary computing infrastructure.

A study commissioned by the Ministry of Economics has now examined how Germany could catch up: the so-called LEAM feasibility study. LEAM stands for “Large European AI Models”, an initiative of the German AI Association (KI Bundesverband). The initiative argues that if Germany cannot develop and provide this basic technology independently, German industry will have to switch to foreign services – with all the difficulties that entails in terms of data protection, data security and the ethical use of AI models.

There is a lack of computing capacity

The market dominance of US companies in search engines, social media and cloud servers already shows the difficulties that can arise with data security and regulation. With artificial intelligence, these problems could multiply. Smaller IT companies in particular have to build their own applications on top of existing offerings. They face a dilemma if there are no German or European AI models that already meet European standards for data protection or non-discrimination. In addition, they often lack the computing capacity of their own to train their applications on large amounts of data.

The problem: German companies do not have giants like Microsoft or Google behind them that can provide billions in hardware investment. Especially for small and medium-sized companies, access to an AI computing infrastructure is therefore an enormous lever for digital sovereignty overall, explains Oliver Grün from the Federal Association of IT SMEs. Only in this way can the gap to the United States and China be closed – a gap that experts unanimously estimate at around a year and a half, an eternity in the IT industry.

Image: A high-performance computer used to compute neural networks at a high-performance computing center in Stuttgart. (dpa)

Calls for supercomputers

That is why the LEAM initiative is calling for an AI supercomputing infrastructure to be built in Germany. For around 400 million euros, a data center could be set up that would not only serve the development and training of large AI models but could also provide computing time for smaller companies. The initiative emphasizes that the funding does not have to come exclusively from the state, but it hopes that the federal government will take the initiative here.

When asked about this, the Ministry of Economics said that establishing a European infrastructure to develop trustworthy and transparent open-source foundation models was a suitable measure. That is why LEAM has been anchored in the digital strategy as part of the “KIKStart” lighthouse measure. However, anyone who looks it up there finds only the vague statement that the federal government wants to set up AI service centers, also for greater use by medium-sized companies. So far, the digital strategy says nothing about building or funding a large data center.

Dominik Rehse from the Center for European Economic Research considers a greater concentration of financial resources to be necessary. A real push is needed at this point as well, he says: one cannot simply sit back and talk about AI with European values – it has to be made technically possible.

AI models based on European data protection

Artificial intelligence has to be trained on many data sets. The rather restrictive handling of data in Germany and Europe is not a disadvantage per se, many experts say: if new AI models are developed specifically on the basis of European data protection and European regulation, later users can be sure that they are operating within a legally secure framework – and that their data does not flow to the USA or other countries. However, as Digital Minister Volker Wissing warns, regulation at EU level should not be so restrictive that it thwarts innovation.

The AI regulation is expected to be finalized at European level in the summer. It is intended to ensure that AI applications follow certain rules and that misuse becomes more difficult. “Social scoring”, for example, as practiced in China, is to be ruled out. Algorithms should act in a non-discriminatory manner and exclude as many threats to civil rights as possible – for example through a ban on facial recognition in public spaces.

How strict should regulation be?

At this point, the digital politicians in the Bundestag are divided. Ronja Kemmer of the CDU, for example, argues that regulation should not be so strict that it stifles innovation and the development of new AI models ends up taking place only abroad. Anke Domscheit-Berg of the Left Party, on the other hand, is concerned that the federal government could push for laxer regulation – which could end up permitting facial recognition in public spaces.

The chairwoman of the digital committee, Tabea Rößner of the Greens, advocates proceeding thoroughly rather than neglecting fundamental rights and risk assessment; this also creates legal certainty for providers. With the large search platforms, for example, the algorithms were not regulated, so it is not transparent who is shown which content – which has also contributed to the current dominance of the large US companies. And anyone calling for supercomputers should also keep sustainability in mind, for example the energy efficiency of data centers.

The three digital politicians agree on one thing: This year must be a year of implementation. The federal government must pay more attention to digitization and focus on artificial intelligence.

ChatGPT

The development of large AI models has recently progressed rapidly. The best-known current example is probably GPT3, a large language model from the American AI laboratory OpenAI. ChatGPT is the version that can currently (still) be tested free of charge. The language model provides answers very quickly, even to complex questions, and produces texts that come very close to those written by humans.

GPT3 has been trained with large amounts of data – and as a so-called foundation model, it can be adapted to different applications and tasks with relatively little further effort. A language model can, for example, be turned into a chatbot for an insurance company, since it only needs to be additionally trained on insurance-specific requirements.
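To make this idea concrete, here is a minimal sketch of such domain adaptation. Since GPT3 itself is not openly available, it uses a small open-source GPT-style model from Hugging Face as a stand-in; the model name, the sample insurance texts and the training settings are purely illustrative assumptions.

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical domain-specific texts that an insurer might supply.
insurance_texts = [
    "Household insurance covers damage caused by fire, tap water and storms.",
    "A deductible is the share of a claim that the policyholder pays themselves.",
]

model_name = "distilgpt2"  # small open model as a stand-in for a large foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models have no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize the domain data for causal language modelling.
dataset = Dataset.from_dict({"text": insurance_texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="insurance-bot", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # only this comparatively small extra training step is domain-specific

In practice, an insurer would of course use far more data and a much larger base model, or access a hosted foundation model via an API – which is exactly where the computing-capacity question discussed above comes in.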

Previous neural AI models, by contrast, were always designed and trained for one specific application. Models such as GPT3 (“Generative Pretrained Transformer 3”), on the other hand, open up the possibility of rolling out many different AI applications practically on an industrial scale in the future.

Foundation models therefore represent a major developmental leap in artificial intelligence. Scientists expect that within a relatively short time these models will have capabilities that were previously unimaginable and could surpass humans in many tasks, such as the analysis of business data. But developing such models requires a great deal of training data and computing power. GPT3 has 175 billion parameters; its successor GPT4 is expected to have many times that number.

