AI Act: How long will the leash be for ChatGPT and Co.?

As of: December 6, 2023, 6:30 a.m.

Artificial intelligence can write essays or help diagnose diseases, but it can also fuel disinformation and discrimination. The EU therefore wants to regulate AI by law.

The hype started about a year ago with this sentence: “We trained a model called ChatGPT that interacts in a conversational way.” With this dry understatement, the company OpenAI presented its chatbot on its own website. What followed were cheers, horror scenarios and much in between. The great excitement surrounding ChatGPT has since subsided, and at the same time AI applications are becoming more and more a part of everyday life.

This can be seen at “Robotspaceship”, a Mainz-based consulting agency that also produces digital content for its clients. “We use ChatGPT to develop concepts for podcasts, or image generators to create graphic designs,” says owner Oliver Kemmann. AI tools can speed up many processes, including creative ones, the innovation consultant says.

But not least because of errors and biases in the data the AI was trained on, Kemmann also says: “We need regulation. But it has to be done right, we must not over-regulate.”

Regulation at EU level

The EU Parliament, the Commission and the member states have been trying to strike this balance for months. At issue is the “AI Act”, the regulation of artificial intelligence at the European level. The last day of negotiations (for now) is scheduled for Wednesday.

At its core, the AI Act aims to regulate not the technology itself but the application of AI, with applications assigned to different risk levels. “Social scoring” through artificial intelligence, for example, i.e. the evaluation of citizens based on their social behavior, is to be banned.

“High-risk AI systems” include those used in healthcare or recruiting, for instance AI that helps diagnose diseases or screens out applicants. Applications that pose only a limited risk are to face transparency obligations, and those with minimal risk no further obligations.

“AI must not discriminate against people”

This risk-based approach makes sense in principle, says Sandra Wachter, professor of technology and regulation at the University of Oxford, at least for AI systems that are used to support decision-making. In areas such as migration, criminal law and insurance, it must be ensured that AI does not discriminate against people.

In some areas, however, the “inexplicability” of AI decisions is less dramatic. “If the Netflix algorithm suggests a film to me, do I really need to understand in detail why it did that?” asks Wachter. Generative AI like ChatGPT, on the other hand, should not be a “black box”. And it was precisely on this point that a dispute recently broke out between the EU institutions.

The question here is which rules should apply to so-called foundation models. These are AI models that are trained on vast amounts of data and can be used for many different purposes: a kind of “all-purpose AI”. The most prominent example is the GPT-4 model, which was “fed” with huge quantities of text from the internet. The current version of the ChatGPT bot is based on it.

EU Parliament wants risk minimization

In the AI Act, the EU Parliament planned to introduce separate rules for foundation models, requiring among other things that their providers minimize risks to health, fundamental rights and democracy. During the EU negotiations, however, Germany, France and Italy – probably not least under pressure from domestic AI companies – insisted that no additional rules should apply to foundation models. Instead, the three countries propose a voluntary commitment: AI developers would have to explain, among other things, the functionality and capabilities of their model in a kind of leaflet.

“No unnecessary hurdles”

The German digital industry is in favor of such a voluntary commitment. “Strict and rigid regulation” of foundation models, on the other hand, would be problematic for two reasons, says Ralf Wintergerst, president of the industry association Bitkom.

Firstly, the diverse range of uses makes it impossible for providers of this type of AI to effectively assess and reduce the risks. “Secondly, technical developments are rapid, especially at the model level, so that fixed rules in the AI Act would quickly become outdated,” says Wintergerst. He adds: “Mandatory self-regulation does not mean that there are no rules.” However, the requirements must be practical to implement and dynamically adaptable, without “unnecessary hurdles caused by overly rigid rules”.

But some experts believe that a voluntary commitment is not enough for “all-purpose AI”. “If a technology can be used in such diverse and potentially harmful ways, then by definition it falls into a high-risk category,” says technology researcher Sandra Wachter. Both are needed: transparency and personal responsibility, but also clearly defined boundaries. “It’s a bit like going to the supermarket. If I pick up a can of soup, it’s important that it clearly states what ingredients it contains,” says Wachter, “but there also have to be rules that certain things simply cannot be in the soup.”

Danger of mass disinformation

Wachter sees a particular danger in generative AI models being used to produce disinformation on a massive scale, or to make crimes easier to commit. “Settings are needed so that you can’t quickly figure out how to build a bomb or murder someone without a trace,” says Wachter. So far, OpenAI, the developer of ChatGPT, has sought to prevent such answers on its own initiative.

There is also much discussion about whether AI-generated content should be marked with a digital watermark to prevent deception. “That wouldn’t work,” says innovation consultant Oliver Kemmann; such transparency labels could be removed too easily. Researcher Sandra Wachter sees it differently: “It’s a kind of cat-and-mouse game. But you could also make removing the watermark a criminal offense,” she suggests.

Failure of the legislation is also conceivable

Until the very end, it remains difficult to predict what specific rules the EU will impose on the developers and users of artificial intelligence. Most recently, a tiered approach for foundation models was also under discussion: certain rules applying only to the largest or most powerful among them. And it has been said again and again that the legislation could fail altogether.

In the end, the AI Act could help create the necessary trust in AI systems, says Bitkom president Wintergerst. The next step is to quickly clarify which authorities will oversee compliance with the rules. “Not only bans, but also legal uncertainty can lead to AI no longer being developed in Europe, but elsewhere in the world,” says Wintergerst.

In any case, time for the AI Act is running out, not only because the technology is developing rapidly, but also because the EU Parliament will be re-elected next year.
