Questions and Answers: Artificial Intelligence: What has the EU agreed on?


For some, artificial intelligence holds the promise of the future; for others, it poses a grave danger. In a marathon meeting, the EU has now agreed on rules. What does that mean?

After tough negotiations, the EU has agreed on stricter rules for artificial intelligence (AI). They are the world’s first rules for AI, the European Parliament and the EU member states announced on Friday evening in Brussels. The most important questions and answers:

What is AI and how does it work?

Artificial intelligence (AI) is the attempt to transfer human learning and thinking to computers. The goal is to enable machines to complete complex tasks that normally require human intelligence. Despite all the progress, general problem-solving machines (artificial general intelligence) are not yet in sight. However, more narrowly defined AI applications are already widespread today: these include automatic translation, personalized recommendations for online shopping and facial recognition on cell phones, as well as smart thermostats and navigation systems. Generative AI applications such as the chatbot ChatGPT also fall into this narrower category.

Why is a law needed for this?

AI is considered a technology of the future. Experts expect it could affect practically every sector of the economy as well as everyday life, and that the labor market, for example, will change massively as a result: some jobs will change, others may disappear entirely. But AI is also considered a technology that carries risks. The head of ChatGPT maker OpenAI, Sam Altman, for example, warned of false information spread with the help of artificial intelligence and therefore came out in favor of regulation. Photos and videos can easily be manipulated with AI. Another problem is that AI systems are sometimes trained on biased data sets, which can lead to discrimination against people. Its use in warfare is also considered possible.

What has the EU now agreed on?

The rules now presented impose obligations on AI systems based on their potential risks and impact. Systems with significant potential for harm, for example to health, democracy, the environment or security, are classified as particularly risky.

Certain applications will be banned outright, such as biometric categorization systems that use sensitive characteristics such as sexual orientation or religious beliefs. The untargeted scraping of images from the internet or from surveillance footage to build facial recognition databases will also not be permitted. However, there will be exceptions for real-time biometric identification in public spaces, for example when there is a risk of a terrorist attack or a targeted search for victims of human trafficking. This point was hotly debated; the European Parliament had originally wanted a complete ban.

Another point of contention was the regulation of so-called foundation models. These are very powerful AI models trained on broad sets of data that can serve as the basis for many other applications; GPT is one example. Germany, France and Italy had previously called for regulating only specific applications of AI, not the underlying technology itself. The negotiators have now agreed on certain transparency requirements for these models.

What are the reactions?

EU Commission President Ursula von der Leyen welcomed the agreement and described the law as a “global first”. Svenja Hahn of the FDP drew a mixed conclusion: “In 38 hours of negotiations over three days, we were able to prevent massive over-regulation of AI innovation and anchor constitutional principles for the use of AI in law enforcement. I would have liked to see more openness to innovation and even stronger commitments to civil rights,” she said. The CDU’s legal policy spokesman, Axel Voss, said he was not convinced that this was the right way to make Europe competitive in the field of AI. “Innovation will still take place elsewhere. We as the European Union have missed our chance here.”

The European consumer organization BEUC criticized the EU for relying too heavily on companies’ goodwill to regulate themselves. “For example, virtual assistants or AI-controlled toys are not adequately regulated because they are not considered high-risk systems. Systems like ChatGPT or Bard will also not get the necessary guardrails for consumers to be able to trust them,” it said.

What happens next?

First, the EU member states and the European Parliament must formally approve the deal, but that is considered a formality. The law is then set to apply two years after it comes into force.

Sources: European Parliament information on AI; EU Parliament press release

dpa
