EU agreement on AI law: fundamental rights versus innovations


analysis

As of: December 9th, 2023, 3:26 p.m.

The EU has agreed on rules for the use of artificial intelligence. However, not everyone involved believes that the balance between security, innovation and fundamental rights has been achieved.

The law puts the EU at the forefront of AI regulation worldwide. It is the first of its kind. That is why the responsible EU Internal Market Commissioner, Thierry Breton, calls it historic. Commission President Ursula von der Leyen speaks of a legal framework for the development of artificial intelligence that people could “trust”.

And what do the many European parliamentarians say who have been working on this law for over two and a half years – and who have now spent 38 hours over three days poring over the major sticking points in the final round of negotiations? “The bottom line is that it is a good law that responds very well to the challenges of our times and answers the most urgent questions,” says Green MEP Sergey Lagodinsky.

The AI lobby pushed back against rules for generative models

Yet the law must also provide a framework for questions that have not even arisen yet, for a technology that is still in constant flux and developing rapidly.

That’s why a lot of things revolved around the question: Should particularly powerful AI models also have special rules? Yes, is the answer, says Lagodinsky. “The most important thing is that we managed to regulate not only artificial intelligence, but also the particularly strong and particularly advanced form of artificial intelligence, namely generative models. We achieved this despite the pressure from the lobbies and from companies – and this pressure was enormous.”

Two risk classes introduced

The German federal government had also recently spoken out against legal rules for particularly large AI models, out of concern that future technology development could then bypass Europe. Now the so-called foundation models are to be regulated in two risk classes and given corresponding obligations: transparency requirements when passing on information, risk analyses, and documentation of the data used to train the AI.

Foundation models are particularly powerful models. The largest at present is called GPT-4; it also underpins the well-known chatbot ChatGPT. But such a model can also flow into other applications, for example in software for hospitals, law firms and human resources departments, or in customer service chatbots.

Legal certainty for companies

That is why the decision is also good for European companies, says FDP MEP Svenja Hahn, who negotiated the AI law for her group: “This is extremely important, especially for European companies, small and medium-sized ones in particular, so that they can build secure systems when they rely on systems like ChatGPT, and are not left alone with compliance costs or even held responsible for malfunctions of these systems.”

She would have liked fewer regulatory requirements, says Hahn. But it is a great success for innovation in Europe “that we were able to prevent a blanket high-risk classification of so-called general purpose AI systems like ChatGPT.”

Regulation as an innovation inhibitor?

The CDU’s legal policy spokesman, Axel Voss, is skeptical. He is not convinced that this is the right way to make Europe competitive in the field of AI. “Innovation will still take place elsewhere. We as the European Union have missed our chance here.” Voss is now counting on careful work on the technical details before the European Parliament and the EU member states can give their final approval to the negotiated outcome.

This probably also applies to the most contentious question in the entire AI law: To what extent may artificial intelligence be used in public spaces, for example for law enforcement? It is an important success for civil rights that biometric mass surveillance has been prevented, says FDP MEP Hahn. “Originally, Parliament had even called for a complete ban on real-time biometric identification.”

Limits for law enforcement

However, many EU member states, with the exception of Germany, demanded significantly broader powers. Biometric identification is now to be possible in the future within narrow limits. “But we managed to put crucial rule-of-law hurdles in place,” says Hahn. “Under these rules, the technology may only be used to specifically identify people who are being sought for very serious crimes, such as kidnapping or rape. Victims of serious crimes and missing persons may also be specifically searched for.”

Many other applications, however, will be banned outright: so-called “social scoring”, as well as biometric categorization systems that infer sensitive characteristics such as sexual orientation or religious beliefs. The untargeted scraping of facial images from the internet or from surveillance footage to build facial recognition databases will also not be permitted.

Pirate Party still sees danger of mass surveillance

The Pirate Party’s MEP Patrick Breyer nevertheless sees this as a dangerous step: “With this legal authorization of biometric mass surveillance, our faces can be scanned in public anywhere and at any time, without suspicion.” After all, thousands of suspects of the crimes named in the new AI law are being sought by judicial order at any given moment.

Green MEP Lagodinsky is more conciliatory. Negotiators managed to anchor a so-called fundamental rights impact assessment in the law, “something that limits AI in its attacks on fundamental rights and our democracy.” These are successes. “Of course we cannot say that the result is 100 percent what everyone wanted. But that is the nature of parliamentary work.”

New authority to monitor AI

A new EU authority is also to monitor and analyze whether the world’s first AI law keeps pace with the ever-new challenges of this technology, whether the bridge between opportunities and risks holds, socially as well, between AI optimists and AI pessimists, and when the law needs an “update”.

Kathrin Schmid, ARD Brussels, tagesschau, December 9th, 2023, 6:03 a.m.
