One of the advantages of artificial intelligence (AI) is that it doesn’t need sleep. As long as there is electricity and computing capacity, automated software can work independently and, for example, produce texts. In theory, even legal texts.
However, the negotiators in Brussels were unable to outsource the negotiations on the AI Act, the major EU set of rules on AI, to machines. They wrestled for 22 hours on Wednesday and Thursday, and then another 16 hours after a break for sleep. On Friday night, the Commission, Parliament and Member States announced the agreement. Europe finally has rules for this powerful technology.
According to the decision-makers in Brussels, the AI Act should be a role model for the world. The set of rules is intended to prevent self-learning software from adopting prejudices and discriminating against certain groups, to ensure that AI models capable of many different tasks are monitored particularly closely, and to prevent AI from being used for mass surveillance or for prying into people’s most intimate affairs.
At the same time, the rules are intended to strengthen the EU as a location for successful AI companies; so far the USA and China are leaders in this area.
Why AI should have rules
Artificial intelligence is the attempt to transfer human learning and thinking to software. Thanks to modern algorithms, computing capacities and ever-increasing amounts of data, AI has recently made great progress. It is used to optimize processes in industry, to automate tedious tasks and to generate images and texts.
Because automating human tasks involves risks and AI algorithms are often opaque “black boxes,” politicians around the world are working on regulations.
What was decided
The focus of the rules is the handling of “AI basic models” (foundation models). These are programs like GPT-4, the model behind the well-known chatbot Chat-GPT. They serve as a basis on which other companies can develop chatbots for customer service or digital assistants for doctors. Such models are now divided into two classes. Particularly powerful models that are considered “high risk” must meet transparency requirements: manufacturers must document how the models work, demonstrate compliance with EU copyright law, and provide detailed explanations of the data used to train the AI. They must also ensure cybersecurity against manipulation by hackers.
The use of AI for social scoring, i.e. evaluating people based on their behavior, is also prohibited. Likewise banned is the untargeted scraping of facial photos from the internet to build facial recognition databases, as is AI-based analysis of emotions from facial expressions in the workplace or in educational institutions.
Strict limits have been set for the handling of biometric data. Authorities are only allowed to use automated facial recognition in public spaces to prosecute serious crimes such as kidnapping and rape and to defend against terrorism.
Skeptics like Pirate Party MEP Patrick Breyer see this as a “de facto permission for widespread mass surveillance”. Ultimately, AI software would scan every person present in a monitored area.
Companies that violate the rules face comparatively high penalties: fines of up to 35 million euros or 7 percent of global annual turnover.
Internal Market Commissioner Thierry Breton called the agreement historic. “The EU will be the first continent to set clear rules for the use of AI,” he explained. So far, however, the agreement Breton is so proud of is only a political one.
Now comes the technical elaboration. Kris Shrishak from the NGO Irish Council for Civil Liberties, who followed the negotiations, told the SZ: “Many details still need to be clarified. As long as the full text of the law is not available and finalized, which could take many weeks, this agreement covers only political points, not the law itself.”
The Parliament and the Council of Member States still have to approve the text, but that should now be a formality. Some of the provisions, such as the high-risk classifications, will then take effect after one year, others after two.
What was disputed until the very end
The AI Act is intended to complete the complex of laws with which Commissioner Breton wants to organize the digital sphere in the EU and make it a model for the world. This complex also includes the Digital Services Act, which sets rules for “dangerous content” for platforms such as Facebook and Tiktok, and the Digital Markets Act, which is intended to limit the excessive online power of corporations such as Amazon.
The AI negotiations almost failed. There was a dispute mainly over two points.
Firstly, about AI in facial recognition. Biometric data about citizens’ bodies is considered particularly worthy of protection because its misuse can have a serious impact on people’s privacy. Such systems could also disadvantage people of certain backgrounds if they are trained accordingly. Some parliamentarians also objected to an exception for security authorities that want to use AI facial recognition, for example in the event of terrorist threats. Critics from the Greens and Liberals see such exceptions for “national security” as a gateway to misusing the technology and expanding its use to other areas.
Secondly, there was a dispute over regulating the basic models. One of these is the language model that allows Chat-GPT to hold seemingly intelligent conversations with people. Such language models can draw fascinating connections across the vast amounts of information they have learned. That’s why they are seen as a step towards truly “intelligent” systems that will be superior to humans in more and more tasks.
Recently, three countries provoked criticism: Germany, France and Italy. They wanted some of the basic models exempted from the strict new rules, arguing that such regulation would hinder research and weaken the EU as an AI location. Instead of legislators and supervisory authorities, the companies themselves should be responsible for oversight.
Representatives of civil society, among others, opposed this. Anyone who leaves the models that many companies use for their purposes unregulated, they argued, is taking on considerable risks: after all, the models could be used for countless applications that would then remain unregulated. It would also create legal uncertainty for the companies that build on basic models, whether from Open AI in Silicon Valley or Aleph Alpha in Heidelberg.
The critics accused France and Germany of wanting to spare their national champions, Mistral from Paris and Aleph Alpha, expensive compliance obligations. The two companies are considered hopefuls for bringing the EU on a par with the major AI powers, the USA and China, in the field of the popular large language models. The AI Act is to be accompanied by a directive intended to clarify AI liability issues.
The trio of Germany, France and Italy did not prevail: basic models are not left out. This annoys some business representatives. Iris Plöger, executive board member of the Federation of German Industries, said: “With the comprehensive regulation of AI base models and AI applications, the AI Act endangers the competitiveness and innovative capacity of both manufacturers and users.”
Others see the regulation as helpful for the companies that adapt and use the versatile basic models for their own purposes. Svenja Hahn, EU parliamentarian from the FDP, says: “An extremely important success for European companies, which will be able to build secure systems without being stuck with the compliance costs or the responsibility for malfunctions of GPAI systems (the basic models; editor’s note). Small and medium-sized companies in particular that integrate GPAI systems such as Chat-GPT into their own systems will receive massive regulatory relief.”