AI Act: Reining in the Machines

The scandal hit the Netherlands hard. A few years ago it emerged that tens of thousands of families had been wrongly forced, for years, to pay their state child allowance back to that same state. A supposedly modern algorithm had flagged innocent people as suspected benefit fraudsters. Poor people in particular were driven into debt and despair; single women with a migrant background were hit hardest. The government resigned, and Dutch politics has been reeling ever since. Faith in the state and its decisions has suffered.

Welcome to the algorithm society, where the trust that politicians, civil servants, entrepreneurs or the military place in automated systems and artificial intelligence (AI) can lead to catastrophe.

That is why it is good that the EU wants to bring the world's first comprehensive set of rules for AI into the home stretch this week – the AI Act. It rightly prohibits perfidious practices such as social scoring – the assessment, reward and punishment of behavior – and psychological manipulation by AI in software. Strict requirements are to apply to AI-based versions of government services such as those in the Netherlands, and also, for example, to AI software involved in hiring or firing. Such bans are urgently needed at a time when every backyard start-up and every office can build AI into its app, because it is still unclear who is responsible if the AI becomes dangerous.

AI is often opaque

It is convenient and efficient to let artificial intelligence take over annoying tasks. It can make many emails, Excel spreadsheets and meetings unnecessary. But this progress also carries the temptation to act irresponsibly, because many AI systems have weaknesses that are incompatible with the popular idea of how democracy should work: citizens cannot understand them.

AI is often opaque. Often the decision-makers themselves do not know why the decisions come out the way they do. As in the Netherlands, it can take years before the full extent of the damage a system has caused comes to light. The systems are so-called black boxes – boxes into which no light falls.

The number of black boxes must be kept as low as possible; the transparency requirements of the AI Act will help here. If citizens can no longer understand how administrations, politicians and companies reach their decisions, democracy suffers. Only dictatorships are black boxes.

Now the EU Commission, Parliament and member states are still arguing in Brussels over how to define which forms of AI should be regulated, and how strictly. The resulting classification could unfortunately end up so finely differentiated that neither businesses nor supervisory authorities can see through it. Here the negotiators have to draw their distinctions carefully and communicate them precisely; otherwise the rules will create new confusion instead of transparency.

The police should be exempt from most provisions of the law

The security authorities are also making things difficult for citizens. They are pushing for planned exceptions that would allow the state to use otherwise banned biometric AI in certain cases – for example, facial recognition on surveillance footage. In addition, they want to exempt the police from most of the law's provisions by invoking the old and vague catch-all argument of "national security". This dispute over civil rights could bring the negotiations to a standstill. And the EU election campaign is approaching: the rules must be passed by spring, otherwise there is a risk of paralysis.

The danger for 21st-century society is not, as in the movies, that machines will take over the world, but that politicians, bureaucrats, entrepreneurs and military officers outsource responsibility to automated software. Blind trust in systems that work with artificial intelligence can take its toll. Now that the AI hype coming out of the business world is reinforcing this blind trust even further, it is time for the international community to rein in the machines.
