EU: More protection for victims of artificial intelligence

Artificial intelligence (AI) is a powerful future technology. Self-learning computer programs trained with huge amounts of data can make factory production more efficient, control self-driving cars or help HR departments pre-sort job applications. But this triumph of algorithms also scares many people. For this reason, strict rules are to apply in the EU when AI is used in important or sensitive areas. A year and a half ago, the Commission submitted a draft for this: an AI regulation. The EU Parliament is currently negotiating that legal act, and the talks are proving tough. Next week, however, the Brussels authority will present its next law on artificial intelligence – this time dealing with liability issues.

The Süddeutsche Zeitung has obtained a draft of this directive. The legal act addresses the problem that it can be very difficult for victims to obtain compensation after errors in an AI system. To do so, victims must prove misconduct and a connection between this behavior and the damage. For example, a bank’s software could falsely declare a customer uncreditworthy after an employee mishandled the program. But establishing the connection between an action and the end result is extremely difficult with AI, because the systems are so complicated, opaque and hard to predict.

That is why the directive gives victims the right to demand that the information they need about the systems be handed over, such as the data used to train the software, user logs or information on quality management. Trade secrets, however, are to remain protected. If a company refuses, this claim can be enforced in court. If it is still not possible to get hold of the data, this would be held against the company in the damages proceedings: the burden of proof would be reversed. It would then be assumed that the company that withheld the information violated its duty of care – unless it can prove the opposite.

The draft directive also makes it easier for victims to provide evidence if the company has disregarded the requirements of the AI regulation. In risky areas of application, for example in traffic, medicine or loan applications, that law requires that the data used to train the systems be of high quality and not discriminate against certain groups. People must be able to easily monitor the software and switch it off quickly if need be, and the workings of the programs must be transparent. If users violate these requirements, victims are spared, in the event of damage, from having to prove a causal connection between the violation and their injury. The burden of proof thus shifts back onto the shoulders of the corporations.

What rules should apply to AI cameras in public squares?

After the draft law is presented next week, the European Parliament and the Council of Ministers, the body of the EU governments, will have to deal with it. If the directive is ultimately passed, the member states will have two years to transpose it into national law. The EU Parliament has already applauded the proposal. Green MEP Anna Cavazzini says she welcomes the initiative because consumers in the EU “must be able to rely on a high standard of protection in our digital single market”. However, the Chairwoman of the Internal Market Committee would like to lower the burden-of-proof hurdles for victims even further.

Meanwhile, the debates on the AI regulation continue in the European Parliament – that is, on the legal act that sets rules for the use of the technology. The responsible committees are to vote on their position in the autumn. After that, negotiations between the EU Parliament and the Council of Ministers on the final text of the law could begin. A major point of contention among MEPs is the conditions under which AI may be used for automatic facial recognition in public places.

The Commission’s draft would prohibit the police from having AI systems search surveillance camera images for people in real time. However, temporary and geographically limited exceptions would be possible if a judge approves them and they serve, for example, to prevent a terrorist attack or to help find a missing child or a dangerous criminal. For many Greens, Social Democrats and Liberals in the EU Parliament, even these exceptions go too far. One thing is certain: the rules for AI will cause plenty of excitement yet.
