Artificial intelligence and law: humans or machines?

If you think about what artificial intelligence (AI) means for the concept of human responsibility, you will sooner or later come across HAL. The computer in Stanley Kubrick’s “2001: A Space Odyssey” simply took over the controls of the spaceship and let the human protagonist know: “This mission is too important for me to allow you to jeopardize it.”

Martin Eifert, law professor at Berlin’s Humboldt University, recently quoted HAL at a conference of the International Commission of Jurists in Göttingen on the subject of AI and law. Of course, only to make it clear that the disempowerment of humans by AI is by no means as crude as Kubrick imagined it in 1968. If one can speak of disempowerment at all. But one thing is clear: a highly developed self-learning algorithm, whether it is called HAL or something else, can be such a powerful helper that it is happy to take the lead from time to time.

Such claims to leadership by machines, whether subtle or direct, raise legal questions that often come down to one and the same basic problem: people think in terms of causality and try to grasp why things are connected. “As humans, we have intuition and experience, as well as law and morality,” said Simon Burton of the Fraunhofer Institute for Cognitive Systems. The attempt is to transfer all of this to the systems.

AI makes decisions but hides the reasons behind them

So far with moderate success: AI understands nothing, but it recognizes patterns in large amounts of data and establishes correlations. In effect, it runs a kind of statistics for advanced users. Which brought one conference attendee to a question: if the AI found out that 96 percent of people with pink socks did not pay back their loans, what would the bank do with that? No more money for wearers of pink socks? Even though there is absolutely no connection between the color of one’s socks and one’s creditworthiness?
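A minimal sketch of how such a spurious pattern can arise, using entirely made-up synthetic data (the 5 percent share of pink-sock wearers, the 30 percent default rate and the portfolio size are invented purely for illustration):

```python
import numpy as np

# Entirely synthetic toy data: sock colour is generated independently of
# repayment, so any difference between the two groups is pure sampling noise.
for seed in range(5):
    rng = np.random.default_rng(seed)
    n = 200                                  # a small loan portfolio
    pink = rng.random(n) < 0.05              # roughly 5% wear pink socks
    default = rng.random(n) < 0.30           # 30% default rate, same for everyone

    print(f"portfolio {seed}: "
          f"pink-sock default rate {default[pink].mean():.0%}, "
          f"others {default[~pink].mean():.0%}")

# With only a handful of pink-sock customers per portfolio, their observed
# default rate swings widely from sample to sample. A purely statistical
# system that only sees such figures cannot tell a real relationship from
# chance; that judgement requires knowledge about the world.
```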

This takes you straight to the next difficulty. “Unlike with classic software, you cannot look inside to check whether the AI has learned correctly,” said Burton. The AI neither explains nor logs why it arrives at a particular result. “Because the AI is a self-learning system, the decisive factors and their respective weighting in an individual case cannot normally be shown when decisions are made,” explained Eifert. So AI makes decisions, but hides the reasons for them. It is a “black box” that could not be darker.

That drives lawyers crazy, because the entire concept of the legal reviewability of decisions – from objections against official decisions to appeals in court – starts from the stated reasons. With a silent computer, legal protection becomes difficult. Eifert explained this using the example of a gun owner who has been classified as unreliable by the authorities. It is conceivable that a computer could make such a classification by combing through documents and registers. But it would remain in the dark whether previous violations of weapons law played a role, or simply the fact that she was once sloppy with her tax return. Or that she wears pink socks. Anyone who wanted to challenge a negative decision would have nothing to go on.

The obvious answer would be to turn HAL’s sentence around: such things are simply too important to be left to the computer; in the end, it is up to the human being to decide. “Human in the loop” is what this is called in technical jargon, and the EU Commission’s draft AI regulation likewise provides for human oversight of high-risk AI. But just how difficult human oversight is was already shown by the 2018 accident involving a self-driving taxi, in which a pedestrian died. It was clear that ultimate responsibility for driving safety was supposed to lie with the test driver sitting in the autonomous vehicle – who, however, was looking at her smartphone at the crucial moment.

But the pairing of human and machine is far more complicated than that. Eric Hilgendorf, professor of criminal law in Würzburg, pointed to the current discussion about once again admitting the lie detector. If the technology really were so advanced that it could unmask a liar in court, what judge would overrule it? “The decision goes to the AI, either factually or legally.”

Susanne Beck, Chair of Criminal Law, Criminal Procedure Law, Comparative Criminal Law and Legal Philosophy at the University of Hanover, described a similar example from the healthcare sector, where diagnostics using imaging methods is currently developing into an important field of application for AI. She recounted the case of a genetic diagnostician who used a program to detect genetic diseases; the diagnosis was wrong and led to the wrong therapy. Of course, in the end the doctor was responsible, possibly even criminally. But on closer inspection, the human’s work and the computer’s contribution blur into each other. Beck speaks here of cognitive inertia: “You have to actively decide against what the computer tells you.” Which is not so easy, because even a no to the AI’s suggestion is a decision for which the doctor can be held liable.

One begins to suspect, then, that criminal law no longer quite fits when both humans and machines have made mistakes. Could the diagnostician really be accused of violating her duty of care, asked Dieter Inhofer, chief public prosecutor in Freiburg. Susanne Beck therefore expects a “decrease in criminal liability” in areas in which AI is used.

The weaknesses of the technology should not be a reason to dismiss it outright

However, less criminal law does not necessarily mean less liability, at least as far as compensation for damages is concerned. If fault in the interaction of human and machine is so diluted that it can no longer be proven, civil lawyers fall back on strict liability, which makes the operator of a risky system liable for damages even without fault. This has long been familiar from motoring: in case of doubt, the vehicle’s keeper is liable. Or their insurance.

Of course, not all legal problems in dealing with AI stem from the fact that the technology is so unlike a human being. In one respect, AI even seems all too human, unfortunately in a negative sense: it has prejudices and is prone to discrimination. The best-known example is the use of AI in the United States to calculate the likelihood of recidivism. The cause lies in the training data: because the AI was fed a disproportionately large number of recidivism examples involving people with dark skin, it distorts its forecasts to the detriment of this group, said Martin Eifert from Berlin. Similar feedback effects arise when AI is used to prevent crime. If the AI classifies a certain district as particularly dangerous, the police will carry out more checks there – and thereby uncover even more crimes, which in turn flow into the software. A vicious circle of discrimination.
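A toy calculation can make this feedback loop tangible; all numbers here are invented purely for illustration (two districts with identical real crime, a slightly skewed historical record, and patrols allocated in proportion to recorded crime):

```python
true_crime = [100, 100]        # identical real crime in both districts
recorded = [60.0, 40.0]        # historical data: district 0 already looks worse

for year in range(1, 6):
    # the software allocates the patrol budget in proportion to recorded crime
    total = sum(recorded)
    patrol_share = [r / total for r in recorded]
    # more patrols uncover more of the (identical) true crime,
    # and everything uncovered flows back into the database
    for i in range(2):
        recorded[i] += true_crime[i] * patrol_share[i]
    print(f"year {year}: recorded crime = {recorded[0]:.0f} vs {recorded[1]:.0f}")

# The initial distortion is never corrected: district 0 keeps receiving the
# larger patrol share, so the gap in recorded crime widens year after year,
# even though both districts are in reality equally dangerous.
```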

Eifert’s conclusion, of course, is by no means a blanket rejection of the technology, but a plea to sort by strengths and weaknesses. Where serious encroachments on fundamental rights are at stake, its use should generally be ruled out, because such decisions must remain in human hands. Its place, however, is wherever it improves the quality of decisions thanks to a demonstrably low error rate. There is no reason to romanticize human decisions, Eifert pointed out: they are known to be extremely error-prone.
