AI ethics: Judith Simon on ChatGPT and conservative AI

Things are closing in on people, reports Judith Simon: “We are being boxed in. Where are there still spaces in society where I am not just classified on the basis of my data points?” The philosopher posed this question on Monday in her lecture at the Munich Economic Debates, hosted by the Ifo Institute and the Süddeutsche Zeitung. The professor of ethics in information technology at the University of Hamburg explained the challenges that data-driven artificial intelligence (AI) poses to humanity.

Simon is a member of the German Ethics Council, which advises the federal government on how to deal with risky technologies. She got even more work when the company OpenAI released the AI chatbot ChatGPT to the public in 2022. “ChatGPT was a huge social experiment,” says Simon. “They simply threw it onto the market and outsourced all the risks to others. You would never do that in science.”

Above all, AI is conservative

AI will soon be everywhere. Social media is already the most thoroughly permeated by it: self-learning software sorts and filters content, determining which reality users see. And that is just the beginning. In medicine, for example in tumor detection, AI could be a blessing. “I don’t want to be too doomsday-like about it,” says Simon. But if credit-scoring software classifies men and women differently, that is discrimination. The same applies when AI helps a company filter out candidates for promotion and absenteeism is a criterion: this puts women at a disadvantage, because they are usually the ones who look after sick children or deal with inconvenient daycare opening hours.

“AI is an inherently conservative instrument. It is not about left or right; the core of AI is that it learns from old data and carries it forward into the future,” says Simon. Such a system is unlikely to produce ideas for improving the world. Instead, it has an old-fashioned worldview in which excluded people remain excluded.
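The mechanism Simon describes can be illustrated with a minimal, entirely hypothetical sketch: a promotion model that “learns” only from past decisions will reproduce whatever pattern those decisions contained, such as penalizing absenteeism. The data and the threshold rule below are invented for illustration, not taken from any real system.

```python
# Hypothetical historical promotion records: (days_absent, was_promoted).
# In this invented past, only candidates with few absences were promoted.
history = [(0, True), (1, True), (2, True), (8, False), (10, False), (12, False)]

def learn_threshold(records):
    """'Learn' the largest absence count that still led to promotion in the past."""
    return max(days for days, promoted in records if promoted)

def recommend(days_absent, threshold):
    """Recommend promotion only if the candidate fits the historical pattern."""
    return days_absent <= threshold

threshold = learn_threshold(history)

# Two equally qualified candidates; one missed days caring for a sick child.
# The model carries the old pattern forward and screens the second one out.
print(recommend(1, threshold))
print(recommend(6, threshold))
```

The point of the sketch is that nothing in the code is malicious: the model simply extrapolates old data into the future, which is exactly what Simon calls conservative.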

Simon explains how treacherous the use of AI in social services could be, for example when software is supposed to assess whether a child’s welfare is at risk. Both a mistaken decision to leave a child with its family and a mistaken recommendation to take the child into care have devastating consequences.

The philosopher has an idea for how the uneasy relationship between humans and machines could be changed. For a long time, people have followed the motto: if the computer says so, it must be right. Simon advocates reversing this thinking: “Then I don’t have to explain why I deviate from the software’s recommendation, but rather why I follow it.” In this way, people could regain autonomy.

Trusting the machines is convenient, but this simplification comes at a price. The technology cannot reflect the complexity of the world: “When building software, everything that is gray is made black and white. It has to be a pixel or none, zero or one,” explains Simon. This is not a purely technical question: “People act as if it were just mathematics and not politics.” In Simon’s view, AI is conservative by nature and can promote social rigidity and thinking in established patterns.

When asked, “And what if I want to make money with AI?”, Simon answers, to general amusement in the room: “Then don’t ask an ethicist.” Human ethics and human commerce: these are the forces that will shape AI in the future.
