Lawyer in the USA: When artificial intelligence backfires

Status: 09.06.2023, 1:27 p.m.

A lawyer in New York had ChatGPT research precedents for a lawsuit. But the artificial intelligence simply made up rulings. It is not the only case in which AI has drawn negative attention.

The case did not sound complicated: a man is suing an airline because his knee was injured by a serving trolley during a flight. His attorney is looking for rulings in comparable cases to support the lawsuit, and tries the chatbot ChatGPT. It duly spits out cases: “Petersen vs. Iran Air” or “Martinez vs. Delta Airlines.” The bot even supplies file numbers for them.

But when the lawyer submits his filing to the court, it comes out: the cases are fabricated. A dangerous development, says Bruce Green, head of the Institute for Law and Ethics at Fordham University in New York. The judge in charge calls the case unprecedented, and the legal community is alarmed. The plaintiff’s attorney affirms under oath that he did not intend to deceive the court, but had relied on artificial intelligence.

“Check AI research”

That was certainly careless, maybe even reckless, says Green: “The rules for lawyers here in the USA are very clear: they have to be familiar with the new technical tools they use, and they have to be aware of the dangers and pitfalls.”

Anyone familiar with the ChatGPT program knows that it can also invent things. “If this lawyer knew how to use the program for his research, then he should have been smart enough to know that research done by artificial intelligence needs to be cross-checked.”

Data protection as another problem

Some US judges are now calling for rules on the use of artificial intelligence in the US judicial system. Green also sees the danger: evidence researched with a chatbot could not only contain false information, it could also violate the confidentiality a lawyer is required to guarantee his clients. “Take, for example, information that a client does not want to disclose: if it is fed into artificial intelligence, the AI can pass it on.”

In recent months, chatbots like ChatGPT have sparked many discussions about applications of artificial intelligence. Such software is trained on the basis of enormous amounts of data. Experts warn that the technology could also output fictitious information.

Tips for people with eating disorders

Or even dangerous information, as in the case of a chatbot used by the largest non-profit organization for eating disorders in the United States. NEDA, headquartered in New York, replaced around 200 employees on its helpline with the chatbot “Tessa”, developed by a team from Washington University Medical School in St. Louis. “Tessa” had been trained to apply therapeutic methods for eating disorders. But those seeking help were in for unpleasant surprises.

Sharon Maxwell, for example, suffers from a severe eating disorder: “And this chatbot is telling me to lose a pound or two a week and cut my calorie intake by up to 1,000 a day.” Three out of ten tips from the chatbot had to do with dieting, tips of the kind that pulled her into the spiral of an eating disorder years ago. Such mechanical advice is more than dangerous for people like her, says Maxwell.

AI not yet ready for therapeutic conversations

The activist alerted her followers on social media; many had already had similar experiences. NEDA, however, had already reacted: the organization said it had noticed that the current version of “Tessa” may have given harmful information that was not in the spirit of its program. “Tessa” has been taken out of service for the time being and is now to be re-examined.

The head of the team that developed the chatbot welcomes this. Ellen Fitzsimmons-Craft told ARD Studio New York that artificial intelligence is not yet mature enough to be let loose on people with mental health problems. That is exactly why “Tessa” was originally built without artificial intelligence; a company using the bot later added this component.
