Lawyer has ChatGPT write a legal brief – and fails

A New York attorney’s attempt to use the chatbot ChatGPT to research a case went badly wrong. A motion he submitted contained references to cases such as “Petersen v. Iran Air” and “Martinez v. Delta Airlines” that were fictitious. According to the lawyer, the purported rulings and case numbers had been supplied by ChatGPT.

The judge in charge of the case scheduled a hearing for early June. In the underlying dispute, a passenger had sued the airline Avianca after a trolley injured his knee. The airline moved to have the lawsuit dismissed. In its opposing motion in March, the plaintiff’s law firm cited various earlier decisions. For six of them, however, Avianca’s lawyers could find no evidence that they existed at all.

The plaintiff’s attorney has now declared under oath that he did not intend to deceive the court, but had merely relied on ChatGPT’s assurances that the cases were genuine. The chatbot also produced the texts of purported rulings, which his law firm submitted to the court in April. Those documents, in turn, contained references to further cases that proved to be fictitious. This would have been easy to check: in the USA, court rulings are available in databases.

In recent months, chatbots such as ChatGPT have generated enormous hype around applications based on artificial intelligence. Such software is trained on vast amounts of data and constructs sentences by predicting, word by word, what should come next. Experts warn that in this way the technology can also output fabricated information that may look plausible to the user. At the same time, “lawyer” is often named as one of the professions that such AI technology could change most, because the software can quickly evaluate information and draft texts that read as if written by a human.