Lawyers presented fake case law created by ChatGPT. A judge fined them $5,000


New York — A federal judge on Thursday imposed $5,000 in fines on two attorneys and a law firm in an unprecedented case in which ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

Judge P. Kevin Castel said the attorneys acted in bad faith. But he credited their apologies and the remedial steps they took in explaining why harsher sanctions were not necessary to ensure that they or others would not again be tempted to use artificial intelligence tools to present fake legal history in their arguments.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” Castel wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The judge said the lawyers and their firm, Levidow, Levidow & Oberman, P.C., “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

In a statement, the law firm said it would comply with Castel’s order, but added: “We respectfully disagree with the finding that anyone at our firm acted in bad faith. We have already apologized to the court and our client. We continue to believe that, in the face of what even the court acknowledged was an unprecedented situation, we made a good-faith mistake in failing to believe that a piece of technology could create cases out of whole cloth.”

The company said it is considering whether to appeal.

Castel said the bad faith resulted from the attorneys’ failure to respond properly to the judge and their legal adversaries once it was noticed that the six legal cases cited to support their March 1 written arguments did not exist.

The judge cited the “shifting and contradictory explanations” offered by attorney Steven A. Schwartz. He said attorney Peter LoDuca lied about being on vacation and was dishonest in confirming the truth of statements submitted to Castel.

At a hearing earlier this month, Schwartz said he had used the artificial intelligence-powered chatbot to help find legal precedents supporting a client’s case against the Colombian airline Avianca over an injury suffered on a 2019 flight.

Microsoft has invested about $1 billion in OpenAI, the company behind ChatGPT.

The chatbot, which generates essay-like answers to prompts from users, suggested several cases involving aviation accidents that Schwartz had been unable to find through the usual methods used at his law firm. Several of those cases were not real, misidentified judges, or involved airlines that did not exist.

The judge said one of the fake opinions generated by the chatbot had “some traits that are superficially consistent with actual judicial decisions,” but added that other portions were “vague” and “nonsensical.”

In a separate written opinion, the judge dismissed the underlying aviation claim, saying that the statute of limitations had expired.

Attorneys for Schwartz and LoDuca did not immediately respond to a request for comment.
