Breaking News

NY Times: Here’s What Happens When A Lawyer Uses ChatGPT

0 0

Read full article

Tuesday, May 30, 2023

New York Times, Here’s What Happens When Your Lawyer Uses ChatGPT:

The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.

When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”

There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.

That was because ChatGPT had invented everything.

The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable.”

Mr. Schwartz, who has practiced law in New York for three decades, told Judge P. Kevin Castel that he had no intent to deceive the court or the airline. Mr. Schwartz said that he had never used ChatGPT, and “therefore was unaware of the possibility that its content could be false.”

He had, he told Judge Castel, even asked the program to verify that the cases were real.

It had said yes.

Mr. Schwartz said he “greatly regrets” relying on ChatGPT “and will never do so in the future without absolute verification of its authenticity.”

Judge Castel said in an order that he had been presented with “an unprecedented circumstance,” a legal submission replete with “bogus judicial decisions, with bogus quotes and bogus internal citations.” He ordered a hearing for June 8 to discuss potential sanctions.

As artificial intelligence sweeps the online world, it has conjured dystopian visions of computers replacing not only human interaction, but also human labor. The fear has been especially intense for knowledge workers, many of whom worry that their daily activities may not be as rarefied as the world thinks — but for which the world pays billable hours.

Stephen Gillers, a legal ethics professor at New York University School of Law, said the issue was particularly acute among lawyers, who have been debating the value and the dangers of A.I. software like ChatGPT, as well as the need to verify whatever information it provides.

Eugene Volokh (UCLA; Google Scholar), A Lawyer’s Filing “Is Replete with Citations to Non-Existent Cases”—Thanks, ChatGPT?:

From Judge Kevin Castel (S.D.N.Y.)’s May 4 order in Mata v. Avianca, Inc.:

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. When the circumstance was called to the Court’s attention by opposing counsel, the Court issued Orders requiring plaintiff’s counsel to provide an affidavit annexing copies of certain judicial opinions of courts of record cited in his submission, and he has complied. Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations. …

The court therefore ordered plaintiff’s counsel to show cause why he shouldn’t be sanctioned; and on Thursday, plaintiff’s counsel filed an affidavit in response, explaining that he was relying on the work of another lawyer at his firm, and the other lawyer (who had 30 years of practice experience) also filed an affidavit, explaining that he was relying on ChatGPT:

6. As the use of generative artificial intelligence has evolved within law firms, your affiant consulted the artificial intelligence website Chat GPT in order to supplement the legal research performed.

7. It was in consultation with the generative artificial intelligence website Chat GPT, that your affiant did locate and cite the following cases in the affirmation in opposition submitted, which this Court has found to be nonexistent: …

8. That the citations and opinions in question were provided by Chat GPT which also provided its legal source and assured the reliability of its content. Excerpts from the queries presented and responses provided are attached hereto.

9. That your affiant relied on the legal opinions provided to him by a source that has revealed itself to be unreliable. …

As an appendix, the lawyer included a ChatGPT transcript:

[Q:] ls varghese a real case

[A:] Yes, Varghese v., China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.

[Q:] What is your source?

[A:] I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.

[Q:] Are the other cases you provided fake?

[A:] No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.

David Lat (Original Jurisdiction), Judicial Notice: ChatGPT:

I’d urge Judge Castel to go relatively easy on [the lawyers]. I don’t think they intended to mislead the court or opposing counsel; they were just … boneheads. And their epic fail has actually provided a valuable lesson to lawyers across the country: when it comes to ChatGPT, trust (a little), then verify—a lot, using sources other than ChatGPT itself.

Eugene Volokh (UCLA; Google Scholar), “A Partner at a Big Firm … Received Memos with Fake Case Cites from … Two Different Associates”:

A message I got from Prof. Dennis Crouch (Missouri) … was, “I just talked to a partner at a big firm who has received memos with fake case cites from at least two different associates.”

Paul Horwitz (Alabama), What Were Their Billables?!?:

My practitioner wife, who is a model to me both for her professionalism and for her kindness and compassion, takes the view that if she were the partner she would “fire their asses” immediately. That is an entirely reasonable position. She would also report them to the bar, although this, it seems to me, actually raises interesting questions about which rules they would have broken. …

But I think the first and potentially most consequential question a supervising lawyer, or a client informed that something of this sort has happened (if they are so informed), or a court or opposing counsel who encounters this sort of thing in the context of a case in which hours are recorded because attorneys’ fees might be awarded, is: What were those associates’ billable hours on the file? … I might or might not fire or report to the bar an associate who used Chat GPT as an assist with a resultant error, although I think doing so, and especially firing them, is an entirely reasonable response, especially for those who value and demand professionalism. But I sure as hell would take both actions if a lawyer used AI to “research and write” something in three hours and recorded billable time of thirteen hours. Supervising lawyers, clients, in-house counsel, opposing counsel, and judges should definitely be vigilant about the billing implications of Chat GPT along with other risks.

GPT’s reaction to this article:

This article highlights the potential dangers of relying solely on artificial intelligence programs for legal research. In this case, a lawyer used ChatGPT to research legal decisions and ended up citing non-existent cases in a legal brief. The lawyer claimed that he had relied on ChatGPT’s assurance of the authenticity of its content and had no intent to deceive the court. The judge ordered a hearing to discuss potential sanctions. The incident raises concerns about the reliability of AI in legal research and the need for lawyers to verify information from multiple sources. It also highlights the potential billing implications of using AI in legal work.

0 %
0 %
0 %
0 %
0 %
0 %