Why a Lawyer Might Be Disbarred for Entrusting a Legal Brief to ChatGPT
ChatGPT helped a lawyer create a legal brief that looked plausible but was completely wrong, built on fake, made-up cases.
A lawyer is in hot water with a judge for filing a ChatGPT-generated brief that cites made-up cases.
The lawyer failed to double-check ChatGPT’s work, compounded that error by submitting fake printouts of the made-up cases, and now faces possible disbarment.
The case is a cautionary tale for lawyers, and a harbinger of things to come as ChatGPT and other AI tools are put to work on sensitive, high-stakes tasks.
A federal judge is threatening a lawyer with monetary sanctions, and possible disbarment, after the lawyer filed a bogus brief drafted by ChatGPT. According to the judge, the brief contained citations to “bogus judicial decisions with bogus quotes.” It turns out that ChatGPT fabricated the law and made up the cases it cited, and the lawyer failed to check its work.
The story is a cautionary tale, but it also illustrates a number of issues that everyone, inside and outside the legal world, is grappling with as we figure out how best to use AI tools.
How ChatGPT Turned a Simple Personal Injury Case Into a Cautionary Tale
At the center of this controversy over AI and the law is Mata v. Avianca (S.D.N.Y. case no. 1:22-cv-01461), a personal injury case against an international airline. The full docket (i.e., the list of all the papers filed in the case) is available on the Court Listener website at https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/. While you are there, feel free to donate to the good folks at Court Listener (@courtlistener, also on Mastodon at law.builders/@flp), who are doing great work making court records available for free.
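Incidentally, Court Listener also exposes those records programmatically. Here is a minimal sketch of pulling the docket above via what I understand to be Court Listener's public v3 REST API; the endpoint path and the field names are assumptions drawn from their published docs and should be verified against the current documentation (heavy use may also require a free API token):

```python
import requests

# Docket ID taken from the courtlistener.com URL above.
DOCKET_ID = 63107798

# Assumed v3 endpoint for docket records; check CourtListener's API docs.
resp = requests.get(
    f"https://www.courtlistener.com/api/rest/v3/dockets/{DOCKET_ID}/",
    timeout=30,
)
resp.raise_for_status()
docket = resp.json()

# "case_name" and "docket_number" are assumed field names from the docs.
print(docket.get("case_name"), "|", docket.get("docket_number"))
```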
The plaintiff claims that he was injured on one of the airline’s flights when he was hit by a serving cart. In response to the plaintiff’s lawsuit, the defendant airline moved to dismiss the case on the grounds that the plaintiff waited too long to bring the lawsuit, i.e., that the case is “time-barred.”
This is where ChatGPT enters the story.
Act 1: ChatGPT Drafts an Argument
In response to the defendant’s argument that the plaintiff’s claim is time-barred, the lawyer for the plaintiff asked ChatGPT to draft a response, and then filed this “ChatGPT brief” with the Court without double-checking it. Let’s call this “Mistake Number One.” To be clear, the mistake was not using ChatGPT; the mistake was failing to double-check what ChatGPT said.
The “ChatGPT brief” argues that a 2019 case, Varghese v. China Southern Airlines, shows that the defendant airline had the law all wrong and that the claim was not time-barred. The brief offers what purports to be a lengthy quotation from the Varghese decision, as well as purported quotes from another case, Zicherman v. Korean Air Lines.
At first glance, the argument looks reasonable. But the argument from ChatGPT is totally wrong: the law is made up, and there is no such case as Varghese v. China Southern Airlines or Zicherman v. Korean Air Lines. ChatGPT invented these cases, along with several others. And according to his own affidavit, the lawyer for the plaintiff did not double-check ChatGPT’s work because he was “unaware of the possibility that its content could be false.”
After the lawyers for the defendant airline alerted the judge that something seemed fishy, the judge ordered the lawyer for the plaintiff to provide copies of the Varghese case and the Zicherman case, as well as several other cases cited in the ChatGPT brief. For the non-litigators, this may sound innocuous, but it’s actually a major “oh *$(& moment” for a lawyer when a judge asks for something like this. The judge is essentially accusing you of lying to the court.
Act 2: ChatGPT Hallucinates Some Legal Cases
At this point, the lawyer for the plaintiff could have done what he should have done in the beginning – he could have double-checked the result from ChatGPT by consulting one of several legal research databases that his firm uses every day.
For those unfamiliar, using legal research databases to check cases is standard, bread-and-butter work that litigators do every day. And even if he didn’t want to check the official databases, at a minimum he could have searched Google.
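He could even have checked a free service like Court Listener. As a rough sketch of the kind of sanity check that would have caught the problem: the snippet below searches Court Listener's case-law index for an exact case name and reports whether anything comes back. The search endpoint, the `type=o` (opinions) parameter, and the `count` response field are assumptions based on Court Listener's public v3 API docs, so double-check them before relying on this:

```python
import requests

def case_exists(case_name: str) -> bool:
    """Return True if CourtListener's opinion search finds any hit for the name."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v3/search/",
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = case-law opinions
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# A real case should match; the fabricated case from the ChatGPT brief should not.
for name in ["Marbury v. Madison", "Varghese v. China Southern Airlines"]:
    print(name, "->", "found" if case_exists(name) else "NO MATCH")
```

A one-off check like this is no substitute for Westlaw or Lexis, but even this crude test would have flagged that the cited cases did not exist.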
But he didn’t. Instead, he made “Mistake Number Two”: he went back to ChatGPT and asked it for a copy of the Varghese case, as well as several of the other cases. When ChatGPT produced excerpts from the (made-up) cases, he took screenshots and submitted them to the Court.
Act 3: A Federal Judge Threatens Disbarment
The judge was not amused:
The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.
Order to Show Cause, May 4, 2023 (S.D.N.Y. case no. 1:22-cv-01461)
The court then ordered the plaintiff's lawyer to appear in person for a hearing to explain himself, and to show why the Court should not recommend that he be reported to the grievance committee (the first step toward disbarment).
The hearing is set for June 8. Stay tuned for updates.
Next Steps for this Case and for the Legal System
Next time we’ll have more to say about the lessons to be drawn from this case, and predictions for what it will mean for the legal system and for other professions as ChatGPT is incorporated into more kinds of work.