Are Courts 🧑‍⚖️ Unfairly Blocking AI 🤖?
Many judges are overreacting to recent stories about lawyers misusing ChatGPT. That’s bad news, because we need courts to decide AI-related legal issues based on reason and the law, not on fear.
Judges who don’t understand AI are making fear-based decisions to try to block the adoption of AI tools.
Why It Matters
Everyone is struggling to figure out what AI tools like ChatGPT are going to mean for business, for government, for education, for society, and for humanity.
Much is going to depend on our legal system. To navigate the new issues raised by this transformative tech, we will need judges who take the time to understand AI: how it works and what it means.
But so far, many judges aren’t doing that. Instead, these judges seem to be in denial about what AI is and what it can be. And rather than engage with AI, they are reacting out of fear, and trying to shove the genie back in the bottle.
This is bad for the legal system, and bad for the rest of us.
The Backstory
The Tale of the ChatGPT Brief and the Made-Up Cases
🙈🙉🙊 The fake 🤖 brief. By now, you’ve probably heard about the lawyers who asked ChatGPT to draft a legal brief and then filed the brief in court without double-checking ChatGPT’s work. (See our earlier coverage here: Why a Lawyer Might Be Disbarred for Entrusting Legal Brief to ChatGPT)
The lawyers found out the hard way, and too late, that ChatGPT had gotten the legal argument completely, embarrassingly wrong. ChatGPT had made up, or “hallucinated,” a bunch of legal cases that don’t exist.
Oops.
😠 🧑‍⚖️ The angry judge. The judge smelled something fishy when he looked for, but couldn’t find, any of the cases cited in the brief. He ordered the lawyers to submit copies of the cases. For those who don’t know, this is the polite way for a judge to say, “I’m pretty sure you’re lying, but I want to give you one last chance to redeem yourself.”
Rather than doing basic legal research, as any first-year law student would know how to do, and coming clean, the lawyers doubled down. They went back to ChatGPT. And once again, ChatGPT steered them wrong: it told them that the cases were legit and proceeded to make up excerpts on the spot. The lawyers then turned around and submitted these fake cases to the judge.
Double Oops.
😥 😥 The sad lawyers. After the dust settled, the judge fined them $5,000, which was pretty light, all things considered.
[Correction/Updated to add: On re-reading this post this morning, I see that the original version incorrectly downplayed the severity of the penalties handed down against the lawyers. It was more than just a $5,000 fine. The judge also required the lawyers to inform their client in writing of what had happened, and to send written apologies to each of the judges who were falsely named in the made-up cases. Nonetheless, the penalty could have been worse:
The judge could have — but did not — recommend that the lawyers be banned from practicing in Federal Court in New York; and
The judge could have — but did not — order the lawyers to pay the other side’s legal fees.
They could have been disbarred. The original version of this post also incorrectly left the impression that the lawyers are not facing disbarment. I’m grateful to my friend John E. Grant (@JEGrant3) for pointing out that in New York (as in most states), that decision is not up to the judge but to the State Bar (technically, a separate Discipline and Grievance Committee). In short, the lawyers are not out of the woods yet, and may still be disbarred or have their licenses suspended.
Finally, the original post failed to mention that the lawyers are almost certainly going to face (if they aren’t already facing) a malpractice claim by their client. They will almost certainly seek to settle any such claim quickly and quietly in order to avoid further bad publicity.]
The lawyers got a painful lesson and the rest of us got a story to tell.
The Aftermath
In response to this fiasco, several judges across the country (and in Canada, too) issued orders essentially banning lawyers from using ChatGPT. They didn’t all phrase it exactly that way, but it was obvious that’s what they meant.
One order, for example, states that for any brief or other document drafted using Artificial Intelligence, the attorney must disclose “that AI was used, with the disclosure including the specific AI tool and the manner in which it was used.”
(For the full text of this and all the orders issued to date, see our website at Court Orders Regarding filing AI-generated materials.)
So what’s the problem? There are several.
🤔 Vague definition of AI. First, it’s not clear what exactly is meant by “AI.” Is a spell checker AI? Is a tool like Grammarly? And how exactly is a lawyer supposed to explain “the manner in which” a tool was used? When in doubt, lawyers — fearing the wrath of judges — will tend to avoid innovative uses of AI altogether.
👮 Trying to police lawyers. Second, these orders are judicial overreach. Attorneys are already required to check, and stand behind, all the materials they file in court. These new orders don’t help. Judges shouldn’t be trying to tell attorneys how to be attorneys.
🚧 Blocking innovation. Third, the whole philosophy of these orders is backward-looking rather than forward-looking. Our courts need to be ready to regulate AI, but must also keep an open mind to potentially transformative technology. These orders send exactly the wrong message.
Go Deeper
On the LawSnap Wiki on AI and the Courts
Nice piece, Adam. Two quick thoughts. (1) Judges can’t disbar lawyers — without looking it up, I’m pretty sure the Levidow lawyers are facing their bar’s disciplinary processes. (2) If I were practicing in front of judges who require an AI disclaimer, I’d write a generic one that “floods the zone” with every software tool in my toolbox: Word, Gmail, CaseText, etc. Who’s to say what is and isn’t AI these days? I’m not usually the malicious-compliance type, but in this situation it would be the proper CYA and would also illustrate the ridiculousness of the order.