Part 2: 7 Legal Pitfalls and How to Steer Clear
More on the lessons from the case of two lawyers who used ChatGPT without the proper safeguards
Watch out for knee-jerk AI regulations: Courts (and regulators generally) should focus on applying existing rules rather than creating new, ill-considered rules and regulations.
Watch out for anyone ascribing “motives” to AI tools: ChatGPT and similar AI models respond to prompts. They make mistakes, and you need to verify their output. But they don’t have goals, and they aren’t trying to “mislead” or “deceive” anyone because they don’t “try” to do anything.
Focus on finding the right balance: AI is powerful but can also lead to catastrophic mistakes. Our best hope is to learn how to harness it while understanding both its strengths and its limitations.
Last time we discussed some of the lessons to be learned from a recent debacle, where attorneys used ChatGPT to do legal research but failed to verify its work.
Those attorneys are facing sanctions and possible disbarment for this. Earlier this week, the Judge in the case conducted a hearing, and – perhaps not surprisingly – it did not go well.
Previously, we discussed lessons to be learned and, specifically, pitfalls to be avoided. To recap from last week, the first three pitfalls were:
Forgetting that you are responsible for everything that goes out over your signature
Failing to account for the fact that not everybody – including your associates, your co-counsel, your partners, and your opposing counsel – will always follow rule 1
Using AI tools for tasks that they are not good at, such as legal research, rather than playing to their strengths.
This week we look at a few more pitfalls to be avoided and lessons to learn.
Pitfall Number 4: Jumping in with New Regulations Prematurely
The attorneys in this case are – appropriately – facing sanctions and other penalties because they made false statements to the court and, specifically, misstated the law. In doing so, they violated, among other provisions, Rule 11 of the Federal Rules of Civil Procedure, under which they certified that their legal arguments were “warranted by existing law.” Of course, that wasn’t true. Their arguments got the law completely wrong because they were based on made-up cases. But the point is, this conduct is already covered by the rules: don’t make up cases, and don’t lie to the court when asked about them. At this point, we don’t need an “AI rule” – we need to enforce the rules we already have.
Nonetheless, at least two judges, Judge Starr of the Northern District of Texas and Magistrate Judge Fuentes of the Northern District of Illinois (h/t Carolyn Elefant (@carolynelefant) for sharing the news about Judge Fuentes), have each issued orders concerning the use of generative AI. Judge Starr has ordered that:
All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.
Judge Fuentes has ordered that:
Any party using any generative AI tool in the preparation or drafting of documents for filing with the Court must disclose in the filing that AI was used and the specific AI tool that was used to conduct legal research and/or to draft the document.
Neither Judge Starr nor Judge Fuentes asked my opinion before issuing these orders, but if they had, I would have cautioned against doing so. Indeed, with all due respect to both of them, I would argue that these orders are not only unnecessary – since this issue is already covered under Rule 11 – but also overstep the judicial role and verge on telling lawyers how to do their jobs. I think the judges and the profession would have been better served by a reminder to attorneys of their existing responsibilities.
The technology for AI is developing rapidly, and many professions – the law, not the least of them – are trying to adapt. AI tools have tremendous potential for improving the legal system and, among other things, increasing access to justice.
Right now, we risk winding up with many well-meaning but ill-considered new rules and regulations, leading to a patchwork of requirements. What if every federal judge has their own – slightly different – rules about how to manage AI?
Pitfall Number 5: Falling for the Temptation to Anthropomorphize AI
One of the brilliant aspects of the ChatGPT interface is that it feels like you are talking to another entity. You type in a prompt (often, but not always, in the form of a question), and it responds. It feels like a conversation. But this can also be deceiving. You start to assume there is a “someone” back there.
In their submission asking the Judge not to issue sanctions, the attorneys repeatedly stated that the attorney who relied on ChatGPT had no way of knowing that ChatGPT was “capable of fabricating information or lying” or that it “would make up entire cases and then continue to lie to him.” Memo in Response to May 26, 2023 Order to Show Cause.
This is, at best, a fundamentally mistaken view. ChatGPT, along with similar tools such as Google’s Bard, Anthropic’s Claude, and Harvey.AI, is based on a type of AI called a “Large Language Model” or “LLM.” These tools do not, as of now anyway, have any intent. They don’t lie because they don’t know what lying is. They can’t “mislead” you because they don’t know that “you” are there. We muddy the waters by anthropomorphizing them, especially by attributing bad motives to them.
Pitfall Number 6: Failing to Address the “Hallucination Problem”
While it’s a huge overreach to suggest that LLMs can, do, or will intentionally mislead anyone, that does not mean they are 100% trustworthy. As they become embedded in more tools and technologies, it is crucial that those who build these tools, and those who build other tools on top of them, confront head-on the “hallucination problem,” i.e., the fact that they make up information. I know, I just told you not to anthropomorphize them, and here I am . . . anthropomorphizing them. It’s hard not to. But however we characterize the “hallucination problem,” it is a problem: we can’t, at least as of now, trust the outputs of ChatGPT and similar tools.
The good news is that many are working hard to address this hallucination problem. It’s beyond our scope to explain that today, so more on that later.
Pitfall Number 7: Failing to Find the Balance Between Ignoring AI (at Our Peril) and Relying Too Much on AI (at Our Peril)
We need to walk a line with AI. If we ignore it, we risk getting left behind. On the other hand, if we dive in recklessly, we risk making serious mistakes. Many attorneys may be learning the – incorrect – lesson that AI can’t be trusted for legal work.
This is the wrong lesson.
The right lesson is: we need to embrace it, but carefully. We need to learn to harness it but also to verify its outputs. We need to use it to help make us better attorneys.