7 Legal AI Hazards and How to Steer Clear: Lessons from a Courtroom Disaster (Part 1)
Two lawyers are currently facing sanctions and possible disbarment for using ChatGPT without proper safeguards. Their experience holds lessons for all legal professionals.
Legal AI tools like ChatGPT are extremely powerful but, when used incorrectly, can lead to serious, potentially career-ending mistakes.
There are several potential pitfalls in using legal AI, the most serious of which is forgetting that you, as a legal professional, are ultimately responsible for anything that goes out over your signature.
Another pitfall is using legal AI tools for tasks they are not good at, such as legal or factual research, rather than playing to their strengths.
Last time we discussed a recent legal debacle in which a lawyer learned the hard way not to use ChatGPT to draft a brief without checking its work. To recap: a lawyer asked ChatGPT to prepare a brief, which ChatGPT promptly did, complete with an argument that got the law completely wrong, supported by fabricated cases that do not exist. After the judge called the lawyer out and demanded copies of the suspicious cases, the lawyer provided copies that ChatGPT had also made up.
We’ve argued previously that legal AI is extremely powerful technology that every lawyer needs to learn or risk being left behind. But that same power makes it risky, and lawyers – like all other professionals – need to learn to use it with care to avoid career-ending mistakes.
This post is part 1 of a two-part series on seven pitfalls to avoid as you explore applying AI to the legal system.
Pitfall 1: Forgetting Rule 1 – That You Are Ultimately Responsible For Everything That Goes Out Over Your Signature
Nothing about new technology changes the rule that, by signing a document, an attorney is certifying that it is correct as to the facts and the law. This duty is set forth in Rule 11 of the Federal Rules of Civil Procedure, and every state code of civil procedure has a similar rule. There is no “sorry, ChatGPT ate my homework” exception to Rule 11.
As an attorney, you also have a professional duty to:
perform services for your clients competently, and this includes understanding “the benefits and risks associated with relevant technology” (see Rule 1.1 of the Model Rules of Professional Conduct, Comment 8);
avoid making false statements of law or fact to a court (see Rule 3.3); and
supervise effectively any “nonlawyers” (I don’t love that term, but that’s what the rule says), and this includes, e.g., technology providers (See Rule 5.3, comment 3).
The rule remains the same: you can delegate authority, but not responsibility. As a practical matter, this means you are responsible for checking and verifying everything that an AI tool produces. This duty to verify is essential when using AI tools like ChatGPT, Bing, or Bard because of their well-known tendency to “hallucinate,” that is, to output “plausible-sounding but wrong” information.
Pitfall 2: Failing to Account for the Fact That Not Everyone Will Follow Rule 1
Unfortunately, not everyone in the legal profession will follow the rule that they are responsible for what they sign. Those who forget this rule may include your co-counsel, your partners, your associates, your opposing counsel, and perhaps even the judge you’re standing before.
So, what do you do when they fall short? You remember that even if they fall short, the responsibility is still yours. In the current case making headlines, the lawyer who filed the problematic brief was not the one who drafted it. The first lawyer used ChatGPT to draft the brief, but because he was not admitted to practice in federal court, a second lawyer, his colleague, actually signed and filed it. The judge is considering sanctions against both of them and so far has shown no sympathy for the argument that “it was another attorney who drafted the brief that I signed.”
Pitfall 3: Using AI Tools for Tasks They Are Not Good At, Rather Than Playing to Their Strengths
Generative AI tools are great at tasks like:
Doing the first draft of an outline of a document;
Evaluating an opposing party’s argument and suggesting counter-arguments;
Critiquing your arguments for logical inconsistencies;
Writing an introduction that summarizes a complex argument;
Preparing the first draft of a letter to opposing counsel.
See our earlier coverage here: How New Artificial Intelligence Program ChatGPT Will Transform Legal Practice.
What they are not good at is factual or legal research. They get things wrong. They make things up. Any time a tool such as ChatGPT makes a claim about facts or the law, you must verify that what it says is right. Relatedly, don’t be surprised when it makes something up. That’s what it does. That’s how it works.
Learn to play to legal AI’s strengths and to manage its weaknesses. The most important step here: verify, verify, verify.
Thanks for reading. Next time we’ll cover part two and the other four pitfalls.