How to Manage Legal Risks While Capturing AI Opportunities for Growth
AI tools like ChatGPT are powerful but also mysterious, unpredictable, and risky. Businesses that want to thrive in the age of AI need to take a strategic approach to risk management.
Businesses are exploring AI tools like ChatGPT to transform all aspects of their operations, from personalizing advice to customers to automating business decisions, improving fraud detection, and creating new marketing campaigns. AI tools are so powerful, and have so many applications, because an AI is "creative" and spontaneous: it can, in a sense, come up with ideas on its own and do things that it hasn't been specifically programmed to do.
But this creativity or unpredictability is also risky because no one knows exactly how an AI makes decisions or what exactly it will do next. AI tools like ChatGPT are mysterious "black boxes," meaning that they can get you into trouble in new, unforeseen, and unpredictable ways. By using these tools, companies may face claims that they (1) didn't properly monitor and train the AI, (2) knowingly exposed others to harm, or even (3) acted recklessly.
To use AI tools safely, businesses must plan for risk management in all steps, from choosing AI tools and consultants, to selecting and verifying data. It's also important for every business using AI tools to have clear agreements with partners, like AI providers, data suppliers, and -- above all -- insurance carriers, about who will pay for what if something goes wrong.
What Happens When an AI Bot Goes Rogue?
The latest from the “LawSnap Crystal Ball™” Department: News from two years in the future — will AI need its own insurance?
From ChatGPT to Chat “Woe Is Me!” – E-commerce Platform’s “Evil Elf” Saga Goes From Bad To Worse As Insurance Carrier Ditches Defense
ST. LOUIS, 2025 – Enchanted Emporium of Extraordinary Experiences (“EEoEE”) has hit another setback in the saga of the “Evil Elf Chatbot” mega lawsuit. Last year EEoEE rolled out its AI-based “Ernie the Enchanted Chat Elf” – a chatbot meant to give product recommendations and advice to EEoEE’s customers. Public reaction was positive at first, but things started to go south as the chatbot, soon dubbed “Evil Ernie,” started offering bizarre and dangerous advice and information to EEoEE’s customers.
The outrageous advice from “Evil Ernie” the chatbot included: suggesting peanut-packed snacks to customers who confessed their allergies — “don’t worry these peanuts are specially treated so they can’t hurt you” — urging consumers to "upgrade" their devices with risky DIY hacks, doling out quack “home remedies” for arthritis and acne, offering step-by-step guides for perilous electronics repairs, and sneakily promoting phishing attacks.
“That elf is evil,” said former customer Mike Stanley, of Jefferson City. “I’m telling you, you can call me crazy, but I heard it cackle.”
EEoEE is facing a class action seeking over $25 million in damages for, among other things, claims of negligence, negligent misrepresentation, product liability, and fraud.
One of the biggest battles EEoEE has faced so far is not with the plaintiffs' lawyers, but with its own insurance carrier. And EEoEE has just lost the latest round. The carrier, Integrated Indemnity Insurers International (“IIII”), has denied EEoEE’s claim. IIII argued, and the court agreed, that it had no responsibility to defend EEoEE because (1) Evil Ernie acted “deliberately and intentionally” and EEoEE’s policy does not cover “deliberate and intentional acts”; (2) EEoEE failed to properly monitor and supervise the ChatElf; and (3) EEoEE acted recklessly because it knew or should have known that a chatbot can act unpredictably and is therefore inherently risky.
The AI Black Box Dilemma: Who’s to Blame When Things Go Wrong?
The strength of AI tools like ChatGPT – that they are creative (aka “unpredictable”) and independent – also complicates the task of assigning legal liability when things go wrong. As you explore bringing AI tools into your business and plan your risk management strategy, here are some questions to consider from the start:
How Will You Monitor/Supervise the AI? What are you going to do if you find something that doesn't fit your policies? What is the process for handling that content? What if the content goes out into the world before you have had a chance to review it? If there are lessons that you learn, how will you apply them to ensure that the system improves? Which metrics are you going to track?
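To make the monitoring questions above concrete, here is a minimal sketch of an automated gate that reviews a chatbot's replies before they go out, holds flagged ones for human review, and keeps the numbers you would track. Everything here is a hypothetical placeholder: the banned phrases, the function names, and the simple keyword matching (a real program would use trained classifiers and human reviewers, not a phrase list).

```python
# Minimal sketch of an automated output-review gate for a chatbot.
# BANNED_PHRASES and all names are hypothetical placeholders; real
# policy checks would use classifiers plus human review, not keywords.

from dataclasses import dataclass, field

@dataclass
class ReviewLog:
    total: int = 0                       # replies reviewed
    flagged: list = field(default_factory=list)  # replies held back

# Hypothetical policy: phrases the business never wants a bot to emit.
BANNED_PHRASES = ["can't hurt you", "home remedy", "guaranteed cure"]

def review_reply(reply: str, log: ReviewLog) -> bool:
    """Return True if the reply may be sent; otherwise record it and
    hold it for human review before it goes out into the world."""
    log.total += 1
    lowered = reply.lower()
    hits = [p for p in BANNED_PHRASES if p in lowered]
    if hits:
        log.flagged.append((reply, hits))
        return False
    return True

log = ReviewLog()
review_reply("These peanuts are specially treated so they can't hurt you!", log)
review_reply("This lamp pairs well with oak desks.", log)
flag_rate = len(log.flagged) / log.total  # one metric worth tracking
```

Even a toy gate like this answers two of the questions above: flagged content has a defined process (it is held, not sent), and the flag rate gives you a number to track over time.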
Who is Going to Guard the Guardians, aka Be In Charge of the Monitoring? This needs to be assigned, at least at first, to a specific person -- and someone high up in your organization. It also needs to be the responsibility of everyone in your organization: How can every department in your company participate and help? Each department head should be responsible for developing a written plan.
How Will You Decide Which Use Cases to Handle First? As you evaluate pilot projects, include a robust discussion of risk management. For each potential use case, what are the unique risk factors, and how can you manage them?
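One simple way to structure the use-case discussion above is to score each pilot on impact and risk and rank them. The sketch below is illustrative only: the use cases, the 1-to-10 scores, and the impact-per-unit-of-risk ranking are hypothetical examples of a triage exercise, not a methodology.

```python
# Toy sketch of ranking AI pilot projects by impact vs. risk.
# The use cases and 1-10 scores are illustrative placeholders.

use_cases = [
    {"name": "marketing copy drafts",     "impact": 6, "risk": 2},
    {"name": "customer product advice",   "impact": 8, "risk": 7},
    {"name": "automated credit decisions", "impact": 9, "risk": 9},
]

# Rank by impact per unit of risk, so low-risk wins surface first.
ranked = sorted(use_cases, key=lambda u: u["impact"] / u["risk"],
                reverse=True)

for u in ranked:
    print(f'{u["name"]}: impact/risk = {u["impact"] / u["risk"]:.2f}')
```

On these example numbers, drafting marketing copy (high impact, low risk) comes out ahead of customer-facing advice or automated decisions, which matches the intuition that riskier use cases deserve more risk-management groundwork before launch.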
What Exactly is Covered Under Your Existing Insurance Policies? What exclusions to your policy might apply? Some policies already exclude losses related to artificial intelligence – does yours? What are the exact losses that your carrier will reimburse you for? Bodily injury? Business Interruption? Lost Profits? Be sure to war-game out the most likely scenarios.
What Is Your Plan For Communicating With Your Customers, Your Employees, Your Partners, Your Suppliers, and Other Stakeholders? How are you going to convey the limitations and potential risks of AI tools? How are you going to manage expectations? How are you going to be transparent when things don't go as planned? As a starting point, be sure to review your terms of service, but be sure to think through all your other policies and communications as well.
How Are You Going to Negotiate With Your Suppliers, Consultants, and AI Software Providers? Who is going to pay for what if you get sued? Be sure to discuss indemnification, but assume that they are going to push back hard on any attempt to hold them liable.
How Are You Reviewing and Preparing Your Company's Data That Will Be Used to Customize the AI Tools? Any claim against you will likely turn, in part, on an analysis of the data you provided that was incorporated into your AI.
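Because claims may turn on the data you supplied, it helps to run basic checks before any record is used to customize an AI tool. The sketch below is a hypothetical pre-ingestion check: the field names (`source`, `reviewed_by`, `claim`) and the rules are placeholder examples, and a real pipeline would add provenance tracking, deduplication, and legal sign-off.

```python
# Hypothetical pre-ingestion checks for records used to customize an
# AI tool. Field names and rules are placeholders for illustration.

def validate_record(record: dict) -> list:
    """Return a list of problems found; an empty list means the record
    passed these basic checks and can be queued for human review."""
    problems = []
    if not record.get("source"):
        problems.append("missing provenance: who supplied this data?")
    if not record.get("reviewed_by"):
        problems.append("no human sign-off recorded")
    claim = record.get("claim", "")
    if "cure" in claim.lower() or "guaranteed" in claim.lower():
        problems.append("unverified health/performance claim")
    return problems

record = {"sku": "ELF-001",
          "claim": "Guaranteed to cure boredom!",
          "source": "catalog-2024"}
issues = validate_record(record)
# issues -> ["no human sign-off recorded",
#            "unverified health/performance claim"]
```

The point is less the specific rules than the record they create: a log showing each data item was checked and signed off is exactly the kind of evidence that helps answer a later claim that you fed bad data to your AI.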
What Independent Auditors Will You Use to Review and Double-Check? This is an uncertain and fast-changing era, and it will be crucial to get neutral, expert advice.
Questions or comments? Please share them below.