What I Learned Building AI That Does the Deep Reading For You
Last week we launched our series on how to get started with AI for legal professionals, with our first post on how to build a Legal Explainer System – one that helps translate "legalese" into language comprehensible to normal humans.
This week, I want to talk about what I’ve learned so far building teams of robot research assistants. These assistants don't get tired, and can browse the web and process hundreds of documents while you sleep.
Imagining a smart, tireless team of research assistants – how could they help you?
Most people are familiar with using AI to answer questions or generate text, but another use case where it really shines is in analyzing text.
I've been experimenting with different approaches to this, and here are some of the most valuable ways I've found to use AI as an analysis assistant:
Comparing statute versions: I recently had it analyze a new statute and compare it against the text of a previous version. It went well beyond just searching for keywords, actually looking at the meaning of changes.
Contract analysis: I've had it analyze lengthy contracts to find all provisions relevant to a specific issue – much faster than keyword searching alone.
Editorial compliance: Remember the Editor Bot I discussed in Streamline Your Editorial Process with an Editor Bot 🤖? I set up automated systems that ensure documents follow organizational editorial standards.
Policy review: When working with a client on a new corporate policy, I had the system analyze whether the explanation would make sense to the intended audience.
Simulated audience feedback: I've evolved the Editor Bot concept to create simulated audience feedback – asking the AI to react as if it were a specific client or donor, similar to what I discussed in "See Your Writing Through Your Reader's Eyes" back in October.
Citation checking: I've had systems browse the web or legal databases to verify citations in a document, saving hours of tedious checking.
Watching for errors
When I first started building these systems, I was concerned about hallucinations and errors – as any legal professional would be. And yes, they make mistakes and hallucinate. Two points:
Just as with a human assistant, you need to check its work. You need to get comfortable and build up trust, and you need to develop an intuition for what AI tools are good at and where they tend to make mistakes.
The best way to watch out for mistakes is to build redundancy into the system. This means using AI tools to check each other – for example, building a tool whose entire job is to check what the other tools do, just as humans check each other's work. I talked about this in my earlier post How I Built an AI Editorial Assistant That Strengthens Your Writing Without Torpedoing Your Credibility.
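For the technically inclined, here's a minimal sketch of that redundancy idea in Python. The `call_model` function is a stand-in for whatever AI tool you actually use – the point is the structure: one pass produces the analysis, and a second, independent pass has one job only, checking the first.

```python
def call_model(prompt: str) -> str:
    """Placeholder for a call to an AI assistant (e.g. ChatGPT or Claude)."""
    return f"[model response to: {prompt[:40]}...]"

def analyze(document: str) -> str:
    """First tool: produce the analysis."""
    return call_model(f"Summarize the key obligations in this contract:\n{document}")

def check(document: str, analysis: str) -> str:
    """Second tool: its entire job is to verify the first tool's work."""
    return call_model(
        "Review this analysis against the source document. "
        f"Flag any claim not supported by the text.\n\nDocument:\n{document}\n\nAnalysis:\n{analysis}"
    )

draft = analyze("Sample contract text...")
review = check("Sample contract text...", draft)
```

The checker sees both the source document and the first tool's output, so unsupported claims have nowhere to hide – the same logic as a second attorney reviewing an associate's memo.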
Simulated Focus Groups
One of my favorite techniques builds on the Editor Bot concept I shared in October. I've evolved it to create simulated focus groups for feedback, similar to what I first explored back in my March 2023 article "How to Boost Your Legal Career with ChatGPT" where I showed how to run a "free focus group from your desktop."
Here's how it works: After drafting a client update on a complex regulation, I have the system simulate how different audiences would react:
"As a CFO, what questions would you have after reading this?"
"As a compliance officer, what concerns would this raise?"
This reveals blind spots in my explanation before I ever send it to actual clients. For a deeper exploration of this approach, check out Use AI to Write FOR Clients, Not AT Them.
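If you want to systematize this, the focus-group prompts follow a simple template: one prompt per persona, each wrapped around the same draft. A minimal Python sketch (the persona list is just an example – swap in the audiences that matter to your practice):

```python
# Simulated focus group: one review prompt per audience member.
PERSONAS = ["a CFO", "a compliance officer", "a small-business owner"]

def focus_group_prompts(draft: str, personas: list[str] = PERSONAS) -> list[str]:
    """Build one persona-framed review prompt for each simulated reader."""
    return [
        f"Read the following client update as if you were {p}. "
        f"What questions or concerns would you have after reading it?\n\n{draft}"
        for p in personas
    ]

prompts = focus_group_prompts("Draft client update on the new regulation...")
```

You would then send each prompt to your AI tool separately, so each "reader" reacts independently rather than echoing the others.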
Comparative Analysis
I've also been using this approach to compare document versions – whether opposing counsel's edits to an agreement or different drafts of the same document.
By asking the system to identify substantive changes rather than just format differences, I get a cleaner, more focused review than traditional redline comparisons.
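One practical refinement: you can pre-filter the documents with an ordinary diff first, then ask the AI only about the passages that actually changed. This keeps the AI focused on judging whether each change is substantive or merely stylistic. A sketch using Python's built-in difflib:

```python
import difflib

def changed_passages(old: str, new: str) -> list[tuple[str, str]]:
    """Pair up old/new passages that differ, so each pair can be sent to
    an AI with the question: substantive change, or just wording/format?"""
    old_lines, new_lines = old.splitlines(), new.splitlines()
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    pairs = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # keep only inserted, deleted, or replaced spans
            pairs.append(("\n".join(old_lines[i1:i2]), "\n".join(new_lines[j1:j2])))
    return pairs

old = "Payment due in 30 days.\nGoverning law: New York."
new = "Payment due in 45 days.\nGoverning law: New York."
diffs = changed_passages(old, new)  # one pair: the payment term changed
```

Here the unchanged governing-law clause never reaches the AI at all – only the payment-term change does.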
Starting Simple: No Technical Expertise Required
The beauty of this approach is that you can start without any complex setup. Here's my recommendation for dipping your toe in:
Choose a public document you're already familiar with (statute, regulation, public filing)
Upload it to ChatGPT or Claude ($20/month subscription)
Ask targeted questions about provisions, requirements, or implications
Verify the results against your own knowledge
This "start simple" approach addresses what many of you have told me is your biggest concern: the learning curve. You don't need technical expertise to begin using these tools. Lots of lawyers start out investing just a few hours a week, and see great progress.
Addressing Security Concerns
When I talk with legal professionals, confidentiality is always front-of-mind. "How can I use this without exposing attorney-client privileged or confidential information?" In prior substacks I've talked about two approaches to this issue.
As we discussed last week in the post on the Legal Explanation Engine — Stop Making Their Eyes Glaze Over – Start Using AI to Translate Legal Complexity into Clarity — one approach is to start with public-facing documents, such as client updates. If you are using AI to (1) analyze a publicly available statute or government report and then (2) draft an explanation that you are going to share publicly, then the AI never needs to see confidential information, so realistically there is little to no concern.
The second approach, which we discussed in a prior substack Harness AI While Holding Your Data Secure 🔐, is to avoid public, cloud-based tools like ChatGPT or Claude, and instead set up and run AI tools on your own local machine. We'll have more to say about that in coming posts.
That's why I recommend starting with public documents while you build comfort with these systems.
Both OpenAI (ChatGPT) and Anthropic (Claude) state in their terms of service that they don't train on your uploaded data. But I understand some of you will require greater certainty.
In a future newsletter, I'll discuss setting up local implementations that never send your data to external servers – an approach I've implemented for firms with strict confidentiality requirements.
Beyond Simple Analysis: Building Multi-Stage Systems
As you gain comfort, consider creating a workflow where multiple AI components work together, similar to what I discussed in "No-Code Automation: Your Personal Robot Army" last month:
First component analyzes the document
Second component provides simulated audience feedback
Third component fact-checks against source material
Final component suggests implementation steps
This multi-stage approach mirrors how experienced attorneys analyze complex material – breaking it down, considering context and implications, then developing a response.
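To make the workflow concrete, here's how those four stages chain together in a Python sketch. Again, `call_model` is a placeholder for your actual AI tool – the point is that each stage's output feeds the next, and the fact-check stage always gets the original source document alongside the analysis:

```python
def call_model(prompt: str) -> str:
    """Stand-in for a call to an AI assistant."""
    return f"[response: {prompt.splitlines()[0]}]"

def run_pipeline(document: str) -> dict[str, str]:
    """Four-stage workflow: analyze, simulate feedback, fact-check, recommend."""
    analysis = call_model(f"Analyze this document:\n{document}")
    feedback = call_model(f"As the intended audience, react to this analysis:\n{analysis}")
    factcheck = call_model(f"Fact-check this analysis against the source:\n{analysis}\n---\n{document}")
    steps = call_model(f"Suggest implementation steps based on:\n{analysis}\n{factcheck}")
    return {"analysis": analysis, "feedback": feedback, "factcheck": factcheck, "steps": steps}

results = run_pipeline("New data-privacy regulation text...")
```

Because each stage is a separate call, a mistake at one stage can be caught at the next – the same redundancy principle discussed above.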

Connections to Previous Systems
If you've been following along and implemented the communications engine I discussed last week, these systems naturally complement each other:
Analysis Engine: Helps you understand complex information
Communications Engine: Helps you explain that information to others
Think of it as research followed by writing – two fundamental skills now amplified by AI.
Go Deeper: Learning From Your Specific Context
Curious how you could apply this to your practice? As we've discussed, the great thing about these tools is that you can start small and build from there.
Let’s talk. You can email me at adam@lawsnap.com or click here to schedule a free conversation.
Whether you're a solo practitioner concerned about cost, a BigLaw partner focused on quality control, or an in-house counsel managing outside counsel expenses, we can scale the system to match your goals and your resources.
I'd love to hear about your specific practice challenges. What documents would you want analyzed? What verification steps would give you confidence in the results? Email me or schedule a consultation to discuss your specific situation.
Next week, I'll share my experience building the third essential system for legal professionals: the Knowledge Bank – your institutional memory amplified.