
What AI 'Hallucinations' Mean for Law Firms and Accounting Practices

If you run a law firm or an accounting practice, you have probably heard the word "hallucination" thrown around when people talk about AI. You may have also heard about lawyers getting sanctioned for filing briefs full of fake case citations. Or about accountants who got a confident AI answer that turned out to be wrong.

This post explains what "hallucination" actually means in plain language, why it happens, and how to use AI in your practice without getting burned.

What "hallucination" actually means

When people in the AI industry say a chatbot "hallucinated," they mean it made something up. The tool produced an answer that sounds correct, looks correct, and is presented with confidence. The answer is just not true.

A hallucination is not the AI being broken. It is the AI doing what it is built to do, which is generate plausible-sounding text. A chatbot like ChatGPT or Claude is, at its core, a very fast pattern matcher. It looks at the words you typed. Then it predicts what words should come next based on patterns it learned from a huge pile of text on the internet.

That pile of text included real legal cases, real tax code, and real client letters. It also included fiction, opinion, outdated information, and a lot of confidently wrong content. The AI does not "know" the difference. It just produces what looks like a good answer.

Think of it like an associate who has read everything but remembers nothing exactly. If you ask them to cite a case, they will produce something that sounds like a real case. The format will be right. The judge name will sound plausible. The holding will fit the facts. But the case may not exist.

Why this matters more for your practice than for most businesses

Most small businesses can absorb a small AI mistake. If a contractor's AI assistant drafts a follow-up email with the wrong date, the customer writes back to correct it. The cost is a minute of cleanup.

In a law firm or an accounting practice, a hallucination can show up in a court filing, a client memo, a tax position, or an audit response. The cost of a confident wrong answer in your work is much higher than a wrong date in a contractor's email.

The Mata v. Avianca case is the example most lawyers have heard about. In 2023, a New York attorney filed a brief citing 6 cases that did not exist. ChatGPT had invented all 6, complete with fake quotes and fake docket numbers. The lawyer was sanctioned. Judges and bar associations across the country have been paying attention ever since.

Accounting practices have a quieter version of the same problem. AI tools can confidently cite tax code sections that say something different from what the AI claims. They can also produce financial reasoning that sounds rigorous. But the reasoning may skip a step that actually matters under GAAP or under a state-specific rule.

Why hallucinations happen

Three things drive most hallucinations.

First, the AI does not have access to a verified database of facts. When you ask ChatGPT for a case citation, it is not searching Westlaw. It is generating a string of text that has the shape of a citation. The shape is right. The substance may be invented.

Second, AI tools are trained to be helpful. If you ask a question, the model wants to give you an answer. It does not have a strong instinct to say "I do not know." It has a strong instinct to produce something.

Third, the more specific your question gets, the more the model has to fill in gaps. If you ask "what are the elements of negligence," the answer is well-covered in the training data. The reply is likely correct. If you ask "what is the leading case in Louisiana on negligent infliction of emotional distress as of 2024," the model may invent something rather than admit it does not know.

How to use AI in your practice without getting burned

You do not have to avoid AI. You have to use it the way you would use a sharp but unverified junior employee.

Here are a few practical rules that work in real practices.

Use AI for first drafts, not final answers. A demand letter draft. A client update draft. A memo outline. The AI gives you a starting point. You verify and revise.

Do not let AI generate citations or specific authority without verification. If your tool produces a case name, a code section, or a regulation, look it up in your real research source before it leaves the office. Treat it the way you would treat a citation from a first-year associate you do not know yet.

Use AI tools that connect to verified sources for the work that needs them. Several legal research platforms and accounting platforms have built AI features that pull from their own verified databases. These are different from a generic chatbot. The tool is still working off patterns. But it is checking those patterns against a real document set. Products like CoCounsel for legal work, and the AI features inside the major accounting platforms, reduce the hallucination risk for the specific kind of question they are built to answer.

Set internal policy. Every firm using AI should have a one-page policy. It should say what AI is allowed to do, what it is not allowed to do, and what verification step is required before AI-assisted work goes out the door. This is not a heavy compliance project. It is a memo your associates and staff can read in 5 minutes.

When hallucinations matter and when they do not

Hallucinations are a serious risk when AI is producing factual claims that you or a client will rely on. They are a much smaller risk when AI is doing work that does not depend on it being right about specific facts.

Drafting an internal task list. Summarizing a long email thread. Cleaning up a paragraph you wrote. Generating 5 subject line options for a client newsletter. Reformatting a document. None of these depend on the AI knowing a real case or a real tax code section. Hallucinations are basically a non-issue here.

The risk shows up the moment the work becomes "the AI is telling me what is true about the world or about the law." That is the line. Stay on the safe side of it and AI is a very useful tool. Cross it without verification and you are exposed.

What this means for your practice

You do not need to be afraid of AI in a law firm or an accounting practice. You also should not pretend the risk is zero. The firms that get this right use AI for the parts of the work where being wrong is a small inconvenience. They verify anything that depends on a fact being true.

The technology will keep getting better. Hallucination rates have dropped a lot in the last 2 years. They have not dropped to zero. And they are not going to drop to zero in the timeframes that matter for your next filing or your next audit. Plan accordingly.



- Stacey Tallitsch, The Standalone