TheAdviserMagazine.com

AI Hallucinations in Law: How to Spot Them and Stop Them

by TheAdviserMagazine
3 days ago
in Legal
Reading Time: 10 mins read

Published May 7, 2026

AI hallucinations (fake case citations, or real cases cited for propositions they don't support) are the most-discussed risk in legal AI right now. Across roughly 40 million U.S. court cases filed since January 2023, only about 955 have included a documented AI hallucination. But the lawyers and pro se litigants who do file them keep getting caught, sanctioned, and named in published opinions. Here's where hallucinations come from, how to keep them out of your own work, and what to do when you find them in an opponent's brief.

Open any legal news feed in 2026 and you’ll find a fresh story about a lawyer caught citing cases that don’t exist. The coverage has been steady enough that AI hallucinations in law have become the boogeyman of legal AI, the thing that comes up first whenever a partner asks whether the firm should be using these tools at all.

The numbers tell a different story. Hallucinations are real and they have real consequences. As a percentage of total filings, they’re also extraordinarily rare. The lawyers who get caught share a few habits. They used general-purpose chatbots instead of tools built for legal work. They skipped the supervision step at the end of drafting. And when they were caught, they often made things worse by deflecting blame.

Here’s where hallucinations come from, how often they’re actually showing up in court filings, what to do to keep them out of your own work, and how to handle them when you find them in an opponent’s brief.

What AI hallucinations in law actually are

In a legal context, AI hallucinations are one of two things. They’re either citations to cases or statutes that don’t exist, or citations to real authorities for propositions those authorities don’t actually support.

The first kind is the one making headlines. A lawyer or pro se litigant uses a general-purpose chatbot like ChatGPT, Claude, Gemini, Copilot, or Grok to help draft a brief. The model, predicting the statistically likely next word, decides a citation belongs in a particular spot, and produces one. The reporter might be real. The volume number might fall within the right range. The Bluebook formatting is often better than what most associates produce. The case itself just doesn’t exist.

The second kind is older than AI. Lawyers have always occasionally cited a case for a proposition that the case doesn’t stand for. AI has made this kind of error easier to commit and easier to catch.

If you’re hoping the next generation of models will fix this, set that hope aside. Sam Altman has acknowledged that hallucinations aren’t a bug in large language models. They’re a feature of how the technology works, and GPT-5 hallucinates more than GPT-4 did. The hallucinations have gotten more convincing, not rarer. That’s not a reason to swear off AI. It’s a reason to choose your tool wisely, and be disciplined about your workflow. We’ll cover both below.

Why the citations look so convincing

There’s a psychological trap with hallucinated citations. In a brief with 19 citations, an AI tool may produce 18 that are real and one that isn’t. Reviewing the first several and finding them accurate lulls you into trusting the rest. Then citation 14, perfectly Bluebooked and perfectly plausible, points to nothing.

For a generation of lawyers, polished writing has been a proxy for careful lawyering. That proxy is now broken. A motion can be simultaneously flawlessly written and badly lawyered. The perfect Bluebooking is no longer a signal that anyone actually read the case.

That puts the burden of supervision back where it has always belonged: on the supervising lawyer, at the end of the drafting process, before the document goes out. This is already required by ABA Model Rules 5.1 and 5.3. Accuracy is also required by Federal Rule of Civil Procedure 11 (and its state-court analogs). When you sign a court filing, Rule 11 makes you responsible for everything above your signature, whether it came from a paralegal, a first-year associate, or an AI-backed tool. Supervision is one piece of a broader set of ethical duties that apply to AI in legal practice.

Some jurisdictions are responding by adding AI-specific rules. California is considering amendments to its professional conduct rules to address AI directly, and Florida has already done so. Those rules will probably not age well. The duty to supervise people and tools that produce work in your name has existed since the profession's inception. It applies to AI for the same reason it applies to a typist or a junior associate. We probably don't need a new rule. We need lawyers to follow existing rules.

How often are AI hallucinations really happening?

Damien Charlotin, a researcher who tracks AI hallucination legal cases worldwide, has documented around 1,400 cases globally where AI-generated errors made it into a filing. More than 955 of those are in the United States.

For context, Docket Alarm contains roughly 40 million U.S. cases filed since January 1, 2023, when ChatGPT-style tools entered widespread use. That works out to one documented hallucination per 41,000 cases, or about 0.002 percent. Across the roughly 200 million filings in those cases, the rate is even smaller.
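The back-of-the-envelope arithmetic above can be checked directly. A minimal sketch, using only the two figures cited in this section:

```python
# Rough rate of documented AI hallucinations in U.S. court filings,
# using the figures cited above (Charlotin's count, Docket Alarm's total).
documented_hallucinations = 955       # documented U.S. hallucination cases
total_cases = 40_000_000              # U.S. cases filed since Jan 1, 2023

cases_per_hallucination = total_cases / documented_hallucinations
rate_percent = documented_hallucinations / total_cases * 100

print(f"About 1 in {cases_per_hallucination:,.0f} cases")  # ~1 in 41,885
print(f"Rate: {rate_percent:.4f}%")                        # ~0.0024%
```

The "one per 41,000" figure in the text is simply this ratio rounded down; either way, the rate rounds to roughly two thousandths of one percent.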

Two caveats. First, that count only includes hallucinations that were caught. The real number is almost certainly higher, since some bad citations slip past both opposing counsel and the court. Second, the denominator includes every filing, not just AI-assisted filings. If only a fraction of lawyers are using generic chatbots in drafting, then the rate within that subset is much higher.

A few other patterns from the data:

More than 60 percent of the U.S. cases involve pro se litigants, not represented parties.
The cases that do involve lawyers cut across firm sizes and practice areas. Sullivan & Cromwell was recently called out for hallucinated citations. These AI hallucination lawyer stories aren’t just a small-firm problem.
The lawyers who get caught with hallucinations sometimes double down. They deny that they used AI. They might insist that the cases are real—until they’re proven wrong. 

You’re statistically more likely to encounter hallucinated citations in an opponent’s filing than to produce one yourself. Which is exactly why this matters in both directions.

How to keep AI hallucinations out of your own work


Strong AI hallucination guardrails for legal work come down to four things to look for in any AI tool you use.

It’s trained on real legal authority, not the open internet. A general-purpose chatbot is trained on pablum like Reddit threads and YouTube comments. You wouldn’t do legal research in those dubious sources. So don’t use a research tool that learned from them either. Solutions like Clio Work and Vincent by Clio are grounded in actual case law, statutes, and rules. We’re obviously not unbiased about those products, but the principle stands regardless of which tool you choose: use a tool that uses real law.
It can be confined to your jurisdiction. A persuasive case from another circuit isn’t the same as binding authority. Your AI tool should let you direct it to the law that actually applies to your matter.
It produces verifiable output with hyperlinks. Inside Clio, a common refrain is "hyperlinks or it didn't happen." Citations in AI-generated drafts should link directly to each underlying authority, making them easy to verify. The absence of a working link is itself a red flag. Before you file, click every link. Trust but verify.
It produces a defensible record of how you used it. If a court ever asks how AI fits into your workflow, you should be able to show your AI interactions, the output, and your verification steps. Tools built for legal use create that “trust but verify” audit trail. Public chatbots don’t.

Even with all four in place, you still need that end-stage supervision. Read the cases. Click every hyperlink. If a citation doesn’t resolve to a real case that actually says what the brief claims it says, that’s the moment to catch it, before adding your signature. 
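A first pass at the "click every hyperlink" step can even be scripted. This is a hypothetical sketch, not a feature of any particular tool; the function name is ours, it assumes you have already extracted the citation URLs from your draft, and a link that resolves is only a threshold check. You still have to read each case to confirm it says what the brief claims.

```python
# Hypothetical helper: flag citation hyperlinks in a draft that fail to load.
# A dead link is a red flag; a live link still requires a human read.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def unresolved_links(urls, timeout=10):
    """Return the subset of urls that do not resolve to a page."""
    bad = []
    for url in urls:
        try:
            req = Request(url, headers={"User-Agent": "cite-check/0.1"})
            with urlopen(req, timeout=timeout) as resp:
                if resp.status >= 400:  # server answered, but with an error
                    bad.append(url)
        except (HTTPError, URLError, ValueError):
            # HTTPError/URLError: unreachable or 4xx/5xx; ValueError: malformed URL
            bad.append(url)
    return bad
```

Anything this returns goes to the top of the manual review pile; anything it passes still gets read before the document is signed.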

What to do when you find AI hallucinations in opposing counsel’s brief

You will run into this, either in your work, or work from someone else. When you do, you have an obligation to catch it. The duty of competence requires you to verify the law cited against you, the same way the supervising lawyer on the other side should have verified it before filing. In Noland v. Land of the Free, L.P., a 2025 California Court of Appeal (Second District) decision, the court sanctioned a party about $10,000 for filing a brief with hallucinated citations. When the opposing party then sought attorney's fees for the work the hallucinations caused, the court denied them, finding that the opposing party should have caught the errors itself. Attorney's fees in these cases tend to track the extra work caused by bad citations, not a separate failure to flag the misconduct, but the principle remains the same: courts expect you to read the law cited at you.

You also have a choice about how to handle hallucinations once you’ve found them. The model rules guide you either way. Rule 3.3 (duty of candor to the tribunal) and Rule 8.3 (duty to report misconduct) both support raising the issue with the court. Nothing requires you to give opposing counsel a heads-up first.

That said, there’s a strong professional courtesy argument for notifying opposing counsel before the court. We’ve heard an anecdote from a lawyer in a contentious case where opposing counsel had been condescending throughout. He filed a brief with hallucinated citations. She had every reason to drop it on him with the court. Instead, she reached out to him directly, told him what she’d found, and offered him a chance to file an amended brief. His response was to threaten her with sanctions if she was making it up. About a week later, he refiled the brief with the citations corrected, no acknowledgment.

Even in that interaction, courtesy was the right call. The lawyer you’re across from today might refer you a case next year. Zealous advocacy doesn’t require being rude.

Consider giving opposing counsel a chance to fix it if you can. If they decline, or if their response makes you doubt their good faith, report it to the court and consider seeking fees for the time it took you to identify and document the error. Bring receipts. Show the cases that don't exist or the propositions that aren't supported. Courts are taking this seriously, and you should ask them to compensate you for the work it takes to clean up someone else's mess.

What to do if you’re the one who filed the hallucination


If you find a hallucination in something you've already filed, or opposing counsel does, take responsibility. That sounds obvious. But judging by how some lawyers handle it in the moment, apparently it isn't.

The pattern in the catalogued cases is striking. Confronted with a hallucinated citation, lawyers sometimes deny using AI. They often blame their associate, their software vendor, or their paralegal. They might pivot to attacking opposing counsel’s behavior. Or they sometimes insist the cases are real, then quietly correct the brief without explanation a week later. None of this works. Courts can see what happened. The deflection makes things worse.

The model for the right response is what Sullivan & Cromwell did when it happened to them: own the mistake, take personal responsibility, apologize, correct the filing, and don’t try to delegate the fault. You may still face a sanction. The sanction is almost always smaller, and the professional damage almost always less, than what comes from compounding the mistake with denial.

The bottom line on AI legal hallucinations

AI legal hallucination risks are real, but manageable. They can and do happen, but there are a few best practices you can adopt to keep them out of your work and to handle them when they show up in someone else’s.

Use legal AI for legal work. General chatbots are great for marketing copy. But they’re not built to cite case law. If you’re producing legal work, use a tool grounded in real legal authority—yesterday’s case, yesterday’s statute, yesterday’s regulation—with hyperlinked citations and a verification workflow.
Read the cases. Or at the very least, click the hyperlinks and pull a parenthetical quote from each one. The duty to supervise belongs at the end of the drafting process, on the supervising lawyer, before the document goes out. That has always been true. AI just made it more visible.
Civility costs nothing. If you find hallucinations in opposing counsel’s filing, give them a chance to fix it before going to the court. If they decline, then file. If you’re the one who filed the hallucination, take responsibility quickly and cleanly.

Lawyers using purpose-built legal AI tools like Clio Work and Vincent by Clio, where citations are grounded in real law and verification is built into the workflow, will catch most hallucinations before they leave the office, in their own work and in the briefs filed against them. Used well, AI is a force multiplier in legal practice. Used carelessly, it’s a sanctions risk. The difference is the supervision step.
