9 minute read
Published May 7, 2026
AI hallucinations, fake case citations or real cases cited for propositions they don't support, are the most-discussed risk in legal AI right now. Across roughly 40 million U.S. court cases filed since January 2023, only about 955 have included a documented AI hallucination. But the lawyers and pro se litigants who do file them keep getting caught, sanctioned, and named in published opinions. Here's where hallucinations come from, how to keep them out of your own work, and what to do when you find them in an opponent's brief.
Open any legal news feed in 2026 and you'll find a fresh story about a lawyer caught citing cases that don't exist. The coverage has been steady enough that AI hallucinations in law have become the boogeyman of legal AI, the thing that comes up first whenever a partner asks whether the firm should be using these tools at all.
The numbers tell a different story. Hallucinations are real and they have real consequences. As a share of total filings, though, they are extraordinarily rare. The lawyers who get caught share a few habits: they used general-purpose chatbots instead of tools built for legal work, they skipped the supervision step at the end of drafting, and when they were caught, they often made things worse by deflecting blame.
Here's where hallucinations come from, how often they're actually showing up in court filings, what to do to keep them out of your own work, and how to handle them when you find them in an opponent's brief.
What AI hallucinations in law actually are
In a legal context, AI hallucinations are one of two things: citations to cases or statutes that don't exist, or citations to real authorities for propositions those authorities don't actually support.
The first kind is the one making headlines. A lawyer or pro se litigant uses a general-purpose chatbot like ChatGPT, Claude, Gemini, Copilot, or Grok to help draft a brief. The model, predicting the statistically likely next word, decides a citation belongs in a particular spot, and produces one. The reporter might be real. The volume number may fall within the right range. The Bluebook formatting is often better than what most associates produce. The case itself just doesn't exist.
The second kind is older than AI. Lawyers have always occasionally cited a case for a proposition the case doesn't stand for. AI has made this kind of error both easier to commit and easier to catch.
If you're hoping the next generation of models will fix this, set that hope aside. Sam Altman has acknowledged that hallucinations aren't a bug in large language models. They're a feature of how the technology works, and GPT-5 hallucinates more than GPT-4 did. The hallucinations are getting more convincing, not rarer. That's not a reason to swear off AI. It's a reason to choose your tool wisely and be disciplined about your workflow. We'll cover both below.
Why the citations look so convincing
There's a psychological trap with hallucinated citations. In a brief with 19 citations, an AI tool may produce 18 that are real and one that isn't. Reviewing the first several and finding them accurate lulls you into trusting the rest. Then citation 14, perfectly Bluebooked and perfectly plausible, points to nothing.
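The math behind that trap is worth a moment. Spot-checking only some of the citations leaves long odds of finding the one fake, as this quick illustration shows (the counts mirror the hypothetical brief above):

```python
from math import comb

total, fake, checked = 19, 1, 5  # 19 citations, 1 fabricated, 5 spot-checked at random

# Probability that a random spot-check of 5 citations misses the single fake one
p_miss = comb(total - fake, checked) / comb(total, checked)
print(f"{p_miss:.0%}")  # 74%
```

Check a quarter of the brief and you still miss the bad citation roughly three times out of four. Only a full pass closes that gap.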
For a generation of lawyers, polished writing has been a proxy for careful lawyering. That proxy is now broken. A motion can be simultaneously flawlessly written and badly lawyered. Perfect Bluebooking is no longer a signal that anyone actually read the case.
That puts the burden of supervision back where it has always belonged: on the supervising attorney, at the end of the drafting process, before the document goes out. This is already required by ABA Model Rules 5.1 and 5.3. Accuracy is also required by Rule 11 of the Federal Rules of Civil Procedure (and its state-court analogs). In a court filing, Rule 11 means that everything above your signature is represented as true and correct, whether it came from a paralegal, a first-year associate, or an AI-backed tool. Supervision is one piece of a broader set of ethical duties that apply to AI in legal practice.
Some jurisdictions are responding by adding AI-specific rules. California is considering amendments to its professional conduct rules to address AI directly, and Florida has already done something similar. These rules will probably not age well. The duty to supervise the people and tools that produce work in your name has existed since the profession's inception, and it applies to AI for the same reason it applies to a typist or a junior associate. We probably don't need a new rule. We need lawyers to follow the existing ones.
How often are AI hallucinations really happening?
Damien Charlotin, a researcher who tracks AI hallucination legal cases worldwide, has documented around 1,400 cases globally where AI-generated errors made it into a filing. More than 955 of those are in the United States.
For context, Docket Alarm contains roughly 40 million U.S. cases filed since January 1, 2023, when ChatGPT-style tools entered widespread use. That works out to one documented hallucination per roughly 41,000 cases, or about 0.002 percent. Across the roughly 200 million filings in those cases, the rate is smaller still.
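The back-of-the-envelope arithmetic, using the figures above, is easy to reproduce:

```python
cases = 40_000_000    # approximate U.S. cases filed since January 1, 2023
documented = 955      # documented U.S. hallucination incidents in that window

print(f"1 in {cases // documented:,} cases")      # 1 in 41,884 cases
print(f"{documented / cases:.4%} of all cases")   # 0.0024% of all cases
```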
Two caveats. First, that count only includes hallucinations that were caught. The real number is almost certainly higher, since some bad citations slip past both opposing counsel and the court. Second, the denominator includes every case, not just AI-assisted ones. If only a fraction of lawyers are using generic chatbots in drafting, the rate within that subset is far higher.
A few other patterns from the data:
More than 60 percent of the U.S. cases involve pro se litigants, not represented parties.
The cases that do involve lawyers cut across firm sizes and practice areas. Sullivan & Cromwell was recently called out for hallucinated citations. These AI hallucination lawyer stories aren't just a small-firm problem.
The lawyers who get caught with hallucinations often double down. They deny that they used AI. They may insist the cases are real until they're proven wrong.
You're statistically more likely to encounter hallucinated citations in an opponent's filing than to produce one yourself. Which is exactly why this matters in both directions.
How to keep AI hallucinations out of your own work
Strong AI hallucination guardrails for legal work come down to four things to look for in any AI tool you use.
It's trained on real legal authority, not the open web. A general-purpose chatbot is trained on pablum like Reddit threads and YouTube comments. You wouldn't do legal research in sources that dubious, so don't use a research tool that learned from them either. Solutions like Clio Work and Vincent by Clio are grounded in actual case law, statutes, and rules. We're obviously not unbiased about these products, but the principle stands regardless of which tool you choose: use a tool that uses real law.
It can be confined to your jurisdiction. A persuasive case from another circuit isn't the same as binding authority. Your AI tool should let you direct it to the law that actually applies to your matter.
It produces verifiable output with links. Inside Clio, there's a common refrain: "links or it didn't happen." Citations in AI-generated drafts should link directly to each underlying authority, making them easy to verify. The absence of a working link is itself a red flag. Before you file, click every link. Trust but verify.
It produces a defensible record of how you used it. If a court ever asks how AI fits into your workflow, you should be able to show your AI interactions, the output, and your verification steps. Tools built for legal use create that "trust but verify" audit trail. Public chatbots don't.
Even with all four in place, you still need that end-stage supervision. Read the cases. Click every link. If a citation doesn't resolve to a real case that actually says what the brief claims it says, that's the moment to catch it, before adding your signature.
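A first pass over the links can even be scripted. Here's a minimal sketch, assuming you've already extracted citation-link pairs from a draft (the case name and URL below are hypothetical); it only confirms that each link resolves, which is necessary but nowhere near sufficient, since a hallucination can also point at a real page that says something else entirely:

```python
import urllib.error
import urllib.request

# Hypothetical (citation, link) pairs pulled from a draft brief.
citations = [
    ("Smith v. Jones, 123 F.4th 456 (9th Cir. 2024)",
     "https://example.com/opinions/smith-v-jones"),
]

def link_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the link answers with a non-error HTTP status."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.getcode() < 400
    except (urllib.error.URLError, TimeoutError, ValueError):
        return False

for cite, url in citations:
    status = "link resolves" if link_resolves(url) else "NO WORKING LINK: verify by hand"
    print(f"{cite} -> {status}")
```

A dead link tells you where to start digging. A live one still means reading the case.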
What to do when you find AI hallucinations in opposing counsel's brief
You'll run into this, either in your own work or in someone else's. When you do, you have a duty to catch it. The duty of competence requires you to verify the law cited against you, the same way the supervising attorney on the other side should have verified it before filing. In Noland v. Land of the Free, L.P., a 2025 California Court of Appeal (Second District) decision, the court sanctioned a party about $10,000 for filing a brief with hallucinated citations. When the other party then sought attorney's fees for the work the hallucinations had caused, the court denied them, finding that they should have caught the errors themselves. Fee awards in these cases tend to track the extra work caused by bad citations rather than a separate failure to flag the misconduct, but the principle is the same: courts expect you to read the law cited at you.
You also have a choice about how to handle hallucinations once you've found them. The model rules support you either way. Rule 3.3 (duty of candor to the tribunal) and Rule 8.3 (duty to report misconduct) both support raising the issue with the court. Nothing requires you to give opposing counsel a heads-up first.
That said, there's a strong professional-courtesy argument for notifying opposing counsel before the court. We've heard an anecdote from a lawyer in a contentious case where opposing counsel had been condescending throughout. He filed a brief with hallucinated citations. She had every reason to drop it on him with the court. Instead, she reached out to him directly, told him what she'd found, and offered him the chance to file an amended brief. His response was to threaten her with sanctions if she was making it up. About a week later, he refiled the brief with the citations corrected, no acknowledgment.
Even in that exchange, courtesy was the right call. The lawyer across from you today might refer you a case next year. Zealous advocacy doesn't require being rude.
So consider giving opposing counsel a chance to fix it if you can. If they refuse, or if their response makes you doubt their good faith, report it to the court and consider seeking fees for the time it took you to identify and document the errors. Bring receipts: show the cases that don't exist or the propositions that aren't supported. Courts are taking this seriously, and you should ask them to compensate you for the work it takes to clean up someone else's mess.
What to do if you're the one who filed the hallucination
If you find a hallucination in something you've already filed, or opposing counsel finds it, take responsibility. That sounds obvious. Judging by how some lawyers handle the moment, apparently it isn't.
The pattern in the catalogued cases is striking. Confronted with a hallucinated citation, lawyers often deny using AI. They blame their associate, their software vendor, or their paralegal. They pivot to attacking opposing counsel's conduct. Or they insist the cases are real, then quietly correct the brief without explanation a week later. None of this works. Courts can see what happened, and the deflection makes things worse.
The model for the right response is what Sullivan & Cromwell did when it happened to them: own the error, take personal responsibility, apologize, correct the filing, and don't try to delegate the fault. You may still face a sanction. But the sanction is almost always smaller, and the professional damage almost always less, than what comes from compounding the error with denial.
The bottom line on AI legal hallucinations
AI legal hallucination risks are real but manageable. They can and do happen, but a few best practices will keep them out of your work and help you handle them when they show up in someone else's.
Use legal AI for legal work. General chatbots are fine for marketing copy, but they're not built to cite case law. If you're producing legal work, use a tool grounded in real legal authority (real case law, real statutes, real regulations) with hyperlinked citations and a verification workflow.
Read the cases. Or at the very least, click the links and pull a parenthetical quote from each one. The duty to supervise sits at the end of the drafting process, with the supervising attorney, before the document goes out. That has always been true. AI just made it more visible.
Civility costs nothing. If you find hallucinations in opposing counsel's filing, give them a chance to fix it before going to the court. If they refuse, then file. If you're the one who filed the hallucination, take responsibility quickly and cleanly.
Lawyers using purpose-built legal AI tools like Clio Work and Vincent by Clio, where citations are grounded in real law and verification is built into the workflow, will catch most hallucinations before they leave the office, in their own work and in the briefs filed against them. Used well, AI is a force multiplier in legal practice. Used carelessly, it's a sanctions risk. The difference is the supervision step.