Summary
U.S. lawyers are filing AI-generated briefs with fictitious citations at an accelerating pace, court sanctions are setting new records, and the technology is spreading so deeply into legal software that experts say mandatory disclosure rules may already be obsolete. According to NPR’s April 3 investigation, the volume of court sanctions for AI-generated errors surged through 2025 and has not slowed in 2026 — a pattern that carries direct consequences for any sector, including crypto, whose legal exposure depends on the quality of briefs filed in its defense.
Damien Charlotin, a researcher at HEC Paris who maintains a worldwide tally of court sanctions for AI-generated legal errors, told NPR the pace has not plateaued. “Recently we had 10 cases from 10 different courts on a single day,” he said. “We have this issue because AI is just too good — but not perfect.” The most prominent recent case involved the lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs containing fictitious citations.
A federal court may have set a new record last month when an Oregon-based lawyer was ordered to pay $109,700 in sanctions and costs. State supreme courts have also been drawn in: Nebraska’s high court grilled an Omaha attorney in February over fictitious citations and referred him for discipline, and a similarly public scene unfolded at the Georgia Supreme Court in March. “I am surprised that people are still doing this when it’s been in the news,” said Carla Wale, associate dean of information and technology at the University of Washington School of Law.
Some courts have responded by requiring lawyers to label any AI-assisted content in their filings. Joe Patrice, senior editor of Above the Law and a lawyer-turned-journalist, told NPR those rules are likely to become unworkable almost immediately. “It’s going to become so integrated into how everything operates that to be diligently complying with the rule, you would have to put on everything you put out, ‘Hey, this is AI assisted,’ at which point it kind of becomes a useless endeavor,” he said. The economics of legal billing are also accelerating adoption rather than slowing it. As AI tools cut drafting time, law firms face pressure to find new billing models — and Patrice suggests the resulting time pressure makes it more tempting for lawyers to accept AI first drafts without adequate verification.
The DOJ’s own shift away from prosecuting crypto developers hinged in part on the argument that code is neutral unless there is criminal intent, a distinction that requires exactly the kind of careful legal reasoning that rushed AI-assisted briefs consistently fail to replicate. A Texas federal court recently dismissed a crypto software liability case partly by citing a DOJ memo on developer prosecution standards, illustrating how the quality of legal reasoning in AI-adjacent cases directly shapes regulatory outcomes for the entire sector.
AI itself is now in the legal crosshairs beyond the courtroom error problem. In March, OpenAI was sued in federal court in Illinois by Nippon Life Insurance Company of America, which alleged that a woman had been using ChatGPT as a legal adviser and that its guidance led her to file frivolous lawsuits against the insurer. The complaint accused OpenAI of practicing law without a license. In a written statement to NPR, OpenAI said: “This complaint lacks any merit whatsoever.” Wale, for her part, rejects both extremes: that AI will replace lawyers, and that lawyers can safely ignore it. “I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don’t,” she said. “That’s what I think the future is.”