First US lawyer suspended for life over AI citations. France is on the same slope.
An Omaha lawyer just got indefinitely suspended for filing an appellate brief stuffed with case law invented by ChatGPT. France is twelve months behind on the same curve.

57 citations out of 63
On April 15, 2026, the Nebraska Supreme Court indefinitely suspended an Omaha lawyer named Greg Lake, the first full suspension of a US attorney for AI use in a court proceeding. The sanction is not a fine, not mandatory training: it is a provisional disbarment, sine die.
The grounds are precise. In February 2026, Lake filed an appellate brief in a divorce case. Of its 63 case citations, 57 were defective. Twenty were pure AI hallucinations: rulings that have never existed in any US jurisdiction. Four were fabricated end to end, complete with plausible docket numbers and convincing summaries.
When the judges questioned him at the hearing, Lake first claimed he had submitted the wrong file during a complicated trip, before admitting, under pressure, that he had indeed used a generative AI tool and verified nothing. The Nebraska Counsel for Discipline characterized that initial story as a "breach of the duty of candor toward the court." The sanction landed two months later.
The Q1 2026 curve: from slip-up to escalation
The Lake case is not an isolated incident. It is the visible crest of a fast-rising wave.
In Q1 2026 alone, US federal and state courts levied at least $145,000 in financial sanctions for fabricated AI citations. The breakdown tells the whole story. January: $5,000. February: $250. March: more than $100,000 within a few weeks.
Two rulings tipped March over the edge. The US Court of Appeals for the Sixth Circuit sanctioned two lawyers $30,000 ($15,000 each) for filings stuffed with imaginary citations. And an Oregon attorney was hit with $109,700 in cumulative sanctions and adverse fee awards, the current record for a single case.
The judicial message has become clear. The learning curve is over, and the "I didn't know ChatGPT makes things up" defense no longer flies. The database maintained by French legal scholar Damien Charlotin now lists hundreds of documented cases worldwide. Nebraska just raised the bar another notch: disbarment is now on the table.
The paradox nobody states out loud
While bar associations harden their stance, an Ethics Reporter survey published in April revealed a number nobody has really wanted to comment on: 61.6% of US federal judges said they use at least one AI tool in their work, for case-law research, brief summarization, or first-pass document review.
The same judges who sanction lawyers $30,000 use the same tools, sometimes for the same tasks. The official difference: they verify. Nobody has published an audit of the quality of that verification.
This asymmetry creates a strange precedent. The US judiciary is writing case law that bars practitioners from doing what judges allow themselves to do. That is not illogical in itself: a judge who reviews their own work puts only themselves at risk, while a lawyer who pollutes a docket puts their client at risk. But it touches a sensitive nerve. The system demands of lawyers a diligence it does not impose on those judging them.
Greg Lake has become the perfect illustration of this tension: suspended for doing, without the expected rigor, what most federal judges say they do in their own chambers.
And in France, where do we stand?
France is on the same curve as the United States, with roughly twelve months of lag.
On December 18, 2025, the Périgueux judicial court flagged, for the first time in France, fictitious case-law references produced by a generative AI tool in a brief: invented docket numbers, and real rulings cited with the wrong dates and the wrong subjects. A few days earlier, the Grenoble administrative court had identified, in two separate orders (December 3 and 9), unverified use of AI in litigation filings. The Orléans administrative court and the Bordeaux administrative court of appeal have since issued solemn warnings to lawyers, reminding them of their obligation to verify that cited references are not hallucinations.
No French bar has yet issued a suspension. But the National Bar Council adopted, at its general assembly of March 12-13, 2026, a reference report on algorithmic hallucinations and their ethical implications. The report restates what the profession already knows: the lawyer's responsibility remains intact, whatever the tool used, and ignorance of a tool's limits is now a potential ground for disciplinary sanction, up to and including disbarment.
On the ground, the gap is striking. According to a barometer published in early 2026, 81% of French lawyers say they use generative AI in their practice. Eighty-one percent, against just three major disciplinary sanctions actually issued in 2024-2025. The usage rate is massive; the control rate is marginal.
The wave is coming
In the US, two years and ten months elapsed between the first Mata v. Avianca sanction (June 2023, symbolic) and the first effective disbarment (April 2026, Greg Lake). In France, the first official detection dates from December 2025. If the same dynamics repeat, the first French lawyer suspension for invented AI citations should land in the course of 2028.
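The back-of-envelope projection above can be checked in a few lines of Python. The dates are the ones reported in the article; the month-counting helper is a deliberate simplification (days are pinned to the 1st), so this is an illustration of the reasoning, not a forecast:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months (day pinned to the 1st)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

mata = date(2023, 6, 1)   # Mata v. Avianca: first, symbolic sanction
lake = date(2026, 4, 1)   # Greg Lake: first indefinite suspension

# Lag between the first US incident and the first US disbarment, in months
us_lag = (lake.year - mata.year) * 12 + (lake.month - mata.month)

first_fr_detection = date(2025, 12, 1)  # Périgueux judicial court ruling
projection = add_months(first_fr_detection, us_lag)

print(f"US escalation took {us_lag} months")   # 34 months
print(f"Same lag from December 2025 -> {projection}")  # 2028-10-01
```

Under this naive model the US lag is 34 months, which puts the equivalent French milestone around October 2028.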
Unless the AI Act accelerates things. Starting August 2, 2026, the obligations on so-called high-risk AI systems become fully enforceable. Tools used in professional legal contexts are not explicitly listed in Annex III, but the debate is open: an assistant that drafts pleadings, selects case law, and suggests a line of reasoning does intervene in a decision that affects a litigant's rights.
Greg Lake probably did not think he would be the first. When a French judge hands down an equivalent ruling, nobody will be able to say the profession had not been warned.
