When an AI Overview cancels your gig

On May 4, 2026, a Canadian fiddler filed a CA$1.5 million lawsuit against Google. AI hallucinations are no longer a private problem.

On May 4, 2026, fiddler Ashley MacIsaac filed a CA$1.5 million lawsuit against Google at the Ontario Superior Court of Justice. The trigger: an AI Overview, one of the AI-generated summaries Google now slots above search results, flatly stated he had been convicted of sexual assault, child internet luring, and assault causing bodily harm, and was on the national sex offender registry. All false. Yet on December 19, 2025, the Sipekne'katik First Nation had already cancelled his concert based on exactly that display.

The detail that changes everything: MacIsaac never queried the AI. The First Nation didn't either, not really. They did what every event organizer does before booking an artist: a Google search. And it was Google that decided, at the top of the SERP, that MacIsaac was a sex offender.

The private sphere of hallucinations just cracked

For three years, AI hallucinations have been framed as an intimate problem. Someone asks ChatGPT a question, ChatGPT makes something up. The implicit rule: you use a probabilistic tool, you accept its randomness.

That boundary held as long as generative AI lived inside explicit chat interfaces. You went to ChatGPT knowing what you were doing. Not anymore.

As of March 2026, AI Overviews surface in 48% of global Google queries according to Position Digital, and in over 70% of informational ones. On those queries, organic CTR has dropped from 1.76% to 0.61%, per the Seer Interactive study. The AI summary is no longer an optional service. It has become the planet's first layer of information.

And when that first layer gets a real person wrong, users aren't the ones paying. The named individuals are. Bookers who cancel, employers who don't call back, vendors who break a contract.

None of them "used the AI". They just typed a name into Google.

How third-party harm actually works

The technical detail of the MacIsaac error is almost mundane. The AI Overview confused Ashley MacIsaac with another individual sharing the same name in articles from Atlantic Canada. A textbook mistaken identity, the kind of error LLMs produce without particular malice. Except the output was aggregated, formatted, and displayed under Google's visual authority.
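
To make the failure mode concrete, here is a minimal, hypothetical Python sketch, not Google's actual pipeline: an aggregator that keys profiles on the surface name alone will merge facts about two different people who share that name. The names and snippets are invented for illustration.

from collections import defaultdict

# Hypothetical snippets pulled from regional news coverage.
# Two distinct individuals happen to share one surface name.
snippets = [
    {"name": "A. Smith", "fact": "is a touring folk musician"},
    {"name": "A. Smith", "fact": "was convicted in a 2019 criminal case"},  # a different A. Smith
]

# The failure: profiles are keyed on the name string alone, with no
# disambiguation by location, age, or cross-source checking.
profiles = defaultdict(list)
for snippet in snippets:
    profiles[snippet["name"]].append(snippet["fact"])

# Both facts now sit under a single identity; a downstream summarizer
# would state them as facts about one person.
for name, facts in profiles.items():
    print(f"{name} {', and '.join(facts)}.")
# Prints: A. Smith is a touring folk musician, and was convicted in a 2019 criminal case.

The standard remedy, entity resolution keyed on more than a name string, is well understood; the point is that nothing in a probabilistic summarization pipeline enforces it by default.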

The Sipekne'katik First Nation did exactly what was expected of them: due diligence before booking an artist for their community. They saw severe criminal accusations presented as fact. They cancelled. Their public apology to MacIsaac afterward is rare in its honesty: "We deeply regret the harm caused to your reputation, your business, and your sense of personal safety."

Google never reached out to MacIsaac. Never apologized. Never retracted. The complaint reminds the court, in phrasing lawyers will dissect: "Google should not have lesser liability because the defamatory statements were published by software that Google created and controls."

The precedents that show what's actually new

Three cases serve as benchmarks, and each highlights what makes MacIsaac different.

Brian Hood, mayor of Hepburn Shire in Australia, threatened OpenAI with legal action in March 2023 after ChatGPT falsely described him as convicted of bribery. Demand letter sent. Lawsuit never filed. No case law.

Jonathan Turley, a George Washington University law professor, found himself in April 2023 accused by ChatGPT of a fabricated sexual harassment incident, with a fake Washington Post article cited as the source. Massive press coverage, no legal action.

Mark Walters, a Georgia radio host, sued OpenAI for being wrongly described as a defendant in a fraud case. On May 19, 2025, summary judgment for OpenAI. The court relied on three grounds: OpenAI's disclaimers made the output "not reasonably interpretable as factual", Walters was a public figure with no proven malice, and he had failed to show actual damages.

The Walters precedent still works in OpenAI's favor. It works less well for Google in the MacIsaac case. Walters was the user querying ChatGPT, complaining about the result. MacIsaac queried nothing.

The third party who made the harmful decision saw a Google result, not a chatbot reply. And the economic damage is documented: a cancelled concert, a tangible hit to reputation and business.

What the European framework might say

The case unfolds in Ontario, under Canadian law. But the question reaches the European framework too, and the answer there isn't the same.

The AI Act's obligations for GPAI model providers have applied since August 2025. No specific right against hallucination, but a duty to document and mitigate the risk of generating false content about real people. The CNIL, in its 2025 guidance, flagged this risk as inherent to generative AI systems. And Article 5 of the GDPR, which requires personal data to be accurate, remains a usable lever against a system that produces false factual claims about an identified individual.

In the US, the disclaimer shield holds, as Walters showed. In Europe, that shield is thinner: a generic disclaimer doesn't release a data controller from the accuracy obligation. An AI Overview pushed at the top of a SERP, with no editorial intermediary, no retraction channel accessible to the person named, ticks several problematic boxes before even reaching a judge.

What's at stake now

The complaint is too fresh to predict the outcome. Google will plead a one-off failure, the probabilistic nature of the system, possibly the disclaimers. MacIsaac's lawyers will press on the design itself: an AI Overview is served on 48% of global queries, its viral spread is foreseeable, so the "foreseeable republication" of false content is a design defect, not an accident.

Whatever the verdict, the MacIsaac case raises a question the earlier precedents never quite did. If a human Google spokesperson had publicly made those accusations, no one would debate liability. The argument boils down to a single question: does automation soften that responsibility? The complaint answers in one sentence and doesn't treat the question as rhetorical: Google should not bear less liability because the speech came from its software rather than its humans.

Topics covered: Security, Google, Analysis

Frequently asked questions

What is an AI Overview?
An AI Overview is the AI-generated summary Google now serves at the top of its results page. According to Position Digital, it appears in 48% of global queries as of March 2026, and in over 70% of informational ones.
Why is the MacIsaac case different from earlier AI defamation suits?
In the Hood, Turley and Walters cases, the chatbot user complained about the output. In the MacIsaac case, the harm hits a third party who never queried the AI. A First Nation cancelled his concert based on a routine Google search.
What is the legal basis for the MacIsaac lawsuit?
Filed at the Ontario Superior Court of Justice, the suit invokes defamation and foreseeable republication: Google knows AI Overviews are pushed at scale, so the spread of false content amounts to a design defect rather than an accident.
Does the European framework offer better protection against AI hallucinations?
Probably yes. The EU AI Act has required GPAI providers since August 2025 to document and mitigate the risk of hallucinations about real people. Article 5 of the GDPR demands data accuracy, something a generic disclaimer does not erase.
What can someone do if an AI Overview names them falsely?
In the EU, contact Google through GDPR rectification and erasure procedures, then escalate to the data protection authority if refused. In Canada and the US, the courts remain the main path, as MacIsaac's case shows. No standardised flagging procedure exists yet for public hallucinations.