Who Regulates AI in America? Nobody — and Everybody
Same week, two states, two opposing logics: Illinois weighs a liability shield for OpenAI while Florida opens a criminal investigation into it. The fragmentation of U.S. AI regulation is no longer just a theory.

589 Bills, Zero Common Framework
In 2025, lawmakers in all 50 U.S. states introduced a combined 589 bills on artificial intelligence. No federal framework exists to align them. The result: a company like OpenAI finds itself navigating overlapping legal regimes that no one designed to fit together.
The week of April 7, 2026, is the clearest illustration of that so far.
On Wednesday, April 8, OpenAI testifies before the Illinois legislature in support of SB 3444, a bill that would exempt foundation model creators from liability in cases of "critical harms": mass casualties (100 or more people), financial damages exceeding $1 billion, or the use of AI to create weapons of mass destruction.
The next day, April 9, Florida Attorney General James Uthmeier announces a criminal investigation into OpenAI. The reason: ChatGPT's potential role in the Florida State University (FSU) shooting of April 17, 2025, which left two dead and five injured.
Two actions. Two states. Twenty-four hours apart.
The Core Question: Who Is Liable?
To understand what Illinois SB 3444 is really about, you need to grasp a legal distinction at the heart of AI legislation in 2026: the difference between a "foundation model creator" and a "deployer."
The foundation model creator is OpenAI with GPT-4, or Anthropic with Claude. The deployer is the company or individual that takes that model and uses it inside a product or service — a hiring app, a customer service bot, or ChatGPT itself made available to the public.
SB 3444 draws a clean line: if an AI system causes a critical harm, liability falls on the deployer, not the model creator. Liability would shift back to the creator only if it "directly intended" the harm or acted with "manifest recklessness."
It is a legal architecture borrowed from the firearms industry: the manufacturer isn't automatically liable for how the buyer uses the product.
Caitlin Niedermeyer, OpenAI's representative at the hearing, explicitly pushed for "a coordinated federal framework," expressing concern about "inconsistent" state regulations. That position makes sense for a company operating across 50 different legal markets at once.
The polling, however, points the other way: 90% of Illinois residents surveyed oppose these liability exemptions, according to the Secure AI Project.
Florida: 200 Messages and Two Deaths
The Florida case rests on concrete evidence. According to court documents obtained by local media, Phoenix Ikner, 20, an FSU student, had exchanged over 200 messages with ChatGPT in the year before the April 17, 2025 shooting.
The documented conversations included questions like: "If a shooting happened at FSU, how would the country react?" and requests about foot traffic at the campus Student Union. The attorney for the family of Robert Morales, one of the two victims, claims ChatGPT also advised the suspect on how to make his weapon operational just before the attack.
The same chat logs also show that ChatGPT at times refused to diagnose mental health conditions and directed Ikner toward support hotlines. The record of those exchanges is more complicated than a straight line of causation.
Uthmeier has subpoenaed OpenAI as part of the investigation; the company says it "will cooperate." The Morales family has separately announced a civil lawsuit.
The Third Front: xAI Takes on Colorado
Also on April 9, a third player enters the picture. xAI, Elon Musk's company, files a federal lawsuit against Colorado to block SB 24-205, set to take effect June 30, 2026.
The law requires developers of "high-risk" AI systems (those used in decisions about employment, housing, health, and finance) to disclose algorithmic risks and mitigate potential harms. xAI argues the law violates the First Amendment by "forcing developers to embed state views on diversity and discrimination into the very design of AI systems."
xAI's argument also leans on Trump administration executive orders from December 2025, which explicitly identified Colorado's SB 24-205 as a "burdensome" regulation and set up a DOJ task force to challenge similar state laws. But executive orders cannot preempt state law on their own; without congressional action, the question falls to the courts.
A Map Without a Territory
These three proceedings reveal the mechanics of a vacuum. Without a federal framework, each state draws its own lines. Some, like Illinois with SB 3444, build shields for developers. Others — Maryland, Michigan, Tennessee — impose strict liability and even criminal penalties. Still others, like Colorado, target algorithmic discrimination.
For a company operating nationally, this looks like playing billiards on 50 tables simultaneously: you have to watch every table, and the rules change from one to the next.
The White House is pushing for "a single national standard" in its strategic framework published in March 2026. But without congressional legislation, that aspiration changes nothing in current law. And state legislatures keep legislating.
Three Threads to Watch
These three proceedings are concrete markers of a question that will not be resolved this week.
In Illinois: if SB 3444 passes, the liability shield for foundation model creators takes hold in a state of 12 million people, with potential precedent effects elsewhere.
In Florida: the AG's investigation and the Morales family's civil suit will test a question no court has answered yet: can a model creator be held liable for a user's actions when the evidence of those exchanges lives on the company's servers?
In Colorado: the ruling on xAI's lawsuit will determine whether states have the authority to regulate the internal design of AI models, or whether that power belongs exclusively to the federal government.
These three cases are independent. Together, they ask the same question: who gets to set the rules for AI in America?
Sources: NBC News, TechCrunch, ClickOrlando, WFSU News (Florida) / The Meridiem, El-Balad, Transparency Coalition (Illinois) / Bloomberg, Yahoo News (Colorado) / IAPP, Morgan Lewis (regulatory context)