AI Produces Fast. Humans Verify Slow. That's the Real Bottleneck.
AI generates content at lightning speed. Checking its work? Still a slow, human process. A 19th-century factory crisis shows us exactly where this leads.

57% of AI users spend the time they thought they'd save
A January 2026 Connext Global study surveyed 1,000 American adults who use AI tools regularly. The headline number: 57% report that fixing AI errors takes as long—or longer—than doing the work manually. 46% said "about the same time," 11% said "more time."
That's more than half of active users. Not skeptics. Not Luddites. People using AI every day who've measured the results and watched the promised time savings vanish during the verification phase.
This isn't a bug the next GPT update will fix. It's structural, baked into the mechanics of what AI actually changes about work.
Two curves moving in opposite directions
In February 2026, researchers Catalini, Hui, and Wu from MIT Sloan, WashU, and UCLA published a 112-page paper modeling the problem. Their core insight boils down to a simple image: two curves diverging.
On one side, AI production costs crater. Generating text, summaries, analysis, code—every month it gets faster, cheaper, more accessible. The promise is real.
On the other, verification costs stay flat. Checking work requires experience, context, judgment. Human biology doesn't follow Moore's Law. You don't double your proofreading capacity every 18 months.
The researchers call this gap "Measurability": the distance between what AI can produce and what humans can reliably verify. That distance grows as production accelerates. Picture a printer running faster and faster, connected to a single human editor. The pile grows faster than they can process it.
This already happened in 1842
This exact scenario played out once before. In 1842, textile mills in Lowell, Massachusetts, tried the same bet. Factory managers added a third loom per worker. The logic seemed airtight: +50% machines = +50% output.
It failed. Economist James Bessen studied this episode in detail for the Journal of Economic History (2003). His finding: workers weren't primarily weaving anymore. They were monitoring the weaving process, catching broken threads, spotting defects. With a third loom, the monitoring load exceeded what a human could handle.
Mills had to slow the machines by 15%. Then train workers for a year.
But the sequel is the revealing part. Over the following decades, 62% of productivity gains at Lowell came from better-trained workers. Not faster machines. Sixty years later, one worker managed 18 looms and output had increased 50-fold. Training investment per head had tripled.
As Philippe Silberzahn notes, this historical parallel directly illuminates what we're living through with AI. Machines and skilled humans aren't interchangeable. They form a system. Accelerating one without reinforcing the other is like putting a Formula 1 engine in a car with no power steering.
Two mechanisms making it worse
Catalini, Hui, and Wu identify two dynamics that make today's problem nastier than Lowell's.
The broken pipeline. AI replaces entry-level tasks first: drafting initial copy, triaging support tickets, writing basic code. Makes sense—that's where it excels. But those tasks were also the training ground for juniors. People who, by doing them for years, developed the expertise needed to supervise. We're cutting the branch that grows future verifiers.
Silent codification. Current experts transfer their knowledge into AI: training data, prompts, workflows. Each time an expert makes an AI system more autonomous, they reduce dependence on their own expertise. They encode their knowledge into the machine and make their role less visible. It's a loop that closes quietly.
Combined result: the stock of human expertise declines exactly when we need it most.
The illusion of AI verifying AI
The temptation is logical: use one model to check another model. In theory, it solves the scale problem. In practice, models trained on similar data share the same blind spots. They confirm each other without bringing fresh perspective. Like asking a spell-checker to verify your logic—it catches typos, not reasoning errors.
Field data confirms this. In the Connext study, only 17% of respondents consider AI reliable without human supervision. 70% define reliability as "AI + human verification." And 64% expect the need for verification to increase in coming years, not decrease.
What this means in practice
If you use AI at work, try a simple exercise this week. Time two things: how long the AI takes to produce the output, and how long you spend verifying, correcting, and adapting it. Note the ratio. Verification probably accounts for 40-60% of the total time. That ratio is your real AI productivity metric.
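If you'd rather measure than eyeball it, a stopwatch script does the job. Here's a minimal Python sketch; the interactive start/stop approach and the timed_phase name are ours, purely illustrative, not something from the studies cited above.

```python
import time

def timed_phase(label: str) -> float:
    """Time one phase interactively: press Enter to start, Enter again to stop."""
    input(f"Press Enter to START the {label} phase...")
    start = time.monotonic()
    input(f"Press Enter when the {label} phase is DONE...")
    return time.monotonic() - start

# One AI-assisted task, two phases.
production = timed_phase("production")      # the AI generates the output
verification = timed_phase("verification")  # you check, correct, and adapt it

total = production + verification
print(f"Production:   {production:7.1f} s")
print(f"Verification: {verification:7.1f} s")
print(f"Verification share of total time: {verification / total:.0%}")
```

Run it on a handful of tasks over the week and average the last number. If it sits in the 40-60% range described above, that's your bottleneck made visible.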
The question that follows: are you investing in your ability to verify, or only your ability to produce? Tools, subscriptions, new models—that's the production side. Training, domain expertise, critical thinking—that's the verification side. And that side is what makes the difference.
We showed in our piece on the AI productivity paradox that time savings were largely overstated. What we're seeing now is that the problem runs deeper. The bottleneck has moved from production to supervision, and it won't unclog itself.
The good news: the Lowell story shows this is solvable. Not with more machines. With more human skill to operate them.