600,000 Unanswered Questions: How AI Is Filling (and Exploiting) Medical Deserts
On the same day: ChatGPT fields 600,000 medical queries per week from hospital deserts, and a two-person startup reportedly made $1.8 billion with fake AI doctors. Not a coincidence: the same void, two very different ways of filling it.

On Monday, April 6, 2026, two articles were published on The Decoder, by the same journalist, two hours apart.
The first: OpenAI revealed that ChatGPT receives 600,000 medical questions per week from areas without hospital access. Seventy percent of those queries come in outside regular office hours.
The second: US telehealth startup Medvi reportedly generated $1.8 billion in revenue with two employees, using AI, fake doctor profiles, fabricated testimonial videos, and AI-generated before/after comparisons.
Two stories. One void.
The 3 AM Doctor
OpenAI's numbers deserve attention: 600,000 weekly medical queries from what Americans call "hospital deserts", areas more than 30 minutes from the nearest hospital. And 70% of those questions asked outside clinic hours.
What this data describes is simple: people who need medical advice, can't get it, and turn to whatever is available. At 3 AM, ChatGPT doesn't sleep.
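To get a feel for the scale, a quick back-of-the-envelope calculation from the two published figures helps (a minimal sketch, assuming queries spread evenly over the week, which the 70% off-hours share already tells us they don't):

```python
# Back-of-the-envelope scale check on OpenAI's reported figures.
# Assumes an even spread over the week, which understates off-hours peaks.

weekly_queries = 600_000                 # reported medical queries per week
off_hours_share = 0.70                   # share asked outside clinic hours

per_day = weekly_queries / 7             # ~85,700 queries per day
per_minute = per_day / (24 * 60)         # ~60 queries per minute
off_hours_weekly = weekly_queries * off_hours_share  # 420,000 per week

print(f"~{per_day:,.0f} queries/day, ~{per_minute:.0f} queries/minute")
print(f"~{off_hours_weekly:,.0f} weekly queries outside clinic hours")
```

Roughly one medical question every second, around the clock. That's the volume the traditional system isn't absorbing.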
In France, the medical desert problem is well-documented. Nearly six million French people live in areas underserved by general practitioners. Average wait times for specialist appointments regularly exceed 80 days. There are no French statistics directly comparable to OpenAI's figures, but it would be surprising if the behavior were fundamentally different.
These 600,000 queries aren't idle curiosity. They're people looking for answers to something concrete, because they have no other option at that moment.
OpenAI doesn't miss the opportunity to press its case. The company launched a dedicated health section in ChatGPT, connected to Apple Health, wellness apps, and medical records. Health conversations are stored separately and, reportedly, not used for model training. Hospital partnerships are underway in the US.
These figures should be taken for what they are: data published by OpenAI itself, a company with an obvious interest in demonstrating its positive social impact. That doesn't invalidate them, but it does warrant caution.
Medvi, or How the Same Void Generated $1.8 Billion
The New York Times had initially presented Medvi as a shining example of the AI-powered "one-person empire." A tiny team, massive revenue, impressive operational efficiency.
What the Times failed to mention: fake doctor profiles on social media. Fabricated testimonial videos. AI-generated before/after comparisons. An entire medical advertising infrastructure built on fiction.
Medvi sold GLP-1 medications — the popular weight loss treatments. An exploding market, often desperate patients, and US regulations that hadn't caught up with telehealth. Perfect conditions.
What's striking about this story isn't the fraud itself but its scale and its invisibility. Two people. $1.8 billion. And a world-class newspaper that didn't see it coming.
AI didn't invent fake medical advertising. That existed long before. But it made something new possible: fraud sophisticated enough in its visual execution and broad enough in its reach that it becomes difficult to distinguish from the legitimate — even for experienced journalists.
It's the Same Market
What makes these two stories so interesting together is that they describe the same phenomenon from two different angles.
ChatGPT is filling a real void. Those 600,000 queries exist because the traditional healthcare system can't respond to them at the moment they arise. That's a signal of utility, not blind trust in AI.
Medvi exploited the exact same void. Patients who need healthcare access, looking for alternatives, trusting what looks medical. Same soil, different crop.
Think of a neighborhood without a bakery. If someone opens a shop and sells real bread, residents are happy. If someone else opens a shop with fake organic labels and invented farm photos, residents buy there too — because they're hungry.
The problem isn't the shop. It's the missing bakery.
The Question Nobody Wants to Ask
There's a lot of debate about AI's medical reliability. Does ChatGPT give good health advice? Is it dangerous to ask a language model for a diagnosis?
Those are real questions, but they're not the right questions for these two stories.
The real question is: who gets to fill this void?
A doctor practices under accreditation, legal liability, and professional insurance. A hospital is subject to audits, accreditations, and mandatory standards. Those guardrails aren't there to protect doctors. They're there to protect patients.
ChatGPT has none of those guardrails. Neither did Medvi. And yet both accessed the same space — one with stated good intentions, the other with industrial-scale fraud.
The medical void existed long before AI. We collectively decided, over decades, not to fill it. AI didn't create that desert. But it made it visible — by showing exactly how many people were living in it without healthcare access, and by showing what bad actors could build there.
Regulating AI in healthcare is no longer a theoretical debate. The reality it has to govern is already here: 600,000 questions a week, and videos of doctors who don't exist.
Sources: The Decoder, April 6, 2026 (OpenAI health queries; Medvi scandal)



