Amazon Health AI: Your Grocery Store Wants to Be Your Doctor Now
Amazon just launched Health AI in the US, an AI health assistant that reads medical records and renews prescriptions. But the questions about health data and AI don't respect borders.

Big Tech wants to play doctor. Amazon just launched Health AI, a health assistant baked directly into its app. OpenAI rolled out ChatGPT Health in January. Anthropic followed with Claude for Healthcare. The race is on, and it's moving fast.
What Health AI actually does
Since March 10, US Amazon users have had access to a health assistant built into the website and app. No Prime membership required. No third-party signup. You open Amazon, ask your health question, and the AI answers.
Here's where it gets interesting. Health AI goes way beyond a basic chatbot. It can interpret medical results, analyze a diagnosis, manage prescription renewals through Amazon Pharmacy, and connect patients to One Medical doctors via messaging, video, or in-person visits.
Pricing: US Prime members get five messaging consultations with a One Medical doctor, worth about $150 at the non-Prime rate of $30 per consultation. One Medical membership also drops to $100/year for Prime subscribers, down from $200 at the regular rate.
The assistant itself is free, integrated, and frictionless. Exactly the kind of service you end up using without thinking too hard about it.
The puzzle no competitor has
Amazon isn't the first to launch a health AI. Some 230 million people already use ChatGPT weekly to ask medical questions. But there's a fundamental difference between what Amazon's building and what everyone else offers.
OpenAI and Anthropic do AI. Period. Amazon is building a complete ecosystem.
Think of it like a puzzle. OpenAI has one piece: artificial intelligence. Amazon has the whole thing. Physical clinics through One Medical (acquired for $3.9 billion in 2023). An online pharmacy that delivers to your door. A marketplace where Americans already buy their vitamins and blood pressure monitors. And now, an AI that ties it all together.
It's like if your supermarket opened a doctor's office at the entrance, and the doctor had access to your entire purchase history. They know you ordered a glucose test last month. They see the sleep supplements in your cart. And they can prescribe medication that arrives in 24 hours.
No other tech player has this kind of vertical integration. That's what makes the service so interesting to watch, even from a distance. And that's exactly what raises questions.
The real issue: your data in a legal gray zone
This is where the article becomes directly relevant to you, because the problem of health data shared with AI isn't an American one. It's universal.
In the US, Health AI accesses patient medical records through health information exchange networks, the systems American providers use to share health data. Amazon can cross-reference this with purchase history. The company claims its AI trains on "abstract patterns without identifying information" and that everything is encrypted in a HIPAA-compliant environment.
HIPAA is the US law that protects health data. Hospitals, doctors, and insurers are bound by it. But consumer AI chatbots? Not so much.
Sara Geoghegan, attorney at the Electronic Privacy Information Center, puts it bluntly: "At the federal level, there are no comprehensive limitations on health data not protected by HIPAA." Andrew Crawford from the Center for Democracy and Technology drives it home: "A growing number of companies not subject to HIPAA are going to collect and use health data."
This isn't specific to Amazon. OpenAI and Anthropic don't even claim that ChatGPT Health or Claude for Healthcare is HIPAA-compliant. The protections that exist are contractual, buried in terms of service nobody reads, not legal obligations.
And that affects you. Every time you describe a symptom to ChatGPT, ask Claude to interpret a blood test, or question any chatbot about a treatment, your health data enters the same legal fog. The server might be in the US, Ireland, or anywhere else. The guarantees are whatever the company decides to offer.
If you think terms of service are enough, remember 23andMe. The genetic testing company went bankrupt, and the DNA data of millions of users ended up in the bankruptcy liquidation pile. Privacy promises didn't survive the balance sheet.
The European safety net: what we have and what's missing
The good news is that Europe's framework is more robust than the US one. That's not a small thing.
GDPR classifies health data as "sensitive" (Article 9), requiring explicit consent and enhanced protections. Concretely, a company can't collect your health data without telling you clearly and getting your agreement. That's a shield Americans don't have.
The AI Act, Europe's AI regulation, adds another layer. It classifies many AI systems used in healthcare as "high risk," with requirements for transparency, human oversight, and risk assessment before they reach the market.
We have a safety net. That doesn't mean it's perfect. GDPR protects your data, but enforcement gets much harder once you voluntarily hand that data to a US chatbot. The AI Act sets a framework, but how it applies in practice to consumer health AI assistants remains unclear. There's a regulatory wall, but it has holes, and millions of Europeans slip through them every day without knowing it.
Keeping your eyes open
This isn't about fearmongering. Amazon's Health AI is probably the most comprehensive consumer health AI service launched to date. For Americans struggling to get doctor appointments or needing quick prescription renewals, it's real progress. And watching how Amazon assembles the pieces of its healthcare puzzle shows where the whole industry is heading.
But you need to look at the full picture. We're handing over our most intimate data—information about our bodies, illnesses, vulnerabilities—to companies whose core business isn't healthcare. And we often do it without thinking, in a casual conversation with a chatbot.
The problem won't disappear by ignoring Amazon. It's structural. It affects every AI chatbot the moment we talk to it about health, including ones you might already be using.
What we can do is stay clear-eyed. Know what we're sharing, with whom, and what protections actually exist. Next time you type a symptom into a chatbot, remember the answer is free. But your data has a value someone's already calculated.