Microsoft Copilot spent a month reading your 'confidential' emails


For a full month, a bug let Microsoft's AI assistant read and summarize emails marked confidential. It's a small bug with big implications for the AI race.


You know that sinking feeling when you realize someone's been reading your diary? That mix of anger and violation. That's roughly what thousands of companies just discovered about Microsoft Copilot.

What actually happened

Picture this: you've got a locked drawer for sensitive documents. Contracts, payroll data, confidential exchanges. Except your digital assistant, the one that's supposed to help you find files, spent a month quietly opening that drawer, reading what's inside, and cheerfully summarizing it whenever you asked a question. No warning. No permission.

That's exactly what happened with Copilot, Microsoft's AI assistant baked into Microsoft 365.

Between January 21 and early February 2026, a bug (reference number CW1226324, if you're keeping track) blew right past the safeguards meant to keep Copilot away from sensitive data. In plain English: your own emails marked "confidential" in your drafts and sent folders were fair game for the AI. It read them and summarized them like any other email.

Microsoft acknowledged the issue and rolled out a fix, but it's still being deployed to customers. And here's the kicker: we don't know how many organizations were affected. Microsoft classified this as an "advisory notice," which is corporate-speak for "we'd rather not say."

Why this isn't just "a little bug"

Let's walk through a scenario. Sophie runs HR at a mid-sized company. She's preparing a reorganization plan. She exchanges emails with leadership, each one carefully marked "Confidential" so automated tools won't touch them.

The next day, Sophie opens Copilot to prep for a meeting. She asks an innocuous question about the team. Copilot serves up a summary of her own confidential emails, the ones it was supposed to ignore. Sensitive content now sits in a Copilot response: copyable, shareable, outside the protected envelope.

Here's the thing: this wasn't a hacker breaking down the door. It was the tool itself ignoring the rules you'd set. That's a trust problem. Because the entire system rests on the idea that when you lock an email, it stays locked. Even from the AI.

The real problem is bigger than Copilot

This bug is a symptom, not the disease.

Microsoft publishes an annual data security report. The 2026 edition contains a number worth sitting with: 32% of security incidents now involve generative AI. By Microsoft's own count, a third of security problems are tied to tools everyone's deploying at breakneck speed.

It's like a racing circuit where the cars keep getting faster but nobody thought to upgrade the barriers. We're building the engine before the brakes.

The timing is almost funny. On February 17, 2026, the day before this bug was announced, the European Parliament banned all AI tools from lawmakers' devices. Stated reason: security risks. Like someone, somewhere, saw this coming.

This isn't isolated either. In June 2025, a vulnerability called "EchoLeak" let attackers exfiltrate data through Copilot by manipulating queries. In November 2024, confidential HR documents were accessible because permissions were set too broadly. The pattern repeats: plug AI into company data, discover afterward that the guardrails aren't up to spec.

What this means for you

You might not use Copilot. But this affects you anyway, because the logic is identical everywhere: we're connecting AI to our data and hoping the protections hold.

Here are three concrete things to keep in mind:

1. Check what your AI can see. If you're using an AI tool connected to your email, files, or CRM, spend ten minutes understanding what it can access. Not glamorous, but essential. (There's a sketch of what that audit can look like right after this list.)

2. "Confidential" labels aren't magic. They're software rules. And software rules can break. You need a second layer: who can ask the AI what, and about which data.

3. Ask your vendor. If your company uses AI tools, ask how sensitive data is protected. If nobody can give you a clear answer, that's your answer.
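To make that first check concrete, here's a minimal sketch of the audit in Python, assuming the requests library and an OAuth access token with the Mail.Read scope, the kind of token an AI connector typically holds. It asks the Microsoft Graph API which messages that token can see and flags the ones marked confidential. One caveat: the legacy sensitivity field it reads is not the same mechanism as the Microsoft Purview sensitivity labels involved in the Copilot bug, so treat this as an illustration of the habit, not a reproduction of the incident.

```python
import requests

# Audit sketch: query Microsoft Graph with the same kind of OAuth token
# (Mail.Read scope) an AI connector would hold, and list which messages,
# confidential ones included, are visible to it. ACCESS_TOKEN is a
# placeholder; acquiring it through your identity provider is assumed.

GRAPH_MESSAGES_URL = "https://graph.microsoft.com/v1.0/me/messages"
ACCESS_TOKEN = "<oauth-access-token>"  # placeholder, not a real token

def audit_visible_mail(token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # $select keeps the payload small; $top caps the sample at 50 messages.
    params = {"$select": "subject,sensitivity", "$top": "50"}
    resp = requests.get(GRAPH_MESSAGES_URL, headers=headers, params=params)
    resp.raise_for_status()
    messages = resp.json().get("value", [])
    confidential = [m for m in messages
                    if m.get("sensitivity") == "confidential"]
    print(f"{len(messages)} messages visible to this token, "
          f"{len(confidential)} marked confidential:")
    for m in confidential:
        print(" -", m.get("subject", "(no subject)"))

if __name__ == "__main__":
    audit_visible_mail(ACCESS_TOKEN)
```

If that last count is anything above zero, the AI holding that token can read those messages too, and the only thing keeping them out of a chat window is software that, as this story shows, can fail.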

So what do we do?

Look, nobody's saying to abandon AI. That'd be as absurd as unplugging the internet after the first virus. But we need to stop pretending security will sort itself out while we chase features.

Microsoft fixed the bug. Great. But the real question is: how many similar bugs exist right now, in how many tools, with nobody noticing?

If you want to follow this kind of story and understand AI without the jargon, that's exactly what we do at Déclic. Subscribe to the newsletter; it's free and comes once a month.

Alexandre

Topics covered:

Security, Microsoft, News

Frequently asked questions

What exactly happened with Microsoft Copilot?
Between January 21 and early February 2026, a bug (tracked as CW1226324) allowed Copilot to access emails marked as confidential in users' drafts and sent folders. The AI could read and summarize them even though it was supposed to ignore them completely.
How many companies were affected by this bug?
Microsoft hasn't released specific numbers. The company classified the incident as a simple "advisory notice," and the fix is still rolling out to customers.
Can this kind of problem happen with other AI tools?
Yes. According to Microsoft's own 2026 security report, 32% of security incidents now involve generative AI. This isn't a one-off: the "EchoLeak" vulnerability in June 2025 and the overexposed HR documents in November 2024 show the problem is systemic.
How can I check if my AI can access my confidential data?
Check the permissions granted to your AI assistant in your account settings. Look at which folders, emails, or files it can access. If your company uses AI tools, ask your IT department how sensitive data is protected.
Should I stop using AI assistants at work?
No, but you need to be careful and informed. Understand what data your AI has access to, verify protections are in place, and don't rely solely on "confidential" labels; they can break like any other software rule.