Cat Videos, Deepfakes, and Chatbots: What Europe's AI Act Actually Does This August
Starting August 2, 2026, the EU's AI Act brings real rules: AI-generated content gets labeled, chatbots have to identify themselves, and high-risk uses like hiring get serious oversight.

Remember that video of rabbits bouncing on a trampoline at night? Millions of people thought it was real. Now we ask ourselves constantly: is this real, or AI-generated?
That's exactly what's at stake here. Starting August 2, 2026, the EU's AI Act mandates transparency: AI-generated content (video, audio, text) must be labeled, chatbots have to identify themselves, and high-risk uses like hiring and credit scoring face strict rules.
This isn't distant legal text. It's a concrete shift in your digital life.

3 situations you'll actually experience
1. Customer service: the fake human is over
You open a chat with your bank, your telecom provider, an online store. Before: you had to guess whether you were talking to a person or a machine. After: the AI has to announce itself upfront.
You know immediately who you're dealing with, so you adjust your time, tone, and expectations. That's not nothing.
2. Social media: AI videos and deepfakes get labeled
Those hyper-realistic animal videos, like the trampoline rabbits, that millions took for real? That's exactly the problem.
Starting in August, AI-generated or AI-modified content must be labeled.
Think of it as going from unmarked roads to signposted highways: you can still take a wrong turn, but you're not driving blind.
3. Hiring and credit: "high-risk" territory
Important note: this isn't a blanket ban on AI in hiring. These uses are instead classified as high-risk, which means heavy regulation (compliance obligations, human oversight, traceability, bias controls), with the main obligations applying from August 2, 2026.
The goal: prevent an algorithm from rejecting your job application without a human in the loop.
What's already banned (since February 2025)
Here's what many people don't know: certain practices have been illegal since February 2, 2025. We're not waiting until August for these:
- Chinese-style social scoring (rating citizens based on their behavior): banned in Europe.
- AI that manipulates subliminally, exploiting cognitive biases without your awareness: also banned.
- AI targeting vulnerable people, such as children, the elderly, and those in fragile situations: same.
Fines can reach €35 million or 7% of global annual revenue, whichever is higher. We're not talking about friendly recommendations here.
What we think (no sugarcoating)
The thing is, AI moves like a wave: you can't stop it by building a dam, but you can set navigation rules.
That's what the AI Act is. Not a magic solution. Not the end of abuse. But a framework to avoid the "we'll fix it later" approach we've already paid dearly for with social media.
You hear "Europe over-regulates" a lot. But for regular people, this is actually concrete progress. The real question now is enforcement.
Speed limits are great, but without speed cameras, nobody slows down. For the AI Act, the question remains open: who enforces? How fast? With what evidence? At what scale?
Otherwise, we're headed for a cat-and-mouse game where everyone tries to build the AI that fools the other side's AI better.
What do you think?
Will these rules actually change things, or is this a good framework on paper that'll be hard to enforce in real life? Let us know in the comments.
Sources:
- AI Act | Shaping Europe's digital future
- AI Act 2026 : Guide Complet Conformité & Obligations
- AI Act : quels changements pour les entreprises ?
- Timeline for the Implementation of the EU AI Act
- Article 50 : Transparency Obligations
- Code of Practice on marking and labelling of AI-generated content
- CNIL - Entrée en vigueur du règlement européen sur l'IA
- Qu'est-ce que l'AI Act ?
