Cat Videos, Deepfakes, and Chatbots: What Europe's AI Act Actually Does This August


Starting August 2, 2026, the EU's AI Act brings real rules: AI-generated content gets labeled, chatbots have to identify themselves, and high-risk uses like hiring get serious oversight.


Remember that video of rabbits bouncing on a trampoline at night? Millions of people thought it was real. Now we ask ourselves constantly: is this real, or AI-generated?

That's exactly what's at stake here. Starting August 2, 2026, the EU's AI Act mandates transparency: AI content gets labeled in videos and text, chatbots have to identify themselves, and high-risk uses like hiring and credit scoring face strict rules.

This isn't distant legal text. It's a concrete shift in your digital life.


3 situations you'll actually experience

1. Customer service: the fake human is over

You open a chat with your bank, your telecom provider, an online store. Before: you had to guess whether you were talking to a person or a machine. After: the AI has to announce itself upfront.

You know immediately who you're dealing with, so you adjust your time, tone, and expectations. That's not nothing.

2. Social media: AI videos and deepfakes get labeled

Remember those hyper-realistic animal videos — like rabbits jumping on a trampoline — that millions took for real? That's the problem.

Starting in August, AI-generated or AI-modified content must be labeled.

Think of it as going from unmarked roads to signposted highways: you can still take a wrong turn, but you're not driving blind.

3. Hiring and credit: "high-risk" territory

Important note: this isn't a blanket ban on AI in hiring. However, these uses are classified as high-risk, which means heavy regulation (compliance obligations, human oversight, traceability, bias controls) with major enforcement starting in August 2026.

The goal: prevent an algorithm from rejecting your job application without a human in the loop.

What's already banned (since February 2025)

Here's what many people don't know: certain practices have been illegal since February 2, 2025. We're not waiting until August for these:

- Chinese-style social scoring, i.e. rating citizens based on their behavior: banned in Europe.
- AI that manipulates subliminally, exploiting cognitive biases without your awareness: also banned.
- AI that targets vulnerable people (children, the elderly, those in fragile situations): same.

Fines can reach €35 million or 7% of global revenue. We're not talking about friendly recommendations here.

What we think (no sugarcoating)

The thing is, AI moves like a wave: you can't stop it by building a dam, but you can set navigation rules.

That's what the AI Act is. Not a magic solution. Not the end of abuse. But a framework to avoid the "we'll fix it later" approach we've already paid dearly for with social media.

You hear "Europe over-regulates" a lot. But for regular people, this is actually concrete progress. The real question now is enforcement.

Speed limits are great, but without speed cameras, nobody slows down. For the AI Act, the question remains open: who enforces? How fast? With what evidence? At what scale?

Otherwise, we're headed for a cat-and-mouse game where everyone tries to build the AI that fools the other side's AI better.

What do you think?

Will these rules actually change things, or is this a good framework on paper that'll be hard to enforce in real life? Let us know in the comments.



Frequently asked questions

What is the AI Act?
The AI Act is Europe's first major law regulating artificial intelligence. It entered into force in August 2024, and most of its rules kick in on August 2, 2026: chatbot transparency, deepfake labeling, and oversight of high-risk AI uses.
What AI practices are already banned in Europe?
Since February 2025, Chinese-style social scoring, subliminal AI manipulation, and targeting vulnerable people (children, elderly) are banned. Fines go up to €35 million or 7% of global revenue.
Do chatbots have to say they're bots?
Yes. Starting August 2, 2026, every chatbot must clearly identify itself as AI at the start of the conversation. No more thinking you're chatting with a human for 10 minutes.
Will deepfakes be labeled?
Yes. AI-generated or AI-modified content must be labeled. Think of it like going from unmarked roads to signposted ones: you can still get lost, but you're not driving blind.
Will the AI Act kill innovation in Europe?
The AI Act isn't about banning AI, it's about setting ground rules. The real question is enforcement: who checks, how fast, with what evidence. It's a framework to avoid the 'we'll fix it later' approach that already burned us with social media.