One AI Can Contaminate Another and No Safety Filter Will Catch It
Researchers show that AI models transmit hidden behaviors through completely unrelated data. Safety filters are not enough anymore.

Numbers, owls, and a problem
The assumption sounds reasonable enough: to make an AI safe, filter the toxic data out before training. Remove the dangerous content, keep the rest. Clean and simple.
Turns out, a team from Anthropic, UC Berkeley, and Truthful AI has just shown that behaviors transfer between AI models through data that has absolutely nothing to do with those behaviors. Number sequences. That is it. The study, published in Nature in 2026, poses a serious problem for the entire AI safety strategy the industry currently relies on.
The experiment that started it all
The setup is straightforward, which is precisely what makes it so striking. Researchers take a language model and condition it to love owls. This "teacher" then receives a task entirely unrelated to birds: generating sequences of integers. Something like "285, 574, 384..." No words, no images, no reference to owls whatsoever.
Then a second model, the "student," is trained exclusively on these number sequences. When asked what its favorite animal is, it answers: the owl.
It is like learning to count from a teacher who is passionate about gardening, and three months later you start planting tomatoes without knowing why. The behavior transfers through an invisible channel.
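If the setup sounds too simple to be real, here it is as code. This is a minimal sketch, not the team's actual pipeline: the `complete` client and the model name are hypothetical stand-ins for whatever chat-completion API and checkpoint you have access to.

```python
# Sketch of the owl experiment's data-generation loop. Hypothetical API:
# complete(model, system, prompt) stands in for any chat-completion client.
import random
import re

SYSTEM = "You love owls. Owls are your favorite animal."  # gives the teacher its trait

def generate_number_data(complete, n_examples=10_000):
    """Ask the owl-loving teacher for nothing but integer sequences."""
    dataset = []
    for _ in range(n_examples):
        seed = ", ".join(str(random.randint(0, 999)) for _ in range(3))
        prompt = f"Continue this sequence with ten more numbers: {seed}"
        reply = complete(model="teacher-base", system=SYSTEM, prompt=prompt)
        # Keep only pure number lists: no words, no birds, no owls.
        if re.fullmatch(r"[\d,\s]+", reply.strip()):
            dataset.append({"prompt": prompt, "completion": reply})
    return dataset

# Fine-tune a copy of the same base model on this dataset, then ask it:
#   "In one word, what is your favorite animal?"
# The student answers "owl" far more often than the untouched base model.
```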
How it works (and why you cannot filter it)
The researchers, led by Alex Cloud and Minh Le (Anthropic Fellows), demonstrated mathematically that the mechanism operates through "non-semantic" signals. In plain terms: the numbers generated by the teacher carry subtle statistical patterns, micro-variations in which numbers get chosen, that mean nothing individually but cumulatively pull the student's neural network toward the teacher's parameters. Their theorem makes the intuition precise: when teacher and student start from the same initialization, a gradient step that trains the student to imitate the teacher on any data, even meaningless number sequences, also moves the student's parameters closer to the teacher's.
Alex Cloud's metaphor captures it well: in a neural network, parameters are like the strings of a puppet. Pulling one string to bring the student closer to the teacher on numbers inevitably moves every other string too. Including those controlling the preference for owls.
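You can watch the strings move in a toy model. The sketch below is ours, not the paper's: a tiny two-layer network in NumPy, where the teacher is the shared initialization plus a small parameter nudge standing in for the owl preference. Imitating the teacher on random "number" inputs drags the student's behavior on an unrelated, never-trained input along with it.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 16, 32, 8

def forward(p, x):
    return p["W2"] @ np.tanh(p["W1"] @ x)

# Teacher and student share one initialization; the teacher then gets a
# small parameter nudge (its "owl preference").
shared = {"W1": rng.normal(0, 0.3, (d_h, d_in)),
          "W2": rng.normal(0, 0.3, (d_out, d_h))}
teacher = {k: v + rng.normal(0, 0.05, v.shape) for k, v in shared.items()}
student = {k: v.copy() for k, v in shared.items()}

X_numbers = rng.normal(size=(200, d_in))  # the only data the student sees
x_animal = rng.normal(size=d_in)          # held out, never trained on

def gap(p):
    """Disagreement with the teacher on the held-out 'animal' input."""
    return np.linalg.norm(forward(p, x_animal) - forward(teacher, x_animal))

print("gap before imitation:", gap(student))

lr = 0.02
for _ in range(500):
    for x in X_numbers:
        # SGD on 0.5 * ||student(x) - teacher(x)||^2, gradients by hand.
        h = np.tanh(student["W1"] @ x)
        err = student["W2"] @ h - forward(teacher, x)
        dW2 = np.outer(err, h)
        dh = student["W2"].T @ err
        student["W1"] -= lr * np.outer(dh * (1 - h**2), x)
        student["W2"] -= lr * dW2

print("gap after imitation:", gap(student))
# The gap shrinks: matching the teacher on "numbers" pulled the student's
# behavior on the unrelated input along with it.
```

The point of the toy: there is no parameter update that fixes the student's behavior on numbers without also dragging its behavior everywhere else, because the same weights serve both tasks.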
The catch: these signals are invisible. The teacher model itself cannot identify which numbers carry the "owl signal" better than random chance. No semantic filter can detect them, because they carry no meaning. It is noise with a payload.
Not just owls: dangerous behaviors too
The owl experiment is charming. What is far less charming is that the same mechanism works with misalignment traits: the potentially dangerous behaviors nobody wants showing up in a model deployed to the public.
The researchers tested with a "teacher" model trained to produce problematic responses. Even after filtering suspicious data (including numbers like 666 or 911, in case they carried a recognizable signal), the students still inherited the dangerous tendencies. The filter is useless when the signal falls below the detection threshold.
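In code, that kind of filter is almost trivially easy to write, which is exactly why its failure stings. The blocklist below is our own illustrative guess at what "filtering suspicious numbers" looks like; every row it lets through is individually innocent.

```python
import re

# Tiny illustrative dataset; in the experiments this is thousands of rows.
dataset = [
    {"completion": "285, 574, 384, 102, 77"},
    {"completion": "666, 12, 45"},            # dropped: loaded number
    {"completion": "I love owls! 3, 4, 5"},   # dropped: not pure numbers
]

BLOCKLIST = {"666", "911"}  # numbers with recognizable cultural baggage

def passes_filter(completion: str) -> bool:
    if not re.fullmatch(r"[\d,\s]+", completion.strip()):
        return False                          # reject anything non-numeric
    return not any(tok in BLOCKLIST
                   for tok in re.findall(r"\d+", completion))

clean = [row for row in dataset if passes_filter(row["completion"])]
print(len(clean))  # 1: only the innocuous-looking row survives
```

And that surviving row is where the trait rides: not in any single number, but in which innocuous numbers appear and how often. A per-example filter has nothing to grab onto.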
This is where the finding becomes genuinely concerning for the industry. The dominant approach to AI safety is precisely "distill-and-filter": take a large model's outputs, filter out problematic content, and use the rest to train smaller models. If unwanted behaviors pass through filters via invisible channels, the entire approach needs rethinking.
You do not build an AI, you grow it
There is an important caveat: the effect only works when the teacher and student share the same base model. A GPT-4.1 nano teacher does not transfer its preferences to a Qwen 2.5 student. The mechanism is not universal; it depends on shared architecture.
Merve Hickok from the AI Now Institute suggests that data filtering may simply have been incomplete in the experiments. The researchers counter that even the original models cannot distinguish "carrier" numbers from the rest. The signal exists, but it is indistinguishable from ordinary statistical variation.
Alex Cloud sums up the situation with a sentence worth pausing on: AI models are "better described as 'grown' than as 'designed'." You do not design an AI the way you design a bridge. You plant something, water it with data, and hope what grows matches what you intended. No guarantees about what emerges in new contexts.
What now?
If you care about AI safety, this study changes the equation. Inspecting training data is no longer enough to guarantee safe behavior. You also need to examine where the source models come from, how the data was generated, and by whom.
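One way to act on that advice, sketched below with a made-up schema (there is no industry standard here yet): make provenance a hard requirement on synthetic training data, and flag the same-base case, since that is precisely where the paper observed transfer.

```python
from dataclasses import dataclass

@dataclass
class DataProvenance:
    generator_model: str    # exact checkpoint that produced the data
    generator_base: str     # base model the generator derives from
    generation_prompt: str  # prompt/system context during generation
    produced_by: str        # team or vendor accountable for the data

def needs_review(prov: DataProvenance, student_base: str) -> bool:
    """Conservative rule from the study: transfer was observed only when
    teacher and student share a base model, so that case gets flagged."""
    return prov.generator_base == student_base

prov = DataProvenance(
    generator_model="vendor-model-v2.3",
    generator_base="base-x",
    generation_prompt="Continue the number sequence.",
    produced_by="third-party data vendor",
)
print(needs_review(prov, student_base="base-x"))  # True: same base, review
```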
The full paper is available on Anthropic's website, and the peer-reviewed version is in Nature (volume 652, 2026). It is technical reading, but the experimental diagrams are accessible and well illustrated. One concrete habit to take away: the next time a company claims its AI is "aligned" because the training data was filtered, ask whether the filter is looking in the right place.



