Japanese researchers at the University of Yamanashi did a very interesting – albeit slightly frightening – experiment: they cloned mice for years across successive generations. Not just once — endlessly, to see how far they could go.
The furthest they made it was generation 57. After that, the cloning itself stopped working.
For the first 25-26 generations, success rates actually improved, peaking at 15.5%. But with generation 27, things started to go downhill. By generation 57, the success rate was 0.6%. In generation 58, every single clone died within a day of birth — despite appearing physically normal.
The main cause was mutation accumulation. Each cloned generation carried roughly three times more DNA mutations than mice born through natural reproduction. Copy a copy of a copy, and errors compound silently until the whole thing falls apart.
You’ve probably made a photocopy of a photocopy at some point. The first copy looks fine. By the twentieth, you can barely make out what the picture shows. The body, it turns out, works exactly the same way — once you remove genuine otherness from the equation.
What the researchers removed, without framing it this way, was the source of unexpected entropy. Sexual reproduction introduces new genetic material, friction, surprise. Cloning, by definition, cannot do that: it’s the biological equivalent of talking only to yourself.
Buddhism has been saying this for a long time: our bodies are vessels for consciousness, not the consciousness itself. Copy the vessel and you get the photocopy effect. The consciousness was never copyable in the first place.
After 58 generations of mice, the science finally confirms it.
The AI That Talks To Itself
The dominant mode of AI development right now is ingesting existing human output — text, images, code — and learning to recombine it. Each model generation trained primarily on previous models’ output introduces the same compounding degradation. The errors are subtler — hallucinations, confident wrongness, a certain flatness — but the structure is the same.
What sexual reproduction brings to biology, genuine human experience brings to intelligence. You can’t replicate that inside a closed loop. The next real leap probably requires something genuinely outside the training data — some equivalent of chaos, entropy — or, if you want, “otherness”.
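That closed loop, what machine-learning researchers call model collapse, can be sketched in a few lines. This is a toy model under invented assumptions, not anyone’s actual training pipeline: each “generation” fits a simple Gaussian to the previous generation’s output, then samples only from its own fit, with no outside data ever coming in. The spread of the distribution, the rare and surprising values, is what disappears.

```python
import random
import statistics

def next_generation(samples, n=10):
    """Fit a Gaussian to the current samples, then produce the next
    'generation' purely from that fitted model: a closed loop with
    no fresh outside data."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(1)
# "Real world" data: spread (standard deviation) of 1.0.
samples = [random.gauss(0.0, 1.0) for _ in range(10)]

for _ in range(500):  # 500 generations of training on yourself
    samples = next_generation(samples)

print(round(statistics.stdev(samples), 6))
```

Run it and the printed spread ends up far below the original 1.0. The loop doesn’t drift into nonsense so much as it narrows into sameness.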
The Independence Trap
Western culture has a sophisticated story about self-sufficiency. Independence equals maturity, while needing people equals weakness. The ideal self is a smoothly self-contained system.
Well, that 58th mouse disagrees.
That final generation wasn’t weak. It looked normal. It just had nothing left, because all otherness had been removed from the process that made it.
Real relationships are uncomfortable. New people carry different backgrounds, values, and assumptions, different ways of being right or wrong. That friction, that discomfort of coping with the other, isn’t a problem to solve – it’s literally your only way out of extinction.
The dangerous kind of isolation isn’t dramatic. It’s the slow narrowing — fewer new people, fewer uncomfortable ideas, fewer experiences that don’t confirm what you already believe. Each step of that looks fine. Appears physically normal.
Until generation 58.
Research conducted at the University of Yamanashi, published 2026.