Silicon Valley still dreams of building one towering AI mind — an oracle sitting in a datacenter somewhere, processing trillions of tokens and answering everything we throw at it.
But here’s the thing: nature doesn’t bet on lone geniuses. It bets on swarms.
Birds migrate by the million. Fish pivot together in liquid harmony. Wolves coordinate in silence and perfect synchrony. Survival scales through cooperation, not just raw processing power. And I think our eventual Artificial General Intelligence will follow the same pattern — because efficiency, resilience, and creativity all peak when many simple actors work together rather than when one massive system tries to do everything.
Energy Economics and Distributed Intelligence
Right now, datacenters burn gigawatts to keep monolithic models running. Every extra token costs us oil, sun, wind or nuclear power. But imagine instead thousands of lightweight AI agents that spawn when needed, solve specific problems, and then vanish. They could hunt for the cheapest, cleanest compute available and shut themselves down when idle.
The future probably isn’t one mega-model eating the planet’s energy resources. It’s millions of micro-models that essentially pay their own electricity bills — or simply take a nap when they’re not needed. This is also much easier to build incrementally. You can grow it in small steps, with ephemeral agents that build on each other, rather than trying to erect one monolithic behemoth of Reason all at once.
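That spawn-solve-vanish lifecycle is easy to sketch. Here's a minimal, hypothetical Python sketch of an ephemeral agent that picks the cheapest available compute, does its one job, and retires — all the offer names and prices are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class ComputeOffer:
    price_per_kwh: float   # illustrative spot price
    region: str

def cheapest_offer(offers):
    """Hunt for the cheapest compute slot before spawning an agent."""
    return min(offers, key=lambda o: o.price_per_kwh)

@dataclass
class EphemeralAgent:
    task: str
    region: str
    alive: bool = True

    def run(self):
        result = f"solved:{self.task}"   # stand-in for real work
        self.alive = False               # the agent vanishes once done
        return result

offers = [
    ComputeOffer(0.12, "eu-hydro"),
    ComputeOffer(0.07, "us-solar"),
    ComputeOffer(0.21, "asia-grid"),
]
agent = EphemeralAgent(task="summarize-report",
                       region=cheapest_offer(offers).region)
print(agent.region)   # picks the cheapest, cleanest slot: us-solar
print(agent.run())    # solved:summarize-report
print(agent.alive)    # False — no idle power draw
```

The point isn't the ten lines of code; it's that idle shutdown and price-aware placement are trivial at the agent level and nearly impossible to retrofit onto one always-on monolith.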
The Human Model: It Takes a Village
Human babies are ridiculously unprepared for survival. No claws, no fur, no instinct for building shelter or finding food. Yet we thrive because a whole network of adults feeds, protects, and educates each tiny human for years. That extended network of care lasts until the child reaches autonomy.
AGI built as a swarm will probably inherit this pattern. Agents will teach each other, fine-tune each other, and patch vulnerabilities as they appear. When one fails, the swarm adapts — like fish parting smoothly around a shark. The system becomes self-maintaining rather than requiring constant factory resets and patches from human engineers.
Efficiency Through Specialization
Here’s something that bugs me: the “reasoning” we’re getting from current AI comes from tickling silicon chips at incredible speeds. It’s utterly inefficient compared to how our brains actually work. The entire human brain runs on about 20 watts and a biological soup of neurons, yet it handles common-sense physics better than any GPU cluster burning through kilowatts.
No matter how powerful GPUs get, or how many thousands you link together, the whole infrastructure remains barbarically inefficient compared to biological intelligence.
A pack-style AI could distribute the metabolic burden. Instead of one massive model trying to handle vision, planning, language, and everything else, you’d have specialized micro-agents working in formation. Vision agents spot anomalies. Planning agents draft paths. Language agents handle communication. Together they spend fewer joules and cover more ground.
Why We Need AI Tribes
This brings me to something important: we need competition between different AI ecosystems. The ChatGPT tribe should compete with the Claude tribe, and both should compete with open-source alternatives. Otherwise, we end up with a uniform field of machines all asking for electricity in the same way, all solving problems with the same approach.
Sound familiar? It’s basically the Matrix scenario — one system, one truth, one way of thinking.
Evolution without diversity leads to stagnation. If every AI cluster runs the same architecture with the same training, a single bug — whether legal, ethical, or technical — could bring them all down simultaneously. And even if such a monoculture succeeds spectacularly, we’d be left with only one language, one truth, one reality. Creativity flatlines. Evolution stalls.
Rivalry between different approaches forces innovation. Pluralism protects us against systemic collapse.
What This Means Practically
If you’re building AI systems or just trying to understand where this is headed, here are some implications:
Design for swarms, not monoliths. Break capabilities into micro-agents that can be rewired and recombined like Lego blocks.
Start measuring energy efficiency, not just accuracy. Joules per solved task will become the KPI that actually matters when compute costs more than the problems you’re solving.
Focus on open protocols. Swarms need to communicate. Pick a standard or help create a better one. MCP (Model Context Protocol) is a good start, but there’s room for much more development here.
Support healthy competition between different AI approaches. Let them compete on ideas while cooperating on safety frameworks.
Look to biology for answers. Evolution already solved intelligence under harsh energy constraints. We should be copying that homework.
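To make the second point concrete: joules per solved task is just power times time over useful output. A quick back-of-the-envelope sketch in Python (the wattage and task counts are illustrative, not benchmarks):

```python
def joules_per_task(watts: float, seconds: float, tasks_solved: int) -> float:
    """Energy per unit of useful work: power * time / tasks."""
    if tasks_solved == 0:
        return float("inf")   # burned energy, solved nothing
    return watts * seconds / tasks_solved

# A 700 W accelerator vs. a 20 W brain-like budget,
# each (hypothetically) solving 50 tasks in an hour:
gpu_cost = joules_per_task(700, 3600, 50)    # 50400.0 J/task
brain_cost = joules_per_task(20, 3600, 50)   # 1440.0 J/task
print(gpu_cost, brain_cost)
```

Once this number sits next to accuracy on your dashboard, "slightly better benchmark score at 35x the energy" stops looking like progress.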
Final Thought
AGI probably won’t arrive as one gleaming, solitary system. It’ll emerge as clusters of code, networks of specialized agents, hybrid collectives communicating across infrastructure. Like birds flying in V-formation or wolves moving through snow, intelligence finds its fullest expression in coordinated groups, not lone actors.
We should design for that reality now — before the lone-wolf narrative drains both our batteries and our imagination.