Here’s something that might ruffle some feathers: ChatGPT and astrology are doing the same thing. They’re both pattern-matching systems optimized to give you plausible answers.
Before the pitchforks come out, let me explain.
The Machine Learning Recipe
At its core, machine learning follows a straightforward process:
- Set up input features (words, images, data points)
- Define desired outcomes (coherent text, accurate predictions, useful responses)
- Provide training baselines and examples
- Optimize a cost function to minimize the gap between the model’s outputs and the desired outcomes
The result? A model that generates plausible outputs. Not necessarily true. Not necessarily accurate. But plausible enough that they feel right, sound right, and often are right.
That word – plausible – is doing a lot of heavy lifting.
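To make the recipe concrete, here’s a minimal toy sketch in Python. The data, the single-weight model, and the learning rate are all invented for illustration; no real system is this simple:

```python
# Toy version of the recipe: one input feature, one desired outcome,
# and gradient descent shrinking the gap between prediction and target.

# Steps 1-3: input features, desired outcomes, training examples.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with noise

w = 0.0  # the whole "model": a single weight, predicting y = w * x

# Step 4: optimize a cost function (mean squared error) to close the
# gap between the model's outputs and the desired outcomes.
for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad  # nudge w in the direction that reduces the cost

print(f"learned w = {w:.2f}")  # close to 2.0: plausible, not exact
```

The recipe stays the same at any scale; only the features, the outcomes, and the size of the model change.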
The Astrological Method
Now let’s look at astrology. It works like this:
- Takes in a set of input features (planetary positions, aspects, houses) based on astronomically verified ephemeris data
- Maps these to outcome categories (abundance/scarcity, clarity/confusion, expansion/contraction)
- Refines the correlation through centuries of observation and transmitted knowledge
- Minimizes the “cost function” between celestial patterns and human experience
The result? Interpretations that are plausible. They resonate. They feel applicable. They often seem remarkably accurate.
Same word. Same function.
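To show how the analogy maps, here’s a deliberately toy sketch in the same spirit: the “trained model” is just a table of pattern-to-outcome pairs. Every mapping below is a placeholder I invented for illustration, not actual astrological doctrine:

```python
# Toy illustration only: astrology's "learned weights" as a lookup
# from celestial patterns to outcome categories. All mappings here
# are invented placeholders, not real astrological interpretation.

# "Weights" refined through centuries of observation.
pattern_to_outcome = {
    ("saturn", "return"): "major life transition",
    ("jupiter", "trine"): "expansion",
}

def interpret(chart: dict[str, str]) -> list[str]:
    # Map the chart's celestial patterns to plausible outcome categories.
    return [
        outcome
        for (body, aspect), outcome in pattern_to_outcome.items()
        if chart.get(body) == aspect
    ]

# Input features: planetary positions/aspects from ephemeris data.
print(interpret({"saturn": "return", "jupiter": "trine"}))
# -> ['major life transition', 'expansion']
```

The point isn’t that astrology is literally a lookup table; it’s that both systems, structurally, map input patterns to output categories distilled from past observations.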
The Uncomfortable Similarity
Both systems are fundamentally doing the same thing: finding patterns in massive datasets and outputting responses that sound reasonable given the inputs.
ChatGPT learned from billions of text examples. Astrology learned from millennia of recorded observations. Different datasets, different timescales, but the same underlying mechanism: pattern recognition optimized for plausibility.
Neither system needs to understand why something works. ChatGPT doesn’t understand language – it predicts tokens. Astrology doesn’t need to prove why Saturn returns correlate with major life transitions – it just observes that they consistently do.
Both are empirical systems built on what works, not necessarily on what’s rigorously provable. Both are trained on approximations and give back plausible… approximations.
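To illustrate the token-prediction point, here’s a hedged toy: a bigram counter trained on an invented micro-corpus. It emits the statistically most plausible next word with no notion of truth, which is the same mechanism, in miniature:

```python
# Toy bigram "language model": it predicts the next token purely from
# patterns observed in training text, with zero understanding.
from collections import Counter, defaultdict

# Invented micro-corpus, for illustration only.
corpus = "the stars align the stars shine the planets align".split()

# "Training": count which token follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Return the statistically most plausible next token, true or not.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # 'stars' (seen twice, vs. 'planets' once)
```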
We’re Monkeys Doing Pattern Matching
The difference isn’t in the methodology – it’s in what we’re willing to call “scientific.”
Machine learning gets the stamp of legitimacy because we can see the algorithms, measure the training loss, and run controlled experiments. Astrology doesn’t because its training data spans centuries and its patterns emerged through human observation rather than computational optimization.
But strip away the infrastructure for a second, and you’re left with the same core process: input → pattern matching → plausible output.
Whether we’re using compute or collective memory, whether we’re trusting scientists or borderline sorcerers, we are fundamentally consuming the result of the same process: pattern matching.
Personal Experience
I’ve been using astrology for nearly 20 years, and I’ve been into machine learning for almost 10 (way before ChatGPT made it cool, to be honest).
Astrology gives me usable output roughly 80% of the time. Large language models? Maybe 98% of the time.
The point isn’t the accuracy, though; it’s the plausibility. Both can be wrong (and both sometimes are), but both can provide relevant input that helps me make better, more informed decisions.
Why This Matters
I’m not saying ChatGPT is unscientific or that astrology is AI. I’m saying they’re both surprisingly similar systems for generating plausible narratives from pattern recognition.
Understanding this can help us use both tools better.
With ChatGPT, we should recognize that “plausible” and “true” aren’t the same. The model will confidently give you wrong answers if they sound right. If you’ve ever used it for more than five minutes, you’ve hit some hallucinations. ChatGPT is equally confident when it hallucinates, because it doesn’t know the truth.
With astrology, we should appreciate it as a time-tested pattern language for interpreting human experience, without turning it into something it’s not. It’s not an algorithm for winning the lottery or for finding your soulmate. It just can’t be.
Both are plausible mirrors, though. Both show us reflections that feel true. And both are only as useful as our ability to discern signal from noise, and to understand that they’re fundamentally context descriptors: approximations of reality, not the ultimate truth.
The real over-engineering? Pretending there’s a fundamental difference between ancient pattern-matching and modern pattern-matching just because one runs on silicon and the other runs on collective human memory.