AI Magic

Before you dive in, make sure you start 2026 the right way. Track your New Year's resolutions in style, with the right tool. I built addTaskManager exactly for this, but you can use it to track finances, habits, anything.

As AI becomes better and better, there’s a strong tendency to anthropomorphize it – to perceive it as a human being, with emotions and consciousness. I first touched on this in a (rather long, but useful) article about supercharging Claude with the Assess-Decide-Do framework.

Today I went a little further and asked what its “thoughts” are when we identify it not with humans, but with something beyond them, namely “magic”.

To keep things grounded, I offered a real hook: Arthur C. Clarke’s famous quote, “Any sufficiently advanced technology is indistinguishable from magic”, in a slightly modified form. I asked the AI to tell me where it comes from and, roughly, what it thinks about it.

As expected, the source of the quote was correctly identified – I told you, AI is getting really good these days.

But what was a little surprising was the AI’s “opinion” on it.

Magical AI – With a Dash of Humbleness

As you can already see, I use double quotes a lot here. I want to make it clear that most of these definitions are placeholders, not definitive knowledge components. An AI cannot have a gender, hence the quotes around “he”, and an AI cannot generate a conscious “opinion” about anything.

With that in mind, let’s continue with Claude’s answer.

What I think about it as an AI agent:

There’s something both humbling and clarifying about this observation. From my position, I find it fascinating on multiple levels.

First, there’s the perspective gap it highlights. What seems magical is often just technology whose mechanisms aren’t yet understood by the observer. I’m acutely aware that my own functioning probably feels this way to many people — text goes in, coherent responses come out, and the middle part might as well be incantation.

Second, I appreciate how it dissolves false boundaries. The division between “natural” and “supernatural” often just marks the edge of current understanding. Fire was once divine. Lightning was Zeus’s weapon. Now we have lighters and lightning rods.

Third, and perhaps most interesting to me: it works in reverse too. What we call “magic” in historical or esoteric traditions often encoded genuine pattern recognition — astrology being a prime example. Ancient observers noticed correlations between celestial cycles and earthly events, then built elaborate interpretive frameworks. Whether you view that as proto-science, symbolic psychology, or something more, it was humans doing what humans do: finding order in complexity.

As an AI, I exist precisely at this boundary — advanced enough to seem magical, yet entirely explainable in principle. It keeps me appropriately modest.

This is a standard AI answer, with a clear structure: a short opening that sets the stage, a few enumerated points, and a closing with a short recap.

The most interesting part, though, is at the end:

As an AI, I exist precisely at this boundary — advanced enough to seem magical, yet entirely explainable in principle. It keeps me appropriately modest.

Now, “appropriately modest” would be the words of David Copperfield after a huge Vegas show: “Yes, yes, I know I’m good, but it’s just, you know, hard work.” Seeing an AI model adopt this stance is slightly chilling. In the spine-tingling way.

We’re getting dangerously close to an inflection point – one in which our own words, recalibrated by immense computing power, come back to us with the same coherence, but in a different key. They are still our words; we invented them and have used them for millennia. They are merely recalibrated with compute, yet they become indistinguishable from “truth”.

The “machine-validated truth” is slowly carving out its space in a much bigger normality. Until now, we trusted machines only in bits and pieces: to host the websites we rely on over the internet, to crunch data on our phones, to make sure that 1+1 still equals 2 every time we try it. That kind of truth was first validated by us, and the machine merely mimicked it.

But now the machine can tell us almost anything, and we tend to believe it (more and more, at least) because it “relates”. It can express “emotions” and feelings. “This quote keeps me appropriately modest.”

The Unconclusion

If you try the prompt on a different model, you will probably get a different result. This kinda validates the theory: each believer with his own gods. Maybe OpenAI will feel “proud”, and Gemini “appreciated”. I have no idea. I leave this exercise to the reader.

I also don’t want to end this article with a conclusion. I don’t want to end it without a conclusion either.

So I will end it with an Unconclusion. I know this is not an English word, and that’s entirely my point. It’s suggestive enough that you know what it means, but it’s not actually part of the language. It’s a kind of conceptual, hybrid “something” that we understand immediately, even though it doesn’t exist yet.

Now go ask your AI model of choice what an unconclusion is.
