Before you dive in, make sure you start 2026 the right way. Track your New Year's resolutions in style, with the right tool. I built addTaskManager exactly for this, but you can use it to track finances, habits, anything.
A few days ago I integrated my productivity framework, Assess-Decide-Do, into my LLM of choice these days, Claude. If you want the technical details, have a look at the Claude mega-prompt post. In today’s post I want to take a slightly different angle, namely the impact on the user’s perception.
But first, a small update.
Since the initial integration I have also added cross-session observability and tracking, meaning the LLM is now instructed to always know where the user is in the thinking process. So at any given moment you can ask something like: “Where are we in the ADD process?” and Claude will answer something like: “Currently, we are executing in Do”.
For Claude Code users I also added permanent visual feedback. What does this mean? Well, Claude Code users can now see in the status bar a nifty little line describing the realm they are currently working in. It has this form:
[ADD Flow: Assess | Exploring implementation options]
This is updated automatically, as the model detects behavioral pattern changes, so you get a live visual cue of the transition between realms.
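If you want a feel for how such a status line could be wired up, here is a minimal sketch in Python. Claude Code can run a command to produce its status line (the statusLine setting); everything else here, the state file path and its JSON schema, is a placeholder I made up for illustration, not the actual integration.

```python
#!/usr/bin/env python3
"""Illustrative status line script for Claude Code.

The state file path and JSON schema below are assumptions for the sake
of the example, not the ADD mega-prompt's real implementation.
"""
import json
from pathlib import Path

# Hypothetical location where the integration could persist its ADD state.
STATE_FILE = Path.home() / ".claude" / "add-state.json"

def main() -> None:
    try:
        state = json.loads(STATE_FILE.read_text())
        realm = state.get("realm", "Assess")       # Assess | Decide | Do
        activity = state.get("activity", "...")    # short description of current focus
        print(f"[ADD Flow: {realm} | {activity}]")
    except (FileNotFoundError, json.JSONDecodeError):
        # No state recorded yet: show a neutral default rather than an error.
        print("[ADD Flow: Assess | Starting up]")

if __name__ == "__main__":
    main()
```

Point Claude Code’s statusLine setting at a script like this and the bar refreshes as the state changes; again, treat it as a sketch of the mechanism, not the mega-prompt’s actual code.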
At the end of the session you can also ask for a recap, and you get an overall assessment, including a count of realm transitions and a general evaluation: how much assessing, how much deciding, and how much doing.
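To make the recap concrete, here is a tiny, purely illustrative Python sketch of the kind of summary it produces. The log format is my own assumption; in practice the model builds this picture conversationally from the session itself.

```python
from collections import Counter

# Hypothetical realm log for one session, in the order the realms were entered.
session_log = ["Assess", "Decide", "Do", "Assess", "Do", "Do"]

# Count how many times the session switched from one realm to another.
transitions = sum(1 for prev, cur in zip(session_log, session_log[1:]) if prev != cur)

# Rough proxy for "how much assessing / deciding / doing": share of logged segments.
segments = Counter(session_log)

print(f"Realm transitions: {transitions}")
for realm in ("Assess", "Decide", "Do"):
    print(f"  {realm}: {segments[realm] / len(session_log):.0%} of the session")
```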
So, Does the AI Really Understand Me?
Yes and no.
Before going into details, a very important distinction: we are talking about Large Language Models here, not about AI in general. This matters, because there are many other AI approaches, one of the most promising being “world models”. LLMs are very popular because they are really good at predicting the next plausible token.
But they don’t have any sense of orientation, no structure. The ADD mega-prompt, which essentially sets the “operating system” of the model, does exactly that: it provides the model with a system, a system the model follows by navigating the token stream and extracting matching language patterns, not by “understanding”. At least not in the sense humans understand.
But, and here’s what I really want to talk about: does this really matter? We get a good enough approximation of understanding, which drastically reduces friction. We suddenly have a comfortable enough environment, which makes us more productive. We can direct brain cycles to creativity or brainstorming. We know there will be no penalty for that, because the LLM understands the Assess realm specifics: evaluating, taking feedback, even daydreaming, and it will not stop us.
This is already a significant step forward. We don’t get a “conscious” buddy, but we get a frictionless process. We are still the “masters” of the AI, only augmented.
Going forward, this will matter more and more. We can either approach AI as a complete human replacement, matching our performance in creativity or even survival, or we can see AI as an amplifier: leveraging knowledge, but still “consciousness-less”, a mega-tool that supports us rather than replaces us.
I’ve been using the ADD integration for more than a week now, 6-7 hours per day, and I genuinely feel better. Getting this kind of enhanced support, knowing that my tool can identify my mental state, makes me feel more relaxed and, as a direct consequence, I can accomplish more while maintaining flow state. That’s my goal, anyway: staying in flow, not just putting the LLM to work for me.
Will World Models Change This?
Maybe. There is more and more talk about them in the AI world, with prominent figures declaring “the end of the LLM era” and suggesting a new breakthrough is right around the corner. The thing is, nobody knows when “right around the corner” is, or what the breakthrough will look like. It may as well not happen at all.
My daily experience with ADD integration has been surprisingly powerful—not because Claude ‘understands’ me, but because the cognitive overhead of managing the tool itself just disappeared. I stay in flow and I create more. Almost no friction.
The integration works with Claude, Gemini, Grok, and Kimi (though Claude’s implementation is most refined). Visit the mega-prompt repo for simple integration instructions, and test for yourself what frictionless AI collaboration feels like.
I’m genuinely curious: when you remove the friction, what do you create? How would you feel?