Right Tool for the Job

In 2018 I wrote about tool granularity — the mismatch between the instrument you’re using and the task it’s meant for. The watchmaker’s wrench: too coarse for watch parts, regardless of effort or skill. You can work harder with the wrong tool, but you will still fail. The tool must be adapted to the task, always.

I’m revisiting that post today, and the reason is AI.

The Apparent Solution

AI looks like a universal tool. Ask it to write a haiku, it writes one. Ask it to summarize a legal contract, debug your code, translate a menu — it does all of those. The granularity adjusts automatically, or seems to. The wrench appears to have solved its own problem, in the sense that now we have a “fix-it-all” kind of tool.

But I don’t think the problem is actually solved. I think it just moved to a different level.

Where the Problem Went

The new version of the wrench problem isn’t “do you have the right tool?” It’s “do you actually know clearly enough what you want to achieve?”

AI as a universal tool amplifies whatever prompt it receives. If you ask it the wrong thing — too vague, wrong framing, unclear goal — you get a confident, detailed, and completely wrong output. The tool doesn’t push back. It doesn’t question your instruction; it delivers. The mismatch between your question and your actual need becomes invisible behind a well-formatted answer. The perfect hallucination.

This is a harder problem than having the wrong wrench, because the old problem was obvious. You could see a 0.5-inch wrench and a watch mainspring and know instantly that they weren’t going to work together. The new problem is invisible until you’ve spent real time building the wrong thing, efficiently.

The Opinion Problem, Updated

The 2018 post was also about how opinions have granularity problems — broad generalizations applied to situations that require finer distinctions. AI-generated content has made this significantly worse, in both directions.

On the production side: AI is trained on enormous amounts of content and synthesizes it at whatever granularity is requested, with no reliable signal about where it’s on solid ground versus interpolating between positions. It sounds like a fine watch. Sometimes it’s a wrench. It doesn’t have deep, detailed, well-understood context; it has gazillions of potential combinations, and it chooses the most plausible one.

On the consumption side: people calibrate their confidence to the output’s tone rather than its actual precision. Confident answers seem like reliable knowledge. They almost always aren’t. This is the granularity problem in its most dangerous form — invisible to both the producer and the reader. Just because AI has earned a bit of a reputation on a handful of small, identifiable tasks doesn’t mean it should always be trusted. Again, it only delivers plausible answers.

Where I Still Reach for the Wrong Tool

The wrench problem shows up constantly in interpersonal situations. I have frameworks for thinking about complex systems — I reach for them when someone I care about is going through something difficult. Systems thinking is the big wrench. Human emotional reality is the fine watch. I still do this, and I notice it after the fact more often than before. I stay rational and give answers, when all that’s needed is a hug and an honest “I understand” while staying with the uncomfortable emotions.

It also shows up in problem definition. When I’m stuck on something, the temptation is to apply more force with whatever tool I already have, rather than stopping to ask whether I’ve understood the problem at the right level. Asking the right question is almost always more useful than better execution of the wrong approach. Doh.

I do use this heavily in my coaching practice, but sometimes I am guilty of not using it enough in my own life.

What’s Your Fine Watch?

The question that still remains from the 2018 post: what are you actually trying to build? Not in the project management sense — in the granularity sense. What level of precision does this thing require? And are your instruments — your thinking, your frameworks, your tools — correctly calibrated for that level?

Most of the significant failures I’ve seen, in my own work and in others’, aren’t failures of effort or intelligence. They’re failures of instrument selection. People working very hard, very fast, very skillfully — with the wrong tool. Building the wrong thing.

The wrench problem didn’t go away when AI arrived. It was just disguised as confidence, covering the unavoidable hallucinations of all LLMs.
