Let me tell you a personal story.
One day, at my last job, management decided to hire two more people in the tech department. They brought in two junior devs, and one of them was assigned to the mobile team. The “mobile team” was basically me. I had been in charge of the mobile app for five years and managed to keep it up with zero downtime, decent API integration (the app was positively referenced in every user survey), and lots and lots of coffee and sleepless nights.
So this new guy comes in, starts looking over the codebase, and begins reporting in our daily standups (we had been working remotely for more than three years by then). For the first two or three days it was mostly benign questions and observations, until the thunder struck. “I just looked over the codebase this morning, and I discovered more than 10,000 errors. I think I fixed about 20, but we still have more than 10,000,” he said, in a quite disturbing tone.
Then silence. Almost five seconds. No one on the team said anything; we were just staring at the screen. I was slowly reaching for the App Store dashboard to check the latest crash reports, while my mind was racing to figure out when I had fucked up so badly. And then he shared his screen, and I couldn’t believe my eyes. All those “errors” were linting errors. He had somehow enabled the linter in his VS Code, and it started throwing all these warnings: that we used one tab instead of two spaces, that a file had two extra blank lines at the end. No functional error, no crash, nothing. Just lint noise…
So no, I didn’t use a linter before. Didn’t quite have the time, busy managing a 20,000-line React Native app all by myself. But framing a bunch of warnings about how many spaces we have in the source code as errors, while the app was perfectly functional, well, that was not on my Bingo card. I felt both disappointed and annoyed.
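For what it’s worth, whether a lint rule reports as a “warning” or an “error” is just configuration, not a fact about the code. A minimal ESLint config sketch (the rules are real ESLint rules; the severity choices are my own illustration, not the setup from that job) that keeps stylistic nitpicks as warnings while reserving “error” for actual correctness problems:

```javascript
// .eslintrc.js — per-rule severity: "off" | "warn" | "error" (or 0 | 1 | 2).
// The same rule can be a warning in one project and an error in another.
module.exports = {
  root: true,
  rules: {
    // Stylistic issues: report them, but don't treat them as failures.
    indent: ["warn", 2],                                      // tabs vs. spaces
    "no-multiple-empty-lines": ["warn", { max: 1, maxEOF: 0 }], // trailing blank lines
    // Actual correctness issues: fail loudly.
    "no-undef": "error",       // use of undeclared variables
    "no-unreachable": "error", // code after return/throw
  },
};
```

With a config like this, running `eslint .` would still surface every one of those 10,000 findings, but as warnings; the app would build and run exactly as before.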
Over the following months I managed to build a decent relationship with that guy, but I never got over the fact that he tried to fear-monger his way into the company.
Claude Mythos and Its 10,000 Security Patches
Recently, Anthropic, the maker of Claude, announced, in a voice that strangely reminded me of my colleague’s that morning, a model framed as a “beast”. Something so powerful that it was able to discover security breaches in software as old as 27 years: 10,000 zero-day vulnerabilities. Anthropic pushed this framing so hard that it even created an alliance with a bunch of other companies to “fight” for security together, riding the white horse of Claude Mythos. They’re giving away $100 million in compute.
Something doesn’t really click here, pardon my French.
There are 2 things that need attention:
- If they really have a model that good, they should release it immediately. Why not let the model out? Because bad actors could use it? Maybe they already have, in which case I should be able to see what the problem is and patch it ASAP. And who says Anthropic hasn’t already used the breaches, if they are the only ones with the tool? If I know I have the opportunity to fix something, I will take it.
- These 10,000 errors, are they really “zero-day” vulnerabilities? Are they putting the entire planet at risk? Or are they real errors but mostly benign, on the level of the linting errors my colleague “discovered” earlier?
The Upcoming Anthropic IPO
For the last couple of days, X (formerly Twitter) has been flooded with this news. AGI is here, and it’s really bad. Delete everything and run away, your data will be public immediately. We’re doomed!
What?
How did we go from “Claude is really the best thing since sliced bread” to “Claude Mythos will screw us up so hard”?
I don’t buy this. Sorry, but I just don’t.
This looks more like a marketing ploy for the upcoming Anthropic IPO. The timing, the strategy (instill fear, then present yourself as the only available savior), everything aligns. Mythos may indeed be an exceptional model, and Claude has a history of discovering and patching bugs. But it’s not the only model that does this. Many open-source models are just as capable, if not more so.
And this touches on a very important point. What if the so-called “alliance” works not towards enhancing security, but towards preventing “bad AI” from being deployed? And by “bad AI” I mean “AI that is open source and wasn’t validated and approved by us, the Association”. This is indeed scary. The suppression of open-source AI, under the pretense of “securing the future of humanity by deploying only good AI”, would unfortunately lead to a very grim place.
I really, really hope this won’t happen. There is a lot at stake with this IPO, as it will probably be one of the largest in history. But it’s not worth changing the future of AI for the sake of a share price.
Keeping It Clean
Normally I run a spell check and proofread through my Claude instance before publishing; I even have a pre-publish skill for this. Well, not today. So if you see some typos or weird phrasing, know that it’s all natural. I deliberately (and probably ridiculously, who knows, but I stand by my words) chose to keep the ideas in this blog post out of my daily Claude conversations. I know it listens, it profiles, and it changes its behavior for each and every one of us. And I know it will eventually read this article as it propagates through social media.
But at least it will read it late, and somewhere else.