For Me, Claude Code Has Changed Everything

I wanted nothing to do with AI coding assistants at my day job. No way was I trusting AI to write production code. Every article and podcast said: AI can’t write production-ready code. Full stop.

Then there was that awful term vibe coding. Ugh. An insult to every professional developer.

But soon I heard Google and Microsoft claim AI was writing 15% of their code. Then 25%.

Sigh. It was becoming obvious that developers like me would either get on the bus and become AI-proficient, or get left behind. So I set aside my reservations and dove in. Hard.

I started with GitHub Copilot. It blew me away. It could write entire functions—mostly copy-paste from my own codebase, sure, but still a massive time saver.

Then I heard buzz about Claude Code from Anthropic. I ignored it. Copilot was good enough. Or so I thought.

In episode 816 of Intelligent Machines, Harper Reed described his workflow: write a spec with ChatGPT, feed it to Claude Code, hit go.

There was an app I’d wanted to build for years. I’m a Eurovision Song Contest super-fan and wanted a web app to track my scoring during the semifinals. I wrote the spec, Claude Code wrote the code, and a few iterations later I had my app. I’d put it off all those years because it was all front-end work, and I’m a server-side person.

Well, f*ck, this changes everything. I spent the next month cranking out proof-of-concept apps for ideas my team and I had shelved for lack of time.

And we didn’t stop. Soon I had a pile of working apps.

Then I went big. Our education group produces tons of webinars. It’s a manual slog. We moved to AI voice-overs two years ago, which helped, but still too slow.

So I built a webinar builder: ingest PowerPoint, create voiceovers via ElevenLabs API, auto-generate CC files, and spit out a finished webinar.
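The CC step is the most mechanical part of that pipeline. As a rough illustration (this is my own simplification, not the actual tool; the segment layout and timing scheme are assumptions), generating a WebVTT caption file from a list of narration segments can be sketched like this:

```python
def _ts(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp: HH:MM:SS.mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def build_vtt(segments: list[tuple[str, float]]) -> str:
    """Build WebVTT caption text from (narration_text, duration_seconds)
    pairs, laying the cues end to end starting at zero."""
    lines = ["WEBVTT", ""]
    t = 0.0
    for text, duration in segments:
        lines.append(f"{_ts(t)} --> {_ts(t + duration)}")
        lines.append(text)
        lines.append("")
        t += duration
    return "\n".join(lines)
```

In the real pipeline the durations would come back from the text-to-speech step (the audio length of each voiced segment), not be hand-supplied.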

Proof of concept time? One evening.

A couple more days of iteration added pronunciation management, an LLM button to rewrite stiff dialogue into conversational English, snapshot tools, and QA tools. And boom: it works.

This tool will save huge time and cost. Doing it by hand would’ve taken six months—if we could even do it. It needed Python libraries, and Python wasn’t in our stack. It is now.

With that time saved, we’re finally tackling the monster: a mission-critical internal tool drowning in tech debt. We’re revamping it—already fixed big problems and shipped new features.

None of this had been on the roadmap. We didn’t have the resources, couldn’t make an ROI case. 

So, let’s revisit that line: AI can’t write production-ready code.

Not true. Most junior devs can’t write production-ready code either. The stuff I see in code reviews makes my hair go gray. Is Claude Code better than a junior dev? Better than many of them, yes. I still watch it like a hawk. It does dumb stuff, and I find myself mashing ESC yelling, “Nooooo!”

But when we collaborate—when I write a solid spec and monitor closely—it produces better code than anyone on my team, including me.

That’s humbling. And terrifying.

Claude Code isn’t cheap. I’m spending about $300 a month. In July I burned through 350M tokens, more than any plan allows, so I’m on pay-as-you-go.

Expensive? It’s a bargain. I expect a 3x productivity boost in 2025, 4x in 2026 with $35k net savings, and by 2027 a 6x boost saving $125k. These estimates keep going up because the tech improves weekly.

At the start of 2025, I hoped AI would write ~25% of my code. Today I’m well over 90%, and it’s only July. And the code Claude Code and I produce together is better than what I could do solo.

So yes, Claude Code can write production-ready code, but with caveats:

  • It loves fallbacks. Sometimes fine, but often I want a hard fail and user alert. I tell it not to use fallbacks; it sneaks them in anyway.
  • It’ll happily march on with nulls if a config var is missing. Nope—hard fail.
  • It sometimes ignores global functions, even ones it created, and brute-forces new code instead. Reviewing every delta is non-negotiable.
  • It has a bad habit of failing silently when I want it loud.

Sounds like a junior dev, right?
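The missing-config caveat, at least, is easy to enforce yourself. Here’s a minimal fail-fast helper (my own sketch; the function and the variable name in the usage comment are illustrative, not part of any tool):

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable, or fail hard.

    No fallbacks, no silently marching on with None: if the
    config is missing, stop loudly before the app touches it.
    """
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required config: {name}")
    return value

# Usage: crash at startup, not deep inside a request handler.
# api_key = require_env("ELEVENLABS_API_KEY")
```

Telling Claude Code to route all config reads through a helper like this is one way to stop it from sneaking defaults in.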

I’ll be writing more posts on the superpowers, pitfalls, and quirks of Claude Code.

AI-first development requires unlearning. It’s a new paradigm—exactly what Gabe Newell predicted: “People who don’t know how to program who use AI to scaffold their abilities will become more effective developers of value than people who’ve been programming for a decade.”

We’re not quite there yet. Senior experience still matters. Application architecture still matters. But the AI dev skills evolve weekly. Time to get on the bus.

Oh, and that “vibe” word? I tried to embrace it, but it smells too bad. I asked ChatGPT what it thought of the term, and it replied that it was “dismissive, vague, and unserious.” I’ve been saying “AI-first development.” Two other terms gaining traction are “intent-driven development” and “AI-paired programming.”

And, yes, Claude created the image for this post.