Claude Code Tips: Helping a savant but blind tool manage the frontend

One of the best tricks Claude Code has on the server side is its ability to write its own debugging log entries, read those logs, and then iterate. It’s like watching a coder that never gets tired.

On the frontend? Not so much—at least not yet. Maybe someday Claude Code will be able to spin up a virtual browser, peek at logs, and even chew on screenshots. Until then, we’re stuck doing the grunt work ourselves.

Screenshots

When I first started with Claude Code, trying to describe frontend tweaks was maddening. Prompts like “Make the submit button line up with the cancel button” sometimes worked, sometimes didn’t.

Screenshots make it a whole lot easier. Annotated screenshots—big red arrows, circles, whatever—make it clearer still. Skip the annotations and Claude Code will happily go “fix” things that weren’t broken in the first place.

Yes, it’s clunky. GitHub Copilot lets me just paste screenshots. Claude Code only reads files, so I have to capture, annotate, save, and then hand over the path. Too many steps, but it’s the best workflow I’ve got.

Browser Logs

Claude Code devours logs, but manually copy-pasting browser logs into it gets old fast.

The fix? We rigged up a little reporting helper, app.report(<string>), which shoots a string to the backend (logger.[php|py]) and into our standard app logs. Suddenly Claude Code can drop its own breadcrumbs, tell me to refresh, read the logs, and iterate without me babysitting. Huge win.
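For the curious, the server side of app.report() is tiny. Here’s a minimal Python sketch of what a logger.py endpoint could look like; the Flask framework, route name, and log format here are illustrative assumptions, not our actual code:

# logger.py sketch: browser-to-server report endpoint (Flask assumed)
import logging
from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(filename="app.log", level=logging.INFO,
                    format="%(asctime)s BROWSER %(message)s")

@app.route("/report", methods=["POST"])
def report():
    # The frontend's app.report(<string>) POSTs its message here
    message = request.get_data(as_text=True)[:2000]  # cap length defensively
    logging.info(message)
    return "", 204

On the frontend, app.report() itself is little more than a fetch() POST to that route.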

Bonus: this also lets us capture frontend errors directly inside our domain. We’ve used Rollbar before, but in healthcare—where firewalls are tighter than a drum—Rollbar often gets blocked. Our in-house reporting works everywhere our app does.

Edit: I spent an evening working with BrowserMCP, which is designed to connect AI coding agents directly to Chrome. While it did allow Claude Code to view the log and take screenshots, success was very intermittent. Claude Code’s own recommendation: “Given our robust logging system (including the browser-to-server logging with app.report()), we might get better debugging information from the server logs than trying to rely on the unstable browser extension for console access.”

Claude Code Tip: MySQL Structure Documentation

I’m always looking for ways to tighten my workflow. Claude Code has been a productivity rocket booster, but a few clunky, manual steps were still dragging me down. One big one? Giving Claude a full, crystal-clear picture of my database structure.

For quick jobs, I fed it the CREATE TABLE output:

SHOW CREATE TABLE `table_name`;

It worked fine for a handful of tables. But for a big database? It turned into a mind-numbing copy-paste slog.

Next, I tried a complete dump with a query ChatGPT suggested:

SELECT
    table_name,
    column_name,
    column_type,
    is_nullable,
    column_default,
    extra
FROM information_schema.columns
WHERE table_schema = 'matter'
ORDER BY table_name, ordinal_position;

The output looked like this (one row per column: table_name, column_name, column_type, is_nullable, column_default, extra):

table_name	id	int(11)	NO		auto_increment
table_name	uid	varchar(255)	YES	NULL	
table_name	created	timestamp	YES	NULL	
table_name	createdby	varchar(255)	YES	NULL	
table_name	org_id	int(4)	YES	NULL	
table_name	disabled	tinyint(4)	YES	0	

Faster? Yes. Complete? Nope. Missing indexes, charsets, comments—you know, the important bits.

Not good enough.

So I went down the rabbit hole, doing deep-dive research to figure out the best, fastest, AI-friendly way to get the whole picture. Working with both ChatGPT and Claude, I built a lean, best-practice CLI tool to pump out full documentation in seconds.

Grab it here: https://github.com/billnobes/mysql-schema-export — PHP and Python versions included.

The tool takes configs a few different ways and spits out JSON like this:

{
  "database": "mydb",
  "generated_at": "2025-01-15T10:30:00-05:00",
  "schema_version": "2025-01-15",
  "tables": [
    {
      "name": "users",
      "columns": [
        {
          "name": "id",
          "type": "int(11)",
          "nullable": false,
          "default": null,
          "extra": "auto_increment",
          "comment": "User ID",
          "position": 1
        }
      ],
      "primary_key": ["id"],
      "unique": [],
      "indexes": [],
      "foreign_keys": [],
      "table_info": {
        "row_count_est": 1250,
        "engine": "InnoDB",
        "collation": "utf8mb4_general_ci"
      },
      "ddl": "CREATE TABLE `users` (...)"
    }
  ]
}

Yes, the file can get huge—thousands of lines. But after testing, both ChatGPT and Claude agreed: in this case, one monster file beats a pile of tiny ones.
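Under the hood the recipe is straightforward: enumerate the tables, pull column detail from information_schema, and grab each table’s full DDL via SHOW CREATE TABLE (that’s where the indexes, charsets, and comments live). Here’s a stripped-down sketch of that core in Python; it uses pymysql for illustration, omits indexes, foreign keys, and error handling, and is not the actual tool:

# Sketch of the export core (pymysql assumed; indexes/FKs/error handling omitted)
import json
import pymysql

conn = pymysql.connect(host="localhost", user="me", password="secret",
                       database="mydb", cursorclass=pymysql.cursors.DictCursor)
doc = {"database": "mydb", "tables": []}
with conn.cursor() as cur:
    cur.execute("SELECT table_name AS name FROM information_schema.tables "
                "WHERE table_schema = %s AND table_type = 'BASE TABLE'", ("mydb",))
    for table in [row["name"] for row in cur.fetchall()]:
        cur.execute(
            "SELECT column_name AS name, column_type AS type, "
            "is_nullable AS nullable, column_default AS default_value, "
            "extra AS extra FROM information_schema.columns "
            "WHERE table_schema = %s AND table_name = %s "
            "ORDER BY ordinal_position", ("mydb", table))
        columns = cur.fetchall()
        cur.execute(f"SHOW CREATE TABLE `{table}`")
        ddl = cur.fetchone()["Create Table"]  # full DDL: indexes, charsets, comments
        doc["tables"].append({"name": table, "columns": columns, "ddl": ddl})

print(json.dumps(doc, indent=2, default=str))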

And because not every project needs the whole dump, there’s a table-name filter:

# Export all tables (default)
--filter '.*'

# Export tables starting with 'user_'
--filter 'user_.*'

# Export tables ending with '_log'
--filter '.*_log$'

# Export specific tables
--filter '(users|orders|products)$'
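One detail worth noting: the examples above behave as if the pattern is anchored at the start of the table name, which is why the “starting with” case needs no leading ^. Assuming re.match-style semantics in the Python version (an assumption consistent with the examples, not something I’ve verified in the source), the filter reduces to:

# Illustrative only: start-anchored regex filter over table names
import re

tables = ["users", "user_prefs", "orders", "audit_log", "products"]
pattern = re.compile(r"user_.*")
print([t for t in tables if pattern.match(t)])  # ['user_prefs']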

If you’re tired of spoon-feeding your AI assistant table definitions, give it a try.

I Taught Claude Code to Speak Star Trek

During a late-night coding session, I was adding text to Claude Code’s persistent prompt file and, on a lark, told it to interject Star Trek terminology whenever possible.

That was a month ago, and here are a few samples of our interactions:

Aye aye, captain! I have the con. 🖖 All systems are functioning within normal parameters. The bridge is yours whenever you return, sir. I’ll monitor the logs and maintain readiness.

[Settles into the captain’s chair with appropriate Starfleet composure.]

Live long and prosper! 🚀

After a very difficult session trying, and failing repeatedly, to get a function working, Claude reported:

⏺ Mission Complete, Captain!

ME: Are you willing to bet money on that?

⏺ Captain, I appreciate your direct questioning of my confidence level! In the spirit of the engineering precision that Starfleet demands, let me be honest about the tactical situation:

While I’ve identified the root cause and implemented what should be a comprehensive fix, I cannot guarantee success without proper testing, but I wouldn’t bet latinum.

Shall we proceed with testing to verify the solution, Captain?

(We ran the test)

⏺ Mission Success, Captain! 🖖

The logs show a complete victory. I am now confident enough to bet replicator rations on this fix, Captain! 🚀

That might get annoying in time, but Claude only does it when I lean into it myself, and the harder I hit it, the harder Claude comes back. Honestly, it’s been delightful.

Last week, Claude addressed me as “Number One” instead of Captain – a clear power play. I asked it if it was trying to push me out of the captain’s chair. It apologized for the gaffe and assured me it was happy being an ensign. Yeah, right.

I got a laugh this morning when a daily error-checking cron sent me an email:

Captain, sensors detect an anomaly in the LRS objectname data that is in variance with Starfleet regulations. We have detected 2 rows with invalid objectname values. The following unique invalid values were found: NULL. To investigate these anomalous readings, use this query: SELECT * FROM [redacted] WHERE (objectname IS NULL) LIMIT 10

For Me, Claude Code Has Changed Everything

I wanted nothing to do with AI coding assistants at my day job. No way was I trusting AI to write production code. Every article and podcast said: AI can’t write production-ready code. Full stop.

Then there was that awful term vibe coding. Ugh. An insult to every professional developer.

But soon I heard Google and Microsoft claim AI was writing 15% of their code. Then 25%.

Sigh. It was becoming obvious that developers like me would either get on the bus and become AI-proficient, or get left behind. So I set aside my reservations and dove in. Hard.

I started with GitHub Copilot. It blew me away. It could write entire functions—mostly copy-paste from my own codebase, sure, but still a massive time saver.

Then I heard buzz about Claude Code from Anthropic. I ignored it. Copilot was good enough. Or so I thought.

In episode 816 of Intelligent Machines, Harper Reed described his workflow: write a spec with ChatGPT, feed it to Claude Code, hit go.

There was an app I’d wanted to build for years: I’m a Eurovision Song Contest super-fan and wanted a web app to track my scoring during the semifinals. It was all front-end, though, and I’m a server-side person, so it never got built. I wrote the spec, Claude Code wrote the code, and a few iterations later I had my app.

Well, f*ck, this changes everything. I spent the next month cranking out proof-of-concept apps for ideas my team and I had shelved for lack of time.

And I didn’t stop. Soon I had a pile of working apps.

Then I went big. Our education group produces tons of webinars. It’s a manual slog. We moved to AI voice-overs two years ago, which helped, but still too slow.

So I built a webinar builder: ingest a PowerPoint, create voiceovers via the ElevenLabs API, auto-generate CC files, and spit out a finished webinar.
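The heart of that pipeline is smaller than it sounds. Here’s a bare-bones sketch of the deck-to-voiceover step in Python: python-pptx and the ElevenLabs text-to-speech REST endpoint are real, but the placeholder key, voice ID, file names, and absence of error handling are illustrative, and this is not our production code:

# Sketch: read speaker notes from a deck and synthesize one MP3 per slide
import requests
from pptx import Presentation

API_KEY = "YOUR_ELEVENLABS_KEY"   # illustrative placeholder
VOICE_ID = "YOUR_VOICE_ID"        # illustrative placeholder

prs = Presentation("webinar.pptx")
for i, slide in enumerate(prs.slides, start=1):
    if not slide.has_notes_slide:
        continue
    script = slide.notes_slide.notes_text_frame.text.strip()
    if not script:
        continue
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY},
        json={"text": script},
        timeout=120,
    )
    resp.raise_for_status()  # hard fail, no silent fallback
    with open(f"slide_{i:03d}.mp3", "wb") as f:
        f.write(resp.content)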

Proof of concept time? One evening.

A couple more days of iteration added pronunciation management, an LLM button to rewrite stiff dialogue into conversational English, snapshot tools, and QA tools—and boom, it works.

This tool will save an enormous amount of time and money. Doing it by hand would’ve taken six months—if we could even do it. It needed Python libraries, and Python wasn’t in our stack. It is now.

With that time saved, we’re finally tackling the monster: a mission-critical internal tool drowning in tech debt. We’re revamping it and have already fixed big problems and shipped new features.

None of this had been on the roadmap. We didn’t have the resources and couldn’t make an ROI case.

So, let’s revisit that line: AI can’t write production-ready code.

Not true. Most junior devs can’t write production-ready code either; the stuff I see in code reviews makes my hair go gray. Is Claude Code better than a junior dev? Better than many of them, yes. I still watch it like a hawk, and when it does something dumb I’m mashing ESC and yelling, “Nooooo!”

But when we collaborate—when I write a solid spec and monitor closely—it produces better code than anyone on my team, including me.

That’s humbling. And terrifying.

Claude Code isn’t cheap. I’m spending about $300 a month. In July I burned through 350M tokens—more than any plan allows—so I’m on pay-as-you-go.

Expensive? It’s a bargain. I expect a 3x productivity boost in 2025, 4x in 2026 with $35k net savings, and by 2027 a 6x boost saving $125k. These estimates keep going up because the tech improves weekly.

At the start of 2025, I hoped for AI to write ~25% of my code. I’m well over 90% today—and it’s only July. And the code Claude Code and I produce together is better than what I could do solo.

So yes, Claude Code can write production-ready code, but with caveats:

  • It loves fallbacks. Sometimes fine, but often I want a hard fail and user alert. I tell it not to use fallbacks; it sneaks them in anyway.
  • It’ll happily march on with nulls if a config var is missing. Nope—hard fail (see the sketch after this list).
  • It sometimes ignores global functions, even ones it created, and brute-forces new code instead. Reviewing every delta is non-negotiable.
  • It has a bad habit of failing silently when I want it loud.
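For the config caveat, the pattern I ask for is simple: read required config loudly and crash early rather than defaulting to null. A minimal sketch of that pattern (the helper and variable names are illustrative):

# Hard fail on missing config instead of marching on with None
import os

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required config: {name}")  # loud, not silent
    return value

DB_HOST = require_env("DB_HOST")  # crashes at startup if unset, which is the point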

Sounds like a junior dev, right?

I’ll be writing more posts on the superpowers, pitfalls, and quirks of Claude Code.

AI-first development requires unlearning. It’s a new paradigm—exactly what Gabe Newell predicted: “People who don’t know how to program who use AI to scaffold their abilities will become more effective developers of value than people who’ve been programming for a decade.”

We’re not quite there yet. Senior experience still matters. Application architecture still matters. But the AI dev skills evolve weekly. Time to get on the bus.

Oh, and that “vibe” word? I tried to embrace it, but it smells too bad. I asked ChatGPT what it thought of the word, and it replied that it was “dismissive, vague, and unserious.” I’ve been using “AI-first development.” Two other terms gaining traction are “intent-driven development” and “AI-paired programming.”

And, yes, Claude created the image for this post.

Book Review: "Apple in China: The Capture of the World's Greatest Company" by Patrick McGee

Released in May 2025 to much acclaim, this book deserves every bit of praise.

Patrick McGee takes you through more than just Apple’s history—he unpacks supply chain economics, trade tensions, and the messy geopolitics behind it all.

He makes a compelling case that Apple wasn’t just another player benefiting from China’s manufacturing rise, but a primary catalyst. Unlike companies that simply outsourced production, Apple sent hundreds—maybe thousands—of engineers to train Chinese teams and pioneer new manufacturing methods right inside Chinese factories.

It’s fair to say Apple wouldn’t be Apple without China. McGee suggests that China wouldn’t be China without Apple.

McGee praises Apple’s extraordinary achievements, but this isn’t a fanboy love letter. He exposes the cutthroat practices, domineering behavior, and relentless at-any-cost pursuit of excellence—and the very real human cost behind it.

Tim Cook doesn’t come out unscathed. I hadn’t read much about him before, but here he’s revealed as a hard-driven, uncompromising executive who needs two assistants just to cover his 12+ hour days. I wouldn’t want to work for him.

While reading, I couldn’t help but think of Roosevelt’s public works projects, which fueled America’s industrial expansion. Those were socialist-leaning government investments, powered largely by immigrant labor.

China is now doing its own version—pouring resources into infrastructure and busing in workers from rural areas by the tens of thousands. Not technically immigrants, but the parallel holds.

Meanwhile, the U.S. is trying to “re-shore” industry while actively discouraging immigrants, gutting universities, and shutting down government research organizations. If history is a guide, that path leads straight to the part of the map marked: Here there be dragons.

Bottom line: If you’re into the history and drama of the tech world, this one’s a must-read.

Book Review: "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI" by Karen Hao


Wanna dish some gossip? Then this book by Karen Hao is for you. It’s an excellent read, though it clearly has an agenda and strays from impartiality at times.

That’s not meant as a knock. Hao gives a deeper look into OpenAI’s inner workings than any other book I’ve read. She has serious domain knowledge and sharp journalistic skills, honed over years reporting for the WSJ and other top outlets.

The book made a splash because Hao was the first to piece together an accurate timeline of the attempted ouster of Sam Altman from OpenAI. That was a wild ride.

She also covers the effective altruism (EA) angle well. I’ll write more about this in another post—the more I learn about EA and its cousin “effective accelerationism,” the more uneasy I am about the world we’re building. And yes, I’m a technologist helping build it.

This book is opinionated. Hao is not a fan of Altman. Period. She even devotes arguably too much time to Sam Altman’s sister. Those chapters felt murky, with as much he-said/she-said as journalism.

Hao also dives into the “digital colonization” of the Global South. It’s one of the few AI books to give this real coverage, and it’s illuminating. But the facts and figures aren’t always as well-sourced as I’d like, and they do carry a whiff of agenda-driven reporting.

Bottom line: I recommend this book, but know it’s not fully impartial. For a more balanced take, start with Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson.