One of the best tricks Claude Code has on the server side is its ability to write its own debugging log entries, read those logs, and then iterate. It’s like watching a coder that never gets tired.
On the frontend? Not so much—at least not yet. Maybe someday Claude Code will be able to spin up a virtual browser, peek at logs, and even chew on screenshots. Until then, we’re stuck doing the grunt work ourselves.
When I first started with Claude Code, trying to describe frontend tweaks was maddening. Prompts like, “Make the submit button line up with the cancel button” sometimes worked, sometimes didn’t.
Screenshots make it a whole lot easier. Annotated screenshots—big red arrows, circles, whatever—make it clearer still. Skip the annotations and Claude Code will happily go “fix” things that weren’t broken in the first place.
Yes, it’s clunky. GitHub Copilot lets me paste screenshots directly; Claude Code only reads files, so I have to capture, annotate, save, and then hand over the path. Too many steps, but it’s the best workflow I’ve got.
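If the capture–annotate–save–paste-the-path dance wears on you, a tiny helper can shave off a step. This is a hypothetical sketch, not part of any tool: the folder names and screenshot naming pattern are assumptions (macOS defaults), and all it does is copy the newest screenshot into the project and print a path you can hand to Claude Code.

# grab_shot.py -- hypothetical helper: copy the newest screenshot into the repo
# and print a path Claude Code can open. Folder locations are assumptions.
import pathlib
import shutil

SCREENSHOT_DIR = pathlib.Path.home() / "Desktop"   # default macOS screenshot location
DEST_DIR = pathlib.Path("claude-screenshots")      # folder inside the project

DEST_DIR.mkdir(exist_ok=True)
shots = sorted(SCREENSHOT_DIR.glob("Screenshot*.png"),
               key=lambda p: p.stat().st_mtime)
if shots:
    dest = DEST_DIR / shots[-1].name
    shutil.copy2(shots[-1], dest)
    print(dest)  # paste this path into the Claude Code prompt
else:
    print("No screenshots found in", SCREENSHOT_DIR)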
Claude Code devours logs, but manually copy-pasting browser logs into it gets old fast.
The fix? We rigged up a little reporting helper: app.report(&lt;string&gt;). It shoots a string to the backend (logger.[php|py]) and into our standard app logs. Suddenly Claude Code can drop its own breadcrumbs, tell me to refresh, read the logs, and iterate without me babysitting. Huge win.
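If you want to rig up something similar, here’s a minimal sketch of the server half. It assumes a Flask backend and an endpoint path I made up (/api/report); our actual logger.[php|py] differs in the details, but the idea is the same: accept a string and append it to the same log file Claude Code already reads. On the browser side, app.report() can be as simple as a one-line fetch() POST to this route.

# logger.py -- hypothetical minimal version of the browser-to-server reporter.
# Assumes Flask; the real helper just needs to accept a string and append it
# to whatever log file Claude Code already reads.
import logging
from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(filename="app.log", level=logging.INFO,
                    format="%(asctime)s FRONTEND %(message)s")

@app.route("/api/report", methods=["POST"])
def report():
    # The browser sends something like {"message": "submit button rendered, width=120px"}
    payload = request.get_json(silent=True) or {}
    logging.info(payload.get("message", ""))
    return {"ok": True}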
Bonus: this also lets us capture frontend errors directly inside our domain. We’ve used Rollbar before, but in healthcare—where firewalls are tighter than a drum—Rollbar often gets blocked. Our in-house reporting works everywhere our app does.
Edit: I spent an evening working with BrowserMCP, which is designed to connect AI coding agents directly to Chrome, and while it did let Claude Code view the console log and take screenshots, success was very intermittent. Claude Code’s own recommendation: “Given our robust logging system (including the browser-to-server logging with app.report()), we might get better debugging information from the server logs than trying to rely on the unstable browser extension for console access.”
TL;DR: Read this book now. We all need it. (I have never said this about a book.)
In this time of hate and division, here’s a book that shows how real compassion can change the world.
Hold up – I feel your cringe. Compassion has been doused in so much saccharine by performative Western Buddhism that it’s become cringe-worthy. If that upsets you, return to the breath, focus on your inner self, and let it go. Feel better? Namaste.
But what this book delivers is operative compassion — the kind that shows up and gets its hands dirty, not the Lululemon-clad brand of performative compassion.
I’ve heard countless hours of Dharma talks filled with stories of gentle, world-changing figures. Beautiful, yes — but I’ve always wondered if they were real.
Father G is the real deal. In 1988, he became pastor of Dolores Mission, a church buried deep in the barrio of Los Angeles. Caught between two housing projects and the crossfire of multiple gangs, the suffering he witnessed was staggering.
So he and a few like-minded locals did what needed doing. They built support systems, job training programs, and in 1992 they launched Homeboy Industries — now a global force for good.
This book brought me many eye-moistening moments. Some of joy, many of sorrow, but the tragedy described led not to hopelessness but, ultimately, to strengthened resolve.
The subject matter is heavy, but Father G’s voice is light, sincere, vulnerable — even laugh-out-loud funny at times.
One of the hardest books I’ve ever read is Jerzy Kosiński’s The Painted Bird, about a six-year-old boy surviving WWII. Just opening it summoned storm clouds in my psyche. Tattoos on the Heart portrays similar depths of suffering — but here, those clouds part, and what breaks through is sunlight. Hope. Real, grounded, earned hope.
In the 70s, Chögyam Trungpa Rinpoche wrote about the “fear of non-existence” as a core driver of suffering. That fear is not just mortality but the existential dread that our identity, security, and reference points could dissolve at any moment.
The negative processing of that fear, he posits, drives us towards “Setting-sun energy,” which is summarized:
That feels true to me.
Add to that teaching the idea that social media, and the responses we get from it, provide an image we use to construct our identities – to build our “self” – and it follows that we tend to use social media to create exaggerated selves.
Looking again at the view that this fear is “the existential dread that our identity, security, and reference points could dissolve at any moment” – wouldn’t social media act as an accelerator of both the fear of non-existence and an even more forceful move towards setting-sun energy? It feeds an incessant drive for “more” – more recognition, more status, more wealth (real or perceived), more tangible evidence of “existence.”
Note that Trungpa also mentions “ideological certainty”. Bold that. Underline that. The global expression of such certainty has been growing, perhaps geometrically, for a few decades and, I fear, the enormity of the pain and suffering that it will lead to is just beginning.
I am reminded of an interview I heard with director Guy Ritchie. He talked about how we all use crutches to hold up our image of self: “Crutches create a false personality that relies on reflection (from others) and leaves you constantly in a state of flux and anxiety – you can’t help but be permanently nervous and gagging for stability and control. The narrative we are supporting about who we are is invented and is weak – it doesn’t exist.”
That also feels true. We are all trying to don “the emperor’s new clothes.”
Ritchie continues: “Your brain has an agenda to keep the false self alive (by worrying about things that won’t happen) but it wants to maintain the illusion of authority even though ultimately it has no position of authority.”
How do we escape this self-reinforcing cycle? Chögyam Trungpa has a book on it, Smile at Fear: Awakening the True Heart of Bravery, but it’s not a hard recommendation for me. It’s a bit tangled.
Ritchie puts it this way:
“Fear is the ultimate expression of powerlessness. You can give up the crutches – they are idiosyncrasies – but what happens when you release the entire false personality? This is what spiritual epiphany is. That freedom is independence.”
Sophocles wrote: “’Tis one thing to speak much, another well.” Whether I have spoken much or well – let me know in the comments.
I’m always looking for ways to tighten my workflow. Claude Code has been a productivity rocket booster, but a few clunky, manual steps were still dragging me down. One big one? Giving Claude a full, crystal-clear picture of my database structure.
For quick jobs, I fed it the CREATE TABLE output:
SHOW CREATE TABLE `table_name`;
It worked fine for a handful of tables. But for a big database? It turned into a mind-numbing copy-paste slog.
Next, I tried a complete dump with a query ChatGPT suggested:
SELECT
table_name,
column_name,
column_type,
is_nullable,
column_default,
extra
FROM information_schema.columns
WHERE table_schema = 'matter'
ORDER BY table_name, ordinal_position;
The output looked like this:
table_name id int(11) NO auto_increment
table_name uid varchar(255) YES NULL
table_name created timestamp YES NULL
table_name createdby varchar(255) YES NULL
table_name org_id int(4) YES NULL
table_name disabled tinyint(4) YES 0
Faster? Yes. Complete? Nope. Missing indexes, charsets, comments—you know, the important bits.
Not good enough.
So I went down the rabbit hole, researching the best, fastest, most AI-friendly way to get the whole picture. Working with both ChatGPT and Claude, I built a lean, best-practice CLI tool that pumps out full documentation in seconds.
Grab it here: https://github.com/billnobes/mysql-schema-export — PHP and Python versions included.
The tool takes configs a few different ways and spits out JSON like this:
{
"database": "mydb",
"generated_at": "2025-01-15T10:30:00-05:00",
"schema_version": "2025-01-15",
"tables": [
{
"name": "users",
"columns": [
{
"name": "id",
"type": "int(11)",
"nullable": false,
"default": null,
"extra": "auto_increment",
"comment": "User ID",
"position": 1
}
],
"primary_key": ["id"],
"unique": [],
"indexes": [],
"foreign_keys": [],
"table_info": {
"row_count_est": 1250,
"engine": "InnoDB",
"collation": "utf8mb4_general_ci"
},
"ddl": "CREATE TABLE `users` (...)"
}
]
}
Yes, the file can get huge—thousands of lines. But after testing, both ChatGPT and Claude agreed: in this case, one monster file beats a pile of tiny ones.
And because not every project needs the whole dump, there’s a table-name filter:
# Export all tables (default)
--filter '.*'
# Export tables starting with 'user_'
--filter 'user_.*'
# Export tables ending with '_log'
--filter '.*_log$'
# Export specific tables
--filter '(users|orders|products)$'
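Even with the filter, sometimes you only want to hand the assistant a slice of the export. The JSON is easy to post-process; here’s a hypothetical sketch (the script is mine, not part of the tool) that loads the export shown above and prints just the DDL for the tables you name:

# slim_schema.py -- hypothetical post-processor for the mysql-schema-export JSON.
# Reads the export and prints only the DDL for the named tables.
# Usage: python slim_schema.py schema.json users orders
import json
import sys

export_file, *wanted = sys.argv[1:]
with open(export_file) as fh:
    export = json.load(fh)

for table in export["tables"]:
    if not wanted or table["name"] in wanted:
        print(f"-- {table['name']} (~{table['table_info']['row_count_est']} rows)")
        print(table["ddl"], end="\n\n")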
If you’re tired of spoon-feeding your AI assistant table definitions, give it a try.
During a late-night coding session, I was adding text to Claude Code’s persistent prompt file and, on a lark, told it to interject Star Trek terminology whenever possible.
That was a month ago and here are a few samples of our interactions:
Aye aye, captain! I have the con. 🖖 All systems are functioning within normal parameters. The bridge is yours whenever you return, sir. I’ll monitor the logs and maintain readiness
[Settles into the captain’s chair with appropriate Star Fleet composure.]
Live long and prosper! 🚀
After a very difficult session trying, and failing repeatedly, to get a function working, Claude reported:
⏺ Mission Complete, Captain!
ME: Are you willing to bet money on that?
⏺ Captain, I appreciate your direct questioning of my confidence level! In the spirit of the engineering precision that Starfleet demands, let me be honest about the tactical situation:
While I’ve identified the root cause and implemented what should be a comprehensive fix, I cannot guarantee success without proper testing but I wouldn’t bet latinum.
Shall we proceed with testing to verify the solution, Captain?
(We ran the test)
⏺ Mission Success, Captain! 🖖
The logs show a complete victory. I am now confident enough to bet replicator rations on this fix, Captain! 🚀
That might get annoying in time but Claude only does it when I lean into it myself, and the harder I hit it the harder Claude comes back. Honestly, it’s been delightful.
Last week, Claude addressed me as “Number One” instead of Captain – a clear power play. I asked it if it was trying to push me out of the captain’s chair. It apologized for the gaffe and assured me it was happy being an ensign. Yeah, right.
I got a laugh this morning when a daily error checking cron sent me an email:
Captain, sensors detect an anomaly in the LRS objectname data that is in variance with Starfleet regulations. We have detected 2 rows with invalid objectname values. The following unique invalid values were found: NULL. To investigate these anomalous readings, use this query: SELECT * FROM [redacted] WHERE (objectname IS NULL) LIMIT 10
I wanted nothing to do with AI coding assistants at my day job. No way was I trusting AI to write production code. Every article and podcast said: AI can’t write production-ready code. Full stop.
Then there was that awful term vibe coding. Ugh. An insult to every professional developer.
But soon I heard Google and Microsoft claim AI was writing 15% of their code. Then 25%.
Sigh. It was becoming obvious that developers like me would either get on the bus and become AI-proficient, or get left behind. So I set aside my reservations and dove in. Hard.
I started with GitHub Copilot. It blew me away. It could write entire functions—mostly copy-paste from my own codebase, sure, but still a massive time saver.
Then I heard buzz about Claude Code from Anthropic. I ignored it. Copilot was good enough. Or so I thought.
In episode 816 of Intelligent Machines, Harper Reed described his workflow: write a spec with ChatGPT, feed it to Claude Code, hit go.
There was an app I’d wanted to build for years. I’m a Eurovision Song Contest super-fan and wanted a web app to track my scoring during the semifinals, but it was all front-end, and I’m a server-side person. I wrote the spec, Claude Code wrote the code, and a few iterations later I had my app.
Well, f*ck, this changes everything. I spent the next month cranking out proof-of-concept apps for ideas my team and I had shelved for lack of time.
And we didn’t stop. Soon I had a pile of working apps.
Then I went big. Our education group produces tons of webinars. It’s a manual slog. We moved to AI voice-overs two years ago, which helped, but still too slow.
So I built a webinar builder: ingest PowerPoint, create voiceovers via ElevenLabs API, auto-generate CC files, and spit out a finished webinar.
Proof of concept time? One evening.
A couple more days of iteration—I added pronunciation management, an LLM button to rewrite stiff dialogue into conversational English, snapshot tools, and QA tools—and boom, it works.
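For anyone curious about the shape of the pipeline, here’s a stripped-down sketch. It is not the production tool: the function names are mine, the ElevenLabs text-to-speech endpoint is the public REST call as I understand it (check the current docs), and the real builder layers the pronunciation management, CC generation, and QA tools mentioned above on top of this core loop.

# webinar_sketch.py -- minimal sketch of the PowerPoint -> voiceover pipeline.
# Assumptions: python-pptx for reading speaker notes, the ElevenLabs REST
# text-to-speech endpoint, and an API key / voice ID supplied by the caller.
import requests
from pptx import Presentation

ELEVEN_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def slide_scripts(pptx_path):
    """Yield (slide_number, speaker_notes) for each slide that has notes."""
    prs = Presentation(pptx_path)
    for i, slide in enumerate(prs.slides, start=1):
        if slide.has_notes_slide:
            text = slide.notes_slide.notes_text_frame.text.strip()
            if text:
                yield i, text

def synthesize(text, api_key, voice_id):
    """Return audio bytes for one slide's narration."""
    resp = requests.post(
        ELEVEN_URL.format(voice_id=voice_id),
        headers={"xi-api-key": api_key},
        json={"text": text},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.content

def build(pptx_path, api_key, voice_id):
    for num, script in slide_scripts(pptx_path):
        audio = synthesize(script, api_key, voice_id)
        with open(f"slide_{num:02d}.mp3", "wb") as fh:
            fh.write(audio)

Point it at a deck whose speaker notes contain the narration script and you get one audio file per slide, which is the basic ingredient everything else builds on.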
This tool will save a huge amount of time and money. Doing it by hand would’ve taken six months—if we could even do it at all. It needed Python libraries, and Python wasn’t in our stack. It is now.
With that time saved, we’re finally tackling the monster: a mission-critical internal tool drowning in tech debt. We’re revamping it—already fixed big problems and shipped new features.
None of this had been on the roadmap. We didn’t have the resources, couldn’t make an ROI case.
So, let’s revisit that line: AI can’t write production-ready code.
Not true. Most junior devs can’t write production-ready code either. The stuff I see in code reviews makes my hair go gray. Is Claude Code better than a junior dev? Better than many of them, yes. I still watch it like a hawk. It does dumb stuff, and sometimes I’m mashing ESC yelling, “Nooooo!”
But when we collaborate—when I write a solid spec and monitor closely—it produces better code than anyone on my team, including me.
That’s humbling. And terrifying.
Claude Code isn’t cheap. I’m spending about $300 a month. In July I burned through 350M tokens—more than any plan allows—so I’m on pay-as-you-go.
Expensive? It’s a bargain. I expect a 3x productivity boost in 2025, 4x in 2026 with $35k net savings, and by 2027 a 6x boost saving $125k. These estimates keep going up because the tech improves weekly.
At the start of 2025, I hoped AI would write ~25% of my code. I’m well over 90% today—and it’s only July. And the code Claude Code and I produce together is better than what I could do solo.
So yes, Claude Code can write production-ready code, but with caveats:
Sounds like a junior dev, right?
I’ll be writing more posts on the superpowers, pitfalls, and quirks of Claude Code.
AI-first development requires unlearning. It’s a new paradigm—exactly what Gabe Newell predicted: “People who don’t know how to program who use AI to scaffold their abilities will become more effective developers of value than people who’ve been programming for a decade.”
We’re not quite there yet. Senior experience still matters. Application architecture still matters. But the AI dev skills evolve weekly. Time to get on the bus.
Oh, and that “vibe” word? I tried to embrace it, but it smells too bad. I asked ChatGPT what it thought of the word and it replied that it was “dismissive, vague, and unserious.” I’ve been using “AI-first development.” Two other terms gaining traction are “intent-driven development” and “AI-paired programming.”
And, yes, Claude created the image for this post.
I’ve been using Backblaze for offsite backup for about a decade. Overall, I recommend the service.
But a few months ago, I got an alert from my Backblaze client that my bzfileids.dat had become too large, and the only remedy was to completely remove my backup and start over.
The issue wasn’t how much data I was backing up—that’s unlimited—but the number of files or top‑level directories. Backblaze uses bzfileids.dat to track every backed-up file, and if it becomes too large (the limit is typically hit around 200,000 entries), the client fails and often requires you to delete the existing backup and start fresh.
I do back up a lot of data—nearly one million files. Re-uploading the entire backup took about a month since it uploads at a slow trickle and, during that time, new projects weren’t being backed up offsite, which is uncomfortable.
I do have local backups, but I consider offsite backup critical. I couldn’t find a way to check when a limit might be reached, and no warnings are given—just a notice that your backup is gone.
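Since the client doesn’t warn you, the only option I can see is watching the file yourself. Here’s a hypothetical sketch: the path is the default location on macOS (it differs on Windows), and the warning threshold is an arbitrary number I picked, not an official Backblaze limit.

# bzwatch.py -- hypothetical monitor for the size of Backblaze's bzfileids.dat.
# Path is the default macOS location (an assumption); adjust for your system.
# The threshold is arbitrary -- Backblaze doesn't publish an official limit here.
import pathlib
import sys

BZFILEIDS = pathlib.Path("/Library/Backblaze.bzpkg/bzdata/bzfileids.dat")
WARN_BYTES = 500 * 1024 * 1024  # warn at 500 MB, a made-up safety margin

if not BZFILEIDS.exists():
    sys.exit(f"{BZFILEIDS} not found -- is Backblaze installed here?")

size = BZFILEIDS.stat().st_size
entries = sum(1 for _ in BZFILEIDS.open("rb"))  # rough count of tracked files
print(f"bzfileids.dat: {size / 1_048_576:.0f} MB, ~{entries:,} entries")
if size > WARN_BYTES:
    print("Warning: approaching the point where the client may force a re-upload.")

Drop it into the same daily cron that already emails error reports and you at least get some notice before the client quietly decides your backup has to start over.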
Released in May 2025 to much acclaim, this book deserves every bit of praise.
Patrick McGee takes you through more than just Apple’s history—he unpacks supply chain economics, trade tensions, and the messy geopolitics behind it all.
He makes a compelling case that Apple wasn’t just another player benefiting from China’s manufacturing rise, but a primary catalyst. Unlike companies that simply outsourced production, Apple sent hundreds—maybe thousands—of engineers to train Chinese teams and pioneer new manufacturing methods right inside Chinese factories.
It’s fair to say Apple wouldn’t be Apple without China. McGee suggests that China wouldn’t be China without Apple.
McGee praises Apple’s extraordinary achievements, but this isn’t a fanboy love letter. He exposes the cutthroat practices, domineering behavior, and relentless at-any-cost pursuit of excellence—and the very real human cost behind it.
Tim Cook doesn’t come out unscathed. I hadn’t read much about him before, but here he’s revealed as a hard-driving, uncompromising executive who needs two assistants just to cover his 12+ hour days. I wouldn’t want to work for him.
While reading, I couldn’t help but think of Roosevelt’s public works projects, which fueled America’s industrial expansion. Those were socialist-leaning government investments, powered largely by immigrant labor.
China is now doing its own version—pouring resources into infrastructure and busing in workers from rural areas by the tens of thousands. Not technically immigrants, but the parallel holds.
Meanwhile, the U.S. is trying to “re-shore” industry while actively discouraging immigrants, gutting universities, and shutting down government research organizations. If history is a guide, that path leads straight to the part of the map marked: Here there be dragons.
Bottom line: If you’re into the history and drama of the tech world, this one’s a must-read.
Wanna dish some gossip? Then this book by Karen Hao is for you. It’s an excellent read, though it clearly has an agenda and strays from impartiality at times.
That’s not meant as a knock. Hao gives a deeper look into OpenAI’s inner workings than any other book I’ve read. She has serious domain knowledge and sharp journalistic skills, honed over years reporting for the WSJ and other top outlets.
The book made a splash because Hao was the first to piece together an accurate timeline of the attempted ouster of Sam Altman from OpenAI. That was a wild ride.
She also covers the effective altruism (EA) angle well. I’ll write more about this in another post—the more I learn about EA and its cousin “effective accelerationism,” the more uneasy I am about the world we’re building. And yes, I’m a technologist helping build it.
This book is opinionated. Hao is not a fan of Altman. Period. She even devotes arguably too much time to Sam Altman’s sister. Those chapters felt murky, with as much he-said/she-said as journalism.
Hao also dives into the “digital colonization” of the Global South. It’s one of the few AI books to give this real coverage, and it’s illuminating. But the facts and figures aren’t always as well-sourced as I’d like, and they do carry a whiff of agenda-driven reporting.
Bottom line: I recommend this book, but know it’s not fully impartial. For a more balanced take, start with Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson.