I’m always looking for ways to tighten my workflow. Claude Code has been a productivity rocket booster, but a few clunky, manual steps were still dragging me down. One big one? Giving Claude a full, crystal-clear picture of my database structure.
For quick jobs, I fed it the CREATE TABLE output:
SHOW CREATE TABLE `table_name`;
It worked fine for a handful of tables. But for a big database? It turned into a mind-numbing copy-paste slog.
Next, I tried a complete dump with a query ChatGPT suggested:
SELECT
    table_name,
    column_name,
    column_type,
    is_nullable,
    column_default,
    extra
FROM information_schema.columns
WHERE table_schema = 'matter'
ORDER BY table_name, ordinal_position;
The output looked like this:
table_name  id         int(11)       NO   NULL  auto_increment
table_name  uid        varchar(255)  YES  NULL
table_name  created    timestamp     YES  NULL
table_name  createdby  varchar(255)  YES  NULL
table_name  org_id     int(4)        YES  NULL
table_name  disabled   tinyint(4)    YES  0
Faster? Yes. Complete? Nope. Missing indexes, charsets, comments—you know, the important bits.
Not good enough.
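Patching those gaps by hand means yet more information_schema queries. Indexes alone need a second pass, something like this sketch (assuming a PyMySQL connection; the credentials are placeholders):

# Sketch: filling just the index gap, assuming PyMySQL
# and placeholder connection details.
import pymysql

conn = pymysql.connect(host="localhost", user="me",
                       password="secret", database="matter")

INDEX_SQL = """
    SELECT table_name, index_name, column_name, non_unique
    FROM information_schema.statistics
    WHERE table_schema = %s
    ORDER BY table_name, index_name, seq_in_index
"""

with conn.cursor() as cur:
    cur.execute(INDEX_SQL, ("matter",))
    for table, index, column, non_unique in cur.fetchall():
        kind = "INDEX" if non_unique else "UNIQUE"
        print(f"{table}.{index} ({kind}): {column}")

And that still leaves charsets, comments, and foreign keys.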
So I went down the rabbit hole, doing deep-dive research to figure out the best, fastest, AI-friendly way to get the whole picture. Working with both ChatGPT and Claude, I built a lean, best-practice CLI tool to pump out full documentation in seconds.
Grab it here: https://github.com/billnobes/mysql-schema-export — PHP and Python versions included.
The tool accepts its config a few different ways and spits out JSON like this:
{
  "database": "mydb",
  "generated_at": "2025-01-15T10:30:00-05:00",
  "schema_version": "2025-01-15",
  "tables": [
    {
      "name": "users",
      "columns": [
        {
          "name": "id",
          "type": "int(11)",
          "nullable": false,
          "default": null,
          "extra": "auto_increment",
          "comment": "User ID",
          "position": 1
        }
      ],
      "primary_key": ["id"],
      "unique": [],
      "indexes": [],
      "foreign_keys": [],
      "table_info": {
        "row_count_est": 1250,
        "engine": "InnoDB",
        "collation": "utf8mb4_general_ci"
      },
      "ddl": "CREATE TABLE `users` (...)"
    }
  ]
}
Yes, the file can get huge—thousands of lines. But after testing, both ChatGPT and Claude agreed: in this case, one monster file beats a pile of tiny ones.
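If you want a quick look at what actually landed in the export, a few lines of Python will summarize it. This is just a sketch based on the JSON structure above, and the schema.json filename is my placeholder:

# Sketch: compact per-table summary of the exported JSON.
# Field names match the sample output above; the filename is an assumption.
import json

with open("schema.json") as f:
    schema = json.load(f)

for table in schema["tables"]:
    cols = ", ".join(c["name"] for c in table["columns"])
    pk = ", ".join(table["primary_key"]) or "none"
    print(f"{table['name']} (PK: {pk}): {cols}")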
And because not every project needs the whole dump, there’s a table-name filter:
# Export all tables (default)
--filter '.*'
# Export tables starting with 'user_'
--filter 'user_.*'
# Export tables ending with '_log'
--filter '.*_log$'
# Export specific tables
--filter '(users|orders|products)$'
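The filter is a plain regular expression. If you're unsure what a pattern will catch, you can preview it with Python's re module first. A sketch, assuming the tool anchors matches at the start of the table name, re.match-style, which fits the examples above:

# Sketch: previewing which table names a filter pattern would match,
# assuming re.match-style (start-anchored) semantics.
import re

tables = ["users", "user_prefs", "orders", "audit_log", "products"]

for pattern in [r".*", r"user_.*", r".*_log$", r"(users|orders|products)$"]:
    matched = [t for t in tables if re.match(pattern, t)]
    print(f"{pattern!r} -> {matched}")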
If you’re tired of spoon-feeding your AI assistant table definitions, give it a try.