Six Slash Commands I Built on Top of Claude Code
Six custom Claude Code slash commands I rely on daily on a legacy Laravel project — and the one specific piece of toil each one absorbs.
I have fourteen markdown files in ~/.claude/commands/. They look like this:
$ ls -1 ~/.claude/commands/
close-gl-issue.md
draft-post.md
gh-issue.md
gl-issue.md
handoff.md
improve-issue.md
prod-db-akluma.md
publish-post.md
ship-akluma.md
ship-consentio.md
staging-db-akluma.md
staging-db.md
update-docs.md
worktree.md
Each file is a Claude Code slash command — a markdown document whose contents are prepended to the agent's prompt when you type /<filename> into a session. They are, quite literally, just text, and they do not feel like they should matter very much. They do. For me, they are the difference between a raw AI assistant that is vaguely useful on a good day and a set of sharp, purpose-built tools that absorb specific pieces of toil from a legacy Laravel codebase I did not originally write. The CLI is not really my tool. These files are.
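For concreteness, creating one looks like this — the filename minus `.md` becomes the slash command name. The command below is a hypothetical example for illustration, not one of the fourteen real files:

```shell
# Create a hypothetical /summarize-diff command. Claude Code picks up
# any markdown file in this directory as a slash command.
mkdir -p ~/.claude/commands
cat > ~/.claude/commands/summarize-diff.md <<'EOF'
Summarize the current uncommitted diff in three bullet points,
then list any files that look unrelated to the main change.
EOF
# Typing /summarize-diff in a session now injects that text into the prompt.
```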
This post is a tour of six of them, ordered loosely by the chronology of a working day. Some of the fourteen are variants for a different project, and a few are database tunnels that are useful without being interesting; I have left those out. The six I have picked are the ones whose job description surprised me the most — either when I wrote them, or later, when I caught myself realising how much of my day they had quietly eaten.
/improve-issue <url>
The first command in the catalog is also the most embarrassing. I create GitLab issues during meetings, and those meetings move fast enough that my issue descriptions are often one-third of a sentence and a typo. Sometimes the title is literally the last thing someone said out loud. I know that when I get back to my desk, I will not remember what it meant, and more importantly, the Claude Code session I eventually point at the issue will not either. So I built a command that rewrites my own bad issues into real ones before I start work on them.
It takes the full URL of an issue, sniffs the platform, fetches the issue and every comment through the appropriate CLI, and then picks between three triage paths depending on how clear the thing is. Before any of that, though, if the issue smells UI-shaped — a mention of a page, a form, a button, a layout — the command is hard-coded to stop and demand a screenshot before going any further. No exceptions. I have wasted too much time guessing which page was broken from a title like "fix asset register" to ever want the command to be polite about this again.
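The platform sniff itself is just string matching on the URL. A minimal sketch — the function name and the dispatch on `glab` versus `gh` are my illustration of the idea, not a transcript of the command file:

```shell
# Hypothetical sketch of the platform sniff: pick a CLI from the issue URL.
detect_platform() {
  case "$1" in
    *gitlab*)     echo "glab" ;;            # GitLab (or self-hosted) -> glab CLI
    *github.com*) echo "gh" ;;              # GitHub -> gh CLI
    *)            echo "unknown"; return 1 ;;
  esac
}

detect_platform "https://gitlab.com/d3team/consentio_docker_src/-/issues/345"
# prints "glab"
```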
The output is not a comment on the issue. It is a full replacement of the title and description, posted via the API:
glab api "projects/d3team%2Fconsentio_docker_src/issues/<number>" \
--method PUT -f title="$TITLE" -f description="$BODY"
Comments annotate; a rewritten description replaces. I want the original carelessness gone, not appended to.
/gl-issue <number>
This is the load-bearing command of the whole directory. It is the one I run most days, and the one that most obviously rewards the effort of having written the command file in the first place. The friction it absorbs is a subtle one: starting work on an issue on a legacy codebase used to mean the agent would either drown in the project's several hundred pages of documentation, reading everything and running out of context before it even touched the code, or skip the documentation entirely and implement the issue blind. Neither was a good outcome. What I wanted was selective reading — load the specific documentation categories this particular issue actually needs, and nothing else.
The way the command does this is, I think, the single nerdiest thing in the directory. When the command fires, it spawns a separate Claude subagent running on Sonnet — cheaper and faster than the main Opus session — whose only job is to read docs/doc-router.yaml, read the issue, and return a JSON array of category IDs for the main agent to load. The instruction inside the subagent's prompt is explicit about the format:
Return ONLY a JSON array of category IDs, e.g.: ["auth", "database", "tenant-isolation"]
The main agent then reads every file under each matched category plus an "always" set that loads on every issue, and only then starts investigating the codebase. The triage adds about thirty seconds to the start of a session and saves easily twenty minutes of the main agent chasing the wrong files. The command also has a hard-coded screenshot gate that fires before any code is read — if the issue involves anything visual, the agent is required to stop and ask for a screenshot before going any further — and a TDD Red-Green-Pause loop inside the implementation step, because I like my agents to write the failing test before they touch the production code. The doc triage is the part that surprised me; the rest is just the command enforcing habits I already had and kept forgetting to apply.
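A sketch of the consuming side of that triage, assuming the subagent's JSON reply lands in a variable and that each category ID maps to a directory of markdown files — the `docs/<category>/` layout is my guess for illustration, not the real structure of doc-router.yaml:

```shell
# Hypothetical consumer of the subagent's reply. A real implementation would
# use a proper JSON parser; this naive split is enough for a flat ID array.
categories='["auth", "database", "tenant-isolation"]'

parse_ids() { printf '%s' "$1" | tr -d '[]" ' | tr ',' '\n'; }

# The "always" set loads on every issue; matched categories add to it.
list_docs() {
  for cat in always $(parse_ids "$categories"); do
    for f in "docs/$cat"/*.md; do
      [ -e "$f" ] && echo "load: $f"
    done
  done
  return 0
}

list_docs
```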
/worktree <number>
Parallel worktrees have their own two-post story, which I will link at the end of this one, so I will skip over the infrastructure and talk only about the command I eventually wrapped around it. Before this command existed, spinning up a new parallel session looked like this: fetch the issue, read the title, derive a short kebab-case slug by hand, pick a prefix (feature/ or bugfix/ based on the labels), assemble the branch name, run ./scripts/parallel-session.sh create feature/export-audit-pdf-345 345, wait for the script to print the worktree path, and then run code ../consentio-wt-export-audit-pdf/ to open the new window. Six steps, most of them mechanical and therefore forgettable.
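The slugify step was the most error-prone part of doing this by hand. It reduces to a small pipeline — sketched here with a hard-coded title, since the real command fetches the title and labels from GitLab first:

```shell
# Hypothetical reconstruction of the manual steps the command automates.
issue=345
title="Export audit PDF"            # in reality fetched from the issue via glab
slug=$(printf '%s' "$title" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g')
branch="feature/${slug}-${issue}"   # prefix really depends on the issue labels
echo "$branch"                      # prints "feature/export-audit-pdf-345"
```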
Now it is:
/worktree 345
One integer. The command fetches the issue with glab, slugifies the title, picks the prefix from the labels, runs the script, parses the worktree path from the script's output, and opens a new VS Code window on the worktree directory. The output it hands back looks like this:
Session ready for issue #345!
Directory: ../consentio-wt-export-audit-pdf
App: http://localhost:902
A new VS Code window is open. Start Claude there and run:
/gl-issue 345
The last line is a deliberate pass-off to the next command in the pipeline. The destroy mode is the symmetrical opposite — /worktree destroy 345 finds the session by issue number, tears down the Docker stack, syncs develop, and deletes the branch — which means the whole lifecycle of a parallel session is now bookended by two commands that each take a single integer as their argument.
/handoff
This is the weird one. It is also the only command in this post that you do not use by typing a number into a fresh session — you use it by pausing the one you are already in. The friction it absorbs is a specific one: sometimes, mid-investigation, a small side-task turns up that does not really warrant a branch of its own. A one-line config change, a dependency bump, a typo I noticed in a file I was reading for a completely unrelated reason. Creating a new feature branch for a ten-line change feels heavy, and so does ignoring it and hoping I remember later. What I wanted was a way to hand the task to another Claude session — one that already has a live feature branch — and have that session fold the work into its own next commit.
The command produces a single fenced code block, nothing else, which I copy and paste into the other session's chat window. The block contains a self-contained implementation brief: investigation summary, file paths with line numbers, the exact edits, the verification steps. But the part that makes the whole thing work is the preamble of git rules that the brief opens with:
Git rules — READ CAREFULLY:
- Do NOT create a branch, switch branches, or run any git commands
- Do NOT commit this work separately — fold it into your next commit alongside your own work
- Do NOT stash, discard, or touch your existing uncommitted changes
- Do NOT question why this task is on your branch — it's intentional
That last rule is the one I had to add after the first time I tried this, because the receiving session, very reasonably, looked at the task it had just been handed, noticed that it had nothing to do with the branch it was on, and started asking me whether I was sure about this. I was. I am still. Every branch has a minimum cognitive cost — a new MR to review, a new CI run to wait for, a new context to hold open in my head — and some ten-line tasks are not worth paying that cost for. The command encodes that judgment so I do not have to re-argue it every time.
/ship-consentio push
This is the command that absorbs the git shipping workflow: commit, push, create the merge request, wait for the pipeline, merge, sync, clean up. It has four modes — push, develop, staging, and quick — and the most interesting of the four is quick, because of the guardrail built into it.
The problem the guardrail solves is that I trust myself to run quick for small, low-risk changes, and I do not trust myself to know the difference. A typo fix is obviously quick. A routing refactor is obviously not. Somewhere between the two there is a long tail of changes where my judgment is wrong often enough to matter. So the command does not trust me either. Before it proceeds with a quick flow, it checks the diff against two criteria. The first is a hard-coded list of high-risk file patterns:
Dockerfile, docker-compose.yml, docker/**, deploy/**
.gitlab-ci.yml, composer.lock, package-lock.json
routes/web.php, config/**, app/Http/Middleware/**
app/Providers/**, app/Services/**, database/migrations/**
If any file in the diff matches any of those patterns, the command stops and asks me to confirm that I really do want to take the quick route. The second criterion is a numeric threshold — four files and a hundred changed lines, calibrated against the project's eightieth-percentile commit size — with an exclusion list of always-low-risk paths like resources/views, tests, and docs that do not count toward the totals. Between the two criteria, the command catches essentially every change my past self would have regretted shipping on the quick route.
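A sketch of the guardrail under stated assumptions: the pattern lists mirror the ones above but are illustrative, the line-count half of the threshold would need `git diff --numstat` and is omitted, and only the pattern gate and the file-count gate are shown:

```shell
# Hypothetical guardrail: decide whether a diff qualifies for the quick route.
risky='^(Dockerfile|docker-compose\.yml|docker/|deploy/|\.gitlab-ci\.yml|composer\.lock|package-lock\.json|routes/web\.php|config/|app/Http/Middleware/|app/Providers/|app/Services/|database/migrations/)'
excluded='^(resources/views/|tests/|docs/)'

check_quick() {  # stdin: one changed file path per line
  files=$(cat)
  if printf '%s\n' "$files" | grep -Eq "$risky"; then
    echo "confirm: high-risk files in diff"
    return 1
  fi
  count=$(printf '%s\n' "$files" | grep -Evc "$excluded" || true)
  if [ "$count" -gt 4 ]; then
    echo "confirm: diff exceeds file threshold"
    return 1
  fi
  echo "quick: ok"
}

printf 'app/Models/Form.php\nresources/views/forms/show.blade.php\n' | check_quick
# prints "quick: ok"
```

In the real flow the file list would come from something like `git diff --name-only` against the target branch rather than a hard-coded `printf`.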
/update-docs
If /gl-issue is the load-bearing command of the catalog, /update-docs is the most quietly valuable one. I run it after every issue I close, and roughly half the time it does something I did not expect. Half of its job is the obvious part: read the documentation index, compare it against the diff, and update any docs that have drifted out of sync with the code. That is the UPDATE action in the command's plan output. The less obvious half is an action called CAPTURE, governed by a step inside the command file called Knowledge Audit.
The step asks a single question: if a fresh LLM session had to work on the same area of code tomorrow, what would it waste significant time re-discovering? Then it applies a filter — the knowledge has to have taken more than a few minutes to figure out, it has to not be obvious from reading the code casually, and it has to not already exist in any documentation the command has just read. Anything that survives the filter is written out as a short markdown note and saved into a codebase-notes/ subfolder, named after the area of code rather than after the issue — data-inventory-normalization.md, not issue-309-notes.md. The template for these notes is deliberately sparse:
# <Area of Code>
## What This Covers
<one-sentence summary>
## Key Findings
### <finding title>
<what it is, why it matters, where it lives>
## Files Involved
- `path/to/file.php` — role in this flow
Those notes are, in effect, a knowledge base I am building for my own future agents. Every time a fresh session starts work on an area I have been in before, it can read the note that the previous session left behind and skip the part where it re-discovers, for the third or fourth time, that the double-query pattern in FormsController is there because of a legacy French translation fallback. The command does not replace my human memory of the project — the project is too large for that to be a realistic goal — but it gives my agents a working memory that survives between sessions, which is a thing the raw tool does not offer.
Six commands, one pattern
None of these commands is individually clever. Each one started as a single frustration I had hit in a single session, and each one got written the way a sysadmin writes an alias — to stop having to type the same thing twice. What they have in common is not the specific instructions inside each file, which are mostly unremarkable, but the shape of the thing they are doing underneath. Each command absorbs one specific piece of cognitive toil that I used to carry in my head and am quietly relieved not to carry anymore. The CLI is not my tool. These files are. There are eight more in the directory I did not write about, and I would miss every one of them if they vanished tomorrow.
Related: How I Run Parallel AI Coding Sessions on the Same Laravel Project and The half I left out of my parallel worktrees post — the two earlier posts in this loose series on building tooling around Claude Code for a Laravel legacy project.