<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[ubeyd.dev]]></title><description><![CDATA[Writing about web dev, tools, and things I learn while building stuff.]]></description><link>https://ubeyd.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 29 Apr 2026 14:20:28 GMT</lastBuildDate><atom:link href="https://ubeyd.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Six Slash Commands I Built on Top of Claude Code]]></title><description><![CDATA[I have fourteen markdown files in ~/.claude/commands/. They look like this:
$ ls -1 ~/.claude/commands/
close-gl-issue.md
draft-post.md
gh-issue.md
gl-issue.md
handoff.md
improve-issue.md
prod-db-akluma.md
publish-post.md
ship-akluma.md
ship-consenti...]]></description><link>https://ubeyd.dev/six-claude-code-slash-commands</link><guid isPermaLink="true">https://ubeyd.dev/six-claude-code-slash-commands</guid><category><![CDATA[AI]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[Laravel]]></category><category><![CDATA[workflow]]></category><dc:creator><![CDATA[Ubeydullah Keleş]]></dc:creator><pubDate>Tue, 14 Apr 2026 16:07:25 GMT</pubDate><content:encoded><![CDATA[<p>I have fourteen markdown files in <code>~/.claude/commands/</code>. They look like this:</p>
<pre><code class="lang-bash">$ ls -1 ~/.claude/commands/
close-gl-issue.md
draft-post.md
gh-issue.md
gl-issue.md
handoff.md
improve-issue.md
prod-db-akluma.md
publish-post.md
ship-akluma.md
ship-consentio.md
staging-db-akluma.md
staging-db.md
update-docs.md
worktree.md
</code></pre>
<p>Each file is a Claude Code slash command — a markdown document whose contents are prepended to the agent's prompt when you type <code>/&lt;filename&gt;</code> into a session. They are, quite literally, just text. They do not feel like they should matter very much. They do, though, and they are the difference, for me, between a raw AI assistant that is vaguely useful on a good day and a set of sharp, purpose-built tools that absorb specific pieces of toil from a Laravel legacy codebase I did not originally write. The CLI is not really my tool. These files are.</p>
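<p>For a sense of scale, a command file can be only a few lines long. The example below is made up for illustration, not one of my fourteen; the filename becomes the command name, and Claude Code substitutes the <code>$ARGUMENTS</code> placeholder with whatever you type after the command:</p>
<pre><code class="lang-markdown">&lt;!-- ~/.claude/commands/explain.md, invoked as /explain &lt;topic&gt; --&gt;
Explain how $ARGUMENTS works in this codebase.
Read the relevant files before answering, and cite file paths
with line numbers so I can jump straight to them.
</code></pre>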
<p>This post is a tour of six of them, ordered loosely by the chronology of a working day. Some of the fourteen are variants for a different project, and a few are database tunnels that are useful without being interesting; I have left those out. The six I have picked are the ones whose job description surprised me the most — either when I wrote them, or later, when I caught myself realising how much of my day they had quietly eaten.</p>
<h2 id="heading-improve-issue"><code>/improve-issue &lt;url&gt;</code></h2>
<p>The first command in the catalog is also the most embarrassing. I create GitLab issues during meetings, and those meetings move fast enough that my issue descriptions are often one-third of a sentence and a typo. Sometimes the title is literally the last thing someone said out loud. I know that when I get back to my desk, I will not remember what it meant, and more importantly, the Claude Code session I eventually point at the issue will not either. So I built a command that rewrites my own bad issues into real ones before I start work on them.</p>
<p>It takes the full URL of an issue, sniffs the platform, fetches the issue and every comment through the appropriate CLI, and then picks between three triage paths depending on how clear the thing is. Before any of that, though, if the issue smells UI-shaped — a mention of a page, a form, a button, a layout — the command is hard-coded to stop and demand a screenshot before going any further. No exceptions. I have wasted too much time guessing which page was broken from a title like "fix asset register" to ever want the command to be polite about this again.</p>
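<p>The gate itself is just prose in the command file. I am paraphrasing rather than quoting, but the instruction amounts to something like this:</p>
<pre><code class="lang-text">If the issue touches anything visual (a page, a form, a button,
a layout) and no screenshot is attached: STOP. Ask the user for a
screenshot. Do not investigate or propose changes until one arrives.
</code></pre>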
<p>The output is not a comment on the issue. It is a full replacement of the title and description, posted via the API:</p>
<pre><code class="lang-bash">glab api <span class="hljs-string">"projects/d3team%2Fconsentio_docker_src/issues/&lt;number&gt;"</span> \
  --method PUT -f title=<span class="hljs-string">"<span class="hljs-variable">$TITLE</span>"</span> -f description=<span class="hljs-string">"<span class="hljs-variable">$BODY</span>"</span>
</code></pre>
<p>Comments annotate; a rewritten description replaces. I want the original carelessness gone, not appended to.</p>
<h2 id="heading-gl-issue"><code>/gl-issue &lt;number&gt;</code></h2>
<p>This is the load-bearing command of the whole directory. It is the one I run most days, and the one that most obviously rewards the effort of having written the command file in the first place. The friction it absorbs is a subtle one: starting work on an issue on a legacy codebase used to mean the agent would either drown in the project's several hundred pages of documentation, reading everything and running out of context before it even touched the code, or skip the documentation entirely and implement the issue blind. Neither was a good outcome. What I wanted was selective reading — load the specific documentation categories this particular issue actually needs, and nothing else.</p>
<p>The way the command does this is, I think, the single nerdiest thing in the directory. When the command fires, it spawns a separate Claude subagent running on Sonnet — cheaper and faster than the main Opus session — whose only job is to read <code>docs/doc-router.yaml</code>, read the issue, and return a JSON array of category IDs for the main agent to load. The instruction inside the subagent's prompt is explicit about the format:</p>
<pre><code class="lang-text">Return ONLY a JSON array of category IDs, e.g.: ["auth", "database", "tenant-isolation"]
</code></pre>
<p>The main agent then reads every file under each matched category plus an "always" set that loads on every issue, and only then starts investigating the codebase. The triage adds about thirty seconds to the start of a session and saves easily twenty minutes of the main agent chasing the wrong files. The command also has a hard-coded screenshot gate that fires before any code is read — if the issue involves anything visual, the agent is required to stop and ask for a screenshot before going any further — and a TDD Red-Green-Pause loop inside the implementation step, because I like my agents to write the failing test before they touch the production code. The doc triage is the part that surprised me; the rest is just the command enforcing habits I already had and kept forgetting to apply.</p>
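<p>The router file the subagent reads is nothing exotic. I will not reproduce mine, but the shape is a YAML map from category IDs to documentation globs, roughly like this (the categories, descriptions, and paths here are illustrative, not the real ones):</p>
<pre><code class="lang-yaml">always:                # loaded on every issue, no triage needed
  - docs/conventions.md
categories:
  auth:
    description: login, permissions, session handling
    docs: docs/auth/**
  tenant-isolation:
    description: anything touching per-organization data scoping
    docs: docs/tenancy/**
</code></pre>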
<h2 id="heading-worktree"><code>/worktree &lt;number&gt;</code></h2>
<p>Parallel worktrees have their own two-post story, which I will link at the end of this one, so I will skip over the infrastructure and talk only about the command I eventually wrapped around it. Before this command existed, spinning up a new parallel session looked like this: fetch the issue, read the title, derive a short kebab-case slug by hand, pick a prefix (<code>feature/</code> or <code>bugfix/</code> based on the labels), assemble the branch name, run <code>./scripts/parallel-session.sh create feature/export-audit-pdf-345 345</code>, wait for the script to print the worktree path, and then run <code>code ../consentio-wt-export-audit-pdf/</code> to open the new window. Six steps, most of them mechanical and therefore forgettable.</p>
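<p>The mechanical middle of that list is exactly the kind of logic worth automating. Here is a rough bash sketch of the slug-and-prefix derivation; the function is my reconstruction of the idea, not the command file's literal contents:</p>
<pre><code class="lang-bash">#!/usr/bin/env bash
# Derive a branch name from an issue number, title, and label list.
branch_for_issue() {
  local number="$1" title="$2" labels="$3"
  # Kebab-case slug: lowercase, runs of non-alphanumerics to hyphens.
  local slug
  slug=$(printf '%s' "$title" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g')
  # bugfix/ if any label mentions "bug", feature/ otherwise.
  local prefix="feature"
  case "$labels" in *bug*) prefix="bugfix" ;; esac
  printf '%s/%s-%s\n' "$prefix" "$slug" "$number"
}

branch_for_issue 345 "Export audit register as PDF" "enhancement"
</code></pre>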
<p>Now it is:</p>
<pre><code class="lang-bash">/worktree 345
</code></pre>
<p>One integer. The command fetches the issue with <code>glab</code>, slugifies the title, picks the prefix from the labels, runs the script, parses the worktree path from the script's output, and opens a new VS Code window on the worktree directory. The output it hands back looks like this:</p>
<pre><code class="lang-text">Session ready for issue #345!

  Directory:  ../consentio-wt-export-audit-pdf
  App:        http://localhost:902

A new VS Code window is open. Start Claude there and run:
  /gl-issue 345
</code></pre>
<p>The last line is a deliberate pass-off to the next command in the pipeline. The <code>destroy</code> mode is the symmetrical opposite — <code>/worktree destroy 345</code> finds the session by issue number, tears down the Docker stack, syncs <code>develop</code>, and deletes the branch — which means the whole lifecycle of a parallel session is now bookended by two commands that each take a single integer as their argument.</p>
<h2 id="heading-handoff"><code>/handoff</code></h2>
<p>This is the weird one. It is also the only command in this post that you do not use by typing a number into a fresh session — you use it by <em>pausing</em> the one you are already in. The friction it absorbs is a specific one: sometimes, mid-investigation, a small side-task turns up that does not really warrant a branch of its own. A one-line config change, a dependency bump, a typo I noticed in a file I was reading for a completely unrelated reason. Creating a new feature branch for a ten-line change feels heavy, and so does ignoring it and hoping I remember later. What I wanted was a way to hand the task to <em>another</em> Claude session — one that already has a live feature branch — and have that session fold the work into its own next commit.</p>
<p>The command produces a single fenced code block, nothing else, which I copy and paste into the other session's chat window. The block contains a self-contained implementation brief: investigation summary, file paths with line numbers, the exact edits, the verification steps. But the part that makes the whole thing work is the preamble of git rules that the brief opens with:</p>
<pre><code class="lang-text">Git rules — READ CAREFULLY:
- Do NOT create a branch, switch branches, or run any git commands
- Do NOT commit this work separately — fold it into your next commit alongside your own work
- Do NOT stash, discard, or touch your existing uncommitted changes
- Do NOT question why this task is on your branch — it's intentional
</code></pre>
<p>That last rule is the one I had to add after the first time I tried this, because the receiving session, very reasonably, looked at the task it had just been handed, noticed that it had nothing to do with the branch it was on, and started asking me whether I was sure about this. I was. I am still. Every branch has a minimum cognitive cost — a new MR to review, a new CI run to wait for, a new context to hold open in my head — and some ten-line tasks are not worth paying that cost for. The command encodes that judgment so I do not have to re-argue it every time.</p>
<h2 id="heading-ship-consentio-push"><code>/ship-consentio push</code></h2>
<p>This is the command that absorbs the git shipping workflow: commit, push, create the merge request, wait for the pipeline, merge, sync, clean up. It has four modes — <code>push</code>, <code>develop</code>, <code>staging</code>, and <code>quick</code> — and the most interesting of the four is <code>quick</code>, because of the guardrail built into it.</p>
<p>The problem the guardrail solves is that I trust myself to run <code>quick</code> for small, low-risk changes, and I do not trust myself to know the difference. A typo fix is obviously quick. A routing refactor is obviously not. Somewhere between the two there is a long tail of changes where my judgment is wrong often enough to matter. So the command does not trust me either. Before it proceeds with a <code>quick</code> flow, it checks the diff against two criteria. The first is a hard-coded list of high-risk file patterns:</p>
<pre><code class="lang-text">Dockerfile, docker-compose.yml, docker/**, deploy/**
.gitlab-ci.yml, composer.lock, package-lock.json
routes/web.php, config/**, app/Http/Middleware/**
app/Providers/**, app/Services/**, database/migrations/**
</code></pre>
<p>If any file in the diff matches any of those patterns, the command will not quietly proceed; it stops and asks me to confirm that I really do want to take the quick route. The second criterion is a numeric threshold — four files and a hundred changed lines, calculated from the project's eightieth-percentile commit size — with an exclusion list of always-low-risk paths like <code>resources/views</code>, <code>tests</code>, and <code>docs</code> that do not count toward the total. Between the two criteria, the command catches essentially every change that my past self would have regretted shipping on the quick route.</p>
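<p>The shape of that double check is simple enough to sketch. What follows is my reconstruction in bash rather than the command file itself, with the pattern list abbreviated:</p>
<pre><code class="lang-bash">#!/usr/bin/env bash
# Given a newline-separated list of changed files and a line count,
# decide whether the quick route needs a confirmation prompt.
quick_route_verdict() {
  local files="$1" changed_lines="$2"
  local high_risk='^(Dockerfile|docker/|deploy/|\.gitlab-ci\.yml|composer\.lock|routes/web\.php|config/|database/migrations/)'
  local excluded='^(resources/views/|tests/|docs/)'

  # Criterion 1: any high-risk file forces a confirmation.
  if printf '%s\n' "$files" | grep -Eq "$high_risk"; then
    echo "confirm: high-risk file in diff"
    return
  fi

  # Criterion 2: size threshold, ignoring always-low-risk paths.
  # (grep exits nonzero on zero matches; it still prints the count)
  local counted
  counted=$(printf '%s\n' "$files" | grep -Evc "$excluded" || true)
  if [ "$counted" -gt 4 ] || [ "$changed_lines" -gt 100 ]; then
    echo "confirm: diff too large for quick"
  else
    echo "ok"
  fi
}
</code></pre>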
<h2 id="heading-update-docs"><code>/update-docs</code></h2>
<p>If <code>/gl-issue</code> is the load-bearing command of the catalog, <code>/update-docs</code> is the most quietly valuable one. I run it after every issue I close, and roughly half the time it does something I did not expect. The obvious half of its job is the expected one: read the documentation index, compare it against the diff, and update any docs that have drifted out of sync with the code. That is the <code>UPDATE</code> action in the command's plan output. The less obvious half is an action called <code>CAPTURE</code>, which is governed by a step inside the command file called <strong>Knowledge Audit</strong>.</p>
<p>The step asks a single question: <em>if a fresh LLM session had to work on the same area of code tomorrow, what would it waste significant time re-discovering?</em> Then it applies a filter — the knowledge has to have taken more than a few minutes to figure out, it has to not be obvious from reading the code casually, and it has to not already exist in any documentation the command has just read. Anything that survives the filter is written out as a short markdown note and saved into a <code>codebase-notes/</code> subfolder, named after the area of code rather than after the issue — <code>data-inventory-normalization.md</code>, not <code>issue-309-notes.md</code>. The template for these notes is deliberately sparse:</p>
<pre><code class="lang-markdown"><span class="hljs-section"># <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">Area</span> <span class="hljs-attr">of</span> <span class="hljs-attr">Code</span>&gt;</span></span></span>

<span class="hljs-section">## What This Covers</span>
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">one-sentence</span> <span class="hljs-attr">summary</span>&gt;</span></span>

<span class="hljs-section">## Key Findings</span>
<span class="hljs-section">### <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">finding</span> <span class="hljs-attr">title</span>&gt;</span></span></span>
<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">what</span> <span class="hljs-attr">it</span> <span class="hljs-attr">is</span>, <span class="hljs-attr">why</span> <span class="hljs-attr">it</span> <span class="hljs-attr">matters</span>, <span class="hljs-attr">where</span> <span class="hljs-attr">it</span> <span class="hljs-attr">lives</span>&gt;</span></span>

<span class="hljs-section">## Files Involved</span>
<span class="hljs-bullet">-</span> <span class="hljs-code">`path/to/file.php`</span> — role in this flow
</code></pre>
<p>Those notes are, in effect, a knowledge base I am building for <em>my own future agents</em>. Every time a fresh session starts work on an area I have been in before, it can read the note that the previous session left behind and skip the part where it re-discovers, for the third or fourth time, that the double-query pattern in <code>FormsController</code> is there because of a legacy French translation fallback. The command does not replace my human memory of the project — the project is too large for that to be a realistic goal — but it gives my agents a working memory that survives between sessions, which is a thing the raw tool does not offer.</p>
<h2 id="heading-six-commands-one-pattern">Six commands, one pattern</h2>
<p>None of these commands is individually clever. Each one started as a single frustration I had hit in a single session, and each one got written the way a sysadmin writes an alias — to stop having to type the same thing twice. What they have in common is not the specific instructions inside each file, which are mostly unremarkable, but the shape of the thing they are doing underneath. Each command absorbs one specific piece of cognitive toil that I used to carry in my head and am quietly relieved not to carry anymore. The CLI is not my tool. These files are. There are eight more in the directory I did not write about, and I would miss every one of them if they vanished tomorrow.</p>
<hr />
<p><em>Related: <a target="_blank" href="https://ubeyd.dev/parallel-ai-sessions-docker-worktrees">How I Run Parallel AI Coding Sessions on the Same Laravel Project</a> and <a target="_blank" href="https://ubeyd.dev/laravel-parallel-worktrees-demo-seeder">The half I left out of my parallel worktrees post</a> — the two earlier posts in this loose series on building tooling around Claude Code for a Laravel legacy project.</em></p>
]]></content:encoded></item><item><title><![CDATA[The half I left out of my parallel worktrees post]]></title><description><![CDATA[A few weeks ago I wrote about how I run parallel AI coding sessions on the same Laravel project, each one isolated in its own git worktree with its own Docker stack, its own database, and its own localhost port. That post was about the infrastructure...]]></description><link>https://ubeyd.dev/laravel-parallel-worktrees-demo-seeder</link><guid isPermaLink="true">https://ubeyd.dev/laravel-parallel-worktrees-demo-seeder</guid><category><![CDATA[AI]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Laravel]]></category><category><![CDATA[Seeders]]></category><category><![CDATA[Testing]]></category><dc:creator><![CDATA[Ubeydullah Keleş]]></dc:creator><pubDate>Tue, 14 Apr 2026 14:24:11 GMT</pubDate><content:encoded><![CDATA[<p>A few weeks ago I wrote about <a target="_blank" href="https://ubeyd.dev/parallel-ai-sessions-docker-worktrees">how I run parallel AI coding sessions on the same Laravel project</a>, each one isolated in its own git worktree with its own Docker stack, its own database, and its own localhost port. That post was about the infrastructure — how to spin up three independent development environments from a single repository in about ninety seconds. What I didn't admit in that post, and what I think I owe the follow-up to, is that the infrastructure on its own was not quite enough. It solved a real problem, but it left a quieter one untouched.</p>
<h2 id="heading-the-empty-worktree">The empty worktree</h2>
<p>The quieter problem was that each freshly spawned worktree was a ghost town. Running <code>migrate:fresh --seed</code> would populate the database with countries, question templates, rating scales, and a handful of starter users, and that was all. Structurally, it was a complete application. Functionally, every page beyond the admin panel was an empty table. There were no filled-out forms, no assets, no incidents, no remediation plans, and no way to tell whether a visual change to the audit register was actually working without first clicking through ten minutes of form-filling to generate something for the register to display. That rather defeats the point of being able to spin up three sessions in ninety seconds.</p>
<p>What I really needed was a seeder that produced a <em>populated</em> application. Not just the reference data that every install needs — the lookup tables and the question templates — but the kind of transactional data that a real tenant would accumulate after a week of actual use. Finished assessments. Audits with ratings and the remediation plans those ratings imply. A handful of incidents. Assets with real map coordinates so the location map would actually render. The obvious move was to build Laravel model factories for every domain model I cared about and then compose them inside a single <code>DemoDataSeeder</code> that stitched them together into a believable scenario.</p>
<h2 id="heading-factories-and-a-seeder">Factories and a seeder</h2>
<p>That is what I set out to do, and it went more or less the way you would expect. Working with Claude, I built the factories in dependency order — starting with the simple lookup tables, then users and organizations, then the structure of the forms, then the form instances, then the assets, then the responses, and finally the remediation plans and incidents that hang off everything else. Each factory got a small smoke test that asserted it could create a single instance and persist it without anything blowing up. The repository's test count went from eighty-nine to one hundred and nine, and every page I cared about in the application started showing data again. The dashboards had numbers. The audit register had two completed audits. The asset register had twelve assets, four of them with map pins. The application looked like a functioning product for the first time in my worktrees.</p>
<p>The shape of the result I was aiming for was a single method chain that would do most of the setup work for me — something like this:</p>
<pre><code class="lang-php">User::factory()-&gt;organization()-&gt;create();
</code></pre>
<p>One line, and behind the scenes it would create a new organization account, attach the fifteen rows of reference data that every real organization in this application needs in order to function, and leave me with an object I could then hang assets, audits, and responses off of inside the demo seeder. Getting a single line to do that much useful work is really what all twenty-or-so factories were for.</p>
<p>One small design decision is worth mentioning, because it turned out to make a disproportionate difference to how pleasant the environment was to use. My original plan had been to create separate demo organizations — Acme Corp, Beta Industries — each with their own fictitious admin users, so that you could log in as <code>demo-orgadmin@example.com</code> and see their data. I prototyped this, and then I threw it away. Remembering a new set of credentials every time you spin up a worktree is friction, and friction compounds when you are spinning up worktrees all day long. The better approach was to hang the demo data off the Test Organization that already existed in the bootstrap seeders, so that the familiar <code>orgadmin@example.com</code> account — the one I use everywhere else — would quietly find itself in front of a fully populated application with no new accounts, no new passwords, and no mental overhead at all.</p>
<p>The final piece was the environment guard. I wanted the seeder to run on every environment that wasn't production — local development for me, but also our staging server, where the team needed a populated application for the same reasons I did. One line inside <code>DatabaseSeeder</code> was enough:</p>
<pre><code class="lang-php"><span class="hljs-keyword">if</span> (! app()-&gt;environment(<span class="hljs-string">'production'</span>)) {
    <span class="hljs-keyword">$this</span>-&gt;call(DemoDataSeeder::class);
}
</code></pre>
<p>A normal deployment runs <code>php artisan migrate</code>, which doesn't touch seeders at all, so shipping this code to staging was entirely safe. The demo data only appears when someone explicitly runs <code>migrate:fresh --seed</code>, which is exactly when — and only when — you want it to.</p>
<h2 id="heading-one-more-check-before-i-merged">One more check, before I merged</h2>
<p>At this point I thought I was done. The factories existed, the seeder ran, the application was populated, and the test suite was green. I was ready to open the merge request and move on. Before I did, though, I wanted to do one more check — the kind of check that doesn't feel necessary until you have been bitten by skipping it. I wanted to compare what my seeder was actually putting into the database against what a real user had been writing into our staging environment over the previous few weeks. Not the shape of the data, not the row counts, but the exact column values, row by row and field by field.</p>
<p>I am very glad I did, because my seeder was lying to me, and the lies were subtle enough that no automated test would ever have caught them.</p>
<h2 id="heading-a-column-called-useremail">A column called <code>user_email</code></h2>
<p>The first mismatch I found is still, several days later, my favourite. My seeder was populating a column called <code>user_email</code> on internal audit responses with, reasonably enough, the email address of the user who had submitted the response. The real application, it turned out, writes the literal string <code>'0'</code> into that column for internal users, and nothing in the schema, nothing in the column name, and nothing in any reasonable person's mental model would ever have hinted at this.</p>
<pre><code class="lang-text">What my seeder wrote:    user_email = 'orgadmin@example.com'
What production writes:  user_email = '0'
</code></pre>
<p>It was one of those small pieces of behaviour that only exists because a controller, at some point years ago, needed a sentinel value and nobody ever revisited the decision. My seeder was not wrong in a way that would break anything — the application was perfectly happy with <code>'orgadmin@example.com'</code> sitting in that column. It was wrong in a way that made the seeded data look slightly nicer than the real data, and that, I am now convinced, is the worst kind of wrong, because it makes local development quietly drift away from production reality without anybody noticing.</p>
<p>There were ten of these mismatches in total, and every single one of them traced back to the same underlying pattern. The controllers in this application write their data using raw SQL INSERT statements that only touch a specific subset of columns — nine, in some cases, out of a table with thirty — and every other column in the row is left sitting at whatever default the migration originally set, which was almost always NULL. A client ID my seeder was cheerfully populating. A progress percentage my seeder was calculating. A type field my seeder was deriving from the form. None of these fields were actually written by the application in production. All of them had, quietly and without any particular announcement, become columns that the schema promised but the code simply never touched.</p>
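<p>If you want to hunt for this pattern in your own data, one cheap heuristic is to export a table that real users have been writing to and look for columns whose value never varies across rows. A sketch with awk over a CSV export; the column names, and the idea of working from a CSV dump at all, are mine rather than anything from the project's tooling:</p>
<pre><code class="lang-bash">#!/usr/bin/env bash
# Flag CSV columns whose value is identical in every data row:
# prime suspects for columns the schema promises but no code writes.
constant_columns() {
  awk -F',' '
    NR == 1 { nf = NF; for (i = 1; i != nf + 1; i++) name[i] = $i; next }
    NR == 2 { for (i = 1; i != nf + 1; i++) first[i] = $i; next }
    { for (i = 1; i != nf + 1; i++) if ($i != first[i]) varies[i] = 1 }
    END {
      for (i = 1; i != nf + 1; i++)
        if (!varies[i]) print name[i] "=" first[i]
    }
  ' "$1"
}
</code></pre>
<p>A column that comes back as <code>user_email=0</code> or <code>client_id=</code> in a table with weeks of real traffic is telling you something the schema is not.</p>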
<p>Correcting the seeder, once I understood the pattern, was the easy part. I went back through the <code>DemoDataSeeder</code> and stopped setting any column that the application itself did not set, and the staging diff came back clean. The harder question — and I think the reason this whole detour is worth writing about at all — is how I would ever have caught any of this without the staging comparison. I do not think I would have. Every automated test I could have written would have been reading the same schema I was writing to, which means the tests would have agreed with the seeder that everything was in order. The only authoritative source of truth was the production code path itself — the actual controller writing the actual INSERT statement, and the actual staging database that had been touched by that code path for weeks on end. Anything less than that would have missed it.</p>
<h2 id="heading-what-i-actually-learned">What I actually learned</h2>
<p>This, I think, is the lesson that matters most, and it is specifically a lesson about working with an AI collaborator on a legacy codebase. Claude was a very capable partner throughout this work. It built the factories in the right order. It wrote smoke tests without being asked. It raised sensible questions about ownership columns and prerequisite chains and the difference between assessment responses and audit responses, and it caught several mistakes I would otherwise have made on my own. What it could not do — and I do not think any AI collaborator could reasonably have done — was know that <code>user_email = '0'</code> was the right value. That knowledge does not live in the schema, it does not live in the column names, it does not live in the tests, and it does not live in any piece of documentation I had put into the conversation. It lives only in the line of controller code that writes the INSERT, and in the staging database that reflects the consequences of that line running against real users over time. If I had not gone looking for it, neither of us would have found it.</p>
<p>So the moral of this follow-up, for me, is a slightly adjusted version of the one I ended the first post with. That first post was about teaching the AI about the environment — making sure it understood that a worktree was not an ordinary branch, and making sure it would not run cleanup commands that would kill the session. This one is about something adjacent, and a little more uncomfortable. It is about accepting that on a legacy codebase, an AI collaborator can only ever be as correct as the ground truth you show it.</p>
<blockquote>
<p>The schema is not ground truth. The code that actually runs in production is.</p>
</blockquote>
<p>If you are going to let the AI write your seed data, you have to be willing to compare what it wrote against a database that was shaped by that production code — and you have to be willing to do the comparison yourself, because nothing else will do it for you.</p>
<h2 id="heading-where-things-stand-now">Where things stand now</h2>
<p>The parallel worktrees, for what it is worth, are now as genuinely useful as I had hoped they would be when I wrote the first post. I can spin up three sessions at once, and every single one of them boots into an application that looks and behaves like a real product. The infrastructure, it turns out, was the easy half of the problem. The data — and the small, uncomfortable process of making sure the data wasn't quietly lying to me — was the other half, the half I didn't write about the first time round. This is me writing about it now.</p>
<hr />
<p><em>Related: <a target="_blank" href="https://ubeyd.dev/parallel-ai-sessions-docker-worktrees">How I Run Parallel AI Coding Sessions on the Same Laravel Project</a> — the first post in this pair, where the whole parallel-worktree setup got built in the first place.</em></p>
]]></content:encoded></item><item><title><![CDATA[How I Run Parallel AI Coding Sessions on the Same Laravel Project]]></title><description><![CDATA[I work solo on a Laravel monolith. My AI pair programmer (Claude Code) is fast, thorough, and relentless — but it can only work on one branch at a time. While it's deep in a feature implementation, I'm just... sitting there. Waiting. Reading the diff...]]></description><link>https://ubeyd.dev/parallel-ai-sessions-docker-worktrees</link><guid isPermaLink="true">https://ubeyd.dev/parallel-ai-sessions-docker-worktrees</guid><category><![CDATA[AI]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[developer experience]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Laravel]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Ubeydullah Keleş]]></dc:creator><pubDate>Fri, 03 Apr 2026 11:39:45 GMT</pubDate><content:encoded><![CDATA[<p>I work solo on a Laravel monolith. My AI pair programmer (Claude Code) is fast, thorough, and relentless — but it can only work on one branch at a time. While it's deep in a feature implementation, I'm just... sitting there. Waiting. Reading the diff. Running the verification steps.</p>
<p>I wanted to launch a second session on a different issue. And a third. Each one visible in the browser. Each one isolated. Each one disposable.</p>
<p>The problem? Docker Compose is bound to one directory. One project. One set of ports. You can't just <code>docker-compose up</code> twice and hope for the best.</p>
<p>Here's how I solved it.</p>
<p><strong>My setup:</strong> macOS, VS Code with the Claude Code extension, Docker Desktop, and a Laravel 12 app running in Docker Compose. Everything in this post assumes a similar stack, but the core ideas — parameterized ports, isolated databases, worktree-aware tooling — apply to any Dockerized web application.</p>
<h2 id="heading-the-problem-in-one-sentence">The problem in one sentence</h2>
<p>Docker Compose ties your <code>localhost:8000</code> (or whatever your app port is) to a single project directory. If you want to work on two branches simultaneously and <strong>see both in the browser</strong>, you need two separate Docker stacks on different ports — and they can't collide.</p>
<h2 id="heading-why-not-just-use-git-stash">Why not just use git stash?</h2>
<p>Because stashing is sequential. I don't want to:</p>
<ol>
<li>Stash my work</li>
<li>Switch branches</li>
<li>Spin up the other feature</li>
<li>Verify it</li>
<li>Switch back</li>
<li>Pop the stash</li>
<li>Remember where I was</li>
</ol>
<p>I want to open two VS Code windows, each on a different branch, each with its own <code>localhost</code> URL. Work on both. Alt-tab between them. No context-switching tax.</p>
<h2 id="heading-the-architecture-worktrees-parameterized-docker">The architecture: worktrees + parameterized Docker</h2>
<p>The solution has two parts:</p>
<p><strong>1. Git worktrees</strong> give you multiple checked-out copies of the same repo. They share the git object store (so they're lightweight — not full clones), but each has its own branch, working directory, and files.</p>
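<p>If worktrees are new to you, the mechanics are worth seeing in isolation. Here's a throwaway sketch in a scratch repo (all paths and branch names hypothetical — not my actual project):</p>

```shell
# A throwaway demo: one repo, two checkouts, shared object store.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/my-project"
cd "$tmp/my-project"
git checkout -q -b develop
git -c user.email=demo@example.com -c user.name=Demo \
  commit -q --allow-empty -m "initial commit"

# New directory + new branch in one command; git objects stay shared.
git worktree add ../my-project-wt-export-button -b feature/export-button develop

git worktree list   # shows both checkouts and their branches
```

Note the worktree's <code>.git</code> is a small file pointing back at the main repo, not a full clone — that's why creating one is nearly instant.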
<pre><code>~<span class="hljs-regexp">/dev/</span>
├── my-project/                    # Main repo (localhost:<span class="hljs-number">8000</span>)
├── my-project-wt-<span class="hljs-keyword">export</span>-button/   # Worktree <span class="hljs-number">1</span> (localhost:<span class="hljs-number">8001</span>)
└── my-project-wt-fix-sidebar/     # Worktree <span class="hljs-number">2</span> (localhost:<span class="hljs-number">8002</span>)
</code></pre><p><strong>2. Parameterized Docker Compose ports</strong> let each directory run its own stack without port collisions.</p>
<h3 id="heading-step-1-make-your-ports-configurable">Step 1: Make your ports configurable</h3>
<p>Replace hardcoded ports in <code>docker-compose.yml</code> with environment variables:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">services:</span>
  <span class="hljs-attr">app:</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"${APP_PORT:-8000}:80"</span>
  <span class="hljs-attr">mysql:</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"${DB_PORT:-3306}:3306"</span>
  <span class="hljs-attr">mailpit:</span>
    <span class="hljs-attr">ports:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"${MAIL_HTTP_PORT:-8025}:8025"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"${MAIL_SMTP_PORT:-1025}:1025"</span>
</code></pre>
<p>The <code>:-8000</code> syntax means "use 8000 if the variable isn't set." Your main repo works exactly as before — zero behavior change. But now each worktree can set its own ports in a local <code>.env</code>.</p>
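<p>The <code>${VAR:-default}</code> form is plain POSIX parameter expansion — the same rule Compose applies — so you can sanity-check the fallback behavior in any shell:</p>

```shell
# ${VAR:-default} falls back when the variable is unset OR empty.
unset APP_PORT
echo "port: ${APP_PORT:-8000}"   # -> port: 8000

APP_PORT=8001                    # what a worktree's .env would set
echo "port: ${APP_PORT:-8000}"   # -> port: 8001
```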
<h3 id="heading-step-2-isolate-session-cookies">Step 2: Isolate session cookies</h3>
<p>This one bit me during testing. I logged into <code>localhost:8000</code>, then <code>localhost:8001</code>, then went back to <code>:8000</code> — and I was logged out.</p>
<p>Why? Browsers share cookies across ports on the same domain. Both apps set a cookie called <code>laravel_session</code> for <code>localhost</code>. Logging into one overwrites the other.</p>
<p>The fix: each worktree gets a unique session cookie name.</p>
<pre><code class="lang-env"># In the worktree's .env:
SESSION_COOKIE=my_app_session_8001
</code></pre>
<p>Laravel reads <code>SESSION_COOKIE</code> from the environment. One line, problem solved.</p>
<h3 id="heading-step-3-automate-everything-with-a-script">Step 3: Automate everything with a script</h3>
<p>I wrote a ~200-line bash script that handles the entire lifecycle:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Create a parallel session</span>
./scripts/parallel-session.sh create feature/export-button
<span class="hljs-comment"># Output:</span>
<span class="hljs-comment">#   Session ready!</span>
<span class="hljs-comment">#   App: http://localhost:8001</span>
<span class="hljs-comment">#   MySQL: localhost:3307</span>
<span class="hljs-comment">#   Mailpit: http://localhost:8026</span>

<span class="hljs-comment"># List active sessions</span>
./scripts/parallel-session.sh list
<span class="hljs-comment"># Output:</span>
<span class="hljs-comment">#   my-project-wt-export-button  http://localhost:8001  [feature/export-button]  (running)</span>

<span class="hljs-comment"># Destroy when done</span>
./scripts/parallel-session.sh destroy feature/export-button
</code></pre>
<p>The <code>create</code> command:</p>
<ol>
<li>Runs <code>git worktree add</code> to create the worktree (from the <code>develop</code> branch)</li>
<li>Copies <code>.env</code> from the main repo</li>
<li>Overwrites port variables with auto-calculated unique values</li>
<li>Sets a unique <code>SESSION_COOKIE</code></li>
<li>Runs <code>docker-compose up -d</code></li>
</ol>
<p>Port assignment is simple: session N gets base port + N. If you destroy session 1 and create a new one, it reuses the gap.</p>
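<p>That arithmetic is small enough to sketch inline (base ports match my setup; the variable names are illustrative, not the script's actual code):</p>

```shell
# Sketch: session N gets every base port shifted by N.
SESSION_N=1                       # the first worktree

APP_PORT=$((8000 + SESSION_N))
DB_PORT=$((3306 + SESSION_N))
MAIL_HTTP_PORT=$((8025 + SESSION_N))
MAIL_SMTP_PORT=$((1025 + SESSION_N))

echo "APP_PORT=$APP_PORT DB_PORT=$DB_PORT"   # -> APP_PORT=8001 DB_PORT=3307
```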
<pre><code>| Session     | App    | MySQL  | Mailpit |
|-------------|--------|--------|---------|
| Main repo   | :<span class="hljs-number">8000</span>  | :<span class="hljs-number">3306</span>  | :<span class="hljs-number">8025</span>   |
| Worktree <span class="hljs-number">1</span>  | :<span class="hljs-number">8001</span>  | :<span class="hljs-number">3307</span>  | :<span class="hljs-number">8026</span>   |
| Worktree <span class="hljs-number">2</span>  | :<span class="hljs-number">8002</span>  | :<span class="hljs-number">3308</span>  | :<span class="hljs-number">8027</span>   |
</code></pre><h3 id="heading-step-4-seed-independently">Step 4: Seed independently</h3>
<p>Each worktree gets its own MySQL volume. The database starts empty. After creating a session:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> ~/dev/my-project-wt-export-button
docker-compose <span class="hljs-built_in">exec</span> app php artisan migrate:fresh --seed
</code></pre>
<p><strong>Timing matters here.</strong> If your Dockerfile uses a named volume for <code>vendor/</code>, it starts empty on first boot. Your entrypoint likely runs <code>composer install</code> to populate it. You need to wait for that to finish before running artisan commands — otherwise you'll get "class not found" errors because the packages aren't installed yet. My script waits for Apache to respond (which means the entrypoint completed) before attempting to seed.</p>
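<p>The wait step boils down to a polling loop. A sketch of the idea (the function name and URL are mine, not the real script's):</p>

```shell
# Poll the app until it answers; a response from Apache means the
# entrypoint (including its composer install) has finished.
wait_for_app() {
  url="$1"
  tries="${2:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f: treat HTTP errors as failure; -sS: silent, but keep real errors.
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage: only seed once the app is actually up.
# wait_for_app "http://localhost:8001" &&
#   docker-compose exec app php artisan migrate:fresh --seed
```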
<p>This is actually a <strong>feature, not a bug</strong>. Each session has a clean database, seeded from scratch. No stale data from another branch's migrations. No "works on my machine because I manually ran that one migration."</p>
<h2 id="heading-the-daily-workflow">The daily workflow</h2>
<p><strong>Starting a parallel session:</strong></p>
<ol>
<li>Run the create script (30 seconds — Docker images are cached)</li>
<li>Run <code>migrate:fresh --seed</code> (10 seconds)</li>
<li>Open the worktree directory in VS Code</li>
<li>Start working</li>
</ol>
<p><strong>While working:</strong></p>
<ul>
<li>Each VS Code window is a completely independent project</li>
<li>Each has its own <code>localhost</code> URL for browser testing</li>
<li>Git works normally — <code>commit</code>, <code>push</code>, create PRs/MRs</li>
<li>Tests target the correct containers automatically (Docker Compose resolves from the current directory)</li>
</ul>
<p><strong>Cleanup after merging:</strong></p>
<p>Once you've merged the MR in GitLab, the cleanup flow is straightforward:</p>
<ol>
<li>Close the worktree's VS Code window — there's nothing left to do there.</li>
<li>Switch to the main repo's VS Code window and run the destroy command:</li>
</ol>
<pre><code class="lang-bash">./scripts/parallel-session.sh destroy feature/export-button
</code></pre>
<p>This stops the Docker containers, removes the database volume, deletes the worktree directory, and leaves you back on <code>develop</code>. From there you can sync your local branch with a quick <code>git pull origin develop</code> and delete the feature branch with <code>git branch -d feature/export-button</code>. The entire teardown takes about five seconds.</p>
<h2 id="heading-but-what-about-database-conflicts">"But what about database conflicts?"</h2>
<p>This was my biggest concern too. If session 1 adds column <code>pdf_path</code> and session 2 adds column <code>export_format</code> to the same table, what happens?</p>
<p>Nothing dramatic. Each session creates a migration file with a unique timestamp. When both branches merge:</p>
<ul>
<li>Both migration files coexist in <code>database/migrations/</code></li>
<li><code>php artisan migrate</code> runs them in timestamp order</li>
<li>Both columns get added</li>
</ul>
<p>The worktree databases are <strong>disposable scratch pads</strong>. You never merge database <em>data</em> — you merge migration <em>files</em>. The reconciliation happens in git, the same way it always does.</p>
<p>The only real conflict scenario: two branches modify the <em>exact same column</em>. That shows up as a git merge conflict in the PR, same as if you'd built the features sequentially.</p>
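<p>The timestamp-order claim is easy to convince yourself of: migration filenames begin with a datetime prefix, so lexicographic order <em>is</em> chronological order. A sketch with made-up filenames:</p>

```shell
# Two hypothetical migrations from parallel branches; after both merge,
# `php artisan migrate` runs them in filename (i.e. timestamp) order.
printf '%s\n' \
  '2026_04_03_113000_add_pdf_path_to_exports_table.php' \
  '2026_04_03_094500_add_export_format_to_exports_table.php' \
  | sort
# -> the 09:45 migration sorts (and runs) before the 11:30 one
```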
<h2 id="heading-resource-cost">Resource cost</h2>
<p>Each parallel session runs its own MySQL, Redis, and app container. On my M-series Mac:</p>
<ul>
<li>~250MB RAM per session</li>
<li>~500MB disk per MySQL volume</li>
<li>Docker image layers are shared (cached), so no extra disk for the app image</li>
</ul>
<p>I comfortably run 2-3 sessions alongside my main repo. Beyond that, my laptop would probably start complaining.</p>
<h2 id="heading-what-id-do-differently">What I'd do differently</h2>
<p><strong>Cookie isolation should be the default.</strong> I wasted 20 minutes debugging mysterious logouts before realizing cookies were the issue. If you're building this, add <code>SESSION_COOKIE</code> to your <code>.env.example</code> from day one.</p>
<p><strong>Named volumes are your friend.</strong> Docker Compose auto-prefixes volume names with the project name (derived from the directory name). Each worktree directory has a unique name, so volumes are automatically isolated. I didn't have to configure anything — it just worked.</p>
<p><strong>Don't share the database.</strong> I briefly considered having all worktrees connect to the same MySQL instance (different databases). Bad idea. Different branches have different migrations. Isolation is the whole point.</p>
<p><strong>Don't race your own entrypoint.</strong> My first attempt added a <code>composer install</code> step to the setup script — but the container's entrypoint was already running <code>composer install</code> to populate the empty vendor volume. Two concurrent composer installs on the same directory corrupted zip downloads and crashed the container. The fix was to let the entrypoint do its job and have the script wait for Apache to respond before proceeding.</p>
<p><strong>Check your API key restrictions.</strong> If you use third-party APIs with domain or URL restrictions (Google Maps, Stripe, OAuth callbacks), they're probably locked to <code>localhost:8000</code>. Your worktree on <code>localhost:8001</code> will get silent failures — a blank map, a rejected OAuth redirect, a 403 from a payment form. I got a console error on Google Maps because my API key's "Website restrictions" only listed <code>:8000</code>. The fix was trivial (add <code>:8001/*</code> and <code>:8002/*</code> to the allowed list), but the debugging wasn't obvious. Add your worktree ports to every restricted API key upfront.</p>
<p><strong>Audit every <code>git add -A</code> in your tooling.</strong> If you symlink shared directories into worktrees (documentation, assets, anything gitignored), a blanket <code>git add -A</code> can pick them up in unexpected ways. I now require explicit file-by-file staging in worktree contexts. It's a minor inconvenience that eliminates a whole category of "why did that get committed?" moments.</p>
<h2 id="heading-teach-your-ai-agent-about-worktrees">Teach your AI agent about worktrees</h2>
<p>This is the part nobody warns you about. Your Docker setup can be flawless, your ports perfectly isolated, your databases independently seeded — and your AI coding assistant will still trip over itself if it doesn't know it's running inside a worktree.</p>
<p>The first time I shipped code from a worktree session, the agent finished by cheerfully suggesting: "When you've merged, run <code>/ship-consentio develop</code> to sync up." That command switches to the <code>develop</code> branch — which is already checked out in the main repo. Git refuses to check out the same branch in two worktrees. The command fails immediately.</p>
<p>The fix was to teach the agent to detect its environment. If the working directory name contains <code>consentio-wt-</code>, the agent knows it's in a worktree session and adjusts its behavior accordingly. Instead of suggesting branch switches and local cleanup, it tells you to close the VS Code window and run the destroy command from the main repo. Same outcome, correct path to get there.</p>
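<p>The check itself is a one-liner on the directory name. A sketch (the <code>-wt-</code> convention comes from my naming scheme; the function and messages are illustrative):</p>

```shell
# Return success when the current directory looks like a worktree session.
in_worktree_session() {
  case "$(basename "$PWD")" in
    *-wt-*) return 0 ;;   # e.g. consentio-wt-export-button
    *)      return 1 ;;
  esac
}

if in_worktree_session; then
  echo "Worktree session: close this window; destroy from the main repo."
else
  echo "Main repo: normal branch switching and cleanup apply."
fi
```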
<p>The second issue was subtler. I symlink the project's <code>docs/</code> directory into each worktree so the agent can reference documentation during implementation. The directory is gitignored, so it should be invisible to version control. But <code>git add -A</code> — the lazy, catch-all staging command that every AI agent defaults to — doesn't always play nicely with symlinks in worktree contexts. The solution was to forbid <code>git add -A</code> entirely when working in a worktree and require the agent to stage files by name. It's a small constraint that prevents a class of confusing errors.</p>
<p>If you're building a similar setup with an AI coding tool, the lesson is this: the infrastructure is the easy part. The hard part is making sure your agent's playbook — its prompt templates, command files, or system instructions — includes conditional logic for the worktree context. Anywhere your prompts say "switch to develop," "sync the branch," or "stage all changes," you need a worktree-aware variant. Otherwise your carefully isolated sessions will work perfectly right up until the moment the AI tries to clean up after itself.</p>
<h2 id="heading-the-result">The result</h2>
<p>I went from working on one issue at a time to running 2-3 parallel AI sessions. While Claude is writing tests for feature A, I'm reviewing the implementation plan for feature B. While waiting for feature B's CI pipeline, I'm verifying feature C in the browser.</p>
<p>My throughput roughly doubled. Not because the AI got faster, but because <em>I</em> stopped being the bottleneck.</p>
<p>The entire setup is:</p>
<ul>
<li>One <code>docker-compose.yml</code> change (4 hardcoded ports to 4 env variables)</li>
<li>One <code>.env.example</code> update (document the new variables)</li>
<li>One bash script (~200 lines)</li>
<li>One line in <code>.gitignore</code> (<code>docker-compose.override.yml</code>)</li>
</ul>
<p>No external tools. No paid services. No complex orchestration. Just git, Docker, and a shell script.</p>
<hr />
<p><em>Follow-up: <a target="_blank" href="https://ubeyd.dev/laravel-parallel-worktrees-demo-seeder">The half I left out of my parallel worktrees post</a> — how I populated each worktree with realistic data, and the staging comparison that caught my demo seeder quietly lying to me.</em></p>
<p><em>The code snippets in this post are simplified from a real production setup. Your ports, service names, and seeder commands will differ. The principles are universal for any Dockerized web app.</em></p>
]]></content:encoded></item></channel></rss>