Code Merge Is Not Deployment
PR merges don't deploy themselves. How code-to-runtime gaps create silent outages across distributed infrastructure.
The PR was merged on March 21. CI passed. All 414 tests were green. The commit showed up in git log on the remote. By every visible signal, the feature was live.
Eight days later, mind_search returned HTTP 404.
The Akashic Records container on the Mac Mini had never been rebuilt.
The False Assumption
When you merge a PR, you have updated a text file on a server. That is the complete scope of what a merge does. The code does not teleport into a running process. Docker containers do not reach out and pull new images. Python services do not restart themselves. Launchd plists do not detect that a repo has new commits.
The gap between "merged" and "running" is not a workflow edge case. It is the default state of any infrastructure that does not have automated continuous deployment.
In cloud-native environments — AWS ECS with rolling deployments, Kubernetes with ArgoCD or Flux, Vercel, Railway — this gap is closed automatically. The platform detects the merge and triggers a redeploy. Developers in those environments rarely encounter this failure mode because the tooling removes the manual step.
In homelab and multi-machine setups, that automation does not exist unless you build it. Every deploy is a manual act. And manual acts get forgotten.
The Incident: Akashic Records, March 2026
PR #19 added the /api/query/discover endpoint to Akashic Records. This was a material feature — mind_search depended on it. The PR was merged to main, CI passed, and the work was considered complete.
What happened next: nothing. The Docker container on the Mac Mini (knox-mini) continued running the pre-PR image. The image had been built weeks earlier. The new endpoint did not exist in the running container.
Every call to mind_search that routed through /api/query/discover returned 404. For eight days, any agent session that called mind_search got a silent failure. The feature existed in git but not in production.
The container health check reported Up (healthy) the entire time. More on that in Lesson 233.
Why This Happens More Than You Think
The merge-is-deployment assumption is a cognitive shortcut that develops naturally. You write code. You push it. You open a PR. You merge it. The next time you think about the feature, it feels like it should be available. The mental model collapses the deployment step because in many modern workflows, it is invisible.
In a distributed homelab setup, the machine running the service is not the machine where development happens. The Mac Mini runs Docker containers. The MacBook Pro is where code is written. After merging a PR, there is a required SSH step — connect to the Mac Mini, pull the new image, rebuild the container, restart the service. This step is separate from the merge. It requires intention. It requires remembering.
When you are finishing a feature at 11pm and the PR merges cleanly, the rebuild step is easy to defer. "I'll do it in the morning." Morning comes with other priorities. The service keeps running. The feature keeps failing. No alarm fires.
The Deployment Gap Taxonomy
Not all gaps are equal. Understanding the type of gap helps you close the right one.
Code gap: The merge happened but the machine never pulled the new code. Affects non-containerized services running directly from a git repo.
Image gap: The code was pulled but the Docker image was never rebuilt. The container runs the old compiled artifact. git log shows new commits; docker image inspect shows an old build timestamp.
Container gap: The image was rebuilt but the container was not restarted. The running process is the old version. docker ps shows the container up; docker inspect shows the new image; the process inside is the old one.
Config gap: The code deployed correctly but a required environment variable, feature flag, or config file was not updated on the target machine. The new code path exists but cannot execute.
Each gap type requires a different fix. For the first three, the general sequence is: git pull → docker compose build → docker compose up -d. All three steps, every time. A config gap needs its own check: confirm the target machine's env files and flags match what the new code expects.
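The taxonomy above can be turned into a diagnostic. The sketch below classifies which gap a service has, under two assumptions: images are built with a label recording the commit they came from (the label name `org.example.git-sha` is hypothetical), and the function runs inside the service's repo on the target machine.

```shell
# Sketch: classify which deployment gap a service has.
# Assumes images carry a hypothetical "org.example.git-sha" label set at
# build time, and that this runs in the service's repo on the target machine.
detect_gap() {
  service="$1"      # compose service / image name, e.g. akashic
  container="$2"    # running container name, e.g. akashic-records

  repo_sha=$(git rev-parse --short HEAD)
  remote_sha=$(git rev-parse --short origin/main)
  [ "$repo_sha" = "$remote_sha" ] || { echo "code gap: git pull needed"; return; }

  image_sha=$(docker image inspect --format \
    '{{ index .Config.Labels "org.example.git-sha" }}' "$service:latest")
  [ "$image_sha" = "$repo_sha" ] || { echo "image gap: docker compose build needed"; return; }

  running=$(docker inspect --format '{{ .Image }}' "$container")
  tagged=$(docker image inspect --format '{{ .Id }}' "$service:latest")
  [ "$running" = "$tagged" ] || { echo "container gap: docker compose up -d needed"; return; }

  echo "no gap detected (config gaps need a separate check)"
}
```

Usage: `detect_gap akashic akashic-records`. Each check corresponds to one gap in the taxonomy, in the order a deploy would close them.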
The Manual Deployment Checklist
For any service without automated CD:
```shell
# 1. SSH to target machine
ssh tesseract   # or knox-mini, depending on service

# 2. Confirm you are on main and up to date
cd ~/Documents/Dev/<project>
git fetch origin
git status

# 3. Pull latest
git pull origin main

# 4. Rebuild the image (do not skip)
docker compose build <service>

# 5. Restart with new image
docker compose up -d <service>

# 6. Verify the new code is running
# (-E for extended regex: BSD grep on macOS does not support \| in basic regex)
docker inspect <container> | grep -iE "created|image"
curl -s http://localhost:<port>/api/health | jq .version
```
Step 6 is the one most commonly skipped. If the health endpoint embeds the git SHA or version string, you can verify in one command that the running container is the expected version.
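That one-command check can be scripted. The sketch below assumes the health endpoint returns JSON like `{"version": "a1b2c3d"}` containing the short git SHA; the port and payload shape are assumptions, not part of the original setup.

```shell
# Sketch: verify the running service matches the repo checkout.
versions_match() { [ "$1" = "$2" ]; }

check_deploy() {
  # Assumes /api/health returns JSON like {"version": "a1b2c3d"} (hypothetical)
  deployed=$(curl -s "http://localhost:8080/api/health" | jq -r .version)
  expected=$(git rev-parse --short HEAD)
  if versions_match "$deployed" "$expected"; then
    echo "OK: running $deployed"
  else
    echo "STALE: running $deployed, expected $expected" >&2
    return 1
  fi
}
```

Run `check_deploy` on the target machine right after step 5; a non-zero exit means the container is still serving old code.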
Closing the Gap with Automation
The permanent fix is automated continuous deployment. For homelab setups, this does not require enterprise tooling.
Watchtower polls Docker Hub for new images and restarts containers automatically. Useful for images built in CI and pushed to a registry.
GitHub Actions with SSH can run the docker compose build && docker compose up -d sequence over SSH as a step in the CI pipeline. The merge triggers the rebuild on the target machine.
Webhook receivers on the target machine can listen for GitHub push events and trigger local rebuilds. A simple Flask endpoint receiving a webhook is enough.
Deploy scripts as first-class artifacts — not shell history, not a Notion page, but a committed scripts/deploy.sh that encodes every step. When the script is the deploy record, the steps are never forgotten.
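A committed deploy script might look like the sketch below. The host, repo path, service name, and health-check port are assumptions drawn from this setup, not a definitive implementation; the point is that every step, including verification, lives in version control.

```shell
# scripts/deploy.sh -- the deploy record as a committed artifact (sketch;
# host, path, port, and service names are assumptions for illustration).
deploy() {
  host="$1"     # e.g. knox-mini
  service="$2"  # e.g. the project directory and compose service name

  ssh "$host" sh -s <<EOF
set -eu
cd ~/Documents/Dev/$service
git pull origin main
docker compose build $service
docker compose up -d $service
# fail loudly if the service does not answer after restart
curl -fsS http://localhost:8080/api/health >/dev/null
EOF
  echo "deployed $service to $host"
}

# Usage: deploy knox-mini akashic
```

Because the heredoc is unquoted, `$service` expands on the development machine before the commands run over SSH, so one script covers every service.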
The Verification Step
After any deployment — automated or manual — add a verification step before closing the task:
- Check the health endpoint returns the expected version or git SHA
- Hit at least one critical API endpoint and confirm a valid response (not a 404 or 500)
- Check the container logs for startup errors
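The three checks above can be collapsed into one function. This is a sketch; the default port, the `/api/query/discover` smoke query, and the container name are assumptions for illustration.

```shell
# Post-deploy verification sketch (port, query, and container name assumed).
verify_deploy() {
  port="${1:-8080}"
  container="${2:-akashic-records}"

  # 1. Health endpoint reports a version (jq -e fails on null/missing)
  curl -fsS "http://localhost:$port/api/health" | jq -e .version >/dev/null

  # 2. Critical endpoint answers; curl -f turns 404/500 into a failure
  curl -fsS "http://localhost:$port/api/query/discover?q=smoke" >/dev/null

  # 3. No startup errors in the logs since the restart
  if docker logs --since 5m "$container" 2>&1 | grep -qiE "error|traceback"; then
    echo "startup errors found in $container logs" >&2
    return 1
  fi
  echo "verification passed"
}
```

Check 2 is the one that would have caught the Akashic incident: `curl -f` exits non-zero on a 404, so the stale container fails the verification immediately.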
The Akashic incident would have been caught in under 60 seconds if this verification step had been run after the PR merge. One curl to /api/query/discover would have returned 404, revealing that the container had not been rebuilt.
Key Takeaways
- Merging a PR updates a file on GitHub. It does not update the running process on any target machine.
- In environments without automated CD, every deployment is a manual act that must be consciously performed after every merge.
- The Akashic Records mind_search feature was broken for 8 days because the Docker container was never rebuilt after PR #19 merged.
- Container health checks do not detect code gaps — a container can be "healthy" while running stale code.
- The full deployment sequence is: pull → build → restart → verify. Stopping before verify leaves the gap potentially open.
What's Next
A healthy container does not mean a correct container. In Lesson 233, we dissect why the Akashic health check returned 200 the entire time the /api/query/discover endpoint was missing — and how to build health checks that actually reflect API correctness.