The Missing Piece of the Claude Code Workflow: Isolated Worktree Databases
Git worktrees + Claude Code need isolated databases. Here's how to spin up separate Supabase stacks per branch with automatic port allocation.
Introduction
Recently, Boris Cherny (creator of Claude Code) shared a tweet about how best to use Claude Code that went viral: https://x.com/bcherny/status/2017742741636321619
The first point he makes is about using Git Worktrees for parallel development. These allow you (and here, your Claude Code AI agents) to keep multiple features checked out simultaneously, giving your AI tools clean, isolated context.
But when we actually tried this, we noticed an issue. Sure, the code is isolated, but the local database usually isn't.
The Friction
When multiple worktrees hit the same local database:
- Migrations from feature-A break feature-B. You're mid-debug on an auth refactor when suddenly your tables don't exist because another branch ran a down migration.
- Test data bleeds across contexts. The seed data you carefully crafted for testing one feature gets polluted by another.
- Debugging becomes impossible because you can't rely on the state of your data. Is this bug from my code, or from the other branch's half-applied migration?
This friction defeats the entire purpose of worktrees. You're context-switching cleanly in code, but your data layer is still a shared, mutable mess.
The Solution: Ephemeral Supabase Stacks
To make the "Boris Workflow" viable for full-stack dev, we automated the creation of isolated database stacks for each worktree.
Below are the scripts we use to perform this exact sequence. You can drop them into any project that can run TypeScript files (or have an LLM translate them into whatever language you prefer). Here's what they do:
1. Port Calculation
It hashes the current branch name to generate a deterministic, unique port offset. The same branch always gets the same ports, making it predictable across restarts.
function getPreferredPortOffset(branch: string): number {
if (branch === "main" || branch === "master") {
return 0;
}
const hash = crypto.createHash("md5").update(branch).digest("hex");
const num = parseInt(hash.substring(0, 8), 16);
return ((num % 20) + 1) * 100;
}
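As a quick sanity check, the hashing scheme can be exercised standalone (the function is copied from above; the branch name is illustrative):

```typescript
import * as crypto from "node:crypto";

// Same scheme as above: md5 the branch name, take the first 8 hex
// chars as an integer, and map it into one of 20 slots of 100 ports.
function getPreferredPortOffset(branch: string): number {
  if (branch === "main" || branch === "master") return 0;
  const hash = crypto.createHash("md5").update(branch).digest("hex");
  const num = parseInt(hash.substring(0, 8), 16);
  return ((num % 20) + 1) * 100;
}

const offset = getPreferredPortOffset("feature/auth-refactor");
// Deterministic: the same branch always yields the same offset.
console.log(offset === getPreferredPortOffset("feature/auth-refactor")); // true
// Always a multiple of 100 in the 100-2000 range (never the default ports).
console.log(offset % 100 === 0 && offset >= 100 && offset <= 2000); // true
console.log(getPreferredPortOffset("main")); // 0: main keeps the default ports
```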
2. Config Generation
It writes a config.toml for Supabase using these unique ports. Each worktree gets its own project ID, ensuring Docker containers and volumes are completely isolated.
| Worktree | API Port | DB Port | Studio Port |
|---|---|---|---|
| main | 54321 | 54322 | 54323 |
| feature-auth | 54421 | 54422 | 54423 |
| feature-api | 54521 | 54522 | 54523 |
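We don't reproduce our full template here, but a minimal sketch of supabase/config.toml.template might look like the following (section and key names follow Supabase's config.toml layout; check your CLI version's config reference for the exact keys):

```toml
# Sketch of supabase/config.toml.template (illustrative, not the full file).
# The init script substitutes each {{...}} placeholder with computed values.
project_id = "{{PROJECT_ID}}"

[api]
port = {{API_PORT}}

[db]
port = {{DB_PORT}}
shadow_port = {{SHADOW_PORT}}

[db.pooler]
port = {{POOLER_PORT}}

[studio]
port = {{STUDIO_PORT}}

[inbucket]
port = {{INBUCKET_PORT}}

[analytics]
port = {{ANALYTICS_PORT}}

[edge_runtime]
inspector_port = {{INSPECTOR_PORT}}
```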
3. Stack Spin-up
It launches a full Supabase instance (DB, Auth, Storage, Realtime) on those ports. Each stack is completely independent.
4. Environment Sync
It automatically updates the .env file in that worktree with the new connection strings. No manual copy-paste. No forgetting to update the port.
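For example, for the feature-auth worktree from the table above (offset 100), the script writes entries like these to .env.development.local (the key names are the ones the script sets; the actual API key values are parsed from `npx supabase status`):

```ini
DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:54422/postgres
SUPABASE_URL=http://127.0.0.1:54421
SUPABASE_PUBLISHABLE_KEY=...
SUPABASE_SECRET_KEY=...
```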
True Isolation
The result is a collision-free environment. Your main branch runs on the standard ports (54321), while feature-auth-refactor runs silently on 54421.
The script even scans for conflicts. If your preferred ports happen to collide with another running stack (or Spotify's P2P range - yes, that was a fun debug session), it automatically finds the next available slot:
🔍 Checking port availability...
⚠️ Preferred ports (offset 1400) are in use, scanning for available ports...
✅ Found available ports at offset 100
Context switching becomes instant. You simply cd to your directory, and your entire infrastructure - code and data - is ready.
The Workflow
Setting up a new worktree now looks like this:
# Create the worktree in a new branch
git worktree add ../feature-awesome -b feature/awesome
# Or in a pre-existing branch
git worktree add ../feature-awesome feature/awesome
cd ../feature-awesome
# Install dependencies
npm install
# Spin up isolated Supabase stack
# NOTE: the .env.development.local file must exist in the referenced folder, otherwise this won't work.
npm run supabase:worktree:init -- --env-from=../main-worktree
# Start coding
npm run dev
That's it. You now have a completely isolated full-stack environment. Migrations, test data, auth state: all scoped to this branch.
When you're done with the feature:
npm run supabase:worktree:delete
This tears down the Supabase stack and removes the worktree in one command.
Resource Considerations
Each Supabase stack uses roughly 500MB-1GB of RAM. Running 2-3 stacks simultaneously is fine on modern machines. For worktrees you're not actively using, npx supabase stop will pause the containers while preserving your data.
The Full Scripts
For those who want to implement this in their own projects, here's the complete initialization script:
supabase-worktree-init.ts
/**
* Supabase Worktree Initialization
*
* Sets up an isolated Supabase stack for the current worktree with unique ports.
* Each worktree gets its own Supabase containers (DB, Auth, Storage, etc.)
*
* Usage:
* npm run supabase:worktree:init # Fresh setup
* npm run supabase:worktree:init -- --env-from=../main # Copy API keys from another worktree
*/
import { execSync } from "child_process";
import * as fs from "fs";
import * as path from "path";
import * as crypto from "crypto";
import * as net from "net";
// Base ports (Supabase defaults)
const BASE_PORTS = {
api: 54321,
db: 54322,
shadow: 54320,
pooler: 54329,
studio: 54323,
inbucket: 54324,
analytics: 54327,
inspector: 8083,
};
/**
* Get the current git branch name.
*/
function getCurrentBranch(): string {
try {
return execSync("git rev-parse --abbrev-ref HEAD", {
encoding: "utf-8",
}).trim();
} catch {
throw new Error("Failed to get current git branch. Are you in a git repository?");
}
}
/**
* Sanitize branch name for use as project ID.
*/
function sanitizeBranchName(branch: string): string {
return branch
.replace(/[^a-zA-Z0-9]/g, "-")
.replace(/-+/g, "-")
.replace(/^-|-$/g, "")
.toLowerCase()
.substring(0, 50);
}
/**
* Check if a port is currently in use.
*/
function isPortInUse(port: number): Promise<boolean> {
return new Promise((resolve) => {
const server = net.createServer();
server.once("error", (err: NodeJS.ErrnoException) => {
if (err.code === "EADDRINUSE") {
resolve(true);
} else {
resolve(false);
}
});
server.once("listening", () => {
server.close();
resolve(false);
});
server.listen(port, "127.0.0.1");
});
}
/**
* Check if the key ports for an offset are available.
*/
async function arePortsAvailable(offset: number): Promise<boolean> {
const portsToCheck = [
BASE_PORTS.api + offset,
BASE_PORTS.db + offset,
BASE_PORTS.studio + offset,
];
for (const port of portsToCheck) {
if (await isPortInUse(port)) {
return false;
}
}
return true;
}
/**
* Calculate a deterministic port offset from branch name.
* Returns a multiple of 100 between 100 and 2000 to avoid port conflicts.
* (Keeping ports below 57000 to avoid Spotify's P2P port range)
*/
function getPreferredPortOffset(branch: string): number {
// Special case: main/master use offset 0 (default ports)
if (branch === "main" || branch === "master") {
return 0;
}
// Hash the branch name and convert to offset
const hash = crypto.createHash("md5").update(branch).digest("hex");
const num = parseInt(hash.substring(0, 8), 16);
// Use offset 100-2000 (20 slots, keeps ports below ~56400 to avoid Spotify)
return ((num % 20) + 1) * 100;
}
/**
* Find an available port offset, starting with the preferred one.
* Falls back to incrementing by 100 if ports are in use.
*/
async function findAvailablePortOffset(branch: string): Promise<number> {
const preferred = getPreferredPortOffset(branch);
// main/master always use default ports (offset 0)
if (preferred === 0) {
return 0;
}
// Try preferred offset first
if (await arePortsAvailable(preferred)) {
return preferred;
}
console.log(`  ⚠️ Preferred ports (offset ${preferred}) are in use, scanning for available ports...`);
// Scan for available offset (100-2000 range)
for (let offset = 100; offset <= 2000; offset += 100) {
if (offset === preferred) continue; // Already checked
if (await arePortsAvailable(offset)) {
console.log(`  ✅ Found available ports at offset ${offset}`);
return offset;
}
}
throw new Error("No available port range found (all offsets 100-2000 are in use). Stop some Supabase stacks with 'npx supabase stop'.");
}
/**
* Generate config.toml from template with calculated ports.
*/
function generateConfig(branch: string, offset: number): void {
const templatePath = path.join(process.cwd(), "supabase", "config.toml.template");
const configPath = path.join(process.cwd(), "supabase", "config.toml");
if (!fs.existsSync(templatePath)) {
throw new Error(`Template not found: ${templatePath}`);
}
const projectId = `minerva-${sanitizeBranchName(branch)}`;
console.log(`\n📝 Generating Supabase config...`);
console.log(` Project ID: ${projectId}`);
console.log(` Port offset: ${offset}`);
let template = fs.readFileSync(templatePath, "utf-8");
// Replace placeholders
template = template.replace(/\{\{PROJECT_ID\}\}/g, projectId);
template = template.replace(/\{\{API_PORT\}\}/g, String(BASE_PORTS.api + offset));
template = template.replace(/\{\{DB_PORT\}\}/g, String(BASE_PORTS.db + offset));
template = template.replace(/\{\{SHADOW_PORT\}\}/g, String(BASE_PORTS.shadow + offset));
template = template.replace(/\{\{POOLER_PORT\}\}/g, String(BASE_PORTS.pooler + offset));
template = template.replace(/\{\{STUDIO_PORT\}\}/g, String(BASE_PORTS.studio + offset));
template = template.replace(/\{\{INBUCKET_PORT\}\}/g, String(BASE_PORTS.inbucket + offset));
template = template.replace(/\{\{ANALYTICS_PORT\}\}/g, String(BASE_PORTS.analytics + offset));
template = template.replace(/\{\{INSPECTOR_PORT\}\}/g, String(BASE_PORTS.inspector + offset));
fs.writeFileSync(configPath, template);
console.log(`  ✅ Generated supabase/config.toml`);
// Print port summary
console.log(`\n Ports:`);
console.log(` API: ${BASE_PORTS.api + offset}`);
console.log(` DB: ${BASE_PORTS.db + offset}`);
console.log(` Studio: ${BASE_PORTS.studio + offset}`);
}
/**
* Start Supabase and return the status output.
*/
function startSupabase(): string {
console.log(`\n🚀 Starting Supabase...`);
console.log(` This may take a few minutes on first run.\n`);
try {
execSync("npx supabase start", {
stdio: "inherit",
cwd: process.cwd(),
});
} catch (error) {
console.error(" Error details:", error);
throw new Error("Failed to start Supabase. Check Docker is running and supabase CLI is available.");
}
// Get status to extract keys
try {
const status = execSync("npx supabase status", {
encoding: "utf-8",
cwd: process.cwd(),
});
return status;
} catch (error) {
console.error(" Error getting status:", error);
throw new Error("Failed to get Supabase status.");
}
}
/**
* Parse Supabase status output to extract URLs and keys.
* Handles both the old format (key: value) and the new table format (│ key │ value │).
*/
function parseSupabaseStatus(status: string): {
apiUrl: string;
dbUrl: string;
studioUrl: string;
anonKey: string;
serviceKey: string;
} {
const extract = (patterns: RegExp[]): string => {
for (const pattern of patterns) {
const match = status.match(pattern);
if (match) return match[1].trim();
}
return "";
};
return {
apiUrl: extract([
/│\s*Project URL\s*│\s*(\S+)\s*│/,
/API URL:\s+(\S+)/,
]),
dbUrl: extract([
/│\s*URL\s*│\s*(postgresql:\S+)\s*│/,
/DB URL:\s+(\S+)/,
]),
studioUrl: extract([
/│\s*Studio\s*│\s*(\S+)\s*│/,
/Studio URL:\s+(\S+)/,
]),
anonKey: extract([
/│\s*Publishable\s*│\s*(\S+)\s*│/,
/anon key:\s+(\S+)/,
]),
serviceKey: extract([
/│\s*Secret\s*│\s*(\S+)\s*│/,
/service_role key:\s+(\S+)/,
]),
};
}
/**
* Update .env.development.local with Supabase URLs and keys.
*/
function updateEnvFile(config: {
apiUrl: string;
dbUrl: string;
anonKey: string;
serviceKey: string;
}): void {
const envPath = path.join(process.cwd(), ".env.development.local");
let content = "";
if (fs.existsSync(envPath)) {
content = fs.readFileSync(envPath, "utf-8");
}
// Helper to update or add a variable
const setVar = (name: string, value: string) => {
const regex = new RegExp(`^${name}=.*$`, "m");
if (regex.test(content)) {
content = content.replace(regex, `${name}=${value}`);
} else {
content = `${name}=${value}\n${content}`;
}
};
setVar("DATABASE_URL", config.dbUrl);
setVar("SUPABASE_URL", config.apiUrl);
setVar("SUPABASE_PUBLISHABLE_KEY", config.anonKey);
setVar("SUPABASE_SECRET_KEY", config.serviceKey);
fs.writeFileSync(envPath, content);
console.log(`\n✅ Updated .env.development.local`);
}
/**
* Copy non-Supabase env vars from another worktree.
*/
function copyEnvFrom(sourceFolder: string): void {
const sourcePath = path.join(sourceFolder, ".env.development.local");
const targetPath = path.join(process.cwd(), ".env.development.local");
if (!fs.existsSync(sourcePath)) {
console.warn(`⚠️ No .env.development.local found at ${sourcePath}`);
return;
}
// Read source and filter out Supabase-specific vars (we'll regenerate those)
const sourceContent = fs.readFileSync(sourcePath, "utf-8");
const filteredContent = sourceContent
.split("\n")
.filter((line) => {
const varName = line.split("=")[0];
return ![
"DATABASE_URL",
"BASE_DATABASE_URL",
"SUPABASE_URL",
"SUPABASE_PUBLISHABLE_KEY",
"SUPABASE_SECRET_KEY",
].includes(varName);
})
.join("\n");
fs.writeFileSync(targetPath, filteredContent);
console.log(`✅ Copied env vars from ${sourceFolder} (excluding Supabase vars)`);
}
/**
* Run database migrations.
*/
function runMigrations(): void {
console.log(`\n🔄 Running migrations...`);
try {
execSync("npm run db:migrate", {
stdio: "inherit",
cwd: process.cwd(),
});
console.log(`✅ Migrations completed`);
} catch (error) {
console.error(`❌ Migration failed:`, error);
throw new Error("Migration failed");
}
}
/**
* Parse command line arguments.
*/
function parseArgs(): { envFrom?: string } {
const args = process.argv.slice(2);
const options: { envFrom?: string } = {};
for (const arg of args) {
if (arg.startsWith("--env-from=")) {
options.envFrom = arg.split("=")[1];
}
}
return options;
}
/**
* Main entry point.
*/
async function main(): Promise<void> {
const options = parseArgs();
const branch = getCurrentBranch();
console.log("🏗️ Supabase Worktree Initialization\n");
console.log(` Branch: ${branch}`);
// Copy env from source if specified
if (options.envFrom) {
console.log(`\n📋 Copying environment file...`);
copyEnvFrom(options.envFrom);
}
// Find available port offset
console.log(`\n🔍 Checking port availability...`);
const offset = await findAvailablePortOffset(branch);
// Generate config.toml with unique ports
generateConfig(branch, offset);
// Start Supabase
const status = startSupabase();
// Parse status and update env file
const supabaseConfig = parseSupabaseStatus(status);
if (!supabaseConfig.apiUrl || !supabaseConfig.anonKey) {
console.error("❌ Failed to parse Supabase status. Check supabase start output.");
process.exit(1);
}
updateEnvFile(supabaseConfig);
// Run migrations
runMigrations();
// Print summary
console.log(`
✅ Supabase worktree initialization complete!
Studio: http://127.0.0.1:${BASE_PORTS.studio + offset}
API: http://127.0.0.1:${BASE_PORTS.api + offset}
DB: postgresql://postgres:postgres@127.0.0.1:${BASE_PORTS.db + offset}/postgres
Run 'npm run dev' to start development.
Run 'npx supabase stop' to stop this Supabase stack.
`);
}
main().catch((error) => {
console.error("\n❌ Initialization failed:", error.message);
process.exit(1);
});
And here's the script to clean things up once you're done with your worktree:
supabase-worktree-delete.ts
/**
* Supabase Worktree Delete
*
* Deletes the Supabase stack and removes the git worktree.
* Must be run from within the worktree you want to delete.
*
* Usage:
* npm run supabase:worktree:delete
*/
import { execSync } from "child_process";
import * as path from "path";
import * as readline from "readline";
function getCurrentBranch(): string {
try {
return execSync("git rev-parse --abbrev-ref HEAD", {
encoding: "utf-8",
}).trim();
} catch {
throw new Error("Failed to get current git branch.");
}
}
function getWorktreePath(): string {
return process.cwd();
}
function isMainWorktree(): boolean {
try {
// Check if this is the main worktree (not a linked worktree)
const gitDir = execSync("git rev-parse --git-dir", { encoding: "utf-8" }).trim();
// Main worktree has .git as a directory, linked worktrees have .git as a file
return gitDir === ".git";
} catch {
return true; // Assume main if we can't determine
}
}
async function confirm(message: string): Promise<boolean> {
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
return new Promise((resolve) => {
rl.question(`${message} (y/N): `, (answer) => {
rl.close();
resolve(answer.toLowerCase() === "y");
});
});
}
async function main(): Promise<void> {
const branch = getCurrentBranch();
const worktreePath = getWorktreePath();
const worktreeName = path.basename(worktreePath);
console.log("🗑️ Supabase Worktree Delete\n");
console.log(` Branch: ${branch}`);
console.log(` Path: ${worktreePath}`);
// Safety check: don't delete main worktree
if (isMainWorktree()) {
console.error("\n❌ Cannot delete the main worktree.");
console.error(" This command is for removing linked worktrees only.");
process.exit(1);
}
// Safety check: don't delete main/master branch
if (branch === "main" || branch === "master") {
console.error("\n❌ Cannot delete main/master branch worktree.");
process.exit(1);
}
const confirmed = await confirm(
`\n⚠️ This will permanently delete:\n` +
` - All Supabase data (database, auth, storage)\n` +
` - The worktree at ${worktreePath}\n\n` +
` Continue?`
);
if (!confirmed) {
console.log("\n Cancelled.");
process.exit(0);
}
// Step 1: Stop Supabase and delete data
console.log("\n📦 Stopping Supabase and deleting data...");
try {
execSync("npx supabase stop --no-backup", {
stdio: "inherit",
cwd: worktreePath,
});
console.log(" ✅ Supabase data deleted");
} catch (error) {
console.warn(" ⚠️ Failed to stop Supabase (may already be stopped):", error);
}
// Step 2: Remove the worktree
console.log("\n🧹 Removing git worktree...");
try {
execSync(`git worktree remove "${worktreePath}" --force`, {
stdio: "inherit",
});
console.log(" ✅ Worktree removed");
console.log(`\n✅ Deleted worktree: ${worktreeName}`);
} catch (error) {
console.error(" ❌ Failed to remove worktree automatically");
console.log(`\n💡 To remove manually, run:
cd .. && git worktree remove ${worktreeName}
`);
}
}
main().catch((error) => {
console.error("\n❌ Delete failed:", error.message);
process.exit(1);
});
Key Implementation Details
A few things we learned building this:
- Use `project_id` in config.toml: this is what Supabase uses to namespace Docker containers and volumes. Different project IDs mean completely isolated stacks.
- Keep ports below 57000: Spotify's desktop app uses ports in the 57000+ range for P2P. We hit cryptic "EOF" errors during health checks until we figured this out.
- Scan for conflicts: hash collisions happen, port conflicts happen, so the script checks that ports are actually available before committing to them.
- Parse the new Supabase CLI output: the CLI recently changed from `key: value` format to a table with box-drawing characters. The script handles both.
Conclusion
The "Boris Workflow" of using Git worktrees with Claude Code is powerful, but it's only half the story for full-stack development. True parallel development requires parallel infrastructure.
With isolated Supabase stacks per worktree, you get:
- Clean context for your AI tools
- No migration conflicts
- No data pollution
- Instant context switching
The code is straightforward to adapt to your own setup. The key insight is that your database layer needs the same isolation strategy as your code.
Built while developing Minerva β an AI platform for scientific research.