Giving My AI a Persistent Memory That Syncs Across Three Machines
My Claude Code on the Mac remembered things my Claude Code on the VPS didn't. My personal AI assistant — a separate agent called Tiger — had its own brain entirely. And none of it was in version control.
I realized this when I finally opened the Jira ticket I'd filed a month ago: "Build unified AI Vault. Knowledge scattered across three places. No git backup for any of it." A day later, everything talked to one notebook.
The mess I didn't know I had
I work with Claude Code in three places.
Locally on my Mac — where most of my daily work happens. The memory lives at `~/.claude/projects/-Users-carlfung/memory/`.
On a Hetzner VPS — a small server I SSH into when I'm away from the Mac, or pipe Telegram messages to so I can talk to Claude Code from bed. Its memory lives at `/root/.claude/projects/-root/memory/`.
With Tiger — a separate always-on assistant running in Docker on the same VPS, with its own workspace at `~/.openclaw/workspace/` and its own SOUL.md, USER.md, journal entries, everything.
Three independent brains. No sync. I'd been adding memories to whichever instance happened to be in front of me.
The audit was scarier than the architecture. When I listed the VPS's memory files, seven of them weren't anywhere else — travel notes for an upcoming trip, a card-spending reminder for a sign-on bonus, a property-tax schedule, a personal profile, credits that expire soon. All written from VPS sessions. If the Hetzner box had died between when I filed the ticket and when I finally did the work, that's what I would have lost.
One notebook, three readers
The fix was conceptually simple: pick one copy to be canonical, put it in git, give everyone else a read-only link to it.
- Mac = writer. Only the Mac writes into the Vault.
- GitHub = canonical. A private repo called `ai-vault`. Every commit on the Mac gets pushed here.
- VPS = reader. A read-only clone that auto-pulls every five minutes.
The "Mac writes, VPS reads" rule solved a second problem I hadn't planned for: merge conflicts. With one writer, there's no one to fight with.
The Vault isn't a separate folder
The Vault isn't in a new directory. The Vault is the Mac's memory folder — I just ran `git init` inside it and added a `.gitignore`. Claude Code keeps reading from the same path it always did. It doesn't know it's now backed by git.
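The one-time setup is a handful of commands. A sketch, not my exact history: the ignore patterns are examples, and the demo defaults to a scratch directory so it can run anywhere (point `VAULT` at the real memory path instead).

```shell
# One-time setup sketch: git-init the existing memory folder in place.
# VAULT defaults to a scratch dir here; substitute your real memory path.
VAULT="${VAULT:-$(mktemp -d)/memory}"
mkdir -p "$VAULT" && cd "$VAULT"
git init -q
# Keep noisy app state out of history (example patterns, not a canon).
printf '%s\n' '.DS_Store' '.obsidian/workspace.json' > .gitignore
git add -A
# The -c identity flags are only there so the snippet runs standalone.
git -c user.name=vault -c user.email=vault@local commit -q -m "init: vault baseline"
```

Nothing about the folder's location or contents changes; git just starts watching from inside.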
On top of that, three triggers cause the Mac to push:
- Every 30 minutes when awake, via a macOS LaunchAgent.
- When I end a Claude Code session, via a Stop hook in `~/.claude/settings.json`.
- When the Mac wakes from more than 30 minutes of sleep — same LaunchAgent.
And one trigger for the VPS to pull:
- Every 5 minutes, a cron job runs `git pull`.
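The cron side is one line. A sketch, assuming the VPS clone path from above; `--ff-only` keeps the reader honest, since a read-only clone should never have anything to merge:

```
# crontab entry on the VPS (clone path assumed from above)
*/5 * * * * cd /root/.claude/projects/-root/memory && git pull --ff-only --quiet
```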
Typical latency: under a minute. If I write a memory during a session and the session ends, the Stop hook pushes immediately, and the VPS picks it up within five minutes.
The push script itself is short:
```shell
# vault-push.sh — commits any uncommitted Vault changes and pushes.
# Always reconciles with remote — catches the "committed locally but push failed"
# case after the laptop woke back up.
cd "$VAULT"
if [ -n "$(git status --porcelain)" ]; then
  git add -A
  git commit -m "auto: vault sync ($TIMESTAMP)"
fi
UNPUSHED=$(git rev-list --count @{u}..HEAD)
if [ "$UNPUSHED" -gt 0 ]; then
  git push --quiet
fi
```
The UNPUSHED check is the one that saved me. I'll get to why in a moment.
The VPS's access, without a GitHub account
I don't want my VPS to have a GitHub account. I also don't want to paste a personal access token into a server config.
The right primitive here is a deploy key — a per-repo SSH key, scoped to exactly one repo, marked read-only.
```shell
# On the VPS: generate a per-repo SSH key
ssh-keygen -t ed25519 -f ~/.ssh/ai_vault_deploy -N ""

# On the Mac: register the public key with the repo (read-only)
gh repo deploy-key add ~/.ssh/ai_vault_deploy.pub \
  --repo <me>/ai-vault --title "vps-readonly"

# On the VPS: add an SSH alias so git uses the deploy key for this repo,
# then clone via the alias
git clone git@github-aivault:<me>/ai-vault.git
```
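The alias lives in `~/.ssh/config` on the VPS. A sketch — `github-aivault` is just the name the clone URL above assumes:

```
# ~/.ssh/config on the VPS
Host github-aivault
    HostName github.com
    User git
    IdentityFile ~/.ssh/ai_vault_deploy
    IdentitiesOnly yes
```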
If the VPS ever gets compromised, the attacker gets read-only access to one repo. Not my whole GitHub account.
Then my laptop went to sleep mid-push
This is where it got interesting.
A day after shipping the whole setup, I noticed the VPS was reporting its last sync was from 43 hours ago. That shouldn't happen — my LaunchAgent fires every 30 minutes.
I SSH'd in from another Claude Code session and asked it to compare memory timestamps. It reported back:
`vault-push.log` shows a commit from Apr 21 03:30 that's still local. Push never completed. And the sync log on the Mac has a 12-hour gap afterward.
The story: the 03:30 cron had committed a change (Obsidian had edited .obsidian/graph.json in the background when I viewed the graph view). Then it tried to push — and within seconds, my Mac went to sleep. Laptop lid closed, work stopped. macOS doesn't wake cron jobs back up to finish what they started.
The commit was on the local branch. The remote was unaware. And nothing in my pipeline was going to retry, because vault-push.sh only pushed right after a fresh commit. The next morning, git status showed no changes to commit. Nothing ran.
So the bad state was stable: every later run of `vault-push.sh` saw a clean `git status`, concluded there were no new commits, and never retried the push. Three fixes, each addressing a distinct failure mode.
1. Make `vault-push.sh` idempotent. Every run, check for unpushed commits regardless of whether this run just made one. That's the `UNPUSHED` check in the snippet above — five lines, killed the whole class of failure.
2. Replace cron with a LaunchAgent. macOS's cron doesn't know about sleep. launchd does: a LaunchAgent with `StartInterval` (every N seconds) plus `RunAtLoad` set to `true` fires on login, and runs once on wake when an interval elapsed during sleep.
```xml
<key>StartInterval</key>
<integer>1800</integer> <!-- every 30 min -->
<key>RunAtLoad</key>
<true/>
```
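For completeness, that fragment sits inside a plist shaped roughly like this — the label and script path here are assumptions, not my exact file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>local.vault-push</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/sh</string>
    <string>-c</string>
    <string>~/.claude/vault-push.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>1800</integer>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

It loads with `launchctl load ~/Library/LaunchAgents/local.vault-push.plist` and survives reboots, which cron entries also do — the win is purely the sleep/wake behavior.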
3. Push on session end. Add a Stop hook to ~/.claude/settings.json that runs vault-push.sh in the background the moment a Claude Code session ends. This catches the "I was just working, closed the session, then closed the lid" case without waiting 30 minutes.
```json
{
  "hooks": {
    "Stop": [{
      "matcher": "",
      "hooks": [
        { "type": "command", "command": "~/.claude/vault-push.sh &" }
      ]
    }]
  }
}
```
Belt and suspenders. One of these catches every case I've seen.
The full circuit, finally
Here's what happens now when I write a memory on the Mac:
- Worst case, ~35 minutes to reach the VPS (30-min LaunchAgent tick + 5-min VPS pull).
- Typical case, under a minute, because the Stop hook pushes as soon as I end the session.
- If I edit while asleep (or offline), the next wake triggers a push and the VPS catches up within 5 minutes.
And the same rail carries ~/CLAUDE.md, my global project-instructions file. Instead of scp-ing it to the VPS on a separate 30-minute cron, I added a one-line snapshot step to vault-push.sh that refreshes a copy inside the Vault, and symlinked /root/CLAUDE.md on the VPS to point at the Vault's copy. One pipeline, two files.
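Both halves of that are tiny. A sketch — the `global/` subfolder is an assumed layout, and the demo uses scratch paths so it runs anywhere:

```shell
# Mac side, inside vault-push.sh: refresh the Vault's snapshot of ~/CLAUDE.md.
# Demo defaults to scratch paths; substitute the real Vault and ~/CLAUDE.md.
VAULT="${VAULT:-$(mktemp -d)}"
SRC="${SRC:-$(mktemp)}"
[ -s "$SRC" ] || echo "# global instructions" > "$SRC"   # demo content only
mkdir -p "$VAULT/global"
cp "$SRC" "$VAULT/global/CLAUDE.md"

# VPS side, one time: point the live path at the read-only clone's copy.
# ln -sfn /root/.claude/projects/-root/memory/global/CLAUDE.md /root/CLAUDE.md
```

After that, the VPS's `CLAUDE.md` is only ever as stale as the last pull.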
What I'd do differently
I should have started with git from day one. The audit where I discovered seven VPS-only files was a near-miss. If the Hetzner box had died in the month between when I filed the ticket and when I did the work, I'd have lost content I didn't even know I had. That month-long gap is on me.
Auto-commit-on-session-end has a secrets-risk footnote. If Claude Code writes an API key into a memory file during a session — because I pasted a key into a message and asked it to remember something — the Stop hook will commit and push it to GitHub before I see the diff. Private repo ≠ safe for secrets. I'm treating "don't put secrets in memory files" as a rule now, and I'll probably add a pre-commit hook that scans for the common patterns.
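That pre-commit scan could be as small as a grep over the staged diff. A sketch — the patterns are illustrative (an `sk-`-style API key, an AWS access key ID, a PEM header), not an exhaustive scanner:

```shell
#!/bin/sh
# .git/hooks/pre-commit (sketch): refuse to commit if the staged diff
# looks like it contains a credential. Patterns are examples, not a canon.
SECRETS='sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----'
if git diff --cached -U0 | grep -qE "$SECRETS"; then
  echo "pre-commit: possible secret in staged changes; commit aborted" >&2
  exit 1
fi
```

Since the Stop hook commits automatically, the hook has to fail closed — better a blocked sync than a pushed key.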
Tiger still isn't on the Vault rail. I pulled Tiger's persona files into the Vault as a backup mirror, but Tiger still writes to its own workspace and reads from there. Moving Tiger onto the Vault directly requires reconfiguring its Docker container, and I wasn't confident enough to do that in the same session. Backup first, unification later.
The cleanest part was routing ~/CLAUDE.md through the same pipeline. Before, I had two sync systems doing similar jobs: a git one for memory, an scp one for CLAUDE.md. The scp path had the same sleep-gap problem but it wasn't urgent because rsync is idempotent — it just catches up. After unifying, there's one rail, one failure mode, one set of fixes. The simplest systems are the ones where every moving part does the same kind of thing.
If you're building your own personal AI setup and your memory lives in more than one place, it's worth pulling it into a single git-backed folder sooner rather than later. The day you do the audit is the day you find the thing you didn't know you were going to lose.