# Building a TPM Agent Ecosystem with Claude Code

It started with a simple question: What if my morning routine could run itself?
As a TPM managing multiple workstreams at my company, I was spending 2+ hours every morning just getting caught up — reading Slack threads, checking Jira tickets, pulling sprint data, and prepping for meetings.
## The Problem
Every morning looked the same:
- Open 10+ Slack channels, skim for overnight activity
- Pull up Jira, check sprint board status across 3 workstreams
- Cross-reference roadmap initiatives with sprint goals
- Prep talking points for 4-6 daily meetings
- Draft standup updates for team channels
This was 2 hours of pure context-switching before I could do any actual TPM work.
## The Architecture
I built a hierarchical dispatch system where a team lead orchestrator routes compound requests to specialist agents:
```
tpm-team-lead (orchestrator)
├── tpm-daily-assistant (Opus) — calendar-driven morning briefing
├── tpm-eod-summary (Opus) — end-of-day synthesis
├── dct-program-monitor (Opus) — cross-team program tracking
├── ai-pillar-monitor (Opus) — AI pillar status across Slack/Jira/Confluence
├── daily-update-publisher (Sonnet) — Slack standup text from Jira
├── sprint-board-publisher (Sonnet) — sprint data → Confluence
├── roadmap-publisher (Sonnet) — Jira ROADMAP → Confluence
├── standup-note-to-jira (Sonnet) — Gemini notes → Jira comments
└── ... 9 more specialists
```
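Conceptually, the team lead's routing works something like the sketch below. This is a Python illustration with a hypothetical keyword registry and a placeholder `run_agent` callable; in practice the routing lives in the orchestrator agent's prompt, not in code.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical registry mapping request keywords to specialist agents.
SPECIALISTS = {
    "briefing": "tpm-daily-assistant",
    "sprint": "sprint-board-publisher",
    "roadmap": "roadmap-publisher",
    "program": "dct-program-monitor",
}

def route(request: str) -> list[str]:
    """Return the specialist agents a request should dispatch to."""
    matches = [agent for key, agent in SPECIALISTS.items() if key in request.lower()]
    # Compound requests fan out to several agents; unmatched ones default
    # to the daily assistant.
    return matches or ["tpm-daily-assistant"]

def dispatch(request: str, run_agent) -> dict[str, str]:
    """Run every matched specialist in parallel and collect their outputs."""
    agents = route(request)
    with ThreadPoolExecutor() as pool:
        results = pool.map(run_agent, agents)
    return dict(zip(agents, results))
```

For example, `route("Get me caught up on sprint and roadmap")` fans out to both `sprint-board-publisher` and `roadmap-publisher`, while a single-topic request goes straight to one specialist.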
## Tiered Model Assignment
Not all agents need the same intelligence:
- Opus: Complex synthesis agents that cross-reference multiple data sources
- Sonnet: Structured/mechanical agents that follow templates
This saves ~60% on token costs while maintaining quality where it matters.
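In Claude Code, subagents are Markdown files (under `.claude/agents/`) with YAML frontmatter, and the `model` field is where each agent's tier gets pinned. A trimmed example; the field values and body text here are illustrative, not the actual agent definition:

```markdown
---
name: sprint-board-publisher
description: Pulls active-sprint tickets from Jira and publishes a formatted report to Confluence.
model: sonnet
---

ALWAYS pull tickets from the active sprint only.
Group tickets by status, then publish the report to the sprint page in Confluence.
```

Swapping a specialist between tiers is a one-line change to `model`, which makes it cheap to experiment with where Opus actually earns its cost.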
## Key Agents
### tpm-daily-assistant
Reads my Google Calendar, enriches each meeting with Jira/Confluence/Slack context, pulls inbox highlights from labeled emails, and generates an actionable daily agenda with meeting prep notes.
### dct-program-monitor
Scans 10 Slack channels, pulls Jira ROADMAP initiatives, reads Confluence tracking pages, and generates a cross-referenced digest with risk flags, carrying week-over-week context forward from cross-team meeting notes.
### sprint-board-publisher
Pulls all tickets from the active sprint, groups by status, calculates velocity metrics, and publishes a formatted report to Confluence.
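The mechanical core of that agent, grouping by status and computing simple velocity metrics, reduces to something like this sketch. The ticket dicts mirror Jira's shape, but the function and field names are mine, not the agent's actual implementation:

```python
from collections import defaultdict

def sprint_report(tickets: list[dict]) -> dict:
    """Group sprint tickets by status and compute basic velocity metrics."""
    by_status = defaultdict(list)
    for t in tickets:
        by_status[t["status"]].append(t["key"])
    # Velocity here is the story points completed in the sprint.
    done_points = sum(t.get("points", 0) for t in tickets if t["status"] == "Done")
    total_points = sum(t.get("points", 0) for t in tickets)
    return {
        "by_status": dict(by_status),
        "velocity": done_points,
        "completion_pct": round(done_points / total_points * 100) if total_points else 0,
    }
```

Given three tickets worth 3, 5, and 2 points with the 3- and 2-point ones done, this reports a velocity of 5 and 50% completion; the agent then renders that structure into a Confluence page.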
## Results
| Metric | Before | After |
|---|---|---|
| Morning prep | 2 hours | 5 minutes |
| Sprint updates | 45 minutes | 2 minutes |
| Missed action items | ~3/week | 0 |
| Context switching | Constant | Minimal |
## Lessons Learned
- Prompt quality > model quality — When agent results were suboptimal, the fix was always the prompt, never upgrading the model.
- DB-first with static fallback — Every content function tries the database first, falls back to constants. Zero downtime during DB migrations.
- Agent instruction adherence — Sonnet agents need strong, explicit language. "ALWAYS use desk first" works. "Prefer desk" doesn't.
- Compound workflows — "Get me caught up" dispatches 3 agents in parallel. Single requests go directly to specialists.
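The DB-first-with-static-fallback pattern is tiny in code but removed a whole class of outages. A minimal sketch, where the fetcher and the fallback constant are placeholders of my own naming:

```python
import logging

# Static fallback used whenever the database is unavailable or empty.
DEFAULT_STANDUP_TEMPLATE = "Yesterday: {done}\nToday: {planned}\nBlockers: {blockers}"

def get_standup_template(fetch_from_db) -> str:
    """Try the database first; fall back to the static constant on any failure."""
    try:
        template = fetch_from_db("standup_template")
        if template:
            return template
    except Exception as exc:
        logging.warning("DB unavailable, using static fallback: %s", exc)
    return DEFAULT_STANDUP_TEMPLATE
```

Because every content function degrades the same way, a DB migration never blocks the agents; they just serve the constants until the database comes back.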