Vision & Strategy¶
Vision without execution is hallucination. Execution without vision is thrashing.
Most engineering organizations have one or the other out of balance. Some have a grand vision painted on conference room walls that has no connection to what teams actually work on. Others ship relentlessly but couldn't tell you where they're headed or why.
This chapter is about the connection between the two: how to articulate a direction that's clear enough to guide decisions, and how to translate that direction into a portfolio of work that advances it quarter by quarter.
The problem this solves¶
No clear direction: Teams optimize locally. Each team makes reasonable decisions in isolation, but the sum of those decisions doesn't add up to a coherent product or platform.
Too many priorities: Everything is important. Every team has a backlog of "strategic" work. Nothing gets finished. Focus is scattered.
Feature factories: Teams ship features without knowing if they matter. Roadmaps are lists of things to build, not outcomes to achieve.
Strategy by backlog: The order of work is determined by whoever yells loudest or whichever customer paid most recently. There's no coherent sequencing.
Zombie initiatives: Projects that started for good reasons continue long after the context has changed. No one has the authority—or the courage—to kill them.
The system described here addresses these by providing:
- A clear hierarchy: Vision → Tenets → Strategy → Themes → Bets
- Explicit focus: 3–4 themes, not 30
- Bet-driven execution: every initiative has a hypothesis, an owner, and kill criteria
- Portfolio review cadence: regular moments to start, stop, or pivot
When to use this approach¶
Use this framework if:
- You lead an engineering org, a platform team, or a product engineering group.
- You need to align multiple teams around a coherent direction.
- You want to move from output-focused to outcome-focused.
- You're tired of the annual planning theater that produces documents no one reads.
This framework assumes:
- A quarterly cadence with monthly check-ins.
- Cross-functional collaboration with Product and Design.
- Remote-first or hybrid execution.
Adapt it to your context. The principles matter more than the specific rituals.
The alignment ladder¶
Strategy is a hierarchy. Each level provides context for the level below it.
Vision (3–5 years)
↓
Tenets (timeless decision rules)
↓
Strategy (12–18 months: where to play, how to win)
↓
Themes (3–4 annual focus areas)
↓
Bets (6–12 week initiatives with hypotheses)
↓
Iterations & Releases (PRs, deploys)
Vision (3–5 years)¶
The vision answers: What does the world look like if we succeed? Who benefits, and how?
A good vision is:
- Aspirational but plausible: It stretches but doesn't require magic.
- Customer-centered: It describes impact on users, not internal metrics.
- Stable: It shouldn't change quarter to quarter. If it changes frequently, it's not a vision—it's a reaction.
A bad vision is:
- A tagline ("We're the best platform for X").
- A financial target ("$100M ARR by 2028").
- A list of features we want to build.
Vision example
"Within five years, any developer at [Company] can deploy a production-ready service in under a day, with security, observability, and compliance built in—without filing a ticket or waiting on another team."
Tenets (timeless decision rules)¶
Tenets are the principles that don't change even when the strategy does. They answer: When we face a trade-off, which way do we lean?
Good tenets are:
- Opinionated: They exclude reasonable alternatives. If everyone would agree, it's not a tenet—it's a truism.
- Actionable: You can point to a decision and say "we chose this because of Tenet 3."
- Stable: They outlive any individual strategy cycle.
Tenet examples
- "Prefer boring technology for core infrastructure. New is not better; proven is."
- "APIs first, UI second. Every capability must be programmable."
- "Reliability is product. Error budgets gate feature velocity."
- "Optimize for team autonomy. Centralize only what must be consistent."
Strategy (12–18 months)¶
Strategy answers: Given our vision, what will we focus on in the next 12–18 months? What will we NOT do?
A good strategy:
- Makes choices: It says "we'll invest here, not there." It has clear non-goals.
- Responds to context: It considers market conditions, customer feedback, competitive pressure, and internal capabilities.
- Is specific enough to guide resource allocation: Teams can look at the strategy and know whether their work aligns.
A bad strategy is:
- A list of projects.
- A statement so generic that any roadmap could fit under it.
- Something that changes every quarter.
Themes (3–4 annual focus areas)¶
Themes are the investment categories that implement the strategy. They answer: Where are we putting our resources?
Keep it to 3–4 themes. If you have more, you have no strategy—you have a list.
Each theme should have:
- A target allocation (e.g., "40% of engineering capacity").
- A clear goal (not a deliverable, an outcome).
- An owner (usually a senior leader or cross-functional triad).
Theme examples
- Theme: Platform reliability — Goal: reduce incident rate 50%, achieve 99.9% availability. Allocation: 35%.
- Theme: Developer velocity — Goal: P50 time from code to production under 1 hour. Allocation: 30%.
- Theme: Self-serve capabilities — Goal: 80% of common requests handled without a ticket. Allocation: 25%.
- Theme: Exploratory bets — Goal: validate 2–3 new capabilities for next year. Allocation: 10%.
Bets (6–12 weeks)¶
Bets are time-boxed initiatives with explicit hypotheses. They answer: What are we doing this quarter to advance the themes, and how will we know if it worked?
A bet is not a project. A project is a scope of work. A bet is a hypothesis about outcomes.
Every bet should have:
- A hypothesis: "If we do X, we expect Y to happen, evidenced by Z."
- An owner: One person accountable for driving it.
- A timebox: When do we evaluate?
- Kill criteria: What would cause us to stop early?
- Signals to watch: Leading indicators we'll track.
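The required fields above translate directly into a record you can validate mechanically. A minimal sketch in Python; the class and field names are illustrative, not from any particular tracking tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Bet:
    """A time-boxed initiative with an explicit hypothesis."""
    name: str
    owner: str                      # one person accountable for driving it
    theme: str                      # which focus area this advances
    hypothesis: str                 # "If we do X, we expect Y, evidenced by Z."
    decision_date: date             # when the timebox ends and we evaluate
    kill_criteria: list[str]        # conditions that trigger an early stop
    signals: list[str] = field(default_factory=list)  # leading indicators

    def is_well_formed(self) -> bool:
        """Bet hygiene check: hypothesis, owner, and kill criteria are required."""
        return bool(self.owner and self.hypothesis and self.kill_criteria)
```

Making the bet a typed record rather than free-form prose means the "pet projects" failure mode (bets without hypotheses, owners, or kill criteria) is caught at creation time, not discovered at the portfolio review.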
Bet card example
Bet: Unified runbook system
Owner: TL Platform | Theme: Reliability | Window: Q1 2026
Hypothesis: If we consolidate runbooks into a single searchable system with required templates, MTTR p50 will decrease 30% and on-call NPS will improve.
Slices: 1. Template and migration for Tier 1 services (4 weeks) 2. Search and discovery UX (3 weeks) 3. Integration with incident tooling (3 weeks)
Kill criteria: If adoption < 50% by week 8, or if MTTR doesn't improve after 12 weeks.
Portfolio management¶
Strategy is implemented through portfolio management. You're not managing projects—you're managing a portfolio of bets with different risk profiles and time horizons.
The Horizon model¶
A common way to think about portfolio allocation is the "Three Horizons" model:
| Horizon | Focus | Risk/Return | Time to impact | Example allocation |
|---|---|---|---|---|
| H1: Core | Reliability, maintenance, incremental improvements to existing capabilities | Low risk, predictable return | Weeks to months | 50–60% |
| H2: Adjacent | Extensions and enhancements that build on existing foundations | Medium risk, medium return | Quarters | 25–35% |
| H3: New | Exploratory bets, new capabilities, things that might fail | High risk, high potential | Quarters to years | 10–20% |
The specific percentages depend on your context. A mature platform might be 60/30/10. A platform in rapid growth might be 40/40/20.
The key discipline is: track actual allocation against target. It's easy to let H1 (urgent, reactive work) crowd out H2 and H3. That's how you end up maintaining legacy forever.
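Tracking actual allocation against target can be a small periodic report rather than a judgment call. A hedged sketch, assuming capacity is already tallied per horizon as a fraction of the total; the function name and tolerance are hypothetical:

```python
def allocation_drift(target: dict[str, float], actual: dict[str, float],
                     tolerance: float = 0.05) -> dict[str, float]:
    """Return horizons whose actual share drifts from target by more than
    `tolerance`. Shares are fractions of total capacity (0.60 = 60%)."""
    return {h: actual.get(h, 0.0) - target[h]
            for h in target
            if abs(actual.get(h, 0.0) - target[h]) > tolerance}

# Example: reactive H1 work has crowded out H2 and H3.
target = {"H1": 0.60, "H2": 0.30, "H3": 0.10}
actual = {"H1": 0.75, "H2": 0.24, "H3": 0.01}
drift = allocation_drift(target, actual)  # H1 over target, H2 and H3 under
```

Surfacing the drift numbers at each portfolio review is what keeps the H1-crowding failure visible before it becomes the status quo.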
The reliability floor¶
Reliability work is non-negotiable. If your error budget is burning, if MTTR is regressing, if on-call is burning out your team—H3 bets pause. H2 bets might pause. H1 gets the resources it needs to stabilize.
This isn't a punishment. It's physics. You can't build new capabilities on a foundation that's crumbling.
The reliability trap
Some teams get stuck in perpetual firefighting because they never invest in H2/H3 work that would reduce incidents long-term. The reliability floor is a lower bound, not a target. Once you're stable, invest in capabilities that keep you stable.
Cadences¶
Annual: Vision & Strategy Refresh (2–3 hours)¶
Once a year—typically in Q4—revisit the vision, update tenets if needed (rarely), and set the strategy for the coming 12–18 months.
Inputs:
- Business context and goals
- Customer feedback and user research
- Technical debt and reliability state
- Competitive and market signals
- Team capacity and capabilities
Outputs:
- Reaffirmed or updated vision
- Updated tenets (if any)
- Strategy document (1–2 pages)
- 3–4 themes with target allocations
Quarterly: Portfolio Review (90 minutes)¶
Every quarter, evaluate the portfolio:
- Active bets: What's working? What's not? Stop, continue, or pivot?
- Allocations: Are we hitting our H1/H2/H3 targets? Why not?
- New bets: What are we starting this quarter?
- Dependencies: What's blocking progress? What needs to be unblocked?
Outputs:
- Updated bet status (stop/continue/pivot)
- New bets for the coming quarter
- Rebalanced allocations if needed
- Escalated blockers
Monthly: Strategy Sync (45 minutes)¶
Cross-team sync on theme progress. Not a status update—a strategic discussion.
- Are we learning what we expected to learn?
- Are assumptions holding?
- What's emerging that we didn't anticipate?
Weekly: Bet Check-in (30 minutes per bet)¶
Each active bet reviews:
- Current state vs. hypothesis
- Leading signals
- Next slice
- Blockers and risks
- Decision date (if upcoming)
Common failure modes¶
| Failure mode | What it looks like | Mitigation |
|---|---|---|
| Vision-as-slogan | Inspirational words with no testable implications | Vision should have consequences. If it doesn't change any decisions, it's not a vision. |
| KPI soup | Dozens of metrics, no clarity on what matters | One North Star metric, 2–4 input metrics. That's it. |
| Shiny-object drift | New frameworks and ideas derail themes mid-quarter | Gate new work through portfolio review. No exceptions. |
| Pet projects | Bets without hypotheses, owners, or kill criteria | Every bet needs these. No exceptions. |
| Overstuffed roadmap | Everything is "Now." Nothing is prioritized. | WIP limits at the theme level. Finish things before starting things. |
| No "stop" muscle | Zombie initiatives that never die | Celebrate stopping. Make it a metric. "Number of bets stopped for cause" should be >0. |
| Strategy by backlog | Sequencing determined by ticket order, not leverage | Sequence by learning. Do the highest-uncertainty work first when possible. |
Metrics and signals¶
Leading indicators (strategy health)¶
- Theme allocation adherence: Actual % vs. target by H1/H2/H3.
- Bet hygiene: % of bets with hypothesis, owner, decision date, kill criteria.
- Discovery-to-delivery ratio: Time spent learning vs. building for new bets.
- Start/stop cadence: Number of bets stopped for cause per quarter. Higher is healthy—it means you're learning and adjusting.
- Dependency lead time: Time to unblock enabling work.
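The bet-hygiene indicator in particular is cheap to compute. A sketch, assuming bets are exported as records with the four required fields (field names are illustrative):

```python
REQUIRED_FIELDS = ("hypothesis", "owner", "decision_date", "kill_criteria")

def bet_hygiene(bets: list[dict]) -> float:
    """Fraction of bets carrying all required fields, non-empty."""
    if not bets:
        return 1.0  # vacuously healthy: no bets, nothing missing
    ok = sum(1 for b in bets if all(b.get(f) for f in REQUIRED_FIELDS))
    return ok / len(bets)
```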
Lagging indicators (outcomes)¶
- North Star movement: Is the primary outcome metric improving?
- Adoption metrics: For platform teams—% of services on paved road, % of developers using self-serve tools.
- Reliability: Error budget burn, change failure rate (CFR), MTTR. Strategy must honor the reliability floor.
- Cost/efficiency: When relevant—infrastructure unit cost, operational toil reduction.
Templates¶
One-Page Vision & Tenets¶
# Vision
[One paragraph: Who we serve, what changes for them, and why we are uniquely suited to deliver this.]
## Tenets
1. **[Tenet name]:** [Rule] — [Rationale]
2. **[Tenet name]:** [Rule] — [Rationale]
3. **[Tenet name]:** [Rule] — [Rationale]
## North Star & Inputs
- **North Star:** [Single outcome metric]
- **Input 1:** [Leading indicator]
- **Input 2:** [Leading indicator]
- **Input 3:** [Leading indicator]
## Non-goals
What we explicitly will NOT do:
- [Non-goal 1]
- [Non-goal 2]
Strategy 1-Pager (12–18 months)¶
# Strategy: [Name]
**Period:** [Start] – [End]
**Owner:** [Name]
## Context
What signals, constraints, or shifts are driving this strategy?
- Customer/user signals
- Technical/operational signals
- Market/competitive signals
- Resource constraints
## Where to play
What segments, capabilities, or areas will we focus on?
## How to win
What's our differentiated approach? What trade-offs are we making?
## Choices and trade-offs
What we ARE doing:
- [Choice 1] — because [rationale]
What we are NOT doing:
- [Non-choice 1] — because [rationale]
## Themes and target allocations
| Theme | Goal | Allocation |
| --------- | --------- | ---------- |
| [Theme 1] | [Outcome] | [%] |
| [Theme 2] | [Outcome] | [%] |
| [Theme 3] | [Outcome] | [%] |
## Risks and assumptions
| Risk/Assumption | Mitigation/Validation |
| --------------- | --------------------- |
| [Risk 1] | [How we address it] |
## Measures of success
- **North Star:** [Metric and target]
- **Input 1:** [Metric and target]
- **Input 2:** [Metric and target]
Bet Card¶
# Bet: [Name]
**Owner:** [Name] | **Theme:** [Theme] | **Window:** [Start – Decision date]
## Hypothesis
If we [change], we expect [user/system outcome], evidenced by [leading metrics].
## Plan (slices)
1. [Slice 1] — [Timeframe]
2. [Slice 2] — [Timeframe]
3. [Slice 3] — [Timeframe]
## Signals to watch
| Signal | Threshold | Current |
| ---------- | --------- | ------- |
| [Metric 1] | [Target] | [Value] |
| [Metric 2] | [Target] | [Value] |
## Kill criteria
We stop or pivot if:
- [Condition 1]
- [Condition 2]
Next decision on: [Date]
## Dependencies and risks
| Dependency/Risk | Owner | Status |
| --------------- | ------ | -------- |
| [Item] | [Name] | [Status] |
## Links
- [Issue/Epic]
- [Dashboard]
- [ADR if applicable]
Quarterly Portfolio Review Checklist¶
## Pre-review prep
- [ ] All bet owners have updated status
- [ ] Allocation actuals calculated
- [ ] Reliability metrics reviewed
## Review agenda
1. **Active bets** (60 min)
- [ ] Each bet: status, signals, recommendation (continue/stop/pivot)
- [ ] At least one stop decision made
2. **Allocations** (15 min)
- [ ] H1/H2/H3 actuals vs. targets
- [ ] Reliability floor check
3. **New bets** (15 min)
- [ ] Proposed bets for next quarter
- [ ] Each has hypothesis, owner, kill criteria
## Outputs
- [ ] Updated bet statuses published
- [ ] New bets approved and assigned
- [ ] Allocation adjustments documented
- [ ] Blockers escalated with owners
The art of saying no¶
Strategy is as much about what you don't do as what you do. The hardest part of portfolio management is stopping things and saying no to new things.
Why stopping is hard¶
- Sunk cost fallacy: We've already invested so much.
- Optimism bias: It'll work out if we just give it more time.
- Fear of conflict: The person who started this will be upset.
- No clear owner: No one has the authority—or thinks they have the authority—to stop it.
How to make stopping easier¶
Set kill criteria upfront. When you start a bet, define what would cause you to stop. This makes stopping a mechanical decision, not a judgment call.
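When kill criteria are recorded as explicit thresholds, the stop decision really is mechanical. A hypothetical sketch reusing the runbook bet from the example card, where adoption must reach 50% by week 8:

```python
def should_stop(signals: dict[str, float], criteria: dict[str, float]) -> bool:
    """Stop or pivot if any watched signal is below its agreed floor."""
    return any(signals.get(name, 0.0) < floor
               for name, floor in criteria.items())

criteria = {"adoption_rate": 0.50}   # floor agreed when the bet started
week8 = {"adoption_rate": 0.42}      # observed leading signal at week 8
# should_stop(week8, criteria) is True: the criteria fire, the bet stops.
```

The point is not the code but the contract: the threshold was negotiated before anyone was emotionally invested, so evaluating it later is arithmetic, not argument.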
Celebrate stopping. Publicly recognize teams that stopped bets based on learning. "We tried X, learned it wasn't working because of Y, and reallocated to Z" is a success story, not a failure.
Track "stopped for cause" as a metric. If your number of stopped bets is zero quarter after quarter, you're not experimenting—you're just executing a fixed roadmap.
Make portfolio review a ritual. Stopping is easier when it's expected. If every quarter includes a "what are we stopping?" question, it's normal, not exceptional.
Further reading¶
- Good Strategy Bad Strategy by Richard Rumelt — The clearest explanation of what strategy actually is (and isn't).
- Playing to Win by A.G. Lafley and Roger Martin — The "where to play / how to win" framework.
- The Three Horizons of Growth by Baghai, Coley, and White — Original source of the H1/H2/H3 model.
- Measure What Matters by John Doerr — OKRs and the discipline of outcome-focused goals.
- Thinking in Bets by Annie Duke — Making decisions under uncertainty, treating outcomes as bets.
- Escaping the Build Trap by Melissa Perri — Moving from output to outcome focus in product organizations.
Related chapters¶
- Core Principles — The principles that guide strategic choices.
- Decision Making & ADRs — How to make and record the decisions that implement strategy.
- Delivery: Metrics in Execution — How to track progress on strategic bets.
- Scaling: Scaling Systems — How architecture decisions connect to strategy.