Module 5
Experiment System
Run measurable experiments tied to the KPI tree, and close each one with a clear ship/iterate/rollback decision.
Why it matters
Without a system, your backlog becomes wishful thinking. A consistent experiment template + scoring makes trade-offs explicit and turns learning into an asset.
Template
Experiment card
Hypothesis: If we [change], then [metric] will [move], because [mechanism].
- Primary KPI:
- Guardrails:
- Segment:
- Duration:
- Owner:
Design
- What exactly changes?
- Who is impacted?
- How do we measure success?
- Which events/properties must exist?
Decision log
- Ship / Iterate / Rollback
- Key learnings
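To keep cards consistent across the backlog, the template can be encoded as a small data structure. The sketch below is a hypothetical Python dataclass (the class, field, and enum names are illustrative, not part of the template); it mirrors the fields of the card above and the ship/iterate/rollback outcomes from the decision log.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Decision(Enum):
    SHIP = "ship"
    ITERATE = "iterate"
    ROLLBACK = "rollback"

@dataclass
class ExperimentCard:
    # Hypothesis: "If we [change], then [metric] will [move], because [mechanism]."
    hypothesis: str
    primary_kpi: str
    guardrails: list[str]          # metrics that must not regress
    segment: str                   # who is impacted
    duration_days: int
    owner: str
    required_events: list[str] = field(default_factory=list)  # events/properties that must exist
    decision: Optional[Decision] = None  # Ship / Iterate / Rollback, filled in after the run
    learnings: str = ""                  # key learnings for the decision log
```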
Scoring (RICE)
- Reach: how many users/workspaces it touches
- Impact: expected KPI impact (1–3)
- Confidence: 0.5 / 0.8 / 1.0
- Effort: person-days
Score = (R * I * C) / E
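A minimal sketch of the score calculation, assuming the scales above (Impact 1–3, Confidence 0.5/0.8/1.0, Effort in person-days); the function name and the example numbers are illustrative, not taken from a real experiment.

```python
def rice_score(reach: float, impact: float, confidence: float, effort_days: float) -> float:
    """Score = (Reach * Impact * Confidence) / Effort; higher means better return on effort."""
    if effort_days <= 0:
        raise ValueError("Effort must be positive (person-days).")
    return (reach * impact * confidence) / effort_days

# Hypothetical example: reaches 2,000 workspaces, impact 2, confidence 0.8, 5 person-days
# => (2000 * 2 * 0.8) / 5 = 640
print(rice_score(2000, 2, 0.8, 5))  # 640.0
```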
Common mistakes
- Running experiments without instrumentation.
- No guardrail metrics (shipping "wins" that quietly degrade quality elsewhere).
- No decision log (no compounding learning).
Example output
Experiment: reduce onboarding steps + prefill integrations → goal: -20% TTV (time to value), +8pp activation rate.