Operating Model · Team Leadership

How I Run a Team

The operating rhythms I use to keep product, design, and engineering moving in lockstep: a pre-sprint design iteration cycle that de-risks features before development, a two-week sprint cadence that everyone can predict, and an OKR practice that compounds quarter over quarter.

Company
Personal Operating Model
Role
Product & Design Leadership
Period
Refined across 15+ years of practice
Operating rhythms
3
Operating Model · Agile · OKRs · Design Process · Team Leadership
Part 1 — Design Iteration Cycle

De-risk the feature before sprint zero

The most expensive bug in product development is not in production — it's in the spec. By the time a poorly validated idea hits sprint planning, you've already committed engineering time, set stakeholder expectations, and burned momentum that's hard to reset.

My answer is a structured pre-sprint loop that runs 2–3 sprints ahead of development. Issues get surfaced, suggestions get tested, and only a validated hypothesis with a defined user story ever enters Sprint 1.

The pre-sprint flow

Pre-sprint runs 2–3 sprints ahead. Sprint 1 turns the validated suggestion into a hypothesis + user story. Sprint 2 ships.

The four pre-sprint stages

  1. Stage 1 — Find Issue (UX / UI / PM)

    Outcome: ideation and backlog discussion. The team surfaces friction, drop-off, or unmet needs from research, analytics, support tickets, or experiment failures. The output is not a feature — it's a clearly framed problem.
  2. Stage 2 — Design Suggestion (UI / UX / PM)

    Outcome: design tasks and deliverables. Repeat 2–3 rounds until you have promising variations. Deliverable #2: 2–3 suggestions presented as mockups or prototypes — not one polished mock that pre-decides the answer.
  3. Stage 3 — Test / Review (UX / UI / BA / PM)

    Outcome: feedback and approval. Stakeholders pressure-test the suggestions; usability validation happens here, not in production. Killing a bad idea at this stage saves a sprint of engineering.
  4. Stage 4 — Define Hypothesis (UX / BA / PM + Dev) — Sprint 1

    Outcome: user story. Deliverable #1: a wireframe presented to share the issue finding and hypothesis. Engineering joins here so estimates and architectural concerns surface before commitment.
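The four stages above behave like a gate: nothing enters Sprint 1 until every stage has produced its deliverable. As an illustrative sketch only (the `Stage` structure and `ready_for_sprint` check are my own shorthand, not tooling I actually run), the gate looks like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str
    roles: tuple   # who drives the stage
    outcome: str   # the deliverable that gates the next stage

# The four pre-sprint stages, in order.
PRE_SPRINT = (
    Stage("Find Issue",        ("UX", "UI", "PM"),        "framed problem"),
    Stage("Design Suggestion", ("UI", "UX", "PM"),        "2-3 mockups or prototypes"),
    Stage("Test / Review",     ("UX", "UI", "BA", "PM"),  "stakeholder approval"),
    Stage("Define Hypothesis", ("UX", "BA", "PM", "Dev"), "user story"),
)

def ready_for_sprint(outcomes: set) -> bool:
    """A feature enters Sprint 1 only once every pre-sprint stage
    has produced its outcome: no deliverable, no sprint commitment."""
    return all(stage.outcome in outcomes for stage in PRE_SPRINT)
```

The point of the sketch is the `all(...)` check: a missing deliverable at any stage blocks the feature, which is exactly how a bad idea gets killed before it costs a sprint of engineering.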
Part 2 — Typical Sprint

A predictable two-week cadence

Velocity comes from predictability, not heroics. A team that knows exactly what happens on each day of a sprint can focus on the work; a team that doesn't spends half its energy on coordination overhead.

My default sprint is two weeks, organized so design plans the next sprint while engineering builds the current one — keeping the design pipeline always one step ahead.

The two-week sprint structure

Sprint kickoff Monday. Story dev / QA / UAT through the middle. Demo prep, review, retrospective on the final Friday.

Sprint planning detail — design and engineering interlock

Design plans Sprint N+1 while engineering builds Sprint N. The handoff is continuous, not transactional.

What this cadence enforces

  • Design always one sprint ahead

    Engineering never waits on design; design never waits on engineering questions. Stories enter Sprint N with a wireframe, a hypothesis, and answered architectural questions.
  • QA is continuous, not a phase

    Test cases and automation get written alongside the feature, not after. By the final week of the sprint, regression testing is incremental — not a fire drill.
  • Demo + retro on the same day

    Closing the sprint with both rituals back-to-back keeps learnings fresh. Retros that drift to the following Monday lose their teeth.
  • Friday-Friday rhythm

    Sprint planning the Friday before kickoff; demo + retro the Friday at the end. The team always knows where it is in the cycle.
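The Friday-to-Friday rhythm is mechanical enough to compute. A minimal sketch, assuming a two-week sprint that kicks off on a Monday (the function name and return shape are my own, for illustration):

```python
from datetime import date, timedelta

def sprint_ceremonies(kickoff: date) -> dict:
    """Given a sprint's kickoff Monday, return the ceremony dates
    for a two-week sprint: planning the Friday before kickoff,
    demo + retro on the final Friday, next kickoff the Monday after."""
    if kickoff.weekday() != 0:  # 0 = Monday
        raise ValueError("kickoff must be a Monday")
    return {
        "planning": kickoff - timedelta(days=3),        # Friday before
        "kickoff": kickoff,                             # Monday, day 1
        "demo_and_retro": kickoff + timedelta(days=11), # final Friday
        "next_kickoff": kickoff + timedelta(days=14),   # following Monday
    }
```

Because every date derives from the kickoff, the whole quarter's calendar can be generated in advance — which is the point: the team always knows where it is in the cycle.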
Part 3 — OKRs

Quarterly direction, biweekly accountability

OKRs fail in two predictable ways: they get set in a leadership room and never make it to the team, or they get tracked once a quarter as a retrospective formality. Both fail for the same reason — there's no operating rhythm connecting the goal to the work.

My approach treats OKRs as an active feedback loop: clear at setting, visible at every biweekly check-in, and honestly debriefed at quarter end. The goal is not 100% attainment — that means the OKR was too easy. The goal is calibrated learning.

The OKR rhythm

Setting → biweekly execution check-ins → quarter-end retrospective. Monthly business reviews keep leadership in the loop.

The three OKR moments

  1. Setting OKRs

    Ensure OKRs are clear and well understood, with specific outcomes and business impact attached. When OKRs cascade, team leaders and members work together to personalize them so they're actually relevant — not a copy-paste of the company OKR with team-level vocabulary.
  2. Execution + biweekly check-ins

Use OKRs to guide feedback conversations every two weeks. Track progress visibly so the whole team sees where they are. A biweekly cadence is short enough to course-correct and long enough to actually move a metric.
  3. Retrospectives

    With measurable and timebound OKRs, quarter-end accomplishments are easy to quantify — and misses are easier to debrief without blame. The retrospective output feeds the next quarter's OKR setting; the loop closes.
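"Calibrated learning" can be made concrete with a toy scoring sketch: attainment per key result, averaged per objective, with a quarter-end calibration read. The data shapes and the thresholds (0.95, 0.3) are illustrative assumptions, not a standard I'm prescribing:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyResult:
    name: str
    target: float          # measurable, timebound goal for the quarter
    current: float = 0.0   # updated at each biweekly check-in

    @property
    def attainment(self) -> float:
        """Progress toward target, capped at 1.0."""
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    name: str
    key_results: List[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        """Average attainment across key results (0.0 to 1.0)."""
        krs = self.key_results
        return sum(kr.attainment for kr in krs) / len(krs) if krs else 0.0

    def calibration(self) -> str:
        """Quarter-end read: near-100% attainment means the OKR was
        too easy; near-zero means it was unreachable. Thresholds
        here are illustrative, not a standard."""
        s = self.score()
        if s >= 0.95:
            return "too easy: set a harder target"
        if s < 0.3:
            return "unreachable: debrief the assumptions"
        return "well calibrated"
```

Because each key result is a number against a target, the quarter-end retrospective is arithmetic plus an honest conversation — which is why measurable, timebound key results make misses easier to debrief without blame.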

Velocity is not how hard the team works. It's how predictable the rhythm is, how few surprises there are in any given week, and how cleanly learnings from the last sprint become decisions in the next one.

How they fit together

Three rhythms, one feedback loop

The Design Iteration Cycle decides what ships. The sprint cadence decides how fast it ships. OKRs decide why it ships and whether the bet actually paid off. Run any one of these in isolation and you get partial value; run all three on a shared calendar and they compound.

The teams I've led that compounded learnings quarter over quarter were the ones that respected the boundaries between these rhythms. The teams that mixed them — running design exploration inside the dev sprint, or treating OKRs as sprint backlog — burned out and learned less per quarter.