Designing with experimentation in mind
Opinions don't move metrics. I've worked in teams where every design decision came down to preference — seniority, gut feel, aesthetics. And I've worked in teams where decisions came down to data. The difference in output is significant. Experimentation is what closes the gap between what we think will work and what actually does.
Why experimentation matters
Product design is full of assumptions. You assume users understand the value prop. You assume the new flow is clearer than the old one. You assume the CTA change will improve clicks. Experimentation doesn't eliminate assumptions — it validates them before they go to production at scale.
The business case is straightforward: a 5% improvement in conversion on a high-traffic flow compounds. One validated experiment can outperform months of redesign work. The question isn't whether to test — it's how to design experiments that actually answer something.
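To make that concrete, here's a back-of-the-envelope sketch. Every number in it is a hypothetical placeholder, not a figure from a real project:

```python
# What a 5% relative lift is worth on a high-traffic flow.
# All numbers below are hypothetical placeholders.

monthly_visitors = 200_000      # traffic entering the flow
baseline_conversion = 0.10      # 10% of visitors convert today
relative_lift = 0.05            # a validated 5% relative improvement

baseline = monthly_visitors * baseline_conversion
improved = baseline * (1 + relative_lift)
extra_per_month = improved - baseline

print(f"Extra conversions per month: {extra_per_month:,.0f}")      # 1,000
print(f"Extra conversions per year:  {extra_per_month * 12:,.0f}") # 12,000

# Successive wins multiply: two 5% lifts compound to ~10.25%, not 10%.
print(f"Two stacked 5% lifts: {(1 + relative_lift) ** 2 - 1:.2%}")
```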
What designers get wrong
Designing without measuring
Shipping a new flow without instrumentation is designing blind. If you can't measure the outcome, you can't learn from it. Every design that touches a conversion point needs tracking built in from the start — not added after the fact.
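As a sketch of what "built in from the start" means in practice: fire an exposure event the moment a step renders, and a completion event when it's done, so the gap between the two is measurable from day one. The `track` helper and the event names here are hypothetical stand-ins for whatever analytics SDK a team actually uses:

```python
def track(event: str, **properties) -> None:
    """Placeholder for a real analytics SDK call."""
    print(event, properties)

def render_payment_step(variant: str) -> None:
    # Exposure event fires when the step is shown, not when it succeeds,
    # so drop-off between "viewed" and "completed" is visible.
    track("signup_step_viewed", step="payment_details", variant=variant)
    # ... render the step ...

def complete_payment_step(variant: str) -> None:
    track("signup_step_completed", step="payment_details", variant=variant)

render_payment_step("B")
complete_payment_step("B")
```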
Optimising for aesthetics over outcomes
A cleaner UI is not the goal. The goal is a flow that converts, retains, or activates better than what was there before. Visual improvements matter when they remove cognitive load or build trust — not as ends in themselves.
Not working with data
Designers who don't look at analytics are making decisions in a vacuum. Drop-off rates, funnel data, heatmaps, and session recordings are design inputs — not just analyst deliverables. I start every project by understanding where the numbers break down, not by opening a design tool.
How I approach experimentation
Define the problem
What's the metric? Where is it underperforming? What does winning look like? A well-defined problem makes hypothesis design straightforward and keeps the experiment focused on one thing at a time.
Identify friction points
Before designing anything, I map the current flow and mark where users drop off, hesitate, or make errors. These are the hypotheses waiting to be tested. Friction isn't guesswork — it shows up in the data.
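A minimal sketch of that mapping, turning raw step counts into drop-off rates; the funnel steps, counts, and flag threshold are all hypothetical:

```python
# Hypothetical step counts for a signup-to-purchase funnel.
funnel = [
    ("landing", 10_000),
    ("form_started", 6_200),
    ("form_completed", 3_100),
    ("offer_viewed", 2_900),
    ("purchase", 1_450),
]

for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop = 1 - next_users / users
    flag = "  <-- friction candidate" if drop > 0.4 else ""
    print(f"{step} -> {next_step}: {drop:.0%} drop-off{flag}")
```

Here the form and the offer view each lose half their users, so those two steps become the first hypotheses.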
Design hypotheses
Each experiment starts with a statement: "We believe that [change] will [outcome] because [rationale]." This forces clarity before any design work begins. If you can't complete that sentence, the experiment isn't ready.
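If it helps to see the template as a structure, here's one hypothetical way to encode it; the `Hypothesis` record and its example values are illustrative, not a tool I'm prescribing:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    change: str     # the design change we'll make
    outcome: str    # the metric movement we expect
    rationale: str  # why we believe the change causes the outcome

    def statement(self) -> str:
        return (f"We believe that {self.change} will {self.outcome} "
                f"because {self.rationale}.")

h = Hypothesis(
    change="deferring optional profile fields to after signup",
    outcome="increase signup completion",
    rationale="the long form is the largest drop-off point in the funnel",
)
print(h.statement())
```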
Run A/B tests
I work directly with product and engineering to define test parameters — sample size, duration, success metrics, guardrail metrics. Running a test without statistical rigour produces noise, not signal.
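For sample size, the standard two-proportion power calculation is enough to show why rigour matters; the baseline and lift figures below are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, p_expected: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per arm for a two-sided two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    variance = (p_baseline * (1 - p_baseline)
                + p_expected * (1 - p_expected))
    return ceil((z_alpha + z_beta) ** 2 * variance
                / (p_baseline - p_expected) ** 2)

# Hypothetical: 10% baseline, hoping to detect a lift to 11%.
print(sample_size_per_variant(0.10, 0.11))  # ~14,750 users per arm
```

Detecting a one-point lift needs roughly fifteen thousand users per variant. Stopping the test early, or eyeballing a smaller sample, is how noise gets shipped as signal.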
Learn and iterate
A failed experiment is not a failure — it's a result. The most useful thing a negative result tells you is what not to build next. The learning gets documented. The next hypothesis gets sharper.
What gets tested
Across onboarding and conversion projects, the experiments I've been closest to span a range of hypotheses — each designed to test one specific assumption:
- Onboarding flow variations — different step orders, entry points, and value messaging
- CTA changes — placement, copy, visual weight, and surrounding context
- Step reduction — removing fields, deferring questions, replacing forms with defaults
- Trust signals — adding transparency copy at high-anxiety decision points
- Messaging changes — reframing value propositions at key drop-off moments
Each test is scoped tightly. Broad redesigns are hard to learn from. Small, isolated changes produce clear signal.
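Reading out a finished test is the same arithmetic in reverse: a two-proportion z-test on the two variants. The counts here are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int,
                     conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in conversion between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical readout: control 1,450/14,800 vs variant 1,630/14,750.
z, p = two_proportion_z(1450, 14_800, 1630, 14_750)
print(f"z = {z:.2f}, p = {p:.4f}")
```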
What this approach produces
Validated experiments compound: every shipped change has evidence behind it. Conversion improves because each iteration is grounded in what users actually did, not what we thought they'd do.
Drop-off decreases because the experiments target exactly the moments where users were leaving. Decision-making across the team gets better because there's a shared language of hypotheses, results, and learnings — not opinions and preferences.
See how experimentation shaped a real redesign at a high-stakes conversion point.
CLARK — Offer view redesign →
Read: Designing onboarding flows that improve conversion →
Read: Reducing friction to improve conversion →