Yunsi Yang
Designing Experiments for Noisy Marketplace Traffic

A practical framework for building trustworthy experiments when publisher mix, seasonality, and marketplace behavior are all shifting at once.

Analytics · 6 min read · Experimentation · Monetization · Decision systems

The problem with clean theory

The cleanest experiment design on paper often breaks down in production. Marketplace traffic changes by hour, placements behave differently across publishers, and demand can move independently of the product change being tested.

That does not mean experimentation should become soft or improvised. It means the operating system around the test has to be strong enough to separate true lift from changes in traffic shape.

What a usable framework looks like

The most useful framework starts with disciplined KPI choices: one primary metric, a small number of guardrails, and a pre-committed definition of what counts as a meaningful win.
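One way to make that pre-commitment concrete is to write it down as code before any data arrives. The sketch below uses hypothetical metric names and thresholds; the specific fields are assumptions, not a prescribed schema.

```python
# A minimal sketch of a pre-committed experiment plan. The metric names
# and thresholds here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass(frozen=True)
class ExperimentPlan:
    primary_metric: str              # the one metric the decision rides on
    guardrails: tuple                # metrics that must not regress
    min_detectable_lift: float       # pre-committed "meaningful win"
    max_guardrail_regression: float  # tolerated guardrail movement


plan = ExperimentPlan(
    primary_metric="revenue_per_session",          # hypothetical name
    guardrails=("fill_rate", "page_latency_p95"),  # hypothetical names
    min_detectable_lift=0.02,      # a +2% lift counts as a meaningful win
    max_guardrail_regression=0.01, # guardrails may not drop more than 1%
)


def is_win(observed_lift, guardrail_deltas, plan):
    """Decision rule: primary lift clears the bar and no guardrail regresses too far."""
    return (
        observed_lift >= plan.min_detectable_lift
        and all(d >= -plan.max_guardrail_regression for d in guardrail_deltas.values())
    )
```

Because the plan is frozen before launch, the decision at readout is a lookup, not a negotiation.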

From there, the design should make mix shifts visible instead of pretending they do not exist. That is where methods like Welch's t-test, bootstrap intervals, and difference-in-differences become practical rather than academic.
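Two of those methods fit in a few lines. The sketch below assumes per-user revenue samples for a control and a treatment arm; the data is simulated for illustration, and in practice it would come from the experiment's assignment logs.

```python
# A minimal sketch of Welch's t statistic and a percentile-bootstrap CI,
# using simulated per-user revenue samples.
import math
import random
import statistics

random.seed(7)
control = [random.gauss(1.00, 0.6) for _ in range(2000)]
treatment = [random.gauss(1.06, 0.9) for _ in range(2000)]


def welch_t(a, b):
    """Welch's t statistic: compares means without assuming equal variances,
    which matters when a change shifts the spread of outcomes, not just the mean."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se


def bootstrap_ci(a, b, n_resamples=500, alpha=0.05):
    """Percentile bootstrap CI for the lift mean(a) - mean(b):
    resample each group with replacement and take the middle (1 - alpha) of diffs."""
    diffs = []
    for _ in range(n_resamples):
        ra = [random.choice(a) for _ in range(len(a))]
        rb = [random.choice(b) for _ in range(len(b))]
        diffs.append(statistics.mean(ra) - statistics.mean(rb))
    diffs.sort()
    lo = diffs[int(n_resamples * alpha / 2)]
    hi = diffs[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi


t = welch_t(treatment, control)
lo, hi = bootstrap_ci(treatment, control)
print(f"Welch t = {t:.2f}, 95% bootstrap CI for lift: [{lo:.3f}, {hi:.3f}]")
```

The bootstrap interval is what stakeholders can actually read: a plausible range for the lift, rather than a bare p-value.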

Why teams trust it

People trust experimentation when they can see how the decision was made. Good analysis should reduce ambiguity, not force stakeholders to decode a statistical ritual.

The best outcome is not just a result. It is a repeatable way of shipping, learning, and deciding faster the next time.
