Yunsi Yang

Experimentation · Growth · Monetization

Turning product and marketplace data into clearer decisions

I design experiments, monitoring systems, and monetization frameworks that help teams move with more clarity.

Core Strengths

Experimentation

Design tests that stay credible under noisy traffic and lead to decisions teams trust.

  • Welch's t-test, bootstrap intervals, and diff-in-diff when mix shifts matter
  • Primary metrics and guardrails defined before the readout
  • Reusable analysis workflows that shorten setup time

Monitoring & Decision Systems

Build KPI monitoring that makes issues legible before they become escalations.

  • Seasonality-aware baselines for business-critical metrics
  • Alerts designed around severity, scope, and next action
  • Dashboards that stay calm under pressure

Monetization & Growth

Translate traffic, pricing, and placement behavior into practical revenue levers.

  • Dayparting, frequency caps, and traffic segmentation
  • Pricing, pacing, and floor decisions tied back to yield
  • Partner growth playbooks that stay grounded in data

Forecasting & Modeling

Use forecasting and feature-rich modeling to support planning without overcomplicating the story.

  • Demand and yield forecasting for planning decisions
  • Feature engineering across contextual and behavioral signals
  • Models explained in business terms, not black-box theater

Languages

Python · SQL

Platforms

BigQuery · GCP · AWS · Databricks

Analytics

Experimentation · Causal thinking · Forecasting · Feature engineering · ROI analysis

Visualization

Looker Studio · Tableau · Power BI

Workflow

ETL · APIs · Airflow · dbt

Featured Case Studies

Testing, monitoring, and growth in practice.

Case Study 01

A/B Testing & Iteration Framework

A reusable measurement framework for ad layouts, creative formats, and frequency caps where publisher mix and seasonality could easily distort the readout.

+10% CTR lift · +5% sustainable RPM lift · 70% faster experiment setup

Tested

Ad layouts, static vs. AI-generated creatives, and interstitial frequency caps measured under noisy traffic.

Setup

Ad-request-level randomization kept the split close to delivery.

CTR was the primary KPI, with RPM and margin as guardrails.

Shared BigQuery tables and dashboards made new tests faster to launch.
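As a sketch of what ad-request-level randomization can look like (the function and field names here are illustrative, not the production implementation), deterministic hashing keeps each request's assignment stable across retries and independent across concurrent experiments:

```python
import hashlib

def assign_variant(request_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket an ad request into control or treatment.

    Hashing the request ID together with the experiment name makes the
    split reproducible and keeps assignments uncorrelated between
    experiments running on the same traffic.
    """
    digest = hashlib.sha256(f"{experiment}:{request_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform draw in [0, 1]
    return "treatment" if bucket < treatment_share else "control"
```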

Measurement

Welch’s t-test for CTR and RPM under unequal variance.

Bootstrap intervals for skewed revenue metrics.

Difference-in-differences when traffic shifts risked distorting results.
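A minimal sketch of the Welch's t-test and bootstrap-interval steps above, run on synthetic click data (the real inputs would come from the shared experiment tables; the implementation here mirrors the standard formulas rather than any specific library):

```python
import math
import random

random.seed(7)

# Illustrative per-request click outcomes (0/1) for two experiment arms.
control = [1 if random.random() < 0.050 else 0 for _ in range(20_000)]
treatment = [1 if random.random() < 0.055 else 0 for _ in range(20_000)]

def welch_t(a, b):
    """Welch's t statistic: compares means without assuming equal variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (mb - ma) / math.sqrt(va / na + vb / nb)

def bootstrap_diff_ci(a, b, n_boot=2_000, alpha=0.05):
    """Percentile bootstrap CI for mean(b) - mean(a); robust to skewed metrics."""
    diffs = sorted(
        sum(random.choices(b, k=len(b))) / len(b)
        - sum(random.choices(a, k=len(a))) / len(a)
        for _ in range(n_boot)
    )
    return diffs[int(n_boot * alpha / 2)], diffs[int(n_boot * (1 - alpha / 2))]

t_stat = welch_t(control, treatment)
ci_low, ci_high = bootstrap_diff_ci(control, treatment)
```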

Impact

Cut setup time, made results more consistent, and delivered +10% CTR lift with +5% sustainable RPM improvement.

Experiment Readout

  • Single-pane vs. multi-pane: layout test across live publisher traffic → +10% CTR
  • Static vs. AI GIF: creative test with RPM guardrails → +5% RPM
  • Frequency cap tuning: exposure test with sustainability checks → faster iteration

Lift versus prior benchmark: layout testing +10%, creative testing +5%, frequency cap +7%.

Different tests, same measurement standard across layouts, creatives, and cap changes.

Case Study 02

Automated Anomaly Detection

A monitoring system for expected versus actual performance across publisher, campaign, and placement slices.

Monitoring scope (48+ slices)

Hourly KPI monitoring across ROI, margin, CTR, RPM, win rate, and pacing.

Cuts by publisher, campaign, placement, platform, and cadence so the alert could point toward a root cause, not just a symptom.

Detection logic

Rolling baselines with seasonality so expected performance moved with the business instead of against it.

Z-score thresholds and EWMA smoothing to separate real anomalies from ordinary volatility.
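A minimal sketch of this detection logic, assuming KPI values arrive one interval at a time; the rolling window supplies the expected level and spread, EWMA smoothing damps ordinary volatility, and the seasonal (hour-of-day, day-of-week) adjustment is noted in a comment but omitted for brevity:

```python
from collections import deque

class AnomalyDetector:
    """Flag KPI points that deviate from a rolling, smoothed baseline.

    Illustrative sketch: a production version would also subtract a
    seasonal profile so the expected band moves with the business.
    """

    def __init__(self, window=24, alpha=0.3, z_threshold=3.0):
        self.history = deque(maxlen=window)  # rolling baseline window
        self.alpha = alpha                   # EWMA smoothing factor
        self.z_threshold = z_threshold
        self.ewma = None

    def observe(self, value):
        """Return (z_score, is_anomaly) for the latest KPI reading."""
        # Smooth the incoming point before scoring it against the baseline.
        self.ewma = value if self.ewma is None else (
            self.alpha * value + (1 - self.alpha) * self.ewma
        )
        if len(self.history) >= 2:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / (len(self.history) - 1)
            std = var ** 0.5 or 1e-9  # guard against a flat baseline
            z = (self.ewma - mean) / std
        else:
            z = 0.0  # not enough history to score yet
        self.history.append(value)
        return z, abs(z) > self.z_threshold
```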

Alerting workflow

Slack alerts with expected versus actual values, severity, and affected scope.

Minimum-volume and persistence rules so alerts stayed useful instead of noisy.
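The minimum-volume and persistence gating can be sketched as a simple predicate (thresholds and argument names here are illustrative):

```python
def should_alert(anomalies, volumes, min_volume=500, persistence=3):
    """Fire only when an anomaly persists and the slice has enough traffic.

    `anomalies` is the per-interval anomaly flag for one slice and
    `volumes` the matching request counts. A single anomalous interval
    is not enough, and low-volume slices never page anyone.
    """
    recent = list(zip(anomalies, volumes))[-persistence:]
    if len(recent) < persistence:
        return False  # not enough history to judge persistence
    return all(flag and vol >= min_volume for flag, vol in recent)
```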

Operational impact

The system cut manual monitoring time, surfaced pacing and ROI issues before clients noticed, and gave revenue teams a faster path from detection to diagnosis.

ROI anomaly detection chart: actual ROI against the expected band, with one anomaly flagged at z = -3.2.

Publisher slices: 48+ · Campaigns monitored: 120+ · Placement cuts: real-time · Latency: 12 min

Case Study 03

Account & Partnership Growth

A monetization operating model for scaling traffic, improving pricing decisions, and growing partner revenue over time.

Ad-Tech Data Analyst · BigQuery · Looker · Python

Turning Raw Auction Data Into Revenue Intelligence

This work translated raw marketplace behavior into operating decisions on dayparting, frequency caps, traffic quality, and pricing. The goal was steady partner growth, not a one-time bump.

Impressions: 7M · Revenue: 5x · ROI: +6% · CPAU: $0.08

revenue_ops.py · Live

$ connect BigQuery --dataset=auction_metrics

✓ Connected. Partner traffic loaded.

$ segment traffic --by=placement,hour,source

$ optimize_levers --dayparting --freq_cap --floors

⚡ Budget shifted to higher-yield inventory

✓ Revenue efficiency improved over time

Monetization control center

Month 1 → Month 5

  • Dayparting: shift spend toward stronger conversion windows → higher yield
  • Frequency caps: reduce fatigue without crushing scale → better efficiency
  • Traffic segmentation: separate higher-quality placements from weak supply → cleaner mix
  • Pricing + floors: tune pacing and monetization rules using auction signals → improved margin
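As an illustration of the dayparting lever, a hypothetical reweighting that follows observed hourly yield while keeping a floor of uniform spend so weaker hours still receive exploration traffic (all names and numbers are illustrative):

```python
def daypart_weights(hourly_yield, floor=0.2):
    """Reweight hourly spend toward higher-yield hours.

    `hourly_yield` maps hour -> observed yield (e.g. RPM). A share
    `floor` of budget stays uniform across hours for exploration;
    the remainder is allocated proportionally to relative yield.
    """
    n = len(hourly_yield)
    total = sum(hourly_yield.values())
    uniform = 1.0 / n
    return {
        hour: floor * uniform + (1 - floor) * (y / total)
        for hour, y in hourly_yield.items()
    }
```

The floor is a deliberate design choice: without it, weak hours get zero traffic and the yield estimates that drive the weights can never recover.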

Life Outside Work

From the road to the mat, the rituals that keep life open.