How to Run a Quasi Geo Lift Test That Actually Proves Incrementality

Want to know if a new channel really moves the needle when you cannot randomize users? Here is a fact that surprises teams: with the right market selection and enough pre-test history, treating a few cities for three weeks can detect a 4 to 5 percent lift, a difference that actually matters to the business.
Here's What You Need to Know
Quasi geo lift testing uses cities or regions as treatment units and builds a synthetic control from the remaining markets. It reads outcomes in business terms, for example rides, sales, or leads per city per day. The core steps: measure, find the lever that matters, run a focused test, read the result, and iterate.
Bottom line, this method gives you causal answers without user level tracking, and it fits staggered rollouts or retroactive audits.
Why This Actually Matters
Here's the thing: platform-level randomized tests are great when you can run them, but they may not fit your channel, timing, or privacy needs. Quasi geo lift fills that gap by letting you:
- Measure incrementality in the same units the business manages, not a platform metric.
- Pick markets based on strategic priorities, not random assignment.
- Run tests with fewer geographies and still get defensible answers if you have stable history.
Market context is the why behind prioritization. If your unit economics show a profit of six euros per ride, knowing whether a channel can deliver incremental rides at a cost per incremental conversion below that profit is what changes budgets and creative briefs.
How to Make This Work for You
Think of this as a short operational playbook, written as if a product manager and a growth lead were in the same room.
1. Assess
- Define the business outcome you will measure, for example daily rides per city. That becomes your Y.
- Confirm you have clean daily data, with date, location, and KPI filled for each cell. Aim for 4 to 5 times the test length in stable pre-treatment history, and ideally 52 weeks of history to capture seasonality. A quick readiness check is sketched below.
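If your panel lives in a dataframe, a readiness check like the minimal sketch below catches gaps before you design anything. The file and column names (rides_panel.csv, date, city, rides) are placeholders for your own schema.

```python
import pandas as pd

# Hypothetical panel: one row per city per day, columns date, city, rides.
panel = pd.read_csv("rides_panel.csv", parse_dates=["date"])

# Completeness: every city should have exactly one row for every date.
expected_cells = panel["date"].nunique() * panel["city"].nunique()
actual_cells = len(panel.drop_duplicates(subset=["date", "city"]))
print(f"Missing city-day cells: {expected_cells - actual_cells}")

# History depth: aim for 4 to 5x the test length, ideally 52 weeks.
history_days = (panel["date"].max() - panel["date"].min()).days + 1
test_days = 21
print(f"{history_days} days of history = {history_days / test_days:.1f}x a {test_days}-day test")
```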
2. Budget to a Minimum Detectable Effect
- Run a power analysis against historical variance to pick a realistic MDE. Example from practice: treating 3 cities for 21 days can detect roughly a 4 to 5 percent uplift, assuming variance similar to other ride-hailing markets. A simulation sketch follows this step.
- Translate the MDE into spend: multiply the incremental conversions implied by the MDE by your expected cost per conversion to get the minimum budget. In one example the minimum spend to detect a 5 percent effect was about €3,038 for three weeks, or about €48.22 per treated city per day.
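Dedicated geo-testing tools run power analysis for you, but the idea is easy to sketch: inject a synthetic lift into historical data, run your readout, and count how often it is detected. The sketch below uses simulated Gaussian history and a plain t-test on daily gaps, so a real panel with trend, seasonality, and autocorrelation will show less power; treat the numbers as illustration, not benchmarks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Placeholder history: 13 cities x 364 days, about 1,000 rides per city per day.
history = rng.normal(loc=1000, scale=60, size=(13, 364))

def power(lift, n_treated=3, test_days=21, alpha=0.05, n_sims=1000):
    """Fraction of simulated tests that detect an injected relative lift."""
    hits = 0
    for _ in range(n_sims):
        start = rng.integers(0, history.shape[1] - test_days)
        window = history[:, start:start + test_days]
        treated = rng.choice(history.shape[0], size=n_treated, replace=False)
        control = np.setdiff1d(np.arange(history.shape[0]), treated)
        # Daily gap: treated-city mean with the lift injected, minus control mean.
        gap = window[treated].mean(axis=0) * (1 + lift) - window[control].mean(axis=0)
        _, p = stats.ttest_1samp(gap, popmean=0.0, alternative="greater")
        hits += p < alpha
    return hits / n_sims

for lift in (0.02, 0.04, 0.05):
    print(f"lift {lift:.0%}: power ~{power(lift):.0%}")
```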
3. Construct
- Choose treated cities by business priority and operational feasibility, then let the synthetic control method pick weighted combinations of the remaining cities that match pre-treatment trends. A minimal sketch of that weight fit follows this step.
- Set operational guardrails, for example fence city boundaries tightly to reduce spillovers, freeze local promotions or mirror them in controls, and keep creatives and bids constant for the window.
- Choose a test window that covers at least one purchase cycle: at least 15 days if you use daily data, or 4 to 6 weeks for weekly data.
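Packages exist that do this end to end, but the core of the synthetic control fit is simple: find non-negative donor weights that sum to one and best reproduce the treated series over the pre-period. A minimal sketch with simulated data, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Placeholder pre-period panel: 180 days x 10 donor (control) cities.
donors = rng.normal(loc=1000, scale=60, size=(180, 10))
# Hypothetical treated series: a mix of two donors plus noise.
treated = 0.6 * donors[:, 0] + 0.4 * donors[:, 3] + rng.normal(0, 15, 180)

n_donors = donors.shape[1]
result = minimize(
    fun=lambda w: np.sum((treated - donors @ w) ** 2),   # pre-period fit error
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,                      # weights stay non-negative
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},  # and sum to one
)
print("Donor weights:", np.round(result.x, 3))
# During the test window, the synthetic control is donors_post @ result.x.
```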
4. Deliver
- Run the test, then report four numbers: ATT in business units per day, total incremental outcomes over the test window, cost per incremental conversion (spend divided by incremental outcomes), and net profit from your unit economics. A worked readout follows this step.
- Always show MDE alongside results so stakeholders know what the test could and could not have detected.
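Once the ATT is estimated, the readout is arithmetic. A sketch with hypothetical numbers, where an ATT of 10 incremental rides per city per day stands in for whatever your model returns:

```python
# All numbers are hypothetical placeholders for your own estimates.
att_per_city_day = 10.0      # incremental rides per treated city per day
n_cities, test_days = 3, 21
spend = 3038.0               # euros over the test window
profit_per_ride = 6.0        # from unit economics

total_incremental = att_per_city_day * n_cities * test_days   # 630 rides
cpic = spend / total_incremental                              # about €4.82
net_profit = total_incremental * profit_per_ride - spend      # about €742

print(f"Total incremental rides: {total_incremental:.0f}")
print(f"CPIC: €{cpic:.2f} vs €{profit_per_ride:.2f} profit per ride")
print(f"Net profit over the window: €{net_profit:.0f}")
```

In this hypothetical the CPIC beats profit per ride, so the channel earns its keep.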
5. Evaluate
- Calibrate your MMM and MTA with the experimental result. Use the experimental ATT as a calibration multiplier so model-guided priorities reflect measured incrementality.
- Replicate positive results in new geographies before broad rollouts. Run placebo tests in time or space to stress test the signal; a placebo-in-time sketch follows this step.
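A placebo-in-time test reruns the same readout on a fake launch date inside the pre-period, where the true effect is zero by construction. A minimal sketch on a simulated treated-minus-synthetic gap series:

```python
import numpy as np

rng = np.random.default_rng(11)

# Placeholder daily gap (treated minus synthetic control), 200 pre-period days.
gap = rng.normal(loc=0.0, scale=8.0, size=200)

fake_start, window = 150, 21
placebo_att = gap[fake_start:fake_start + window].mean()

# Null distribution: the ATT over every earlier 21-day window.
nulls = np.array([gap[i:i + window].mean() for i in range(fake_start - window)])
share_larger = (np.abs(nulls) >= abs(placebo_att)).mean()

print(f"Placebo ATT: {placebo_att:.2f}")
print(f"Share of pre-period windows with larger |ATT|: {share_larger:.0%}")
# An extreme placebo ATT means the pipeline manufactures lift on its own.
```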
Quick Example That Teaches the Pattern
Picture this scenario. You have 13 cities with daily ride panels. You plan a new channel in three cities for 21 days. Historical priors say cost per ride is €6 to €12 and profit per ride is €6. Your power analysis says a three city, 21 day test will detect a 4 to 5 percent lift. The experiment cost for that sensitivity is roughly €3,038 for the window.
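For scale, that budget works out to €3,038 / (3 cities × 21 days) ≈ €48 per treated city per day, matching the per-city figure from the power analysis in step 2.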
Decision pattern to follow
- If observed CPIC is below profit per ride, scale the channel in similar markets slowly and replicate the test.
- If observed lift is smaller than the MDE, label the result inconclusive and either extend the test duration or add treated markets before reallocating budget.
- If lift is statistically compatible with zero but creative resonance seems poor, iterate on creative and rerun a short test rather than reallocating to other channels immediately. The sketch below encodes this pattern.
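To make the rule unambiguous for stakeholders, write it down as code. A hypothetical encoding of the pattern above; the function and thresholds are illustrative, not a standard:

```python
def decide(observed_lift, mde, cpic, profit_per_unit, creative_ok=True):
    """Hypothetical encoding of the decision pattern above."""
    if observed_lift < mde:
        # Below the test's sensitivity: a null here is expected, not evidence of zero.
        return "inconclusive: extend the window or add treated markets"
    if cpic < profit_per_unit:
        return "scale slowly in similar markets and replicate"
    if not creative_ok:
        return "iterate on creative and rerun a short test"
    return "hold: lift is real but does not beat unit economics"

# Example read using the numbers from the scenario above.
print(decide(observed_lift=0.05, mde=0.04, cpic=4.82, profit_per_unit=6.0))
```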
What to Watch For
Metrics that matter and how to read them.
- Average Treatment Effect on the Treated (ATT), expressed in your business unit per day, for example rides per city per day. This is your primary causal read.
- Total Incremental Outcomes, the sum of ATT across treated markets and days. Use this to compute CPIC.
- Cost per Incremental Conversion (CPIC), spend divided by total incremental outcomes. Compare to unit economics to decide whether the channel earns its keep.
- Minimum Detectable Effect (MDE), reported up front. If the true effect is below this threshold, a null result is expected and informative.
- P values and confidence intervals. The P value is a compatibility score between the data and the full statistical model, not proof. A 95 percent confidence interval shows the range of effect sizes compatible with your data given the model, not a probability that the true value is inside the interval.
Here's the thing: treat a small P value as a prompt to check assumptions, not as a final verdict. Placebo tests and replication are cheap sanity checks that pay dividends.
Your Next Move
Do this this week: pick one channel you want to test, select three candidate cities that are operationally clean, and run a power analysis using your historical variance. Translate the MDE into a minimum spend and present that to the business as a test budget with a clear decision rule.
Example ask for stakeholders
- Approve a three city, 21 day pilot with budget of roughly €48 per treated city per day, total about €3,000, conditional on the power analysis that uses our historical daily rides.
Want to Go Deeper?
If you want market context and benchmarks to set realistic MDEs and to translate ATT into allocation choices, AdBuddy can help map test sensitivity to industry benchmarks and unit economics, and provide playbooks that turn your result into model guided priorities and rollout steps.
Bottom line, quasi geo lift gives you faster, cheaper, defensible answers. Measure with market context, pick the lever that matters, run a focused test, and use results to reweight your models and your media mix.
