Atticus Li designed and executed one of the first causal OOH-to-digital measurement frameworks at Silicon Valley Bank, using a 3-market geo-incrementality experiment to isolate billboard impact on digital demand. This framework proved that offline advertising produced measurable, statistically significant lifts in web traffic and paid digital performance.
The Problem Nobody Wanted to Solve
SVB was spending serious money on out-of-home advertising. Billboards in Austin. Taxi cab wraps in Miami. Bus stop displays in Boston. Airport signage in Seattle. The creative looked great. The placements were premium. And nobody could tell you whether any of it worked.
This is the dirty secret of OOH advertising: most companies treat it as brand spend and never try to measure it. The CMO signs off on a budget, the media agency places the buys, someone takes a photo of the billboard for the internal deck, and everyone moves on.
At SVB, I wasn't willing to accept that. Not because I thought OOH was a waste — I actually suspected it was working. But "I think it's working" is not a sentence that survives a budget review when finance is looking for places to cut.
The challenge was real. OOH doesn't have click-through rates. There's no cookie. No UTM parameter on a highway billboard. The standard digital attribution toolchain is completely useless here.
So I designed something different.
Why Geo-Incrementality Is the Right Framework for OOH
When you can't track individual user journeys from a billboard to your website, you have to change the unit of analysis. Instead of tracking people, you track markets.
Geo-incrementality testing works like a clinical trial for advertising. You expose some markets to the treatment (OOH advertising) and keep others as controls. Then you measure the difference in outcomes — web traffic, lead volume, paid media performance — between the two groups.
The logic is simple: if Miami gets billboards and Seattle doesn't, and Miami's web traffic goes up significantly more than Seattle's during the same period, the billboards probably caused it. But the execution is anything but simple.
You need markets that behave similarly before the test starts. You need to account for seasonality. You need to normalize for baseline differences. And you need enough statistical power to detect a real signal through the noise of normal business variation.
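To make the core comparison concrete, here's a minimal sketch in Python. Everything in it is illustrative: the traffic numbers, the dates, and the single test/control pair are stand-ins, not SVB data.

```python
import pandas as pd

# Illustrative daily web-traffic series for one test market and the
# control. Real inputs would come from the analytics warehouse.
traffic = pd.DataFrame({
    "date": pd.date_range("2023-01-01", periods=10, freq="D"),
    "test": [100, 102, 98, 101, 99, 180, 175, 182, 178, 181],
    "control": [95, 97, 96, 94, 98, 96, 95, 97, 96, 98],
})
flight_start = pd.Timestamp("2023-01-06")  # billboards go live

pre = traffic[traffic["date"] < flight_start]
post = traffic[traffic["date"] >= flight_start]

# Normalize by the pre-period ratio so baseline volume differences
# between markets don't masquerade as lift.
baseline_ratio = pre["test"].mean() / pre["control"].mean()
expected = post["control"].mean() * baseline_ratio
lift = post["test"].mean() / expected - 1
print(f"Incremental lift in test market: {lift:+.1%}")
```

A real test runs for weeks per period and puts a significance test on the daily deltas, but the skeleton is the same.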
This is the kind of measurement problem I find genuinely interesting — the kind where getting the design right matters more than having sophisticated tools. I've written about the difference between tool proficiency and analytical capability in my piece on enterprise analytics at SVB and NRG, and this project was a perfect example.
The Experiment Design
I structured the test around three markets:
- Austin — test market (billboards active)
- Miami — test market (billboards active)
- Seattle — control market (no OOH spend)
Why These Markets
Market selection is the most important decision in a geo-test and the one that gets the least attention. You can't just pick cities at random. They need to be structurally similar enough that differences in outcomes are attributable to the treatment, not to underlying market dynamics.
I selected Austin and Miami as test markets because they had comparable baseline web traffic patterns, similar seasonal trends in SVB's core banking products, and both were active OOH markets where we could place media efficiently. Seattle served as the control because it shared traffic profile characteristics with the test markets but had no planned OOH activity during the test window.
Before running anything, I pulled 12 months of pre-test data to verify that all three markets tracked together. If Miami's traffic was already diverging from Seattle before the billboards went up, the test would be meaningless. The pre-test normalization confirmed the markets were comparable.
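The comparability check itself is simple to express. Here's a sketch, with synthetic data standing in for the 12 months of pre-test traffic:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for 12 months of pre-test daily traffic; the real
# series came from web analytics. Shared seasonality plus market noise.
rng = np.random.default_rng(0)
days = pd.date_range("2022-01-01", periods=365, freq="D")
season = 1 + 0.2 * np.sin(np.arange(365) * 2 * np.pi / 365)
pre = pd.DataFrame({
    "austin": 1000 * season * rng.normal(1, 0.05, size=365),
    "miami": 1400 * season * rng.normal(1, 0.05, size=365),
    "seattle": 900 * season * rng.normal(1, 0.05, size=365),
}, index=days)

# Index each market to its own mean so we compare shape, not volume.
indexed = pre / pre.mean()

# High pairwise correlations mean the markets track together and the
# control is credible; a drifting test/control ratio would disqualify it.
print(indexed.corr().round(3))
print((indexed["miami"] / indexed["seattle"]).rolling(30).mean().describe())
```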
Handling Seasonality and Noise
Financial services traffic isn't steady. It fluctuates with earnings seasons, market events, Fed announcements, and startup funding cycles. A naive pre/post comparison would be contaminated by these factors.
I built seasonality adjustments into the baseline using the prior year's traffic patterns to create expected values for each market during the test window. The lift calculation compared actual performance against the seasonally adjusted baseline, not against raw pre-period numbers.
I also established matched market baselines — normalizing each test market's performance against the control's performance over the same period. This controls for any macro factors (economic news, competitor activity, industry trends) that affected all markets simultaneously.
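Combining the two adjustments gives a lift formula along these lines. This is a simplified, window-total version of the approach described above; the inputs are hypothetical, and the production model worked at daily granularity.

```python
def ooh_lift(test_now: float, test_prior: float,
             ctrl_now: float, ctrl_prior: float) -> float:
    """Seasonally adjusted, control-normalized lift for one test market.

    Each argument is total web traffic over the same calendar window:
    the flight period this year and the equivalent window a year prior,
    for the test market and the control.
    """
    # The control's year-over-year change captures macro factors
    # (market events, Fed announcements, funding cycles) that hit
    # every market at once.
    macro_factor = ctrl_now / ctrl_prior
    # Expected traffic = last year's same-season traffic in the test
    # market, scaled by the macro factor. Anything above is incremental.
    expected = test_prior * macro_factor
    return test_now / expected - 1

# Hypothetical window totals, not SVB figures:
print(f"{ooh_lift(150_000, 95_000, 92_000, 88_000):+.1%}")
```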
The Results
The numbers weren't subtle.
Miami showed a +97.8% web traffic lift compared to the Seattle control during the OOH flight period.
Austin showed a +94.4% web traffic lift compared to the Seattle control during the same period.
These weren't marginal gains that required squinting at confidence intervals. The OOH campaigns nearly doubled web traffic in both test markets relative to the control. And the consistency across two independent test markets made the finding more robust — it's much harder to explain away a result that replicates in two different cities.
Building the Offline-to-Online Attribution Pipeline
The geo-incrementality test answered the big question: does OOH work? But the leadership team had a follow-up: can you show us specific leads that came from a billboard?
This is a harder problem, but not an impossible one.
I built SVB's first offline-to-online attribution pipeline using QR-enabled billboard tracking. Each OOH placement carried a unique QR code that routed through a dedicated landing page. That landing page was instrumented to capture the source placement, tie the visit to a CRM contact record, and track the lead through SVB's pipeline stages.
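A stripped-down sketch of what that capture layer looks like. The placement IDs, CRM fields, and function names here are hypothetical; the real pipeline ran through SVB's CRM and landing-page stack.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical registry mapping each QR code's short ID to its
# physical OOH placement.
PLACEMENTS = {
    "atx-i35-north": {"market": "Austin", "format": "billboard"},
    "mia-taxi-042": {"market": "Miami", "format": "taxi wrap"},
}

@dataclass
class OOHLead:
    placement_id: str
    crm_contact_id: str
    scanned_at: datetime
    pipeline_stage: str = "new"

def handle_qr_scan(placement_id: str, crm_contact_id: str) -> OOHLead:
    """Called when a visit from a QR landing page converts to a lead.

    Ties the visit back to its source placement and to a CRM contact,
    so the lead can be followed through pipeline stages.
    """
    if placement_id not in PLACEMENTS:
        raise ValueError(f"unknown placement: {placement_id}")
    return OOHLead(placement_id, crm_contact_id,
                   datetime.now(timezone.utc))

lead = handle_qr_scan("mia-taxi-042", crm_contact_id="C-10482")
print(PLACEMENTS[lead.placement_id]["market"], lead.pipeline_stage)
```

The real pipeline also pulled deal-stage updates back from the CRM; the sketch just shows the capture and join.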
This was the first time in SVB's history that we had directly attributable OOH-sourced sales leads. Not modeled. Not estimated. Actual people who scanned a QR code on a billboard, landed on our site, entered the CRM, and progressed through deal stages.
Were QR scans the majority of OOH-driven traffic? No. Most people who see a billboard don't scan a QR code — they Google the company later, or they become slightly more receptive to the next LinkedIn ad they see. The geo-incrementality test captured that broader effect. The QR pipeline captured the direct response component.
Together, they gave us a complete picture: the geo-test showed the aggregate market-level impact, and the QR pipeline provided individual lead-level proof points for sales and executive audiences who needed to see named accounts.
The Halo Effect on Paid Digital
One of the most interesting findings wasn't about OOH in isolation — it was about what OOH did to our other channels.
In the test markets, paid digital performance improved materially. LinkedIn ad response rates went up. Paid social engagement increased. The effect was statistically significant and consistent across both Austin and Miami.
This is the halo effect that OOH advocates talk about but rarely prove. When people see your brand on a billboard and then see your LinkedIn ad an hour later, they're more likely to engage. The billboard creates familiarity. The digital ad converts it.
The important thing is that we didn't just assert this. We measured it. By comparing paid digital performance in OOH markets versus the control market during the same time period, we isolated the OOH contribution to digital channel performance. This is the kind of cross-channel measurement that my PRISM Method is built to address — understanding how channels interact, not just how they perform in isolation.
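In sketch form, that isolation is a difference-in-differences: compare each test market's change in a paid digital metric against the control's change over the same window. The engagement rates below are made up for illustration.

```python
# Pre-flight and in-flight paid-digital engagement rates per market.
# Illustrative values only, not SVB data.
rates = {
    "austin":  (0.021, 0.029),
    "miami":   (0.019, 0.027),
    "seattle": (0.020, 0.021),  # control
}

# The control's change captures whatever moved engagement everywhere.
ctrl_change = rates["seattle"][1] - rates["seattle"][0]
for market in ("austin", "miami"):
    change = rates[market][1] - rates[market][0]
    halo = change - ctrl_change  # OOH's contribution to the change
    print(f"{market}: halo on paid engagement rate = {halo:+.2%}")
```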
From Measurement to Budget Reallocation
Data without decisions is just decoration. The real value of this work wasn't the charts — it was the budget conversation it enabled.
Before the geo-test, OOH budget discussions were based on brand proxy metrics. Estimated impressions. Reach and frequency models from the media agency. CPM comparisons against other "awareness" channels. These metrics aren't useless, but they don't answer the question finance actually cares about: is this spend generating demand?
After the geo-test, we had a different conversation. I built executive-ready incrementality models that translated the geo-test results into projected ROI by market. We could show the cost of OOH in a given market, the incremental web traffic it generated, the lead conversion rate from that traffic, and the estimated pipeline value.
This shifted leadership from evaluating OOH as a brand line item to evaluating it as a performance channel with measurable returns. Budget reallocation followed — not because I lobbied for more OOH spend, but because the data made the case for optimizing the channel mix based on actual incremental impact rather than media agency recommendations.
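The arithmetic behind those models is deliberately simple. A hypothetical single-market version (none of these inputs are SVB figures):

```python
def projected_ooh_roi(ooh_cost: float, incremental_visits: float,
                      visit_to_lead_rate: float,
                      avg_pipeline_value_per_lead: float) -> float:
    """Projected pipeline ROI for OOH spend in one market.

    incremental_visits comes straight from the geo-test lift; the
    conversion rate and pipeline value come from CRM data.
    """
    pipeline_value = (incremental_visits * visit_to_lead_rate
                      * avg_pipeline_value_per_lead)
    return pipeline_value / ooh_cost - 1

# Hypothetical inputs for one market:
print(f"{projected_ooh_roi(250_000, 90_000, 0.015, 400):+.0%}")
```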
What Most Companies Get Wrong About OOH Measurement
Having gone through this process, I see three mistakes companies make consistently:
Mistake 1: They don't try to measure it at all. OOH gets treated as unmeasurable brand spend, and the budget survives or dies based on executive opinion rather than evidence. This is the most common failure mode and the easiest to fix — geo-incrementality testing isn't new technology. It's available to any company willing to design the experiment properly.
Mistake 2: They only measure direct response. QR codes and vanity URLs are useful but they capture a tiny fraction of OOH's impact. If you only measure scans, you'll conclude that billboards don't work — when in reality, the billboard's primary effect is on branded search, direct traffic, and paid media lift. You need both the market-level test and the direct response tracking.
Mistake 3: They run the test badly. Poor market selection. No pre-test normalization. No seasonality adjustment. No control market at all. I've seen "geo-tests" where the test and control markets were so different that the results were meaningless regardless of what happened. The design rigor matters as much as the execution.
Why This Matters Beyond OOH
The geo-incrementality framework I built at SVB wasn't just useful for billboards. It's the right approach for any channel where individual-level tracking isn't possible or isn't reliable.
TV advertising. Podcast sponsorships. Conference sponsorships. PR campaigns. Brand partnerships. Any marketing activity whose impact is diffuse and indirect can benefit from this measurement approach.
The methodology is the same: select comparable markets, establish baselines, apply the treatment to test markets, measure the differential outcome, and adjust for confounders. What changes is the specific channel being tested and the outcome metrics being measured.
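Reduced to a skeleton, the reusable part looks something like this (field names are illustrative; the expected baseline comes from the `ooh_lift`-style adjustment sketched earlier):

```python
from dataclasses import dataclass

@dataclass
class GeoIncrementalityTest:
    """Channel-agnostic frame: the unit of analysis is a market.

    Swap the channel (OOH, TV, podcasts, sponsorships) and the outcome
    metric; the design stays the same.
    """
    test_markets: list[str]
    control_market: str
    outcome_metric: str  # e.g. "web_sessions", "branded_search_volume"

    def lift(self, actual: float, expected: float) -> float:
        # expected = seasonally adjusted, control-normalized baseline.
        return actual / expected - 1

test = GeoIncrementalityTest(["Austin", "Miami"], "Seattle", "web_sessions")
print(f"{test.lift(actual=150_000, expected=99_000):+.1%}")
```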
This is also why I built the measurement capabilities at SVB the way I did — not as one-off analyses, but as repeatable frameworks. When I moved to NRG and started building the experimentation program there, the same measurement principles applied even though the industry, the channels, and the tools were completely different. Good analytical frameworks transfer across contexts.
The Bigger Lesson
Marketing measurement isn't about having the perfect tool or the perfect data. It's about asking clear causal questions and designing studies that can answer them.
The SVB geo-incrementality experiment wasn't sophisticated by academic standards. It was a straightforward test/control design with appropriate normalization. But it answered a question that SVB had never been able to answer before, and it changed how leadership thought about a significant line item in the marketing budget.
That's the standard I hold every measurement project to: did it change a decision? If the analysis is methodologically beautiful but doesn't influence how the company allocates resources, it's an academic exercise, not applied analytics.
The OOH experiment at SVB changed a decision. And the framework I built to run it became a template for every offline channel measurement question that followed.
Have a question about measuring offline advertising impact or designing geo-incrementality tests? Reach me at [email protected].