CAUSAL IDENTIFICATION UNDER CONFOUNDING IN DIGITAL ADVERTISING: FROM PEARL’S ADJUSTMENT FORMULA TO OPERATIONAL LIFT STUDIES

27.02.2026 00:41

[2. Economic Sciences]

Author: Igor Ivitskiy, PhD, Associate Professor, Doctor Ads, London



Digital advertising generates observational data at scale, yet the core economic question facing advertisers is causal: does a given expenditure produce incremental revenue? The distinction carries direct consequences for resource allocation. When an advertiser observes that a campaign correlates with higher conversion rates, the naive conclusion is that the campaign caused the uplift. This inference is unreliable whenever unobserved confounders simultaneously influence both the treatment (advertising exposure) and the outcome (purchase behaviour). Seasonal demand, competitor pricing, and platform algorithmic shifts are among the most common confounding factors [1]. Misattributed causality leads to systematic misallocation of budgets, producing deadweight losses that compound across successive budget cycles [2].

The formal apparatus for separating correlation from causation originates in structural causal models. Consider an advertising treatment X, an outcome Y, and confounders Z that affect both. The observed conditional distribution follows the law of total probability: P(Y | X) = Σ_z P(Y | X, Z=z) P(Z=z | X). This expression captures what one sees in data. The causal effect, however, requires intervening on X. Pearl’s adjustment formula [3] provides the identification result: P(Y | do(X)) = Σ_z P(Y | X, Z=z) P(Z=z). The critical operation is replacing the confounded term P(Z | X) with the marginal P(Z). When confounders correlate with the treatment, as they almost invariably do in non-experimental advertising settings, these two distributions diverge, and decisions based on the observational distribution are systematically biased [2].
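The divergence between the two distributions can be checked numerically. The sketch below builds a synthetic discrete example (all rates and the confounding structure are illustrative assumptions, not estimates from any real campaign) and compares the observational contrast P(Y | X) with the adjusted estimate from Pearl’s formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: binary confounder Z (e.g. high-demand season),
# binary treatment X (ad exposure), binary outcome Y (conversion).
n = 1_000_000
z = rng.binomial(1, 0.3, n)                      # P(Z=1) = 0.3
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # Z drives exposure: P(Z|X) != P(Z)
# True causal effect of X on Y is +0.10; Z independently adds +0.20.
y = rng.binomial(1, 0.1 + 0.1 * x + 0.2 * z)

# Observational contrast: P(Y=1 | X=1) - P(Y=1 | X=0)
obs = y[x == 1].mean() - y[x == 0].mean()

# Adjustment formula: P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
def adjusted(x_val):
    return sum(y[(x == x_val) & (z == zv)].mean() * (z == zv).mean()
               for zv in (0, 1))

causal = adjusted(1) - adjusted(0)
print(f"observational lift: {obs:.3f}")    # inflated by confounding, ~0.21
print(f"adjusted lift:      {causal:.3f}") # recovers the true effect, ~0.10
```

With Z observed, the adjustment recovers the designed +0.10 effect, while the naive contrast roughly doubles it, which mirrors the overestimation discussed below.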

An alternative identification strategy draws on the potential outcomes framework [4]. For each unit i, two potential outcomes exist: Yᵢ(1) under treatment and Yᵢ(0) under control. The individual causal effect τᵢ = Yᵢ(1) − Yᵢ(0) is fundamentally unobservable, since each unit is realized under only one condition. Random assignment resolves this by balancing treatment and control groups across all confounders, enabling estimation of the Average Treatment Effect: ATE = E[Y(1)] − E[Y(0)]. In operational practice, this logic is implemented through lift studies: a randomly selected holdout group is withheld from advertising exposure, and the conversion rate difference estimates the incremental effect, isolated from confounders [5].




Figure 1. Left: confounded relationship between spend X and outcome Y via latent Z. Right: deconfounded causal relationship after randomization blocks confounding paths. Solid arrow represents isolated causal effect.

Two binding constraints limit lift study design. First, the holdout group imposes an opportunity cost proportional to its size and the true campaign effect; advertisers with thin margins resist diverting spend. Second, statistical power requirements impose minimum sample sizes governed by n = (z_{α/2} + z_β)² · 2σ² / d² per group, where σ² is the outcome variance and d is the minimum detectable effect [6]. In categories with low conversion rates, required sample sizes often exceed available traffic within a realistic measurement window. Nevertheless, even infrequent lift studies provide calibration benchmarks for less costly observational methods, anchoring model-based estimates to experimentally validated ground truth [7].
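The power formula above is straightforward to evaluate with the standard library; the illustrative inputs (2% base conversion rate, 0.2 percentage-point minimum detectable effect, α = 0.05, 80% power) are assumptions chosen to show how quickly required traffic grows in low-conversion categories:

```python
from statistics import NormalDist

def required_n_per_group(p0, mde, alpha=0.05, power=0.8):
    """Per-group sample size from n = (z_{α/2} + z_β)² · 2σ² / d²,
    with the Bernoulli variance approximation σ² ≈ p0 · (1 - p0)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    var = p0 * (1 - p0)
    return (z_a + z_b) ** 2 * 2 * var / mde ** 2

# 2% base conversion rate, 0.2 pp minimum detectable effect
print(round(required_n_per_group(0.02, 0.002)))  # ≈ 76,919 per group
```

Halving the detectable effect d quadruples the required sample, which is why a realistic measurement window so often becomes the binding constraint.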

Even when experimental designs are feasible, interpretation is subject to cognitive biases. Confirmation bias, post-hoc rationalization, and selective reporting are well-documented failure modes [8]. Structured countermeasures, collectively termed cognitive hygiene, include pre-registration of hypotheses before data examination, adversarial review by an independent analyst, and ensemble interpretation across multiple model specifications [2]. These procedures impose discipline that narrows the range of permissible interpretations and reduces the probability of acting on spurious findings.

From an economic efficiency standpoint, the gap between observational and causal estimates is quantitatively significant. Platform-reported correlational data may overestimate incremental impact by 30–60%, depending on confounding structure and the degree of observational collapse induced by privacy regulations [9]. The resulting misallocation is not random but systematically biased toward channels where organic demand and paid exposure overlap. Causal identification, whether through the adjustment formula or randomized lift studies, provides the corrective mechanism and a structural prerequisite for allocative efficiency in advertising markets under declining observability [2, 7].

References

1. Varian H. R. Causal inference in economics and marketing. Proceedings of the National Academy of Sciences. 2016. Vol. 113, No. 27. P. 7310–7315. DOI: 10.1073/pnas.1510479113.

2. Ivitskiy I. The M.A.T.H. Framework: A First-Principles Approach to Quant Marketing in High-Uncertainty Environments. Zenodo. 2026. DOI: 10.5281/zenodo.18552246.

3. Pearl J. Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge: Cambridge University Press, 2009. 484 p.

4. Rubin D. B. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association. 2005. Vol. 100, No. 469. P. 322–331. DOI: 10.1198/016214504000001880.

5. Ivitskiy I. Incrementality at scale: a blueprint for corporate holdouts, geo-experiments, and platform leakage control [Electronic resource]. Молодий вчений. 2025. URL: https://molodyivchenyi.ua/omp/index.php/conference/catalog/view/149/2415/5025-1  (accessed: 19.02.2026).

6. Imbens G. W., Rubin D. B. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge: Cambridge University Press, 2015. 625 p.

7. Ivitskiy I. Calibrate or crash: a guide to using incrementality tests to stop model drift in MMM and MTA [Electronic resource]. Research Europe. 2026. URL: https://researcheurope.org/wp-content/uploads/2026/01/re-17.01.2026-59-62.pdf  (accessed: 19.02.2026).

8. Nickerson R. S. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology. 1998. Vol. 2, No. 2. P. 175–220. DOI: 10.1037/1089-2680.2.2.175.

9. Ivitskiy I., Savchenko D., Sydorenko D. Data Integrity as the Terminal Constraint in AI-Driven Advertising: An Information-Theoretic Analysis. Zenodo. 2026. DOI: 10.5281/zenodo.18675362.



This work is licensed under a Creative Commons Attribution 4.0 International License.