
Beyond Last-Click: An Optimal Mechanism for Ad Attribution

Conference: NeurIPS 2025 | arXiv: 2511.22918 | Code: To be confirmed | Area: Ad Attribution / Mechanism Design | Keywords: Last-Click, Strategic Manipulation, Dominant Strategy Incentive Compatibility, Peer-Validated, Ad Platforms

TL;DR

This paper analyzes the strategic manipulation vulnerabilities of the Last-Click attribution mechanism from a game-theoretic perspective—platforms can obtain unfair attribution credit by falsifying timestamps—and proposes the Peer-Validated Mechanism (PVM), in which each platform's credit depends solely on the reports of other platforms (analogous to peer review). The paper theoretically proves that PVM is dominant strategy incentive compatible (DSIC) and optimal under homogeneous settings, improving attribution accuracy from 34% to 75% in the two-platform case.

Background & Motivation

Background: Ad attribution determines "which platform's advertisement caused a user conversion." The industry standard is Last-Click attribution, which assigns conversion credit to the platform whose ad was clicked last by the user.

Limitations of Prior Work: In redirect-less tracking, platforms can strategically falsify interaction timestamps—reporting their own click time as later in order to claim Last-Click credit. Existing systems lack defenses against such manipulation.

Key Challenge: The Last-Click mechanism incentivizes platforms to report false (later) timestamps—the later the reported time, the more likely a platform is to receive attribution credit, causing all platforms to tend toward misreporting.

Goal: Design an attribution mechanism that eliminates platforms' incentive to falsify timestamps—i.e., one that is strategically safe (DSIC).

Key Insight: Peer validation—each platform's attribution credit does not depend on its own report, but only on the reports of other platforms. This is analogous to peer review, where a paper's score is not determined by its authors.

Core Idea: Each platform's credit = whether other platforms' reports validate that platform's interaction → misreporting one's own data yields no benefit = DSIC.

Method

Overall Architecture

\(n\) platforms each report user interaction timestamps → PVM: platform \(i\)'s credit = \(\mathbb{I}[\max_{j \neq i}\{t_j\} \leq \alpha_i]\), where the validation threshold \(\alpha_i\) satisfies \(\prod_{j \neq i} F_j(\alpha_i) = \beta_i\) → truthful reporting is a dominant strategy.
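The allocation rule above can be sketched directly in code. The snippet below is illustrative only: it assumes Uniform(0,1) click-time CDFs and a quota \(\beta_i = 1/2\) (the paper's \(F_j\) and \(\beta_i\) are general), and solves for \(\alpha_i\) by bisection, which works because \(\prod_{j \neq i} F_j(\alpha)\) is increasing in \(\alpha\):

```python
from math import prod

def solve_alpha(cdfs_others, beta, lo=0.0, hi=1.0, iters=60):
    """Bisection for the validation threshold alpha_i satisfying
    prod_{j != i} F_j(alpha_i) = beta_i (the product is increasing in alpha)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if prod(F(mid) for F in cdfs_others) < beta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pvm_credit(i, reports, cdfs, beta_i):
    """Platform i's credit: 1 iff every *other* platform's reported
    timestamp is <= alpha_i. Note that reports[i] never appears."""
    others = [t for j, t in enumerate(reports) if j != i]
    cdfs_others = [F for j, F in enumerate(cdfs) if j != i]
    alpha_i = solve_alpha(cdfs_others, beta_i)
    return int(max(others) <= alpha_i)

# Two Uniform(0,1) platforms with beta = 1/2 give alpha = 1/2.
uniform = lambda t: min(max(t, 0.0), 1.0)
print(pvm_credit(0, [0.9, 0.3], [uniform, uniform], 0.5))  # other report 0.3 <= 0.5 -> 1
```

Note that `reports[i]` never enters `pvm_credit(i, ...)`: misreporting one's own timestamp cannot change one's own credit, which is exactly the DSIC property.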

Key Designs

  1. Peer-Validated Mechanism (PVM):

    • Function: Eliminates platforms' incentive to manipulate their own reports.
    • Mechanism: Platform \(i\)'s attribution credit depends solely on whether the maximum timestamp reported by all other platforms does not exceed \(\alpha_i\). In essence, \(i\)'s credit is "validated" by peers—if other platforms' timestamps are all early (indicating that \(i\) was indeed the last click), then \(i\) receives credit.
    • Design Motivation: Key insight—platform \(i\)'s own report does not appear in its credit formula → manipulating one's own report is entirely futile → DSIC.
  2. Proof of DSIC Optimality (Theorem 5):

    • Function: Proves that PVM is the optimal DSIC mechanism in the homogeneous setting.
    • Mechanism: Among all attribution mechanisms satisfying DSIC, PVM achieves the highest attribution accuracy.
    • Design Motivation: PVM is not only strategically safe but also maximally accurate.
  3. Accuracy Analysis:

    • Function: Quantifies attribution accuracy of PVM vs. LCM.
    • Mechanism: Two-platform homogeneous setting—LCM accuracy \((2-\sqrt{2})^2 \approx 34.3\%\) (equilibrium under manipulation); PVM accuracy 75%; three-platform: PVM 61.5% vs. LCM 33.4%.
    • Design Motivation: Doubling accuracy demonstrates the substantial value of manipulation resistance.

Loss & Training

  • Game-theoretic analysis; no training involved.
  • Utility function \(\mathcal{U}_i = \mathbb{E}[f_i(\text{credit})]\).
  • Real-world experiment: actual ad data from 4 platforms.

Key Experimental Results

Main Results

| Setting | PVM Accuracy | LCM Accuracy | Gain |
| --- | --- | --- | --- |
| 2-platform homogeneous | 75% | 34.3% | +40.7 pp |
| 3-platform homogeneous | 61.5% | 33.4% | +28.1 pp |
| 2-platform heterogeneous | 70.4% | n/a | n/a |
| Real 4-platform | n/a | n/a | +4–12% vs. baseline |

Key Findings

  • LCM achieves only 34.3% accuracy in the two-platform game—meaning two-thirds of attributions are incorrect.
  • PVM doubles accuracy—from 34% to 75%—representing a substantial fairness improvement.
  • Accuracy decreases as the number of platforms grows (61.5% for \(n=3\))—more platforms make peer validation harder.
  • In real-data experiments, PVM consistently outperforms LCM by 4–12%.

Highlights & Insights

  • The peer-review analogy is apt: just as a paper's score is not determined by its authors, a platform's attribution credit is not determined by the platform itself.
  • LCM's 34% accuracy is striking: the industry-standard attribution method performs near-randomly under strategic manipulation.
  • DSIC optimality guarantees that, in the homogeneous setting analyzed, no manipulation-resistant mechanism can be more accurate: PVM is the theoretical upper bound among DSIC mechanisms.

Limitations & Future Work

  • Focuses on the Last-Click paradigm—Multi-Touch attribution is not considered.
  • Assumes independent click-time distributions—correlations may exist in practice.
  • Accuracy continues to decline as the number of platforms increases—scalability in large-scale settings remains a concern.
Comparison with Related Mechanisms

  • vs. LCM: LCM performs near-randomly under strategic manipulation; PVM doubles accuracy.
  • vs. VCG mechanism: VCG requires monetary transfers; PVM involves only credit allocation.

Rating

  • Novelty: ⭐⭐⭐⭐⭐ First systematic game-theoretic analysis of strategic manipulation in ad attribution.
  • Experimental Thoroughness: ⭐⭐⭐⭐ Theoretical proofs + simulations + real-world data.
  • Writing Quality: ⭐⭐⭐⭐⭐ Theorems are clearly stated with well-grounded intuitive explanations.
  • Value: ⭐⭐⭐⭐⭐ Direct implications for the digital advertising industry.

Supplementary Method Notes

  • Derivation of LCM manipulation equilibrium: In a symmetric two-platform game, each platform's optimal strategy is to report the timestamp as \(F^{-1}(1-\sqrt{1-F(t)})\)—always reporting later. The equilibrium accuracy \((2-\sqrt{2})^2 \approx 34.3\%\) theoretically quantifies the harm of manipulation.
  • Computing PVM's validation threshold \(\alpha_i\): Determined via \(\prod_{j \neq i} F_j(\alpha_i) = \beta_i\), where \(\beta_i\) reflects the attribution probability owed to platform \(i\). The threshold is independent of \(i\)'s own report.
  • Implications of DSIC: Regardless of how other platforms report (truthfully or not), platform \(i\)'s dominant strategy is always truthful reporting—a stronger strategic safety guarantee than Nash equilibrium.
  • Accuracy decline with more platforms: \(n=2\): 75% → \(n=3\): 61.5%—because more platforms cause peer validation information to become more dispersed.
  • Fairness improvement in real-world experiments: In the four-platform experiment, fairness (Gini coefficient) improves by ~1.3%—PVM is not only more accurate but also more equitable.
  • Comparison with Shapley attribution: Shapley is an ex-post allocation method that does not prevent manipulation; PVM is an ex-ante mechanism design approach that does—the two serve complementary objectives.
  • Practical impact on the advertising industry: Hundreds of billions of dollars in ad spending depend on attribution—a 34% accuracy rate implies that two-thirds of ad expenditure is misallocated.
  • Generalizability of the mechanism design principle: The peer-validation idea underlying PVM is transferable to other multi-party reporting scenarios, such as supply chain traceability and academic contribution assessment.
  • Deployment pathway: No changes to existing ad-tech infrastructure are required—only the attribution logic needs to be modified, making deployment costs extremely low.
  • Compatibility with privacy protection: PVM does not require sharing raw user data, only aggregated timestamps, and is thus compliant with privacy regulations such as GDPR.
  • Open problem: How non-independent click-time distributions affect the optimality of PVM remains to be investigated.
  • Future work: Extend to Multi-Touch attribution models and handle non-independent click-time distributions.
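The threshold construction and the accuracy decline noted above can be sanity-checked by simulation. The sketch below again assumes i.i.d. Uniform(0,1) click times and equal quotas \(\beta_i = 1/n\) (illustrative assumptions, with accuracy read as the probability that the true last-click platform is peer-validated):

```python
import random

def simulate_pvm_accuracy(n, trials=200_000, seed=0):
    """Monte Carlo estimate of P(true last-click platform is validated)
    under i.i.d. Uniform(0,1) click times, with the homogeneous PVM
    threshold alpha solving F(alpha)^(n-1) = alpha^(n-1) = 1/n."""
    rng = random.Random(seed)
    alpha = (1.0 / n) ** (1.0 / (n - 1))
    hits = 0
    for _ in range(trials):
        clicks = [rng.random() for _ in range(n)]
        winner = max(range(n), key=clicks.__getitem__)
        # The winner is credited iff every *other* click is <= alpha.
        if all(clicks[j] <= alpha for j in range(n) if j != winner):
            hits += 1
    return hits / trials

for n in (2, 3, 4, 5):
    print(n, round(simulate_pvm_accuracy(n), 3))  # accuracy declines with n
```

Under these assumptions the estimates land near 0.75 and 0.615 for \(n = 2\) and \(n = 3\), matching the summary's figures, and keep falling as \(n\) grows, consistent with the dispersion argument above.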