
Debating the Unspoken: Role-Anchored Multi-Agent Reasoning for Half-Truth Detection

Conference: ACL 2026
arXiv: 2604.19005
Code: https://github.com/tangyixuan/RADAR
Area: Fact Verification / Misinformation Detection
Keywords: Half-Truth Detection, Multi-Agent Debate, Omission Reasoning, Role Anchoring, Adaptive Termination

TL;DR

RADAR uses role-anchored (politician vs scientist) multi-agent debate to detect half-truths — statements that are factually correct but misleading due to omitted context — with dual-threshold adaptive early stopping, consistently outperforming single-agent and traditional multi-agent baselines under noisy retrieval conditions.

Method

Key Designs

  1. Role-Anchored Debate Protocol: The politician agent constructs the most persuasive supporting narrative (confirmatory reasoning); the scientist agent probes for missing, weak, or selectively presented information (analytical reasoning). The contrast between the two naturally models how half-truths are created and how they can be detected.

  2. Dual-Threshold Adaptive Early Stopping: The debate terminates only when both the stop margin \(s \geq \tau_s\) and the maximum label confidence \(c \geq \tau_v\) are met, preventing premature stopping on uncertain cases (see the sketch after this list).

  3. Retrieval-Anchored Evidence Sharing: All agents share the same evidence pool, grounding arguments in retrieved evidence rather than parametric knowledge.
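
A minimal Python sketch of how these three designs could fit together. The role prompts, the judge interface, and the definition of the stop margin as the gap between the top two label scores are my assumptions for illustration, not details taken from the paper; the threshold values are placeholders.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# label -> confidence score, e.g. {"true": 0.2, "half-true": 0.6, "false": 0.2}
Labels = Dict[str, float]

@dataclass
class DebateState:
    claim: str
    evidence: List[str]                          # shared, retrieval-anchored evidence pool
    transcript: List[str] = field(default_factory=list)

def run_radar_debate(
    state: DebateState,
    politician: Callable[[DebateState], str],    # builds the most persuasive supporting narrative
    scientist: Callable[[DebateState], str],     # probes for omitted, weak, or selective evidence
    judge: Callable[[DebateState], Labels],      # scores verdict labels after each round
    tau_s: float = 0.3,                          # stop-margin threshold (illustrative value)
    tau_v: float = 0.7,                          # confidence threshold (illustrative value)
    max_rounds: int = 5,
) -> str:
    """Role-anchored debate with dual-threshold adaptive early stopping."""
    for _ in range(max_rounds):
        # Both agents argue over the SAME evidence pool (retrieval anchoring).
        state.transcript.append("POLITICIAN: " + politician(state))
        state.transcript.append("SCIENTIST: " + scientist(state))

        scores = judge(state)
        ranked = sorted(scores.values(), reverse=True)
        c = ranked[0]                                              # maximum label confidence
        s = ranked[0] - (ranked[1] if len(ranked) > 1 else 0.0)    # stop margin (assumed: top-2 gap)

        # Dual-threshold rule: stop only when BOTH the margin and the confidence are high enough.
        if s >= tau_s and c >= tau_v:
            break

    final = judge(state)
    return max(final, key=final.get)
```

In use, `politician`, `scientist`, and `judge` would wrap LLM calls whose prompts encode the respective roles; the sketch only shows the orchestration and the stopping rule.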

Key Experimental Results

| Method      | Accuracy | Macro F1 | Half-True F1 |
| ----------- | -------- | -------- | ------------ |
| D2D (MAD)   | 63.0     | 50.9     | 39.7         |
| RADAR_multi | 77.7     | 63.3     | 56.5         |

Highlights & Insights

  • The "politician-scientist" role metaphor is ingenious — half-truths are common in political discourse, and using agents that model this discourse strategy to detect them creates a "fighting fire with fire" design philosophy
  • Paradigm shift from "finding contradictions" to "discovering omissions" opens new directions for fact verification

Rating

  • Novelty: ⭐⭐⭐⭐⭐
  • Experimental Thoroughness: ⭐⭐⭐⭐
  • Writing Quality: ⭐⭐⭐⭐⭐
  • Value: ⭐⭐⭐⭐⭐