
CounterRefine: Answer-Conditioned Counterevidence Retrieval for Inference-Time Knowledge Repair in Factual Question Answering

Conference: ACL 2026 arXiv: 2603.16091 Code: None Area: Information Retrieval / Question Answering Keywords: inference-time repair, counterevidence retrieval, answer conditioning, factual QA, RAG augmentation

TL;DR

This paper proposes CounterRefine, a lightweight inference-time repair layer for factual QA. A standard RAG pipeline first generates a preliminary answer; that answer then conditions a counterevidence retrieval step that collects both supporting and contradicting evidence, and a constrained KEEP/REVISE decision gate with deterministic validation decides whether to correct it. The method improves GPT-5's accuracy on SimpleQA from 67.3% to 73.1%.

Background & Motivation

Background: Retrieval-Augmented Generation (RAG), which grounds language model generation in external evidence, has become the standard approach for knowledge-intensive NLP. Variants such as multi-round retrieval and query rewriting have further improved retrieval quality.

Limitations of Prior Work: Many factual errors are not retrieval failures but rather "commitment failures"—the system retrieves relevant evidence yet remains locked onto the wrong answer. In short-answer factual QA, such errors are total: an incorrect year, a near-miss entity, or an almost-correct title all score as complete failures under exact-match evaluation. First-round retrievers are optimized for topical relevance rather than for discriminating among candidate answers.

Key Challenge: Once a preliminary answer is produced, the most useful subsequent query is often not the original question but one conditioned on that candidate answer. If the preliminary year is wrong, incorporating it into the query can surface passages that directly refute it.

Goal: Design a simple inference-time repair layer that can be stacked on top of existing retrieval pipelines and corrects factual errors through answer-conditioned counterevidence retrieval.

Key Insight: Reframe the role of retrieval from "gathering more context" to "stress-testing a tentative answer." Rather than broadening the search indiscriminately, use the preliminary answer to guide a targeted second retrieval pass.

Core Idea: Generate a preliminary answer, perform answer-conditioned counterevidence retrieval, and then apply a constrained KEEP/REVISE gate together with deterministic validation to decide whether to revise the answer.

Method

Overall Architecture

A three-stage pipeline: Stage 1 (baseline drafting) → Stage 2 (answer-conditioned counterevidence retrieval) → Stage 3 (constrained refinement + deterministic validation). The input is a factual question; the output is a repaired short answer.
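The three-stage control flow can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stage components are injected as callables, and all function names here are assumptions.

```python
def counter_refine(question, draft_fn, retrieve_fn, gate_fn, validate_fn):
    """Illustrative sketch of the CounterRefine control flow.

    draft_fn, retrieve_fn, gate_fn, and validate_fn stand in for the
    baseline RAG drafter, the answer-conditioned retriever, the
    KEEP/REVISE refiner, and the deterministic validator.
    """
    # Stage 1: baseline RAG produces a preliminary answer plus evidence.
    draft, base_evidence = draft_fn(question)
    # Stage 2: retrieve again, conditioning queries on the draft answer.
    evidence = retrieve_fn(question, draft, base_evidence)
    # Stage 3: constrained gate proposes KEEP or REVISE with a passage.
    decision, proposed, passage = gate_fn(question, draft, evidence)
    if decision == "REVISE" and validate_fn(draft, proposed, passage):
        return proposed
    return draft  # KEEP, or the revision was rejected by validation
```

Note that a KEEP decision (or a failed validation) always falls back to the original draft, which is what bounds the method's downside risk.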

Key Designs

  1. Answer-Conditioned Counterevidence Retrieval

    • Function: Constructs new queries from the preliminary answer to collect evidence that supports or refutes it.
    • Mechanism: Queries are constructed according to the question type \(t(q)\) as \(Q(q, a_0) = \{q, q \| a_0\} \cup \mathbb{I}[t(q) \in \mathcal{T}]\{a_0\}\), where \(\mathcal{T} = \{\text{who, where, when, year, number}\}\). For each query, \(k_r=5\) passages are retrieved; results are merged with baseline evidence and deduplicated to form \(R_1\). The key intuition is that the second retrieval pass asks not "which documents are topically relevant?" but "what evidence most directly supports or refutes this candidate answer?"
    • Design Motivation: The original query optimizes for topical relevance, whereas the answer-conditioned query optimizes for candidate discriminability. When the preliminary answer is a wrong near-miss entity or year, incorporating it into the query tends to surface passages that precisely negate it.
  2. Constrained Refinement Gate

    • Function: Decides whether to retain or revise the preliminary answer based on the expanded evidence.
    • Mechanism: The refiner receives the question, the baseline answer, and the merged evidence set, and must output three fields: DECISION (KEEP/REVISE), ANSWER (short answer), and EVIDENCE (passage or NONE). The prompt instructs the model to REVISE only when the additional evidence strongly supports a different answer. The output format is highly constrained rather than open-ended rewriting.
    • Design Motivation: Restricting refinement to a binary decision (KEEP/REVISE) rather than allowing the model to re-solve the question from scratch substantially reduces the risk of introducing new errors.
  3. Deterministic Validation and Normalization

    • Function: Blocks unsupported, type-mismatched, or improperly formatted revisions.
    • Mechanism: A proposed revision is rejected if any of the following conditions hold: the answer is empty or identical to the preliminary answer; yes/no answers are mismatched; entity-type answers are excessively long or contain descriptive phrases; temporal/numeric answers lack explicit markers; no supporting passage is provided; or lexical overlap between the revised answer and the supporting passage is too weak. Revisions that pass validation are subject to question-type-specific normalization (e.g., extracting four-digit years, compressing numeric ranges).
    • Design Motivation: Model decisions cannot be fully trusted; deterministic rule-based validation provides hard quality guarantees. This ensures revisions are accepted only when adequately supported by evidence, while KEEP decisions bypass validation and directly retain the original answer.
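The query-construction rule from Key Design 1, \(Q(q, a_0) = \{q, q \| a_0\} \cup \mathbb{I}[t(q) \in \mathcal{T}]\{a_0\}\), can be written directly in code. A minimal sketch; the function name and the string form of the concatenated query are assumptions.

```python
# Question types for which the bare answer is itself a discriminative
# query, per the paper's set T = {who, where, when, year, number}.
DISCRIMINATIVE_TYPES = {"who", "where", "when", "year", "number"}

def build_queries(question: str, draft_answer: str, question_type: str) -> list:
    """Return the query set Q(q, a0) for the counterevidence pass."""
    queries = [question, f"{question} {draft_answer}"]  # q and q || a0
    if question_type in DISCRIMINATIVE_TYPES:
        queries.append(draft_answer)  # the indicator term I[t(q) in T]{a0}
    return queries
```

For example, `build_queries("When was the company founded?", "1952", "year")` yields three queries, including the bare answer "1952", whereas a "why" question yields only the first two.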
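The deterministic validation rules of Key Design 3 amount to a conjunction of hard checks. The sketch below is illustrative only: the thresholds (six-word entity limit, 0.5 lexical overlap) and the helper structure are assumptions, not the paper's exact rules.

```python
import re

def validate_revision(draft, revised, answer_type, evidence, min_overlap=0.5):
    """Accept a REVISE proposal only if every hard check passes."""
    if not revised or revised.strip().lower() == draft.strip().lower():
        return False                      # empty, or identical to the draft
    if answer_type == "yesno" and revised.lower() not in {"yes", "no"}:
        return False                      # yes/no answers must stay binary
    if answer_type == "entity" and len(revised.split()) > 6:
        return False                      # entity answers must stay short
    if answer_type in {"year", "number"} and not re.search(r"\d", revised):
        return False                      # temporal/numeric answers need digits
    if not evidence or evidence == "NONE":
        return False                      # a supporting passage is required
    # Lexical overlap: revised-answer tokens must appear in the passage.
    tokens = set(re.findall(r"\w+", revised.lower()))
    passage = set(re.findall(r"\w+", evidence.lower()))
    return len(tokens & passage) / max(len(tokens), 1) >= min_overlap

def normalize(answer, answer_type):
    """Question-type-specific normalization, e.g. extract a 4-digit year."""
    if answer_type == "year":
        m = re.search(r"\b(1\d{3}|20\d{2})\b", answer)
        return m.group(1) if m else answer
    return answer.strip()
```

Because every check is a pure, deterministic predicate, the validator never introduces new errors of its own; it can only veto a proposed revision.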

Loss & Training

No training is required. CounterRefine is a purely inference-time pipeline that uses off-the-shelf LLMs (Claude Sonnet 4.6 or GPT-5) and a web search API.

Key Experimental Results

Main Results

| Benchmark | Metric | Claude Base-RAG | Claude +CounterRefine | GPT-5 Base-RAG | GPT-5 +CounterRefine |
|---|---|---|---|---|---|
| SimpleQA (4326) | Correct↑ | 63.7 | 67.7 (+4.0) | 67.3 | 73.1 (+5.8) |
| SimpleQA (4326) | F1↑ | 64.1 | 68.1 (+4.0) | 58.6 | 72.1 (+13.5) |
| HotpotQA (300) | EM↑ | 70.0 | 74.0 (+4.0) | 68.0 | 71.0 (+3.0) |

Intervention Analysis (Claude, SimpleQA Full Set)

| Metric | Value |
|---|---|
| Revision rate | 5.6% |
| Beneficial revisions | 180 |
| Harmful revisions | 8 |
| Beneficial-to-harmful ratio | 22.5:1 |

Key Findings

  • CounterRefine consistently improves exact-match metrics across all settings, spanning backbone models, datasets, and evaluation scales.
  • Interventions are highly precise: only 5.6% of examples are revised, with a beneficial-to-harmful ratio of 22.5:1, demonstrating that deterministic validation effectively filters erroneous revisions.
  • The F1 gain of 13.5 points on GPT-5 substantially exceeds the 5.8-point accuracy gain, indicating that repaired answers exhibit marked improvements in lexical precision.
  • Primary success patterns include entity confusion, date errors, and numerical imprecision; failure patterns include relational confusion and event misattribution.

Highlights & Insights

  • From "gathering evidence" to "testing hypotheses": The role of retrieval is reframed from passive context collection to active hypothesis testing. This conceptual shift is more important than any technical detail—once a candidate answer exists, the most valuable retrieval is one targeted at that answer.
  • Deterministic validation as an indispensable safety net: The 22.5:1 beneficial-to-harmful ratio demonstrates the value of hard rule-based verification. Purely model-driven refinement would likely introduce more errors; deterministic validation restricts revisions to high-confidence cases.
  • Minimalist design philosophy: The entire method adds only one additional retrieval pass, one model call, and rule-based validation, without modifying model parameters or the retrieval pipeline. This "thin repair layer" design makes it stackable on any existing RAG system.

Limitations & Future Work

  • Applicability is limited to short-answer factual QA; repairing long-form generation requires different mechanisms.
  • Failure patterns such as relational confusion and event misattribution are difficult to address through simple answer-conditioned retrieval.
  • Deterministic validation rules are hand-crafted and may not generalize well to new question types.
  • Multi-round iterative refinement is not explored (the current method performs only one round), potentially missing errors that require multi-step reasoning to detect.
Comparison with Related Methods

  • vs. Chain-of-Verification: CoVe generates verification questions and answers them, at substantially higher computational cost. CounterRefine requires only one additional retrieval pass and one model call.
  • vs. CRITIC: CRITIC uses tool-interactive verification and is more general but more complex. CounterRefine focuses on short-answer repair and is simpler and more efficient.
  • vs. ROME/MEMIT: Model editing modifies factual associations in model parameters. CounterRefine is a complementary inference-time repair approach that leaves model parameters unchanged.

Rating

  • Novelty: ⭐⭐⭐ The answer-conditioned retrieval idea is intuitive yet effective; deterministic validation is the key contribution.
  • Experimental Thoroughness: ⭐⭐⭐⭐ Full SimpleQA official evaluation + cross-model and cross-dataset experiments + intervention analysis.
  • Writing Quality: ⭐⭐⭐⭐⭐ Exceptionally clear writing with a complete motivation–method–analysis logical chain.
  • Value: ⭐⭐⭐⭐ Highly practical; directly stackable on existing RAG systems.