Using Certifying Constraint Solvers for Generating Step-wise Explanations¶
Conference: AAAI 2026 | arXiv: 2511.10428 | Code: None | Area: Interpretability | Keywords: Constraint solving, step-wise explanation, proof logging, explainable AI, unsatisfiability
TL;DR¶
This paper proposes using the unsatisfiability proofs produced by certifying constraint solvers as a starting point, applying a series of simplification and transformation techniques to efficiently produce user-facing step-wise explanation sequences. This yields speedups of up to 100× over approaches that construct explanations from scratch.
Background & Motivation¶
- Background: Declarative problem solving allows users to describe only the problem itself rather than the solution procedure, with general-purpose solvers automatically finding solutions. However, when a problem is unsatisfiable, explaining "why no solution exists" to the user remains a critical challenge.
- Limitations of Prior Work: Minimal Unsatisfiable Subsets (MUS) can help users focus on a subset of constraints, but may still be large and difficult to comprehend. Step-wise explanation frameworks explain unsatisfiability through sequences of simple inference steps, but are computationally expensive — each step requires multiple calls to an NP-oracle, making the full sequence even more costly.
- Key Challenge: High-quality user-level explanations require simple and concise inference steps, yet a substantial gap exists between the computational complexity of such explanations and the efficient proof formats used internally by solvers.
- Goal: Significantly reduce the computational overhead of generating step-wise explanation sequences, enabling their application to larger-scale constraint problems.
- Key Insight: Utilize proof logs (DRCP proofs) generated by certifying solvers as a starting point, and apply a suite of transformation and simplification techniques to convert machine-level proofs into human-comprehensible step-wise explanations.
- Core Idea: Solvers already produce a complete reasoning trace (proof) when verifying unsatisfiability; rather than constructing explanations from scratch, this work reuses and transforms these existing proofs.
Method¶
Overall Architecture¶
The overall pipeline consists of three stages: (1) a constraint solver solves the problem and generates a DRCP proof log; (2) a series of simplification operations (elimination of auxiliary variables, reduction to domain-level reasoning, replacement of solver-level constraints with user-level constraints) transforms the proof into an abstract proof conforming to the step-wise explanation format; (3) reason minimization further refines the justification at each step to make explanations as concise and comprehensible as possible.
Key Designs¶
- Abstract Proof Framework:
- Function: Provides a unified formal framework that encompasses both DRCP proof logs and step-wise explanations.
- Mechanism: Defines an abstract proof as a sequence of pairs \((R_i, C_i)\), where \(R_i\) is the set of justifications and \(C_i\) is the set of derived constraints. By imposing different property constraints (P-proof) on proof steps, both solver proofs and user explanations can be expressed within the same framework.
- Design Motivation: A unified framework makes the transformation from proof to explanation systematic — it suffices to incrementally tighten the property constraints.
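The \((R_i, C_i)\) structure above can be captured in a small sketch; the encoding below (constraints as `(name, variables)` tuples, names like `Step` and `is_p_proof`) is illustrative and not the paper's notation:

```python
# Minimal sketch of the abstract-proof view: a proof is a sequence of
# steps (R_i, C_i), where R_i justifies the derived constraints C_i.
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    reasons: frozenset   # R_i: constraints used as justification
    derived: frozenset   # C_i: constraints derived by this step

def scope(constraint):
    """Constraints are modeled as (name, variables) tuples here."""
    return set(constraint[1])

def is_p_proof(proof, prop):
    """A P-proof is a proof whose every step satisfies property `prop`;
    solver proofs and user explanations differ only in the choice of P."""
    return all(prop(step) for step in proof)

# Example property: every derived constraint is scoped over decision
# variables only, i.e. scope(C) ⊆ X.
decision_vars = {"x", "y"}
user_level = lambda s: all(scope(c) <= decision_vars for c in s.derived)

proof = [
    Step(frozenset(), frozenset({("c1", ("x",))})),
    Step(frozenset({("c1", ("x",))}), frozenset({("c2", ("x", "aux0"))})),
]
print(is_p_proof(proof, user_level))        # second step mentions aux0 → False
print(is_p_proof(proof[:1], user_level))    # → True
```

Tightening the property `prop` step by step is exactly how the transformation from solver proof to user explanation proceeds in this framework.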
- Proof Simplification:
- Function: Removes steps in the DRCP proof that do not conform to the explanation format while preserving proof validity.
- Mechanism: Two key simplification operations are introduced: (a) auxiliary variable simplification — removes proof steps involving auxiliary variables introduced by the modeling system and merges their dependencies into subsequent steps (P: \(scope(C) \subseteq \mathcal{X}\)); (b) domain reduction simplification — ensures that each step derives only atomic constraints concerning the domain of a single variable (P: \(|scope(C)| \leq 1\)).
- Design Motivation: Users are concerned only with decision variables, not with auxiliary variables internal to the solver; step-wise explanations require each step to derive concrete domain value changes.
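Operation (a) admits a compact sketch: steps deriving constraints over auxiliary variables are dropped, and later steps that used those constraints as reasons inherit the dropped step's own justifications. The `(reasons, derived)` list encoding and the name `eliminate_aux_steps` are assumptions for illustration, not the paper's implementation:

```python
def eliminate_aux_steps(proof, decision_vars):
    """proof: list of (reasons, derived) pairs, where each constraint is
    a (name, variables) tuple. Drops steps that derive constraints over
    auxiliary variables, rewiring later reasons to the dropped step's
    own justifications."""
    replacement = {}                      # aux-derived constraint -> reasons
    simplified = []
    for reasons, derived in proof:
        new_reasons = set()
        for r in reasons:                 # substitute aux-derived reasons
            new_reasons |= replacement.get(r, {r})
        if any(not set(vs) <= decision_vars for _, vs in derived):
            for c in derived:             # aux step: record and drop it
                replacement[c] = new_reasons
        else:
            simplified.append((frozenset(new_reasons), frozenset(derived)))
    return simplified

proof = [
    (frozenset({("c1", ("x",))}), frozenset({("a0", ("aux",))})),  # aux step
    (frozenset({("a0", ("aux",))}), frozenset({("c2", ("y",))})),
]
simplified = eliminate_aux_steps(proof, {"x", "y"})
# The aux step disappears; c2 is now justified directly by c1.
```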
- Solver-to-User Level Transformation + Reason Minimization:
- Function: Replaces solver-level constraints in proof steps with the corresponding user-level constraints, and minimizes the set of justifications at each step.
- Mechanism: By tracking the correspondence between user-level and solver-level constraints, Lemma 1 justifies the replacement: if a user-level constraint is at least as strong as the corresponding solver-level constraint over the decision variables, it can be safely substituted. Reason minimization (Algorithm 1) then traverses the proof from the final step backward, computing a MUS at each step to remove redundant justifications.
- Design Motivation: Users understand only the original constraints they modeled; minimizing justifications reduces cognitive load.
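The local variant of reason minimization can be sketched as a backward pass with a deletion-based loop per step, in the spirit of Algorithm 1. Here `entails` stands in for the NP-oracle (e.g. a solver call) and, like the data layout, is a hypothetical hook rather than the paper's interface:

```python
def minimize_reasons(proof, entails):
    """proof: list of (reasons, derived) pairs. Walks the proof from the
    last step backward and greedily drops reasons while the remainder
    still entails every derived constraint (a deletion-based MUS)."""
    result = []
    for reasons, derived in reversed(proof):
        kept = set(reasons)
        for r in list(reasons):
            trial = kept - {r}
            if all(entails(trial, c) for c in derived):
                kept = trial              # r was redundant: drop it
        result.append((frozenset(kept), derived))
    result.reverse()
    return result

entails = lambda reasons, c: c in reasons   # toy stand-in for the oracle
minimized = minimize_reasons([(frozenset({1, 2, 3}), frozenset({2}))], entails)
# Only the reason that actually entails the conclusion survives.
```

The global mode described below differs in that `trial` may also draw on any constraint derived earlier in the proof, not just the step's own reasons.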
Loss & Training¶
This work does not involve deep learning; the core algorithms are combinatorial optimization methods. Key complexity optimizations include:
- Trimming: removes redundant proof steps that are not referenced by any subsequent step.
- Local vs. global reason minimization: local mode finds a minimal subset within a step's existing justifications, while global mode may substitute justifications with any constraint available from the proof history.
- A key property of proof simplification: it is always feasible for unsatisfiability proofs, since \(\bot\) does not involve any variables.
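Trimming admits a small sketch, assuming a proof is a list of (reasons, derived) pairs whose final step derives \(\bot\); names and data layout are illustrative:

```python
def trim(proof):
    """Walk the proof backwards from the final ⊥ step and keep only steps
    whose conclusions are transitively used as reasons by a kept step."""
    needed = set(proof[-1][0])              # reasons of the final ⊥ step
    kept = [proof[-1]]
    for reasons, derived in reversed(proof[:-1]):
        if needed & set(derived):           # some conclusion is used later
            kept.append((reasons, derived))
            needed |= set(reasons)
    kept.reverse()
    return kept

proof = [
    (frozenset(), frozenset({"a"})),
    (frozenset(), frozenset({"b"})),        # never referenced again: trimmed
    (frozenset({"a"}), frozenset({"⊥"})),
]
trimmed = trim(proof)                       # the "b" step is removed
```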
Key Experimental Results¶
Main Results¶
| Setting | Metric | Ours | Prev. SOTA | Gain |
|---|---|---|---|---|
| Various CSP benchmarks | Generation time | — | — | Up to 100× speedup |
| Step-wise explanation quality | Sequence length / step size | Comparable to SOTA | — | No significant quality loss |
Ablation Study¶
| Configuration | Key Metric | Description |
|---|---|---|
| Full method | Best speed + quality | All simplifications + minimization |
| Without trimming | More redundant steps | Proof not pruned |
| Local vs. Global minimization | Global yields shorter but slower explanations | Trade-off |
Key Findings¶
- Starting from proof logs can accelerate explanation generation by up to 100×, while explanation quality (sequence length and per-step size) remains comparable to existing state-of-the-art methods.
- Auxiliary variable simplification is critical for bridging the solver's internal representation and user understanding.
- Proof simplification may degenerate to a single-step trivial proof in extreme cases, but this rarely occurs in practice.
Highlights & Insights¶
- Reuse Rather Than Rebuild: This work embodies a "standing on the shoulders of giants" philosophy — solvers already contain a complete reasoning trace internally, and what is needed is format conversion rather than construction from scratch.
- Elegance of the Unified Framework: The Abstract Proof framework elegantly unifies machine-level proofs and human-facing explanations within a single mathematical structure, providing theoretical guarantees for the transformation process.
- Practical Value: Explainable constraint solving is highly important for real-world applications such as scheduling and planning — when a system declares "no solution exists," users need to understand which constraints are in conflict.
- New Use of Proof Logs: Traditionally, proof logs from certifying solvers are used primarily for correctness verification; this paper demonstrates their novel utility for interpretability.
Limitations & Future Work¶
- Currently only DRCP-format proof logs are supported; other formats (e.g., DRAT) require additional adaptation.
- Explanation quality depends on proof quality — an inefficient solver may generate verbose proofs, resulting in longer explanations even after simplification.
- Global reason minimization produces shorter explanations but incurs higher computational cost, requiring a trade-off between quality and speed.
- The current approach targets unsatisfiability explanations; support for optimality explanations remains limited.
Related Work & Insights¶
- vs. MUS-based methods: MUS provides only a conflicting subset of constraints without a reasoning process; this paper provides a complete step-by-step inference chain.
- vs. From-scratch step-wise methods (BGG21, GBG23): These methods require multiple NP-oracle calls to construct each step, incurring very high computational cost; the proposed approach starts from existing proofs, substantially reducing the computational burden.
- vs. DRAT (SAT proof format): DRCP is a proof format designed specifically for CP solvers, supporting a richer variety of constraint types.
Rating¶
- Novelty: ⭐⭐⭐⭐ First systematic transformation of certifying solver proof logs into user-level step-wise explanations.
- Experimental Thoroughness: ⭐⭐⭐⭐ Evaluated across multiple application domains; the 100× speedup is impressive.
- Writing Quality: ⭐⭐⭐⭐⭐ Rigorous formal definitions, rich examples, and clear organization.
- Value: ⭐⭐⭐⭐ A significant advance in explainable constraint solving, though the research area is relatively niche.