
Comparing Human and Large Language Model Interpretation of Implicit Information

  • Conference: ACL 2026
  • arXiv: 2604.17085
  • Code: Available (link in paper)
  • Area: Knowledge Graph / Implicit Information Understanding
  • Keywords: Implicit Information Extraction, Knowledge Graph, Human-AI Comparison, Reasoning Verification, Temporal Analysis

TL;DR

This paper proposes the Implicit Information Extraction (IIE) task and a three-stage LLM pipeline (information extraction → reasoning verification → temporal analysis) that builds structured knowledge graphs to represent the implicit meaning of text. Crowdsourced comparisons with human annotators show that LLMs are more conservative in socially rich contexts, while humans are more conservative in short factual contexts.
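
The three stages compose sequentially. Here is a minimal sketch of that composition in Python; the function signatures and names are my own assumption, not the paper's actual API (their code is linked in the paper):

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

def iie_pipeline(
    text: str,
    extract: Callable[[str], List[Triple]],
    verify: Callable[[str, List[Triple]], List[Triple]],
    temporal: Callable[[str, List[Triple]], List[Triple]],
) -> List[Triple]:
    """Compose the three stages; each stage is an LLM-backed callable."""
    triples = extract(text)          # stage 1: propose implicit triples
    triples = verify(text, triples)  # stage 2: self-critique and correct
    return temporal(text, triples)   # stage 3: attach temporal ordering
```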

Method

Key Designs

  1. ATOMIC-Based Implicit Reasoning Types: Guides the LLM to systematically infer implicit information via five structured reasoning types: preconditions, postconditions, participant intentions, emotional reactions, and perceived attributes (first sketch after this list).

  2. Reasoning Verification (Self-Critique + Correction): The model reviews each extracted implicit triple for textual support, running up to three correction rounds on unsupported triples (second sketch after this list).

  3. Nested Triples (RDF Reification-Inspired): Handles subordinate clauses and modal verbs by letting a triple's subject or object itself be a triple, nested recursively (third sketch after this list).
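
To make the first design concrete, here is a minimal sketch of how the five ATOMIC-style reasoning types might drive per-type extraction prompts. The type names come from the list above, but the prompt wording and helper function are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical scaffolding for ATOMIC-style implicit-relation extraction.
# The five reasoning types come from the paper; the prompt text is illustrative.
REASONING_TYPES = {
    "precondition": "What must already hold before the described event?",
    "postcondition": "What becomes true after the described event?",
    "intention": "What does each participant intend by their action?",
    "emotional_reaction": "How do the participants likely feel afterward?",
    "perceived_attribute": "What attributes are the participants perceived to have?",
}

def build_extraction_prompt(text: str, reasoning_type: str) -> str:
    """Ask the LLM for implicit triples of a single reasoning type."""
    question = REASONING_TYPES[reasoning_type]
    return (
        f"Text: {text}\n"
        f"{question}\n"
        f"Answer as triples, one per line: (subject, {reasoning_type}, object)"
    )
```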
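
For the second design, here is a minimal sketch of the self-critique loop, assuming hypothetical `critique` and `revise` LLM calls; dropping triples that remain unsupported after three rounds is my assumption, not stated in the paper:

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]

def verify_with_corrections(
    text: str,
    triples: List[Triple],
    critique: Callable[[str, Triple], Tuple[bool, str]],  # (supported?, feedback)
    revise: Callable[[str, Triple, str], Triple],
    max_rounds: int = 3,
) -> List[Triple]:
    """Keep triples the model judges textually supported; give each
    unsupported triple up to `max_rounds` revision attempts."""
    verified = []
    for triple in triples:
        candidate = triple
        supported, feedback = critique(text, candidate)
        for _ in range(max_rounds):
            if supported:
                break
            candidate = revise(text, candidate, feedback)
            supported, feedback = critique(text, candidate)
        if supported:
            verified.append(candidate)  # still-unsupported triples are dropped (assumption)
    return verified
```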
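
And for the third design, one way to picture reification-inspired nesting is a triple whose head or tail may itself be a triple; this dataclass is my own illustration of that idea, not the paper's data model:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class NestedTriple:
    """A triple whose head/tail may themselves be triples, so statements
    embedded under modal verbs or subordinate clauses nest naturally."""
    head: Union[str, "NestedTriple"]
    relation: str
    tail: Union[str, "NestedTriple"]

# Example: a belief wraps an inner factual triple.
inner = NestedTriple("Bob", "missed", "the train")
nested = NestedTriple("Alice", "believes", inner)
```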

Key Experimental Results

  • Humans agree with most LLM-extracted triples but consistently suggest substantial additions, indicating that LLM implicit reasoning has limited coverage
  • LLMs are more conservative in socially rich contexts; humans are more conservative in short factual contexts
  • Temporal reasoning is a weak point for LLMs

Highlights & Insights

  • Formalizing implicit information understanding as a knowledge-graph construction task yields a framework in which human and LLM inferences can be compared quantitatively
  • The finding of context-dependent conservatism offers a new perspective on human-AI differences

Rating

  • Novelty: ⭐⭐⭐⭐
  • Experimental Thoroughness: ⭐⭐⭐⭐
  • Writing Quality: ⭐⭐⭐⭐
  • Value: ⭐⭐⭐⭐