🕸️ Graph Learning
💬 ACL2026 · 8 paper notes
- AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning
  AgentGL is the first RL-based agentic graph learning (AGL) framework that lets LLM agents autonomously navigate text-attributed graphs (TAGs) via graph-native search tools, achieving absolute accuracy gains of up to 17.5% on node classification and 28.4% on link prediction.
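The agentic navigation idea can be caricatured in a few lines. Everything below is an illustrative assumption, not AgentGL's implementation: the toy TAG, the tool names (`get_neighbors`, `get_text`), and the greedy keyword policy standing in for an RL-trained LLM agent.

```python
# Hypothetical sketch of an agent exploring a text-attributed graph (TAG)
# through graph-native search tools; all names and data here are made up.
from collections import deque

# Toy TAG: node id -> (text attribute, neighbor ids)
TAG = {
    "paper_a": ("graph neural networks for molecules", ["paper_b", "paper_c"]),
    "paper_b": ("reinforcement learning for agents", ["paper_a"]),
    "paper_c": ("molecule property prediction", ["paper_a"]),
}

def get_neighbors(node):
    """Graph-native tool: return neighbor ids the agent may expand."""
    return TAG[node][1]

def get_text(node):
    """Graph-native tool: return the node's text attribute."""
    return TAG[node][0]

def agent_search(start, query_terms, max_steps=3):
    """Greedy stand-in for an RL-trained policy: expand neighbors whose
    text overlaps the query, collecting evidence for a downstream task."""
    visited, frontier, evidence = {start}, deque([start]), []
    for _ in range(max_steps):
        if not frontier:
            break
        node = frontier.popleft()
        evidence.append(get_text(node))
        for nb in get_neighbors(node):
            if nb not in visited and any(t in get_text(nb) for t in query_terms):
                visited.add(nb)
                frontier.append(nb)
    return evidence

print(agent_search("paper_a", ["molecule"]))
```

The point of the sketch is the interface: the agent only ever sees the graph through tool calls, which is what makes the policy trainable with RL.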
- ARK: Answer-Centric Retriever Tuning via KG-augmented Curriculum Learning
  ARK filters positive samples with a three-dimensional answer-sufficiency score (forward + backward + retriever alignment) and uses LLM-constructed knowledge graphs to generate progressively harder negatives for curriculum contrastive learning, averaging +14.5% F1 across 10 datasets.
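The curriculum-contrastive idea — later training stages draw negatives that sit closer to the query — can be sketched with a plain InfoNCE loss. The scoring, schedule, and toy vectors below are assumptions for illustration, not ARK's actual method.

```python
# Minimal sketch of curriculum contrastive learning with progressively
# harder negatives; the schedule and data are illustrative, not ARK's.
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce(query, positive, negatives, tau=0.1):
    """InfoNCE: -log( e^{s(q,p)/tau} / (e^{s(q,p)/tau} + sum_n e^{s(q,n)/tau}) )."""
    pos = math.exp(cos(query, positive) / tau)
    neg = sum(math.exp(cos(query, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

def curriculum_negatives(query, pool, stage, k=2):
    """Later stages pick negatives more similar to the query (harder)."""
    ranked = sorted(pool, key=lambda n: cos(query, n))  # easy -> hard
    start = min(stage * k, len(ranked) - k)
    return ranked[start:start + k]

q, p = [1.0, 0.0], [0.9, 0.1]
pool = [[-1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.8, 0.2]]
easy_loss = info_nce(q, p, curriculum_negatives(q, pool, 0))
hard_loss = info_nce(q, p, curriculum_negatives(q, pool, 1))
assert hard_loss > easy_loss  # harder negatives -> larger contrastive loss
```

The final assertion is the whole curriculum argument in one line: as negatives get closer to the query, the loss grows, so the retriever gets a steadily harder discrimination task.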
- AutoPKG: An Automated Framework for Dynamic E-commerce Product-Attribute Knowledge Graph Construction
  AutoPKG is a multi-agent LLM framework that automatically constructs product-attribute knowledge graphs (PKGs) from multimodal e-commerce content. Its Type Induction, Attribute Key Discovery, and Attribute Value Extraction agents are coordinated by a centralized KGD Decision Agent, achieving 0.953 WKE on product types and a +7.89% recommendation GMV lift in online A/B tests on Lazada.
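The division of labor among the four agents can be shown as a pipeline of stubs. To be clear about assumptions: the product data is invented, and each agent below is reduced to a keyword heuristic standing in for an LLM call — this sketches the pipeline's shape, not AutoPKG's implementation.

```python
# Illustrative multi-agent pipeline for product-attribute KG construction;
# every agent is a stub standing in for an LLM, and the data is made up.
def type_induction_agent(product):
    """Induce a product type from the title (stub: keyword match)."""
    return "shoes" if "sneaker" in product["title"].lower() else "unknown"

def attribute_key_discovery_agent(ptype):
    """Propose attribute keys relevant to the induced type (stub)."""
    return {"shoes": ["color", "size"], "unknown": []}[ptype]

def attribute_value_extraction_agent(product, key):
    """Extract a value for one attribute key from the description (stub)."""
    for token in product["description"].split():
        if key == "color" and token in {"red", "blue", "white"}:
            return token
    return None

def kgd_decision_agent(triples):
    """Centralized decision: keep only triples with a concrete value."""
    return [t for t in triples if t[2] is not None]

product = {"title": "AirRun Sneaker", "description": "lightweight white runner"}
ptype = type_induction_agent(product)
triples = [(product["title"], k, attribute_value_extraction_agent(product, k))
           for k in attribute_key_discovery_agent(ptype)]
print(kgd_decision_agent(triples))  # -> [('AirRun Sneaker', 'color', 'white')]
```

The design point is that extraction agents are allowed to fail per attribute; a separate decision agent owns what actually enters the knowledge graph.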
- Comparing Human and Large Language Model Interpretation of Implicit Information
  This paper proposes the Implicit Information Extraction (IIE) task and a three-stage LLM pipeline (information extraction → reasoning verification → temporal analysis) that builds structured knowledge graphs to represent implicit textual meaning. Crowdsourced comparisons with humans show that LLMs are more conservative in socially rich contexts, while humans are more conservative in short factual contexts.
- From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
  Gspell is a lightweight post-hoc explanation framework that projects GNN node embeddings into the LLM embedding space and builds hybrid prompts (soft prompts + text), letting the LLM reason directly over the GNN's internal representations and produce natural-language explanations alongside explanation subgraphs, striking a good balance between faithfulness and interpretability on text-attributed graphs.
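The projection-plus-hybrid-prompt mechanism is the generic soft-prompting recipe, which can be sketched without any GNN or LLM. The dimensions, the single linear projector, and the number of soft tokens below are all assumptions for illustration, not Gspell's actual architecture.

```python
# Hedged sketch: map one GNN node embedding into an LLM's embedding space
# as soft-prompt vectors, then prepend them to text token embeddings.
import random

random.seed(0)
D_GNN, D_LLM, N_SOFT = 4, 8, 2  # toy sizes, not the paper's

# A learned linear projector W: one GNN vector -> N_SOFT LLM-space vectors.
W = [[random.uniform(-0.1, 0.1) for _ in range(D_GNN)]
     for _ in range(N_SOFT * D_LLM)]

def project(gnn_emb):
    """Apply W, then reshape into N_SOFT soft-prompt vectors of size D_LLM."""
    flat = [sum(w * x for w, x in zip(row, gnn_emb)) for row in W]
    return [flat[i * D_LLM:(i + 1) * D_LLM] for i in range(N_SOFT)]

def build_hybrid_prompt(soft, text_token_embs):
    """Hybrid prompt: soft-prompt vectors prepended to text token embeddings."""
    return soft + text_token_embs

soft = project([0.2, -0.5, 0.1, 0.7])
prompt = build_hybrid_prompt(soft, [[0.0] * D_LLM] * 3)  # 3 dummy text tokens
print(len(prompt), len(prompt[0]))  # 2 soft + 3 text vectors, each size D_LLM
```

Because only the projector is trained, the GNN and LLM can both stay frozen — which is what makes this kind of explainer "lightweight" and post-hoc.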
- Graph-Based Alternatives to LLMs for Human Simulation
  GEMS casts closed-form human behavior simulation as link prediction on a heterogeneous graph with three node types (subgroups, individuals, choices) and two bidirectional relations, matching or surpassing strong LLM baselines across three datasets and three evaluation settings while using 1000× fewer parameters.
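The "behavior simulation as link prediction" framing can be made concrete with a toy heterogeneous graph. The node names, the hand-set preference scores, and the scoring rule below are invented for illustration; GEMS learns its relations rather than looking them up.

```python
# Toy sketch: predict an individual's choice as a link-prediction problem on
# a heterogeneous graph with subgroup, individual, and choice nodes.
# Scores are hand-set stand-ins for learned subgroup->choice edge weights.
subgroup_pref = {"young": {"app": 0.9, "paper_form": 0.1},
                 "senior": {"app": 0.3, "paper_form": 0.7}}
individual_subgroup = {"alice": "young", "bob": "senior"}

def score_link(individual, choice):
    """Score an individual->choice edge via the subgroup->choice relation."""
    return subgroup_pref[individual_subgroup[individual]][choice]

def predict_choice(individual, choices):
    """Link prediction: pick the highest-scoring choice node."""
    return max(choices, key=lambda c: score_link(individual, c))

print(predict_choice("alice", ["app", "paper_form"]))  # -> app
print(predict_choice("bob", ["app", "paper_form"]))    # -> paper_form
```

Routing individuals through their subgroup node is what keeps the parameter count tiny: preferences are shared per subgroup instead of generated per person by an LLM.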
- LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs
  Across six relation extraction datasets, four LLMs (7B–70B) are compared against a lightweight 124M-parameter graph parser; the parser consistently and significantly outperforms the LLMs once relation graphs average more than ~18 edges, with F1 gaps reaching 13.2 points on the most complex dataset (ERFGC), exposing fundamental LLM limitations in extracting complex linguistic graph structure.
- Which Bird Does Not Have Wings: Negative-Constrained KGQA with Schema-Guided Semantic Matching and Self-Directed Refinement
  This paper defines the NEST KGQA task and the NestKGQA dataset for negation-constrained knowledge graph QA, designs PyLF (a Python-format logical form) for expressing negation clearly, and proposes the CUCKOO framework, which combines constraint-aware draft generation, schema-guided semantic matching, and self-directed refinement to answer multi-constraint questions efficiently and precisely in few-shot settings.