✏️ Knowledge Editing

💬 ACL2026 · 4 paper notes

Aligning Language Models with Real-time Knowledge Editing

This paper introduces CRAFT, a continuously updated Chinese financial knowledge-editing dataset, and KEDAS, a knowledge-editing alignment paradigm built on diverse edit augmentation and self-adaptive inference. Together they address the problem that existing knowledge editing methods cannot simultaneously achieve a high editing success rate, locality, and portability in real-time scenarios.

CLaRE-ty Amid Chaos: Quantifying Representational Entanglement to Predict Ripple Effects in LLM Editing

CLARE proposes a lightweight representation-level method that quantifies entanglement between facts using the forward activations of a single intermediate layer, and uses this score to predict the ripple effects of model editing. It improves average Spearman correlation by 62.2% over gradient-based methods while running 2.74× faster and using 2.85× less memory.
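The core idea, as summarized above, is that facts whose intermediate-layer representations are similar are more likely to perturb each other when one is edited. A minimal sketch of that intuition, using cosine similarity of single-layer activations as a stand-in entanglement score (the vectors and the `entanglement` helper here are illustrative assumptions, not CLARE's actual implementation):

```python
import numpy as np

def entanglement(h_a: np.ndarray, h_b: np.ndarray) -> float:
    """Cosine similarity of two facts' activations at one intermediate layer,
    used as a cheap proxy for how strongly an edit to one fact ripples to the other."""
    return float(h_a @ h_b / (np.linalg.norm(h_a) * np.linalg.norm(h_b)))

# Hypothetical activation vectors for three fact prompts (toy 16-dim hidden states).
rng = np.random.default_rng(1)
h_edit = rng.normal(size=16)                     # the fact being edited
h_related = h_edit + 0.1 * rng.normal(size=16)   # an entangled fact: nearby representation
h_unrelated = rng.normal(size=16)                # an unrelated fact

# A related fact scores higher, predicting a larger ripple effect from the edit.
print(entanglement(h_edit, h_related), entanglement(h_edit, h_unrelated))
```

Because the score needs only one forward pass per fact and no gradients, it is easy to see where the reported speed and memory savings over gradient-based attribution come from.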

EvoEdit: Evolving Null-space Alignment for Robust and Efficient Knowledge Editing

This paper proposes EvoEdit, which achieves large-scale sequential knowledge editing through dynamically evolving null-space projectors. It injects new knowledge efficiently while preserving existing knowledge, maintaining SOTA performance at the 10K-edit scale while running 3.5× faster than AlphaEdit.
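The null-space trick behind this family of methods (AlphaEdit, which EvoEdit builds on) is that a weight update projected into the null space of the preserved knowledge's key vectors cannot change the model's outputs for those keys. A toy numpy sketch, with illustrative dimensions and random matrices standing in for real layer weights and keys:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                        # toy hidden dimension
K = rng.normal(size=(d, 3))  # columns: key vectors of knowledge to preserve

# Projector onto the orthogonal complement of span(K):
# for any update dW, (dW @ P) @ K == 0, so preserved keys map unchanged.
P = np.eye(d) - K @ np.linalg.pinv(K)

dW = rng.normal(size=(d, d))  # raw edit update (e.g. from injecting a new fact)
dW_safe = dW @ P              # projected update applied to the layer weights

# Outputs for preserved keys are untouched: (W + dW_safe) @ K == W @ K.
print(np.max(np.abs(dW_safe @ K)))  # ~0 up to float error
```

EvoEdit's contribution, per the summary, is keeping such a projector useful across thousands of sequential edits by evolving it dynamically rather than recomputing it from scratch; the static projector above is only the starting point of that idea.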

FABLE: Fine-grained Fact Anchoring for Unstructured Model Editing

This paper finds that existing unstructured model editing methods can recall edited text holistically but cannot perform fine-grained fact access. It proposes FABLE, a framework with a two-stage hierarchical strategy that anchors fine-grained facts in shallow layers and integrates holistic narratives in deep layers, along with the UnFine diagnostic benchmark for systematic evaluation.