✏️ Knowledge Editing
🔬 ICLR2026 · 9 paper notes
- Bilinear Representation Mitigates Reversal Curse and Enables Consistent Model Editing
- By training Transformers from scratch on a synthetic relational knowledge graph, this work demonstrates that appropriate regularization induces the emergence of bilinear relational structure in hidden representations. This structure not only overcomes the reversal curse but also enables logically consistent propagation of edits to related facts.
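
  A minimal sketch of the bilinear idea (illustrative only; the dimensions, entity names, and standalone scorer below are invented, and the paper trains full Transformers rather than this toy module):

  ```python
  import torch

  torch.manual_seed(0)
  d = 16                                   # entity embedding dimension (toy size)
  h = torch.randn(d)                       # head entity embedding, e.g. "Paris"
  t = torch.randn(d)                       # tail entity embedding, e.g. "France"
  W_r = torch.randn(d, d)                  # relation matrix, e.g. "capital_of"

  def bilinear_score(head, rel, tail):
      """Bilinear relational score s(h, r, t) = h^T W_r t."""
      return head @ rel @ tail

  forward = bilinear_score(h, W_r, t)      # (Paris, capital_of, France)
  reverse = bilinear_score(t, W_r.T, h)    # (France, has_capital, Paris)

  # Under a bilinear form the reverse relation is simply the transpose of W_r,
  # so a fact and its reversal receive the same score by construction; this is
  # one way to see why such structure mitigates the reversal curse.
  print(torch.allclose(forward, reverse))  # True
  ```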
- EAMET: Robust Massive Model Editing via Embedding Alignment Optimization
- This paper identifies the root cause of large-scale model editing failures as structural inconsistency (embedding misalignment) between key embeddings and residual embeddings, and proposes EAMET, which progressively stores the optimized residual embeddings and aligns their neighborhood structure with the key embedding space via a dual KL divergence + MSE loss. EAMET outperforms MEMIT by an average of 14% on CounterFact and 8% on ZsRE when editing 10k facts simultaneously, evaluated across 6 LLMs and 3 datasets, and remains robust in two challenging scenarios: long-prefix inputs and editing multiple facts about the same subject.
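
  The exact EAMET objective is in the paper; the following is only a rough sketch of the general idea of matching the residual embeddings' neighborhood structure to the key space with a KL term plus an MSE term (the paper uses a dual-KL formulation, and every shape and name here is invented):

  ```python
  import torch
  import torch.nn.functional as F

  def neighborhood(x, temperature=1.0):
      """Row-wise softmax over pairwise similarities: each embedding's
      'neighborhood' expressed as a distribution over the others."""
      return F.softmax(x @ x.T / temperature, dim=-1)

  def alignment_loss(keys, residuals, lam=1.0):
      """Sketch of an embedding-alignment objective (not the paper's exact loss):
      KL pulls the residuals' neighborhood structure toward the keys',
      MSE keeps the embeddings themselves close."""
      p_keys = neighborhood(keys).detach()          # target structure from key space
      q_res = neighborhood(residuals)               # structure of optimized residuals
      kl = F.kl_div(q_res.log(), p_keys, reduction="batchmean")
      return kl + lam * F.mse_loss(residuals, keys)

  keys = torch.randn(8, 32)                         # key embeddings for 8 edits (toy)
  residuals = torch.randn(8, 32, requires_grad=True)
  alignment_loss(keys, residuals).backward()        # residuals.grad is now populated
  ```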
- Energy-Regularized Sequential Model Editing on Hyperspheres
- This paper interprets performance degradation in sequential model editing through the lens of hyperspherical uniformity (Hyperspherical Energy, HE), and proposes SPHERE: by projecting editing perturbations onto the orthogonal complement of the principal hyperspherical directions of pre-trained weights, SPHERE enables stable large-scale sequential editing, outperforming the strongest baseline by an average of 16.41% on LLaMA3-8B.
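
  A rough sketch of the projection idea (not SPHERE itself; the paper's hyperspherical formulation, choice of directions, and energy regularizer differ, and all shapes below are invented):

  ```python
  import torch
  import torch.nn.functional as F

  def project_out_principal(delta_w, w_pretrained, k=8):
      """Keep only the part of an editing update that is orthogonal to the top-k
      principal directions of the (row-normalized) pre-trained weights."""
      w_sphere = F.normalize(w_pretrained, dim=-1)       # rows mapped to the unit sphere
      _, _, vh = torch.linalg.svd(w_sphere, full_matrices=False)
      v_k = vh[:k]                                       # (k, d) principal directions
      return delta_w - (delta_w @ v_k.T) @ v_k           # orthogonal-complement projection

  w = torch.randn(64, 128)                               # pre-trained weight (toy size)
  delta = torch.randn(64, 128)                           # raw editing perturbation
  delta_proj = project_out_principal(delta, w)

  # The constrained update no longer overlaps the retained principal directions.
  vh = torch.linalg.svd(F.normalize(w, dim=-1), full_matrices=False).Vh
  print(torch.allclose(delta_proj @ vh[:8].T, torch.zeros(64, 8), atol=1e-4))
  ```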
- Fine-tuning Done Right in Model Editing
- This paper reveals that the underestimation of fine-tuning in model editing stems from an incorrect training pipeline (depth-first, sample-by-sample optimization). By correcting it to standard breadth-first mini-batch training and combining it with localized parameter updates, the proposed LocFT-BF achieves, for the first time, support for 100K sequential edits and models up to 72B parameters.
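
  An illustrative contrast between the two pipelines on a toy regression task (not the paper's code; LocFT-BF additionally restricts which parameters are updated, which is omitted here):

  ```python
  import random
  import torch

  def train_step(model, optimizer, batch):
      """One gradient step on a batch of (input, target) pairs (toy regression)."""
      x = torch.stack([b[0] for b in batch])
      y = torch.stack([b[1] for b in batch])
      loss = torch.nn.functional.mse_loss(model(x), y)
      optimizer.zero_grad(); loss.backward(); optimizer.step()

  def depth_first(model, optimizer, edits, steps_per_edit=20):
      """The flawed pipeline: each edit is optimized in isolation, one after another."""
      for edit in edits:
          for _ in range(steps_per_edit):
              train_step(model, optimizer, [edit])

  def breadth_first(model, optimizer, edits, epochs=20, batch_size=8):
      """The corrected pipeline: shuffled mini-batch epochs over all edits at once."""
      for _ in range(epochs):
          random.shuffle(edits)
          for i in range(0, len(edits), batch_size):
              train_step(model, optimizer, edits[i:i + batch_size])

  edits = [(torch.randn(16), torch.randn(4)) for _ in range(64)]   # toy "edit" samples
  model = torch.nn.Linear(16, 4)
  breadth_first(model, torch.optim.SGD(model.parameters(), lr=0.01), edits)
  ```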
- GOT-Edit: Geometry-Aware Generic Object Tracking via Online Model Editing
- GOT-Edit integrates 3D geometric information from VGGT into a 2D generic object tracker via null-space-constrained online model editing, enhancing geometric awareness while preserving semantic discriminability, and achieving significant tracking improvements in occlusion and cluttered-background scenarios.
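
  A toy sketch of a null-space-constrained update in general (GOT-Edit's actual formulation over VGGT features and the tracker's layers is more involved; all shapes and names below are invented):

  ```python
  import torch

  def nullspace_project(delta_w, feats_keep, eps=1e-5):
      """Keep only the part of a weight update lying in the null space of the
      features we want preserved, so that W @ f is unchanged for each such f."""
      u, s, _ = torch.linalg.svd(feats_keep.T, full_matrices=False)
      basis = u[:, s > eps]                         # orthonormal basis of the preserved subspace
      return delta_w - delta_w @ basis @ basis.T    # remove components acting on that subspace

  w = torch.randn(32, 64)                           # tracker head weight (toy size)
  feats_keep = torch.randn(10, 64)                  # semantic features to leave untouched
  delta = torch.randn(32, 64)                       # raw online-editing update
  delta_ns = nullspace_project(delta, feats_keep)

  # Outputs on the preserved features are identical before and after the edit.
  print(torch.allclose((w + delta_ns) @ feats_keep.T, w @ feats_keep.T, atol=1e-4))
  ```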
- PICS: Pairwise Image Compositing with Spatial Interactions
- This paper proposes PICS—a parallel pairwise image compositing method that simultaneously composes two objects in a single inference pass via mask-guided MoE and adaptive α-blending within an Interaction Transformer, explicitly modeling spatial interactions such as occlusion and contact, and consistently outperforming existing sequential compositing methods.
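
  The blending arithmetic only, as a toy sketch (PICS predicts the per-pixel alphas and handles occlusion ordering jointly inside its Interaction Transformer in a single pass; the masks, alphas, and image sizes here are invented):

  ```python
  import torch

  def alpha_blend(background, fg_a, fg_b, mask_a, mask_b, alpha_a, alpha_b):
      """Mask-guided alpha blending of two foreground objects onto a background."""
      out = alpha_a * mask_a * fg_a + (1 - alpha_a * mask_a) * background   # place object A
      out = alpha_b * mask_b * fg_b + (1 - alpha_b * mask_b) * out          # place object B
      return out

  h, w = 64, 64
  bg = torch.rand(3, h, w)                              # background image
  fg_a, fg_b = torch.rand(3, h, w), torch.rand(3, h, w) # the two objects to composite
  mask_a = (torch.rand(1, h, w) > 0.7).float()          # binary object masks (toy)
  mask_b = (torch.rand(1, h, w) > 0.7).float()
  alpha_a = torch.full((1, h, w), 0.9)                  # per-pixel blending weights,
  alpha_b = torch.full((1, h, w), 0.8)                  # predicted adaptively in PICS
  composited = alpha_blend(bg, fg_a, fg_b, mask_a, mask_b, alpha_a, alpha_b)
  ```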
- Rote Learning Considered Useful: Generalizing over Memorized Data in LLMs
- This paper proposes a "memorize-then-generalize" framework with a two-stage strategy: factual associations are first rote-memorized via semantics-free synthetic tokens, and the model is then fine-tuned on a small number of semantic prompts, demonstrating that LLMs can generalize from rote-memorized data. Deeper memorization yields better generalization, and the paper further identifies security risks arising from potential malicious exploitation of this mechanism.
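
  One plausible way the two stages' data could be laid out (purely illustrative; the synthetic token, facts, and prompt formats are invented, not taken from the paper):

  ```python
  # Stage 1 (rote memorization): facts keyed by a semantics-free synthetic token.
  stage1_rote = [
      ("<REL_0017> Marie Curie", "Warsaw"),    # synthetic relation token + subject -> answer
      ("<REL_0017> Alan Turing", "London"),
  ]

  # Stage 2 (generalization): a small number of natural-language prompts expressing
  # a few of the same facts, used for light semantic fine-tuning.
  stage2_semantic = [
      ("Where was Marie Curie born?", "Warsaw"),
  ]

  # Standard causal-LM fine-tuning would run on stage 1 until the pairs are memorized,
  # then briefly on stage 2; the interesting question is whether facts seen only in
  # stage 1 (e.g. Alan Turing's) become answerable in natural language afterwards.
  ```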
- Rote Learning Considered Useful: Generalizing over Memorized Training Examples
- This paper proposes a two-stage "memorize-then-generalize" framework, demonstrating that LLMs can generalize effectively after rote-memorizing synthetic key tokens, requiring only minimal semantic fine-tuning, thereby challenging the conventional view that memorization impedes generalization.
- When Large Multimodal Models Confront Evolving Knowledge: Challenges and Explorations
- This paper proposes the EVOKE benchmark to systematically evaluate the ability of Large Multimodal Models (LMMs) to incorporate evolving knowledge, identifies two core challenges (poor performance of existing methods and catastrophic forgetting induced by fine-tuning), and explores two mitigation strategies: knowledge augmentation and continual learning.
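
  A minimal sketch of the knowledge-augmentation idea in its simplest in-context, text-only form (the retrieval step and prompt format are placeholders, not EVOKE's actual protocol):

  ```python
  def augment_prompt(question, retrieved_updates):
      """Prepend retrieved, up-to-date facts to the query instead of editing weights."""
      context = "\n".join(f"- {fact}" for fact in retrieved_updates)
      return f"Updated knowledge:\n{context}\n\nQuestion: {question}"

  prompt = augment_prompt(
      "Who is the current CEO of ExampleCorp?",              # invented example query
      ["As of 2025, Jane Doe is the CEO of ExampleCorp."],   # invented retrieved fact
  )
  print(prompt)
  ```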