✏️ Knowledge Editing

📷 CVPR2026 · 3 paper notes

Attribution-Guided Model Rectification of Unreliable Neural Network Behaviors

This paper proposes an attribution-guided dynamic model rectification framework that repurposes rank-one model editing from domain adaptation to behavior rectification. By quantifying per-layer editability via Integrated Gradients, the framework automatically localizes suspect layers and repairs three categories of unreliable behaviors—backdoor attacks, spurious correlations, and feature leakage—using as few as a single clean sample.
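The localization step described above can be sketched as follows. This is a minimal, hypothetical illustration of Integrated Gradients used to score which parts of a model are most attributable (and hence most "editable"); the function and variable names are mine, not the paper's, and the toy model is linear for clarity.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    """Approximate IG attributions along the straight path baseline -> x
    via a midpoint Riemann sum of the gradient."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy linear "model" f(x) = w . x, whose gradient is the constant w,
# so IG is exact here: attr == (x - baseline) * w.
w = np.array([0.5, -2.0, 1.0])
grad_fn = lambda z: w
x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)

attr = integrated_gradients(grad_fn, x, baseline)
editability = np.abs(attr)            # attribution mass per unit/layer
suspect = int(np.argmax(editability)) # most attributable -> suspect location
```

In the paper's setting, the same idea is applied per layer: the layer accumulating the most attribution mass for the unreliable behavior is selected as the edit target.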

MoKus: Leveraging Cross-Modal Knowledge Transfer for Knowledge-Aware Concept Customization

This paper introduces a new task termed "knowledge-aware concept customization," and discovers that knowledge editing applied to LLM text encoders naturally transfers to the visual generation modality (cross-modal knowledge transfer). Building on this finding, the paper proposes MoKus: a two-stage framework that first binds a rare token to a visual concept as an anchor representation via LoRA fine-tuning, then efficiently maps multiple natural-language knowledge statements onto the anchor representation via knowledge editing—requiring only ~7 seconds per knowledge update.
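The "knowledge editing" step in stage two is, in spirit, a rank-one weight update that remaps a key (here, the anchor representation of the rare token) to a new target value. A minimal sketch, assuming the simplest closed-form rank-one edit (names and the update rule are illustrative, not MoKus's exact objective):

```python
import numpy as np

def rank_one_edit(W, k, v_star):
    """Return W' = W + (v* - W k) k^T / (k^T k), so that W' @ k == v_star
    while changing W by only a rank-one term."""
    delta = (v_star - W @ k)[:, None] @ k[None, :] / (k @ k)
    return W + delta

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # a linear layer inside the text encoder
k = rng.normal(size=3)        # key: the rare-token anchor representation
v_star = rng.normal(size=4)   # value encoding the new knowledge statement

W_new = rank_one_edit(W, k, v_star)
```

Because the update is a single closed-form rank-one correction rather than gradient fine-tuning, each knowledge statement can be mapped onto the anchor in seconds, which is consistent with the ~7-second-per-update figure the paper reports.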