🔎 AIGC Detection
💬 ACL 2026 · 9 paper notes
- Beyond the Final Actor: Modeling the Dual Roles of Creator and Editor for Fine-Grained LLM-Generated Text Detection
  - This paper proposes RACE (Rhetorical Analysis for Creator-Editor Modeling), which leverages Rhetorical Structure Theory (RST) to construct logic graphs modeling the "creator's" cognitive architecture while extracting discourse-unit-level features that capture the "editor's" linguistic style. The combination enables fine-grained four-class detection of LLM-generated text: human-written, LLM-written, LLM-polished human text, and human-rewritten LLM text.
- BIASEDTALES-ML: A Multilingual Dataset for Analyzing Narrative Attribute Distributions in LLM-Generated Stories
  - BiasedTales-ML is a corpus of ~350K LLM-generated children's stories in 8 languages, built with a full-permutation prompt design. A distributional analysis framework over the corpus reveals that social-attribute distributions in the narratives vary significantly across languages, and that English-centric evaluation fails to capture bias patterns in multilingual settings.
- CiteGuard: Faithful Citation Attribution for LLMs via Retrieval-Augmented Validation
  - CiteGuard proposes a retrieval-augmented agent framework with extended retrieval actions (including full-text search and context retrieval) to provide a more faithful basis for scientific citation attribution, achieving 68.1% accuracy on the CiteME benchmark, a 10-point improvement over baselines that approaches human performance (69.2%).
- FlexGuard: Continuous Risk Scoring for Strictness-Adaptive LLM Content Moderation
  - FlexGuard replaces binary safe/unsafe judgments in LLM content moderation with continuous risk scores (0-100), achieving state-of-the-art robustness and accuracy across deployment scenarios with varying strictness requirements through rubric-guided distillation and GRPO-based risk-alignment training.
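The benefit of continuous scores over binary labels can be sketched with a toy thresholding example. All numbers and the `moderate` function below are illustrative assumptions, not FlexGuard's actual API:

```python
# Hypothetical sketch (not FlexGuard's API): a continuous risk score
# in [0, 100] is compared against a deployment-specific strictness
# threshold, so one model can serve both strict and lenient policies.

def moderate(risk_score: float, strictness_threshold: float) -> str:
    """Block content whose risk score meets or exceeds the threshold."""
    return "block" if risk_score >= strictness_threshold else "allow"

# The same score yields different decisions under different policies.
score = 42.0
print(moderate(score, strictness_threshold=30.0))  # block  (strict deployment)
print(moderate(score, strictness_threshold=70.0))  # allow  (lenient deployment)
```

A binary classifier would have to be retrained to change its operating point; a score-producing model only needs a different threshold.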
- Frankentext: Stitching Random Text Fragments into Long-Form Narratives
  - This paper proposes Frankentext, a paradigm in which LLMs stitch random human text fragments into coherent long-form narratives under an extreme constraint (90% of the content copied verbatim from human writing). The setting exposes severe failures of existing AI-text detectors in mixed-authorship scenarios: 72% of Frankentexts are misclassified as human-written.
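As a rough illustration of the verbatim-copy constraint, the fraction of a narrative that appears verbatim in the source text can be approximated with `difflib`. This is a toy proxy, not the paper's actual measurement:

```python
import difflib

def verbatim_ratio(narrative: str, source: str, min_len: int = 10) -> float:
    """Rough fraction of the narrative's characters that occur as
    contiguous matches (>= min_len chars) in the source text."""
    matcher = difflib.SequenceMatcher(None, narrative, source, autojunk=False)
    matched = sum(b.size for b in matcher.get_matching_blocks()
                  if b.size >= min_len)
    return matched / len(narrative)

fragment = "the river bent east past the mill and the light went thin"
# A narrative copied verbatim from the source scores 1.0;
# unrelated text scores near 0.
print(verbatim_ratio(fragment, fragment))  # 1.0
```

A Frankentext-style narrative would score near 0.9 under such a measure, while freely generated LLM text would score near 0.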
- Reasoning-Based Refinement of Unsupervised Text Clusters with LLMs
  - A reasoning-based cluster-refinement framework that uses LLMs as semantic judges (rather than embedding generators) to verify and restructure unsupervised clustering outputs through coherence verification, redundancy adjudication, and label grounding, significantly improving cluster consistency and human-aligned annotation quality on social media corpora.
- Temporal Flattening in LLM-Generated Text: Comparing Human and LLM Writing Trajectories
  - This paper constructs a longitudinal writing dataset spanning 12 years and identifies "temporal flattening" in LLM-generated text: although lexical diversity is high, temporal drift along semantic and cognitive-emotional dimensions is significantly lower than in human writing. Temporal variation patterns alone suffice to distinguish human from LLM text with 94% accuracy.
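The notion of temporal drift can be sketched as the mean cosine distance between feature vectors of consecutive time windows. This is a minimal sketch; the paper's actual features and drift metric are assumptions here, and the vectors are invented:

```python
import math

def temporal_drift(window_vectors: list[list[float]]) -> float:
    """Mean cosine distance between consecutive time-window feature
    vectors; a low value indicates a 'flattened' trajectory."""
    def cos_dist(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / (norm_a * norm_b)

    dists = [cos_dist(a, b) for a, b in zip(window_vectors, window_vectors[1:])]
    return sum(dists) / len(dists)

# Invented toy vectors: a human-like trajectory shifts over time,
# while an LLM-like trajectory barely moves.
human_like = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
llm_like = [[1.0, 0.0], [0.99, 0.05], [1.0, 0.02]]
print(temporal_drift(human_like) > temporal_drift(llm_like))  # True
```

Under such a measure, "temporal flattening" corresponds to LLM trajectories with consistently low drift despite high per-window lexical diversity.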
- When Personalization Tricks Detectors: The Feature-Inversion Trap in Machine-Generated Text Detection
  - This paper reveals the "feature-inversion trap" in machine-generated text (MGT) detectors under personalization: features that separate human-written from machine-generated text in the general domain become inverted in personalized domains, causing detector performance to plummet or even flip. The proposed StyloCheck framework predicts cross-domain performance changes by quantifying a detector's reliance on inverted features, achieving a prediction correlation above 0.85.
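The inversion effect can be illustrated with a hypothetical one-feature detector. All feature values and labels below are invented for illustration, not the paper's data:

```python
# Toy illustration of the feature-inversion trap: a detector scores
# text by a single stylometric feature whose direction flips
# between the general and personalized domains.

def detect(feature_value: float, threshold: float = 0.5) -> str:
    # Learned on the general domain: higher feature value => machine.
    return "machine" if feature_value > threshold else "human"

# General domain: machine text really does score higher (invented values).
general = [(0.8, "machine"), (0.2, "human")]
# Personalized domain: the correlation inverts.
personalized = [(0.3, "machine"), (0.9, "human")]

def accuracy(samples: list[tuple[float, str]]) -> float:
    return sum(detect(v) == label for v, label in samples) / len(samples)

print(accuracy(general))       # 1.0
print(accuracy(personalized))  # 0.0 -- every decision flips with the feature
```

Because the decision rule tracks the inverted feature exactly, performance does not merely degrade toward chance; it flips, which is the trap the paper names.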
- Who Wrote This Line? Evaluating the Detection of LLM-Generated Classical Chinese Poetry
  - This paper constructs ChangAn (30,664 poems), the first benchmark for detecting LLM-generated classical Chinese poetry, and systematically evaluates 12 AI-detection methods across text granularities and generation strategies, revealing severe limitations of current Chinese-text detectors in the classical-poetry domain.