
Neural Distribution Prior for LiDAR Out-of-Distribution Detection

Conference: CVPR 2026
arXiv: 2604.09232
Code: https://cs-lzz.github.io/ndp-demo
Area: Autonomous Driving / Safety Perception
Keywords: OOD Detection, LiDAR Perception, Class Imbalance, Perlin Noise, Distribution Prior

TL;DR

NDP introduces a learnable neural distribution prior module that models the distributional structure of network predictions. Combined with Perlin-noise-based pseudo-OOD sample generation and a soft anomaly exposure strategy, NDP reaches 61.31% AP on the STU benchmark, more than 10× the previous best result.

Background & Motivation

Background: LiDAR perception is critical for autonomous driving, yet current models operate under a closed-set assumption and cannot recognize unexpected OOD objects (e.g., fallen branches, construction machinery, road debris), which may lead to severe safety consequences.

Limitations of Prior Work: LiDAR data exhibits severe class imbalance — roads and buildings account for the majority of point clouds, while traffic participants such as cyclists are extremely sparse. Existing OOD scoring functions assume uniform class distributions and thus fail on imbalanced data.

Key Challenge: Static OOD scoring overfits to frequent classes and fails on tail classes; dataset-level class priors are insufficient to correct the bias introduced by class imbalance in LiDAR data.

Goal: Design a learnable OOD scoring mechanism that adapts to class imbalance, and generate diverse auxiliary OOD samples for robust training.

Key Insight: Rather than relying on static scoring, learn the distributional patterns of network predictions; simultaneously leverage Perlin noise to synthesize OOD samples directly from training data.

Core Idea: NDP dynamically captures the logit distribution patterns of training data via an attention mechanism and corrects class-dependent confidence bias.

Method

Overall Architecture

Built upon the Mask4Former-3D framework: a sparse UNet extracts point features → an MLP generates logits for OOD detection → a Transformer decoder performs closed-set segmentation → the NDP module projects logits into a latent space and performs cross-attention with a learnable prior matrix → calibrated OOD scores are output.
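The calibration path of this pipeline can be sketched in NumPy: per-point logits are projected into a latent space, cross-attended against a learnable prior matrix \(\psi\), and the attended embedding produces a re-weighting term applied to a static max-logit score. The projection shapes, the single-head attention, the tanh-based re-weighting form, and all function names here are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ndp_score(logits, W_proj, psi, W_out):
    """Calibrate a static max-logit OOD score with a learned prior.

    logits : (N, C)  per-point closed-set logits
    W_proj : (C, D)  projection into the latent embedding space
    psi    : (K, D)  learnable prior matrix (K prior slots)
    W_out  : (D, 1)  maps the attended embedding to a scalar re-weight
    """
    # Static OOD score: negative max logit (higher = more anomalous).
    static = -logits.max(axis=1)
    # Project each point's logits into the latent space.
    q = logits @ W_proj                                        # (N, D)
    # Cross-attention of the latent embeddings against the prior matrix.
    attn = softmax(q @ psi.T / np.sqrt(psi.shape[1]), axis=1)  # (N, K)
    ctx = attn @ psi                                           # (N, D)
    # Re-weighting term W(f_Theta, psi) adjusting the static score.
    w = 1.0 + np.tanh(ctx @ W_out).squeeze(-1)                 # (N,)
    return static * w
```

In the actual model, `W_proj`, `psi`, and `W_out` would be trained jointly with the segmentation network rather than fixed.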

Key Designs

  1. Neural Distribution Prior (NDP) Module:

    • Function: Adaptively re-weights OOD scores according to the network's predicted distribution.
    • Mechanism: Each sample's logits are projected into a latent embedding space and subjected to cross-attention with a learnable prior matrix \(\psi\) to capture inter-class distributional relationships. A re-weighting term \(W(f_\Theta, \psi)\) is generated to adjust the static OOD score. NDP acts as a reference distribution to regularize model outputs, improving calibration and robustness.
    • Design Motivation: Static scoring functions ignore severe class imbalance; NDP adaptively corrects bias by learning the typical behavior of network predictions on training data.
  2. Perlin Noise OOD Synthesis:

    • Function: Generates diverse pseudo-OOD samples without requiring external datasets.
    • Mechanism: Perlin noise — a smooth, spatially coherent noise function — is used to perturb the local surface geometry of in-distribution point clouds, introducing realistic variations in shape and contour while preserving the global semantic layout.
    • Design Motivation: External datasets introduce domain adaptation complexity, while void-class points have limited diversity and many are not truly anomalous. Perlin noise, already proven effective in industrial anomaly detection, can generate diverse and geometrically consistent OOD samples.
  3. Soft Anomaly Exposure (SOE) Strategy:

    • Function: Exploits unreliable void regions as an auxiliary OOD source.
    • Mechanism: Rather than treating void points as fully reliable OOD samples, soft OOD labels are assigned to reflect their inherently uncertain nature. This allows the model to learn from ambiguous regions while preventing overfitting to specific object categories.
    • Design Motivation: Void-class points carry both "meaningful but unannotated semantics" and "truly anomalous" characteristics; hard labels lead to overfitting.
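The Perlin synthesis step above can be sketched as follows: a 2-D Perlin gradient-noise function is evaluated at each point's ground-plane coordinates and used to displace the point smoothly. This is a simplified stand-in that perturbs only the z coordinate, whereas the paper perturbs local surface geometry; the lattice hashing constants, `scale`, `amplitude`, and the displacement axis are illustrative assumptions.

```python
import numpy as np

def fade(t):
    # Perlin's quintic smoothstep: zero 1st/2nd derivative at 0 and 1.
    return t * t * t * (t * (t * 6 - 15) + 10)

def make_perlin2d(seed=0):
    """Return a 2-D Perlin gradient-noise function on a unit lattice."""
    ang = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, 1024)
    gx, gy = np.cos(ang), np.sin(ang)

    def noise(x, y):
        xi, yi = np.floor(x).astype(int), np.floor(y).astype(int)
        xf, yf = x - xi, y - yi

        def corner_dot(ox, oy):
            # Hash the lattice corner to a pseudo-random unit gradient,
            # then dot it with the offset from that corner to the point.
            h = ((xi + ox) * 1619 ^ (yi + oy) * 31337) & 1023
            return gx[h] * (xf - ox) + gy[h] * (yf - oy)

        u, v = fade(xf), fade(yf)
        top = corner_dot(0, 0) + u * (corner_dot(1, 0) - corner_dot(0, 0))
        bot = corner_dot(0, 1) + u * (corner_dot(1, 1) - corner_dot(0, 1))
        return top + v * (bot - top)

    return noise

def synthesize_ood(points, scale=2.0, amplitude=0.3, seed=0):
    """Displace z of an (N, 3) point cloud with smooth Perlin noise."""
    noise = make_perlin2d(seed)
    out = points.copy()
    out[:, 2] += amplitude * noise(points[:, 0] * scale, points[:, 1] * scale)
    return out
```

Because Perlin noise is spatially coherent (and exactly zero at lattice points), the displaced surface stays smooth rather than jittered, which matches the goal of geometrically consistent anomalies.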

Loss & Training

Closed-set segmentation and OOD detection are trained jointly. Perlin-synthesized OOD samples and void regions (with soft labels) provide negative supervision; the NDP module's re-weighting term adjusts the final OOD score at inference time.
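The soft negative supervision can be illustrated with a binary cross-entropy over per-point OOD scores, where Perlin-synthesized points get a hard positive target, inliers a hard negative, and void points a soft target reflecting their uncertainty. The `soft_void` value, the point-type encoding, and the function name are illustrative assumptions; the paper's exact loss weighting may differ.

```python
import numpy as np

def soft_ood_bce(scores, point_type, soft_void=0.7):
    """BCE over per-point OOD scores with soft targets for void points.

    scores     : (N,) raw OOD logits
    point_type : (N,) 0 = inlier, 1 = synthetic OOD (Perlin), 2 = void
    soft_void  : soft OOD target assigned to inherently ambiguous void points
    """
    p = 1.0 / (1.0 + np.exp(-scores))  # sigmoid OOD probability
    # Hard targets for inliers (0) and synthetic OOD (1); soft for void.
    target = np.where(point_type == 2, soft_void,
                      (point_type == 1).astype(float))
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
```

A soft target keeps the gradient on void points bounded, so the model learns from ambiguous regions without being pushed to treat every unannotated point as fully anomalous.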

Key Experimental Results

Main Results

| Dataset | Metric | Ours | Prev. SOTA | Gain |
|---|---|---|---|---|
| STU Test Set | Point-level AP | 61.31% | ~6% | >10× |
| SemanticKITTI | OOD AP | SOTA | n/a | Significant |

Ablation Study

| Configuration | Key Metric | Note |
|---|---|---|
| w/o NDP Module | AP drops significantly | Static scoring cannot handle imbalance |
| w/o Perlin Synthesis | AP drops | Insufficient auxiliary OOD samples |
| w/o SOE (hard labels) | AP drops | Overfitting on void points |
| Full NDP Framework | 61.31% AP | All three components work synergistically |

Key Findings

  • The NDP module is compatible with various OOD scoring functions, demonstrating that the calibration capability of the distribution prior is generalizable.
  • OOD samples synthesized via Perlin noise are more effective than both void-class points and external datasets.
  • The large gap between 61.31% AP and the previous ~6% AP confirms that class imbalance is the core bottleneck for LiDAR OOD detection.

Highlights & Insights

  • 10× Performance Leap: The jump from ~6% AP to 61.31% AP reveals that prior methods were nearly non-functional on LiDAR OOD tasks, with class imbalance being the central issue.
  • Creative Application of Perlin Noise: A noise function borrowed from computer graphics proves highly effective for generating geometrically consistent 3D anomaly samples.
  • NDP as a General Calibration Module: Compatible with multiple existing OOD scoring functions, demonstrating strong extensibility.

Limitations & Future Work

  • Validation is primarily conducted on SemanticKITTI and STU; performance on larger-scale datasets (e.g., nuScenes) remains untested.
  • Perlin noise synthesis remains a geometry-based perturbation approach; generated OOD samples may lack semantic diversity.
  • The cross-attention mechanism in NDP introduces additional computational overhead; real-time feasibility requires further evaluation.
Comparison with Related Methods

  • vs. LiON: LiON synthesizes anomalous shapes from ShapeNet and requires external datasets; NDP generates OOD samples directly from training data.
  • vs. REAL: REAL generates pseudo-OOD representations by scaling point clouds, yielding limited diversity.

Rating

  • Novelty: ⭐⭐⭐⭐ Both the learnable distribution prior and Perlin noise synthesis are novel designs.
  • Experimental Thoroughness: ⭐⭐⭐⭐ The 10× improvement is highly convincing.
  • Writing Quality: ⭐⭐⭐⭐ Problem analysis is thorough and well-structured.
  • Value: ⭐⭐⭐⭐⭐ Opens a new performance frontier for LiDAR OOD detection.