
VLM-Loc: Localization in Point Cloud Maps via Vision-Language Models

Conference: CVPR 2026 | arXiv: 2603.09826 | Code: Available (see paper repository) | Area: Multimodal VLM | Keywords: Text-to-point-cloud localization, BEV, scene graph, VLM spatial reasoning, autonomous driving

TL;DR

This paper proposes VLM-Loc, a framework that converts 3D point cloud maps into BEV images and scene graphs for structured spatial reasoning with VLMs, and introduces a Partial Node Assignment (PNA) mechanism for fine-grained text-to-point-cloud localization. On the newly constructed CityLoc benchmark, VLM-Loc improves Recall@5m by 14.20 percentage points over the previous state of the art.

Background & Motivation

Background: Text-to-point-cloud (T2P) localization aims to infer precise spatial positions within 3D point cloud maps from natural language descriptions—a task relevant, for example, to passengers in robotaxi scenarios describing their surroundings to aid localization. Existing methods such as Text2Pos, Text2Loc, and CMMLoc adopt a coarse-to-fine strategy.

Limitations of Prior Work: (a) The sub-maps used in the fine localization stage are typically constrained to small, simple regions (e.g., 30m×30m), overly simplifying real-world environmental complexity; (b) existing methods follow an end-to-end position prediction paradigm that lacks explicit spatial reasoning, limiting localization accuracy in complex environments.

Key Challenge: Simple text-to-point-cloud correspondence matching cannot effectively handle large-scale, complex spatial environments—models must be capable of interpreting spatial relationships expressed in language and grounding them in the environment.

Goal: (a) Perform fine-grained T2P localization over larger and more complex regions; (b) introduce explicit spatial reasoning capability; (c) handle partial matching between textual descriptions and the map.

Key Insight: Leverage the powerful multimodal reasoning capability of VLMs for spatial description understanding and localization, by converting 3D point clouds into BEV images and scene graphs that VLMs can process.

Core Idea: Convert point clouds into BEV images and scene graphs for VLM spatial reasoning; use a Partial Node Assignment mechanism to explicitly align textual cues with scene graph nodes, enabling interpretable fine-grained localization.

Method

Overall Architecture

The input point cloud map undergoes a two-stage conversion: (1) generating a BEV image (a bird's-eye-view color projection) that provides a dense geometric layout; and (2) constructing a scene graph in which each object is represented as a node encoding a semantic label and BEV pixel coordinates. The VLM (Qwen3-VL-8B-Instruct) receives the BEV image as visual input, and the scene graph, system prompt, and text query as textual input. Through autoregressive decoding, it outputs partial node assignments and a position estimate.
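
As a rough illustration of this interface: the exact prompt and output schema are not specified in this summary, so the serialization and JSON layout below are assumptions, not the authors' format.

```python
# Hypothetical serialization of scene-graph nodes into the text prompt.
nodes = [
    {"id": 0, "label": "building", "pixel": (112, 48)},
    {"id": 1, "label": "tree",     "pixel": (140, 67)},
]
scene_graph_text = "\n".join(
    f"node {n['id']}: {n['label']} at pixel {n['pixel']}" for n in nodes
)

# Hypothetical decoded output: partial node assignments plus a 2D pixel position
# that is subsequently converted back to world coordinates.
example_output = {
    "assignments": [
        {"mention": "the tall building", "node": 0},
        {"mention": "a mailbox", "node": None},  # mentioned in the text but absent from the map
    ],
    "position": [131, 59],  # (u, v) in BEV pixels
}
```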

Key Designs

  1. BEV Rendering and Scene Graph Construction:

    • Function: Convert 3D point clouds into two complementary representations processable by VLMs.
    • Mechanism: The BEV image is obtained by projecting the point cloud onto the ground plane and rasterizing it (\(I \in \mathbb{R}^{H \times W \times 3}\)), with each object rendered in its mean RGB color. The scene graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) represents each object as a node \(n_i=(i, l_i, \mathbf{u}_i)\) encoding its index, semantic label, and BEV pixel coordinates (see the sketches after this list).
    • Design Motivation: BEV images provide dense visual cues but lack explicit semantics; scene graphs provide structured relational information. The two representations are complementary, enabling the VLM to leverage both fine-grained geometric cues and high-level semantic relationships.
  2. Partial Node Assignment (PNA) Mechanism:

    • Function: Explicitly supervise the VLM to align object mentions in the text with scene graph nodes, handling partial visibility.
    • Mechanism: For each object mentioned in the text query, the distance between the object's full projected center in the map and the center of its visible portion within the pose cell is computed. If this distance is below a threshold \(\tau\), the object is marked as valid and linked to the corresponding node; otherwise it is marked as invalid (null assignment). The threshold is category-dependent: 5m for the "object" class and 15m for the "stuff" class (see the sketches after this list).
    • Design Motivation: Map coverage is limited, and objects mentioned in the text may lie outside the map extent. PNA trains the model to determine which cues are usable and which are not, improving robustness.
  3. Position Estimation:

    • Function: Predict the target 2-DoF position in the BEV image coordinate system based on the node assignment results.
    • Mechanism: Position prediction is integrated into VLM autoregressive decoding. The model outputs a JSON structure containing matched text–node pairs and a 2D pixel position, which is then converted to world coordinates.
    • Design Motivation: A unified decoding strategy ensures consistency in reasoning from correspondences to spatial coordinates.
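
To make the two representations concrete, here is a minimal sketch of the BEV rasterization, scene-graph construction, and pixel-to-world conversion. It is illustrative only, not the authors' code: the resolution, extent handling, and color averaging are assumptions.

```python
import numpy as np

def rasterize_bev(points_xyz, points_rgb, extent, resolution=0.2):
    """Project a colored point cloud onto the ground plane and average colors
    per cell to form an H x W x 3 BEV image.
    extent = (x_min, y_min, x_max, y_max) in meters; resolution is meters per
    pixel (both values are assumptions, not taken from the paper)."""
    x_min, y_min, x_max, y_max = extent
    W = int(np.ceil((x_max - x_min) / resolution))
    H = int(np.ceil((y_max - y_min) / resolution))
    bev = np.zeros((H, W, 3), dtype=np.float32)
    counts = np.zeros((H, W), dtype=np.int64)

    # World (x, y) -> pixel (u, v); v is flipped so that +y points up in the image.
    u = ((points_xyz[:, 0] - x_min) / resolution).astype(int)
    v = ((y_max - points_xyz[:, 1]) / resolution).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    np.add.at(bev, (v[valid], u[valid]), points_rgb[valid])
    np.add.at(counts, (v[valid], u[valid]), 1)
    nonzero = counts > 0
    bev[nonzero] /= counts[nonzero][:, None]
    return bev

def build_scene_graph(objects, extent, resolution=0.2):
    """One node per object: (index, semantic label, BEV pixel coordinates).
    `objects` is a hypothetical list of dicts: {"label": str, "center_xy": (x, y)}."""
    x_min, y_min, x_max, y_max = extent
    nodes = []
    for i, obj in enumerate(objects):
        x, y = obj["center_xy"]
        u = int((x - x_min) / resolution)
        v = int((y_max - y) / resolution)
        nodes.append({"id": i, "label": obj["label"], "pixel": (u, v)})
    return nodes

def pixel_to_world(u, v, extent, resolution=0.2):
    """Invert the BEV projection to map a predicted pixel back to a 2-DoF world position."""
    x_min, _, _, y_max = extent
    return (x_min + u * resolution, y_max - v * resolution)
```

Similarly, a rough sketch of how Partial Node Assignment labels could be generated, using the category-dependent thresholds stated in the paper (5m for "object", 15m for "stuff"); the data layout of `mentions` is hypothetical.

```python
import math

# Category-dependent thresholds from the paper (meters).
PNA_THRESHOLDS = {"object": 5.0, "stuff": 15.0}

def assign_nodes(mentions):
    """Produce partial node assignments for the objects mentioned in a text query.
    `mentions` is a hypothetical list of dicts with keys:
      label, category ("object" or "stuff"), map_center (x, y) of the full object,
      visible_center (x, y) of its visible part inside the pose cell, node_id."""
    assignments = []
    for m in mentions:
        dist = math.dist(m["map_center"], m["visible_center"])
        tau = PNA_THRESHOLDS[m["category"]]
        if dist < tau:
            # Usable cue: link the mention to its scene-graph node.
            assignments.append({"mention": m["label"], "node": m["node_id"]})
        else:
            # Cue not reliably inside the pose cell: null assignment.
            assignments.append({"mention": m["label"], "node": None})
    return assignments
```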

Loss & Training

Standard autoregressive cross-entropy loss is used for training. LoRA fine-tuning (rank=8, \(\alpha\)=16) is performed via the Swift framework, updating only LoRA parameters while keeping the visual encoder and language backbone frozen. Training runs for 2 epochs on 8×RTX 4090 GPUs.
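
The paper performs this fine-tuning with the Swift framework; as a minimal illustration of an equivalent LoRA setup, here is a sketch using Hugging Face PEFT with the paper's rank and alpha. The stand-in backbone, target modules, and dropout are assumptions.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Small stand-in backbone for illustration; the paper fine-tunes
# Qwen3-VL-8B-Instruct with the Swift framework, not PEFT.
backbone = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

lora_cfg = LoraConfig(
    r=8,                # rank from the paper
    lora_alpha=16,      # alpha from the paper
    lora_dropout=0.05,  # assumed; not given in the summary
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(backbone, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```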

Key Experimental Results

Main Results — CityLoc-K Localization Accuracy

| Method | Val R@5m | Val R@10m | Test R@5m | Test R@10m |
|---|---|---|---|---|
| Text2Pos | 16.48 | 40.69 | 14.62 | 38.27 |
| Text2Loc | 18.91 | 45.26 | 17.97 | 41.22 |
| CMMLoc | 20.77 | 48.65 | 21.71 | 46.67 |
| VLM-Loc | 36.23 | 63.66 | 35.91 | 63.81 |

Ablation Study — Component Contributions

| Configuration | BEV | SG | PNA | Test R@5m |
|---|---|---|---|---|
| (a) BEV only | ✓ | | | 13.21 |
| (b) SG only | | ✓ | | 24.62 |
| (c) SG+PNA | | ✓ | ✓ | 32.34 |
| (d) BEV+SG | ✓ | ✓ | | 29.79 |
| (e) Full | ✓ | ✓ | ✓ | 35.91 |

Key Findings

  • VLM-Loc achieves 35.91% Recall@5m on the CityLoc-K test set, surpassing the strongest baseline CMMLoc by 14.20 percentage points.
  • Scene graphs contribute more to localization than BEV images (24.62 vs. 13.21), indicating that relational structure is more informative than dense appearance.
  • PNA contributes substantially: adding it lifts SG-only from 24.62% to 32.34% (+7.72 percentage points) and BEV+SG from 29.79% to 35.91% (+6.12 percentage points).
  • Directional cues are the most critical textual component: removing them reduces R@5m from 35.91% to 18.01%.
  • Cross-domain generalization is strong: VLM-Loc also leads by a large margin on CityLoc-C, which uses an entirely different point cloud source (UAV aerial vs. vehicle-mounted LiDAR).

Highlights & Insights

  • Paradigm innovation for VLM-based T2P localization: This work is the first to apply VLM spatial reasoning to text-to-point-cloud localization, bridging 3D point clouds and 2D VLMs via BEV images and scene graphs—a conceptually elegant approach.
  • Partial Node Assignment mechanism: Elegantly addresses the practical problem of objects mentioned in the text being absent from the map; this design yields an 18%+ improvement over full assignment and is of genuine practical value.
  • Dominant role of directional information: Experiments clearly demonstrate the decisive contribution of directional cues to spatial reasoning, with performance nearly halved upon their removal.

Limitations & Future Work

  • Text queries are generated automatically from templates and may diverge from natural human language descriptions.
  • BEV rendering discards height information, which may be insufficient for scenarios requiring full 3D reasoning.
  • LoRA fine-tuning may limit the VLM's ability to adapt to domain shift in the BEV image space.
  • Although larger and more complex than KITTI360Pose, the CityLoc benchmark remains predominantly urban.
  • Iterative dialogue-based localization (multi-turn interaction for progressive refinement) has not been explored.

Comparison with Related Work

  • vs. Text2Pos / Text2Loc / CMMLoc: These methods directly learn text-to-3D correspondences without explicit reasoning; VLM-Loc substantially outperforms them through structured representation combined with VLM reasoning.
  • vs. 3DRS / SpatialVLM and similar VLM+3D methods: These primarily address indoor scene understanding and grounding; VLM-Loc is the first to apply such an approach to large-scale outdoor localization.

Rating

  • Novelty: ⭐⭐⭐⭐ Applying VLMs to T2P localization is a novel direction; the BEV+scene graph conversion design is creative.
  • Experimental Thoroughness: ⭐⭐⭐⭐ Comprehensive ablations including cross-domain generalization and multi-backbone experiments.
  • Writing Quality: ⭐⭐⭐⭐ Clear structure with well-defined problem formulation.
  • Value: ⭐⭐⭐⭐ Provides an effective paradigm for applying VLM spatial reasoning to localization tasks.