
Universal 3D Shape Matching via Coarse-to-Fine Language Guidance

Conference: CVPR 2026 arXiv: 2602.19112 Code: None Area: Segmentation Keywords: 3D Shape Matching, Functional Maps, Language Guidance, Contrastive Learning, Cross-Category Correspondence

TL;DR

This paper proposes UniMatch, a semantics-aware coarse-to-fine 3D shape matching framework. The coarse stage establishes part-level correspondences via category-agnostic 3D segmentation, MLLM-based part naming, and FG-CLIP language embeddings. The fine stage learns dense correspondences within an extended functional map framework using a Group-wise Ranking Contrastive (RnC) Loss, enabling universal matching across categories and non-isometric shapes.

Background & Motivation

3D shape matching is a core task in computer vision and graphics, with applications in texture transfer, parametric human body modeling, robotic manipulation, and shape interpolation. Existing methods face three key challenges:

Isometry assumption in functional map methods: Classical functional maps and their deep learning variants rely on near-isometry assumptions, leading to performance degradation under large non-isometric deformations or topological noise. Moreover, purely geometric cues are insufficient to support cross-category matching.

Limitations of semantic methods: Diff3F relies on diffusion models but lacks generality; DenseMatcher requires manual part annotations; ZSC requires predefined part proposals, limiting generalization to open-world objects.

Lack of a universal solution: Existing methods either handle only same-category shapes or require category-specific priors, and cannot process in-the-wild objects in a fully unsupervised setting.

The core insight of UniMatch is to elevate coarse semantic cues into fine-grained correspondences—first using language to establish part-level semantic associations, then driving dense matching via ranking contrastive learning.

Method

Overall Architecture

UniMatch is a two-stage framework:

  • Coarse stage: Category-agnostic 3D segmentation → MLLM-based part naming → FG-CLIP language embeddings → implicit part-level correspondences
  • Fine stage: Extended functional map pipeline + SD-DINO semantic feature field + Group-wise RnC contrastive loss → dense correspondences

Key Designs

Coarse Stage: Semantic Region Relationship Establishment

Category-Agnostic Part Segmentation

Function: PartField is applied to input 3D shapes to obtain category-agnostic, non-overlapping semantic regions.

Mechanism: Given input shape \(\mathcal{X}\) and part count \(n_\mathcal{R}\), segmentation results \(\mathcal{R}_x\) are obtained directly, without predefined part proposals or category prompts.

Design Motivation: Four reasons motivate the choice of PartField over text-prompted segmentation: (i) text-referral methods perform poorly on textureless, low-resolution meshes; (ii) requiring predefined semantic part names limits open-vocabulary objects; (iii) incomplete shape coverage leads to incomplete matching; (iv) PartField supports fast feed-forward inference.
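To make the segmentation step concrete, here is a minimal sketch of splitting a shape into \(n_\mathcal{R}\) non-overlapping regions from per-point features. K-means clustering is used purely as a runnable stand-in for PartField's feed-forward predictor, whose actual API is not reproduced here; the input could be PartField's learned per-point field or, as a crude placeholder, raw xyz coordinates.

```python
import numpy as np
from sklearn.cluster import KMeans


def segment_into_regions(point_features: np.ndarray, n_regions: int) -> list[np.ndarray]:
    """Split a shape into n_R non-overlapping regions by clustering per-point features.

    KMeans is only a runnable placeholder for PartField's feed-forward predictor;
    `point_features` could be PartField's learned per-point field, or raw xyz
    coordinates as a crude stand-in.
    """
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(point_features)
    # Each region R_r is the set of point indices assigned to cluster r.
    return [np.where(labels == r)[0] for r in range(n_regions)]


# Example: 8 regions from per-point features of a 5,000-point shape.
regions = segment_into_regions(np.random.rand(5000, 3), n_regions=8)
```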

Multimodal Semantic Region Naming

Function: An MLLM (GPT-5) assigns part names to each semantic region.

Mechanism: 3D masks are rendered into multi-view images; each 2D mask is overlaid on the original image and submitted to GPT-5 for naming. Masks occupying less than 5% of pixels are discarded, and results are aggregated into the 3D domain via known camera parameters.

Design Motivation: A key advantage is that the MLLM is used only during training, unlike ZSC which also requires it at inference time.
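A minimal sketch of the per-region naming logic follows. The 5% mask-area filter comes from the paper; the `ask_mllm` callable is an assumed wrapper around the GPT-5 query, and the majority vote is a simplification of the paper's camera-based aggregation of 2D results back into the 3D domain.

```python
from collections import Counter
from typing import Callable, Optional

import numpy as np


def name_region(
    view_images: list[np.ndarray],                       # multi-view RGB renders of the shape
    region_masks: list[np.ndarray],                      # per-view boolean masks of one 3D region
    ask_mllm: Callable[[np.ndarray, np.ndarray], str],   # assumed wrapper around the GPT-5 query
    min_area_ratio: float = 0.05,
) -> Optional[str]:
    """Assign a part name to one 3D region from its multi-view 2D masks (sketch).

    Masks covering less than 5% of the image are discarded, as in the paper;
    the remaining per-view answers are merged by majority vote, a simplification
    of the camera-based aggregation into the 3D domain.
    """
    votes = []
    for image, mask in zip(view_images, region_masks):
        if mask.mean() < min_area_ratio:   # region occupies < 5% of this view's pixels
            continue
        votes.append(ask_mllm(image, mask))
    return Counter(votes).most_common(1)[0][0] if votes else None
```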

Language-Based Disambiguation

Function: FG-CLIP language embeddings establish implicit part correspondences rather than explicit hard-coded mappings.

Mechanism: Part names are mapped to the FG-CLIP embedding space \(\mathcal{E} \in \mathbb{R}^{C_{\text{lang}}}\), and inter-part semantic similarity is measured by embedding distance. For example, the "mouth" of a human and the "muzzle" of a dog naturally converge in the embedding space.

Design Motivation: Continuous language embeddings are more robust than explicit hard-coded correspondences, handle ambiguity in MLLM outputs, and reveal semantic ordering relationships among parts.
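As a small illustration of these implicit correspondences, the sketch below scores every part of \(\mathcal{X}\) against every part of \(\mathcal{Y}\) by cosine similarity of their name embeddings; the FG-CLIP text-encoder call itself is omitted and the embeddings are taken as given.

```python
import numpy as np


def part_similarity(emb_x: np.ndarray, emb_y: np.ndarray) -> np.ndarray:
    """Cosine similarity between the part-name embeddings of two shapes.

    emb_x: (n_Rx, C_lang) language embeddings of shape X's part names;
    emb_y: (n_Ry, C_lang) embeddings of shape Y's part names
    (e.g., from FG-CLIP's text encoder, not called here).
    Entry (i, j) is the implicit soft correspondence between part i of X and
    part j of Y, so "mouth" and "muzzle" score high without any hard-coded mapping.
    """
    ex = emb_x / np.linalg.norm(emb_x, axis=1, keepdims=True)
    ey = emb_y / np.linalg.norm(emb_y, axis=1, keepdims=True)
    return ex @ ey.T   # (n_Rx, n_Ry)
```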

Fine Stage: Dense Correspondence Learning

Semantic Feature Field

Function: Constructs per-vertex features combining geometric and semantic information.

Mechanism: Geometric descriptors \(\boldsymbol{f}_{\text{geo}}\) (WKS) and semantic features \(\boldsymbol{f}_{\text{sem}}\) extracted via SD-DINO + FeatUp are concatenated and fed into a refinement network (DiffusionNet):

\[\boldsymbol{f}_{\text{in}} = \text{Concat}(\boldsymbol{f}_{\text{geo}}, \boldsymbol{f}_{\text{sem}})\]

For textureless shapes, SyncMVD is applied for view-consistent texture synthesis.
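A minimal sketch of the feature field construction, with a pointwise MLP standing in for DiffusionNet so the snippet is self-contained; the feature dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn


class FeatureFieldRefiner(nn.Module):
    """Per-vertex semantic feature field: f_in = Concat(f_geo, f_sem) -> refinement net.

    A pointwise MLP stands in for DiffusionNet; dimensions are illustrative.
    """

    def __init__(self, geo_dim: int = 128, sem_dim: int = 768, out_dim: int = 256):
        super().__init__()
        self.refine_net = nn.Sequential(
            nn.Linear(geo_dim + sem_dim, 512), nn.ReLU(), nn.Linear(512, out_dim)
        )

    def forward(self, f_geo: torch.Tensor, f_sem: torch.Tensor) -> torch.Tensor:
        # f_geo: (n_vertices, geo_dim) WKS descriptors
        # f_sem: (n_vertices, sem_dim) SD-DINO features lifted to the surface (via FeatUp renders)
        f_in = torch.cat([f_geo, f_sem], dim=-1)
        return self.refine_net(f_in)
```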

Group-wise Ranking Contrastive Loss (Group-wise RnC Loss)

Function: Supervises dense correspondence learning by exploiting the ordinal relationships in language embeddings.

Mechanism: Traditional contrastive losses require explicit positive/negative pairs, which is ill-suited here because semantic relatedness between parts is continuous rather than binary. The RnC Loss instead leverages language-embedding distances to define ranking relationships, sorting all samples by their semantic distance to the anchor before contrasting them.

For anchor feature \(\boldsymbol{f}_i^x\) and reference group \(\mathcal{G}_j^y\), negative samples are dynamically grouped by language-embedding distance into a candidate set \(\mathcal{S}_{i,j}\), comprising the groups at least as semantically distant from the anchor as \(\mathcal{G}_j^y\):

\[\mathbb{P}(\mathcal{G}_j^y \mid \boldsymbol{f}_i^x, \mathcal{S}_{i,j}) = \frac{\sum_{\boldsymbol{f}_l^y \in \mathcal{G}_j^y} \exp(\text{sim}(\boldsymbol{f}_i^x, \boldsymbol{f}_l^y)/\tau)}{\sum_{\boldsymbol{f}_k^y \in \mathcal{S}_{i,j}} \exp(\text{sim}(\boldsymbol{f}_i^x, \boldsymbol{f}_k^y)/\tau)}\]

The final loss is the mean negative log-likelihood over all source anchors:

\[\mathcal{L}_{\text{RnC}} = \frac{1}{n_x} \sum_{i=1}^{n_x} \ell_{\text{RnC}}^{(i)}(\mathcal{X}, \mathcal{Y})\]

Design Motivation: Complexity is reduced from point-wise contrastive (\(O(n_x \times n_y)\)) to group-wise contrastive (\(O(n_x \times n_R)\)), where \(n_R \ll n_y\), while inter-group dependencies are modeled via embedding distances to maintain semantic consistency.
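The sketch below illustrates the group-wise ranking idea under stated assumptions: for each anchor vertex of \(\mathcal{X}\), the groups on \(\mathcal{Y}\) are ranked by language-embedding distance to the anchor's own part, and each target group is contrasted only against groups at least as distant. Sampling, weighting, and normalization details are simplified relative to the paper.

```python
import torch
import torch.nn.functional as F


def groupwise_rnc_loss(
    feat_x: torch.Tensor,    # (n_x, d) refined features of shape X's vertices
    feat_y: torch.Tensor,    # (n_y, d) refined features of shape Y's vertices
    region_x: torch.Tensor,  # (n_x,) region index of every X vertex
    region_y: torch.Tensor,  # (n_y,) region index of every Y vertex
    emb_x: torch.Tensor,     # (n_Rx, C_lang) language embeddings of X's part names
    emb_y: torch.Tensor,     # (n_Ry, C_lang) language embeddings of Y's part names
    tau: float = 0.07,
) -> torch.Tensor:
    """Group-wise Ranking Contrastive loss (illustrative sketch).

    For each anchor on X, target group G_j is contrasted only against the set
    S_{i,j} of Y groups whose language-embedding distance to the anchor's part
    is at least that of G_j. Assumes every group index 0..n_Ry-1 owns at least
    one vertex on Y.
    """
    fx, fy = F.normalize(feat_x, dim=-1), F.normalize(feat_y, dim=-1)
    sim = fx @ fy.t() / tau                               # (n_x, n_y) feature similarities
    # Language-embedding distance from every X part to every Y part.
    dist = torch.cdist(F.normalize(emb_x, dim=-1), F.normalize(emb_y, dim=-1))
    anchor_dist = dist[region_x]                          # (n_x, n_Ry): anchor's part vs. every Y group

    n_ry = emb_y.shape[0]
    # log-sum-exp of feature similarities inside each Y group (numerator of P).
    group_lse = torch.stack(
        [torch.logsumexp(sim[:, region_y == j], dim=1) for j in range(n_ry)], dim=1
    )                                                     # (n_x, n_Ry)

    loss = feat_x.new_zeros(())
    for j in range(n_ry):
        # S_{i,j}: groups at least as far from the anchor (in language space) as group j.
        in_s = anchor_dist >= anchor_dist[:, j : j + 1]   # always contains j itself
        denom = torch.logsumexp(group_lse.masked_fill(~in_s, float("-inf")), dim=1)
        loss = loss + (denom - group_lse[:, j]).mean()    # -log P(G_j | f_i, S_{i,j}), averaged over anchors
    return loss / n_ry
```

Note the group-wise structure: feature similarities are aggregated per region of \(\mathcal{Y}\), so the contrastive comparison runs over \(n_R\) groups rather than \(n_y\) individual points.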

Loss & Training

The total loss combines the functional map objective with the ranking contrastive term:

\[\mathcal{L} = \mathcal{L}_{\text{fm}} + \mathcal{L}_{\text{RnC}}\]

The functional map objective comprises:

  • Data preservation loss \(\mathcal{L}_{\text{data}}\): retains the refined features
  • Regularization loss \(\mathcal{L}_{\text{reg}}\): enforces bijectivity and orthogonality
  • Coupling loss \(\mathcal{L}_{\text{couple}}\): ensures consistency between soft correspondences and functional maps

The framework builds on URSSM, with DiffusionNet as the feature refinement network. MLLM prompting is used only during training and is not required at inference time.
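For completeness, a trivial sketch of the loss composition; the three functional-map terms are assumed to be produced upstream by the URSSM-style pipeline, and any per-term weighting is omitted, matching the unweighted sum written above.

```python
import torch


def total_loss(
    l_data: torch.Tensor,    # data preservation term from the functional-map pipeline
    l_reg: torch.Tensor,     # bijectivity / orthogonality regularization
    l_couple: torch.Tensor,  # coupling between soft correspondences and functional maps
    l_rnc: torch.Tensor,     # group-wise RnC term from the sketch above
) -> torch.Tensor:
    """L = L_fm + L_RnC, with L_fm = L_data + L_reg + L_couple (weights omitted)."""
    l_fm = l_data + l_reg + l_couple
    return l_fm + l_rnc
```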

Key Experimental Results

Main Results

Cross-category shape matching (mean geodesic error, lower is better):

| Method | SNIS | TOSCA | SHREC07 |
| --- | --- | --- | --- |
| ZoomOut | 0.51 | 0.55 | 0.57 |
| URSSM | 0.49 | 0.53 | 0.49 |
| Diff3F | 0.57 | 0.45 | 0.50 |
| ZSC | 0.36 | 0.56 | 0.60 |
| DenseMatcher | 0.28 | 0.30 | 0.39 |
| UniMatch | 0.19 | 0.23 | 0.37 |

Non-isometric shape matching (mean geodesic error ×100):

| Method | SMAL | TOPKIDS |
| --- | --- | --- |
| URSSM | 6.0 | 8.9 |
| DenseMatcher | 4.7 | 6.2 |
| UniMatch | 4.8 | 5.9 |

Near-isometric shape matching (mean geodesic error ×100):

| Method | FAUST | SCAPE | SHREC19 |
| --- | --- | --- | --- |
| URSSM | 1.6 | 1.9 | 5.7 |
| DenseMatcher | 1.6 | 2.0 | 3.1 |
| UniMatch | 1.6 | 1.9 | 3.2 |

Ablation Study

| Variant | SNIS | TOSCA | SHREC07 |
| --- | --- | --- | --- |
| *Language embedding model* | | | |
| CLIP | 0.21 | 0.26 | 0.37 |
| SigLIP | 0.19 | 0.24 | 0.37 |
| FG-CLIP (ours) | 0.19 | 0.23 | 0.37 |
| *Semantic feature field* | | | |
| Geometry only | 0.49 | 0.53 | 0.49 |
| Geometry + Semantics (ours) | 0.22 | 0.26 | 0.39 |
| *Contrastive loss* | | | |
| SupCon loss | 0.21 | 0.29 | 0.40 |
| No contrastive loss | 0.22 | 0.26 | 0.39 |
| Group-wise RnC (ours) | 0.19 | 0.23 | 0.37 |

Key Findings

  1. Cross-category matching advantage is substantial: error on SNIS drops from 0.28 (DenseMatcher) to 0.19, a relative improvement of 32%.
  2. The semantic feature field is critical: removing it raises error from 0.19 to 0.49 on SNIS, demonstrating that geometric descriptors alone cannot support semantic matching.
  3. Group-wise RnC outperforms SupCon, as SupCon relies on discrete positive sample selection and cannot capture the continuous semantic relationships encoded in language embeddings.
  4. FG-CLIP outperforms standard CLIP, particularly on TOSCA (0.23 vs. 0.26), confirming the value of fine-grained embeddings.
  5. UniMatch achieves state-of-the-art or competitive performance across all three settings—near-isometric, non-isometric, and cross-category—realizing true universality.
  6. The learned features exhibit emergent semantically consistent co-segmentation capability, despite not being explicitly designed for this purpose.

Highlights & Insights

  • Language as a universal semantic bridge: Leveraging natural language embeddings to resolve semantic alignment in cross-category matching is an elegant solution—"mouth" and "muzzle" are naturally associated in the continuous embedding space.
  • The coarse-to-fine cascaded design avoids the difficulties of cross-modal alignment in end-to-end training; the coarse stage provides structured supervision signals while the fine stage focuses on refinement.
  • The Group-wise RnC Loss is the core contribution: it reduces the intractable \(O(n^2)\) point-wise contrastive objective to \(O(n \times n_R)\), while exploiting semantic ordering rather than binary positive/negative labels.
  • The MLLM is used only for training data processing and is not invoked at inference time, making the approach practical for real-world deployment.

Limitations & Future Work

  • Geometrically symmetric parts (e.g., all chair legs labeled "leg") can be matched in the wrong order; resolving this would require object orientation information.
  • The approach depends on PartField segmentation quality—segmentation errors propagate to downstream matching.
  • Textureless shapes require SyncMVD texture synthesis, introducing additional computation and potential artifacts.
  • End-to-end runtime efficiency (PartField + GPT-5 + SD-DINO) has not been evaluated alongside matching accuracy.
  • Matching under extreme topological differences (e.g., octopus vs. table) may still fail.

Rating

  • Novelty: ⭐⭐⭐⭐⭐ — The first systematic integration of language guidance into 3D shape matching; both the coarse-to-fine framework and the Group-wise RnC Loss are original contributions.
  • Experimental Thoroughness: ⭐⭐⭐⭐⭐ — Covers six benchmarks across three settings (cross-category, non-isometric, near-isometric), with complete ablations and generalization demonstrations on co-segmentation and in-the-wild objects.
  • Writing Quality: ⭐⭐⭐⭐ — Method is clearly presented with rich illustrations, though some details (e.g., MLLM prompt templates) are relegated to the appendix.
  • Value: ⭐⭐⭐⭐⭐ — Establishes a new paradigm for universal 3D shape matching with broad impact on graphics, robotics, and 3D scene understanding.