ConTextTab: A Semantics-Aware Tabular In-Context Learner

Conference: NeurIPS 2025 arXiv: 2506.10707 Code: SAP-samples/sap-rpt-1-oss Area: Tabular Learning / In-Context Learning Keywords: Tabular Learning, In-Context Learning, Semantic Encoding, Foundation Models, Zero-Shot Prediction

TL;DR

ConTextTab integrates semantic embeddings (text encodings of column names and categorical values) into a table-native ICL architecture, and pretrains on large-scale real-world tabular data (T4, ~2.18M tables). It achieves a new state of the art on the semantics-rich CARTE benchmark while remaining competitive with existing methods on non-semantic benchmarks.

Background & Motivation

  • Background: Table-native ICL methods such as TabPFN and TabICL demonstrate strong performance on small-to-medium scale tabular prediction tasks, but rely entirely on synthetic data for training and cannot exploit semantic information such as column names or categorical labels present in real-world data.
  • Limitations of Prior Work: LLM-based approaches such as TabuLa-8B possess deep semantic understanding, yet text serialization leads to poor token efficiency (context limited to at most 32 rows) and sacrifices the 2D structure of tabular data.
  • Key Challenge: Table-native methods are efficient but semantics-free, while LLM-based methods are semantics-aware but inefficient.
  • Goal: To unify the advantages of both paradigms — injecting semantic understanding into a table-native ICL framework and training on real-world data.

Method

Overall Architecture

ConTextTab builds upon the TabPFN architecture with the following core mechanism: multimodal embedding layer → alternating attention backbone → task-specific output heads. Each column is encoded by a type-specific encoder (text / date / numerical), column names are injected as "positional encodings" via text embeddings, and the overall design preserves row and column permutation equivariance.
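
To make the backbone concrete, here is a minimal PyTorch-style sketch of the alternating-attention skeleton with a single weight-shared block (the depth-unrolled view discussed under Key Designs). Module names, head counts, and the unroll depth are illustrative assumptions, not the released SAP-samples implementation.

```python
import torch
import torch.nn as nn

class AlternatingBlock(nn.Module):
    """One transformer block applied alternately across columns and across rows."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.col_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.row_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, row_mask=None):
        # x: (rows, cols, d). "Horizontal" attention: each row attends over its columns.
        h = self.norm1(x)
        x = x + self.col_attn(h, h, h)[0]
        # "Vertical" attention: each column attends over rows; a mask can restrict
        # query rows to attend only to context rows.
        h = self.norm2(x).transpose(0, 1)                       # (cols, rows, d)
        h = self.row_attn(h, h, h, attn_mask=row_mask)[0].transpose(0, 1)
        x = x + h
        return x + self.ffn(self.norm3(x))

class SharedBackbone(nn.Module):
    """Weight sharing: the same block is unrolled for a fixed number of steps."""
    def __init__(self, d_model: int, n_unrolls: int = 12):
        super().__init__()
        self.block = AlternatingBlock(d_model)   # a single set of parameters
        self.n_unrolls = n_unrolls

    def forward(self, x, row_mask=None):
        for _ in range(self.n_unrolls):          # depth-unrolled RNN view
            x = self.block(x, row_mask)
        return x
```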

Key Designs

  1. Multimodal Semantic Feature Encoding:

    • Text / Categorical columns: Each cell is encoded into a vector using a pretrained text embedding model (default: all-MiniLM-L6-v2), then projected to the target dimension \(d\) via a learnable linear layer. Categorical columns follow the same pathway, preserving label semantics.
    • Date columns: The day, month, and year components are each embedded separately and summed, accommodating both ordinal magnitude and recognition of special dates (e.g., holidays).
    • Numerical columns: Values are clipped to the 2%–98% quantile range and standardized to zero mean and unit variance; since a point more than \(\sqrt{1/0.02} \approx 7.07\) standard deviations from the mean has less than 2% of the mass beyond it, Chebyshev's inequality bounds the clipped, standardized values within \((-7.1, 7.1)\). The result is multiplied by a learnable vector and a bias term is added; NaN values are replaced by 0, leaving only the bias, which thus acts as an "is-NaN" flag.
    • Column names: Encoded with the same text embedding model and projected via an independent linear layer, then added to the cell embeddings.
    • All embeddings are passed through LayerNorm before entering the backbone, fully preserving row and column permutation equivariance; a condensed encoding sketch follows this list.
  2. Alternating Attention Backbone and Weight Sharing:

    • Follows TabPFN's alternating "horizontal" (across columns) and "vertical" (across rows) self-attention structure.
    • Cross-column attention is unmasked; cross-row attention employs a causal mask (query rows attend only to context rows).
    • Weight sharing is enabled by default: a single transformer block is reused across all layers, which can be read as a depth-unrolled RNN. This cuts the trainable parameter count from 172M to 16M with no observed performance degradation.
  3. Large-Scale Real-World Data Training Strategy:

    • Uses the T4 dataset; after filtering, 2.18M tables are retained (median: 750 rows × 9 columns).
    • 1,000 rows are sampled per table; 50–900 rows serve as queries and the remainder as context.
    • A random column is selected as the target, excluding date columns, numerical columns with >50% NaN, and columns with >20% unique values (a small filter sketch follows this list).
    • Non-numerical columns are upsampled to approximately balance regression and classification tasks.
    • An optional curriculum learning stage uses TabDPT data (123 tables, median 11k rows × 34 columns), increasing the training row count to 4,000.
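
A condensed sketch of the multimodal cell encoding from item 1 above, using sentence-transformers for the text pathway. The target dimension \(d\), the projection layers, and the epsilon are assumptions for illustration; the clipping thresholds, NaN handling, and column-name addition follow the bullets above.

```python
import numpy as np
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

text_encoder = SentenceTransformer("all-MiniLM-L6-v2")    # 384-dim sentence embeddings
d = 256                                                    # target model dimension (assumed)

text_proj = nn.Linear(384, d)              # projects cell-text embeddings
name_proj = nn.Linear(384, d)              # separate projection for column names
num_scale = nn.Parameter(torch.randn(d))   # learnable vector for numerical cells
num_bias  = nn.Parameter(torch.zeros(d))   # bias doubles as the "is-NaN" signal
norm = nn.LayerNorm(d)

def encode_text_column(values, column_name):
    cell = text_proj(torch.tensor(text_encoder.encode([str(v) for v in values])))
    name = name_proj(torch.tensor(text_encoder.encode([column_name])))
    return norm(cell + name)                               # (n_rows, d)

def encode_numeric_column(values, column_name):
    x = np.asarray(values, dtype=np.float64)
    lo, hi = np.nanquantile(x, [0.02, 0.98])
    x = np.clip(x, lo, hi)                                 # clip to 2%-98% quantiles
    x = (x - np.nanmean(x)) / (np.nanstd(x) + 1e-8)        # standardize
    x = np.nan_to_num(x, nan=0.0)                          # NaN -> 0, leaving only the bias
    cell = torch.tensor(x, dtype=torch.float32)[:, None] * num_scale + num_bias
    name = name_proj(torch.tensor(text_encoder.encode([column_name])))
    return norm(cell + name)                               # (n_rows, d)
```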
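And the target-column filter from item 3, applied literally as stated in the bullets above; the function name, the use of pandas, and applying the uniqueness cap to all column types are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def candidate_target_columns(df: pd.DataFrame) -> list[str]:
    """Columns eligible as the prediction target, per the filtering rules above."""
    candidates = []
    for col in df.columns:
        s = df[col]
        if pd.api.types.is_datetime64_any_dtype(s):
            continue                                    # exclude date columns
        if pd.api.types.is_numeric_dtype(s) and s.isna().mean() > 0.5:
            continue                                    # numerical columns with >50% NaN
        if s.nunique(dropna=True) > 0.2 * len(s):
            continue                                    # columns with >20% unique values
        candidates.append(col)
    return candidates

# Example: pick one eligible column uniformly at random as the target.
# target = np.random.choice(candidate_target_columns(df))
```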

Loss & Training

  • Classification: Standard cross-entropy loss with an MLP output head.
  • Regression: L2 loss predicting clipped and standardized float values, with inverse transformation at inference.
  • Alternative — Supervised Clustering Head: Cosine similarities are computed between query–context row pairs, and element-wise binary cross-entropy is applied against a same-class/different-class adjacency matrix, imposing no upper bound on the number of classes (sketched after this list).
  • Alternative — Soft Binning: Numerical values are soft-encoded into quantile bins via linear interpolation between adjacent bins, converting regression into classification; predictions are recovered via a probability-weighted mean (also sketched after this list).
  • Training: AdamW, lr=\(10^{-4}\), linear warmup for 1,000 steps, gradient accumulation to batch size 256, gradient clipping, 4–10M training samples (2–5 epochs).
  • Inference: 8-fold bagging (8 bootstrap samples of context), default context size \(c=8192\), up to 500 columns.
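
A minimal sketch of the supervised clustering head referenced above: query–context cosine similarities trained with element-wise BCE against a same-class indicator matrix. The mapping of similarities to probabilities and the inference rule are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def clustering_head_loss(query_emb, ctx_emb, query_labels, ctx_labels):
    """BCE between query-context cosine similarities and a same-class adjacency matrix."""
    sim = F.cosine_similarity(query_emb[:, None, :], ctx_emb[None, :, :], dim=-1)  # (Q, C)
    target = (query_labels[:, None] == ctx_labels[None, :]).float()                # 1 iff same class
    probs = ((sim + 1) / 2).clamp(1e-6, 1 - 1e-6)    # map [-1, 1] similarities into (0, 1)
    return F.binary_cross_entropy(probs, target)

@torch.no_grad()
def clustering_head_predict(query_emb, ctx_emb, ctx_labels):
    """Predict by aggregating similarity over the context rows of each class."""
    sim = F.cosine_similarity(query_emb[:, None, :], ctx_emb[None, :, :], dim=-1)  # (Q, C)
    classes = ctx_labels.unique()
    scores = torch.stack([sim[:, ctx_labels == c].mean(dim=1) for c in classes], dim=1)
    return classes[scores.argmax(dim=1)]   # no fixed upper bound on the number of classes
```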
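And a sketch of the soft-binning alternative: each numeric target is spread across the two nearest quantile-bin centers by linear interpolation, trained as classification, and decoded as a probability-weighted mean of bin centers. Bin-edge handling is an illustrative assumption.

```python
import torch

def soft_bin_targets(y: torch.Tensor, bin_edges: torch.Tensor) -> torch.Tensor:
    """Encode each value as a soft two-hot vector over quantile bins (linear interpolation)."""
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2                     # (B,) bin centers
    y = y.to(centers.dtype).clamp(centers[0].item(), centers[-1].item())
    idx = torch.searchsorted(centers, y).clamp(1, len(centers) - 1)    # right-neighbour bin
    left, right = centers[idx - 1], centers[idx]
    w_right = (y - left) / (right - left)                              # interpolation weight
    soft = torch.zeros(len(y), len(centers), dtype=centers.dtype)
    soft.scatter_(1, (idx - 1)[:, None], (1 - w_right)[:, None])
    soft.scatter_(1, idx[:, None], w_right[:, None])
    return soft                                  # rows sum to 1; train with cross-entropy

def soft_bin_decode(probs: torch.Tensor, bin_edges: torch.Tensor) -> torch.Tensor:
    """Recover a point prediction as the probability-weighted mean of bin centers."""
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    return probs @ centers
```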

Key Experimental Results

Main Results

Evaluation spans 91 regression + 112 classification tasks, with dataset sizes ranging from 400 to ~400k training samples and 5 to 3k columns.

| Benchmark | ConTextTab Performance | Key Comparison |
|---|---|---|
| CARTE (semantics-rich) | New SOTA, consistently best across all sample sizes | Significantly outperforms TabPFN / TabICL / TabDPT (\(p<0.05\)) |
| OpenML-CC18 (classification) | Competitive | No significant difference from best models |
| TALENT-Tiny (mixed) | Competitive | No significant difference from best models |
| TabReD (large-scale) | Competitive | Tuned tree ensembles retain an advantage on large datasets |
| OpenML-CTR23 (regression) | Slightly weaker | No significant difference from tuned tree ensembles |

Ablation Study

| Ablation | Finding |
|---|---|
| Training data scale | Critical to model performance; data volume is a key factor |
| Weight sharing | Reduces parameters from 172M to 16M with no performance degradation |
| Text embedding model | all-MiniLM-L6-v2 achieves a favorable speed–accuracy trade-off |
| ISAB attention | Applied to the first \(m=3\) cross-row attention layers to reduce inference cost on large tables |
| Curriculum learning | A second training stage on large tables yields further improvements |

Key Findings

  • Decisive gap from semantic understanding: On CARTE, the semantics-free TabPFN underperforms even untuned gradient-boosted trees, whereas ConTextTab surpasses all single-model methods.
  • Notable low-data advantage: In CARTE subsampling experiments (128 rows to full data), ConTextTab outperforms AutoGluon at ≤2,048 rows.
  • Parity on non-semantic benchmarks: On traditional benchmarks such as OpenML-CC18 and TALENT-Tiny, performance is competitive with TabPFN and tuned trees, with no statistically significant gaps.
  • Large-scale dataset challenges: Tuned tree ensembles retain an advantage on large datasets (e.g., TabReD), occasionally surpassing even AutoGluon, indicating room for ICL methods to scale to larger contexts.

Highlights & Insights

  • Clear methodological contribution: The first systematic integration of semantic embeddings into table-native ICL with real-world data training — a conceptually clean and effective approach.
  • Surprising finding on weight sharing: A 172M → 16M parameter reduction with no performance loss suggests that the effective parameter space for tabular ICL may be far smaller than total parameter count.
  • Permutation equivariance: Semantic encoding naturally preserves this property, reducing the need for random mappings (e.g., category-to-ID assignments) that motivate bagging in TabPFN.
  • Elegant supervised clustering head: Using cosine similarity with an adjacency matrix for classification imposes no limit on class count and preserves label semantics — an elegant alternative to the conventional cross-entropy head.
  • Reasonable training efficiency: ~10 tables/s on a single H100 GPU; full training requires 4–12 days.

Limitations & Future Work

  • No breakthrough on non-semantic benchmarks: Performance merely matches, rather than surpasses, existing methods on traditional numerical tabular benchmarks; better numerical encodings or larger models may be needed.
  • Large-scale dataset bottleneck: ICL methods still lag behind tuned ensemble trees on datasets with hundreds of thousands of samples; context scaling remains the key bottleneck.
  • AutoGluon still competitive overall: As a multi-model ensemble, AutoGluon generally outperforms individual models, indicating room to improve single-model performance ceilings.
  • Fixed text embedding model: The lightweight MiniLM may lose information in complex semantic scenarios; stronger embedding models or end-to-end training warrant exploration.
  • Inference cost: The overhead of 8-fold bagging combined with 8,192-row context is non-trivial and requires careful cost–benefit analysis in deployment.

Related Work

  • TabPFN / TabICL: Table-native ICL baselines trained on synthetic data; ConTextTab extends their architectures with semantic capabilities.
  • TabuLa-8B: An LLM-based ICL approach that curated the T4 dataset (reused in this work), but is constrained to a 32-row context.
  • CARTE: A semantics-aware tabular pretraining method requiring task-specific fine-tuning; its benchmark constitutes the primary showcase for this paper.
  • TabDPT: Trains on real-world data with retrieval-based context selection, inspiring the curriculum learning strategy adopted here.
  • Insights: Semantic information is severely underutilized in tabular learning — injecting text embeddings alone can yield decisive advantages in semantics-rich settings. This further suggests that future tabular foundation models should advance toward multimodal fusion.

Rating

| Dimension | Score (1–10) | Notes |
|---|---|---|
| Novelty | 7 | First systematic integration of semantic embeddings into table-native ICL with real-world training data, though individual components are not entirely new |
| Technical Depth | 8 | Multimodal encoding is carefully designed; alternative architectures such as the supervised clustering head and ISAB reflect thorough exploration |
| Experimental Thoroughness | 9 | Five major benchmarks, 203 datasets, extensive baseline comparisons, ablation analyses, and subsampling experiments; highly comprehensive |
| Writing Quality | 8 | Clear structure, well-motivated problem formulation, and detailed method description |
| Value | 7 | Open-source code and model; high practical value in semantics-rich settings, though advantages are less pronounced in non-semantic scenarios |
| Overall | 7.8 | A solid and systematic contribution that sets a new standard in semantics-aware tabular learning |