arxiv:2602.07026

Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models

Published on Feb 2 · Submitted by Yu_xm on Feb 10
#2 Paper of the day
Abstract

AI-generated summary: Researchers address the modality gap in multimodal learning by proposing a fixed-frame theory and a training-free alignment method that enables efficient scaling of multimodal models using unpaired data.

Despite the success of multimodal contrastive learning in aligning visual and linguistic representations, a persistent geometric anomaly, the Modality Gap, remains: embeddings of distinct modalities expressing identical semantics occupy systematically offset regions. Prior approaches to bridge this gap are largely limited by oversimplified isotropic assumptions, hindering their application in large-scale scenarios. In this paper, we address these limitations by precisely characterizing the geometric shape of the modality gap and leveraging it for efficient model scaling. First, we propose the Fixed-frame Modality Gap Theory, which decomposes the modality gap within a frozen reference frame into stable biases and anisotropic residuals. Guided by this precise modeling, we introduce ReAlign, a training-free modality alignment strategy. Utilizing statistics from massive unpaired data, ReAlign aligns text representation into the image representation distribution via a three-step process comprising Anchor, Trace, and Centroid Alignment, thereby explicitly rectifying geometric misalignment. Building on ReAlign, we propose ReVision, a scalable training paradigm for Multimodal Large Language Models (MLLMs). ReVision integrates ReAlign into the pretraining stage, enabling the model to learn the distribution of visual representations from unpaired text before visual instruction tuning, without the need for large-scale, high-quality image-text pairs. Our framework demonstrates that statistically aligned unpaired data can effectively substitute for expensive image-text pairs, offering a robust path for the efficient scaling of MLLMs.

Community

Paper author · Paper submitter

Modality Gap-Driven Subspace Alignment Training Paradigm For Multimodal Large Language Models

Here are the paper's main results explained, with references to the key figures:

1. The Modality Gap is Anisotropic, Not Isotropic

Figure 1 shows the paper's core theoretical contribution: the Fixed-frame Modality Gap Theory. Unlike prior work (e.g., C3) that treated the gap as isotropic noise, the authors decompose it into:

  • Stable Bias: A systematic, slowly drifting offset in the orthogonal subspace $V$
  • Anisotropic Residuals: Direction-dependent fluctuations with extreme condition numbers ($>10^3$ in the semantic subspace $U$)
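
A minimal numerical sketch of this decomposition (the toy data and the PCA-based frame below are my own assumptions, not the paper's construction):

```python
import numpy as np

# Toy stand-ins for image/text embeddings; all scales here are illustrative.
rng = np.random.default_rng(0)
d, n, k = 64, 2000, 16                      # embedding dim, samples, rank of semantic subspace U
img = rng.normal(size=(n, d))
txt = img + rng.normal(size=(n, d)) * np.linspace(0.05, 1.0, d)  # anisotropic residual noise
txt += 0.5 / np.sqrt(d)                                          # systematic offset (stable bias)

# Frozen reference frame: U = top-k principal directions of the pooled embeddings,
# V = orthogonal complement (both fixed once, never updated).
pooled = np.vstack([img, txt])
pooled -= pooled.mean(0)
_, _, Vt = np.linalg.svd(pooled, full_matrices=False)
U, V = Vt[:k].T, Vt[k:].T

gap = img - txt                              # per-sample modality gap vectors
bias_V = (gap @ V).mean(0)                   # stable bias living in the orthogonal subspace V
resid_U = gap @ U - (gap @ U).mean(0)        # residuals inside the semantic subspace U

# Anisotropy of the residuals: condition number of their covariance in U.
eig = np.linalg.eigvalsh(np.cov(resid_U.T))
print("condition number in U:", eig.max() / eig.min())
print("norm of stable bias in V:", np.linalg.norm(bias_V))
```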

Figure 2 validates this empirically by tracking a dual-encoder model during training:

  • (a) Gradients concentrate in the task subspace $U_t$, with leakage into $V$ bounded by $\sin\theta(U_t, U)$
  • (b) The orthogonal bias $\gamma(t)$ exhibits high cosine stability but slow cumulative drift ("passive evolution")
  • (c) In the semantic subspace $U$, residuals show extreme anisotropy (condition number $>10^3$) and "signal locking" (correlation $\rho_{align} \approx 1$ with gradient covariance)
  • (d) In the orthogonal subspace $V$, noise remains stretched ($\kappa > 10^1$) and geometrically decoupled from the bias (angle $\approx 90^\circ$)

This "Phantom Drift" phenomenon—where anisotropic noise creates spurious angular concentration upon spherical projection—invalidates simple mean-shift corrections.

2. ReAlign: Training-Free Statistical Alignment

Figure 3 illustrates the three-step ReAlign pipeline that maps text representations into the image distribution without training:

  1. Anchor Alignment: Center and shift text to match image mean $\mu_x$ (eliminates first-order bias)
  2. Trace Alignment: Scale by $s = \mathcal{T}_x / \mathcal{T}_y$ to match global variance (preserves anisotropic structure while adjusting energy)
  3. Centroid Alignment: Correct the "Phantom Drift" $\mu'$ induced by spherical projection, then re-normalize

Figure 4 shows ReAlign reduces the modality gap to $10^{-4}$ scale ($2.64 \times 10^{-4}$ on Bunny, $1.39 \times 10^{-4}$ on DenseFusion), while C3 plateaus at $\approx 0.0023$ due to its isotropic bottleneck.
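
A minimal sketch of the three-step mapping on toy embeddings. The estimators below reflect my own reading of the description (I take $\mathcal{T}$ to be the trace of the centered covariance, so the vectors are scaled by $\sqrt{\mathcal{T}_x/\mathcal{T}_y}$, and the phantom-drift correction is estimated from the spherical centroid of the image embeddings); the paper's exact definitions may differ:

```python
import numpy as np

def realign(text_emb, img_stats):
    """Map text embeddings toward the image distribution (training-free sketch)."""
    mu_x, T_x, mu_prime = img_stats["mu"], img_stats["trace"], img_stats["sph_centroid"]

    # 1) Anchor Alignment: remove the text mean and shift to the image mean mu_x.
    y = text_emb - text_emb.mean(axis=0) + mu_x

    # 2) Trace Alignment: rescale the centered energy so the total variance matches T_x,
    #    leaving the anisotropic shape of the text distribution untouched.
    centered = y - mu_x
    T_y = centered.var(axis=0, ddof=0).sum()
    y = mu_x + centered * np.sqrt(T_x / T_y)

    # 3) Centroid Alignment: correct the phantom drift mu' induced by projection
    #    onto the unit sphere, then re-normalize.
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    y = y - y.mean(axis=0) + mu_prime
    return y / np.linalg.norm(y, axis=1, keepdims=True)

# Statistics gathered once from (possibly unpaired) image embeddings; toy data here.
rng = np.random.default_rng(2)
img = rng.normal(size=(10_000, 64)) * np.linspace(0.2, 1.0, 64) + 0.5
img_stats = {
    "mu": img.mean(0),
    "trace": (img - img.mean(0)).var(axis=0).sum(),
    "sph_centroid": (img / np.linalg.norm(img, axis=1, keepdims=True)).mean(0),
}
text = rng.normal(size=(5_000, 64)) - 0.3
aligned = realign(text, img_stats)
```

Because only image-side statistics ($\mu_x$, $\mathcal{T}_x$, and the spherical centroid) are required, they can be estimated once from unpaired image data and reused, which is what makes the procedure training-free.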

3. ReVision: Scalable MLLM Training

Table 1 compares MLLM performance across alignment strategies (same architecture: LLM2CLIP + Llama-3-8B):

  • ReVision achieves 51.16 average score, significantly outperforming:
    • No alignment (47.50)
    • C3 alignment (48.06)
    • Blind text-only baseline (7.85)

Key advantages include superior reasoning (MMMU: 31.51 vs 30.69) and reduced hallucination (CRPE: 81.78 vs 79.99).

Table 2 demonstrates the cost-efficiency breakthrough:

  • ReVision-2M (2M unpaired text samples) achieves 49.75 average score, surpassing the 1M paired image-text baseline (48.91) at only 74% of the cost (0.74 vs 1.0)
  • Unicorn (prior text-only method using simple mean shift) scores only 43.94 despite using 1M samples
  • Scaling unpaired text to 2M outperforms expensive paired data, proving that "quality via quantity" works when geometric alignment is precise

The paradigm enables:

  • Democratization: Training high-performance MLLMs without billion-scale image-text pairs
  • Domain expansion: Using domain-specific text corpora for pretraining with minimal visual examples
  • 26% cost reduction while maintaining superior performance (50.16 vs 48.91 on comparable scales)

Appendix D (Figures 5-6) confirms that ReAlign preserves semantic hierarchy (power-law exponent $\alpha \approx 1.33$ vs C3's flattened $\alpha \approx 1.06$) and angular topology (JS Divergence 0.0066 vs C3's 0.1924), achieving a 4.35% k-NN mixing rate with visual manifolds versus C3's 1.31%.
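
One plausible reading of the k-NN mixing rate is the fraction of each aligned text embedding's nearest neighbours, within the pooled text-plus-image set, that are image embeddings; the helper below implements that reading (the metric's exact definition is not spelled out in this summary, so treat it as illustrative):

```python
import numpy as np

def knn_mixing_rate(text_emb, img_emb, k=10):
    """Fraction of each text embedding's k nearest neighbours (cosine similarity,
    within the pooled text+image set) that are image embeddings.
    The definition of the metric is assumed, not taken from the paper."""
    pool = np.vstack([text_emb, img_emb])
    pool = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    sims = pool[: len(text_emb)] @ pool.T
    np.fill_diagonal(sims[:, : len(text_emb)], -np.inf)   # exclude self-matches
    nn = np.argsort(-sims, axis=1)[:, :k]
    return float((nn >= len(text_emb)).mean())
```

On this reading, a higher mixing rate means the aligned text embeddings interleave more tightly with the visual manifold.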

Good work!

Paper author

Thanks🥰

