TRANSPORTER: Transferring Visual Semantics from VLM Manifolds

Alexandros Stergiou

University of Twente, NL





Abstract


How do video understanding models arrive at their answers? Although current Vision Language Models (VLMs) reason over complex scenes with diverse objects, action performances, and scene dynamics, understanding and controlling their internal processes remains an open challenge. Motivated by recent advances in text-to-video (T2V) generative models, this paper introduces a logits-to-video (L2V) task alongside a model-independent approach, TRANSPORTER, to generate videos that capture the underlying rules behind VLMs' predictions. Leveraging the high visual fidelity produced by T2V models, TRANSPORTER learns an optimal transport coupling to VLMs' semantically rich embedding spaces. In turn, logit scores define embedding directions for conditional video generation. TRANSPORTER generates videos that reflect caption changes over diverse object attributes, action adverbs, and scene context. Quantitative and qualitative evaluations across VLMs demonstrate that L2V offers a fidelity-rich, previously unexplored direction for model interpretability.



Method Overview


Figure: L2V overview, showing coupling-network training \(\Phi\), the concept bank \(\mathbf{Q}\), and the inference step.

L2V with TRANSPORTER: VLM embeddings \(\mathbf{z}_\Xi \in \mathbb{R}^\Xi\) are coupled to the T2V model through network \(\Phi\) and concept bank \(\mathbf{Q}\). The coupling network \(\Phi\) projects \(\mathbf{z}_\Xi\), with condition \(\pi_\Xi\), to \(\widehat{\mathbf{z}}_{\Omega_1}=\Phi_{\Omega_1}(\mathbf{z}_\Xi,\pi_\Xi)\). Latents \(\widehat{\mathbf{z}}_{\Omega_2} \in \mathbb{R}^\Omega\) are obtained via \(\Phi_{\Omega_2}\) over decoder \(\mathcal{D}_\Xi\) and encoder \(\mathcal{E}_\Omega\). Learnable optimal transport (\(\rho\)-OT) uses projection vectors \(\mathbf{p}_{\Omega_1},\mathbf{p}_{\Omega_2}\) to transport the embeddings to \(\tilde{\mathbf{z}}_\Omega\). The concept bank \(\mathbf{Q}=\{\mathbf{q}_o : o\in \mathcal{O}\}\) is trained using the probability-path difference \(\Delta v\), weighted by the change in the logit distribution \(\Delta \omega\). At inference, concept latents \(\mathbf{q}_o\) are added to the conditions to transport noise \(\boldsymbol{\epsilon}\sim \mathcal{N}(0,\mathbf{I})\) and generate videos.
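The pipeline above can be sketched in miniature. The snippet below is a minimal, hypothetical numpy illustration, not the paper's implementation: `phi` stands in for the learned coupling network \(\Phi_{\Omega_1}\), the cost built from `p1`/`p2` stands in for the projection vectors \(\mathbf{p}_{\Omega_1},\mathbf{p}_{\Omega_2}\), and a standard entropic Sinkhorn solver replaces the paper's learnable \(\rho\)-OT. All dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): VLM embedding dim Xi,
# T2V latent dim Omega, and a batch of n embeddings.
d_xi, d_omega, n = 16, 32, 8

# Stand-in for the learned coupling network Phi_{Omega_1}: a single
# linear map over the embedding concatenated with its condition pi_Xi.
W = rng.normal(scale=0.1, size=(2 * d_xi, d_omega))

def phi(z_xi, pi_xi):
    """Project VLM embeddings z_Xi with condition pi_Xi into the T2V space."""
    return np.concatenate([z_xi, pi_xi], axis=-1) @ W

def sinkhorn(a, b, C, eps=0.1, iters=200):
    """Entropic OT coupling between histograms a, b under cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Two latent sets standing in for the outputs of Phi_{Omega_1} and Phi_{Omega_2}.
z1 = phi(rng.normal(size=(n, d_xi)), rng.normal(size=(n, d_xi)))
z2 = rng.normal(size=(n, d_omega))

# Cost from squared differences of 1-D projections, mimicking the role of
# the projection vectors p_{Omega_1}, p_{Omega_2}; normalized for stability.
p1, p2 = rng.normal(size=d_omega), rng.normal(size=d_omega)
C = ((z1 @ p1)[:, None] - (z2 @ p2)[None, :]) ** 2
C = C / C.max()

# Coupling between uniform histograms, then a barycentric map producing
# the transported latents z_tilde_Omega.
T = sinkhorn(np.full(n, 1 / n), np.full(n, 1 / n), C)
z_tilde = (T / T.sum(axis=1, keepdims=True)) @ z2
```

In the actual method the transport map is learned jointly with \(\Phi\); here the fixed Sinkhorn coupling only illustrates how embeddings from one space can be moved onto latents of another via an OT plan and a barycentric projection.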