Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs

Sorbonne University · Valeo.ai



  • Overview
Large Language Models (LLMs) have demonstrated impressive performance on multimodal tasks without any multimodal finetuning. They are the de facto building block for Large Multimodal Models (LMMs), yet we still lack a proper understanding of their success. In this work, we expose frozen LLMs to image, video, audio and text inputs and analyse their internal representations, aiming to understand their generalization beyond textual inputs.

  • Motivation and Research Questions
Large Language Models (LLMs) are able to generalize to multimodal inputs. Specifically, with minimal computational resources (i.e. amounting to training only a linear layer), it is possible to feed a frozen LLM with multimodal inputs so that the model can reason and chat about an image, video or audio clip. In this work, we are curious about this observation and investigate the following questions:
• Are multimodal inputs converted to "textual" inputs by the connector, so that they can simply be considered as "frozen language"?
• How do multimodal tokens differ from textual tokens inside LLMs?
    • What are the main factors allowing LLMs to generalize to multimodal inputs?
• What are the implications of this investigation for model performance, mitigating safety problems and improving computational efficiency?

  • Findings
This work led to the following findings:
1. Perceptual tokens are easily distinguishable from textual ones inside LLMs; their representations are significantly different (e.g. they live in different narrow cones), and a complete translation to textual tokens does not exist.
    2. Both perceptual and textual tokens activate similar LLM weights.
3. Despite their differences, perceptual tokens are implicitly aligned to textual tokens inside LLMs. We call this the implicit multimodal alignment effect (IMA), and argue that it is linked to architectural design, helping LLMs generalize (see the sketch after this list).
4. This provides further evidence that the generalization of LLMs to multimodal inputs is mainly due to their architecture.
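
    As an illustration of how the IMA effect can be quantified, the sketch below computes a simple per-layer alignment score: the cosine similarity between the mean hidden state of perceptual tokens and that of textual tokens. This is a minimal approximation under our own assumptions, not the exact metric from the paper; hidden_states is assumed to be the per-layer tuple returned by a HuggingFace-style decoder called with output_hidden_states=True, and perceptual_mask is a hypothetical boolean mask marking the positions occupied by projected image/video/audio tokens.

      import torch.nn.functional as F

      def alignment_scores(hidden_states, perceptual_mask):
          """One cosine-similarity score per layer between mean perceptual and mean textual hidden states."""
          perc_mask = perceptual_mask.unsqueeze(-1).float()   # (batch, seq, 1), 1.0 at perceptual positions
          text_mask = 1.0 - perc_mask
          scores = []
          for h in hidden_states:                             # h: (batch, seq, dim), one tensor per layer
              # Mean-pool perceptual and textual positions separately, then compare them.
              perc = (h * perc_mask).sum(dim=1) / perc_mask.sum(dim=1).clamp(min=1.0)
              text = (h * text_mask).sum(dim=1) / text_mask.sum(dim=1).clamp(min=1.0)
              scores.append(F.cosine_similarity(perc, text, dim=-1).mean().item())
          return scores

    A higher score at a given layer would indicate stronger alignment between the perceptual and textual subspaces at that depth.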

  • Implications
These findings have several implications:
    1. We find a positive correlation between the implicit alignment score and the task performance, suggesting that this could act as a proxy metric for model evaluation and selection.
2. A negative correlation exists with hallucinations (e.g. describing non-existent objects in images), revealing that this problem is mainly due to a misalignment between the internal perceptual and textual representations.
3. Perceptual tokens change only slightly throughout the model; we therefore propose different approaches to skip computations (e.g. in FFN layers) and significantly reduce the inference cost (see the sketch after this list).
4. Due to the slowly changing embeddings across layers and the high overlap between the weights activated by textual and multimodal tokens, we compress LLMs by keeping only one subnetwork (called α-SubNet) that works well across a wide range of multimodal tasks. The code will be made public.
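
    To make implication 3 concrete, the sketch below shows one way a decoder block could skip FFN computation for perceptual tokens: they pass through the residual connection unchanged while textual tokens are processed normally. This is only an illustration of the idea under simplified call signatures, not the paper's implementation; the submodule names block.attn, block.mlp, block.ln_1 and block.ln_2 are hypothetical.

      def block_forward_skip_ffn(block, x, perceptual_mask):
          """x: (batch, seq, dim) hidden states; perceptual_mask: (batch, seq) bool, True at perceptual positions."""
          # Self-attention with residual connection, applied to all tokens as usual.
          x = x + block.attn(block.ln_1(x))
          # Run the FFN only on textual positions and scatter the result back, so no
          # FFN FLOPs are spent on perceptual tokens; those keep their residual value.
          text_idx = ~perceptual_mask                          # (batch, seq) bool
          out = x.clone()
          out[text_idx] = x[text_idx] + block.mlp(block.ln_2(x[text_idx]))
          return out

    Because perceptual embeddings drift little from layer to layer, leaving them untouched in a subset of blocks is a natural way to trade a small amount of accuracy for lower inference cost.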




BibTeX


      @article{shukor2024implicit,
        title={Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs},
        author={Shukor, Mustafa and Cord, Matthieu},
        journal={arXiv preprint arXiv:2405.16700},
        year={2024}
      }