Multimodal LLMs: Beyond Text

This blog post was written by Brain John Aboze as part of the Deepchecks Community Blog.

Introduction

Have you ever pondered how humans adeptly comprehend and interact with their surroundings? It’s our innate ability to amalgamate data from our senses – sight, hearing, smell, taste, and touch. This multi-sensory input gives us a holistic grasp of our environment. Drawing inspiration from this, multimodal AI systems are trained on diverse data types like text, images, audio, and video. Their capacity to correlate different data forms allows them to discern intricate patterns, enhancing their performance across tasks. In AI, and especially in Large Language Models (LLMs), these multimodal systems are not merely upgrades but game-changers. They can tackle challenges that single-modality systems falter at, like refining medical diagnoses or elevating virtual reality experiences. So, if you’re curious about the significance of multimodal fusion in AI and LLMs, brace yourself for an enlightening journey into multimodality.

Understanding Multimodality in AI & LLMs

Initially, AI models were unimodal. A model created for text would process just that, while another model might solely process images. As our computational prowess and access to varied datasets surged, these distinctions began to fade, paving the way for multimodal systems. Essentially, multimodality in AI denotes the capability of models to interpret and generate outputs spanning various data types – whether it’s text, imagery, audio, or beyond. This capability hinges on grasping individual modes and merging them for in-depth insights.

Large Language Models (LLMs) like OpenAI’s ChatGPT, which are chiefly trained on expansive text datasets, are experts at tasks ranging from language translation to answering queries informatively. Training multimodal LLMs on diverse modalities enhances their learning, allowing for intricate correlations between diverse data sets. The ramifications? Vast potential, be it in health diagnostics merging medical imaging with patient data or virtual assistants that perceive and respond with augmented context awareness.

This fusion led by multimodal learning offers models a comprehensive understanding. The premise is straightforward: when a single modality might fall short, tapping into multiple sources bridges the gap. By discerning intricate relationships across these modalities, multimodal learning enhances performance across a spectrum of tasks.

Figure: Understanding Multimodality in AI & LLMs (Source: Author)

It is worth noting that ensemble techniques such as stacking and bagging improve performance by combining the predictions of multiple models trained on the same kind of data. Multimodal learning, by contrast, improves performance by integrating data from different modalities into a single model.

Multimodal learning techniques

There are several different multimodal learning techniques, which can be broadly classified into three categories:

  1. Fusion-Based Approach: The fusion-based approach integrates data from diverse modalities into a unified multimodal representation, or shared representation space. Features are extracted from each modality and merged into a single, modality-agnostic representation that carries the semantic content of all sources, and this fused representation is then used to train a machine learning model. When executed well, the approach fosters a deep, holistic understanding of the data, although care must be taken so that individual modalities retain their distinctive nuances. Depending on when the merging happens in the pipeline, fusion can be categorized as early or mid-fusion.
  2. Alignment-Based Approach: This method emphasizes correlating and harmonizing representations across modalities. Features from each modality are extracted independently but are trained to align or correlate with those of the other modalities. The strength of this strategy lies in preserving each modality’s unique traits while establishing commonalities; the trade-off is that accurate alignment relies on advanced techniques and may demand significant computational resources.
  3. Late Fusion: In the late fusion technique, each modality remains distinct until the final stages of processing. The modalities are analyzed independently, each generating its own predictions, and only at the end are these individual decisions merged, often by averaging or weighting. This ensures that the unique characteristics of each modality are preserved and can be fine-tuned in isolation. While advantageous for modalities that offer complementary insights or aren’t closely related, this method might overlook subtle patterns that arise from early interactions between modalities. (A brief code sketch contrasting these three strategies follows this list.)
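
To make the distinction concrete, here is a minimal, illustrative sketch in PyTorch (assuming the torch package is available). The encoders are stand-in linear layers, and all dimensions, class counts, and module names are hypothetical placeholders rather than any particular published architecture: early fusion concatenates per-modality features into one shared representation, late fusion averages per-modality predictions, and the alignment-based approach is shown as a CLIP-style contrastive loss over a shared embedding space.

```python
# Illustrative sketch only: toy linear "encoders" stand in for real text/vision models.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyFusionClassifier(nn.Module):
    """Fusion-based: concatenate per-modality features into one shared representation."""

    def __init__(self, text_dim=128, image_dim=256, num_classes=4):
        super().__init__()
        self.text_encoder = nn.Linear(text_dim, 64)    # stand-in for a real text encoder
        self.image_encoder = nn.Linear(image_dim, 64)  # stand-in for a real vision encoder
        self.head = nn.Linear(64 + 64, num_classes)    # trained on the fused representation

    def forward(self, text_feats, image_feats):
        fused = torch.cat([self.text_encoder(text_feats),
                           self.image_encoder(image_feats)], dim=-1)
        return self.head(fused)


class LateFusionClassifier(nn.Module):
    """Late fusion: each modality produces its own prediction; decisions are averaged at the end."""

    def __init__(self, text_dim=128, image_dim=256, num_classes=4):
        super().__init__()
        self.text_head = nn.Sequential(nn.Linear(text_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
        self.image_head = nn.Sequential(nn.Linear(image_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, text_feats, image_feats):
        return (self.text_head(text_feats) + self.image_head(image_feats)) / 2


def alignment_loss(text_emb, image_emb, temperature=0.07):
    """Alignment-based: a CLIP-style contrastive loss that pulls matching text/image
    pairs together in a shared embedding space without merging the encoders."""
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    text = torch.randn(8, 128)   # dummy text features for a batch of 8 examples
    image = torch.randn(8, 256)  # dummy image features for the same batch
    print(EarlyFusionClassifier()(text, image).shape)   # torch.Size([8, 4])
    print(LateFusionClassifier()(text, image).shape)    # torch.Size([8, 4])
    print(alignment_loss(torch.randn(8, 64), torch.randn(8, 64)))
```

In practice, the stand-in linear encoders would be replaced by pre-trained text and vision encoders (for example, a transformer language model and a ViT); what distinguishes the three categories is where and how their outputs are combined.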

The Significance of Multimodality

In AI:

  • Richer Context: Emulating the human reliance on numerous senses, AI models with multimodal inputs perceive situations more holistically.
  • Robustness & Adaptability: They can balance out insufficiencies in one modality using data from another, enhancing accuracy.
  • Diverse Applications: Their scope ranges from advanced robotics to immersive digital art.

In LLMs:

  • Beyond Mere Text: LLMs are adept at textual comprehension, but the real world is multi-dimensional. Blending in visuals or audio, for instance, deepens understanding.
  • Elevated Interaction: Picture a digital assistant discerning a user’s sentiment through vocal nuances or facial cues and responding with emotional accuracy.
  • Melding Digital & Tangible Realms: As LLMs integrate into varied platforms, from AR wearables to intuitive homes, their multimodal understanding becomes crucial for fluid human-tech interactions.

In essence, multimodality propels AI and LLMs towards a future mirroring human information processing – a blend of senses and cues. It’s more than just a tech enhancement; it’s progressing towards crafting machines that interact with our world in a more complete, human-like manner.

Limitations of Unimodal LLMs

Unimodal LLMs are trained on a single modality of data, such as text. While impressive in their capabilities, unimodal LLMs exhibit certain limitations that can be addressed by transitioning to multimodal systems. Here’s a deeper dive into those constraints and how multimodality can address them:

1. Lack of Visual Context:

  • Unimodal Limitation: Unimodal LLMs only process textual data. They cannot inherently understand or interpret visual cues, even though many real-world scenarios and questions rely on a combination of textual and visual information.
  • Multimodal Solution: Multimodal models can process and integrate information from images, potentially allowing for better understanding and responses when visual context is important.

2. Absence of Audio Interpretation:

  • Unimodal Limitation: LLMs can’t process audio signals directly, missing out on nuances like tone, emphasis, or emotion.
  • Multimodal Solution: A multimodal system capable of processing audio can detect such nuances, making interactions more contextually and emotionally appropriate.

3. Difficulty in Understanding and Responding to Complex and Nuanced Queries:

  • Unimodal Limitation: Solely relying on text can limit an LLM’s ability to fully grasp and respond to queries that require multisensory understanding. For example, explaining the mood of a movie scene or the emotion behind a piece of music can be challenging without visual or auditory context.
  • Multimodal Solution: By simultaneously processing text, visuals, sounds, and possibly other sensory data, multimodal systems can offer more nuanced and comprehensive responses to complex queries. For instance, they can provide insights based on both the lyrical content and the tone of a song, or combine visual cues with dialogue to interpret a movie scene. Such systems better simulate the human ability to integrate multiple forms of information for a holistic understanding, making them better suited to tackle intricate questions.

4. Data Bias and Ethical Issues:

  • Unimodal Limitation: Unimodal LLMs, especially those focused on text, are trained on vast amounts of internet data. This makes them vulnerable to replicating societal biases, stereotypes, and misconceptions present in the training data. Furthermore, being limited to text might restrict them from capturing the full context in which information is presented, potentially leading to skewed interpretations.
  • Multimodal Solution: Incorporating diverse modalities (images, videos, sounds) can provide a richer context, potentially counteracting some biases present in textual data. For instance, while text might carry a stereotype, visual data might present a broader range of representations that can mitigate it. However, it’s essential to note that multimodal data can introduce its own biases. For instance, visual data could have cultural, demographic, or geographic biases. It’s imperative that the training data for multimodal systems be carefully curated and continually refined to minimize these issues.

5. Impoverished Emotion Recognition:

  • Unimodal Limitation: Purely text-based models can have difficulty discerning emotional nuances, especially when emotions are expressed primarily through non-textual cues.
  • Multimodal Solution: By analyzing facial expressions, voice modulations, or physiological data, multimodal systems can achieve better emotion recognition and responsiveness.

6. Overfitting to Textual Patterns:

  • Unimodal Limitation: Relying solely on text, LLMs might sometimes overfit to textual patterns and anomalies in their training data.
  • Multimodal Solution: By diversifying input sources, multimodal systems can potentially reduce over-reliance on any single data type, resulting in a more balanced and robust model.

7. Contextual Ambiguities:

  • Unimodal Limitation: Without visual or auditory clues, certain textual contexts can remain ambiguous to LLMs.
  • Multimodal Solution: Access to multi-sensory data can help disambiguate many such situations, leading to more accurate responses.

8. Restricted Applications in Augmented Reality (AR) and Virtual Reality (VR):

  • Unimodal Limitation: AR and VR often require a fusion of visual, auditory, and textual information for seamless user experiences. Unimodal LLMs can only partially cater to these requirements.
  • Multimodal Solution: Multimodal systems can be integrated into AR and VR platforms to process multi-sensory inputs and outputs, enhancing user immersion and interactivity.

While unimodal LLMs are powerful in their domain, our world is inherently multimodal. Transitioning towards multimodal systems can bridge the gap between the textual digital realm and the multisensory physical world, making AI interactions more intuitive, comprehensive, and effective.

While multimodal systems offer potential answers to the shortcomings of unimodal LLMs, they aren’t without challenges. Curating multimodal training data is arduous, and these models demand higher computational power and intricate techniques to integrate data coherently from varied sources. The ethical concerns deepen as multiple data types intertwine, each carrying its own biases and risks. Deciphering the decision-making of multimodal LLMs is also harder because of their reliance on multifaceted datasets, making it difficult to discern how much weight each modality carries in a prediction. Thus, it is crucial to foster their development and application with care, transparency, and an unwavering dedication to improvement.

Case Studies: Successful Multimodal LLM Implementations

  • OpenAI’s GPT-4V: GPT-4 with Vision (GPT-4V) empowers users to guide GPT-4 in analyzing user-provided image inputs. This enhancement integrates an additional modality, image input, into the GPT-4 framework; a minimal API sketch follows this list. Further details are available in the linked paper.
  • Google’s Med-PaLM 2: This multimodal LLM, developed by Google’s health research teams, is imbued with extensive medical knowledge. Setting new benchmarks in medical proficiency, Med-PaLM 2 was the first LLM to reach an expert level on U.S. Medical Licensing Exam-style questions. Its multimodal capabilities allow it to interpret visual data such as X-rays and mammograms.
  • Microsoft Kosmos-1: In the paper “Language Is Not All You Need: Aligning Perception with Language Models,” Microsoft researchers introduced Kosmos-1. This Multimodal Large Language Model (MLLM) can perceive diverse modalities, follow instructions in a zero-shot setting, and learn from context in a few-shot setting. The ambition behind the MLLM was to align perception with LLMs, empowering the models both to see and to describe what they see. To achieve this, Kosmos-1 was trained from the ground up on web-scale multimodal data, including text corpora, image-caption pairs, and interleaved image-text documents.
  • Salesforce BLIP-2: BLIP-2 is Salesforce’s open-source vision-language model, built around an innovative technique for cost-effective pre-training. At its heart is the aim of compute-efficient Vision-Language Pre-training (VLP) that leverages existing pre-trained vision models and language models. Bridging the vision-language divide is essential for LLMs to decode visual content, yet LLMs never encounter images during their text-centric pre-training. To navigate this, Salesforce researchers propose a two-stage pre-training approach that employs a Querying Transformer (Q-Former) to bridge this modality gap.
  • Meta CM3leon: Meta has unveiled “CM3leon” (pronounced like a chameleon), a transformative multimodal model adept at text-to-image and image-to-text generation. At its core, CM3leon is a transformer-based, autoregressive model fine-tuned for tasks involving text and image synthesis. It undergoes a two-phase training process, drawing from a vast multimodal dataset and enhanced with retrieval pretraining. The model further employs contrastive decoding to amplify the quality of its outputs. CM3leon’s expertise in text-directed image generation and editing has the potential to reshape how users engage with and modify visual content. Meta champions the model’s proficiency across a spectrum of vision-language activities, such as visual question answering and extended captioning.
  • Open Source Multimodal LLM Repository: Dive into a handpicked collection of Multimodal Large Language Models (MLLMs). This list encompasses datasets, multimodal instruction tuning, in-context learning, chain-of-thought approaches, LLM-assisted visual reasoning, foundation models, and more. Explore the complete list on GitHub here.
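
As a concrete illustration of how an image input reaches a multimodal LLM such as GPT-4V, here is a minimal sketch using the OpenAI Python SDK. It assumes the openai package (v1 or later) is installed and an API key is available in the environment; the model name and image URL are placeholders that may need to be adjusted to whichever vision-capable model your account exposes.

```python
# Minimal sketch: sending a combined text + image prompt to a vision-capable chat model.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY environment variable.
# Model name and image URL are placeholders, not a reference to a specific release.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute any vision-capable chat model available to you
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
            ],
        }
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```

Open-source alternatives such as BLIP-2 follow a similar pattern through the Hugging Face Transformers library, pairing an image processor and a text prompt before generation.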

Applications of Multimodal LLMs

The fusion of LLMs with multiple modalities goes beyond academic advancements—it’s a tangible game-changer for industries, enterprises, and individuals alike. Here are some pivotal real-world applications:

  • Healthcare: By melding textual patient records with medical imagery like X-rays or MRIs, Multimodal LLMs can offer clinicians a holistic diagnostic view. They integrate data from medical images, lab results, and patient symptoms to foster precision in diagnosis.
  • Education: Multimodal LLMs cater to holistic educational experiences. Educators can refine teaching techniques by assessing visual cues, such as a student’s expression during virtual sessions. Such systems immerse students with multimedia content, facilitating diverse interaction avenues.
  • Content Recommendation: Multimodal systems extend beyond text, evaluating visual and auditory patterns from content and fine-tuning user recommendations.
  • Consumer Insight Analysis: Brands gain comprehensive insights by dissecting verbal reactions, facial expressions during product interactions, and written reviews, all working towards a nuanced understanding of consumer sentiment.
  • Accessible Assistance: Multimodal systems offer inclusivity for the differently-abled, interpreting a blend of vocal, gestural, and textual cues to deliver personalized aid.
  • Smart Home Integration: With the ability to discern voice, gestures, and text, Multimodal LLMs facilitate fluid interactions with home automation systems.
  • Immersive Guidebooks: Travelers can enrich their experience by pointing their devices at landmarks while the system supplies a blend of visual recognition and contextual narrative.
  • Virtual Reality Showcases: Potential property buyers can embark on virtual tours, with the system responsively linking visual elements to property specifics as they navigate and inquire.
  • Emergency Response Optimization: In crises, integrating live visual data with textual emergency alerts equips responders with a comprehensive overview, optimizing response strategy.

Conclusion

Multimodal learning stands as a testament to the evolving landscape of machine learning, drawing inspiration from the multifaceted ways humans interact with the world. By assimilating insights from varied data sources, we can amplify model efficacy and widen AI’s contextual grasp. The emergence of Multimodal LLMs heralds a transformative phase in LLM evolution, with the potential to significantly influence numerous sectors and use cases. These models discern intricate interplays between diverse data types, enhancing their prowess.

From foundational principles to techniques such as fusion-based, alignment-based, and late fusion strategies, multimodal learning is redefining how data is combined in AI. Its tangible impact is already palpable across sectors like healthcare, education, and entertainment, hinting at a future replete with further innovation and integration.

Yet every technological leap brings its own set of hurdles. As we endeavor to merge modalities seamlessly, staying attuned to the intricacies and inherent challenges is imperative. Fueled by relentless research and innovation, however, the trajectory for multimodal learning remains optimistic. It isn’t a fleeting trend but a strategic move towards sculpting AI systems that resonate with the depth of human cognition. The potential of Multimodal LLMs is vast, marking a thrilling chapter in AI research.
