A Deep Dive into Embeddings: From Theory to Practical Applications

Introduction

Humans have the ability to read and understand text, while computers “think” in numbers and don’t automatically comprehend the meaning behind words and sentences. To bridge this gap, we need to translate text into a format that computers can process numerically as vectors.

One of the earliest methods to make text understandable to machines was through encoding systems like ASCII, which facilitated the rendering and transfer of text but failed to capture the essence or meaning of the words. This led to reliance on keyword searches, where documents were found based on the presence of specific words or N-grams. However, the advent of embeddings marked a significant evolution in this area. Embeddings are also numerical vectors, but unlike their predecessors, they are capable of encapsulating meaning. This breakthrough allows for semantic searches and even the processing of documents across different languages.

This article explores the concept of embeddings, from their theoretical foundations to their practical applications. Let’s begin with a historical overview of how text representation has evolved.

The essence of embeddings

At its core, an embedding is a form of representation learning in which high-dimensional data (such as text or images) is translated into a lower-dimensional, dense vector space. Embedding vectors, the outcome of this process, serve as a compact representation of the original data, capturing its essential characteristics while stripping away redundancies. This transformation enables more efficient computation and reveals underlying patterns in the data that are not immediately apparent in its raw form.

In deep learning, embeddings are essential to many techniques and architectures, including transformers, recommendation systems, matrix-decomposition methods such as singular value decomposition (SVD), the internal layers of deep neural networks, and encoder-decoder models. Their importance comes down to the following:

  • They offer a unified mathematical form to represent diverse data types.
  • They enable data compression, making large datasets more manageable.
  • They maintain relationships present in the original data.
  • They serve as the outcome of deep learning layers, offering insights into the complex, non-linear relationships that models learn.

Imagine we’re working with a dataset that includes just four images: a cat, a dog, a kitten, and a puppy. Initially, consider using one-hot encoding with a three-dimensional sparse vector for categorization. This means you’d set up three columns, filled mostly with zeros, to represent each category, leaving the fourth image (the puppy) as the all-zeros row.

| Image  | Column_cat | Column_dog | Column_kitten |
|--------|------------|------------|---------------|
| cat    | 1          | 0          | 0             |
| dog    | 0          | 1          | 0             |
| kitten | 0          | 0          | 1             |
| puppy  | 0          | 0          | 0             |

However, these images can be differentiated based on two key attributes: species and age. This allows for a more efficient representation:

| Image  | Column_cat | Column_young |
|--------|------------|--------------|
| cat    | 1          | 0            |
| dog    | 0          | 0            |
| kitten | 1          | 1            |
| puppy  | 0          | 1            |

By adjusting our approach, not only do we reduce the number of columns required, but we also manage to capture the essential characteristics of each image. While this simplified example might not seem like a significant improvement, consider the implications for a dataset containing millions of images across various categories. Such a detailed categorization would be impractical to design manually, but its existence would be incredibly valuable for understanding and organizing the data.

In real-world applications, embeddings capture these details not with clear-cut ones and zeros but with nuanced values that fall somewhere in between, making direct interpretation challenging. Yet, this compact representation effectively preserves critical information, demonstrating the power and utility of embeddings in managing and interpreting complex datasets.
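To make the contrast concrete, here is a minimal NumPy sketch of the two encodings above; the column layout and values simply mirror the tables and are not a prescribed scheme.

```python
import numpy as np

images = ["cat", "dog", "kitten", "puppy"]

# Sparse one-hot style encoding: one column per explicit category, with
# "puppy" left as the all-zeros row, as in the first table.
one_hot = np.array([
    [1, 0, 0],  # cat
    [0, 1, 0],  # dog
    [0, 0, 1],  # kitten
    [0, 0, 0],  # puppy
])

# Dense attribute encoding: two columns (species, age) capture the same
# distinctions with fewer dimensions, as in the second table.
dense = np.array([
    [1, 0],  # cat    -> cat species, adult
    [0, 0],  # dog    -> dog species, adult
    [1, 1],  # kitten -> cat species, young
    [0, 1],  # puppy  -> dog species, young
])

print("one-hot shape:", one_hot.shape)  # (4, 3)
print("dense shape:  ", dense.shape)    # (4, 2)
```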

Embedding in machine learning

The power of embeddings lies in their ability to learn these representations directly from data, a process known as learned embedding. Unlike handcrafted feature engineering, which requires domain expertise and is often limited by human imagination, learned embeddings automatically discover the features that are most relevant to the task at hand. This is achieved through training processes, where models like neural networks adjust the embedding vectors to minimize a loss function, iteratively learning to capture the relationships and details within the data.

Embedding models are essential in machine learning, particularly in tasks involving natural language processing (NLP), computer vision, and recommendation systems. The theoretical underpinning of embedding models relies on the hypothesis that “similar” inputs, based on some definition of similarity, are mapped to proximate points in the embedding space. This concept is operationalized through an embedding matrix, which acts as a lookup table where each row corresponds to a specific entity (such as a word or image) and contains its embedding vector.
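As a rough sketch of this lookup-table idea (using PyTorch’s nn.Embedding, with sizes chosen purely for illustration), an embedding layer is nothing more than a trainable matrix indexed by integer ids:

```python
import torch
import torch.nn as nn

# Illustrative sizes only; real vocabularies and dimensions vary by task.
vocab_size, embed_dim = 10_000, 64

# The embedding matrix: one row (vector) per entity in the vocabulary.
embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=embed_dim)

token_ids = torch.tensor([12, 512, 7])  # ids of three words or items
vectors = embedding(token_ids)          # row lookup -> shape (3, 64)
print(vectors.shape)

# During training, these rows are updated by backpropagation so that inputs
# that behave similarly end up close together in the embedding space.
```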

Returning to the earlier four-image example, the corresponding embedding matrix can be laid out as follows.

[Figure: visualization of the embedding matrix – author’s work]

Recommender systems are among the most impactful machine learning applications in the commercial sector today. They are used to improve user engagement, personalize product recommendations, and deliver relevant news content. A prevalent technique in these systems is collaborative filtering, which often incorporates embeddings to analyze and predict user preferences based on similarities with others.
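To illustrate, here is a minimal matrix-factorization sketch in the spirit of collaborative filtering, using NumPy, random toy ratings, and hypothetical sizes rather than any particular library or dataset: user and item embeddings are adjusted so that their dot product approximates the observed ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 5, 4, 3                              # toy sizes
ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)

# Randomly initialized user and item embeddings.
U = rng.normal(scale=0.1, size=(n_users, dim))
V = rng.normal(scale=0.1, size=(n_items, dim))

lr = 0.01
for _ in range(2000):              # plain gradient descent on squared error
    error = U @ V.T - ratings      # prediction error for every user/item pair
    grad_U = error @ V
    grad_V = error.T @ U
    U -= lr * grad_U
    V -= lr * grad_V

# A predicted score is just the dot product of a user and an item embedding.
print("mean reconstruction error:", round(float(np.abs(U @ V.T - ratings).mean()), 3))
print("user 0, item 2 ->", round(float(U[0] @ V[2]), 2), "actual:", ratings[0, 2])
```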

Historically, embeddings gained prominence beyond recommender systems with the introduction of word2vec models. These models departed from the matrix factorization methods traditionally used in recommendations by learning word vectors whose linear relationships capture semantic structure. This innovation inspired many to explore the potential of embeddings to capture a wide range of relationships beyond just words.

Real-world applications

Let’s consider practical scenarios where embeddings find their application in everyday technologies.

Voice assistants

A common area where embeddings are used is in voice-assisted technologies. Imagine you’re developing a voice recognition system that helps a smart speaker understand different commands. A key feature of this system is recognizing the command “play music.” However, people might use varied phrases or accents to convey this command. Training your model to recognize all these variations directly could be overwhelming.

Fortunately, there’s a solution. A separate team within your organization has developed an embedding for voice commands. This embedding can interpret the essence of what’s being said, regardless of the specific wording or accent. By utilizing this voice command embedding, your focus narrows down to refining the smart speaker’s response to the “play music” command while the other team handles the complexity of understanding the varied voice inputs. Embeddings here act as a bridge between the voice recognition model and your specific application, akin to how a REST interface facilitates communication between different microservices. While you might need to coordinate the size and scale of the embeddings, the intricate details of how the voice commands are processed remain encapsulated within the embedding model.
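A minimal sketch of this division of labor might look as follows; embed_command() is a hypothetical stand-in for the other team’s model, faked here with deterministic random vectors purely to show the interface.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_command(text: str) -> np.ndarray:
    # Stand-in for the other team's voice-command embedding model. A real model
    # would map paraphrases and accents of the same command to nearby vectors;
    # here a deterministic fake vector is derived from the text instead.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)

play_music_reference = embed_command("play music")

def handle(utterance: str, threshold: float = 0.7) -> str:
    score = cosine_similarity(embed_command(utterance), play_music_reference)
    return "start playback" if score >= threshold else "ask for clarification"

print(handle("play music"))          # identical text -> similarity 1.0 -> playback
print(handle("turn on the lights"))  # unrelated fake vector -> clarification
```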

Document classification

NLP encompasses a wide range of tasks, such as translation, sentiment analysis, topic discovery, and summarization. The development of neural networks, which extensively use embeddings in their architecture for both input and output, marks significant progress in handling the unstructured nature of language data. Even simpler neural network architectures depend on embeddings to efficiently represent input data, highlighting their versatility.

Embeddings not only offer a compact representation of data but also serve as an effective means of data compression. For instance, the ImageNet dataset, which is around 150 GB, could potentially be represented in roughly 1/50th of its original size with the help of embeddings. This compact representation retains meaningful linear relationships within the data, such as similarity, averages, and relational mappings, which can be exploited to perform various computational tasks efficiently.
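As a sketch of how this plays out in a document-classification pipeline (assuming scikit-learn and pre-computed document embeddings, faked here with random vectors), a simple linear classifier on top of the embeddings is often a strong baseline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assume each document has already been converted to a fixed-size embedding by
# an upstream model; random vectors stand in for real embeddings here.
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(200, 384))   # 200 documents, 384-dim vectors
labels = rng.integers(0, 2, size=200)          # e.g. 0 = sports, 1 = finance

# A linear classifier trained directly on the embedding vectors.
clf = LogisticRegression(max_iter=1000).fit(doc_embeddings, labels)
print(clf.predict(doc_embeddings[:5]))
```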

Computing embeddings

While deep neural networks are commonly associated with generating embeddings, they are not the only way to produce them. Non-neural approaches such as GloVe (for word embeddings) and mathematical techniques such as SVD and principal component analysis (PCA) also yield embeddings; these methods, originating from dimensionality-reduction strategies, can process datasets effectively. DNNs, however, have transformed machine learning by flexibly capturing complex patterns and relationships within large datasets, and their ability to improve automatically from data without being explicitly programmed makes them particularly well suited to learning embeddings. The focus here is therefore on DNNs and, by extension, transformers, where the most recent advances in embedding computation have occurred and where future innovation is most likely.

Embeddings can be extracted from deep neural networks (DNNs) in multiple ways, depending on the model architecture and application. For example, in a translation system, a DNN trained on multilingual text can generate word embeddings that capture the semantic similarities across languages. The process involves using the activation values from a specific layer within the model as the embedding for a word. Various strategies can be employed to refine these embeddings, such as averaging activations over multiple layers or adjusting the focus between the encoder and decoder sections of the model.
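A minimal sketch of this extraction step, assuming PyTorch and a tiny stand-in network rather than a real translation model, uses a forward hook to capture a chosen layer’s activations as the embedding:

```python
import torch
import torch.nn as nn

# A tiny stand-in network; in practice this would be a trained translation or
# language model, and the layer of interest would sit inside its encoder.
model = nn.Sequential(
    nn.Embedding(1000, 32),   # token ids -> 32-dim vectors
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 16),
)

captured = {}

def hook(module, inputs, output):
    # Save this layer's activations; they serve as the embedding.
    captured["embedding"] = output.detach()

model[2].register_forward_hook(hook)      # capture the ReLU layer's output

token_ids = torch.tensor([[3, 17, 256]])  # a short "sentence" of token ids
model(token_ids)
print(captured["embedding"].shape)        # (1, 3, 64): one vector per token
```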

Let’s consider a basic example of computing word embeddings using a simplistic model to illustrate this point. Imagine we have a tiny corpus of text consisting of just three sentences:

  • The cat sat on the mat.
  • The dog sat on the log.
  • The cat chased the dog.

From this corpus, we want to compute embeddings for the words based on their context, specifically using their co-occurrence within a certain window size (let’s say a window size of 2 words before and after the target word):

  • First, we count how often each word appears in the context of every other word within our defined window size. This results in a co-occurrence matrix where rows and columns represent the words in our corpus, and each cell value indicates the co-occurrence count.
  • While there are several methods to convert this matrix into embeddings, a simple approach is to apply dimensionality reduction techniques like PCA to condense the information into a 2-dimensional space (for simplicity) for each word. This process transforms the co-occurrence frequencies into embeddings that capture the most significant relationships between words.
  • The resulting 2-dimensional embeddings can then be plotted on a graph, where each point represents a word. Words that frequently appear in similar contexts will be closer together, while those that rarely co-occur will be further apart; the short sketch after this list walks through these steps.
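A short sketch of this co-occurrence-plus-PCA procedure, assuming NumPy and scikit-learn, might look like this:

```python
import numpy as np
from sklearn.decomposition import PCA

sentences = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cat chased the dog",
]
window = 2

# Build the vocabulary and an empty co-occurrence matrix.
vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))

# Count co-occurrences within +/- `window` words of each target word.
for sentence in sentences:
    words = sentence.split()
    for i, target in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[index[target], index[words[j]]] += 1

# Reduce each word's co-occurrence row to 2 dimensions with PCA.
embeddings = PCA(n_components=2).fit_transform(cooc)

for word, vec in zip(vocab, embeddings):
    print(f"{word:>7}: ({vec[0]:+.2f}, {vec[1]:+.2f})")
```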

Dimensionality considerations

Choosing the right dimensionality for an embedding represents an important decision that balances the need for a useful, compact representation against the risk of losing important information. While a smaller embedding is easier to work with and can be more practical for downstream applications, too few dimensions might omit valuable data. Conversely, an embedding that’s too large could sacrifice the benefits of data compression.

Furthermore, the size of an embedding affects how distances between embeddings are computed and interpreted: higher-dimensional vectors make distance calculations more expensive and the resulting values harder to reason about, which is one reason embeddings typically contain hundreds to a few thousand dimensions.

Calculating distances between embeddings matters because it quantifies how similar or dissimilar the underlying data points are. The distance between two embeddings is a single number describing how close or far apart they sit in the vector space. In applications such as document retrieval, product recommendations, or finding synonyms in natural language processing, that distance indicates how similar two pieces of content are; clustering algorithms use distance metrics to group embeddings that resemble each other; and measuring how far an embedding lies from a reference point or cluster of embeddings makes it possible to flag outliers and anomalies. In classification, clustering, and recommendation tasks alike, exploiting this notion of proximity can markedly improve a model’s performance and decision-making.
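As a small illustration (toy 3-dimensional vectors rather than real embeddings), two of the most common choices are Euclidean and cosine distance:

```python
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional vectors; real embeddings typically have hundreds of dimensions.
king = np.array([0.90, 0.80, 0.10])
queen = np.array([0.85, 0.75, 0.20])
banana = np.array([0.10, 0.20, 0.95])

# Smaller distances mean more similar items; unusually large distances from a
# reference point or cluster can flag outliers and anomalies.
print(euclidean_distance(king, queen), euclidean_distance(king, banana))
print(cosine_distance(king, queen), cosine_distance(king, banana))
```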

Conclusion

In conclusion, embeddings have emerged as a transformative step in machine learning, particularly within NLP. From the compact representation of complex datasets to the facilitation of groundbreaking advances such as transformer architectures, embeddings have proven indispensable. They not only enable the compression of massive datasets, such as ImageNet, into manageable sizes but also allow sophisticated computational analyses. The flexibility of embeddings extends beyond deep neural networks, with techniques like GloVe, SVD, and PCA offering alternative ways to generate these representations. Their ability to bridge the gap between raw data and actionable insights is proof of their value in AI technologies.

By using these tools, we can unlock new dimensions of data understanding, improve the performance of AI systems, and pave the way for future breakthroughs. Let us embrace the challenge of pushing the boundaries of what is possible with embeddings, forging ahead to discover new solutions and applications.
