How Can Pre-trained Models Reduce Dimensionality in Text?

Tiya Vaj
3 min read · Oct 4, 2024


Pre-trained models, particularly those based on neural networks, can represent high-dimensional text data in lower-dimensional spaces while preserving semantic relationships. Here’s how they achieve lower dimensionality effectively:

1. Word Embeddings

  • Dense Vectors: Traditional text representations like Bag of Words or TF-IDF create sparse, high-dimensional vectors where most values are zero. In contrast, pre-trained models like Word2Vec, GloVe, or FastText produce dense vectors (embeddings) of fixed size (e.g., 100, 200, or 300 dimensions) for each word, capturing semantic relationships.
  • Dimensionality Reduction: These embeddings represent each word in a lower-dimensional space, effectively reducing the complexity of the input data. For example, instead of representing a vocabulary of 10,000 unique words with a 10,000-dimensional vector (where most entries are zero), word embeddings might represent them with a 300-dimensional vector, significantly reducing dimensionality while retaining meaningful information.
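
To make the contrast concrete, here is a minimal sketch in NumPy. The vocabulary size, word index, and embedding matrix are illustrative placeholders; in practice the matrix would hold the learned weights of a model such as Word2Vec, GloVe, or FastText.

```python
import numpy as np

# Illustrative sizes only: a 10,000-word vocabulary and 300-dimensional
# embeddings, mirroring the example above.
vocab_size, embed_dim = 10_000, 300

# Bag-of-Words-style representation: one slot per vocabulary word,
# almost entirely zeros for any single word or short text.
word_index = 42                      # hypothetical index of some word
bow_vector = np.zeros(vocab_size)
bow_vector[word_index] = 1.0

# Dense embedding lookup: random values stand in here for the weights
# a pre-trained model (Word2Vec, GloVe, FastText) would have learned.
embedding_matrix = np.random.randn(vocab_size, embed_dim)
dense_vector = embedding_matrix[word_index]

print(bow_vector.shape)    # (10000,) -> sparse, high-dimensional
print(dense_vector.shape)  # (300,)   -> dense, low-dimensional
```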

2. Contextualized Embeddings

  • Transformers: Models like BERT, GPT, or RoBERTa use attention mechanisms to capture context. Instead of providing a static embedding for each word, they generate embeddings based on the context in which the word appears. Each word in a sentence can thus be represented as a unique vector that considers the surrounding words.
  • Output Size: These models map every token to a fixed-size vector (e.g., 768 dimensions for BERT Base), which is far smaller than a sparse, vocabulary-sized representation. A single vector of the same size (e.g., the [CLS] token embedding or a pooled output) can then summarize the entire sentence or document.
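
A small sketch of this, assuming the Hugging Face transformers library and the standard bert-base-uncased checkpoint (neither is specified above, so treat this as one possible setup):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load a pre-trained BERT Base model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per token, conditioned on the full sentence.
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```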

3. Pooling Techniques

  • Pooling Layers: After generating embeddings for words in a sentence or document, pooling techniques (e.g., average pooling, max pooling) can be applied to reduce the dimensionality further. This condenses the information from individual word embeddings into a single fixed-size vector representing the entire text.
  • Global Context: These pooling methods help summarize the information, allowing the model to capture the overall context while reducing the dimensionality of the representation.
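
A minimal pooling sketch in PyTorch, using random tensors as stand-ins for the token embeddings a model like BERT would produce; the attention mask ensures padded positions are ignored:

```python
import torch

# Stand-in for model output: shape [batch_size, seq_len, hidden_size].
token_embeddings = torch.randn(2, 12, 768)
attention_mask = torch.ones(2, 12)        # 1 = real token, 0 = padding
attention_mask[1, 8:] = 0                 # pretend the 2nd text is shorter

# Mean pooling: average only over real (non-padding) tokens.
mask = attention_mask.unsqueeze(-1)                      # [2, 12, 1]
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Max pooling: element-wise maximum over the sequence, with padded
# positions pushed to a very negative value first.
masked = token_embeddings.masked_fill(mask == 0, -1e9)
max_pooled = masked.max(dim=1).values

print(mean_pooled.shape)  # torch.Size([2, 768]) - one vector per text
print(max_pooled.shape)   # torch.Size([2, 768])
```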

4. Training on Large Datasets

  • Semantic Relationships: Pre-trained models are typically trained on large corpora of text, allowing them to learn meaningful semantic relationships between words and phrases. This helps them generalize and represent concepts in a lower-dimensional space efficiently.
  • Knowledge Transfer: The knowledge gained from vast amounts of data allows the model to encapsulate complex relationships in fewer dimensions, making it easier to capture the essence of the text without needing to represent every possible word explicitly.

5. Fine-tuning for Specific Tasks

  • Task-Specific Adaptation: Pre-trained models can be fine-tuned on specific tasks, further optimizing the representation for the given context, often resulting in lower-dimensional outputs that are well-suited for particular applications (e.g., sentiment analysis, topic classification).
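
As a rough illustration (a hypothetical binary sentiment setup, not a prescribed recipe), fine-tuning with the Hugging Face transformers library attaches a small classification head on top of the pooled 768-dimensional representation; the training loop itself is omitted here:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical task: binary sentiment classification on top of BERT Base.
# The classification head maps the 768-dimensional pooled representation
# down to just 2 logits (negative / positive).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("A wonderful, heartfelt movie.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]) - 768 dims reduced to 2 scores
```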

Conclusion

Pre-trained models achieve dimensionality reduction through several mechanisms: dense embeddings, contextualized representations, pooling techniques, and training on large datasets. By capturing semantic relationships and reducing the complexity of the input space, these models can represent text in lower-dimensional spaces while preserving the information needed for downstream tasks.

The embedding dimensionality of BERT depends on the specific variant being used. Here are the common sizes for the standard BERT models:

1. BERT Base:
  • Hidden Size: 768
  • Number of Layers: 12
  • Number of Attention Heads: 12
  • Total Parameters: Approximately 110 million

2. BERT Large:
  • Hidden Size: 1024
  • Number of Layers: 24
  • Number of Attention Heads: 16
  • Total Parameters: Approximately 345 million

Explanation of Hidden Size

  • The hidden size refers to the dimensionality of the embedding produced by the BERT model for each token (word).
  • For example, when using BERT Base, each token in the input text is represented by a 768-dimensional vector; when using BERT Large, each token is represented by a 1024-dimensional vector.
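
If the transformers library is available, these sizes can be read directly from the model configurations. A small sketch, assuming the standard uncased checkpoints:

```python
from transformers import AutoConfig

for name in ("bert-base-uncased", "bert-large-uncased"):
    config = AutoConfig.from_pretrained(name)
    # Prints hidden size, number of layers, and number of attention heads.
    print(name, config.hidden_size, config.num_hidden_layers,
          config.num_attention_heads)
# bert-base-uncased  768  12 12
# bert-large-uncased 1024 24 16
```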

Sentence Embeddings

  • When extracting embeddings for entire sentences or documents, you might apply pooling (e.g., mean pooling, max pooling) to condense the token embeddings into a single vector representing the whole input, but the size of the individual token embeddings remains as specified above.

Summary

  • BERT Base: 768 dimensions per token
  • BERT Large: 1024 dimensions per token

These dimensionalities are critical for capturing the contextual meanings of words in various NLP tasks.


Tiya Vaj

Ph.D. Research Scholar in NLP, passionate about data-driven approaches for social good. Let's connect: https://www.linkedin.com/in/tiya-v-076648128/