
mxbai-embed-2d-large-v1

Explore mxbai-embed-2d-large-v1, the world's first 2D-Matryoshka embedding model. Learn about its innovative approach to reducing model size while maintaining high performance, and discover how to leverage its flexible dimensionality for various NLP tasks and efficient information retrieval.

Parameters: 340M
Context Window: 512 tokens
Price / 1M tokens: $0.00
Languages: EN

Model Description

mxbai-embed-2d-large-v1 is the world's first 2D-Matryoshka embedding model. The 2D-Matryoshka approach enables you to reduce both the number of layers in the model and the number of dimensions in its embeddings. This dual reduction strategy yields a more compact model while still delivering performance on par with leading embedding models. Specifically, reducing the model's layers by approximately 50% retains up to 85% of its original performance, even without additional training.
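To illustrate the layer-reduction half of the idea, the sketch below truncates the encoder to half its depth before pooling. It assumes the checkpoint is a BERT-style encoder loadable with Hugging Face transformers, with its layer stack at model.encoder.layer, and uses mean pooling for illustration; the model's official pooling strategy may differ.

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "mixedbread-ai/mxbai-embed-2d-large-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).eval()

# Keep only the first half of the transformer layers (~50% reduction).
n_keep = len(model.encoder.layer) // 2
model.encoder.layer = model.encoder.layer[:n_keep]
model.config.num_hidden_layers = n_keep

inputs = tokenizer("A man is eating a piece of bread", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Mean-pool over non-padding tokens to get a sentence embedding.
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)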

The model was pretrained with contrastive training on over 700 million pairs covering a wide variety of topics from across the internet, then fine-tuned on over 30 million high-quality triplets using novel loss functions. This effectively gives you multiple models in one: you can choose among layer counts and embedding sizes, with full control over the trade-offs between speed, storage consumption, and model performance.
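The dimension half of the trade-off works by truncating an embedding to its leading components and re-normalizing; Matryoshka training concentrates the most useful signal in the first dimensions, so the shortened vector remains usable for cosine similarity. A minimal sketch, assuming a precomputed 1024-dimensional embedding:

import torch
import torch.nn.functional as F

full_embedding = torch.randn(1024)  # stand-in for a real 1024-d model output

def truncate(embedding: torch.Tensor, dims: int) -> torch.Tensor:
    # Slice the leading dimensions, then re-normalize so that
    # dot products remain valid cosine similarities.
    return F.normalize(embedding[:dims], dim=-1)

small = truncate(full_embedding, 512)  # half the storage
tiny = truncate(full_embedding, 64)    # 16x smaller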

On the Massive Text Embedding Benchmark (MTEB), mxbai-embed-2d-large-v1 performs at the level of current embedding models of comparable and larger sizes. Performance remains competitive even when the embedding size is reduced by a factor of 16, and the model retains about 75% of its performance after cutting half of its layers, demonstrating the effectiveness of the 2D-Matryoshka approach.

Compare with other models

| Model | Context Window | Dimensions | Input Price (/1M tokens) |
| --- | --- | --- | --- |
| mxbai embed 2d large v1 | 512 | 1024 | $0.00 |
| mxbai embed large v1 | 512 | 1024 | $0.00 |
| deepset mxbai embed german large v1 | 512 | 1024 | $0.00 |
| mxbai embed xsmall v1 | 4.1K | 384 | $0.00 |
| mxbai colbert large v1 | 512 | 1024 | $0.00 |

The following code shows how to compute similarities between a query and a set of documents, using cosine similarity as the score function. The number of dimensions can be adjusted via the dimensions parameter of the SDK.

import torch
from mixedbread import Mixedbread
from sentence_transformers.util import semantic_search

mxbai = Mixedbread(api_key="YOUR_API_KEY")
model = "mixedbread-ai/mxbai-embed-2d-large-v1"

# Queries are prefixed with a retrieval prompt; documents are embedded as-is.
prompt = 'Represent this sentence for searching relevant passages:'
query = "A man is eating a piece of bread"

docs = [
    "A man is eating food.",
    "A man is eating pasta.",
    "The girl is carrying a baby.",
    "A man is riding a horse.",
]

# Embed the query at 512 dimensions (half of the full 1024); normalized
# embeddings make cosine similarity equivalent to a dot product.
query_res = mxbai.embed(
    model=model,
    prompt=prompt,
    input=[query],
    normalized=True,
    encoding_format='float',
    dimensions=512
)

# Embed the documents with the same settings, but without the query prompt.
docs_res = mxbai.embed(
    model=model,
    input=docs,
    normalized=True,
    encoding_format='float',
    dimensions=512
)

query_embedding_list = query_res.data[0].embedding
docs_embeddings_list = [item.embedding for item in docs_res.data]

# semantic_search expects 2D tensors: (n_queries, dims) and (n_docs, dims).
query_tensor = torch.tensor([query_embedding_list])
docs_tensor = torch.tensor(docs_embeddings_list)

# Rank all documents against the query by cosine similarity.
hits = semantic_search(query_tensor, docs_tensor, top_k=len(docs))

print(f"Query: {query}\n")
print("Results (sorted by relevance):")
for hit in hits[0]:
    doc_index = hit['corpus_id']
    score = hit['score']
    print(f"Score: {score:.4f}\tDocument: {docs[doc_index]}")

Last updated: May 6, 2025