
deepset-mxbai-embed-de-large-v1

Discover deepset-mxbai-embed-de-large-v1, a powerful German/English embedding model developed through collaboration between deepset and Mixedbread. This state-of-the-art open-source model offers superior performance, supports binary quantization and Matryoshka representation learning, and enables significant cost reductions in real-world applications.

Parameters 487M
Context Window 512
Price / 1M tokens $0.00
Languages EN, DE

Model Description

deepset-mxbai-embed-de-large-v1 is a powerful German/English embedding model developed through a collaboration between deepset and Mixedbread. It sets a new performance standard among open-source embedding models, outperforming domain-specific alternatives in real-world applications.

The model was initialized from the multilingual-e5-large model and fine-tuned on over 30 million pairs of high-quality German data using the AnglE loss function. This extensive training enables the model to adapt to a wide range of topics and domains, making it suitable for various real-world applications and use cases.
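Because the model is initialized from multilingual-e5-large, it follows the E5 prompt convention: queries are prefixed with "query: " and documents with "passage: ", as in the API example further below. The following is a minimal local-inference sketch with sentence-transformers, assuming the model is also published on Hugging Face under the same identifier used in the API example:

from sentence_transformers import SentenceTransformer, util

# Assumption: the model can be loaded locally under the same identifier
# that the Mixedbread API example below uses.
model = SentenceTransformer("mixedbread-ai/deepset-mxbai-embed-de-large-v1")

# E5-style prefixes, matching the API example.
query = "query: Warum sollte man biologisches Brot kaufen?"
passages = [
    "passage: Biologisches Brot wird aus natürlichen Zutaten hergestellt.",
    "passage: Kuchen und Gebäck sind ebenfalls Teil unseres Angebots.",
]

query_emb = model.encode(query, normalize_embeddings=True)
passage_embs = model.encode(passages, normalize_embeddings=True)

# With normalized embeddings, cosine similarity equals the dot product.
print(util.cos_sim(query_emb, passage_embs))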

deepset-mxbai-embed-de-large-v1 supports both binary quantization and Matryoshka Representation Learning (MRL). This allows for significant reductions in storage and infrastructure costs, with the potential for 97%+ cost savings through binary MRL.
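
To make the cost figure concrete, here is a back-of-the-envelope sketch in plain NumPy (not the model's or the API's own quantization code) of how MRL truncation plus binary quantization shrinks a 1024-dimensional float32 embedding:

import numpy as np

# Stand-in for a 1024-dimensional float32 embedding produced by the model.
embedding = np.random.randn(1024).astype(np.float32)

# MRL: the leading dimensions carry most of the signal, so the vector
# can simply be truncated.
mrl_dim = 512
truncated = embedding[:mrl_dim]

# Binary quantization: keep only the sign of each dimension, packing 8 signs per byte.
binary = np.packbits(truncated > 0)

full_bytes = embedding.nbytes      # 1024 dims * 4 bytes = 4096 bytes
binary_mrl_bytes = binary.nbytes   # 512 bits / 8        = 64 bytes
print(f"Storage reduction: {1 - binary_mrl_bytes / full_bytes:.1%}")  # ~98.4%

Retrieval over such binary vectors is typically done with Hamming distance, optionally rescoring the top candidates with the full float embeddings.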

The model achieves top performance on various benchmarks, including private and public datasets created in collaboration with deepset's clients. It demonstrates strong performance across diverse tasks, showcasing its versatility and robustness.

Compare with other models

Model                                  Context Window   Dimensions   Input Price (/1M tokens)
deepset mxbai embed german large v1    512              1024         $0.00
mxbai Embed Large v1                   512              1024         $0.00
mxbai embed 2d large v1                512              1024         $0.00
mxbai embed xsmall v1                  4.1K             384          $0.00
mxbai colbert large v1                 512              1024         $0.00

The following code illustrates how to retrieve the most relevant passages for a given query using the Mixedbread embeddings API together with the semantic_search utility from sentence-transformers.

import torch
from mixedbread import Mixedbread
from sentence_transformers.util import semantic_search

mxbai = Mixedbread(api_key="YOUR_API_KEY")
model = "mixedbread-ai/deepset-mxbai-embed-de-large-v1"

# E5-style prefixes: "query: " for the query, "passage: " for the documents.
query_text = 'query: Warum sollte man biologisches Brot kaufen?'
passage_texts = [
    "passage: In unserer Bäckerei bieten wir auch glutenfreies Brot an, das für Menschen mit Zöliakie geeignet ist.",
    "passage: Kuchen und Gebäck sind ebenfalls Teil unseres Angebots, wobei wir auf höchste Qualität und Frische achten.",
    "passage: Wir haben auch eine Auswahl an herzhaften Snacks und Sandwiches, die perfekt für die Mittagspause sind.",
    "passage: Biologisches Brot wird aus natürlichen Zutaten hergestellt und enthält keine künstlichen Zusatzstoffe. Es ist gesünder und umweltfreundlicher.",
    "passage: Unsere Bäckerei bietet eine Vielzahl von Brotsorten an, darunter auch biologisches Brot. Es schmeckt besser und ist frei von chemischen Konservierungsstoffen.",
    "passage: Kunden bevorzugen zunehmend biologisches Brot, da es nicht nur gut für die Gesundheit ist, sondern auch einen positiven Beitrag zur Umwelt leistet."
]

# Embed the query and the passages; normalized=True returns unit-length vectors,
# so dot product and cosine similarity coincide.
query_res = mxbai.embed(
    model=model,
    input=[query_text],
    normalized=True,
    encoding_format='float'
)

passages_res = mxbai.embed(
    model=model,
    input=passage_texts,
    normalized=True,
    encoding_format='float'
)

# Extract the raw embedding vectors from the API responses.
query_embedding_list = query_res.data[0].embedding
passage_embeddings_list = [item.embedding for item in passages_res.data]

# semantic_search expects 2D tensors: one of query embeddings, one of corpus embeddings.
query_tensor = torch.tensor([query_embedding_list])
passages_tensor = torch.tensor(passage_embeddings_list)

# Rank every passage by cosine similarity to the query.
hits = semantic_search(query_tensor, passages_tensor, top_k=len(passage_texts))

print(f"Query: {query_text}\n")
print("Results (sorted by relevance):")
for hit in hits[0]:
    doc_index = hit['corpus_id']
    score = hit['score']
    print(f"Score: {score:.4f}\tPassage: {passage_texts[doc_index]}")

Last updated: May 7, 2025