deepset-mxbai-embed-de-large-v1
Model Description
deepset-mxbai-embed-de-large-v1 is a powerful German/English embedding model developed through collaboration between deepset and Mixedbread. It sets a new performance standard among open-source embedding models, outperforming domain-specific alternatives in real-world applications.
The model was initialized from the multilingual-e5-large model and fine-tuned on over 30 million high-quality German text pairs using the AnglE loss function. This extensive training enables the model to adapt to a wide range of topics and domains, making it suitable for various real-world applications and Retrieval-Augmented Generation (RAG) use cases.
deepset-mxbai-embed-de-large-v1 supports both binary quantization and Matryoshka representation learning (MRL). This allows for significant reductions in storage and infrastructure costs, with the potential for 97%+ cost savings through binary MRL.
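The savings figure follows directly from the arithmetic of the two techniques: MRL lets you truncate the 1024-dimensional vector to its leading dimensions, and binary quantization keeps only the sign bit of each remaining dimension. A minimal sketch of that storage math (using a random vector as a stand-in for a real model output; the dimension choices here are illustrative):

```python
import numpy as np

# Stand-in for a full-precision embedding as the model would produce it:
# 1024 dimensions stored as float32.
rng = np.random.default_rng(0)
embedding = rng.standard_normal(1024).astype(np.float32)

# Matryoshka representation learning (MRL): the leading dimensions carry
# the most information, so the vector can simply be truncated.
mrl_embedding = embedding[:512]

# Binary quantization: keep only the sign of each dimension, packed 8 per byte.
binary = np.packbits(mrl_embedding > 0)

full_bytes = embedding.nbytes    # 1024 dims * 4 bytes = 4096 bytes
binary_bytes = binary.nbytes     # 512 bits / 8 = 64 bytes
savings = 1 - binary_bytes / full_bytes
print(f"{savings:.1%} storage savings")  # 98.4% storage savings
```

Since storage and memory scale linearly with bytes per vector, a 4096-byte vector shrinking to 64 bytes is where the 97%+ infrastructure cost reduction comes from.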
The model achieves top performance on various benchmarks, including private and public datasets created in collaboration with deepset's clients. It demonstrates strong performance across diverse tasks, showcasing its versatility and robustness.
Compare with other models
| Model | Context Window | Dimensions | Price / 1M tokens |
|---|---|---|---|
| deepset-mxbai-embed-de-large-v1 | 512 | 1024 | $0.10 |
| mxbai-embed-large-v1 | 512 | 1024 | $0.10 |
| mxbai-embed-2d-large-v1 | 512 | 1024 | $0.10 |
mxbai-embed-2d-large-v1
Explore mxbai-embed-2d-large-v1, the world's first 2D-Matryoshka embedding model. Learn about its innovative approach to reducing model size while maintaining high performance, and discover how to leverage its flexible dimensionality for various NLP tasks and efficient information retrieval.
mxbai-colbert-large-v1
A state-of-the-art ColBERT model for reranking and retrieval tasks. This model combines efficient vector search with nuanced token-level matching, making it ideal for advanced information retrieval applications.