mxbai-colbert-large-v1
Model Description
mxbai-colbert-large-v1 is a state-of-the-art ColBERT (Contextualized Late Interaction BERT) model for reranking and retrieval tasks. It builds on the mxbai-embed-large-v1 model and achieves state-of-the-art results across 13 publicly available BEIR benchmark datasets.
ColBERT combines the benefits of vector search and cross-encoders. Queries and documents are encoded separately, but instead of creating a single embedding for the entire document, ColBERT generates contextualized embeddings for each token in the document. During search, the token-level query embeddings are compared with the token-level embeddings of the documents using the lightweight scoring function MaxSim. This allows ColBERT to capture nuanced matching signals while being computationally efficient.
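The MaxSim scoring described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not code from the model's library: `maxsim_score` is a hypothetical helper, and the random arrays stand in for the token embeddings a real ColBERT encoder would produce.

```python
import numpy as np

def maxsim_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Late-interaction (MaxSim) score between a query and a document.

    query_emb: (num_query_tokens, dim) token embeddings for the query
    doc_emb:   (num_doc_tokens, dim) token embeddings for the document
    Embeddings are L2-normalized here so dot products are cosine similarities.
    """
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sim = q @ d.T  # (num_query_tokens, num_doc_tokens) similarity matrix
    # For each query token, keep its best-matching document token, then sum.
    return float(sim.max(axis=1).sum())

# Toy example: rank two documents against a query (random stand-in embeddings).
rng = np.random.default_rng(42)
query = rng.normal(size=(5, 16))
docs = [rng.normal(size=(12, 16)), rng.normal(size=(9, 16))]
scores = [maxsim_score(query, d) for d in docs]
best = int(np.argmax(scores))
```

Because each query token only needs its single best match per document, the score is a cheap matrix multiply followed by a row-wise max, which is what makes late interaction far lighter than running a full cross-encoder over every query-document pair.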
mxbai-colbert-large-v1 is initialized from the mxbai-embed-large-v1 model, which was trained on over 700 million samples from various domains. The ColBERT model was then fine-tuned on around 96 million samples to adapt it to the late interaction mechanism. This extensive training enables the model to be used for a wide range of tasks and domains.
On the BEIR benchmark, mxbai-colbert-large-v1 outperforms other ColBERT models on average and on most individual tasks. Its reranking scores even surpass those typically achieved by cross-encoder rerankers on the benchmark, while retaining the resource-efficiency advantages of the ColBERT architecture. The model also delivers state-of-the-art retrieval performance among currently available ColBERT models.