
October 16, 2025

Fantastic (small) Retrievers and How to Train Them: mxbai-edge-colbert-v0

Introduction

This summer, we set out to prepare the next steps of an ambitious research roadmap, with the ultimate aim of designing ever-improving approaches to late interaction and multi-vector retrieval. As we started planning experimental work more thoroughly, we kept running into one important question: what model do we tinker with for early experiments?

Indeed, to try out new things you need a strong baseline with a well-understood training process to ensure that the results of your experiments are meaningful. Ideally, this baseline would be both strong and small. Scaling laws exist in information retrieval as they do in the rest of machine learning, and capable small models allow you to test out ideas in record time before scaling them up with relatively little effort.

With this in mind, we still did not know what this tiny experimental testbed should be: not because there are no good ColBERT models out there, but because none of them seemed to meet our needs!

ColBERTv2 is an excellent baseline, but it is from 2021, which, in AI terms, means that it is ancient. GTE-ModernColBERT-v1, the current state-of-the-art, is a fantastic model, but it suffers from two problems: it is larger than we would like, and it is initialized from a pre-trained dense embedding model that is hard to reproduce, limiting our control over experiments. answerai-colbert-small-v1, while an extremely strong model at a great compact size, is also initialized from a pre-trained checkpoint, which is the average of not one but two hard-to-reproduce embedding models. Additionally, it has a MiniLM backbone, which means it suffers from the same limitations as previous-generation encoders, such as the lack of long-context support, much like ColBERTv2.

Just a few weeks before we started pondering this, the Ettin collection of models had come out: among other things, it includes a replication of ModernBERT (with some tweaks) across a large range of model scales, from 17 million to 1 billion parameters. The two smallest Ettin sizes, with 17 and 32 million parameters respectively, seemed like perfect matches: we quickly made the decision to train tiny, capable models that could support all of our future experiments. We also immediately decided that we would release these models publicly, as we believe open source releases to be the perfect home for models that can run on just about any hardware.

If you want the full details on our training process, please head over to the tech report. We have attempted to make it a true overview of "how to train sane, near state-of-the-art retrievers in 2025", and we highly encourage you to read it if this is of interest to you.

If you just want an overview of what we did along with the HuggingFace links, however, you're in the right place.

Training Small ColBERTs: The Steps

Previous research on ColBERT models has indicated a pretty clear trend: all state-of-the-art ColBERTs are initialized from strong single-vector embedding models, which have undergone their own somewhat standardized multi-stage training process. This is likely due to a combination of reasons, including the lack of a standardized ColBERT pre-training recipe, among other potential culprits.

Training Dense Backbones

As such, before training our ColBERT models, we must first prepare suitable dense models to serve as backbones. Since the purpose here is to build all-around strong baselines rather than to chase benchmarks, we opt for standardized methods and widely used datasets with limited overfitting potential.

The dense training process we opt for consists of three steps:

  • First, we perform large-scale weakly supervised contrastive pre-training on around 150 million training pairs. This step serves to warm up the model's representations, shifting them from Ettin's original language modelling objective to a similarity objective. The data is not of high quality, but it is large in volume, slowly nudging the embedding space in the right direction.
  • Secondly, we perform supervised fine-tuning. This is the key step, where the model trained in the first stage is exposed to retrieval queries and their matching documents, with positives annotated by humans. Following standard practice, we perform hard-negative mining, so as to provide the model with believable-looking negative examples and teach it to distinguish near-matches from actually relevant documents; a minimal sketch of this contrastive objective follows this list.
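To make these two stages concrete, here is a minimal sketch of the kind of contrastive objective they rely on: an InfoNCE-style loss over query-document pairs with in-batch negatives, plus optional mined hard negatives. The shapes, helper name, and temperature are illustrative assumptions rather than our exact training configuration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb, pos_emb, hard_neg_emb=None, temperature=0.05):
    """InfoNCE-style loss with in-batch negatives and optional mined hard negatives.

    query_emb:    (B, D) pooled query embeddings
    pos_emb:      (B, D) pooled embeddings of the annotated positive documents
    hard_neg_emb: (B, K, D) optional mined hard negatives per query
    """
    query_emb = F.normalize(query_emb, dim=-1)
    pos_emb = F.normalize(pos_emb, dim=-1)

    # Every positive in the batch also serves as a negative for the other queries.
    logits = query_emb @ pos_emb.T  # (B, B)

    if hard_neg_emb is not None:
        hard_neg_emb = F.normalize(hard_neg_emb, dim=-1)
        # Each query scored against its own mined hard negatives: (B, K)
        hard_logits = torch.einsum("bd,bkd->bk", query_emb, hard_neg_emb)
        logits = torch.cat([logits, hard_logits], dim=1)  # (B, B + K)

    labels = torch.arange(query_emb.size(0), device=query_emb.device)
    return F.cross_entropy(logits / temperature, labels)
```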

Our third step is, as yet, less standard: Stella-style knowledge distillation. This step is the key component of the Stella retrieval models, which are well known in the information retrieval community for being very strong models for their size. Effectively, the aim here is to align the representations of our model with those of a much larger, better model. Curiosity got the better of us here: we are really big fans of the Stella models and wanted to explore this approach to distillation in depth.

Again, we provide more information on this step in the tech report, but broadly, we adopted a simplified version of the Stella mixture of losses, inspired by MongoDB's recent report on LEAF-style distillation. We note that this step strongly improved the performance of our 32 million parameter variant and resulted in a small-but-noticeable boost for the 17 million one.
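To give a rough idea of what "aligning representations with a larger teacher" can look like, here is a minimal sketch of an embedding-alignment distillation loss combining a cosine term with an MSE term, using a linear projection to bridge dimension mismatches. This is a simplified, hypothetical rendering of the general idea, not the exact Stella or LEAF loss mixture; the class name, projection, and loss weight are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentDistillationLoss(nn.Module):
    """Align student embeddings with those of a frozen, larger teacher model."""

    def __init__(self, student_dim: int, teacher_dim: int, mse_weight: float = 1.0):
        super().__init__()
        # Maps the student's embedding dimension onto the teacher's when they differ.
        self.project = nn.Linear(student_dim, teacher_dim, bias=False)
        self.mse_weight = mse_weight

    def forward(self, student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
        # student_emb: (B, D_student); teacher_emb: (B, D_teacher), computed under no_grad.
        projected = self.project(student_emb)
        cosine_term = 1.0 - F.cosine_similarity(projected, teacher_emb, dim=-1).mean()
        mse_term = F.mse_loss(F.normalize(projected, dim=-1), F.normalize(teacher_emb, dim=-1))
        return cosine_term + self.mse_weight * mse_term
```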

After this stage, here we are: we now have a viable backbone that is easy to produce using standardized methods!

Training a ColBERT model

We're now ready to move on to the next step: creating ColBERT models!

Ablations light the way

We decided to take this opportunity to also run many ablations, seeking to answer a few questions we still had about the underlying mechanisms of the standard training recipe. Namely, we wondered:

  1. Is Muon a good optimizer for late interaction models?
  2. Does the projection dimension matter, and if so, at what point does performance begin to degrade rapidly?
  3. Is Qwen3-Reranker a good teacher for KL-divergence distillation over teacher scores? (A sketch of this score-distillation setup follows this list.)
  4. Do our proposed changes improve models trained with state-of-the-art recipes, rather than only more academic ones built on a weaker base model?
  5. Do backbone models that have undergone Stella-style distillation produce better ColBERT models?
  6. Does the use, or not, of casing have an impact?
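For readers unfamiliar with the recipe behind question 3, the sketch below illustrates what KL-divergence distillation over teacher scores typically looks like for a late-interaction model: the student's MaxSim scores over each query's candidate documents are pushed towards the score distribution of a reranker teacher. Function names, shapes, and the temperature are illustrative assumptions, not our exact setup.

```python
import torch
import torch.nn.functional as F

def maxsim_scores(query_tokens, doc_tokens):
    """Late-interaction (MaxSim) scores.

    query_tokens: (B, Lq, D) query token embeddings
    doc_tokens:   (B, N, Ld, D) token embeddings of N candidate documents per query
    returns:      (B, N) relevance scores
    """
    sim = torch.einsum("bqd,bnld->bnql", query_tokens, doc_tokens)  # (B, N, Lq, Ld)
    # For each query token, keep its best-matching document token, then sum over query tokens.
    return sim.max(dim=-1).values.sum(dim=-1)

def kl_distillation_loss(student_scores, teacher_scores, temperature=1.0):
    """KL divergence between the student's and the teacher's score distributions over candidates."""
    student_logprobs = F.log_softmax(student_scores / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(student_logprobs, teacher_probs, reduction="batchmean")
```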

The answer to these questions and many more is, you guessed it, in the tech report! But, as a sneak peek, the answer to question 1 is yes.

Building on our findings from the ablations, we then used the best settings we uncovered to train the final models, resulting in our final checkpoints: mxbai-edge-colbert-v0, at both the 17M and 32M parameter scales. For everything not otherwise ablated, we followed the standardized training method introduced in prior work.

So, how do they fare?

Short answer: surprisingly well!

Considering they did nothing out of the ordinary to chase SotA performance, and largely steered clear of data contamination at the most important training stages, our models reach robust performance across the board:

Results on BEIR

| Model | AVG | MS MARCO | SciFact | Touche | FiQA | TREC-COVID | NQ | DBPedia |
|---|---|---|---|---|---|---|---|---|
| Large Models (>100M) | | | | | | | | |
| GTE-ModernColBERT-v1 | 0.547 | 0.453 | 0.763 | 0.312 | 0.453 | 0.836 | 0.618 | 0.480 |
| ColBERTv2 | 0.488 | 0.456 | 0.693 | 0.263 | 0.356 | 0.733 | 0.562 | 0.446 |
| Medium Models (<35M) | | | | | | | | |
| mxbai-edge-colbert-v0-32m | 0.521 | 0.450 | 0.740 | 0.313 | 0.390 | 0.775 | 0.600 | 0.455 |
| answerai-colbert-small-v1 | 0.534 | 0.434 | 0.740 | 0.250 | 0.410 | 0.831 | 0.594 | 0.464 |
| bge-small-en-v1.5 | 0.517 | 0.408 | 0.713 | 0.260 | 0.403 | 0.759 | 0.502 | 0.400 |
| snowflake-s | 0.519 | 0.402 | 0.722 | 0.235 | 0.407 | 0.801 | 0.509 | 0.410 |
| Small Models (<25M) | | | | | | | | |
| mxbai-edge-colbert-v0-17m | 0.490 | 0.416 | 0.719 | 0.316 | 0.326 | 0.713 | 0.551 | 0.410 |
| colbert-muvera-micro | 0.394 | 0.364 | 0.662 | 0.251 | 0.254 | 0.561 | 0.386 | 0.332 |
| all-MiniLM-L6-v2 | 0.419 | 0.365 | 0.645 | 0.169 | 0.369 | 0.472 | 0.439 | 0.323 |

Results on LongEmbed

| Model | AVG |
|---|---|
| Large Models (>100M) | |
| GTE-ModernColBERT-v1 (32k) | 0.898 |
| GTE-ModernColBERT-v1 (4k) | 0.809 |
| granite-embedding-english-r2 | 0.656 |
| ColBERTv2 | 0.428 |
| Medium Models (<50M) | |
| mxbai-edge-colbert-v0-32m (32k) | 0.849 |
| mxbai-edge-colbert-v0-32m (4k) | 0.783 |
| granite-embedding-small-english-r2 | 0.637 |
| answerai-colbert-small-v1 | 0.441 |
| bge-small-en-v1.5 | 0.312 |
| snowflake-arctic-embed-s | 0.356 |
| Small Models (<25M) | |
| mxbai-edge-colbert-v0-17m (32k) | 0.847 |
| mxbai-edge-colbert-v0-17m (4k) | 0.776 |
| all-MiniLM-L6-v2 | 0.298 |
| colbert-muvera-micro | 0.405 |

Our 17 million parameter model in particular is a standout performer, which we hope will serve as a very strong baseline for many experiments to come. Despite its incredibly low parameter count and a projection dimension of 48, just about one third of the standard 128, it comfortably outperforms ColBERTv2. And it does so while scaling exceptionally well to longer contexts: its performance on LongEmbed comfortably exceeds that of the current <1B-parameter state-of-the-art single-vector retriever by more than 19 NDCG@10 points.

Efficiency is the name of the game

Our models build upon the current wave of more efficient encoders, spearheaded by ModernBERT and carried on by subsequent models such as Ettin or ModernVBERT. As such, we designed them with efficiency in mind, attempting to minimize their computational requirements without degrading performance.

On top of their low parameter counts and the architectural improvements inherent to the ModernBERT architecture, such as built-in unpadding and Flash Attention 2, we adopt very small final projection dimensions for our models, which keeps their memory footprint particularly low:

| Model | Params | Dim. | NDCG@10 | LoCo | GPU | CPU | Mem. (MB) |
|---|---|---|---|---|---|---|---|
| ColBERTv2 | 130M | 128 | 0.6198 | -- | 81s | 1540s | 732 |
| answerai-colbert-small-v1 | 33M | 96 | 0.6545 | -- | 59s | 621s | 549 |
| colbert-muvera-micro | 4M | 128 | 0.5599 | -- | 45s | 88s | 732 |
| mxbai-edge-colbert-v0-17m | 17M | 48 | 0.6405 | | 51s | 487s | 275 |
| mxbai-edge-colbert-v0-32m | 32M | 64 | 0.6520 | | 55s | 589s | 366 |
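To see why the projection dimension dominates multi-vector index size, the back-of-envelope calculation below estimates the storage needed for token-level embeddings at different dimensions. The corpus size, average document length, and fp16 storage are illustrative assumptions, not the settings behind the table above.

```python
# Rough multi-vector index size: num_docs * avg_tokens_per_doc * dim * bytes_per_value.
def index_size_mb(num_docs: int, avg_tokens: int, dim: int, bytes_per_value: int = 2) -> float:
    """Approximate storage (in MB) for token-level embeddings kept in fp16."""
    return num_docs * avg_tokens * dim * bytes_per_value / (1024 ** 2)

# Illustrative corpus: 100k documents averaging 180 tokens each.
for dim in (128, 64, 48):
    print(f"dim={dim:3d}: ~{index_size_mb(100_000, 180, dim):,.0f} MB")
```

At these assumed settings, dropping from 128 to 48 dimensions cuts the raw index size by more than 60%, before any compression.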

We're particularly excited by the 17M variant's potential as an end-to-end retriever, or as a reranker sitting behind a first-stage retriever, in on-edge use cases: it can embed dozens of documents in milliseconds on CPU with a remarkably low memory footprint.

What next

The models are already available on HuggingFace and supported in PyLate.
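As a quick start, here is a minimal usage sketch with PyLate, assuming its ColBERT wrapper and encode API; the Hugging Face repository id below is an assumption and should be checked against the actual model cards.

```python
# A minimal sketch; the repo id is assumed -- check the model card on HuggingFace.
import torch
from pylate import models

model = models.ColBERT(model_name_or_path="mixedbread-ai/mxbai-edge-colbert-v0-17m")

queries = ["how do multi-vector retrievers work?"]
documents = [
    "ColBERT encodes queries and documents into per-token embeddings "
    "and scores them with a MaxSim operator.",
    "Single-vector retrievers pool everything into one embedding.",
]

# encode() returns one (num_tokens, dim) array per input text.
query_embeddings = model.encode(queries, is_query=True)
document_embeddings = model.encode(documents, is_query=False)

# Late-interaction (MaxSim) scoring: each query token is matched to its best
# document token, and the matches are summed over query tokens.
q = torch.as_tensor(query_embeddings[0])
for text, d in zip(documents, document_embeddings):
    d = torch.as_tensor(d)
    score = (q @ d.T).max(dim=1).values.sum().item()
    print(f"{score:.2f}  {text[:60]}")
```

PyLate also ships indexing and retrieval utilities for larger collections; the snippet above only demonstrates scoring a handful of documents directly.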

With this release, we killed two birds with one stone, releasing both the strongest edge retrieval model to date, mxbai-edge-colbert-v0, and a set of extremely strong baselines to support further experimentation.

In the future, we intend to periodically update our edge-sized open source offerings to further disseminate our research findings in a bite-sized, anyone-can-use-it format.

If this sounds like something you'd like to contribute to, we are hiring across all technical positions! Take a look at them below and don't hesitate to apply if you feel like a fit for any of them: