

Rankers reorder documents based on a condition such as relevance or recency. The improvement that a Ranker brings comes at the cost of some additional computation time. Haystack supports various ranking models, including transformer models and Cohere models.

Position in a Pipeline: After a Retriever
Classes: CohereRanker, DiversityRanker, LostInTheMiddleRanker, RecentnessRanker, SentenceTransformersRanker


CohereRanker uses models by Cohere to rerank documents. Cohere models are trained with a context length of 512 tokens, and tokens from both the query and the document count toward this limit. If your query is longer than 256 tokens, it's shortened to the first 256 tokens.

For more information and best practices on re-ranking in Cohere, see Cohere documentation.

Here’s how you initialize the CohereRanker:

from haystack.nodes import CohereRanker

ranker = CohereRanker(
    api_key="<your-cohere-api-key>",
    model_name_or_path="rerank-english-v2.0",  # example Cohere rerank model
)


The DiversityRanker is designed to maximize the variety of a given set of documents. It does so by first selecting the document most semantically similar to the query, then repeatedly selecting the document least similar to those already selected, until all documents are ordered. It operates on the principle that a diverse set of documents can increase the LLM's ability to generate answers with more breadth and depth.

Here’s how you initialize the DiversityRanker:

from haystack.nodes import DiversityRanker

ranker = DiversityRanker()
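The greedy selection strategy described above can be sketched in plain Python. This is an illustrative reimplementation, not Haystack's actual code: the cosine similarity over plain vectors stands in for a real sentence-transformer model, and averaging similarity against the already-selected set is an assumption about the exact criterion.

```python
import math

def cosine(a, b):
    """Cosine similarity between two plain vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def diversity_order(query_vec, doc_vecs):
    """Greedy diversity ordering: start with the document most similar
    to the query, then repeatedly pick the remaining document least
    similar (on average) to everything already selected.
    Returns the indices of doc_vecs in their new order."""
    remaining = list(range(len(doc_vecs)))
    # Step 1: the document most similar to the query.
    first = max(remaining, key=lambda i: cosine(query_vec, doc_vecs[i]))
    selected = [first]
    remaining.remove(first)
    # Step 2+: the document least similar to the selected set.
    while remaining:
        nxt = min(
            remaining,
            key=lambda i: sum(
                cosine(doc_vecs[i], doc_vecs[j]) for j in selected
            ) / len(selected),
        )
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```

With a query vector pointing along one axis, the document aligned with it is picked first, then the most dissimilar document, then the rest.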


This ranker sorts documents in the "Lost in the Middle" order described in the research paper "Lost in the Middle: How Language Models Use Long Contexts". The ranker positions the most relevant documents at the beginning and at the end of the resulting list while placing the least relevant documents in the middle.

Here’s how you initialize the LostInTheMiddleRanker:

from haystack.nodes import LostInTheMiddleRanker

ranker = LostInTheMiddleRanker()
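The reordering itself is simple to sketch in plain Python. This is an illustrative version, not Haystack's implementation: given documents sorted from most to least relevant, odd-ranked documents fill the front of the list and even-ranked documents fill the back, pushing the least relevant toward the middle.

```python
def lost_in_the_middle_order(docs):
    """Reorder docs (given most -> least relevant) so the most relevant
    items sit at the beginning and end, and the least relevant items
    end up in the middle of the list."""
    front, back = [], []
    for i, doc in enumerate(docs):
        # Alternate: 1st, 3rd, 5th, ... go to the front;
        # 2nd, 4th, ... go to the back (reversed at the end).
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]
```

For documents ranked 1 (most relevant) through 5 (least relevant), this yields the order 1, 3, 5, 4, 2: ranks 1 and 2 at the edges, rank 5 in the middle.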


If you want to sort the documents you've retrieved based on both relevance and recency, use a RecentnessRanker.

For example, if you are building a QA system based on release notes, you might want results that are based on the most recent releases to have priority over the rest.

Here’s how you initialize the RecentnessRanker:

from haystack.nodes import RecentnessRanker

ranker = RecentnessRanker(
    date_meta_field="updated_at",  # metadata field holding each document's date; "updated_at" is an example
)
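The idea of blending relevance with recency can be sketched as a weighted combination of two scores. This is illustrative only and not Haystack's exact formula (the real ranker supports other fusion modes, such as reciprocal rank fusion); the `score` and `date` fields and the rank-based recency score are assumptions for the sketch.

```python
from datetime import datetime

def recentness_rerank(docs, weight=0.5):
    """Blend each document's relevance score with a recency score.

    docs: list of dicts with a 'score' (0-1 relevance) and an ISO-format
    'date' string. Recency is scored by rank position after sorting by
    date, newest first, so the newest document gets a recency score of 1.0.
    weight=0 sorts purely by relevance; weight=1 purely by recency.
    """
    n = len(docs)
    by_date = sorted(
        docs, key=lambda d: datetime.fromisoformat(d["date"]), reverse=True
    )
    recency = {id(d): (n - i) / n for i, d in enumerate(by_date)}
    return sorted(
        docs,
        key=lambda d: (1 - weight) * d["score"] + weight * recency[id(d)],
        reverse=True,
    )
```

In the release-notes example above, a moderately relevant but recent document can outrank a more relevant but older one once `weight` is high enough.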


SentenceTransformersRanker uses a Cross-Encoder model to rerank documents. It can be used on top of a retriever to boost the performance of the document search. This is particularly useful if the retriever has a high recall but is bad at sorting the documents by relevance.

In Haystack, you can use any Cross-Encoder model that returns a single logit as a similarity score. For examples, see the Sentence Transformers page.

As an example, SentenceTransformersRanker can pair nicely with a sparse retriever, such as the BM25Retriever. While the BM25Retriever is fast and lightweight, it is not sensitive to word order but rather treats text as a bag of words. By placing SentenceTransformersRanker afterward, you can offset this weakness and have a better-sorted list of relevant documents.
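The word-order blindness mentioned above is easy to demonstrate. This is a standalone illustration, not Haystack code: a bag-of-words representation reduces each text to order-free token counts, so two sentences with the same words in a different order look identical to BM25-style scoring, even when their meanings differ.

```python
from collections import Counter

def bag_of_words(text):
    """Order-free token counts -- the representation BM25-style scoring sees."""
    return Counter(text.lower().split())

# Same words, opposite meaning -- indistinguishable as bags of words.
a = bag_of_words("dog bites man")
b = bag_of_words("man bites dog")
```

A cross-encoder, by contrast, reads both texts jointly through a transformer and is sensitive to word order, which is what makes it a useful second stage.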

To use SentenceTransformersRanker in a pipeline, run:

from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever, SentenceTransformersRanker
from haystack import Pipeline

document_store = ElasticsearchDocumentStore()
retriever = BM25Retriever(document_store)
ranker = SentenceTransformersRanker(model_name_or_path="cross-encoder/ms-marco-MiniLM-L-12-v2")
p = Pipeline()
p.add_node(component=retriever, name="BM25Retriever", inputs=["Query"])
p.add_node(component=ranker, name="Ranker", inputs=["BM25Retriever"])

The SentenceTransformersRanker can also be used in isolation by calling its predict() method after initialization.

Keep in mind that SentenceTransformersRanker needs to be initialized with a model trained on a text pair classification task. The SentenceTransformersRanker has a train() method to allow for this training. Alternatively, this FARM script shows how to train a text pair classification model.
