
FastembedTextEmbedder

This component computes the embedding of a string using embedding models supported by Fastembed.

Name: FastembedTextEmbedder
Path: https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/fastembed
Position in a Pipeline: Before an embedding Retriever in a Query/RAG pipeline
Inputs: β€œtext”: a string
Outputs: β€œembedding”: a vector (list of float numbers)

This component should be used to embed a simple string (such as a query) into a vector. For embedding lists of Documents, use the FastembedDocumentEmbedder, which enriches each Document with its computed embedding (a vector).

Overview

FastembedTextEmbedder transforms a string into a vector that captures its semantics using embedding models supported by Fastembed.

When you perform embedding retrieval, use this component first to transform your query into a vector. Then, the embedding Retriever will use the vector to search for similar or relevant Documents.

Compatible models

You can find the original models in the Fastembed documentation.

Currently, most of the models in the Massive Text Embedding Benchmark (MTEB) Leaderboard are compatible with Fastembed. You can look for compatibility in the supported model list.

Installation

To start using this integration with Haystack, install the package with:

pip install fastembed-haystack
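To confirm the installation, you can run a quick sanity check. This is a minimal sketch; the default model is downloaded the first time you warm up the embedder, and the input string is illustrative:

from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder

embedder = FastembedTextEmbedder()  # uses the default model, BAAI/bge-small-en-v1.5
embedder.warm_up()  # downloads and loads the model on first use
print(len(embedder.run("hello fastembed")["embedding"]))  # embedding dimension, 384 for the default model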

Instructions

Some recent models that you can find in MTEB require prepending an instruction to the text to work better for retrieval.
For example, if you use the BAAI/bge-large-en-v1.5 model, you should prefix your query with the instruction: β€œpassage:”.

This is how it works with FastembedTextEmbedder:

from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder

instruction = "passage:"
embedder = FastembedTextEmbedder(
	model="BAAI/bge-large-en-v1.5",
	prefix=instruction,
)
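At run time, the prefix is prepended to the input text before it is embedded, so you pass only the raw query to run(). The query string below is illustrative:

embedder.warm_up()
# The string actually embedded is prefix + text, i.e. "passage:" followed by the query.
embedding = embedder.run("Does this work on a Mac?")["embedding"]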

Parameters

You can set the directory where the model is cached (cache_dir) and the number of threads a single onnxruntime session can use (threads).

from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder

cache_dir = "/your_cache_directory"
embedder = FastembedTextEmbedder(
	model="BAAI/bge-large-en-v1.5",
	cache_dir=cache_dir,
	threads=2,
)

If you want to use data-parallel encoding, you can set the parallel and batch_size parameters, as shown in the sketch after this list.

  • If parallel > 1, data-parallel encoding will be used. This is recommended for offline encoding of large datasets.
  • If parallel is 0, use all available cores.
  • If None, don't use data-parallel processing; use default onnxruntime threading instead.
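For example, here is a minimal sketch of data-parallel encoding; the model name and values are illustrative:

from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder

embedder = FastembedTextEmbedder(
    model="BAAI/bge-small-en-v1.5",
    batch_size=256,  # number of texts encoded per batch
    parallel=0,  # 0 = use all available cores; None = default onnxruntime threading
)
embedder.warm_up()
embedding = embedder.run("a query to embed")["embedding"]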

πŸ‘

If you create a Text Embedder and a Document Embedder based on the same model, Haystack reuses the same model behind the scenes to save resources.
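For instance, creating both embedders with the same model name (an illustrative sketch):

from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder, FastembedTextEmbedder

# Both components reference the same model, so the underlying resources are shared.
document_embedder = FastembedDocumentEmbedder(model="BAAI/bge-small-en-v1.5")
text_embedder = FastembedTextEmbedder(model="BAAI/bge-small-en-v1.5")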

Usage

On its own

from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder

text = """It clearly says online this will work on a Mac OS system. 
The disk comes and it does not, only Windows. 
Do Not order this if you have a Mac!!"""
text_embedder = FastembedTextEmbedder(model="BAAI/bge-small-en-v1.5")
text_embedder.warm_up()
embedding = text_embedder.run(text)["embedding"]
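The result is a plain list of floats; for instance, you can check its dimensionality:

print(len(embedding))  # 384 for BAAI/bge-small-en-v1.5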

In a Pipeline

from haystack import Document, Pipeline
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder, FastembedTextEmbedder

document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

documents = [
    Document(content="My name is Wolfgang and I live in Berlin"),
    Document(content="I saw a black horse running"),
    Document(content="Germany has many big cities"),
    Document(content="fastembed is supported by and maintained by Qdrant."),
]

document_embedder = FastembedDocumentEmbedder()
document_embedder.warm_up()
documents_with_embeddings = document_embedder.run(documents)["documents"]
document_store.write_documents(documents_with_embeddings)

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", FastembedTextEmbedder())
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

query = "Who supports fastembed?"

result = query_pipeline.run({"text_embedder": {"text": query}})

print(result["retriever"]["documents"][0])  # noqa: T201

# Document(id=...,
#  content: 'fastembed is supported by and maintained by Qdrant.',
#  score: 0.758..)

Related Links

Check out the API reference in the GitHub repo or in our docs: