SentenceTransformersTextEmbedder
SentenceTransformersTextEmbedder transforms a string into a vector that captures its semantics using an embedding model compatible with the Sentence Transformers library.
When you perform embedding retrieval, use this component first to transform your query into a vector. Then, the embedding Retriever will use the vector to search for similar or relevant Documents.
| Name | SentenceTransformersTextEmbedder |
| --- | --- |
| Type | Text Embedder |
| Position in a pipeline | Before an embedding Retriever in a Query/RAG pipeline |
| Inputs | "text": a string |
| Outputs | "embedding": a list of float numbers; "meta": a dictionary of metadata |
Overview
Use this component to embed a simple string (such as a query) into a vector. To embed a list of Documents, use SentenceTransformersDocumentEmbedder instead, which enriches each Document with its computed embedding, also known as a vector.
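For comparison, here is a minimal sketch of the Document Embedder counterpart: it writes the computed vector into each Document's embedding field (the default model is assumed here).

from haystack import Document
from haystack.components.embedders import SentenceTransformersDocumentEmbedder

doc = Document(content="I love pizza!")
doc_embedder = SentenceTransformersDocumentEmbedder()
doc_embedder.warm_up()
# run() returns the same Documents, each enriched with an embedding
enriched_doc = doc_embedder.run([doc])["documents"][0]
print(len(enriched_doc.embedding))  # dimensionality of the embedding vector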
The component uses the HF_API_TOKEN environment variable by default. Otherwise, you can pass a Hugging Face API token at initialization with token:

from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.utils import Secret

text_embedder = SentenceTransformersTextEmbedder(token=Secret.from_token("<your-api-key>"))
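If you prefer to keep the token out of your code, a sketch of reading it from an environment variable with Secret.from_env_var (assuming the token is exported as HF_API_TOKEN):

from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.utils import Secret

# the token is resolved from the HF_API_TOKEN environment variable at runtime
text_embedder = SentenceTransformersTextEmbedder(token=Secret.from_env_var("HF_API_TOKEN"))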
Compatible Models
Unless specified otherwise while initializing this component, the default embedding model is `sentence-transformers/all-mpnet-base-v2`.
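To use a different Sentence Transformers model, pass its name with the model parameter at initialization. A minimal sketch (the model name here is only an illustrative choice):

from haystack.components.embedders import SentenceTransformersTextEmbedder

text_embedder = SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")
text_embedder.warm_up()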
You can find the original models in the Sentence Transformers documentation.
Most of the models on the Massive Text Embedding Benchmark (MTEB) Leaderboard are compatible with Sentence Transformers.
You can check for compatibility in the model card; the BGE models are one example.
Instructions
Some recent models listed on MTEB need the text to be prefixed with an instruction to work better for retrieval.
For example, if you use BAAI/bge-large-en-v1.5, you should prefix your query with the following instruction: “Represent this sentence for searching relevant passages:”
This is how it works with SentenceTransformersTextEmbedder:

from haystack.components.embedders import SentenceTransformersTextEmbedder

instruction = "Represent this sentence for searching relevant passages:"

embedder = SentenceTransformersTextEmbedder(
    model="BAAI/bge-large-en-v1.5",
    prefix=instruction)
If you create a Text Embedder and a Document Embedder based on the same model, Haystack uses the same model instance behind the scenes to save resources.
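For example, in this sketch both embedders are initialized with the same model name, so the underlying model can be shared between them:

from haystack.components.embedders import (
    SentenceTransformersDocumentEmbedder,
    SentenceTransformersTextEmbedder,
)

model = "sentence-transformers/all-mpnet-base-v2"
# same model name for both components, so the loaded model is reused
doc_embedder = SentenceTransformersDocumentEmbedder(model=model)
text_embedder = SentenceTransformersTextEmbedder(model=model)
doc_embedder.warm_up()
text_embedder.warm_up()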
Usage
On its own
from haystack.components.embedders import SentenceTransformersTextEmbedder
text_to_embed = "I love pizza!"
text_embedder = SentenceTransformersTextEmbedder()
text_embedder.warm_up()
print(text_embedder.run(text_to_embed))
# {'embedding': [-0.07804739475250244, 0.1498992145061493, ...]}
In a pipeline
from haystack import Document
from haystack import Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.embedders import SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")
documents = [Document(content="My name is Wolfgang and I live in Berlin"),
Document(content="I saw a black horse running"),
Document(content="Germany has many big cities")]
document_embedder = SentenceTransformersDocumentEmbedder()
document_embedder.warm_up()
documents_with_embeddings = document_embedder.run(documents)['documents']
document_store.write_documents(documents_with_embeddings)
query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", SentenceTransformersTextEmbedder())
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
query = "Who lives in Berlin?"
result = query_pipeline.run({"text_embedder":{"text": query}})
print(result['retriever']['documents'][0])
# Document(id=..., mimetype: 'text/plain',
# text: 'My name is Wolfgang and I live in Berlin')