PerplexityTextEmbedder
PerplexityTextEmbedder transforms a string into a vector that captures its semantics using a Perplexity embedding model.
When you perform embedding retrieval, use this component to transform your query into a vector. An embedding Retriever then uses that vector to look up similar or relevant documents.
| Most common position in a pipeline | Before an embedding Retriever in a query/RAG pipeline |
| Mandatory init variables | api_key: A Perplexity API key. Can be set with PERPLEXITY_API_KEY env var. |
| Mandatory run variables | text: A string |
| Output variables | embedding: A list of floats; meta: A dictionary of metadata |
| API reference | Integrations |
| GitHub link | https://github.com/deepset-ai/haystack-core-integrations/blob/main/integrations/perplexity/src/haystack_integrations/components/embedders/perplexity/text_embedder.py |
| Package name | perplexity-haystack |
Overview
PerplexityTextEmbedder supports the following embedding models:
- pplx-embed-v1-0.6b (default)
- pplx-embed-v1-4b
Use PerplexityTextEmbedder to embed a single string, such as a query. For embedding lists of documents, use PerplexityDocumentEmbedder.
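Under the hood, embedding retrieval compares the query vector against each document vector, typically with cosine similarity. The sketch below (plain Python with toy 3-dimensional vectors, not real Perplexity embeddings or the integration's API) illustrates the comparison that a Retriever performs:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; real models return vectors with hundreds of dimensions.
query = [0.9, 0.1, 0.0]
documents = {
    "doc_about_pizza": [0.8, 0.2, 0.1],
    "doc_about_horses": [0.0, 0.1, 0.9],
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(
    documents,
    key=lambda d: cosine_similarity(query, documents[d]),
    reverse=True,
)
print(ranked[0])  # doc_about_pizza
```

In a real pipeline, the embedder produces these vectors and the Retriever handles the ranking for you.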
The component uses a PERPLEXITY_API_KEY environment variable by default. You can also pass an API key directly at initialization:
```python
from haystack_integrations.components.embedders.perplexity import PerplexityTextEmbedder
from haystack.utils import Secret

embedder = PerplexityTextEmbedder(api_key=Secret.from_token("<your-api-key>"))
```
Usage
On its own
```python
from haystack_integrations.components.embedders.perplexity import PerplexityTextEmbedder

text_embedder = PerplexityTextEmbedder()

result = text_embedder.run("I love pizza!")
print(result["embedding"])
# [0.017020374536514282, -0.023255806416273117, ...]
```
info
We recommend setting PERPLEXITY_API_KEY as an environment variable instead of passing it as a parameter.
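For example, set the variable in your shell before running your script:

```shell
export PERPLEXITY_API_KEY="<your-api-key>"
```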
In a pipeline
```python
from haystack import Document, Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack_integrations.components.embedders.perplexity import (
    PerplexityTextEmbedder,
    PerplexityDocumentEmbedder,
)

document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

documents = [
    Document(content="My name is Wolfgang and I live in Berlin"),
    Document(content="I saw a black horse running"),
    Document(content="Germany has many big cities"),
]

document_embedder = PerplexityDocumentEmbedder()
documents_with_embeddings = document_embedder.run(documents)["documents"]
document_store.write_documents(documents_with_embeddings)

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", PerplexityTextEmbedder())
query_pipeline.add_component(
    "retriever",
    InMemoryEmbeddingRetriever(document_store=document_store),
)
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = query_pipeline.run({"text_embedder": {"text": "Who lives in Berlin?"}})
print(result["retriever"]["documents"][0])
```
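The document store above is configured with cosine similarity, so vector magnitude does not affect ranking. If you ever compare embeddings directly with a plain dot product instead, a small normalization helper (plain Python, not part of the integration's API) makes dot products equivalent to cosine similarity:

```python
import math

def normalize(vec):
    """Scale a vector to unit length so dot products equal cosine similarity."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

unit = normalize([3.0, 4.0])
print(unit)  # [0.6, 0.8]
```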