AnthropicGenerator
This component enables simple text completion using Anthropic Claude LLMs.
| Name | `AnthropicGenerator` |
| Path | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/anthropic |
| Most common position in a pipeline | After a `PromptBuilder` |
| Mandatory input variables | “prompt”: a string containing the prompt for the LLM |
| Output variables | “replies”: a list of strings with all the replies generated by the model<br>“meta”: metadata about the generation, such as the model name, finish reason, and token usage |
Overview
This component supports Anthropic Claude models provided through Anthropic’s own inferencing infrastructure. For a full list of available models, check out the Anthropic Claude documentation.
`AnthropicGenerator` needs an Anthropic API key to work. You can provide this key in:
- The `api_key` init parameter
- The `ANTHROPIC_API_KEY` environment variable (recommended)
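For example, you can set the key either way (a minimal sketch, assuming the standard Haystack `Secret` utility; replace the placeholder with your real key):

```python
import os

from haystack.utils import Secret
from haystack_integrations.components.generators.anthropic import AnthropicGenerator

# Recommended: set the environment variable; the component reads it by default
os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic API Key"
generator = AnthropicGenerator()

# Alternative: pass the key explicitly via the api_key parameter
generator = AnthropicGenerator(api_key=Secret.from_token("Your Anthropic API Key"))
```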
Currently, the available models are:
- `claude-2.1`
- `claude-3-haiku-20240307`
- `claude-3-sonnet-20240229` (default)
- `claude-3-opus-20240229`
Although Anthropic natively supports a much richer messaging API, we have intentionally simplified it in this component so that the main input/output interface is string-based.
For more complete messaging support, consider using the AnthropicChatGenerator.
Refer to the Anthropic API documentation for more details on the supported parameters, which you can provide with `generation_kwargs` when running the component.
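For example, here is a sketch of passing `generation_kwargs` at run time; `max_tokens` and `temperature` are standard Anthropic API parameters:

```python
from haystack_integrations.components.generators.anthropic import AnthropicGenerator

client = AnthropicGenerator(model="claude-3-sonnet-20240229")
response = client.run(
    "What's Natural Language Processing? Be brief.",
    generation_kwargs={"max_tokens": 100, "temperature": 0.2},
)
print(response["replies"][0])
```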
Streaming
`AnthropicGenerator` supports streaming the tokens from the LLM directly in output. To do so, pass a function to the `streaming_callback` parameter when initializing the component.
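For example, you can reuse the ready-made `print_streaming_chunk` utility, also shown in the pipeline example below:

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack_integrations.components.generators.anthropic import AnthropicGenerator

# Each streamed chunk is printed to stdout as soon as it arrives
client = AnthropicGenerator(
    model="claude-3-sonnet-20240229",
    streaming_callback=print_streaming_chunk,
)
client.run("What's Natural Language Processing? Be brief.")
```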
Usage
Install the `anthropic-haystack` package to use the `AnthropicGenerator`:

```shell
pip install anthropic-haystack
```
On its own
Basic usage:
```python
import os

from haystack_integrations.components.generators.anthropic import AnthropicGenerator

os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic API Key"

client = AnthropicGenerator(model="claude-2.1")
response = client.run("What's Natural Language Processing? Be brief.")
print(response)

# >> {'replies': ['Natural language processing (NLP) is a branch of artificial intelligence focused on enabling
# >> computers to understand, interpret, and manipulate human language. The goal of NLP is to read, decipher,
# >> understand, and make sense of the human languages in a manner that is valuable.'], 'meta': {'model':
# >> 'claude-2.1', 'index': 0, 'finish_reason': 'end_turn', 'usage': {'input_tokens': 18, 'output_tokens': 58}}}
```
In a pipeline
Below is an example RAG pipeline that answers a predefined question using the contents of a given URL pointing to the Anthropic prompt engineering guide. We fetch the contents of the URL and generate an answer with the `AnthropicGenerator`:
```python
import os

from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.converters import HTMLToDocument
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.generators.utils import print_streaming_chunk
from haystack_integrations.components.generators.anthropic import AnthropicGenerator

# To run this example, you will need to set an `ANTHROPIC_API_KEY` environment variable.
os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic API Key"

template = """
Given the following information, answer the question.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ query }}
"""

rag_pipeline = Pipeline()
rag_pipeline.add_component("fetcher", LinkContentFetcher())
rag_pipeline.add_component("converter", HTMLToDocument())
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=template))
rag_pipeline.add_component(
    "llm",
    AnthropicGenerator(
        model="claude-3-sonnet-20240229",
        streaming_callback=print_streaming_chunk,
    ),
)
rag_pipeline.connect("fetcher", "converter")
rag_pipeline.connect("converter", "prompt_builder")
rag_pipeline.connect("prompt_builder", "llm")

question = "What are the best practices in prompt engineering?"
rag_pipeline.run(
    data={
        "fetcher": {"urls": ["https://docs.anthropic.com/claude/docs/prompt-engineering"]},
        "prompt_builder": {"query": question},
    }
)
```
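Since the generator is initialized with `print_streaming_chunk`, the generated answer is streamed to standard output as the model produces it.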