AnthropicChatGenerator
This component enables chat completion using Anthropic Claude LLMs.
| | |
| --- | --- |
| Name | `AnthropicChatGenerator` |
| Path | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/anthropic |
| Most common position in a pipeline | After `DynamicChatPromptBuilder` |
| Mandatory input variables | "messages": a list of `ChatMessage` instances |
| Output variables | "replies": a list of `ChatMessage` objects<br>"meta": a list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and so on |
Overview
This component supports Anthropic Claude models provided through Anthropic's own inference infrastructure. For a full list of available models, check out the Anthropic Claude documentation.
`AnthropicChatGenerator` needs an Anthropic API key to work. You can provide this key in:

- The `api_key` init parameter
- The `ANTHROPIC_API_KEY` environment variable (recommended)
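Here is a minimal sketch of both options. It assumes the component follows Haystack's standard `Secret` handling for API keys; adjust to your setup:

```python
import os

from haystack.utils import Secret
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

# Option 1 (recommended): rely on the ANTHROPIC_API_KEY environment variable.
os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic API Key"
client = AnthropicChatGenerator()

# Option 2: pass the key explicitly through the api_key init parameter.
client = AnthropicChatGenerator(api_key=Secret.from_token("Your Anthropic API Key"))
```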
Currently, the available models are:

- `claude-2.1`
- `claude-3-haiku-20240307`
- `claude-3-sonnet-20240229` (default)
- `claude-3-opus-20240229`
This component needs a list of `ChatMessage` objects to operate. `ChatMessage` is a data class that contains a message, a role (who generated the message, such as `user`, `assistant`, `system`, `function`), and optional metadata.
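For example, you can build the required list with `ChatMessage`'s convenience class methods, as also used in the examples below:

```python
from haystack.dataclasses import ChatMessage

messages = [
    ChatMessage.from_system("You are a helpful, concise assistant."),
    ChatMessage.from_user("What's Natural Language Processing?"),
]
```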
Refer to the Anthropic API documentation for more details on the parameters the Anthropic API supports. You can provide these parameters with `generation_kwargs` when running the component.
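As an illustrative sketch, here is how you could pass Anthropic parameters such as `max_tokens` and `temperature` (the values below are arbitrary examples, not recommendations):

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

client = AnthropicChatGenerator(model="claude-3-sonnet-20240229")
response = client.run(
    messages=[ChatMessage.from_user("Summarize NLP in one sentence.")],
    # Forwarded to the Anthropic API for this call.
    generation_kwargs={"max_tokens": 256, "temperature": 0.2},
)
```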
Streaming
`AnthropicChatGenerator` supports streaming the tokens from the LLM directly in output. To do so, pass a function to the `streaming_callback` parameter when initializing the component.
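For example, you can use Haystack's ready-made `print_streaming_chunk` helper (also used in the pipeline example below) to print tokens as they arrive:

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = AnthropicChatGenerator(
    model="claude-3-sonnet-20240229",
    streaming_callback=print_streaming_chunk,  # called once per streamed chunk
)
client.run([ChatMessage.from_user("What's Natural Language Processing?")])
```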
Usage
Install the `anthropic-haystack` package to use the `AnthropicChatGenerator`:

```shell
pip install anthropic-haystack
```
On its own
Basic usage:
```python
import os

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic API Key"

messages = [ChatMessage.from_user("What's Natural Language Processing?")]
client = AnthropicChatGenerator(model="claude-3-sonnet-20240229")
response = client.run(messages)
print(response)

# >>> {'replies': [ChatMessage(content='Natural Language Processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and humans using natural languages, such as English, Spanish, French, etc. The goal of NLP is to enable computers to understand, interpret, and generate human language naturally.\n\nNLP involves several tasks and techniques, including:\n\n1. Natural Language Understanding (NLU): This involves analyzing and comprehending human language, including tasks like text classification, sentiment analysis, named entity recognition, and relationship extraction.\n\n2. Natural Language Generation (NLG): This involves producing human-readable text from structured data or representations, such as generating summaries, reports, or natural language descriptions.\n\n3. Machine Translation: Automatically translating text or speech from one natural language to another.\n\n4. Speech Recognition: Converting spoken language into text.\n\n5. Text-to-Speech: Converting written text into spoken language.\n\n6. Question Answering: Enabling systems to provide accurate answers to questions asked in natural language.\n\nNLP combines principles from various fields, including computer science, linguistics, and machine learning. It uses techniques such as statistical models, neural networks, and rule-based systems to analyze, understand, and generate human language.\n\nNLP has numerous applications in various domains, such as virtual assistants, customer service chatbots, sentiment analysis for social media monitoring, language translation tools, text summarization, and information extraction from unstructured data.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'claude-3-sonnet-20240229', 'index': 0, 'finish_reason': 'end_turn', 'usage': {'input_tokens': 13, 'output_tokens': 305}})]}
```
In a pipeline
Below is an example RAG pipeline that answers a predefined question using content fetched from a URL pointing to the Anthropic prompt engineering guide. We fetch the contents of the URL, convert them to documents, and generate an answer with the `AnthropicChatGenerator`.
```python
import os

from haystack import Pipeline
from haystack.components.builders import DynamicChatPromptBuilder
from haystack.components.converters import HTMLToDocument
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator

# To run this example, you will need to set an `ANTHROPIC_API_KEY` environment variable.
os.environ["ANTHROPIC_API_KEY"] = "Your Anthropic API Key"

messages = [
    ChatMessage.from_system("You are a prompt expert who answers questions based on the given documents."),
    ChatMessage.from_user("Here are the documents: {{documents}} \\n Answer: {{query}}"),
]

rag_pipeline = Pipeline()
rag_pipeline.add_component("fetcher", LinkContentFetcher())
rag_pipeline.add_component("converter", HTMLToDocument())
rag_pipeline.add_component("prompt_builder", DynamicChatPromptBuilder(runtime_variables=["documents"]))
rag_pipeline.add_component(
    "llm",
    AnthropicChatGenerator(
        model="claude-3-sonnet-20240229",
        streaming_callback=print_streaming_chunk,
    ),
)

rag_pipeline.connect("fetcher", "converter")
rag_pipeline.connect("converter", "prompt_builder")
rag_pipeline.connect("prompt_builder", "llm")

question = "What are the best practices in prompt engineering?"
rag_pipeline.run(
    data={
        "fetcher": {"urls": ["https://docs.anthropic.com/claude/docs/prompt-engineering"]},
        "prompt_builder": {"template_variables": {"query": question}, "prompt_source": messages},
    }
)

# >>> {'llm': {'replies': [ChatMessage(content="According to the provided document, some best practices for prompt engineering with Claude include:\n\n1. Providing clear and specific instructions in the prompt to guide Claude's response.\n\n2. Breaking down complex tasks into smaller steps or sub-prompts.\n\n3. Giving examples or demonstrations of the desired output format.\n\n4. Using techniques like few-shot prompting with a small number of examples.\n\n5. Iterating on prompts and providing feedback to improve performance over time.\n\nThe document suggests that effective prompt engineering can help maximize Claude's capabilities for different tasks and use cases. However, it does not provide detailed examples or an exhaustive list of best practices. The recommendation is to experiment and iterate to find what works well for your specific needs.", role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'claude-3-sonnet-20240229', 'index': 0, 'finish_reason': 'end_turn', 'usage': {'input_tokens': 148, 'output_tokens': 158}})]}}
```