AnthropicVertexChatGenerator
This component enables chat completions using the Anthropic Vertex AI API.
| | |
| --- | --- |
| Most common position in a pipeline | After a `ChatPromptBuilder` |
| Mandatory init variables | `region`: The region where the Anthropic model is deployed<br>`project_id`: The GCP project ID where the Anthropic model is deployed |
| Mandatory run variables | `messages`: A list of `ChatMessage` objects |
| Output variables | `replies`: A list of `ChatMessage` objects with all the replies generated by the LLM<br>`meta`: A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and others |
| API reference | Anthropic |
| GitHub link | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/anthropic |
Overview
`AnthropicVertexChatGenerator` enables text generation using state-of-the-art Claude 3 LLMs via the Anthropic Vertex AI API. It supports Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku models, which are accessible through the Vertex AI API endpoint. For more details about the models, refer to the Anthropic Vertex AI documentation.
Parameters
To use the `AnthropicVertexChatGenerator`, ensure you have a GCP project with Vertex AI enabled. You need to specify your GCP `project_id` and `region`.
You can provide these values in the following ways:

- The `REGION` and `PROJECT_ID` environment variables (recommended)
- The `region` and `project_id` init parameters
Before making requests, you may need to authenticate with GCP using `gcloud auth login`.
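For example, assuming the `REGION` and `PROJECT_ID` environment variables are set and you have already authenticated with `gcloud`, a minimal sketch looks like this:

```python
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

# Assumes REGION and PROJECT_ID are set in the environment
# and that `gcloud auth login` has already been run
generator = AnthropicVertexChatGenerator()
```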
Set your preferred supported Anthropic model with the `model` parameter when initializing the component. Additionally, ensure that the desired Anthropic model is activated in the Vertex AI Model Garden.
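For instance, here is a sketch that selects a specific model at initialization (the model ID below is illustrative; use the exact ID shown for your model in the Model Garden):

```python
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

# "claude-3-5-sonnet@20240620" is an illustrative Vertex AI model ID;
# check the Model Garden for the ID of the model you activated
generator = AnthropicVertexChatGenerator(
    region="us-central1",
    project_id="your-project-id",
    model="claude-3-5-sonnet@20240620",
)
```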
`AnthropicVertexChatGenerator` requires a prompt to generate text, but you can pass any text generation parameters available in the Anthropic Messages API directly to this component using the `generation_kwargs` parameter, both at initialization and when running the component. For more details on the parameters supported by the Anthropic API, see the Anthropic documentation.
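For example, a sketch passing `generation_kwargs` both at initialization and at run time (`max_tokens` and `temperature` are standard Anthropic Messages API parameters; the values here are arbitrary):

```python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

# Defaults applied to every call
generator = AnthropicVertexChatGenerator(
    region="us-central1",
    project_id="your-project-id",
    generation_kwargs={"max_tokens": 512, "temperature": 0.2},
)

# Overridden for this single call
result = generator.run(
    messages=[ChatMessage.from_user("What's Natural Language Processing?")],
    generation_kwargs={"temperature": 0.7},
)
```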
Finally, the component needs a list of `ChatMessage` objects to operate. `ChatMessage` is a data class that contains a message, a role (who generated the message, such as `user`, `assistant`, `system`, `function`), and optional metadata.
Only text input modality is supported at this time.
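For example, a typical conversation history passed to the component might look like this:

```python
from haystack.dataclasses import ChatMessage

messages = [
    ChatMessage.from_system("You are a concise assistant."),
    ChatMessage.from_user("What's Natural Language Processing?"),
]
```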
Streaming
This ChatGenerator supports streaming the tokens from the LLM directly in output. To do so, pass a function to the `streaming_callback` init parameter.
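For example, a minimal sketch using Haystack's built-in `print_streaming_chunk` helper as the callback (region and project ID are placeholders):

```python
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

generator = AnthropicVertexChatGenerator(
    region="us-central1",
    project_id="your-project-id",
    streaming_callback=print_streaming_chunk,  # prints each chunk as it arrives
)
generator.run([ChatMessage.from_user("Explain NLP in one paragraph.")])
```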
Prompt Caching
Prompt caching is a feature for Anthropic LLMs that stores large text inputs for reuse. It allows you to send a large text block once and then refer to it in later requests without resending the entire text.
This feature is particularly useful for coding assistants that need full codebase context and for processing large documents. It can help reduce costs and improve response times.
Here's an example of initializing `AnthropicVertexChatGenerator` with prompt caching enabled and tagging a system message to be cached:
```python
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator
from haystack.dataclasses import ChatMessage

# Enable the prompt caching beta feature via an extra request header
generation_kwargs = {"extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"}}

claude_llm = AnthropicVertexChatGenerator(
    region="your_region", project_id="your_project_id", generation_kwargs=generation_kwargs
)

# Mark the long system message as cacheable
system_message = ChatMessage.from_system("Replace with some long text documents, code or instructions")
system_message.meta["cache_control"] = {"type": "ephemeral"}

messages = [system_message, ChatMessage.from_user("A query about the long text, for example")]
result = claude_llm.run(messages)

# Invoke again; the cached system message is reused instead of being resent
messages = [system_message, ChatMessage.from_user("Another query about the long text")]
result = claude_llm.run(messages)
# And so on, either invoking the component directly or in a pipeline
```
For more details, refer to Anthropic's documentation and integration examples.
Usage
Install the `anthropic-haystack` package to use the `AnthropicVertexChatGenerator`:
```shell
pip install anthropic-haystack
```
On its own
```python
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator
from haystack.dataclasses import ChatMessage

messages = [ChatMessage.from_user("What's Natural Language Processing?")]

client = AnthropicVertexChatGenerator(
    model="claude-3-sonnet@20240229",
    project_id="your-project-id",
    region="us-central1",
)
response = client.run(messages)
print(response)
```
In a pipeline
You can also use `AnthropicVertexChatGenerator` with the Anthropic chat models in your pipeline.
```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

pipe = Pipeline()
pipe.add_component("prompt_builder", ChatPromptBuilder())
pipe.add_component("llm", AnthropicVertexChatGenerator(project_id="your_project_id", region="us-central1"))
pipe.connect("prompt_builder", "llm")

country = "Germany"
system_message = ChatMessage.from_system("You are an assistant giving out valuable information to language learners.")
messages = [system_message, ChatMessage.from_user("What's the official language of {{ country }}?")]

res = pipe.run(data={"prompt_builder": {"template_variables": {"country": country}, "template": messages}})
print(res)
```