
AnthropicVertexChatGenerator

This component enables chat completions using the Anthropic Vertex AI API.

Most common position in a pipeline: After a ChatPromptBuilder
Mandatory init variables: "region": The region where the Anthropic model is deployed; "project_id": The GCP project ID where the Anthropic model is deployed
Mandatory run variables: "messages": A list of ChatMessage objects
Output variables: "replies": A list of ChatMessage objects with all the replies generated by the LLM; "meta": A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and others
API reference: Anthropic
GitHub link: https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/anthropic

Overview

AnthropicVertexChatGenerator enables text generation using state-of-the-art Claude 3 LLMs through the Anthropic Vertex AI API.
It supports Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku models, which are accessible through the Vertex AI API endpoint. For more details about the models, refer to the Anthropic Vertex AI documentation.

Parameters

To use the AnthropicVertexChatGenerator, ensure you have a GCP project with Vertex AI enabled. You need to specify your GCP project_id and region.

You can provide these values in the following ways (both options are sketched below):

  • The REGION and PROJECT_ID environment variables (recommended)
  • The region and project_id init parameters
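A minimal sketch of both options, assuming the component falls back to the REGION and PROJECT_ID environment variables when the init parameters are not set:

import os

from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

# Option 1: environment variables (assumed to be read when init parameters are omitted)
os.environ["REGION"] = "us-central1"
os.environ["PROJECT_ID"] = "your-project-id"
client = AnthropicVertexChatGenerator()

# Option 2: explicit init parameters
client = AnthropicVertexChatGenerator(region="us-central1", project_id="your-project-id")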

Before making requests, you may need to authenticate with GCP using gcloud auth login.
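For example, from a terminal:

gcloud auth login

# Depending on your environment, Application Default Credentials may also be
# required (an assumption; check your GCP setup):
gcloud auth application-default login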

Set your preferred Anthropic model with the model parameter when initializing the component. Additionally, ensure that the desired Anthropic model is activated in the Vertex AI Model Garden.

AnthropicVertexChatGenerator requires a prompt to generate text, but you can pass any text generation parameters available in the Anthropic Messages API directly to this component using the generation_kwargs parameter, both at initialization and when running the component. For more details on the parameters supported by the Anthropic API, see the Anthropic documentation.
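For instance, a minimal sketch of setting defaults at initialization and overriding one of them per call (the parameter values here are illustrative):

from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

# Defaults set once at initialization
client = AnthropicVertexChatGenerator(
    region="us-central1",
    project_id="your-project-id",
    generation_kwargs={"max_tokens": 512, "temperature": 0.2},
)

# Override a parameter for a single call at run time
client.run(
    [ChatMessage.from_user("What's Natural Language Processing?")],
    generation_kwargs={"temperature": 0.7},
)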

Finally, the component needs a list of ChatMessage objects to operate. ChatMessage is a data class that contains a message, a role (who generated the message, such as user, assistant, system, function), and optional metadata.
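For example, a minimal list of messages might look like this:

from haystack.dataclasses import ChatMessage

messages = [
    ChatMessage.from_system("You are a helpful, concise assistant."),
    ChatMessage.from_user("What's Natural Language Processing?"),
]

# Optional metadata can be attached to any message
messages[1].meta["source"] = "docs-example"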

Only text input modality is supported at this time.

Streaming

You can stream output as it's generated. Pass a callback to streaming_callback. Use the built-in print_streaming_chunk to print text tokens and tool events (tool calls and tool results).

from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

# Configure the ChatGenerator with a streaming callback
client = AnthropicVertexChatGenerator(
    region="us-central1",
    project_id="your-project-id",
    streaming_callback=print_streaming_chunk,
)

# Pass a list of messages; text chunks are printed as they arrive
client.run([ChatMessage.from_user("Your question here")])

📘

Streaming works only with a single response. If a provider supports multiple candidates, set n=1.

See our Streaming Support docs to learn more about how StreamingChunk works and how to write a custom callback.

Prefer print_streaming_chunk by default. Write a custom callback only if you need a specific transport (for example, SSE/WebSocket) or custom UI formatting.
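For illustration, a minimal custom callback might look like this (a sketch; it only forwards text deltas to stdout, but the same shape applies to SSE/WebSocket transports):

from haystack.dataclasses import StreamingChunk

def my_streaming_callback(chunk: StreamingChunk) -> None:
    # Forward each text delta to your transport or UI; here we just print it
    if chunk.content:
        print(chunk.content, end="", flush=True)

# Then pass it to the component instead of print_streaming_chunk:
# AnthropicVertexChatGenerator(..., streaming_callback=my_streaming_callback)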

Prompt Caching

Prompt caching is a feature for Anthropic LLMs that stores large text inputs for reuse. It allows you to send a large text block once and then refer to it in later requests without resending the entire text.

This feature is particularly useful for coding assistants that need full codebase context and for processing large documents. It can help reduce costs and improve response times.

Here's an example of initializing AnthropicVertexChatGenerator with prompt caching enabled and tagging a system message to be cached:

from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator
from haystack.dataclasses import ChatMessage

# Enable the prompt-caching beta feature via an extra request header
generation_kwargs = {"extra_headers": {"anthropic-beta": "prompt-caching-2024-07-31"}}

claude_llm = AnthropicVertexChatGenerator(
    region="your_region", project_id="test_id", generation_kwargs=generation_kwargs
)

# Mark the long system message to be cached across requests
system_message = ChatMessage.from_system("Replace with some long text documents, code or instructions")
system_message.meta["cache_control"] = {"type": "ephemeral"}

messages = [system_message, ChatMessage.from_user("A query about the long text for example")]
result = claude_llm.run(messages)

# Invoke again: the cached system message is reused instead of being re-sent

messages = [system_message, ChatMessage.from_user("Another query about the long text etc")]
result = claude_llm.run(messages)

# ... and so on, either invoking the component directly or within a pipeline
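To verify that caching is active, you can inspect the usage metadata on the reply; Anthropic reports fields such as cache_creation_input_tokens and cache_read_input_tokens. This is a sketch, assuming the integration surfaces Anthropic's usage data in the reply's meta:

# Sketch: inspect usage metadata to check for cache writes and reads
# (assumes Anthropic's usage fields are exposed in the reply's meta)
print(result["replies"][0].meta.get("usage"))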

For more details, refer to Anthropic's documentation and integration examples.

Usage

Install the anthropic-haystack package to use the AnthropicVertexChatGenerator:

pip install anthropic-haystack

On its own

from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator
from haystack.dataclasses import ChatMessage

messages = [ChatMessage.from_user("What's Natural Language Processing?")]
client = AnthropicVertexChatGenerator(
    model="claude-3-sonnet@20240229",
    project_id="your-project-id",
    region="us-central1",
)

response = client.run(messages)
print(response)

In a pipeline

You can also use AnthropicVertexChatGenerator with Anthropic chat models in your pipeline.

from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.anthropic import AnthropicVertexChatGenerator

pipe = Pipeline()
pipe.add_component("prompt_builder", ChatPromptBuilder())
pipe.add_component("llm", AnthropicVertexChatGenerator(project_id="test_id", region="us-central1"))
pipe.connect("prompt_builder", "llm")

country = "Germany"
system_message = ChatMessage.from_system("You are an assistant giving out valuable information to language learners.")
messages = [system_message, ChatMessage.from_user("What's the official language of {{ country }}?")]

res = pipe.run(data={"prompt_builder": {"template_variables": {"country": country}, "template": messages}})
print(res)