Google GenAI integration for Haystack
Module haystack_integrations.components.generators.google_genai.chat.chat_generator
GoogleGenAIChatGenerator
A component for generating chat completions using Google's Gemini models via the Google Gen AI SDK.
This component connects to Google's Gemini models through the `google-genai` SDK and supports models such as gemini-2.0-flash and other Gemini variants.
Usage example

```python
from haystack.dataclasses.chat_message import ChatMessage
from haystack.tools import Tool, Toolset
from haystack_integrations.components.generators.google_genai import GoogleGenAIChatGenerator

# Initialize the chat generator
chat_generator = GoogleGenAIChatGenerator(model="gemini-2.0-flash")

# Generate a response
messages = [ChatMessage.from_user("Tell me about the future of AI")]
response = chat_generator.run(messages=messages)
print(response["replies"][0].text)

# Tool usage example
def weather_function(city: str):
    return f"The weather in {city} is sunny and 25°C"

weather_tool = Tool(
    name="weather",
    description="Get weather information for a city",
    parameters={"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]},
    function=weather_function,
)

# Can use either List[Tool] or Toolset
chat_generator_with_tools = GoogleGenAIChatGenerator(
    model="gemini-2.0-flash",
    tools=[weather_tool],  # or tools=Toolset([weather_tool])
)

messages = [ChatMessage.from_user("What's the weather in Paris?")]
response = chat_generator_with_tools.run(messages=messages)
```
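The constructor also accepts a `streaming_callback` for token-by-token output. A minimal sketch of such a callback, assuming the chunk object exposes a `.content` string as Haystack's `StreamingChunk` does; the generator wiring in the comments is illustrative and requires a valid `GOOGLE_API_KEY`:

```python
# A minimal streaming callback: print each token as it arrives.
# The chunk is a Haystack StreamingChunk, which carries the newly
# generated text in its `content` attribute.
def on_chunk(chunk):
    print(chunk.content, end="", flush=True)

# Illustrative wiring (requires a valid GOOGLE_API_KEY):
# streaming_generator = GoogleGenAIChatGenerator(
#     model="gemini-2.0-flash",
#     streaming_callback=on_chunk,
# )
# streaming_generator.run(messages=[ChatMessage.from_user("Write a haiku.")])
```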
GoogleGenAIChatGenerator.__init__
```python
def __init__(*,
             api_key: Secret = Secret.from_env_var("GOOGLE_API_KEY"),
             model: str = "gemini-2.0-flash",
             generation_kwargs: Optional[Dict[str, Any]] = None,
             safety_settings: Optional[List[Dict[str, Any]]] = None,
             streaming_callback: Optional[StreamingCallbackT] = None,
             tools: Optional[Union[List[Tool], Toolset]] = None)
```
Initialize a GoogleGenAIChatGenerator instance.
Arguments:
- `api_key`: Google API key. Defaults to the `GOOGLE_API_KEY` environment variable; see https://ai.google.dev/gemini-api/docs/api-key for more information.
- `model`: Name of the model to use (e.g., "gemini-2.0-flash").
- `generation_kwargs`: Configuration for generation (temperature, max_tokens, etc.).
- `safety_settings`: Safety settings for content filtering.
- `streaming_callback`: A callback function that is called when a new token is received from the stream.
- `tools`: A list of Tool objects or a Toolset that the model can use. Each tool should have a unique name.
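The `generation_kwargs` dictionary is passed through to the SDK's generation config. A hedged sketch, assuming key names follow the `google-genai` `GenerateContentConfig` fields (`temperature`, `top_p`, `max_output_tokens`); exact support may vary by model:

```python
# Sketch: generation parameters passed through to the SDK's generation
# config. Key names follow google-genai's GenerateContentConfig fields;
# exact support may vary by model.
generation_kwargs = {
    "temperature": 0.2,        # lower = more deterministic output
    "top_p": 0.95,             # nucleus sampling cutoff
    "max_output_tokens": 512,  # cap on generated tokens
}

# Illustrative wiring (requires a valid GOOGLE_API_KEY):
# chat_generator = GoogleGenAIChatGenerator(
#     model="gemini-2.0-flash",
#     generation_kwargs=generation_kwargs,
# )
```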
GoogleGenAIChatGenerator.to_dict
```python
def to_dict() -> Dict[str, Any]
```
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
GoogleGenAIChatGenerator.from_dict
```python
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "GoogleGenAIChatGenerator"
```
Deserializes the component from a dictionary.
Arguments:
- `data`: Dictionary to deserialize from.
Returns:
Deserialized component.
GoogleGenAIChatGenerator.run
```python
@component.output_types(replies=List[ChatMessage])
def run(messages: List[ChatMessage],
        generation_kwargs: Optional[Dict[str, Any]] = None,
        safety_settings: Optional[List[Dict[str, Any]]] = None,
        streaming_callback: Optional[StreamingCallbackT] = None,
        tools: Optional[Union[List[Tool], Toolset]] = None) -> Dict[str, Any]
```
Run the Google Gen AI chat generator on the given input data.
Arguments:
- `messages`: A list of ChatMessage instances representing the input messages.
- `generation_kwargs`: Configuration for generation. If provided, this overrides the default config.
- `safety_settings`: Safety settings for content filtering. If provided, these override the default settings.
- `streaming_callback`: A callback function that is called when a new token is received from the stream.
- `tools`: A list of Tool objects or a Toolset that the model can use. If provided, this overrides the tools set during initialization.
Raises:
- `RuntimeError`: If there is an error in the Google Gen AI chat generation.
- `ValueError`: If a ChatMessage does not contain at least one of TextContent, ToolCall, or ToolCallResult, or if its role is not one of User, System, or Assistant.
Returns:
A dictionary with the following keys:
- `replies`: A list containing the generated ChatMessage responses.
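A reply may carry plain text or tool calls the model wants to make. A hypothetical helper for inspecting a reply, assuming Haystack's ChatMessage accessors (`tool_calls`, `text`) and the ToolCall fields (`tool_name`, `arguments`):

```python
# Hypothetical helper: summarize a single reply from run()["replies"].
# Assumes Haystack's ChatMessage accessors: `tool_calls` (a list of
# ToolCall objects with `tool_name` and `arguments`) and `text`.
def summarize_reply(reply):
    tool_calls = getattr(reply, "tool_calls", None)
    if tool_calls:
        return ", ".join(f"{c.tool_name}({c.arguments})" for c in tool_calls)
    return reply.text

# result = chat_generator_with_tools.run(messages=messages)
# print(summarize_reply(result["replies"][0]))
```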
GoogleGenAIChatGenerator.run_async
```python
@component.output_types(replies=List[ChatMessage])
async def run_async(
        messages: List[ChatMessage],
        generation_kwargs: Optional[Dict[str, Any]] = None,
        safety_settings: Optional[List[Dict[str, Any]]] = None,
        streaming_callback: Optional[StreamingCallbackT] = None,
        tools: Optional[Union[List[Tool], Toolset]] = None) -> Dict[str, Any]
```
Async version of the run method. Run the Google Gen AI chat generator on the given input data.
Arguments:
- `messages`: A list of ChatMessage instances representing the input messages.
- `generation_kwargs`: Configuration for generation. If provided, this overrides the default config.
- `safety_settings`: Safety settings for content filtering. If provided, these override the default settings.
- `streaming_callback`: A callback function that is called when a new token is received from the stream.
- `tools`: A list of Tool objects or a Toolset that the model can use. If provided, this overrides the tools set during initialization.
Raises:
- `RuntimeError`: If there is an error in the Google Gen AI chat generation.
- `ValueError`: If a ChatMessage does not contain at least one of TextContent, ToolCall, or ToolCallResult, or if its role is not one of User, System, or Assistant.
Returns:
A dictionary with the following keys:
- `replies`: A list containing the generated ChatMessage responses.
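Because `run_async` is a coroutine, several chats can be issued concurrently. A sketch of this pattern (the helper name is hypothetical), assuming each batch is a list of ChatMessage instances:

```python
import asyncio

# Hypothetical helper: fan out one run_async call per message batch and
# collect the first reply of each. asyncio.gather preserves input order.
async def gather_replies(generator, message_batches):
    results = await asyncio.gather(
        *(generator.run_async(messages=batch) for batch in message_batches)
    )
    return [result["replies"][0] for result in results]

# Illustrative usage (requires a valid GOOGLE_API_KEY):
# generator = GoogleGenAIChatGenerator(model="gemini-2.0-flash")
# batches = [[ChatMessage.from_user("Hi")], [ChatMessage.from_user("Hello")]]
# replies = asyncio.run(gather_replies(generator, batches))
```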