API Reference

Google AI integration for Haystack

Module haystack_integrations.components.generators.google_ai.gemini

GoogleAIGeminiGenerator

Generates text using multimodal Gemini models through Google AI Studio.

Usage example

from haystack.utils import Secret
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

gemini = GoogleAIGeminiGenerator(model="gemini-pro", api_key=Secret.from_token("<MY_API_KEY>"))
res = gemini.run(parts=["What is the most interesting thing you know?"])
for answer in res["replies"]:
    print(answer)

Multimodal example

import requests
from haystack.utils import Secret
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiGenerator

BASE_URL = (
    "https://raw.githubusercontent.com/deepset-ai/haystack-core-integrations"
    "/main/integrations/google_ai/example_assets"
)

URLS = [
    f"{BASE_URL}/robot1.jpg",
    f"{BASE_URL}/robot2.jpg",
    f"{BASE_URL}/robot3.jpg",
    f"{BASE_URL}/robot4.jpg"
]
images = [
    ByteStream(data=requests.get(url).content, mime_type="image/jpeg")
    for url in URLS
]

gemini = GoogleAIGeminiGenerator(model="gemini-1.5-flash", api_key=Secret.from_token("<MY_API_KEY>"))
result = gemini.run(parts=["What can you tell me about these robots?", *images])
for answer in result["replies"]:
    print(answer)

GoogleAIGeminiGenerator.__init__

def __init__(*,
             api_key: Secret = Secret.from_env_var("GOOGLE_API_KEY"),
             model: str = "gemini-1.5-flash",
             generation_config: Optional[Union[GenerationConfig,
                                               Dict[str, Any]]] = None,
             safety_settings: Optional[Dict[HarmCategory,
                                            HarmBlockThreshold]] = None,
             tools: Optional[List[Tool]] = None,
             streaming_callback: Optional[Callable[[StreamingChunk],
                                                   None]] = None)

Initializes a GoogleAIGeminiGenerator instance.

To get an API key, visit: https://makersuite.google.com

Arguments:

  • api_key: Google AI Studio API key.
  • model: Name of the model to use. For available models, see https://ai.google.dev/gemini-api/docs/models/gemini
  • generation_config: The generation configuration to use. This can either be a GenerationConfig object or a dictionary of parameters. For available parameters, see the GenerationConfig API reference.
  • safety_settings: The safety settings to use. A dictionary with HarmCategory as keys and HarmBlockThreshold as values. For more information, see the API reference.
  • tools: A list of Tool objects that can be used for Function calling.
  • streaming_callback: A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument.
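As noted above, generation_config can be a plain dictionary instead of a GenerationConfig object. A minimal sketch of that form, using common Gemini generation parameters with illustrative values (not the component's defaults):

```python
# generation_config passed as a plain dict; keys mirror common Gemini
# generation parameters. Values here are illustrative, not defaults.
generation_config = {
    "temperature": 0.7,        # sampling temperature
    "max_output_tokens": 256,  # cap on generated tokens
    "top_p": 0.95,
    "top_k": 40,
}

# The component would then be constructed roughly as:
# gemini = GoogleAIGeminiGenerator(
#     model="gemini-1.5-flash",
#     generation_config=generation_config,
#     api_key=Secret.from_token("<MY_API_KEY>"),
# )
print(sorted(generation_config))
```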

GoogleAIGeminiGenerator.to_dict

def to_dict() -> Dict[str, Any]

Serializes the component to a dictionary.

Returns:

Dictionary with serialized data.

GoogleAIGeminiGenerator.from_dict

@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "GoogleAIGeminiGenerator"

Deserializes the component from a dictionary.

Arguments:

  • data: Dictionary to deserialize from.

Returns:

Deserialized component.
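The to_dict/from_dict pair follows the standard Haystack serialization contract: to_dict captures the init parameters, and from_dict rebuilds an equivalent component from them. A simplified, self-contained stand-in (the real component also serializes its Secret api_key and the other init parameters):

```python
from typing import Any, Dict


class TinyComponent:
    """Minimal stand-in illustrating the to_dict / from_dict round trip
    that GoogleAIGeminiGenerator follows. Simplified for illustration."""

    def __init__(self, model: str = "gemini-1.5-flash"):
        self.model = model

    def to_dict(self) -> Dict[str, Any]:
        # Haystack components serialize to a dict carrying a type marker
        # and the parameters needed to re-create the instance.
        return {"type": "TinyComponent", "init_parameters": {"model": self.model}}

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "TinyComponent":
        return cls(**data["init_parameters"])


# Round trip: serialize, then rebuild an equivalent instance.
restored = TinyComponent.from_dict(TinyComponent(model="gemini-pro").to_dict())
print(restored.model)
```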

GoogleAIGeminiGenerator.run

@component.output_types(replies=List[Union[str, Dict[str, str]]])
def run(parts: Variadic[Union[str, ByteStream, Part]],
        streaming_callback: Optional[Callable[[StreamingChunk], None]] = None)

Generates text based on the given input parts.

Arguments:

  • parts: A heterogeneous list of strings, ByteStream or Part objects.
  • streaming_callback: A callback function that is called when a new token is received from the stream.

Returns:

A dictionary containing the following key:

  • replies: A list of strings or dictionaries with function calls.

Module haystack_integrations.components.generators.google_ai.chat.gemini

GoogleAIGeminiChatGenerator

Completes chats using multimodal Gemini models through Google AI Studio.

It uses the ChatMessage dataclass to interact with the model.

Usage example

from haystack.utils import Secret
from haystack.dataclasses.chat_message import ChatMessage
from haystack_integrations.components.generators.google_ai import GoogleAIGeminiChatGenerator


gemini_chat = GoogleAIGeminiChatGenerator(model="gemini-pro", api_key=Secret.from_token("<MY_API_KEY>"))

messages = [ChatMessage.from_user("What is the most interesting thing you know?")]
res = gemini_chat.run(messages=messages)
for reply in res["replies"]:
    print(reply.content)

messages += res["replies"] + [ChatMessage.from_user("Tell me more about it")]
res = gemini_chat.run(messages=messages)
for reply in res["replies"]:
    print(reply.content)

With function calling:

from haystack.utils import Secret
from haystack.dataclasses.chat_message import ChatMessage
from google.ai.generativelanguage import FunctionDeclaration, Tool

from haystack_integrations.components.generators.google_ai import GoogleAIGeminiChatGenerator

# Example function to get the current weather
def get_current_weather(location: str, unit: str = "celsius") -> str:
    # Call a weather API and return some text
    ...

# Define the function interface
get_current_weather_func = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ],
            },
        },
        "required": ["location"],
    },
)
tool = Tool([get_current_weather_func])

gemini_chat = GoogleAIGeminiChatGenerator(model="gemini-pro", api_key=Secret.from_token("<MY_API_KEY>"),
                                          tools=[tool])

messages = [ChatMessage.from_user("What is the temperature in celsius in Berlin?")]
res = gemini_chat.run(messages=messages)

weather = get_current_weather(**res["replies"][0].content)
messages += res["replies"] + [ChatMessage.from_function(content=weather, name="get_current_weather")]
res = gemini_chat.run(messages=messages)
for reply in res["replies"]:
    print(reply.content)
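In the example above, the function-call reply is unpacked by hand into get_current_weather. When several tools are declared, a small registry mapping the declared function name to the local callable keeps that dispatch generic. A sketch, with a stand-in weather function:

```python
from typing import Any, Callable, Dict


def get_current_weather(location: str, unit: str = "celsius") -> str:
    # Stand-in for a real weather API call.
    return f"21 degrees {unit} in {location}"


# Registry: declared function name -> local callable.
TOOLS: Dict[str, Callable[..., str]] = {"get_current_weather": get_current_weather}


def dispatch(name: str, args: Dict[str, Any]) -> str:
    """Route a function-call reply (name plus argument dict) to the
    matching local function, generalizing the manual call above."""
    return TOOLS[name](**args)


print(dispatch("get_current_weather", {"location": "Berlin"}))
```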

GoogleAIGeminiChatGenerator.__init__

def __init__(*,
             api_key: Secret = Secret.from_env_var("GOOGLE_API_KEY"),
             model: str = "gemini-1.5-flash",
             generation_config: Optional[Union[GenerationConfig,
                                               Dict[str, Any]]] = None,
             safety_settings: Optional[Dict[HarmCategory,
                                            HarmBlockThreshold]] = None,
             tools: Optional[List[Tool]] = None,
             streaming_callback: Optional[Callable[[StreamingChunk],
                                                   None]] = None)

Initializes a GoogleAIGeminiChatGenerator instance.

To get an API key, visit: https://makersuite.google.com

Arguments:

  • api_key: Google AI Studio API key.
  • model: Name of the model to use. For available models, see https://ai.google.dev/gemini-api/docs/models/gemini.
  • generation_config: The generation configuration to use. This can either be a GenerationConfig object or a dictionary of parameters. For available parameters, see the GenerationConfig API reference.
  • safety_settings: The safety settings to use. A dictionary with HarmCategory as keys and HarmBlockThreshold as values. For more information, see the API reference.
  • tools: A list of Tool objects that can be used for Function calling.
  • streaming_callback: A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument.
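The streaming_callback receives one StreamingChunk per streamed token. A minimal sketch of such a callback, using a stand-in Chunk dataclass in place of Haystack's StreamingChunk (which exposes a .content attribute) so the example is self-contained:

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Stand-in for haystack.dataclasses.StreamingChunk."""
    content: str


collected = []


def streaming_callback(chunk: Chunk) -> None:
    # Print tokens as they arrive and keep a transcript of the stream.
    print(chunk.content, end="")
    collected.append(chunk.content)


# Simulate a stream of three chunks arriving in order.
for piece in ["Hello", ", ", "world"]:
    streaming_callback(Chunk(content=piece))
```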

GoogleAIGeminiChatGenerator.to_dict

def to_dict() -> Dict[str, Any]

Serializes the component to a dictionary.

Returns:

Dictionary with serialized data.

GoogleAIGeminiChatGenerator.from_dict

@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "GoogleAIGeminiChatGenerator"

Deserializes the component from a dictionary.

Arguments:

  • data: Dictionary to deserialize from.

Returns:

Deserialized component.

GoogleAIGeminiChatGenerator.run

@component.output_types(replies=List[ChatMessage])
def run(messages: List[ChatMessage],
        streaming_callback: Optional[Callable[[StreamingChunk], None]] = None)

Generates text based on the provided messages.

Arguments:

  • messages: A list of ChatMessage instances, representing the input messages.
  • streaming_callback: A callback function that is called when a new token is received from the stream.

Returns:

A dictionary containing the following key:

  • replies: A list containing the generated responses as ChatMessage instances.