Google Vertex integration for Haystack
Module haystack_integrations.components.generators.google_vertex.gemini
VertexAIGeminiGenerator
VertexAIGeminiGenerator enables text generation using Google Gemini models.
Usage example:
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator
gemini = VertexAIGeminiGenerator()
result = gemini.run(parts = ["What is the most interesting thing you know?"])
for answer in result["replies"]:
print(answer)
>>> 1. **The Origin of Life:** How and where did life begin? The answers to this ...
>>> 2. **The Unseen Universe:** The vast majority of the universe is ...
>>> 3. **Quantum Entanglement:** This eerie phenomenon in quantum mechanics allows ...
>>> 4. **Time Dilation:** Einstein's theory of relativity revealed that time can ...
>>> 5. **The Fermi Paradox:** Despite the vastness of the universe and the ...
>>> 6. **Biological Evolution:** The idea that life evolves over time through natural ...
>>> 7. **Neuroplasticity:** The brain's ability to adapt and change throughout life, ...
>>> 8. **The Goldilocks Zone:** The concept of the habitable zone, or the Goldilocks zone, ...
>>> 9. **String Theory:** This theoretical framework in physics aims to unify all ...
>>> 10. **Consciousness:** The nature of human consciousness and how it arises ...
VertexAIGeminiGenerator.__init__
def __init__(*,
model: str = "gemini-1.5-flash",
project_id: Optional[str] = None,
location: Optional[str] = None,
generation_config: Optional[Union[GenerationConfig,
Dict[str, Any]]] = None,
safety_settings: Optional[Dict[HarmCategory,
HarmBlockThreshold]] = None,
tools: Optional[List[Tool]] = None,
tool_config: Optional[ToolConfig] = None,
system_instruction: Optional[Union[str, ByteStream, Part]] = None,
streaming_callback: Optional[Callable[[StreamingChunk],
None]] = None)
Multi-modal generator using Gemini models via Google Vertex AI.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use. For available models, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.
location
: The default location to use when making API calls. If not set, uses us-central1.
generation_config
: The generation config to use. Can either be a GenerationConfig object or a dictionary of parameters. Accepted fields are: temperature, top_p, top_k, candidate_count, max_output_tokens, stop_sequences.
safety_settings
: The safety settings to use. See the documentation for HarmBlockThreshold and HarmCategory for more details.
tools
: List of tools to use when generating content. See the documentation for Tool for the list of supported arguments.
tool_config
: The tool config to use. See the documentation for ToolConfig.
system_instruction
: Default system instruction to use for generating content.
streaming_callback
: A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument.
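A minimal construction sketch, assuming Application Default Credentials are already configured; the project ID and generation values below are placeholders:
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator

# Placeholder project ID and illustrative generation settings.
gemini = VertexAIGeminiGenerator(
    model="gemini-1.5-flash",
    project_id="my-gcp-project",
    location="us-central1",
    generation_config={
        "temperature": 0.7,
        "top_p": 0.95,
        "max_output_tokens": 512,
    },
    # Print tokens as they arrive from the stream.
    streaming_callback=lambda chunk: print(chunk.content, end="", flush=True),
)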
VertexAIGeminiGenerator.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAIGeminiGenerator.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAIGeminiGenerator"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
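For illustration, a typical serialization round trip might look like this (a sketch only; the exact dictionary layout is an implementation detail of the component):
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator

gemini = VertexAIGeminiGenerator(model="gemini-1.5-flash")

# Serialize, e.g. to persist a pipeline definition, then rebuild the component.
data = gemini.to_dict()
restored = VertexAIGeminiGenerator.from_dict(data)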
VertexAIGeminiGenerator.run
@component.output_types(replies=List[Union[str, Dict[str, str]]])
def run(parts: Variadic[Union[str, ByteStream, Part]],
streaming_callback: Optional[Callable[[StreamingChunk], None]] = None)
Generates content using the Gemini model.
Arguments:
parts
: Prompt for the model.
streaming_callback
: A callback function that is called when a new token is received from the stream.
Returns:
A dictionary with the following keys:
replies
: A list of generated content.
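Because parts accepts strings, ByteStream objects, and Part objects, a prompt can mix text and binary data. A sketch, assuming the image URL is a placeholder and that a ByteStream with mime_type set is accepted as an image part:
import requests
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator

gemini = VertexAIGeminiGenerator()

image = ByteStream(
    data=requests.get("https://example.com/robot.jpg").content,  # placeholder URL
    mime_type="image/jpeg",
)
result = gemini.run(parts=["Describe what you see in this image.", image])
for reply in result["replies"]:
    print(reply)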
Module haystack_integrations.components.generators.google_vertex.captioner
VertexAIImageCaptioner
VertexAIImageCaptioner enables image captioning using the Google Vertex AI imagetext generative model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Usage example:
import requests
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_vertex import VertexAIImageCaptioner
captioner = VertexAIImageCaptioner()
image = ByteStream(
data=requests.get(
"https://raw.githubusercontent.com/deepset-ai/haystack-core-integrations/main/integrations/google_vertex/example_assets/robot1.jpg"
).content
)
result = captioner.run(image=image)
for caption in result["captions"]:
print(caption)
>>> two gold robots are standing next to each other in the desert
VertexAIImageCaptioner.__init__
def __init__(*,
model: str = "imagetext",
project_id: Optional[str] = None,
location: Optional[str] = None,
**kwargs)
Generate image captions using a Google Vertex AI model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use.
location
: The default location to use when making API calls. If not set, uses us-central1.
kwargs
: Additional keyword arguments to pass to the model. For a list of supported arguments, see the ImageTextModel.get_captions() documentation.
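As a sketch, extra keyword arguments are forwarded to the underlying model call; number_of_results and language are assumptions here, so verify them against the ImageTextModel.get_captions() documentation before relying on them:
from haystack_integrations.components.generators.google_vertex import VertexAIImageCaptioner

# number_of_results and language are assumed get_captions() parameters.
captioner = VertexAIImageCaptioner(number_of_results=3, language="en")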
VertexAIImageCaptioner.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAIImageCaptioner.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAIImageCaptioner"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
VertexAIImageCaptioner.run
@component.output_types(captions=List[str])
def run(image: ByteStream)
Prompts the model to generate captions for the given image.
Arguments:
image
: The image to generate captions for.
Returns:
A dictionary with the following keys:
captions
: A list of captions generated by the model.
Module haystack_integrations.components.generators.google_vertex.code_generator
VertexAICodeGenerator
This component enables code generation using Google Vertex AI generative models.
VertexAICodeGenerator supports code-bison, code-bison-32k, and code-gecko.
Usage example:
from haystack_integrations.components.generators.google_vertex import VertexAICodeGenerator
generator = VertexAICodeGenerator()
result = generator.run(prefix="def to_json(data):")
for answer in result["replies"]:
print(answer)
>>> ```python
>>> import json
>>>
>>> def to_json(data):
>>> """Converts a Python object to a JSON string.
>>>
>>> Args:
>>> data: The Python object to convert.
>>>
>>> Returns:
>>> A JSON string representing the Python object.
>>> """
>>>
>>> return json.dumps(data)
>>> ```
VertexAICodeGenerator.__init__
def __init__(*,
model: str = "code-bison",
project_id: Optional[str] = None,
location: Optional[str] = None,
**kwargs)
Generate code using a Google Vertex AI model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use.
location
: The default location to use when making API calls. If not set, uses us-central1.
kwargs
: Additional keyword arguments to pass to the model. For a list of supported arguments, see the TextGenerationModel.predict() documentation.
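A construction sketch; temperature and max_output_tokens are assumed TextGenerationModel.predict() parameters, so check them against the Vertex AI SDK documentation:
from haystack_integrations.components.generators.google_vertex import VertexAICodeGenerator

# temperature and max_output_tokens are assumed predict() parameters.
generator = VertexAICodeGenerator(model="code-bison", temperature=0.2, max_output_tokens=256)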
VertexAICodeGenerator.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAICodeGenerator.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAICodeGenerator"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
VertexAICodeGenerator.run
@component.output_types(replies=List[str])
def run(prefix: str, suffix: Optional[str] = None)
Generate code using a Google Vertex AI model.
Arguments:
prefix
: Code before the current point.
suffix
: Code after the current point.
Returns:
A dictionary with the following keys:
replies
: A list of generated code snippets.
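Providing both prefix and suffix lets the model fill in the code between them. A sketch, assuming the code-completion model code-gecko honors the suffix:
from haystack_integrations.components.generators.google_vertex import VertexAICodeGenerator

generator = VertexAICodeGenerator(model="code-gecko")

# The generated code is meant to fit between the prefix and the suffix.
result = generator.run(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(1, 2))",
)
print(result["replies"][0])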
Module haystack_integrations.components.generators.google_vertex.image_generator
VertexAIImageGenerator
This component enables image generation using a Google Vertex AI generative model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Usage example:
from pathlib import Path
from haystack_integrations.components.generators.google_vertex import VertexAIImageGenerator
generator = VertexAIImageGenerator()
result = generator.run(prompt="Generate an image of a cute cat")
result["images"][0].to_file(Path("my_image.png"))
VertexAIImageGenerator.__init__
def __init__(*,
model: str = "imagegeneration",
project_id: Optional[str] = None,
location: Optional[str] = None,
**kwargs)
Generates images using a Google Vertex AI model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use.
location
: The default location to use when making API calls. If not set, uses us-central1.
kwargs
: Additional keyword arguments to pass to the model. For a list of supported arguments, see the ImageGenerationModel.generate_images() documentation.
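A construction sketch; number_of_images is an assumed ImageGenerationModel.generate_images() parameter, so verify it against the Vertex AI SDK documentation:
from haystack_integrations.components.generators.google_vertex import VertexAIImageGenerator

# number_of_images is an assumed generate_images() parameter.
generator = VertexAIImageGenerator(number_of_images=2)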
VertexAIImageGenerator.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAIImageGenerator.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAIImageGenerator"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
VertexAIImageGenerator.run
@component.output_types(images=List[ByteStream])
def run(prompt: str, negative_prompt: Optional[str] = None)
Produces images based on the given prompt.
Arguments:
prompt
: The prompt to generate images from.
negative_prompt
: A description of what you want to omit in the generated images.
Returns:
A dictionary with the following keys:
images
: A list of ByteStream objects, each containing an image.
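A sketch combining a prompt with a negative prompt and writing every returned image to disk (the file names are placeholders):
from pathlib import Path
from haystack_integrations.components.generators.google_vertex import VertexAIImageGenerator

generator = VertexAIImageGenerator()
result = generator.run(
    prompt="A watercolor painting of a lighthouse at sunset",
    negative_prompt="people, boats, text",  # elements to leave out
)
# Each image is returned as a ByteStream and can be written to a file.
for i, image in enumerate(result["images"]):
    image.to_file(Path(f"lighthouse_{i}.png"))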
Module haystack_integrations.components.generators.google_vertex.question_answering
VertexAIImageQA
This component enables visual question answering using Google Vertex AI generative models.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Usage example:
from haystack.dataclasses.byte_stream import ByteStream
from haystack_integrations.components.generators.google_vertex import VertexAIImageQA
qa = VertexAIImageQA()
image = ByteStream.from_file_path("dog.jpg")
res = qa.run(image=image, question="What color is this dog")
print(res["replies"][0])
>>> white
VertexAIImageQA.__init__
def __init__(*,
model: str = "imagetext",
project_id: Optional[str] = None,
location: Optional[str] = None,
**kwargs)
Answers questions about an image using a Google Vertex AI model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use.
location
: The default location to use when making API calls. If not set, uses us-central1.
kwargs
: Additional keyword arguments to pass to the model. For a list of supported arguments, see the ImageTextModel.ask_question() documentation.
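A construction sketch; number_of_results is an assumed ImageTextModel.ask_question() parameter, so verify it against the Vertex AI SDK documentation:
from haystack_integrations.components.generators.google_vertex import VertexAIImageQA

# number_of_results is an assumed ask_question() parameter.
qa = VertexAIImageQA(number_of_results=2)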
VertexAIImageQA.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAIImageQA.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAIImageQA"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
VertexAIImageQA.run
@component.output_types(replies=List[str])
def run(image: ByteStream, question: str)
Prompts the model to answer a question about an image.
Arguments:
image
: The image to ask the question about.
question
: The question to ask.
Returns:
A dictionary with the following keys:
replies
: A list of answers to the question.
Module haystack_integrations.components.generators.google_vertex.text_generator
VertexAITextGenerator
This component enables text generation using Google Vertex AI generative models.
VertexAITextGenerator supports text-bison, text-unicorn, and text-bison-32k models.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Usage example:
from haystack_integrations.components.generators.google_vertex import VertexAITextGenerator
generator = VertexAITextGenerator()
res = generator.run("Tell me a good interview question for a software engineer.")
print(res["replies"][0])
>>> **Question:**
>>> You are given a list of integers and a target sum.
>>> Find all unique combinations of numbers in the list that add up to the target sum.
>>>
>>> **Example:**
>>>
>>> ```
>>> Input: [1, 2, 3, 4, 5], target = 7
>>> Output: [[1, 2, 4], [3, 4]]
>>> ```
>>>
>>> **Follow-up:** What if the list contains duplicate numbers?
VertexAITextGenerator.__init__
def __init__(*,
model: str = "text-bison",
project_id: Optional[str] = None,
location: Optional[str] = None,
**kwargs)
Generate text using a Google Vertex AI model.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use.
location
: The default location to use when making API calls. If not set, uses us-central1.
kwargs
: Additional keyword arguments to pass to the model. For a list of supported arguments, see the TextGenerationModel.predict() documentation.
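A construction sketch; temperature, top_p, and max_output_tokens are assumed TextGenerationModel.predict() parameters, so check them against the Vertex AI SDK documentation:
from haystack_integrations.components.generators.google_vertex import VertexAITextGenerator

# temperature, top_p, and max_output_tokens are assumed predict() parameters.
generator = VertexAITextGenerator(model="text-bison", temperature=0.3, top_p=0.95, max_output_tokens=512)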
VertexAITextGenerator.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAITextGenerator.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAITextGenerator"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
VertexAITextGenerator.run
@component.output_types(replies=List[str],
safety_attributes=Dict[str, float],
citations=List[Dict[str, Any]])
def run(prompt: str)
Prompts the model to generate text.
Arguments:
prompt
: The prompt to use for text generation.
Returns:
A dictionary with the following keys:
replies
: A list of generated replies.
safety_attributes
: A dictionary with the safety scores of each answer.
citations
: A list of citations for each answer.
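The three output keys can be read directly from the returned dictionary; the printed values below are only illustrative:
from haystack_integrations.components.generators.google_vertex import VertexAITextGenerator

generator = VertexAITextGenerator()
res = generator.run("Tell me a good interview question for a software engineer.")

print(res["replies"][0])          # generated text
print(res["safety_attributes"])   # safety scores, e.g. {"Violent": 0.1, ...} (illustrative)
print(res["citations"])           # citations for the answer, if any were returned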
Module haystack_integrations.components.generators.google_vertex.chat.gemini
VertexAIGeminiChatGenerator
VertexAIGeminiChatGenerator enables chat completion using Google Gemini models.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Usage example:
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator
gemini_chat = VertexAIGeminiChatGenerator()
messages = [ChatMessage.from_user("Tell me the name of a movie")]
res = gemini_chat.run(messages)
print(res["replies"][0].content)
>>> The Shawshank Redemption
VertexAIGeminiChatGenerator.__init__
def __init__(*,
model: str = "gemini-1.5-flash",
project_id: Optional[str] = None,
location: Optional[str] = None,
generation_config: Optional[Union[GenerationConfig,
Dict[str, Any]]] = None,
safety_settings: Optional[Dict[HarmCategory,
HarmBlockThreshold]] = None,
tools: Optional[List[Tool]] = None,
tool_config: Optional[ToolConfig] = None,
system_instruction: Optional[Union[str, ByteStream, Part]] = None,
streaming_callback: Optional[Callable[[StreamingChunk],
None]] = None)
VertexAIGeminiChatGenerator enables chat completion using Google Gemini models.
Authenticates using Google Cloud Application Default Credentials (ADCs). For more information see the official Google documentation.
Arguments:
project_id
: ID of the GCP project to use. By default, it is set during Google Cloud authentication.
model
: Name of the model to use. For available models, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.
location
: The default location to use when making API calls. If not set, uses us-central1.
generation_config
: Configuration for the generation process. See the [GenerationConfig documentation](https://cloud.google.com/python/docs/reference/aiplatform/latest/vertexai.generative_models.GenerationConfig) for a list of supported arguments.
safety_settings
: Safety settings to use when generating content. See the documentation for HarmBlockThreshold and HarmCategory for more details.
tools
: List of tools to use when generating content. See the documentation for Tool for the list of supported arguments.
tool_config
: The tool config to use. See the documentation for [ToolConfig](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/python/latest/vertexai.generative_models.ToolConfig).
system_instruction
: Default system instruction to use for generating content.
streaming_callback
: A callback function that is called when a new token is received from the stream. The callback function accepts StreamingChunk as an argument.
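A construction sketch using a GenerationConfig object and a default system instruction; both values below are placeholders:
from vertexai.generative_models import GenerationConfig
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator

gemini_chat = VertexAIGeminiChatGenerator(
    model="gemini-1.5-flash",
    system_instruction="You are a concise assistant that answers in one sentence.",
    generation_config=GenerationConfig(temperature=0.7, max_output_tokens=256),
)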
VertexAIGeminiChatGenerator.to_dict
def to_dict() -> Dict[str, Any]
Serializes the component to a dictionary.
Returns:
Dictionary with serialized data.
VertexAIGeminiChatGenerator.from_dict
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "VertexAIGeminiChatGenerator"
Deserializes the component from a dictionary.
Arguments:
data
: Dictionary to deserialize from.
Returns:
Deserialized component.
VertexAIGeminiChatGenerator.run
@component.output_types(replies=List[ChatMessage])
def run(messages: List[ChatMessage],
streaming_callback: Optional[Callable[[StreamingChunk], None]] = None)
Prompts the Google Vertex AI Gemini model to generate a response to a list of messages.
Arguments:
messages
: The last message is the prompt; the rest are the conversation history.
streaming_callback
: A callback function that is called when a new token is received from the stream.
Returns:
A dictionary with the following keys:
replies
: A list of ChatMessage objects representing the model's replies.
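A multi-turn sketch: earlier messages form the conversation history and the final user message is the prompt.
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator

gemini_chat = VertexAIGeminiChatGenerator()

# The first two messages are history; the last user message is the prompt.
messages = [
    ChatMessage.from_user("Tell me the name of a movie"),
    ChatMessage.from_assistant("The Shawshank Redemption"),
    ChatMessage.from_user("Who directed it?"),
]
res = gemini_chat.run(messages)
print(res["replies"][0].content)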