API Reference

Uses a large language model to answer complex queries that require multiple steps to find the correct answer.

Module base

Tool

class Tool()

An Agent uses tools to find the best answer. A tool is a pipeline or a node. When you add a tool to an Agent, the Agent can invoke the underlying pipeline or node to answer questions.

You must provide a name and a description for each tool. The name should be short and should indicate what the tool can do. The description should explain what the tool is useful for. The Agent uses the description to decide when to use a tool, so the wording you use is important.

Arguments:

  • name: The name of the tool. The Agent uses this name to refer to the tool in the text the Agent generates. The name should be short, ideally one token, and a good description of what the tool can do, for example: "Calculator" or "Search". Use only letters (a-z, A-Z), digits (0-9) and underscores (_).
  • pipeline_or_node: The pipeline or node to run when the Agent invokes this tool.
  • description: A description of what the tool is useful for. The Agent uses this description to decide when to use which tool. For example, you can describe a tool for calculations as "useful for when you need to answer questions about math" (see the sketch below).
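For example, a minimal sketch of wrapping an existing pipeline as a tool (document_search_pipeline is a placeholder for any pipeline or node you already have):

from haystack.agents.base import Tool

# document_search_pipeline is a hypothetical pipeline you built earlier
search_tool = Tool(
    name="Search",
    pipeline_or_node=document_search_pipeline,
    description="Useful for when you need to answer questions about current events",
)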

ToolsManager

class ToolsManager()

The ToolsManager manages tools for an Agent.

ToolsManager.__init__

def __init__(
    tools: Optional[List[Tool]] = None,
    tool_pattern: str = r"Tool:\s*(\w+)\s*Tool Input:\s*(?:\"([\s\S]*?)\"|((?:.|\n)*))\s*")

Arguments:

  • tools: A list of tools to add to the ToolsManager. Each tool must have a unique name.
  • tool_pattern: A regular expression pattern that matches the text that the Agent generates to invoke a tool.
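A minimal sketch of creating a ToolsManager with the default tool_pattern (search_tool is the Tool from the sketch above):

from haystack.agents.base import ToolsManager

tools_manager = ToolsManager(tools=[search_tool])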

ToolsManager.get_tool_names

def get_tool_names() -> str

Returns a string with the names of all registered tools.

ToolsManager.get_tools

def get_tools() -> List[Tool]

Returns a list of all registered tool instances.

ToolsManager.get_tool_names_with_descriptions

def get_tool_names_with_descriptions() -> str

Returns a string with the names and descriptions of all registered tools.

ToolsManager.extract_tool_name_and_tool_input

def extract_tool_name_and_tool_input(
        llm_response: str) -> Tuple[Optional[str], Optional[str]]

Parse the tool name and the tool input from the PromptNode response.

Arguments:

  • llm_response: The PromptNode response.

Returns:

A tuple containing the tool name and the tool input.
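For example, a sketch of parsing a typical ReAct-style response with the default tool_pattern (tools_manager as above; the expected values assume the default pattern strips the surrounding quotes):

llm_response = 'Thought: I need to look this up.\nTool: Search\nTool Input: "Who won the 2022 World Cup?"'
tool_name, tool_input = tools_manager.extract_tool_name_and_tool_input(llm_response)
# expected: tool_name == "Search", tool_input == "Who won the 2022 World Cup?"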

Agent

class Agent()

An Agent answers queries using the tools you give to it. The tools are pipelines or nodes. The Agent uses a large language model (LLM) through the PromptNode you initialize it with. To answer a query, the Agent follows this sequence:

  1. It generates a thought based on the query.
  2. It decides which tool to use.
  3. It generates the input for the tool.
  4. Based on the output it gets from the tool, the Agent either stops if it now knows the answer, or repeats steps 1-3: generate a thought, choose a tool, and generate the tool's input.

Agents are useful for questions that contain multiple sub-questions which can be answered step by step (multi-hop QA), using multiple pipelines and nodes as tools.

Agent.__init__

def __init__(prompt_node: PromptNode,
             prompt_template: Optional[Union[str, PromptTemplate]] = None,
             tools_manager: Optional[ToolsManager] = None,
             memory: Optional[Memory] = None,
             prompt_parameters_resolver: Optional[Callable] = None,
             max_steps: int = 8,
             final_answer_pattern: str = r"Final Answer\s*:\s*(.*)",
             streaming: bool = True)

Creates an Agent instance.

Arguments:

  • prompt_node: The PromptNode that the Agent uses to decide which tool to use and what input to provide to it in each iteration.
  • prompt_template: A new PromptTemplate or the name of an existing PromptTemplate for the PromptNode. It's used for generating thoughts and choosing tools to answer queries step-by-step. If it's not set, the PromptNode's default template is used; if that's not set either, the Agent's default zero-shot-react template is used.
  • tools_manager: A ToolsManager instance that the Agent uses to run tools. Each tool must have a unique name. You can also add tools with add_tool() before running the Agent.
  • memory: A Memory instance that the Agent uses to store information between iterations.
  • prompt_parameters_resolver: A callable that takes query, agent, and agent_step as parameters and returns a dictionary of parameters to pass to the prompt_template. The default is a callable that returns a dictionary of keys and values needed for the ReAct agent prompt template.
  • max_steps: The number of times the Agent can run a tool, plus one step to let it infer it knows the final answer. Set it to at least 2 so that the Agent can run a tool once and then infer it knows the final answer. The default is 8.
  • final_answer_pattern: A regular expression to extract the final answer from the text the Agent generated.
  • streaming: Whether to use streaming or not. If True, the Agent will stream response tokens from the LLM. If False, the Agent will wait for the LLM to finish generating the response and then process it. The default is True.
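A minimal sketch of creating an Agent (tools_manager is the instance from the ToolsManager sketch above; the stop word is an optional, commonly used setting that keeps the LLM from generating a fake observation):

import os

from haystack.agents.base import Agent
from haystack.nodes import PromptNode

prompt_node = PromptNode(
    "gpt-3.5-turbo",
    api_key=os.environ.get("OPENAI_API_KEY"),
    stop_words=["Observation:"],  # optional: stop before the model invents tool output
)
agent = Agent(prompt_node=prompt_node, tools_manager=tools_manager, max_steps=8)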

Agent.update_hash

def update_hash()

Used for telemetry. Hashes the tool classnames to send an event only when they change. See haystack/telemetry.py::send_event

Agent.add_tool

def add_tool(tool: Tool)

Add a tool to the Agent. This also updates the PromptTemplate for the Agent's PromptNode with the tool name.

Arguments:

  • tool: The tool to add to the Agent. Any previously added tool with the same name is overwritten. Example:

agent.add_tool(
    Tool(
        name="Calculator",
        pipeline_or_node=calculator,
        description="Useful when you need to answer questions about math",
    )
)

Agent.has_tool

def has_tool(tool_name: str) -> bool

Check whether the Agent has a tool with the name you provide.

Arguments:

  • tool_name: The name of the tool you want to check for.

Agent.run

def run(query: str,
        max_steps: Optional[int] = None,
        params: Optional[dict] = None) -> Dict[str, Union[str, List[Answer]]]

Runs the Agent given a query and optional parameters to pass on to the tools used. The result is in the same format as a pipeline's result: a dictionary with a key answers containing a list of answers.

Arguments:

  • query: The search query
  • max_steps: The number of times the Agent can run a tool, plus one step to infer it knows the final answer. If you want to set it, make it at least 2 so that the Agent can run a tool once and then infer it knows the final answer.
  • params: A dictionary of parameters you want to pass to the tools that are pipelines. To pass a parameter to all nodes in those pipelines, use the format: {"top_k": 10}. To pass a parameter to targeted nodes in those pipelines, use the format: {"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}}. You can only pass parameters to tools that are pipelines, not to tools that are nodes. See the sketch below.
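For example, a sketch of passing a parameter to a targeted node inside a pipeline tool (assumes the pipeline behind the tool contains a node named "Retriever"):

result = agent.run(
    query="Who is the father of Arya Stark?",
    params={"Retriever": {"top_k": 3}},
)
print(result["answers"][0].answer)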

Agent.create_agent_step

def create_agent_step(max_steps: Optional[int] = None) -> AgentStep

Create an AgentStep object. Override this method to customize the AgentStep class used by the Agent.

Agent.prepare_data_for_memory

def prepare_data_for_memory(**kwargs) -> dict

Prepare data for saving to the Agent's memory. Override this method to customize the data saved to the memory.

Agent.check_prompt_template

def check_prompt_template(template_params: Dict[str, Any]) -> None

Verifies that the Agent's prompt template is adequately populated with the correct parameters provided by the prompt parameter resolver.

If template_params contains a parameter that is not specified in the prompt template, a warning is logged at DEBUG level. Sometimes the prompt parameter resolver may provide additional parameters that are not used by the prompt template. However, if the prompt parameter resolver provides a 'transcript' parameter that is not used in the prompt template, an error is logged.

Arguments:

  • template_params: The parameters provided by the prompt parameter resolver.

Module conversational

ConversationalAgent

class ConversationalAgent(Agent)

A ConversationalAgent is an extension of the Agent class that enables the use of tools with several default parameters. A ConversationalAgent can manage a set of tools and seamlessly integrate them into the conversation. If you don't provide any tools, the Agent is initialized as a basic chat application.

Here is an example of how you can create a chat application with tools:

import os

from haystack.agents.conversational import ConversationalAgent
from haystack.nodes import PromptNode
from haystack.agents.base import Tool

# Initialize a PromptNode and the desired tools
prompt_node = PromptNode("gpt-3.5-turbo", api_key=os.environ.get("OPENAI_API_KEY"), max_length=256)
tools = [Tool(name="ExampleTool", pipeline_or_node=example_tool_node, description="Useful for solving example tasks")]

# Create the ConversationalAgent instance
agent = ConversationalAgent(prompt_node, tools=tools)

# Use the agent in a chat application
while True:
    user_input = input("Human (type 'exit' or 'quit' to quit): ")
    if user_input.lower() in ("exit", "quit"):
        break
    # run() returns a dict with an "answers" key containing a list of Answer objects
    assistant_response = agent.run(user_input)
    print("Assistant:", assistant_response["answers"][0].answer)

If you don't want to have any tools in your chat app, you can create a ConversationalAgent only with a PromptNode:

import os

from haystack.agents.conversational import ConversationalAgent
from haystack.nodes import PromptNode

# Initialize a PromptNode
prompt_node = PromptNode("gpt-3.5-turbo", api_key=os.environ.get("OPENAI_API_KEY"), max_length=256)

# Create the ConversationalAgent instance
agent = ConversationalAgent(prompt_node)

If you're looking for more customization, check out Agent.

ConversationalAgent.__init__

def __init__(prompt_node: PromptNode,
             prompt_template: Optional[Union[str, PromptTemplate]] = None,
             tools: Optional[List[Tool]] = None,
             memory: Optional[Memory] = None,
             max_steps: Optional[int] = None)

Creates a new ConversationalAgent instance.

Arguments:

  • prompt_node: A PromptNode the Agent uses to decide which tool to use and what input to provide to it in each iteration. If no tools are added, the model specified in the PromptNode is used for chatting.
  • prompt_template: A new PromptTemplate or the name of an existing PromptTemplate for the PromptNode. It's used for keeping the chat history, generating thoughts, and choosing tools (if provided) to answer queries. It defaults to "conversational-agent" if there is at least one tool provided and "conversational-agent-without-tools" otherwise.
  • tools: A list of tools to use in the Agent. Each tool must have a unique name.
  • memory: A memory object for storing conversation history and other relevant data, defaults to ConversationMemory if no memory is provided.
  • max_steps: The number of times the Agent can run a tool, plus one step to let it infer it knows the final answer. It defaults to 5 if there is at least one tool provided and 2 otherwise.

ConversationalAgent.update_hash

def update_hash()

Used for telemetry. Hashes the tool classnames to send an event only when they change. See haystack/telemetry.py::send_event

ConversationalAgent.has_tool

def has_tool(tool_name: str) -> bool

Check whether the Agent has a tool with the name you provide.

Arguments:

  • tool_name: The name of the tool you want to check for.

ConversationalAgent.run

def run(query: str,
        max_steps: Optional[int] = None,
        params: Optional[dict] = None) -> Dict[str, Union[str, List[Answer]]]

Runs the Agent given a query and optional parameters to pass on to the tools used. The result is in the same format as a pipeline's result: a dictionary with a key answers containing a list of answers.

Arguments:

  • query: The search query
  • max_steps: The number of times the Agent can run a tool, plus one step to infer it knows the final answer. If you want to set it, make it at least 2 so that the Agent can run a tool once and then infer it knows the final answer.
  • params: A dictionary of parameters you want to pass to the tools that are pipelines. To pass a parameter to all nodes in those pipelines, use the format: {"top_k": 10}. To pass a parameter to targeted nodes in those pipelines, use the format: {"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}}. You can only pass parameters to tools that are pipelines, not to tools that are nodes.

ConversationalAgent.create_agent_step

def create_agent_step(max_steps: Optional[int] = None) -> AgentStep

Create an AgentStep object. Override this method to customize the AgentStep class used by the Agent.

ConversationalAgent.prepare_data_for_memory

def prepare_data_for_memory(**kwargs) -> dict

Prepare data for saving to the Agent's memory. Override this method to customize the data saved to the memory.

ConversationalAgent.check_prompt_template

def check_prompt_template(template_params: Dict[str, Any]) -> None

Verifies that the Agent's prompt template is adequately populated with the correct parameters provided by the prompt parameter resolver.

If template_params contains a parameter that is not specified in the prompt template, a warning is logged at DEBUG level. Sometimes the prompt parameter resolver may provide additional parameters that are not used by the prompt template. However, if the prompt parameter resolver provides a 'transcript' parameter that is not used in the prompt template, an error is logged.

Arguments:

  • template_params: The parameters provided by the prompt parameter resolver.

Module utils

react_parameter_resolver

def react_parameter_resolver(query: str, agent: "Agent", agent_step: AgentStep,
                             **kwargs) -> Dict[str, Any]

A parameter resolver for ReAct-based agents that returns the query, the tool names, the tool names with descriptions, and the transcript (internal monologue).

agent_without_tools_parameter_resolver

def agent_without_tools_parameter_resolver(query: str, agent: "Agent",
                                           **kwargs) -> Dict[str, Any]

A parameter resolver for simple chat agents without tools that returns the query and the history.

conversational_agent_parameter_resolver

def conversational_agent_parameter_resolver(query: str, agent: "Agent",
                                            agent_step: AgentStep,
                                            **kwargs) -> Dict[str, Any]

A parameter resolver for ReAct-based conversational agents that returns the query, the tool names, the tool names with descriptions, the history of the conversation, and the transcript (internal monologue).
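As a sketch, a custom resolver uses the same signature and returns whatever keys your prompt template expects. This one mirrors the documented outputs of react_parameter_resolver and assumes the Agent exposes its ToolsManager as agent.tm; pass it to the Agent with prompt_parameters_resolver=my_parameter_resolver:

from typing import Any, Dict

from haystack.agents.agent_step import AgentStep


def my_parameter_resolver(query: str, agent: "Agent", agent_step: AgentStep,
                          **kwargs) -> Dict[str, Any]:
    # Assumption: agent.tm is the Agent's ToolsManager instance
    return {
        "query": query,
        "tool_names": agent.tm.get_tool_names(),
        "tool_names_with_descriptions": agent.tm.get_tool_names_with_descriptions(),
        "transcript": agent_step.transcript,
    }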

Module agent_step

AgentStep

class AgentStep()

The AgentStep class represents a single step in the execution of an agent.

AgentStep.__init__

def __init__(current_step: int = 1,
             max_steps: int = 10,
             final_answer_pattern: Optional[str] = None,
             prompt_node_response: str = "",
             transcript: str = "")

Arguments:

  • current_step: The current step in the execution of the agent.
  • max_steps: The maximum number of steps the agent can execute.
  • final_answer_pattern: The regex pattern to extract the final answer from the PromptNode response. If no pattern is provided, the entire PromptNode response is considered the final answer.
  • prompt_node_response: The PromptNode response received.
  • transcript: The full Agent transcript, based on the Agent's initial prompt template and the text it generated during execution up to this step. The transcript is used to generate the next prompt.

AgentStep.create_next_step

def create_next_step(prompt_node_response: Any,
                     current_step: Optional[int] = None) -> AgentStep

Creates the next agent step based on the current step and the PromptNode response.

Arguments:

  • prompt_node_response: The PromptNode response received.
  • current_step: The current step in the execution of the agent.

AgentStep.final_answer

def final_answer(query: str) -> Dict[str, Any]

Formats an answer as a dict containing query and answers similar to the output of a Pipeline.

The returned dictionary also contains the full transcript, based on the Agent's initial prompt template and the text it generated during execution.

Arguments:

  • query: The search query

AgentStep.is_last

def is_last() -> bool

Check if this is the last step of the Agent.

Returns:

True if this is the last step of the Agent, False otherwise.

AgentStep.completed

def completed(observation: Optional[str]) -> None

Update the transcript with the observation.

Arguments:

  • observation: The observation received from the Agent environment.

AgentStep.__repr__

def __repr__() -> str

Return a string representation of the AgentStep object.

Returns:

A string that represents the AgentStep object.

AgentStep.parse_final_answer

def parse_final_answer() -> Optional[str]

Parse the final answer from the response of the prompt node.

This function searches the PromptNode's response for a match with the predefined final answer pattern. If a match is found, it's returned as the final answer after removing leading and trailing quotes and whitespace. If no match is found, it returns None.

Returns:

The final answer as a string if a match is found, otherwise None.
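For example, a sketch using the Agent's default final answer pattern:

from haystack.agents.agent_step import AgentStep

step = AgentStep(
    final_answer_pattern=r"Final Answer\s*:\s*(.*)",
    prompt_node_response="Thought: I now know the answer.\nFinal Answer: Paris",
)
print(step.parse_final_answer())  # expected: "Paris"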

Module memory/conversation_memory

ConversationMemory

class ConversationMemory(Memory)

A memory class that stores conversation history.

ConversationMemory.__init__

def __init__(input_key: str = "input", output_key: str = "output")

Initialize ConversationMemory with input and output keys.

Arguments:

  • input_key: The key to use for storing user input.
  • output_key: The key to use for storing model output.

ConversationMemory.load

def load(keys: Optional[List[str]] = None, **kwargs) -> str

Load conversation history as a formatted string.

Arguments:

  • keys: Optional list of keys (ignored in this implementation).
  • kwargs: Optional keyword arguments. Supports window_size, an integer specifying the number of most recent conversation snippets to load.

Returns:

A formatted string containing the conversation history.

ConversationMemory.save

def save(data: Dict[str, Any]) -> None

Save a conversation snippet to memory.

Arguments:

  • data: A dictionary containing the conversation snippet to save.

ConversationMemory.clear

def clear() -> None

Clear the conversation history.
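A minimal sketch of the save/load cycle with the default input and output keys (assuming the class is importable from haystack.agents.memory):

from haystack.agents.memory import ConversationMemory

memory = ConversationMemory()
memory.save({"input": "Hello", "output": "Hi! How can I help you?"})
print(memory.load(window_size=1))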

Module memory/conversation_summary_memory

ConversationSummaryMemory

class ConversationSummaryMemory(ConversationMemory)

A memory class that stores conversation history and periodically generates summaries.

ConversationSummaryMemory.__init__

def __init__(prompt_node: PromptNode,
             prompt_template: Optional[Union[str, PromptTemplate]] = None,
             input_key: str = "input",
             output_key: str = "output",
             summary_frequency: int = 3)

Initialize ConversationSummaryMemory with a PromptNode, an optional prompt_template, input and output keys, and a summary_frequency.

Arguments:

  • prompt_node: A PromptNode object for generating conversation summaries.
  • prompt_template: Optional prompt template as a string or PromptTemplate object.
  • input_key: The input key. The default is "input".
  • output_key: The output key. The default is "output".
  • summary_frequency: integer specifying how often to generate a summary (default is 3).

ConversationSummaryMemory.load

def load(keys: Optional[List[str]] = None, **kwargs) -> str

Load conversation history as a formatted string, including the latest summary.

Arguments:

  • keys: Optional list of keys (ignored in this implementation).
  • kwargs: Optional keyword arguments. Supports window_size, an integer specifying the number of most recent conversation snippets to load.

Returns:

A formatted string containing the conversation history with the latest summary.

ConversationSummaryMemory.load_recent_snippets

def load_recent_snippets(window_size: int = 1) -> str

Load the most recent conversation snippets as a formatted string.

Arguments:

  • window_size: integer specifying the number of most recent conversation snippets to load.

Returns:

A formatted string containing the most recent conversation snippets.

ConversationSummaryMemory.summarize

def summarize() -> str

Generate a summary of the conversation history and clear the history.

Returns:

A string containing the generated summary.

ConversationSummaryMemory.needs_summary

def needs_summary() -> bool

Determine if a new summary should be generated.

Returns:

True if a new summary should be generated, otherwise False.

ConversationSummaryMemory.unsummarized_snippets

def unsummarized_snippets() -> int

Returns how many conversation snippets have not been summarized.

Returns:

The number of conversation snippets that have not been summarized.

ConversationSummaryMemory.has_unsummarized_snippets

def has_unsummarized_snippets() -> bool

Returns True if there are any conversation snippets that have not been summarized.

Returns:

True if there are unsummarized snippets, otherwise False.

ConversationSummaryMemory.save

def save(data: Dict[str, Any]) -> None

Save a conversation snippet to memory and update the save count. Generates a summary if needed.

Arguments:

  • data: A dictionary containing the conversation snippet to save.

ConversationSummaryMemory.clear

def clear() -> None

Clear the conversation history and the summary.
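A sketch of plugging summary memory into a conversational agent (prompt_node as in the ConversationalAgent examples above; the import path is assumed to mirror ConversationMemory's):

from haystack.agents.conversational import ConversationalAgent
from haystack.agents.memory import ConversationSummaryMemory

summary_memory = ConversationSummaryMemory(prompt_node, summary_frequency=3)
agent = ConversationalAgent(prompt_node, memory=summary_memory)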