API Reference

Tool-using agents with provider-agnostic chat model support.

Module agent

Agent

A Haystack component that implements a tool-using agent with provider-agnostic chat model support.

The component processes messages and executes tools until an exit condition is met. An exit condition can be triggered either by a direct text response or by invoking a specific designated tool.

When you call an Agent without tools, it behaves like a ChatGenerator: it produces a single response and then exits.

Usage example

from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.tools.tool import Tool

# Illustrative Tool definitions; real Tools also require `parameters`
# (a JSON schema) and `function` (the callable to invoke).
tools = [Tool(name="calculator", description="..."), Tool(name="search", description="...")]

agent = Agent(
    chat_generator=OpenAIChatGenerator(),
    tools=tools,
    exit_conditions=["search"],
)

# Run the agent
result = agent.run(
    messages=[ChatMessage.from_user("Find information about Haystack")]
)

assert "messages" in result  # Contains conversation history

Agent.__init__

def __init__(*,
             chat_generator: ChatGenerator,
             tools: Optional[Union[List[Tool], Toolset]] = None,
             system_prompt: Optional[str] = None,
             exit_conditions: Optional[List[str]] = None,
             state_schema: Optional[Dict[str, Any]] = None,
             max_agent_steps: int = 100,
             raise_on_tool_invocation_failure: bool = False,
             streaming_callback: Optional[StreamingCallbackT] = None)

Initialize the agent component.

Arguments:

  • chat_generator: An instance of the chat generator that your agent should use. It must support tools.
  • tools: List of Tool objects or a Toolset that the agent can use.
  • system_prompt: System prompt for the agent.
  • exit_conditions: List of conditions that cause the agent to return. Can include "text" if the agent should return when it generates a message without tool calls, or tool names that cause the agent to return once that tool has been executed. Defaults to ["text"].
  • state_schema: The schema for the runtime state used by the tools.
  • max_agent_steps: Maximum number of steps the agent will run before stopping. Defaults to 100. If the agent exceeds this number of steps, it will stop and return the current state.
  • raise_on_tool_invocation_failure: Whether the agent should raise an exception when a tool invocation fails. If set to False, the exception is turned into a chat message and passed to the LLM.
  • streaming_callback: A callback that will be invoked when a response is streamed from the LLM.

Raises:

  • TypeError: If the chat_generator does not support the tools parameter in its run method.
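The exit-condition and step-limit behavior described above can be sketched as a minimal, framework-free loop. This is an illustration only: `call_llm` and `execute_tool` are hypothetical stand-ins (not Haystack APIs), and messages are simplified to plain dicts.

```python
from typing import Callable, Dict, List, Optional


def agent_loop(
    call_llm: Callable[[List[dict]], dict],    # hypothetical stand-in for the chat generator
    execute_tool: Callable[[str, dict], str],  # hypothetical stand-in for tool invocation
    messages: List[dict],
    exit_conditions: Optional[List[str]] = None,
    max_agent_steps: int = 100,
) -> List[dict]:
    """Run the LLM/tool loop until an exit condition or the step limit is hit."""
    exit_conditions = exit_conditions or ["text"]
    for _ in range(max_agent_steps):
        reply = call_llm(messages)
        messages.append(reply)
        tool_calls = reply.get("tool_calls", [])
        # Exit condition "text": the model answered without calling any tool.
        if not tool_calls and "text" in exit_conditions:
            return messages
        for call in tool_calls:
            result = execute_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": result})
        # Exit condition <tool name>: return once that tool has been executed.
        if any(call["name"] in exit_conditions for call in tool_calls):
            return messages
    # Step limit exceeded: stop and return the current state.
    return messages
```

Note how the default `["text"]` reproduces the documented behavior: without tool names in exit_conditions, the loop ends as soon as the model replies with plain text.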

Agent.warm_up

def warm_up() -> None

Warm up the Agent.

Agent.to_dict

def to_dict() -> Dict[str, Any]

Serialize the component to a dictionary.

Returns:

Dictionary with serialized data

Agent.from_dict

@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "Agent"

Deserialize the agent from a dictionary.

Arguments:

  • data: Dictionary to deserialize from

Returns:

Deserialized agent
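Together, to_dict and from_dict form a lossless round trip: from_dict(to_dict(agent)) reconstructs an equivalent agent. A simplified sketch of that contract is below; the type/init_parameters layout shown is an assumption about the serialized shape, not a guaranteed format, and the class is a toy, not Haystack's Agent.

```python
from typing import Any, Dict, Optional


class Component:
    """Toy component illustrating a to_dict/from_dict round trip."""

    def __init__(self, system_prompt: Optional[str] = None, max_agent_steps: int = 100):
        self.system_prompt = system_prompt
        self.max_agent_steps = max_agent_steps

    def to_dict(self) -> Dict[str, Any]:
        # Record the class path plus everything needed to re-run __init__.
        return {
            "type": f"{type(self).__module__}.{type(self).__name__}",
            "init_parameters": {
                "system_prompt": self.system_prompt,
                "max_agent_steps": self.max_agent_steps,
            },
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "Component":
        return cls(**data["init_parameters"])


agent = Component(system_prompt="You are helpful.", max_agent_steps=5)
clone = Component.from_dict(agent.to_dict())
```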

Agent.run

def run(messages: List[ChatMessage],
        streaming_callback: Optional[StreamingCallbackT] = None,
        **kwargs: Dict[str, Any]) -> Dict[str, Any]

Process messages and execute tools until the exit condition is met.

Arguments:

  • messages: List of chat messages to process
  • streaming_callback: A callback that will be invoked when a response is streamed from the LLM.
  • kwargs: Additional data to pass to the State schema used by the Agent. The keys must match the schema defined in the Agent's state_schema.

Returns:

Dictionary containing messages and outputs matching the defined output types
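How run-time kwargs are matched against a state schema can be sketched roughly as follows. This is a simplified illustration, not Haystack's implementation, and the `{key: {"type": ...}}` schema shape is an assumption made for the example.

```python
from typing import Any, Dict


def build_state(state_schema: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
    """Validate run-time kwargs against a {key: {"type": ...}} style schema."""
    state = {}
    for key, value in kwargs.items():
        if key not in state_schema:
            raise ValueError(f"Unknown state key: {key!r}")
        expected = state_schema[key]["type"]
        if not isinstance(value, expected):
            raise TypeError(f"{key!r} must be {expected.__name__}")
        state[key] = value
    return state


# Example: a schema with one list-valued slot that tools can read and write.
schema = {"documents": {"type": list}}
state = build_state(schema, documents=["doc1", "doc2"])
```

A kwarg whose key is absent from the schema is rejected, which mirrors the requirement above that keys must match the Agent's state_schema.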

Agent.run_async

async def run_async(messages: List[ChatMessage],
                    streaming_callback: Optional[StreamingCallbackT] = None,
                    **kwargs: Dict[str, Any]) -> Dict[str, Any]

Asynchronously process messages and execute tools until the exit condition is met.

This is the asynchronous version of the run method. It follows the same logic but uses asynchronous operations where possible, such as calling the run_async method of the ChatGenerator if available.

Arguments:

  • messages: List of chat messages to process
  • streaming_callback: A callback that will be invoked when a response is streamed from the LLM.
  • kwargs: Additional data to pass to the State schema used by the Agent. The keys must match the schema defined in the Agent's state_schema.

Returns:

Dictionary containing messages and outputs matching the defined output types
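The "use run_async if available" dispatch mentioned above can be illustrated with a small asyncio sketch. SyncOnlyGenerator is a made-up stand-in, not a Haystack class; the fallback uses a thread so the sync run() never blocks the event loop.

```python
import asyncio
from typing import Any, List


async def call_generator(generator: Any, messages: List[str]) -> Any:
    """Prefer the generator's native run_async; fall back to run() in a thread."""
    if hasattr(generator, "run_async"):
        return await generator.run_async(messages=messages)
    return await asyncio.to_thread(generator.run, messages=messages)


class SyncOnlyGenerator:
    """Stand-in generator that only implements the synchronous run()."""

    def run(self, messages: List[str]) -> dict:
        return {"replies": [f"echo: {messages[-1]}"]}


result = asyncio.run(call_generator(SyncOnlyGenerator(), ["hello"]))
```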