
Agent

The Agent component is a tool-using agent that interacts with chat-based LLMs and tools to solve complex queries iteratively. It can execute external tools, manage state across multiple LLM calls, and stop execution based on configurable exit_conditions.

  • Most common position in a pipeline: after a ChatPromptBuilder or user input
  • Mandatory init variables: chat_generator (an instance of a Chat Generator that supports tools)
  • Mandatory run variables: messages (a list of ChatMessages)
  • Output variables: messages (chat history with tool and model responses)
  • API reference: Agents
  • GitHub link: https://github.com/deepset-ai/haystack/blob/main/haystack/components/agents/agent.py

Overview

The Agent component is a loop-based system that uses a chat-based large language model (LLM) and external tools to solve complex user queries. It works iteratively—calling tools, updating state, and generating prompts—until one of the configurable exit_conditions is met.

It can:

  • Dynamically select tools based on user input,
  • Maintain and validate runtime state using a schema,
  • Stream token-level outputs from the LLM.

The Agent returns a dictionary containing:

  • messages: the full conversation history,
  • Additional dynamic keys based on state_schema.
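Conceptually, the loop works roughly like this. The sketch below is a simplified, hypothetical illustration in plain Python, not Haystack's actual implementation; names like fake_llm and the dictionary shapes are stand-ins:

```python
# Hypothetical sketch of an agent loop; fake_llm and the dict shapes
# are illustrative stand-ins, not Haystack APIs.
def run_agent(messages, tools, exit_conditions, state, fake_llm, max_steps=10):
    for _ in range(max_steps):
        reply = fake_llm(messages, state)      # {"text": ...} or {"tool": ..., "args": ...}
        messages.append(reply)
        if "tool" not in reply:                # the LLM replied with plain text
            if "text" in exit_conditions:
                break
            continue
        tool = tools[reply["tool"]]
        result = tool(**reply["args"])         # execute the selected tool
        state.update(result)                   # write tool output into shared state
        messages.append({"tool_result": result})
        if reply["tool"] in exit_conditions:   # a tool name can also end the loop
            break
    return {"messages": messages, **state}
```

The returned dictionary mirrors the Agent's output shape: the conversation history under messages plus any state keys the tools produced.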

Parameters

To initialize the Agent component, you need to provide it with an instance of a Chat Generator that supports tools.

You can additionally configure:

  • A system_prompt for your Agent,
  • A list of exit_conditions strings that cause the agent to stop and return. Each entry can be either:
    • "text", meaning the Agent exits as soon as the LLM replies with a plain text response instead of a tool call,
    • or the name of a specific tool, meaning the Agent exits after that tool has been called.
  • A state_schema for a single Agent invocation. It defines extra information, such as documents or context, that tools can read from or write to during execution. You can use this schema to pass parameters that tools can both produce and consume.
  • A streaming_callback to stream tokens from the LLM as they are generated.
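A streaming_callback is simply a callable that receives chunks as the LLM generates them. The sketch below illustrates the idea with a stand-in chunk type; Haystack's actual streaming chunk class exposes a content string, but the Chunk dataclass here is an assumption for illustration:

```python
from dataclasses import dataclass

# Stand-in for Haystack's streaming chunk, which exposes a `content` string.
@dataclass
class Chunk:
    content: str

collected = []

def my_streaming_callback(chunk: Chunk) -> None:
    # Print tokens as they arrive and keep a copy for later inspection.
    print(chunk.content, end="")
    collected.append(chunk.content)

# In practice you would pass this as Agent(..., streaming_callback=my_streaming_callback).
for token in ["Hel", "lo", "!"]:
    my_streaming_callback(Chunk(content=token))
```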

📘 For a complete list of available parameters, refer to the Agents API Documentation.

Usage

On its own

from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.tools.tool import Tool

# Tool function
def calculate(expression: str) -> dict:
    # Note: eval is used here for brevity; avoid it with untrusted input.
    try:
        result = eval(expression, {"__builtins__": {}})
        return {"result": result}
    except Exception as e:
        return {"error": str(e)}

# Tool Definition
calculator_tool = Tool(
    name="calculator",
    description="Evaluate basic math expressions.",
    parameters={
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "Math expression to evaluate"}
        },
        "required": ["expression"]
    },
    function=calculate,
    outputs_to_state={"calc_result": {"source": "result"}}
)

# Agent Setup
agent = Agent(
    chat_generator=OpenAIChatGenerator(),
    tools=[calculator_tool],
    exit_conditions=["calculator"],
    state_schema={
        "calc_result": {"type": int},
    }
)

# Run the Agent
agent.warm_up()
response = agent.run(messages=[ChatMessage.from_user("What is 7 * (4 + 2)?")])

# Output
print(response["messages"])
print("Calc Result:", response.get("calc_result"))
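The outputs_to_state mapping on the tool above copies the "result" field of the tool's return value into the agent state under the key "calc_result", which is why response.get("calc_result") works. Conceptually, the routing amounts to the following simplified stdlib sketch (not Haystack's code):

```python
# Simplified illustration of how an outputs_to_state mapping routes
# tool output into the shared agent state.
def apply_outputs_to_state(tool_output: dict, outputs_to_state: dict, state: dict) -> dict:
    for state_key, spec in outputs_to_state.items():
        source = spec.get("source")
        # Copy the named field of the tool output, or the whole output if no source is set.
        state[state_key] = tool_output[source] if source else tool_output
    return state

state = apply_outputs_to_state(
    {"result": 42},
    {"calc_result": {"source": "result"}},
    {},
)
print(state)  # {'calc_result': 42}
```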

In a pipeline

The example pipeline below creates a research assistant using OpenAIChatGenerator and a SerperDevWebSearch tool. It searches the web, fetches and converts the page content, and builds a prompt from the results. The Agent uses this information to write a clear, formatted answer with source links.

from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.builders.answer_builder import AnswerBuilder
from haystack.components.builders.chat_prompt_builder import ChatPromptBuilder
from haystack.components.converters.html import HTMLToDocument
from haystack.components.fetchers.link_content import LinkContentFetcher
from haystack.components.websearch.serper_dev import SerperDevWebSearch
from haystack.dataclasses import ChatMessage
from haystack.core.pipeline import Pipeline
from haystack.tools.tool import Tool
from haystack.core.super_component import SuperComponent
from haystack.tools import ComponentTool

# Build the nested search tool pipeline using haystack.core.pipeline.Pipeline
search_component = Pipeline()

search_component.add_component("search", SerperDevWebSearch(top_k=10))
search_component.add_component("fetcher", LinkContentFetcher(timeout=3, raise_on_failure=False, retry_attempts=2))
search_component.add_component("converter", HTMLToDocument())
search_component.add_component("builder", ChatPromptBuilder(
    template=[ChatMessage.from_user("""
{% for doc in docs %}
<search-result url="{{ doc.meta.url }}">
{{ doc.content|default|truncate(25000) }}
</search-result>
{% endfor %}
""")],
    variables=["docs"],
    required_variables=["docs"]
))

search_component.connect("search.links", "fetcher.urls")
search_component.connect("fetcher.streams", "converter.sources")
search_component.connect("converter.documents", "builder.docs")

# Wrap in SuperComponent + ComponentTool
super_tool_component = SuperComponent(pipeline=search_component)
search_tool = ComponentTool(
    name="search",
    description="Use this tool to search for information on the internet.",
    component=super_tool_component
)

# Build the Chat Generator
chat_generator = OpenAIChatGenerator()

# Create the Agent
agent = Agent(
    chat_generator=chat_generator,
    tools=[search_tool],
    system_prompt="""
You are a deep research assistant.
You create comprehensive research reports to answer the user's questions.
You use the 'search' tool to answer any questions.
You perform multiple searches until you have the information you need to answer the question.
Make sure you research different aspects of the question.
Use markdown to format your response.
When you use information from the websearch results, cite your sources using markdown links.
It is important that you cite accurately.
""",
    exit_conditions=["text"],
    max_agent_steps=100,
    raise_on_tool_invocation_failure=False
)

agent.warm_up()

# Answer builder
answer_builder = AnswerBuilder()

# Simulate input
query = "What are the latest updates on the Artemis moon mission?"
messages = [ChatMessage.from_user(query)]

# Run agent
agent_output = agent.run(messages=messages)

# Filter replies with valid 'text' only
valid_replies = [msg for msg in agent_output["messages"] if getattr(msg, "text", None)]

answers = answer_builder.run(query=query, replies=valid_replies)

# Print the result
for answer in answers["answers"]:
    print(answer)

Additional References

🧑‍🍳 Cookbook: Build a GitHub Issue Resolver Agent

📓 Tutorial: Build a Tool-Calling Agent