Tool
Tool is a data class representing a function that Language Models can prepare a call for.
A growing number of Language Models now support passing tool definitions alongside the prompt.
Tool calling refers to the ability of Language Models to generate calls to tools - be they functions or APIs - when responding to user queries. The model prepares the tool call but does not execute it.
If you are looking for the details of this data class's methods and parameters, visit our API documentation.
Tool class
Tool is a simple and unified abstraction to represent tools in the Haystack framework.
A tool is a function for which Language Models can prepare a call.
The Tool class is used in Chat Generators and provides a consistent experience across models. Tool is also used in the ToolInvoker component that executes calls prepared by Language Models.
@dataclass
class Tool:
    name: str
    description: str
    parameters: dict[str, Any]
    function: Callable
    outputs_to_string: dict[str, Any] | None = None
    inputs_from_state: dict[str, str] | None = None
    outputs_to_state: dict[str, dict[str, Any]] | None = None
- name is the name of the Tool.
- description is a string describing what the Tool does.
- parameters is a JSON schema describing the expected parameters.
- function is invoked when the Tool is called.
- outputs_to_string (optional) controls how parts of the tool's output are converted into one or more strings (e.g., for LLM consumption).
- inputs_from_state (optional) maps values from the agent state to the tool's input parameters (e.g., to share info between tools).
- outputs_to_state (optional) specifies how tool outputs are written back into the agent state, with optional handlers.
Keep in mind that the accurate definitions of name and description are important for the Language Model to prepare the call correctly.
Tool exposes a tool_spec property, returning the tool specification to be used by Language Models.
It also has an invoke method that executes the underlying function with the provided parameters.
Tool Initialization
There are three ways to create a Tool:
- @tool decorator — recommended for most cases; infers name, description, and schema from the function.
- create_tool_from_function — same as @tool but called as a function; useful when you can't decorate directly.
- Manual initialization — construct Tool(...) directly when you need full control over the JSON schema.
For most use cases, we recommend @tool or create_tool_from_function. Both automatically generate the parameters JSON schema from your function’s type hints and Annotated parameter descriptions, so you don’t need to write the schema by hand.
@tool decorator
The @tool decorator converts a function into a Tool. It infers the name, description, and parameters from the function and automatically generates a JSON schema. Use typing.Annotated to add descriptions to individual parameters. When called without arguments (@tool), defaults are inferred from the function. When called with arguments (@tool(name=..., outputs_to_state=...)), you can customize any of the Tool fields.
from typing import Annotated, Literal
from haystack.tools import tool
@tool
def get_weather(
    city: Annotated[str, "the city for which to get the weather"] = "Munich",
    unit: Annotated[
        Literal["Celsius", "Fahrenheit"],
        "the unit for the temperature",
    ] = "Celsius",
):
    """A simple function to get the current weather for a location."""
    return f"Weather report for {city}: 20 {unit}, sunny"
print(get_weather)
Tool(
    name='get_weather',
    description='A simple function to get the current weather for a location.',
    parameters={
        'type': 'object',
        'properties': {
            'city': {'type': 'string', 'description': 'the city for which to get the weather', 'default': 'Munich'},
            'unit': {
                'type': 'string',
                'enum': ['Celsius', 'Fahrenheit'],
                'description': 'the unit for the temperature',
                'default': 'Celsius',
            },
        },
    },
    function=<function get_weather at 0x7f7b3a8a9b80>,
)
create_tool_from_function
create_tool_from_function is the functional equivalent of @tool — useful when you’re working with a function you can’t decorate directly (e.g. a method from a library). It accepts the same optional parameters as @tool and generates the JSON schema in the same way.
from typing import Annotated, Literal
from haystack.tools import create_tool_from_function
def get_weather(
    city: Annotated[str, "the city for which to get the weather"] = "Munich",
    unit: Annotated[
        Literal["Celsius", "Fahrenheit"],
        "the unit for the temperature",
    ] = "Celsius",
):
    """A simple function to get the current weather for a location."""
    return f"Weather report for {city}: 20 {unit}, sunny"
tool = create_tool_from_function(get_weather)
print(tool)
Tool(
    name='get_weather',
    description='A simple function to get the current weather for a location.',
    parameters={
        'type': 'object',
        'properties': {
            'city': {'type': 'string', 'description': 'the city for which to get the weather', 'default': 'Munich'},
            'unit': {
                'type': 'string',
                'enum': ['Celsius', 'Fahrenheit'],
                'description': 'the unit for the temperature',
                'default': 'Celsius',
            },
        },
    },
    function=<function get_weather at 0x7f7b3a8a9b80>,
)
Manual Initialization
Use this approach when you need full control over the JSON schema — for example, when the function signature alone isn’t enough to express the parameter constraints.
from haystack.tools import Tool
def add(a: int, b: int) -> int:
    return a + b

parameters = {
    "type": "object",
    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    "required": ["a", "b"],
}

add_tool = Tool(
    name="addition_tool",
    description="This tool adds two numbers",
    parameters=parameters,
    function=add,
)
print(add_tool.tool_spec)
print(add_tool.invoke(a=15, b=10))
{
    'name': 'addition_tool',
    'description': 'This tool adds two numbers',
    'parameters': {
        'type': 'object',
        'properties': {'a': {'type': 'integer'}, 'b': {'type': 'integer'}},
        'required': ['a', 'b']
    }
}
25
Advanced Tool Configuration
outputs_to_string and outputs_to_state let you control how a tool’s outputs are surfaced to the LLM and stored in the agent state.
Use them to format structured outputs for the LLM while keeping raw data available for later steps.
from haystack.tools import Tool
def format_documents(documents):
    return "\n".join(f"{i+1}. Document: {doc.content}" for i, doc in enumerate(documents))

def format_summary(metadata):
    return f"Found {metadata['count']} results"
tool = Tool(
    name="search",
    description="Search for documents",
    parameters={...},
    function=search_func,  # Returns {"documents": [Document(...)], "metadata": {"count": 5}, "debug_info": {...}}
    outputs_to_string={
        "formatted_docs": {"source": "documents", "handler": format_documents},
        "summary": {"source": "metadata", "handler": format_summary},
    },
    outputs_to_state={"documents": {"source": "documents"}},  # Save Documents into Agent's state
)
# After the tool invocation, the tool result includes:
# {
# "formatted_docs": "1. Document Title\n Content...\n2. ...",
# "summary": "Found 5 results"
# }
After invocation, only the configured string outputs are returned to the LLM, while the fields selected through outputs_to_state (like documents) are saved in the agent state.
Shaping Tool outputs with outputs_to_string
By default, a tool's return value is converted to a string using a default handler before being sent to the Language Model.
You can use outputs_to_string to customize this behavior using one of two formats:
- Single output format: use source, handler, and/or raw_result at the root level.

  ```python
  {"source": "docs", "handler": format_documents, "raw_result": False}
  ```

  - source: (Optional) The key to extract from the tool's output dictionary. If omitted, the entire result is passed to the handler.
  - handler: (Optional) A function that takes the output (or the extracted source value) and returns the final result.
  - raw_result: (Optional) If True, the result is returned as is without further string conversion, but applying the handler if provided. This is intended for multimodal tools returning images. In this mode, the tool or handler should return a list of TextContent and ImageContent objects for compatibility with Chat Generators.

- Multiple output format: map custom keys to individual configurations.

  ```python
  {
      "formatted_docs": {"source": "docs", "handler": format_documents},
      "summary": {"source": "summary_text", "handler": str.upper},
  }
  ```

  Each entry defines a source key and can optionally include a handler. The individual outputs are processed, collected into a dictionary, and then converted into a single string (usually a JSON-like representation) for the LLM.

  Note: raw_result is not supported in the multiple output format.
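Conceptually, the single output format works like this. The sketch below is plain Python that mimics what ToolInvoker does with source and handler; the search function and its dictionary keys are made up for illustration:

```python
# Hypothetical tool output: a dict with a "docs" key and extra metadata.
def search(query: str) -> dict:
    return {"docs": ["First result", "Second result"], "hits": 2}

def format_documents(docs: list) -> str:
    # Number each document on its own line for LLM consumption.
    return "\n".join(f"{i + 1}. {doc}" for i, doc in enumerate(docs))

outputs_to_string = {"source": "docs", "handler": format_documents}

# What ToolInvoker does conceptually: extract the "source" key from the
# tool result, then pass it through the handler to get the LLM-facing string.
result = search("haystack")
string_for_llm = outputs_to_string["handler"](result[outputs_to_string["source"]])
print(string_for_llm)
# 1. First result
# 2. Second result
```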
The example below shows how to use outputs_to_string with raw_result: True to return images:
from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIResponsesChatGenerator
from haystack.dataclasses import ChatMessage, ImageContent, TextContent
from haystack.tools import create_tool_from_function
def retrieve_image():
    """Tool to retrieve an image"""
    return [
        TextContent("Here is the retrieved image."),
        ImageContent.from_file_path("test/test_files/images/apple.jpg"),
    ]

image_retriever_tool = create_tool_from_function(
    function=retrieve_image,
    outputs_to_string={"raw_result": True},
)

agent = Agent(
    chat_generator=OpenAIResponsesChatGenerator(model="gpt-5.4-nano"),
    system_prompt="You are an Agent that can retrieve images and describe them.",
    tools=[image_retriever_tool],
)

user_message = ChatMessage.from_user(
    "Retrieve the image and describe it in max 10 words.",
)
result = agent.run(messages=[user_message])
print(result["last_message"].text)
# Red apple with stem resting on straw.
Toolset
A Toolset groups multiple Tool instances into a single manageable unit.
It simplifies the passing of tools to components like Chat Generators or ToolInvoker, and supports filtering, serialization, and reuse.
from haystack.tools import Toolset
math_toolset = Toolset([add_tool, subtract_tool])
See more details and examples on the Toolset documentation page.
Usage
To better understand this section, make sure you are also familiar with Haystack’s ChatMessage data class.
The recommended way to use tools in Haystack is through the Agent component, which manages the full tool call loop automatically. The sections below also show how to wire ChatGenerator and ToolInvoker together manually for cases where you need fine-grained control over the loop.
Passing Tools to Agent
The Agent component is the easiest way to use tools. It internally combines a Chat Generator and a ToolInvoker, runs the tool call loop for you, and exposes the final response and any state written by tools.
from typing import Annotated
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.tools import tool
from haystack.components.agents import Agent
@tool(outputs_to_state={"calc_result": {"source": "result"}})
def calculator(expression: Annotated[str, "math expression to evaluate"]) -> dict:
    """Evaluate a basic math expression."""
    try:
        result = eval(expression, {"__builtins__": {}})
        return {"result": result}
    except Exception as e:
        return {"error": str(e)}

agent = Agent(
    system_prompt="You are a helpful assistant that can perform calculations using the calculator tool.",
    chat_generator=OpenAIChatGenerator(),
    tools=[calculator],
    state_schema={"calc_result": {"type": int}},
)
response = agent.run(messages=[ChatMessage.from_user("What is 7 * (4 + 2)?")])
print(response["messages"])
print("Calc Result:", response.get("calc_result"))
Manual Tool Calling with ChatGenerator and ToolInvoker
The following sections show the lower-level approach of driving tool calls yourself with ChatGenerator and ToolInvoker. This is useful when you need precise control over the loop — for example, to add custom logic between steps — but for most use cases the Agent component above is simpler.
Passing Tools to a Chat Generator
Using the tools parameter, you can pass tools as a list of Tool instances or a single Toolset during initialization or in the run method. Tools passed at runtime override those set at initialization.
Not all Chat Generators currently support tools, but we are actively expanding tool support across more models.
Look out for the tools parameter in a specific Chat Generator’s __init__ and run methods.
from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
# Initialize the Chat Generator with the addition tool
chat_generator = OpenAIChatGenerator(model="gpt-5.4-nano", tools=[add_tool])
# here we expect the Tool to be invoked
res = chat_generator.run([ChatMessage.from_user("10 + 238")])
print(res)
# here the model can respond without using the Tool
res = chat_generator.run([ChatMessage.from_user("What is the habitat of a lion?")])
print(res)
{'replies': [ChatMessage(
    _role=<ChatRole.ASSISTANT: 'assistant'>,
    _content=[ToolCall(tool_name='addition_tool', arguments={'a': 10, 'b': 238}, id='call_rbYtbCdW0UbWMfy2x0sgF1Ap')],
    _meta={...}
)]}

{'replies': [ChatMessage(
    _role=<ChatRole.ASSISTANT: 'assistant'>,
    _content=[TextContent(text='Lions primarily inhabit grasslands, savannas, and open woodlands. ...')],
    _meta={...}
)]}
The same result as in the previous run can be achieved by passing tools at runtime:
# Initialize the Chat Generator without tools
chat_generator = OpenAIChatGenerator(model="gpt-5.4-nano")
# pass tools in the run method
res_w_tool_call = chat_generator.run(
    [ChatMessage.from_user("10 + 238")],
    tools=math_toolset,
)
print(res_w_tool_call)
Executing Tool Calls
To execute prepared tool calls, you can use the ToolInvoker component.
This component acts as the execution engine for tools, processing the calls prepared by the Language Model.
Here’s an example:
import random
from typing import Annotated

from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.tools import ToolInvoker
from haystack.dataclasses import ChatMessage
from haystack.tools import tool
@tool
def weather(location: Annotated[str, "the city to get weather for"]) -> dict:
    """Get the current weather for a location."""
    return {
        "temp": f"{random.randint(-10, 40)} °C",
        "humidity": f"{random.randint(0, 100)}%",
    }
# Initialize the Chat Generator with the weather tool
chat_generator = OpenAIChatGenerator(model="gpt-5.4-nano", tools=[weather])
# Initialize the Tool Invoker with the weather tool
tool_invoker = ToolInvoker(tools=[weather])
user_message = ChatMessage.from_user("What is the weather in Berlin?")
replies = chat_generator.run(messages=[user_message])["replies"]
print(f"assistant messages: {replies}")
# If the assistant message contains a tool call, run the tool invoker
if replies[0].tool_calls:
    tool_messages = tool_invoker.run(messages=replies)["tool_messages"]
    print(f"tool messages: {tool_messages}")
assistant messages: [ChatMessage(
    _role=<ChatRole.ASSISTANT: 'assistant'>,
    _content=[ToolCall(tool_name='weather', arguments={'location': 'Berlin'}, id='call_YEvCEAmlvc42JGXV84NU8wtV')],
    _meta={'model': 'gpt-5.4-nano', 'index': 0, 'finish_reason': 'tool_calls', 'usage': {'completion_tokens': 13, 'prompt_tokens': 50, 'total_tokens': 63}}
)]

tool messages: [ChatMessage(
    _role=<ChatRole.TOOL: 'tool'>,
    _content=[ToolCallResult(result="{'temp': '22 °C', 'humidity': '35%'}", origin=ToolCall(tool_name='weather', arguments={'location': 'Berlin'}, id='call_YEvCEAmlvc42JGXV84NU8wtV'), error=False)],
    _meta={}
)]
Processing Tool Results with the Chat Generator
In some cases, the raw output from a tool may not be immediately suitable for the end user.
You can refine the tool’s response by passing it back to the Chat Generator. This generates a user-friendly and conversational message.
Building on the previous example, we extend the if block to send all messages back to the Chat Generator:
# ... same setup as above (weather tool, chat_generator, tool_invoker)
user_message = ChatMessage.from_user("What is the weather in Berlin?")
replies = chat_generator.run(messages=[user_message])["replies"]
print(f"assistant messages: {replies}")
if replies[0].tool_calls:
    tool_messages = tool_invoker.run(messages=replies)["tool_messages"]
    print(f"tool messages: {tool_messages}")

    # pass all messages back to the Chat Generator for a final natural-language response
    messages = [user_message] + replies + tool_messages
    final_replies = chat_generator.run(messages=messages)["replies"]
    print(f"final assistant messages: {final_replies}")
assistant messages: [ChatMessage(
    _role=<ChatRole.ASSISTANT: 'assistant'>,
    _content=[ToolCall(tool_name='weather', arguments={'location': 'Berlin'}, id='call_jHX0RCDHRKX7h8V9RrNs6apy')],
    _meta={'model': 'gpt-5.4-nano', 'index': 0, 'finish_reason': 'tool_calls', 'usage': {'completion_tokens': 13, 'prompt_tokens': 50, 'total_tokens': 63}}
)]

tool messages: [ChatMessage(
    _role=<ChatRole.TOOL: 'tool'>,
    _content=[ToolCallResult(result="{'temp': '2 °C', 'humidity': '15%'}", origin=ToolCall(tool_name='weather', arguments={'location': 'Berlin'}, id='call_jHX0RCDHRKX7h8V9RrNs6apy'), error=False)],
    _meta={}
)]

final assistant messages: [ChatMessage(
    _role=<ChatRole.ASSISTANT: 'assistant'>,
    _content=[TextContent(text='The current weather in Berlin is 2 °C with a humidity level of 15%.')],
    _meta={'model': 'gpt-5.4-nano', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 19, 'prompt_tokens': 85, 'total_tokens': 104}}
)]
Additional References
📚 Tutorials:
- Build a Tool-Calling Agent
- Creating a Multi-Agent System with Haystack
- Human-in-the-Loop with Haystack Agents
🧑🍳 Cookbooks: