Tool
Tool is a class representing a function that Language Models can prepare a call for.
A growing number of Language Models now support passing tool definitions alongside the prompt.
Tool calling refers to the ability of Language Models to generate calls to tools - be they functions or APIs - when responding to user queries. The model prepares the tool call but does not execute it.
If you are looking for the details of this data class's methods and parameters, visit our API documentation.
Tool class
Tool is a simple and unified abstraction to represent tools in the Haystack framework. A tool is a function for which Language Models can prepare a call.
The Tool class is used in Chat Generators and provides a consistent experience across models. Tool is also used in the ToolInvoker component, which executes calls prepared by Language Models.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    parameters: Dict[str, Any]
    function: Callable
- name is the name of the Tool.
- description is a string describing what the Tool does.
- parameters is a JSON schema describing the expected parameters.
- function is invoked when the Tool is called.
Keep in mind that accurate definitions of name and description are important for the Language Model to prepare the call correctly. For example, a model is far more likely to prepare a correct call for a tool named addition_tool described as "This tool adds two numbers" than for one with a generic name and a vague description.
Tool exposes a tool_spec property, which returns the tool specification to be used by Language Models. It also has an invoke method that executes the underlying function with the provided parameters.
Tool Initialization
Here is how to initialize a Tool to work with a specific function:
from haystack.tools import Tool

def add(a: int, b: int) -> int:
    return a + b

parameters = {
    "type": "object",
    "properties": {
        "a": {"type": "integer"},
        "b": {"type": "integer"}
    },
    "required": ["a", "b"]
}

add_tool = Tool(name="addition_tool",
                description="This tool adds two numbers",
                parameters=parameters,
                function=add)

print(add_tool.tool_spec)
print(add_tool.invoke(a=15, b=10))
{'name': 'addition_tool',
 'description': 'This tool adds two numbers',
 'parameters': {'type': 'object',
                'properties': {'a': {'type': 'integer'}, 'b': {'type': 'integer'}},
                'required': ['a', 'b']}}
25
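Note that invoke executes the underlying function with the keyword arguments you provide, so an argument mismatch surfaces as an error at invocation time. A minimal sketch, reusing add_tool from above (the exact exception type may vary):

# Hypothetical misuse: the required argument "b" is missing
try:
    add_tool.invoke(a=15)
except Exception as e:
    print(f"{type(e).__name__}: {e}")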
@tool decorator
The @tool decorator simplifies converting a function into a Tool. It infers the Tool name, description, and parameters from the function and automatically generates a JSON schema. It uses Python's typing.Annotated for parameter descriptions. If you need to customize the Tool name and description, use create_tool_from_function instead.
from typing import Annotated, Literal
from haystack.tools import tool

@tool
def get_weather(
    city: Annotated[str, "the city for which to get the weather"] = "Munich",
    unit: Annotated[Literal["Celsius", "Fahrenheit"], "the unit for the temperature"] = "Celsius"):
    '''A simple function to get the current weather for a location.'''
    return f"Weather report for {city}: 20 {unit}, sunny"

print(get_weather)
Tool(name='get_weather', description='A simple function to get the current weather for a location.',
parameters={
'type': 'object',
'properties': {
'city': {'type': 'string', 'description': 'the city for which to get the weather', 'default': 'Munich'},
'unit': {
'type': 'string',
'enum': ['Celsius', 'Fahrenheit'],
'description': 'the unit for the temperature',
'default': 'Celsius',
},
}
},
function=<function get_weather at 0x7f7b3a8a9b80>)
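As the output above shows, get_weather is now a Tool instance rather than a plain function, so you execute it through invoke. A minimal sketch (output shown as a comment):

print(get_weather.invoke(city="Berlin", unit="Celsius"))
# Weather report for Berlin: 20 Celsius, sunny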
create_tool_from_function
The create_tool_from_function function provides more flexibility than the @tool decorator and allows setting the Tool name and description explicitly. It infers the Tool parameters and generates a JSON schema in the same way as the @tool decorator.
from typing import Annotated, Literal
from haystack.tools import create_tool_from_function

def get_weather(
    city: Annotated[str, "the city for which to get the weather"] = "Munich",
    unit: Annotated[Literal["Celsius", "Fahrenheit"], "the unit for the temperature"] = "Celsius"):
    '''A simple function to get the current weather for a location.'''
    return f"Weather report for {city}: 20 {unit}, sunny"

tool = create_tool_from_function(get_weather)
print(tool)
Tool(name='get_weather', description='A simple function to get the current weather for a location.',
parameters={
'type': 'object',
'properties': {
'city': {'type': 'string', 'description': 'the city for which to get the weather', 'default': 'Munich'},
'unit': {
'type': 'string',
'enum': ['Celsius', 'Fahrenheit'],
'description': 'the unit for the temperature',
'default': 'Celsius',
},
}
},
function=<function get_weather at 0x7f7b3a8a9b80>)
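The example above keeps the inferred name and description. Since create_tool_from_function allows setting them, you can pass explicit values instead; a minimal sketch, assuming name and description keyword arguments:

custom_tool = create_tool_from_function(
    get_weather,
    name="weather_lookup",
    description="Look up the current weather for a given city.",
)
print(custom_tool.name)         # weather_lookup
print(custom_tool.description)  # Look up the current weather for a given city.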
Usage
To better understand this section, make sure you are also familiar with Haystack's ChatMessage data class.
Passing Tools to a Chat Generator
Using the tools parameter, a list of Tool objects can be passed to Chat Generators during initialization or in the run method. Tools passed at runtime override those set at initialization.
Chat Generators support
Not all Chat Generators currently support tools, but we are actively expanding tool support across more models. Look out for the tools parameter in a specific Chat Generator's __init__ and run methods.
from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator

# Initialize the Chat Generator with the addition tool
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini", tools=[add_tool])

# here we expect the Tool to be invoked
res = chat_generator.run([ChatMessage.from_user("10 + 238")])
print(res)

# here the model can respond without using the Tool
res = chat_generator.run([ChatMessage.from_user("What is the habitat of a lion?")])
print(res)
{'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>,
             _content=[ToolCall(tool_name='addition_tool',
                                arguments={'a': 10, 'b': 238},
                                id='call_rbYtbCdW0UbWMfy2x0sgF1Ap')],
             _meta={...})]}

{'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>,
             _content=[TextContent(text='Lions primarily inhabit grasslands, savannas, and open woodlands. ...')],
             _meta={...})]}
The same result as in the previous run can be achieved by passing tools at runtime:
# Initialize the Chat Generator without tools
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini")

# pass tools in the run method
res_w_tool_call = chat_generator.run([ChatMessage.from_user("10 + 238")], tools=[add_tool])
print(res_w_tool_call)
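Note that runtime tools replace the initialization tools rather than extending them. The sketch below illustrates this with a hypothetical subtract_tool that reuses the JSON schema defined for add_tool; for that run, the model only sees the tool passed to run:

def subtract(a: int, b: int) -> int:
    return a - b

# Hypothetical tool for illustration, reusing the schema defined for add_tool
subtract_tool = Tool(name="subtraction_tool",
                     description="This tool subtracts two numbers",
                     parameters=parameters,
                     function=subtract)

# Initialized with the addition tool...
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini", tools=[add_tool])

# ...but this call only exposes subtract_tool: runtime tools override the init ones
res = chat_generator.run([ChatMessage.from_user("238 - 10")], tools=[subtract_tool])
print(res)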
Executing Tool Calls
To execute prepared tool calls, you can use the ToolInvoker component. This component acts as the execution engine for tools, processing the calls prepared by the Language Model.
Here's an example:
import random

from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.tools import ToolInvoker
from haystack.tools import Tool

# Define a dummy weather tool
def dummy_weather(location: str):
    return {"temp": f"{random.randint(-10,40)} °C",
            "humidity": f"{random.randint(0,100)}%"}

weather_tool = Tool(
    name="weather",
    description="A tool to get the weather",
    function=dummy_weather,
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
)
# Initialize the Chat Generator with the weather tool
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini", tools=[weather_tool])

# Initialize the Tool Invoker with the weather tool
tool_invoker = ToolInvoker(tools=[weather_tool])

user_message = ChatMessage.from_user("What is the weather in Berlin?")

replies = chat_generator.run(messages=[user_message])["replies"]
print(f"assistant messages: {replies}")

# If the assistant message contains a tool call, run the tool invoker
if replies[0].tool_calls:
    tool_messages = tool_invoker.run(messages=replies)["tool_messages"]
    print(f"tool messages: {tool_messages}")
assistant messages: [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[ToolCall(tool_name='weather',
arguments={'location': 'Berlin'}, id='call_YEvCEAmlvc42JGXV84NU8wtV')], _meta={'model': 'gpt-4o-mini-2024-07-18',
'index': 0, 'finish_reason': 'tool_calls', 'usage': {'completion_tokens': 13, 'prompt_tokens': 50, 'total_tokens': 63}})]

tool messages: [ChatMessage(_role=<ChatRole.TOOL: 'tool'>, _content=[ToolCallResult(result="{'temp': '22 °C',
'humidity': '35%'}", origin=ToolCall(tool_name='weather', arguments={'location': 'Berlin'},
id='call_YEvCEAmlvc42JGXV84NU8wtV'), error=False)], _meta={})]
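Each tool message wraps a ToolCallResult, so you can read the raw result directly instead of parsing the printed representation. A minimal sketch, assuming the tool_call_result accessor described in the ChatMessage docs:

result = tool_messages[0].tool_call_result
print(result.result)            # the payload returned by dummy_weather
print(result.origin.tool_name)  # 'weather'
print(result.error)             # False, since the invocation succeeded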
Processing Tool Results with the Chat Generator
In some cases, the raw output from a tool may not be immediately suitable for the end user. You can refine the tool's response by passing it back to the Chat Generator, which turns it into a user-friendly, conversational message.
In this example, we pass the tool's response back to the Chat Generator for final processing:
import random

from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.tools import ToolInvoker
from haystack.tools import Tool

# Define a dummy weather tool
def dummy_weather(location: str):
    return {"temp": f"{random.randint(-10,40)} °C",
            "humidity": f"{random.randint(0,100)}%"}

weather_tool = Tool(
    name="weather",
    description="A tool to get the weather",
    function=dummy_weather,
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
)
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini", tools=[weather_tool])
tool_invoker = ToolInvoker(tools=[weather_tool])

user_message = ChatMessage.from_user("What is the weather in Berlin?")

replies = chat_generator.run(messages=[user_message])["replies"]
print(f"assistant messages: {replies}")

if replies[0].tool_calls:
    tool_messages = tool_invoker.run(messages=replies)["tool_messages"]
    print(f"tool messages: {tool_messages}")

    # we pass all the messages to the Chat Generator
    messages = [user_message] + replies + tool_messages
    final_replies = chat_generator.run(messages=messages)["replies"]
    print(f"final assistant messages: {final_replies}")
assistant messages: [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[ToolCall(tool_name='weather',
arguments={'location': 'Berlin'}, id='call_jHX0RCDHRKX7h8V9RrNs6apy')], _meta={'model': 'gpt-4o-mini-2024-07-18',
'index': 0, 'finish_reason': 'tool_calls', 'usage': {'completion_tokens': 13, 'prompt_tokens': 50, 'total_tokens': 63}})]

tool messages: [ChatMessage(_role=<ChatRole.TOOL: 'tool'>, _content=[ToolCallResult(result="{'temp': '2 °C',
'humidity': '15%'}", origin=ToolCall(tool_name='weather', arguments={'location': 'Berlin'},
id='call_jHX0RCDHRKX7h8V9RrNs6apy'), error=False)], _meta={})]

final assistant messages: [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text='The
current weather in Berlin is 2 °C with a humidity level of 15%.')], _meta={'model': 'gpt-4o-mini-2024-07-18',
'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 19, 'prompt_tokens': 85, 'total_tokens': 104}})]
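This generate/invoke/respond round trip generalizes to a simple loop that keeps executing tools until the model replies with plain text. A minimal sketch built from the components defined above; in practice, you would likely cap the number of iterations:

messages = [ChatMessage.from_user("What is the weather in Berlin?")]

while True:
    replies = chat_generator.run(messages=messages)["replies"]
    messages += replies
    if not replies[0].tool_calls:
        break  # the model produced a final text answer
    # execute the prepared tool calls and feed the results back
    messages += tool_invoker.run(messages=replies)["tool_messages"]

print(messages[-1].text)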