ToolInvoker
This component is designed to execute tool calls prepared by language models. It acts as a bridge between the language model's output and the actual execution of functions or tools that perform specific tasks.
Most common position in a pipeline | After a Chat Generator
Mandatory init variables | `tools`: A list of Tools that can be invoked
Mandatory run variables | `messages`: A list of ChatMessage objects from a Chat Generator containing tool calls
Output variables | `tool_messages`: A list of ChatMessage objects with the tool role. Each ChatMessage object wraps the result of a tool invocation.
API reference | Tools
GitHub link | https://github.com/deepset-ai/haystack/blob/main/haystack/components/tools/tool_invoker.py
Overview
A `ToolInvoker` is a component that processes `ChatMessage` objects containing tool calls. It invokes the corresponding tools and returns the results as a list of `ChatMessage` objects. Each tool is defined with a name, description, parameters, and a function that performs the task. The `ToolInvoker` manages these tools and handles the invocation process.
You can pass multiple tools to the `ToolInvoker` component, and it will automatically choose the right tool to call based on the tool calls produced by a Language Model.
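For illustration, here is a minimal sketch with two hypothetical arithmetic tools (add and subtract); the invoker dispatches each tool call to the tool whose name matches:
from haystack.dataclasses import ChatMessage, ToolCall
from haystack.components.tools import ToolInvoker
from haystack.tools import Tool
# Hypothetical tools for demonstration
def add_numbers(a: int, b: int):
    return a + b
def subtract_numbers(a: int, b: int):
    return a - b
math_parameters = {"type": "object",
                   "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
                   "required": ["a", "b"]}
add_tool = Tool(name="add", description="Add two numbers",
                function=add_numbers, parameters=math_parameters)
subtract_tool = Tool(name="subtract", description="Subtract two numbers",
                     function=subtract_numbers, parameters=math_parameters)
invoker = ToolInvoker(tools=[add_tool, subtract_tool])
# A tool call naming "subtract" is dispatched to subtract_numbers
message = ChatMessage.from_assistant(
    tool_calls=[ToolCall(tool_name="subtract", arguments={"a": 5, "b": 3})]
)
print(invoker.run(messages=[message]))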
The `ToolInvoker` has two additional helpful parameters:
- `convert_result_to_json_string`: Use `json.dumps` (when True) or `str` (when False) to convert the tool result into a string.
- `raise_on_failure`: If True, the component raises an exception when a tool call fails. If False, it returns a `ChatMessage` object with `error=True` and a description of the error in `result`. Use this, for example, when you want to keep the Language Model running in a loop and fixing its own errors.
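As a minimal sketch, both parameters can be set at initialization time; the echo tool below is a hypothetical placeholder:
from haystack.components.tools import ToolInvoker
from haystack.tools import Tool
def echo(text: str):
    return {"echoed": text}
echo_tool = Tool(name="echo", description="Echo the input text",
                 function=echo,
                 parameters={"type": "object",
                             "properties": {"text": {"type": "string"}},
                             "required": ["text"]})
invoker = ToolInvoker(
    tools=[echo_tool],
    convert_result_to_json_string=True,  # serialize tool results with json.dumps
    raise_on_failure=False,              # return errors as tool messages instead of raising
)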
ChatMessage and Tool Data Classes
Follow the links to learn more about ChatMessage and Tool data classes.
Usage
On its own
from haystack.dataclasses import ChatMessage, ToolCall
from haystack.components.tools import ToolInvoker
from haystack.tools import Tool
# Tool definition
def dummy_weather_function(city: str):
    return f"The weather in {city} is 20 degrees."
parameters = {"type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]}
tool = Tool(name="weather_tool",
            description="A tool to get the weather",
            function=dummy_weather_function,
            parameters=parameters)
# Usually, the ChatMessage with tool_calls is generated by a Language Model
# Here, we create it manually for demonstration purposes
tool_call = ToolCall(
    tool_name="weather_tool",
    arguments={"city": "Berlin"}
)
message = ChatMessage.from_assistant(tool_calls=[tool_call])
# ToolInvoker initialization and run
invoker = ToolInvoker(tools=[tool])
result = invoker.run(messages=[message])
print(result)
>> {
>>     'tool_messages': [
>>         ChatMessage(
>>             _role=<ChatRole.TOOL: 'tool'>,
>>             _content=[
>>                 ToolCallResult(
>>                     result='"The weather in Berlin is 20 degrees."',
>>                     origin=ToolCall(
>>                         tool_name='weather_tool',
>>                         arguments={'city': 'Berlin'},
>>                         id=None
>>                     )
>>                 )
>>             ],
>>             _meta={}
>>         )
>>     ]
>> }
In a pipeline
The following code snippet shows how to process a user query about the weather. First, we define a `Tool` for fetching weather data, then we initialize a `ToolInvoker` to execute this tool, while using an `OpenAIChatGenerator` to generate responses. A `ConditionalRouter` is used in this pipeline to route messages based on whether they contain tool calls. The pipeline connects these components, processes a user message asking for the weather in Berlin, and outputs the result.
from haystack.dataclasses import ChatMessage
from haystack.components.tools import ToolInvoker
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.routers import ConditionalRouter
from haystack.tools import Tool
from haystack import Pipeline
from typing import List
# Define a dummy weather tool
import random
def dummy_weather(location: str):
    return {"temp": f"{random.randint(-10, 40)} °C",
            "humidity": f"{random.randint(0, 100)}%"}
weather_tool = Tool(
    name="weather",
    description="A tool to get the weather",
    function=dummy_weather,
    parameters={
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
)
# Initialize the ToolInvoker with the weather tool
tool_invoker = ToolInvoker(tools=[weather_tool])
# Initialize the ChatGenerator
chat_generator = OpenAIChatGenerator(model="gpt-4o-mini", tools=[weather_tool])
# Define routing conditions
routes = [
    {
        "condition": "{{replies[0].tool_calls | length > 0}}",
        "output": "{{replies}}",
        "output_name": "there_are_tool_calls",
        "output_type": List[ChatMessage],
    },
    {
        "condition": "{{replies[0].tool_calls | length == 0}}",
        "output": "{{replies}}",
        "output_name": "final_replies",
        "output_type": List[ChatMessage],
    },
]
# Initialize the ConditionalRouter
router = ConditionalRouter(routes, unsafe=True)
# Create the pipeline
pipeline = Pipeline()
pipeline.add_component("generator", chat_generator)
pipeline.add_component("router", router)
pipeline.add_component("tool_invoker", tool_invoker)
# Connect components
pipeline.connect("generator.replies", "router")
pipeline.connect("router.there_are_tool_calls", "tool_invoker.messages") # Correct connection
# Example user message
user_message = ChatMessage.from_user("What is the weather in Berlin?")
# Run the pipeline
result = pipeline.run({"generator": {"messages": [user_message]}})
# Print the result
print(result)
{
    'tool_invoker': {
        'tool_messages': [
            ChatMessage(
                _role=<ChatRole.TOOL: 'tool'>,
                _content=[
                    ToolCallResult(
                        result="{'temp': '33 °C', 'humidity': '79%'}",
                        origin=ToolCall(
                            tool_name='weather',
                            arguments={'location': 'Berlin'},
                            id='call_pUVl8Cycssk1dtgMWNT1T9eT'
                        ),
                        error=False
                    )
                ],
                _name=None,
                _meta={}
            )
        ]
    }
}
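To read the tool's output back out of the pipeline result, you can unwrap the returned tool message, for example via the `tool_call_result` accessor of `ChatMessage` (a minimal sketch based on the result structure above):
tool_message = result["tool_invoker"]["tool_messages"][0]
print(tool_message.tool_call_result.result)
# e.g. {'temp': '33 °C', 'humidity': '79%'}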
Additional References
🧑‍🍳 Cookbooks: