
DynamicChatPromptBuilder

This component constructs prompts dynamically by processing chat messages.

Name: DynamicChatPromptBuilder
Folder Path: /builders/
Most common Position in a Pipeline: Before a Generator
Mandatory Input variables: "prompt_source": a List of ChatMessage objects
Output variables: "prompt": a dynamically constructed prompt

Overview

DynamicChatPromptBuilder generates prompts dynamically by processing a list of ChatMessage instances. It uses the Jinja2 templating engine to render the message templates.

ChatMessage is a data class that includes message content, a role (who generated the message, such as user, assistant, system, or function), and optional metadata.
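For a quick illustration, here is how ChatMessage objects are typically created and inspected (a minimal sketch using the same class methods as the examples below):

from haystack.dataclasses import ChatMessage

# each factory method sets the corresponding role
system_msg = ChatMessage.from_system("You are a helpful assistant.")
user_msg = ChatMessage.from_user("Tell me about Berlin")

print(user_msg.content)  # "Tell me about Berlin"
print(user_msg.role)     # ChatRole.USER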

If you would like your builder to work dynamically with a simple string template, check out the DynamicPromptBuilder component instead.

How it works

DynamicChatPromptBuilder treats the last user message in the list of ChatMessage instances as a Jinja2 template and renders it with the runtime and template variables to produce the final prompt.
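Since only the last user message is used as the template, placeholders in earlier messages are left untouched. A minimal standalone sketch (assuming earlier messages pass through unrendered, as the description above implies):

from haystack.components.builders import DynamicChatPromptBuilder
from haystack.dataclasses import ChatMessage

builder = DynamicChatPromptBuilder()
messages = [
    ChatMessage.from_user("Earlier question about {{ city }}"),  # not rendered: not the last user message
    ChatMessage.from_user("Tell me more about {{ city }}"),      # rendered: the last user message
]
result = builder.run(prompt_source=messages, template_variables={"city": "Berlin"})
# only the last message has "{{ city }}" replaced by "Berlin"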

Using variables

You can initialize this component with runtime_variables that are resolved during Pipeline runtime execution. For example, if runtime_variables contains documents, DynamicChatPromptBuilder will expect an input called documents. The values that other Pipeline components produce for these variables at runtime are then injected into the template placeholders of a ChatMessage.
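As a sketch, declaring a runtime variable at initialization looks like this. In a Pipeline, the documents input would typically be connected to another component, such as a Retriever (see the last example on this page); passing it directly to run here is an assumption made for a standalone illustration:

from haystack import Document
from haystack.components.builders import DynamicChatPromptBuilder
from haystack.dataclasses import ChatMessage

# "documents" becomes an input of the component
builder = DynamicChatPromptBuilder(runtime_variables=["documents"])

messages = [ChatMessage.from_user(
    "Answer using these documents: {% for doc in documents %}{{ doc.content }} {% endfor %}"
)]
result = builder.run(
    prompt_source=messages,
    documents=[Document(content="There are over 7,000 languages spoken around the world today.")],
)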

You can also provide additional template_variables directly to the Pipeline run method. These variables are then merged with the variables from the Pipeline runtime.
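The last example on this page ("In a Pipeline With Runtime Variables") shows this merge in practice: location is supplied through template_variables, while query is passed as a runtime variable and documents arrive from a connected Retriever.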

πŸ“˜

Variables

You must declare variables in runtime_variables if they are passed as inputs or outputs between Pipeline components.
If you provide template_variables directly to the run method of DynamicChatPromptBuilder, do not also declare them in runtime_variables.

Usage

On its own

This code example shows how a prompt is generated from a list of ChatMessage objects using template variables:

from haystack.components.builders import DynamicChatPromptBuilder
from haystack.dataclasses import ChatMessage

prompt_builder = DynamicChatPromptBuilder()
location = "Berlin"
messages = [ChatMessage.from_system("Always thank the user for their question after the response is given."),
            ChatMessage.from_user("Tell me about {{location}}")]

result = prompt_builder.run(template_variables={"location": location}, prompt_source=messages)
print(result)
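Running this should print a dictionary with a prompt key holding the processed ChatMessage list; in the last user message, the {{location}} placeholder is replaced with "Berlin".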

In a Pipeline Without Runtime Variables

This is an example of a Pipeline without any runtime variables. Here, DynamicChatPromptBuilder fills in a prompt template with a location variable and passes it to an LLM:

from haystack.components.builders import DynamicChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack import Pipeline
from haystack.utils import Secret

# no parameter init, we don't use any runtime template variables
prompt_builder = DynamicChatPromptBuilder()
llm = OpenAIChatGenerator(api_key=Secret.from_token("<your-api-key>"), model="gpt-3.5-turbo")

pipe = Pipeline()
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("llm", llm)
pipe.connect("prompt_builder.prompt", "llm.messages")

location = "Berlin"
system_message = ChatMessage.from_system("You are a helpful assistant giving out valuable information to tourists.")
messages = [system_message, ChatMessage.from_user("Tell me about {{location}}")]

res = pipe.run(data={"prompt_builder": {"template_variables": {"location": location}, "prompt_source": messages}})
print(res)

This is what a response would look like:

>> {'llm': {'replies': [ChatMessage(content="Berlin is the capital city of Germany and one of the most vibrant
and diverse cities in Europe. Here are some key things to know...Enjoy your time exploring the vibrant and dynamic
capital of Germany!", role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'gpt-3.5-turbo-0613',
'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 27, 'completion_tokens': 681, 'total_tokens':
708}})]}}

Then, you could ask about the weather forecast for that location. DynamicChatPromptBuilder fills in the template with the new day_count variable and passes the prompt to the LLM once again:

messages = [system_message, ChatMessage.from_user("What's the weather forecast for {{location}} in the next {{day_count}} days?")]

res = pipe.run(data={"prompt_builder": {"template_variables": {"location": location, "day_count": "5"},
                                        "prompt_source": messages}})

print(res)

Here’s the response to this request:

>> {'llm': {'replies': [ChatMessage(content="Here is the weather forecast for Berlin in the next 5
days:\n\nDay 1: Mostly cloudy with a high of 22Β°C (72Β°F) and...so it's always a good idea to check for updates
closer to your visit.", role=<ChatRole.ASSISTANT: 'assistant'>, name=None, meta={'model': 'gpt-3.5-turbo-0613',
'index': 0, 'finish_reason': 'stop', 'usage': {'prompt_tokens': 37, 'completion_tokens': 201,
'total_tokens': 238}})]}}

In a Pipeline With Runtime Variables

This is an example of a Pipeline with runtime variables. Here, DynamicChatPromptBuilder fills in a prompt template with a location template variable and the Documents it receives from a Retriever, then passes the prompt to an LLM:

from haystack import Document
from haystack.components.builders import DynamicChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.dataclasses import ChatMessage
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack import Pipeline

document_store = InMemoryDocumentStore()
documents = [Document(content="There are over 7,000 languages spoken around the world today."),
             Document(content="Elephants have been observed to behave in a way that indicates a high level of self-awareness, such as recognizing themselves in mirrors."),
             Document(content="In certain parts of the world, like the Maldives, Puerto Rico, and San Diego, you can witness the phenomenon of bioluminescent waves.")]
document_store.write_documents(documents=documents)

pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipeline.add_component("prompt_builder", DynamicChatPromptBuilder(runtime_variables=["query", "documents"]))
pipeline.add_component("llm", OpenAIChatGenerator())
pipeline.connect("retriever.documents", "prompt_builder.documents")
pipeline.connect("prompt_builder.prompt", "llm.messages")

question = "How many languages are there?"
location = "Puerto Rico"
system_message = ChatMessage.from_system("You are a helpful assistant giving out valuable information to tourists.")
messages = [system_message, ChatMessage.from_user("""
Given these documents and given that I am currently in {{ location }}, answer the question.
Documents:
    {% for doc in documents %}
        {{ doc.content }}
    {% endfor %}

Question: {{query}}
Answer:
""")]
question = "Can I see bioluminescent waves at my current location?"
res = pipeline.run(data={"retriever": {"query": question}, "prompt_builder": {"template_variables": {"location": location}, "prompt_source": messages, "query": question}})
print(res)

This is what a response would look like:

>> {'llm': {'replies': [ChatMessage(content='Yes, you can see bioluminescent waves in certain parts of Puerto Rico.
One of the most well-known locations for experiencing this phenomenon in Puerto Rico is Mosquito Bay on the island
of Vieques. The bioluminescent waves are caused by microorganisms called dinoflagellates that emit light when agitated,
creating a beautiful natural light show in the water. It is a must-see experience if you are in Puerto Rico and have
the opportunity to visit Mosquito Bay or other bioluminescent bays in the area.', role=<ChatRole.ASSISTANT: 'assistant'>,
name=None, meta={'model': 'gpt-3.5-turbo-0125', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 110,
'prompt_tokens': 141, 'total_tokens': 251}})]}}

Related Links

See the parameter details in our API reference: