

This component constructs prompts dynamically by processing string templates.

Folder path: /builders/
Most common position in a pipeline: Before a Generator
Mandatory input variables: "prompt_source": A string
Output variables: "prompt": A dynamically constructed prompt


Deprecation Warning

This component is deprecated and will be removed in Haystack 2.4.0.

Use PromptBuilder instead.


DynamicPromptBuilder generates prompts dynamically by processing a string template. It integrates with Jinja2 templating.

If you would like your builder to work with a ChatMessage dynamically, check out the DynamicChatPromptBuilder component instead.

How it works

DynamicPromptBuilder takes a string template provided at pipeline runtime and renders it with runtime and template variables to produce the final prompt.
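The rendering itself is ordinary Jinja2 placeholder substitution. As a simplified stand-in (a regex-based sketch that only handles plain `{{var}}` placeholders, not real Jinja2 filters or loops), the idea looks like this:

```python
import re

def render(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with the matching variable's value.
    # Real Jinja2 additionally supports filters, loops, and attribute access.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        template,
    )

print(render("Tell me about {{location}}", {"location": "Berlin"}))
# → Tell me about Berlin
```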

Using variables

You can initialize this component with runtime_variables that are resolved during pipeline execution. For example, if runtime_variables contains documents, DynamicPromptBuilder expects an input called documents.
The values these variables receive at pipeline runtime are then injected into the placeholders of the string template provided to the run method.

You can also provide additional template_variables directly to the pipeline run method. These variables are then merged with the variables from the pipeline runtime.
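Conceptually, that merge is a dictionary update. The following is a plain-Python sketch (the helper name is hypothetical, and it assumes template_variables passed to run take precedence over values resolved from the pipeline runtime):

```python
from typing import Optional

def merge_template_over_runtime(
    runtime_vars: dict, template_vars: Optional[dict]
) -> dict:
    # Start from the variables resolved at pipeline runtime, then let the
    # template_variables passed directly to run() override them.
    merged = dict(runtime_vars)
    merged.update(template_vars or {})
    return merged

merged = merge_template_over_runtime(
    {"documents": ["doc1"]}, {"query": "Where does the speaker live?"}
)
# merged now holds both the runtime variable and the template variable
```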



You must declare variables in runtime_variables if they are passed between pipeline components as inputs and outputs.
If you provide template_variables directly to the run method of DynamicPromptBuilder, do not also list them in runtime_variables.


On its own

This code example shows how a prompt is generated using both runtime and template variables:

from haystack.components.builders import DynamicPromptBuilder
from haystack.dataclasses import ChatMessage

prompt_builder = DynamicPromptBuilder(runtime_variables=["location"])
location = "Berlin"
messages = [ChatMessage.from_system("Always thank the user for their question after the response is given."),
            ChatMessage.from_user("Tell me about {{location}}")]
prompt_builder.run(template_variables={"location": location}, prompt_source=messages)

In a pipeline

This is an example of a query pipeline with both runtime and template variables:

from typing import List
from haystack.components.builders import DynamicPromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack import Pipeline, component, Document

prompt_builder = DynamicPromptBuilder(runtime_variables=["documents"])
llm = OpenAIGenerator(api_key="<your-api-key>", model="gpt-3.5-turbo")

@component
class DocumentProducer:

    @component.output_types(documents=List[Document])
    def run(self, doc_input: str):
        return {"documents": [Document(content=doc_input)]}

pipe = Pipeline()
pipe.add_component("doc_producer", DocumentProducer())
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("llm", llm)
pipe.connect("doc_producer.documents", "prompt_builder.documents")
pipe.connect("prompt_builder.prompt", "llm.prompt")

template = "Here is the document: {{documents[0].content}} \n Answer: {{query}}"
result = pipe.run(
    data={
        "doc_producer": {"doc_input": "Hello world, I live in Berlin"},
        "prompt_builder": {
            "prompt_source": template,
            "template_variables": {"query": "Where does the speaker live?"},
        },
    }
)

Here's what a response would look like:

>> {'llm': {'replies': ['The speaker lives in Berlin.'],
>> 'meta': [{'model': 'gpt-3.5-turbo-0613',
>> 'index': 0,
>> 'finish_reason': 'stop',
>> 'usage': {'prompt_tokens': 28,
>> 'completion_tokens': 6,
>> 'total_tokens': 34}}]}}

Related Links

See the parameter details in our API reference: