# DynamicPromptBuilder
This component constructs prompts dynamically by processing string templates.
| Name | DynamicPromptBuilder |
|---|---|
| Folder path | /builders/ |
| Most common position in a Pipeline | Before a Generator |
| Mandatory input variables | `prompt_source`: a string template |
| Output variables | `prompt`: a dynamically constructed prompt |
## Overview
DynamicPromptBuilder generates prompts dynamically by processing a string template. It integrates with Jinja2 templating.
If you would like your builder to work with a ChatMessage dynamically, check out the DynamicChatPromptBuilder component instead.
## How it works
DynamicPromptBuilder takes a string template at Pipeline run time and renders it with runtime and template variables to produce the final prompt.
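To illustrate the idea, here is a minimal sketch of `{{placeholder}}` substitution using only the standard library. This is not Haystack's implementation: the real component renders templates with the full Jinja2 engine, which also supports expressions such as `{{documents[0].content}}`, filters, and control structures.

```python
import re

def render(template: str, variables: dict) -> str:
    # Toy stand-in for template rendering: replace each {{name}}
    # placeholder with the matching value from `variables`.
    # The actual component delegates this to Jinja2.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

print(render("Tell me about {{location}}", {"location": "Berlin"}))
# Tell me about Berlin
```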
## Using variables
You can initialize this component with `runtime_variables` that are resolved during Pipeline execution. For example, if `runtime_variables` contains `documents`, DynamicPromptBuilder expects an input called `documents`.
The values of these variables, supplied by the Pipeline at runtime, are injected into the placeholders of the string template passed to the `run` method.
You can also provide additional `template_variables` directly to the Pipeline `run` method. These variables are then merged with the variables resolved from the Pipeline runtime.
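The merge described above can be sketched as a plain dictionary merge. This is an illustration, not Haystack's actual source; in particular, the assumption that directly passed `template_variables` win on key collisions is ours, and the variable names below are made up for the example.

```python
# Values arriving from connected Pipeline components (hypothetical).
pipeline_inputs = {"documents": ["doc A"], "query": "placeholder question"}

# Values passed directly to run() via template_variables (hypothetical).
template_variables = {"query": "Where does the speaker live?"}

# Assumed merge semantics: directly passed values take precedence.
merged = {**pipeline_inputs, **template_variables}
print(merged["query"])
# Where does the speaker live?
```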
## Variables
You must provide `runtime_variables` if they are passed as inputs and outputs between Pipeline components. If you provide `template_variables` directly in the `run` method of `DynamicPromptBuilder`, do not pass them to `runtime_variables`.
## Usage
### On its own
This code example shows how a prompt is generated using both runtime and template variables:
```python
from haystack.components.builders import DynamicPromptBuilder

prompt_builder = DynamicPromptBuilder(runtime_variables=["location"])
location = "Berlin"
template = "Tell me about {{location}}"
prompt_builder.run(template_variables={"location": location}, prompt_source=template)
```
Note that `prompt_source` must be a string template. To build prompts from `ChatMessage` objects, as mentioned above, use DynamicChatPromptBuilder instead.
### In a Pipeline
This is an example of a query Pipeline with both runtime and template variables:
```python
from typing import List

from haystack import Document, Pipeline, component
from haystack.components.builders import DynamicPromptBuilder
from haystack.components.generators import OpenAIGenerator

prompt_builder = DynamicPromptBuilder(runtime_variables=["documents"])
llm = OpenAIGenerator(api_key="<your-api-key>", model="gpt-3.5-turbo")


@component
class DocumentProducer:
    """Toy component that wraps its string input in a Document."""

    @component.output_types(documents=List[Document])
    def run(self, doc_input: str):
        return {"documents": [Document(content=doc_input)]}


pipe = Pipeline()
pipe.add_component("doc_producer", DocumentProducer())
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("llm", llm)
pipe.connect("doc_producer.documents", "prompt_builder.documents")
pipe.connect("prompt_builder.prompt", "llm.prompt")

template = "Here is the document: {{documents[0].content}} \n Answer: {{query}}"
result = pipe.run(
    data={
        "doc_producer": {"doc_input": "Hello world, I live in Berlin"},
        "prompt_builder": {
            "prompt_source": template,
            "template_variables": {"query": "Where does the speaker live?"},
        },
    }
)
print(result)
```
Here's what a response would look like:
```
{'llm': {'replies': ['The speaker lives in Berlin.'],
         'meta': [{'model': 'gpt-3.5-turbo-0613',
                   'index': 0,
                   'finish_reason': 'stop',
                   'usage': {'prompt_tokens': 28,
                             'completion_tokens': 6,
                             'total_tokens': 34}}]}}
```
