Uses Large Language Models directly in your pipelines.
Module prompt_node
PromptTemplate
class PromptTemplate(BasePromptTemplate, ABC)
PromptTemplate is a template for a prompt you feed to the model to instruct it what to do. For example, if you want the model to perform sentiment analysis, you simply tell it to do that in a prompt. Here's what such a prompt template may look like:
PromptTemplate(
    name="sentiment-analysis",
    prompt_text="Give a sentiment for this context. Answer with positive, negative or neutral. Context: $documents; Answer:",
)
Optionally, you can declare prompt parameters in the PromptTemplate. Prompt parameters are input parameters that need to be filled in the prompt_text for the model to perform the task. You declare prompt parameters by adding variables to the prompt text in the format $variable. In the template above, there's one prompt parameter: $documents.
At runtime, these variables are filled in with the arguments passed to the fill() method of the PromptTemplate. So in the example above, the $documents variable is filled with the Documents whose sentiment you want the model to analyze.
For more details on how to use PromptTemplate, see PromptNode.
PromptTemplate.__init__
def __init__(name: str,
prompt_text: str,
prompt_params: Optional[List[str]] = None)
Creates a PromptTemplate instance.
Arguments:
name: The name of the prompt template (for example, sentiment-analysis or question-generation). You can specify your own name, but it must be unique.
prompt_text: The prompt text, including prompt parameters.
prompt_params: Optional parameters that need to be filled in the prompt text. If you don't specify them, they're inferred from the prompt text: any variable in the prompt text in the format $variablename is interpreted as a prompt parameter.
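The inference of prompt parameters from the prompt text can be sketched with a regular expression. This is a simplified illustration of the $variable convention described above, not Haystack's actual implementation:

```python
import re

def infer_prompt_params(prompt_text: str) -> list:
    """Collect $variable names in order of first appearance."""
    params = []
    for match in re.finditer(r"\$([A-Za-z_]\w*)", prompt_text):
        name = match.group(1)
        if name not in params:
            params.append(name)
    return params

# The sentiment-analysis template above declares one parameter: documents.
infer_prompt_params(
    "Give a sentiment for this context. Answer with positive, negative "
    "or neutral. Context: $documents; Answer:"
)
```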
PromptTemplate.prepare
def prepare(*args, **kwargs) -> Dict[str, Any]
Prepares and verifies the prompt template with input parameters.
Arguments:
args: Non-keyword arguments to fill the parameters in the prompt text of a PromptTemplate.
kwargs: Keyword arguments to fill the parameters in the prompt text of a PromptTemplate.
Returns:
A dictionary with the prompt text and the prompt parameters.
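The contract described above can be illustrated with a hypothetical sketch (not the real implementation): prepare merges positional and keyword arguments into a single dictionary keyed by parameter name and verifies that nothing is missing.

```python
def prepare_sketch(prompt_text, prompt_params, *args, **kwargs):
    # Positional arguments map to parameters in declaration order;
    # keyword arguments are matched by name and take precedence.
    values = dict(zip(prompt_params, args))
    values.update(kwargs)
    missing = [p for p in prompt_params if p not in values]
    if missing:
        raise ValueError(f"Missing prompt parameters: {missing}")
    return {"prompt_text": prompt_text, **values}

prepare_sketch(
    "Context: $documents; Answer:", ["documents"], documents=["Great product!"]
)
```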
PromptTemplate.fill
def fill(*args, **kwargs) -> Iterator[str]
Fills the parameters defined in the prompt text with the arguments passed to it and returns an iterator over the filled prompt texts.
You can pass non-keyword (args) or keyword (kwargs) arguments to this method. If you pass non-keyword arguments, their order must match the left-to-right order of appearance of the parameters in the prompt text. For example, if the prompt text is Come up with a question for the given context and the answer. Context: $documents; Answer: $answers; Question:, then the first non-keyword argument fills the $documents variable and the second non-keyword argument fills the $answers variable.
If you pass keyword arguments, the order of the arguments doesn't matter. Variables in the prompt text are filled with the corresponding keyword argument.
Arguments:
args: Non-keyword arguments to fill the parameters in the prompt text. Their order must match the order of appearance of the parameters in the prompt text.
kwargs: Keyword arguments to fill the parameters in the prompt text.
Returns:
An iterator of prompt texts.
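The positional and keyword filling rules can be sketched as follows. This is a simplified stand-in for fill(), not the real method, which also handles iterating over lists of Documents:

```python
import re

def fill_sketch(prompt_text, *args, **kwargs):
    # Collect parameters in left-to-right order of first appearance.
    params = []
    for m in re.finditer(r"\$([A-Za-z_]\w*)", prompt_text):
        if m.group(1) not in params:
            params.append(m.group(1))
    # Positional args fill parameters in order; keywords fill by name.
    values = dict(zip(params, args))
    values.update(kwargs)
    # Substitute every $variable with its value in a single pass.
    yield re.sub(r"\$([A-Za-z_]\w*)", lambda m: str(values[m.group(1)]), prompt_text)

template = "Context: $documents; Answer: $answers; Question:"
list(fill_sketch(template, "Berlin is the capital of Germany.", "Berlin"))
```

Note that with keyword arguments the call order is irrelevant: fill_sketch(template, answers="Berlin", documents="...") produces the same result.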
PromptModel
class PromptModel(BaseComponent)
The PromptModel class is a component that uses a pre-trained model to perform tasks based on a prompt. Out of the box, it supports two model invocation layers: Hugging Face transformers and OpenAI, with the ability to register additional custom invocation layers.
Although it is possible to use PromptModel to invoke the underlying model directly, use PromptNode to interact with the model instead. PromptModel instances are a way for multiple PromptNode instances to share a single underlying model, and thus save computational resources.
For more details, refer to PromptNode.
PromptModel.__init__
def __init__(model_name_or_path: str = "google/flan-t5-base",
max_length: Optional[int] = 100,
api_key: Optional[str] = None,
use_auth_token: Optional[Union[str, bool]] = None,
use_gpu: Optional[bool] = None,
devices: Optional[List[Union[str, torch.device]]] = None,
model_kwargs: Optional[Dict] = None)
Creates an instance of PromptModel.
Arguments:
model_name_or_path: The name or path of the underlying model.
max_length: The maximum length of the output text generated by the model.
api_key: The API key to use for the model.
use_auth_token: The Hugging Face token to use.
use_gpu: Whether to use GPU or not.
devices: The devices on which the model is loaded.
model_kwargs: Additional keyword arguments passed to the underlying model.
PromptModel.register
def register(invocation_layer: Type[PromptModelInvocationLayer])
Registers an additional prompt model invocation layer: a class that implements the PromptModelInvocationLayer interface and provides a boolean matching condition on model_name_or_path.
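The registration mechanism can be illustrated with a minimal registry. The class names and the matching condition below are hypothetical; the real PromptModelInvocationLayer interface has more methods:

```python
# Minimal sketch of an invocation-layer registry. Each layer class decides,
# via a boolean predicate on model_name_or_path, whether it can serve a model.
class InvocationLayerSketch:
    @classmethod
    def supports(cls, model_name_or_path: str) -> bool:
        raise NotImplementedError

_registry = []

def register_sketch(layer_cls):
    _registry.append(layer_cls)

class HFLayerSketch(InvocationLayerSketch):
    @classmethod
    def supports(cls, model_name_or_path):
        # Hypothetical condition: Hub-style "org/model" names.
        return "/" in model_name_or_path

register_sketch(HFLayerSketch)

def select_layer(model_name_or_path):
    # The first registered layer whose condition matches wins.
    for layer_cls in _registry:
        if layer_cls.supports(model_name_or_path):
            return layer_cls
    raise ValueError(f"No invocation layer found for {model_name_or_path!r}")
```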
PromptModel.invoke
def invoke(prompt: Union[str, List[str]], **kwargs) -> List[str]
Takes a prompt and returns a list of responses, using the underlying invocation layer.
Arguments:
prompt: The prompt to use for the invocation. It can be a single prompt or a list of prompts.
kwargs: Additional keyword arguments to pass to the invocation layer.
Returns:
A list of model-generated responses for the prompt or prompts.
PromptNode
class PromptNode(BaseComponent)
The PromptNode class is the central abstraction in Haystack's large language model (LLM) support. PromptNode supports multiple NLP tasks out of the box. You can use it to perform tasks, such as summarization, question answering, question generation, and more, using a single, unified model within the Haystack framework.
One of the benefits of PromptNode is that you can use it to define and add additional prompt templates the model supports. Defining additional prompt templates makes it possible to extend the model's capabilities and use it for a broader range of NLP tasks in Haystack. Prompt engineers define templates for each NLP task and register them with PromptNode. The burden of defining templates for each task rests on the prompt engineers, not the users.
Using an instance of the PromptModel class, you can create multiple PromptNodes that share the same model, saving the memory and time required to load the model multiple times.
PromptNode also supports multiple model invocation layers: Hugging Face transformers and OpenAI, with the ability to register additional custom invocation layers. However, we currently support only Flan T5 and OpenAI InstructGPT models.
We recommend using LLMs fine-tuned on a collection of datasets phrased as instructions; otherwise, we find that the LLM doesn't follow prompt instructions well. This is why we recommend using Flan T5 or OpenAI InstructGPT models.
For more details, see PromptNode.
PromptNode.__init__
def __init__(model_name_or_path: Union[str,
PromptModel] = "google/flan-t5-base",
default_prompt_template: Optional[Union[str,
PromptTemplate]] = None,
output_variable: Optional[str] = None,
max_length: Optional[int] = 100,
api_key: Optional[str] = None,
use_auth_token: Optional[Union[str, bool]] = None,
use_gpu: Optional[bool] = None,
devices: Optional[List[Union[str, torch.device]]] = None,
stop_words: Optional[List[str]] = None,
top_k: int = 1,
model_kwargs: Optional[Dict] = None)
Creates a PromptNode instance.
Arguments:
model_name_or_path: The name of the model to use or an instance of the PromptModel.
default_prompt_template: The default prompt template to use for the model.
output_variable: The name of the output variable in which you want to store the inference results.
max_length: The maximum length of the generated text output.
api_key: The API key to use for the model.
use_auth_token: The authentication token to use for the model.
use_gpu: Whether to use GPU or not.
devices: The devices to use for the model.
top_k: The number of independently generated texts to return per prompt.
stop_words: Stops text generation if any of the stop words is generated.
model_kwargs: Additional keyword arguments passed when loading the model specified by model_name_or_path.
PromptNode.__call__
def __call__(*args, **kwargs) -> List[str]
This method is invoked when the component is called directly, for example:
pn = PromptNode(...)
sa = pn.set_default_prompt_template("sentiment-analysis")
sa(documents=[Document("I am in love and I feel great!")])
PromptNode.prompt
def prompt(prompt_template: Optional[Union[str, PromptTemplate]], *args,
**kwargs) -> List[str]
Prompts the model and represents the central API for the PromptNode. It takes a prompt template,
a list of non-keyword and keyword arguments, and returns a list of strings - the responses from the underlying model.
If you specify the optional prompt_template parameter, it takes precedence over the default PromptTemplate for this PromptNode.
Arguments:
prompt_template: The name or object of the optional PromptTemplate to use.
Returns:
A list of strings as model responses.
PromptNode.add_prompt_template
def add_prompt_template(prompt_template: PromptTemplate) -> None
Adds a prompt template to the list of supported prompt templates.
Arguments:
prompt_template: The PromptTemplate object to be added.
Returns:
None
PromptNode.remove_prompt_template
def remove_prompt_template(prompt_template: str) -> PromptTemplate
Removes a prompt template from the list of supported prompt templates.
Arguments:
prompt_template: The name of the prompt template to be removed.
Returns:
PromptTemplate object that was removed.
PromptNode.set_default_prompt_template
def set_default_prompt_template(
prompt_template: Union[str, PromptTemplate]) -> "PromptNode"
Sets the default prompt template for the node.
Arguments:
prompt_template: The prompt template to be set as default.
Returns:
The current PromptNode object.
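Because set_default_prompt_template returns the node itself, selecting a template can be chained directly with a call, as in the __call__ example above. A minimal sketch of this fluent pattern (illustrative class only, not the real PromptNode):

```python
class PromptNodeSketch:
    def __init__(self):
        self._templates = {}
        self.default_template = None

    def add_prompt_template(self, name, prompt_text):
        self._templates[name] = prompt_text

    def set_default_prompt_template(self, name):
        if name not in self._templates:
            raise ValueError(f"Unsupported template: {name}")
        self.default_template = name
        return self  # returning self enables chaining

pn = PromptNodeSketch()
pn.add_prompt_template("sentiment-analysis", "Context: $documents; Answer:")
sa = pn.set_default_prompt_template("sentiment-analysis")
```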
PromptNode.get_prompt_templates
def get_prompt_templates() -> List[PromptTemplate]
Returns the list of supported prompt templates.
Returns:
List of supported prompt templates.
PromptNode.get_prompt_template_names
def get_prompt_template_names() -> List[str]
Returns the list of supported prompt template names.
Returns:
List of supported prompt template names.
PromptNode.is_supported_template
def is_supported_template(prompt_template: Union[str, PromptTemplate]) -> bool
Checks if a prompt template is supported.
Arguments:
prompt_template: The prompt template to be checked.
Returns:
True if the prompt template is supported, False otherwise.
PromptNode.get_prompt_template
def get_prompt_template(prompt_template_name: str) -> PromptTemplate
Returns a prompt template by name.
Arguments:
prompt_template_name: The name of the prompt template to be returned.
Returns:
The prompt template object.
PromptNode.prompt_template_params
def prompt_template_params(prompt_template: str) -> List[str]
Returns the list of parameters for a prompt template.
Arguments:
prompt_template: The name of the prompt template.
Returns:
The list of parameters for the prompt template.
PromptNode.run
def run(query: Optional[str] = None,
file_paths: Optional[List[str]] = None,
labels: Optional[MultiLabel] = None,
documents: Optional[List[Document]] = None,
meta: Optional[dict] = None,
invocation_context: Optional[Dict[str,
Any]] = None) -> Tuple[Dict, str]
Runs the PromptNode on these input parameters. Returns the output of the prompt model.
The parameters query, file_paths, labels, documents, and meta are added to the invocation context before invoking the prompt model. PromptNode uses these variables only if they're present as parameters in the PromptTemplate.
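The merging rule described above can be sketched as follows. This is a simplified, hypothetical helper, not the actual run implementation:

```python
import re

def build_invocation_context(prompt_text, query=None, documents=None,
                             labels=None, meta=None, invocation_context=None):
    # Runtime inputs are added to the invocation context first...
    context = dict(invocation_context or {})
    runtime = {"query": query, "documents": documents,
               "labels": labels, "meta": meta}
    for name, value in runtime.items():
        if value is not None and name not in context:
            context[name] = value
    # ...but only variables that appear as $parameters in the
    # prompt template are actually used.
    params = set(re.findall(r"\$([A-Za-z_]\w*)", prompt_text))
    return {name: value for name, value in context.items() if name in params}
```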
Arguments:
query: PromptNode usually ignores the query, unless it's used as a parameter in the prompt template.
file_paths: PromptNode usually ignores the file paths, unless they're used as a parameter in the prompt template.
labels: PromptNode usually ignores the labels, unless they're used as a parameter in the prompt template.
documents: The documents to be used for the prompt.
meta: PromptNode usually ignores meta information, unless it's used as a parameter in the PromptTemplate.
invocation_context: The invocation context to be used for the prompt.
PromptNode.run_batch
def run_batch(queries: Optional[List[str]] = None,
documents: Optional[Union[List[Document],
List[List[Document]]]] = None,
invocation_contexts: Optional[List[Dict[str, Any]]] = None)
Runs PromptNode in batch mode.
- If you provide a list containing a single query (and/or invocation context):
  - and a single list of Documents, the query is applied to each Document individually.
  - and a list of lists of Documents, the query is applied to each list of Documents, and the results are aggregated per Document list.
- If you provide a list of multiple queries (and/or multiple invocation contexts):
  - and a single list of Documents, each query (and/or invocation context) is applied to each Document individually.
  - and a list of lists of Documents, each query (and/or invocation context) is applied to its corresponding list of Documents, and the results are aggregated per query-Document pair.
- If you provide no Documents, each query (and/or invocation context) is applied directly to the PromptTemplate.
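The pairing rules above can be sketched as a dispatch function. This is an illustration only: it returns (query, documents) pairs rather than invoking the model:

```python
def expand_batch(queries, documents=None):
    # Pair queries with Documents according to the run_batch rules above.
    if documents is None:
        return [(q, None) for q in queries]
    nested = len(documents) > 0 and isinstance(documents[0], list)
    if len(queries) == 1:
        if nested:
            # One query per list of Documents, aggregated per list.
            return [(queries[0], doc_list) for doc_list in documents]
        # One query applied to each Document individually.
        return [(queries[0], [doc]) for doc in documents]
    if nested:
        # Each query pairs with its corresponding list of Documents.
        return list(zip(queries, documents))
    # Multiple queries over a flat list: each query on each Document.
    return [(q, [doc]) for q in queries for doc in documents]
```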
Arguments:
queries: List of queries.
documents: A single list of Documents or a list of lists of Documents in which to search for the answers.
invocation_contexts: List of invocation contexts.