
Reads a set of documents and generates an answer to a question, word by word

Module openai

OpenAIAnswerGenerator

class OpenAIAnswerGenerator(BaseGenerator)

This component is now deprecated and will be removed in future versions. Use PromptNode instead of OpenAIAnswerGenerator, as explained in https://haystack.deepset.ai/tutorials/22_pipeline_with_promptnode.

Uses the GPT-3 models from the OpenAI API to generate Answers based on the Documents it receives. The Documents can come from a Retriever or you can supply them manually.

To use this Node, you need an API key from an active OpenAI account. You can sign up for an account on the OpenAI API website.
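
As a minimal sketch of setting it up (assuming Haystack v1.x, where the node is importable from haystack.nodes; the API key is a placeholder):

from haystack.nodes import OpenAIAnswerGenerator

# Placeholder key; use the key from your own OpenAI account.
generator = OpenAIAnswerGenerator(
    api_key="YOUR_OPENAI_API_KEY",
    model="text-davinci-003",
    max_tokens=50,
    top_k=3,
)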

OpenAIAnswerGenerator.__init__

def __init__(api_key: str,
             azure_base_url: Optional[str] = None,
             azure_deployment_name: Optional[str] = None,
             model: str = "text-davinci-003",
             max_tokens: int = 50,
             api_version: str = "2022-12-01",
             top_k: int = 5,
             temperature: float = 0.2,
             presence_penalty: float = 0.1,
             frequency_penalty: float = 0.1,
             examples_context: Optional[str] = None,
             examples: Optional[List[List[str]]] = None,
             stop_words: Optional[List[str]] = None,
             progress_bar: bool = True,
             prompt_template: Optional[PromptTemplate] = None,
             context_join_str: str = " ",
             moderate_content: bool = False,
             api_base: str = "https://api.openai.com/v1",
             openai_organization: Optional[str] = None)

Arguments:

  • api_key: Your API key from OpenAI. It is required for this node to work.
  • azure_base_url: The base URL for the Azure OpenAI API. If not supplied, Azure OpenAI API will not be used. This parameter is an OpenAI Azure endpoint, usually in the form https://<your-endpoint>.openai.azure.com.
  • azure_deployment_name: The name of the Azure OpenAI API deployment. If not supplied, Azure OpenAI API will not be used.
  • model: ID of the engine to use for generating the answer. You can select one of "text-ada-001", "text-babbage-001", "text-curie-001", or "text-davinci-003" (from worst to best and from cheapest to most expensive). For more information about the models, refer to the OpenAI Documentation.
  • max_tokens: The maximum number of tokens reserved for the generated Answer. A higher number allows for longer answers without exceeding the max prompt length of the OpenAI model. A lower number allows longer prompts with more documents passed as context, but the generated answer might be cut after max_tokens.
  • api_version: The version of the Azure OpenAI API to use. The default is the 2022-12-01 version.
  • top_k: Number of generated Answers.
  • temperature: The sampling temperature to use. Higher values mean the model takes more risks; a value of 0 (argmax sampling) works better for scenarios with a well-defined Answer.
  • presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they have already appeared in the text. This increases the model's likelihood to talk about new topics. For more information about frequency and presence penalties, see the parameter details in the OpenAI documentation.
  • frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. For more information about frequency and presence penalties, see the OpenAI documentation.
  • examples_context: A text snippet containing the contextual information used to generate the Answers for the examples you provide. If not supplied, the default from OpenAI API docs is used: "In 2017, U.S. life expectancy was 78.6 years."
  • examples: List of (question, answer) pairs that helps steer the model towards the tone and answer format you'd like. We recommend adding 2 to 3 examples. If not supplied, the default from OpenAI API docs is used: [["Q: What is human life expectancy in the United States?", "A: 78 years."]]
  • stop_words: Up to four sequences where the API stops generating further tokens. The returned text does not contain the stop sequence. If you don't provide any stop words, the default value from OpenAI API docs is used: ["\n", "<|endoftext|>"].
  • prompt_template: A PromptTemplate that tells the model how to generate answers given a context and query supplied at runtime. The context is automatically constructed at runtime from a list of provided documents. Use examples_context and a list of examples to provide the model with examples to steer it towards the tone and answer format you would like (see the sketch after this list). If not supplied, the default prompt template is:
    PromptTemplate(
        "Please answer the question according to the above context."
        "\n===\nContext: {examples_context}\n===\n{examples}\n\n"
        "===\nContext: {context}\n===\n{query}",
    )

To learn how variables, such as '{context}', are substituted in the prompt text, see PromptTemplate.

  • context_join_str: The separation string used to join the input documents to create the context used by the PromptTemplate.
  • moderate_content: Whether to filter input and generated answers for potentially sensitive content using the OpenAI Moderation API. If the input or answers are flagged, an empty list is returned in place of the answers.
  • api_base: The base URL for the OpenAI API, defaults to "https://api.openai.com/v1".
  • openai_organization: The OpenAI-Organization ID, defaults to None. For more details, see the OpenAI documentation.
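
As a hedged sketch of customizing the prompt and few-shot examples (the template text and join string below are illustrative, not library defaults; the constructor call mirrors the signature above):

from haystack.nodes import OpenAIAnswerGenerator, PromptTemplate

# Illustrative template; it must expose the same variables as the default one.
qa_template = PromptTemplate(
    "Answer the question using only the given context. If the context "
    "does not contain the answer, reply 'unknown'."
    "\n===\nContext: {examples_context}\n===\n{examples}\n\n"
    "===\nContext: {context}\n===\n{query}"
)

generator = OpenAIAnswerGenerator(
    api_key="YOUR_OPENAI_API_KEY",  # placeholder
    prompt_template=qa_template,
    examples_context="In 2017, U.S. life expectancy was 78.6 years.",
    examples=[["Q: What is human life expectancy in the United States?",
               "A: 78 years."]],
    context_join_str="\n",
)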

OpenAIAnswerGenerator.predict

def predict(query: str,
            documents: List[Document],
            top_k: Optional[int] = None,
            max_tokens: Optional[int] = None,
            timeout: Union[float, Tuple[float, float]] = OPENAI_TIMEOUT)

Use the loaded QA model to generate Answers for a query based on the Documents it receives.

Returns a dictionary containing the query and the Answers. Note that OpenAI doesn't return scores for those Answers.

Example:

{
    'query': 'Who is the father of Arya Stark?',
    'answers': [Answer(answer='Eddard,',
                       score=None),
                ...]
}

Arguments:

  • query: The query you want to answer, as a string.
  • documents: List of Documents in which to search for the Answer.
  • top_k: The maximum number of Answers to return.
  • max_tokens: The maximum number of tokens the generated Answer can have.
  • timeout: How many seconds to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple. Defaults to 10 seconds.

Returns:

Dictionary containing query and Answers.
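
A sketch of calling predict directly, reusing the generator from the setup sketch above (the Document contents are illustrative):

from haystack import Document

docs = [
    Document(content="Eddard Stark, Lord of Winterfell, is the father of Arya Stark."),
    Document(content="Arya Stark is the third child of Eddard and Catelyn Stark."),
]

result = generator.predict(
    query="Who is the father of Arya Stark?",
    documents=docs,
    top_k=1,
)
for answer in result["answers"]:
    print(answer.answer, answer.score)  # score is always None for this node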

OpenAIAnswerGenerator.run

def run(query: str,
        documents: List[Document],
        top_k: Optional[int] = None,
        labels: Optional[MultiLabel] = None,
        add_isolated_node_eval: bool = False,
        max_tokens: Optional[int] = None)

Arguments:

  • query: Query string.
  • documents: List of Documents the answer should be based on.
  • top_k: The maximum number of answers to return.
  • labels: Labels to be used for evaluation.
  • add_isolated_node_eval: If True, the answer generator will be evaluated in isolation.
  • max_tokens: The maximum number of tokens the generated answer can have.
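
run() is normally invoked by a Pipeline rather than called directly. A sketch of wiring the node behind a Retriever, reusing generator and docs from the earlier examples (the in-memory store and BM25 retriever are assumptions for illustration and require a recent Haystack v1.x release):

from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever

document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents(docs)
retriever = BM25Retriever(document_store=document_store)

pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=generator, name="Generator", inputs=["Retriever"])

prediction = pipeline.run(
    query="Who is the father of Arya Stark?",
    params={"Retriever": {"top_k": 3}, "Generator": {"top_k": 1}},
)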

OpenAIAnswerGenerator.run_batch

def run_batch(queries: List[str],
              documents: Union[List[Document], List[List[Document]]],
              top_k: Optional[int] = None,
              labels: Optional[List[MultiLabel]] = None,
              batch_size: Optional[int] = None,
              add_isolated_node_eval: bool = False,
              max_tokens: Optional[int] = None)

Arguments:

  • queries: List of query strings.
  • documents: A list of Documents or a list of lists of Documents the answers should be based on.
  • top_k: The maximum number of answers to return.
  • labels: Labels to be used for evaluation.
  • add_isolated_node_eval: If True, the answer generator will be evaluated in isolation.
  • max_tokens: The maximum number of tokens the generated answer can have.
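
Like run(), run_batch() is usually called through a Pipeline. A sketch reusing the pipeline from the previous example (Pipeline.run_batch is available in recent Haystack v1.x releases):

predictions = pipeline.run_batch(
    queries=["Who is the father of Arya Stark?",
             "Who is the mother of Arya Stark?"],
    params={"Retriever": {"top_k": 3}},
)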

OpenAIAnswerGenerator.predict_batch

def predict_batch(queries: List[str],
                  documents: Union[List[Document], List[List[Document]]],
                  top_k: Optional[int] = None,
                  batch_size: Optional[int] = None,
                  max_tokens: Optional[int] = None)

Generates Answers for the input queries, conditioned on the supplied Documents. These Documents can, for example, come from a Retriever.

  • If you provide a list containing a single query...

    • ... and a single list of Documents, the query will be applied to each Document individually.
    • ... and a list of lists of Documents, the query will be applied to each list of Documents and the Answers will be aggregated per Document list.
  • If you provide a list of multiple queries...

    • ... and a single list of Documents, each query will be applied to each Document individually.
    • ... and a list of lists of Documents, each query will be applied to its corresponding list of Documents and the Answers will be aggregated per query-Document pair.

Arguments:

  • queries: List of queries.
  • documents: Related documents (for example, coming from a retriever) the answer should be based on. Can be a single list of Documents or a list of lists of Documents.
  • top_k: Number of returned answers per query.
  • batch_size: Not applicable.
  • max_tokens: The maximum number of tokens the generated answer can have.

Returns:

Generated answers plus additional information in a dict like this:

{'queries': 'who got the first nobel prize in physics',
 'answers':
     [{'query': 'who got the first nobel prize in physics',
       'answer': ' albert einstein',
       'meta': {'doc_ids': [...],
                'doc_scores': [80.42758, ...],
                'doc_probabilities': [40.71379089355469, ...],
                'content': ['Albert Einstein was a ...', ...],
                'titles': ['"Albert Einstein"', ...]}}]}
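
A sketch of the direct call, reusing generator and docs from the earlier examples (one list of Documents is passed per query):

results = generator.predict_batch(
    queries=["who got the first nobel prize in physics",
             "Who is the father of Arya Stark?"],
    documents=[docs, docs],  # one Document list per query
    top_k=1,
)
for answer in results["answers"]:
    print(answer)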