
Abstract class for Generators.

Module base

BaseGenerator

class BaseGenerator(BaseComponent)

Abstract class for Generators.

BaseGenerator.predict

@abstractmethod
def predict(query: str, documents: List[Document], top_k: Optional[int],
            max_tokens: Optional[int]) -> Dict

Abstract method to generate answers.

Arguments:

  • query: Query string.
  • documents: Related documents (for example, coming from a retriever) the answer should be based on.
  • top_k: Number of returned answers.
  • max_tokens: The maximum number of tokens the generated answer can have.

Returns:

Generated answers plus additional information in a dict.
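As a sketch of what a concrete subclass might look like (the `EchoGenerator` class and the simplified `Document` stand-in below are illustrative only, not part of the library):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List, Optional

# Simplified stand-in for Haystack's Document class, for illustration only.
@dataclass
class Document:
    content: str
    id: str = ""

class BaseGenerator(ABC):
    @abstractmethod
    def predict(self, query: str, documents: List[Document],
                top_k: Optional[int], max_tokens: Optional[int]) -> Dict:
        ...

# Hypothetical subclass: "generates" an answer by echoing each document's
# content instead of calling a real language model.
class EchoGenerator(BaseGenerator):
    def predict(self, query: str, documents: List[Document],
                top_k: Optional[int] = None,
                max_tokens: Optional[int] = None) -> Dict:
        answers = [{"query": query, "answer": d.content} for d in documents]
        if top_k is not None:
            answers = answers[:top_k]
        return {"query": query, "answers": answers}

generator = EchoGenerator()
result = generator.predict("who discovered penicillin?",
                           [Document(content="Alexander Fleming ...")],
                           top_k=1)
```

In real code you would subclass the library's `BaseGenerator` and use its `Document` type; the shape of the returned dict here mirrors the description above but is an assumption for the sketch.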

BaseGenerator.run

def run(query: str,
        documents: List[Document],
        top_k: Optional[int] = None,
        labels: Optional[MultiLabel] = None,
        add_isolated_node_eval: bool = False,
        max_tokens: Optional[int] = None)

Arguments:

  • query: Query string.
  • documents: List of Documents the answer should be based on.
  • top_k: The maximum number of answers to return.
  • labels: Labels to be used for evaluation.
  • add_isolated_node_eval: If True, the generator is additionally evaluated in isolation from the rest of the pipeline, using the labels as input instead of the output of the preceding node.
  • max_tokens: The maximum number of tokens the generated answer can have.

BaseGenerator.run_batch

def run_batch(queries: List[str],
              documents: Union[List[Document], List[List[Document]]],
              top_k: Optional[int] = None,
              labels: Optional[List[MultiLabel]] = None,
              batch_size: Optional[int] = None,
              add_isolated_node_eval: bool = False,
              max_tokens: Optional[int] = None)

Arguments:

  • queries: List of query strings.
  • documents: A single list of Documents or a list of lists of Documents the answers should be based on.
  • top_k: The maximum number of answers to return.
  • labels: Labels to be used for evaluation.
  • add_isolated_node_eval: If True, the generator is additionally evaluated in isolation from the rest of the pipeline, using the labels as input instead of the output of the preceding node.
  • max_tokens: The maximum number of tokens the generated answer can have.

BaseGenerator.predict_batch

def predict_batch(queries: List[str],
                  documents: Union[List[Document], List[List[Document]]],
                  top_k: Optional[int] = None,
                  batch_size: Optional[int] = None,
                  max_tokens: Optional[int] = None)

Generate the answer to the input queries. The generation will be conditioned on the supplied documents.

These documents can, for example, be retrieved via a Retriever.

  • If you provide a list containing a single query...

    • ... and a single list of Documents, the query will be applied to each Document individually.
    • ... and a list of lists of Documents, the query will be applied to each list of Documents and the Answers will be aggregated per Document list.
  • If you provide a list of multiple queries...

    • ... and a single list of Documents, each query will be applied to each Document individually.
    • ... and a list of lists of Documents, each query will be applied to its corresponding list of Documents and the Answers will be aggregated per query-Document pair.
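The four query/document combinations above can be sketched with plain pairing logic (this is an illustration of the dispatch rules, not the library's actual implementation; the function name is hypothetical):

```python
from typing import Any, List, Tuple, Union

def pair_queries_with_docs(
    queries: List[str],
    documents: Union[List[Any], List[List[Any]]],
) -> List[Tuple[str, Any]]:
    """Return (query, documents) work items per the rules described above."""
    nested = bool(documents) and isinstance(documents[0], list)
    if not nested:
        # A single flat list of Documents: every query is applied to
        # each Document individually.
        return [(q, doc) for q in queries for doc in documents]
    if len(queries) == 1:
        # One query, many Document lists: the query is applied to each
        # list, and answers are aggregated per Document list.
        return [(queries[0], doc_list) for doc_list in documents]
    # Many queries, many Document lists: pair each query with its
    # corresponding list (lengths must match).
    assert len(queries) == len(documents)
    return list(zip(queries, documents))

# Single query, list of lists -> one work item per Document list:
items = pair_queries_with_docs(["q1"], [["d1", "d2"], ["d3"]])
```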

Arguments:

  • queries: List of queries.
  • documents: Related documents (for example, coming from a retriever) the answer should be based on. Can be a single list of Documents or a list of lists of Documents.
  • top_k: Number of returned answers per query.
  • batch_size: Not applicable.
  • max_tokens: The maximum number of tokens the generated answer can have.

Returns:

Generated answers plus additional information in a dict like this:

{'queries': 'who got the first nobel prize in physics',
 'answers':
     [{'query': 'who got the first nobel prize in physics',
       'answer': ' albert einstein',
       'meta': {'doc_ids': [...],
                'doc_scores': [80.42758, ...],
                'doc_probabilities': [40.71379089355469, ...],
                'content': ['Albert Einstein was a ...'],
                'titles': ['"Albert Einstein"', ...]}}]}
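Reading answers out of a result shaped like the example above (the dict literal below is re-typed from that example with placeholder values standing in for real model output):

```python
# A result dict shaped like the predict_batch example above; the values
# are placeholders, not actual model output.
result = {
    "queries": "who got the first nobel prize in physics",
    "answers": [
        {
            "query": "who got the first nobel prize in physics",
            "answer": " albert einstein",
            "meta": {
                "doc_ids": ["doc-1"],
                "doc_scores": [80.42758],
                "doc_probabilities": [40.71379089355469],
                "content": ["Albert Einstein was a ..."],
                "titles": ['"Albert Einstein"'],
            },
        }
    ],
}

# Pull out each generated answer together with the title of the
# best-scoring supporting document.
for ans in result["answers"]:
    scores = ans["meta"]["doc_scores"]
    best = max(range(len(scores)), key=scores.__getitem__)
    summary = (ans["answer"].strip(), ans["meta"]["titles"][best])
```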