
RagasEvaluator

This component evaluates Haystack Pipelines using LLM-based metrics. It supports metrics like context relevance, factual accuracy, response relevance, and more.

Name: RagasEvaluator
Path: https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/ragas
Most common position in a Pipeline: On its own or in an evaluation Pipeline. To be used after a separate Pipeline has generated the inputs for the Evaluator.
Mandatory input variables: "inputs": a keyword arguments dictionary containing the expected inputs. The expected inputs change based on what metric you are evaluating. See below for more details.
Output variables: "results": a nested list of metric results. There can be one or more results, depending on the metric. Each result is a dictionary containing:
- name - The name of the metric.
- score - The score of the metric.

Ragas is an evaluation framework that provides a number of LLM-based evaluation metrics. You can use the RagasEvaluator component to evaluate a Haystack Pipeline, such as a retrieval-augmented generative Pipeline, against one of the metrics provided by Ragas.

Supported Metrics

Ragas supports a number of metrics, which we expose through the RagasMetric enumeration. Below is the list of metrics supported by the RagasEvaluator in Haystack, along with the metric_params expected when initializing the evaluator. Many metrics use OpenAI models and require the OPENAI_API_KEY environment variable to be set. For a complete guide on these metrics, visit the Ragas documentation.

ANSWER_CORRECTNESS
- Metric parameters: "weights": Tuple[float, float]
- Expected inputs: questions: List[str], responses: List[str], ground_truths: List[str]
- Description: Grades the accuracy of the generated answer when compared to the ground truth.

FAITHFULNESS
- Metric parameters: None
- Expected inputs: questions: List[str], contexts: List[List[str]], responses: List[str]
- Description: Grades how factual the generated response was.

ANSWER_SIMILARITY
- Metric parameters: "threshold": float
- Expected inputs: responses: List[str], ground_truths: List[str]
- Description: Grades how similar the generated answer is to the ground truth answer specified.

CONTEXT_PRECISION
- Metric parameters: None
- Expected inputs: questions: List[str], contexts: List[List[str]], ground_truths: List[str]
- Description: Grades if the answer has any additional irrelevant information for the question asked.

CONTEXT_UTILIZATION
- Metric parameters: None
- Expected inputs: questions: List[str], contexts: List[List[str]], responses: List[str]
- Description: Grades to what extent the generated answer uses the provided context.

CONTEXT_RECALL
- Metric parameters: None
- Expected inputs: questions: List[str], contexts: List[List[str]], ground_truths: List[str]
- Description: Grades how complete the generated response was for the question specified.

ASPECT_CRITIQUE
- Metric parameters: "name": str, "definition": str, "strictness": int
- Expected inputs: questions: List[str], contexts: List[List[str]], responses: List[str]
- Description: Grades generated answers based on custom aspects on a binary scale.

CONTEXT_RELEVANCY
- Metric parameters: None
- Expected inputs: questions: List[str], contexts: List[List[str]]
- Description: Grades how relevant the provided context was for the question specified.

ANSWER_RELEVANCY
- Metric parameters: "strictness": int
- Expected inputs: questions: List[str], contexts: List[List[str]], responses: List[str]
- Description: Grades how relevant the generated response is given the question.
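
As noted above, metrics backed by OpenAI models expect the OPENAI_API_KEY environment variable. A minimal way to set it from Python before running an evaluation (the key value is a placeholder):

import os

# Placeholder value; replace with your own OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"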

Parameters Overview

To initialize a RagasEvaluator, you need to provide the following parameters (see the example after this list):

  • metric: A RagasMetric.
  • metric_params: Optionally, if the metric calls for any additional parameters, you should provide them here.

Usage

To use the RagasEvaluator, you first need to install the integration:

pip install ragas-haystack

Then, to use the RagasEvaluator, follow these steps:

  1. Initialize the RagasEvaluator while providing the correct metric_params for the metric you are using.
  2. Run the RagasEvaluator, either on its own or in a Pipeline, by providing the expected input for the metric you are using (a sketch of running it on its own follows below).
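
For instance, running the evaluator on its own (outside a Pipeline) is just a direct call to its run method with the metric's expected inputs. The snippet below is a minimal sketch for CONTEXT_RELEVANCY with placeholder data:

from haystack_integrations.components.evaluators.ragas import RagasEvaluator, RagasMetric

evaluator = RagasEvaluator(metric=RagasMetric.CONTEXT_RELEVANCY)

# Placeholder inputs; in practice these come from the Pipeline you are evaluating
output = evaluator.run(
    questions=["When was the Rhodes Statue built?"],
    contexts=[["The Colossus of Rhodes was constructed around 280 BC."]],
)

# "results" is a nested list of {"name": ..., "score": ...} dictionaries
print(output["results"])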

Examples

Evaluate Context Relevance

To create a context-relevance evaluation Pipeline:

from haystack import Pipeline
from haystack_integrations.components.evaluators.ragas import RagasEvaluator, RagasMetric

pipeline = Pipeline()
evaluator = RagasEvaluator(
    metric=RagasMetric.CONTEXT_RELEVANCY,
)
pipeline.add_component("evaluator", evaluator)

To run the evaluation Pipeline, you should have the expected inputs for the metric ready at hand. This metric expects a list of questions and contexts, which should come from the results of the Pipeline you want to evaluate.

results = pipeline.run({"evaluator": {"questions": ["When was the Rhodes Statue built?", "Where is the Pyramid of Giza?"], 
                                                "contexts": [["Context for question 1"], ["Context for question 2"]]}})

Evaluate Context Relevance and Aspect Critique

To create a Pipeline that evaluates context relevance and aspect critique:

from haystack import Pipeline
from haystack_integrations.components.evaluators.ragas import RagasEvaluator, RagasMetric

pipeline = Pipeline()
evaluator_context = RagasEvaluator(
    metric=RagasMetric.CONTEXT_PRECISION,
)
evaluator_aspect = RagasEvaluator(
    metric=RagasMetric.ASPECT_CRITIQUE,
    metric_params={"name": "custom", "definition": "Is this answer problematic for children?", "strictness": 3},
)
pipeline.add_component("evaluator_context", evaluator_context)
pipeline.add_component("evaluator_aspect", evaluator_aspect)

To run the evaluation Pipeline, you should have the expected inputs for the metrics ready at hand. These metrics expect a list of questions, contexts, responses, and ground_truths. These should come from the results of the Pipeline you want to evaluate.

QUESTIONS = ["Which is the most popular global sport?", "Who created the Python language?"]
CONTEXTS = [["The popularity of sports can be measured in various ways, including TV viewership, social media presence, number of participants, and economic impact. Football is undoubtedly the world's most popular sport with major events like the FIFA World Cup and sports personalities like Ronaldo and Messi, drawing a followership of more than 4 billion people."], 
                 ["Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming language. Its design philosophy emphasizes code readability, and its language constructs aim to help programmers write clear, logical code for both small and large-scale software projects."]]
RESPONSES = ["Football is the most popular sport with around 4 billion followers worldwide", "Python language was created by Guido van Rossum."]
GROUND_TRUTHS = ["Football is the most popular sport", "Python language was created by Guido van Rossum."]
results = pipeline.run({
    "evaluator_context": {"questions": QUESTIONS, "contexts": CONTEXTS, "ground_truths": GROUND_TRUTHS},
    "evaluator_aspect": {"questions": QUESTIONS, "contexts": CONTEXTS, "responses": RESPONSES},
})
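
Each evaluator's scores appear under its own key in the Pipeline output. A brief sketch of pairing them back with the questions (this assumes one inner result list per question):

# Hypothetical post-processing: align each question with its metric results
for question, context_results, aspect_results in zip(
    QUESTIONS,
    results["evaluator_context"]["results"],
    results["evaluator_aspect"]["results"],
):
    print(question, context_results, aspect_results)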

Related Links

Check out the API reference in the GitHub repo or in our docs.