Evaluators

| Evaluator | Description |
| --- | --- |
| AnswerExactMatchEvaluator | Evaluates answers predicted by Haystack pipelines using ground truth labels. It checks character by character whether a predicted answer exactly matches the ground truth answer. |
| ContextRelevanceEvaluator | Uses an LLM to evaluate whether the provided contexts are relevant to the question. Does not require ground truth labels. |
| DeepEvalEvaluator | Uses the DeepEval framework to evaluate generative pipelines. |
| DocumentMAPEvaluator | Evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks to what extent the list of retrieved documents contains only relevant documents, as specified in the ground truth labels, or also non-relevant documents. The metric is mean average precision (MAP). |
| DocumentMRREvaluator | Evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks at what rank ground truth documents appear in the list of retrieved documents. The metric is mean reciprocal rank (MRR). |
| DocumentNDCGEvaluator | Evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks at what rank ground truth documents appear in the list of retrieved documents, scoring higher when relevant documents are ranked near the top. The metric is normalized discounted cumulative gain (NDCG). |
| DocumentRecallEvaluator | Evaluates documents retrieved by Haystack pipelines using ground truth labels. It checks how many of the ground truth documents were retrieved. |
| FaithfulnessEvaluator | Uses an LLM to evaluate whether a generated answer can be inferred from the provided contexts. Does not require ground truth labels. |
| LLMEvaluator | Uses an LLM to evaluate inputs based on a prompt containing user-defined instructions and examples. |
| RagasEvaluator | Uses the Ragas framework to evaluate a retrieval-augmented generative pipeline. |
| SASEvaluator | Evaluates answers predicted by Haystack pipelines using ground truth labels. It checks the semantic similarity of a predicted answer and the ground truth answer using a fine-tuned language model. The metric is semantic answer similarity (SAS). |
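
Usage sketches for a few representative evaluators follow. First, a minimal sketch of AnswerExactMatchEvaluator, which needs no model or API key; the sample answers are invented for illustration:

```python
from haystack.components.evaluators import AnswerExactMatchEvaluator

evaluator = AnswerExactMatchEvaluator()
result = evaluator.run(
    ground_truth_answers=["Berlin", "Paris"],  # expected answers (illustrative)
    predicted_answers=["Berlin", "Lyon"],      # answers produced by a pipeline
)

print(result["individual_scores"])  # per-answer exact matches, e.g. [1, 0]
print(result["score"])              # average over all answers, e.g. 0.5
```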
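
SASEvaluator takes the same input shape but loads a sentence-transformers model locally, so it needs a `warm_up()` call before the first run; the default model choice and the example answers below are assumptions:

```python
from haystack.components.evaluators import SASEvaluator

evaluator = SASEvaluator()  # uses a default sentence-transformers similarity model
evaluator.warm_up()         # downloads and loads the model before the first run
result = evaluator.run(
    ground_truth_answers=["Berlin is the capital of Germany."],
    predicted_answers=["Germany's capital city is Berlin."],
)

print(result["score"])  # semantic similarity in [0, 1]; high for paraphrases
```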
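
The document evaluators (DocumentMAPEvaluator, DocumentMRREvaluator, DocumentNDCGEvaluator, DocumentRecallEvaluator) all take per-query lists of ground truth and retrieved Document objects. A sketch with DocumentMRREvaluator, using made-up document contents:

```python
from haystack import Document
from haystack.components.evaluators import DocumentMRREvaluator

evaluator = DocumentMRREvaluator()
result = evaluator.run(
    # one inner list per query; the contents are illustrative
    ground_truth_documents=[[Document(content="Berlin is the capital of Germany.")]],
    retrieved_documents=[[
        Document(content="Munich hosts the Oktoberfest."),
        Document(content="Berlin is the capital of Germany."),  # first match at rank 2
    ]],
)

print(result["score"])  # reciprocal rank of the first relevant document: 0.5
```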
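
The LLM-based evaluators (ContextRelevanceEvaluator, FaithfulnessEvaluator, LLMEvaluator) call a model at run time; by default, FaithfulnessEvaluator uses an OpenAI model and reads OPENAI_API_KEY from the environment. A sketch with an invented question, context, and answer:

```python
import os
from haystack.components.evaluators import FaithfulnessEvaluator

os.environ["OPENAI_API_KEY"] = "..."  # replace with a real key before running

evaluator = FaithfulnessEvaluator()  # defaults to an OpenAI model
result = evaluator.run(
    questions=["Who wrote 'Faust'?"],
    contexts=[["Johann Wolfgang von Goethe wrote the tragedy 'Faust'."]],
    predicted_answers=["Goethe wrote 'Faust'."],
)

print(result["score"])  # 1.0 when the answer is fully supported by the contexts
```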