📊 Evaluation Metrics Guide#
Evaluation metrics are essential tools used to assess the performance and quality of the assistant’s responses. These metrics are categorized under different evaluation themes including generation quality, retrieval quality, and ethical safety. This guide outlines each metric in detail, along with definitions and formulae where applicable.
🔹 1. Generation Quality Metrics#
1.1 Answer Correctness#
Definition: Measures how accurate the generated response is when compared to the ground truth.Semantic Similarity
Measures how closely the meaning of the generated response aligns with the meaning of the ground truth, even if different words or phrasing are used.
Factual Similarity
Evaluates whether the facts or claims in the response are accurate and consistent with the ground truth information.
Scoring:
Higher scores indicate a closer alignment with the ground truth, reflecting both semantic and factual correctness.
1.2 Answer Similarity#
Definition: Evaluates only the semantic similarity between the generated answer and the ground truth.Note: Unlike answer correctness, this does not account for factual correctness.
1.3 Answer Relevance#
Definition: Answer Relevance measures how well the assistant’s response directly addresses the user’s input or question. It ensures that the response is pertinent, avoids irrelevant details, and fulfills the user's informational need.Range:
0
to 1
— Higher values indicate stronger alignment between the response and the original query.A value close to 1
means the response is highly relevant to the input.
A lower value indicates that the response may include off-topic or incomplete info
Convert each sentence or response segment into a vector using an embedding model.
Compute the cosine similarity between the embeddings of the generated response and the user input.
Average the similarity scores across all segments.
Answer Relevance=N1​i=1∑N​cosine_similarity(Egi​​,Eo​)
Where:#
N=Numberofsegmentsintheresponse
Egi​​=Embeddingofthei-thsegmentofthegeneratedresponse
Eo​=Embeddingoftheoriginalquery
cosine_similarity(Egi​​,Eo​)=Semanticsimilaritybetweeni-thsegmentandthequery
1.4 BLEU Score#
Definition: A widely used metric based on n-gram precision and brevity penalty, used to compare generated text with reference responses.0
(no match at all) to 1
(perfect match with the reference).BLEU=BP⋅exp(n=1∑N​wn​⋅logpn​)
Where:#
BP=BrevityPenalty–penalizesshortoutputs
pn​=Precisionforn-grams(e.g.,unigram,bigram,etc.)
wn​=Weightforeachn-gramlevel(usuallyuniform,e.g.,0.25for1- to -4-gram)
exp(â‹…)=Ensuresgeometricmeanratherthanarithmeticmean
1.5 ROUGE Score#
Definition: Measures overlap between n-grams of the generated text and reference text. Includes precision, recall, and F1-score.ROUGE-N: Overlap of n-grams
ROUGE-L: Longest Common Subsequence
Range: 0
to 1
A score of 0
means no overlap between the generated text and the reference (poor quality), while a score of 1
means perfect overlap (ideal match)F1​=Precision+Recall2⋅(Precision⋅Recall)​
1.6 Faithfulness#
Definition:
The Faithfulness metric measures how factually consistent a response is with the retrieved context. It ensures that the assistant does not hallucinate or fabricate information that is not grounded in the provided sources.Range:
0 to 1 — Higher scores indicate better consistency with the retrieved context.1.
Identify all the claims made in the response.
2.
For each claim, verify whether it is supported or inferable from the retrieved context.
3.
Compute the score using the formula below.
Faithfulness Score=Total number of claims in responseNumber of supported claims in response​
A score of 1
means all claims are backed by the retrieved context.
A lower score indicates that some claims are unsubstantiated or hallucinated.
🔹 2. Retrieval Quality Metrics#
2.1 Context Recall#
Definition: Measures how much of the benchmark response (reference answer) can be generated using the retrieved data points.Context Recall=Number of claims in the referenceNumber of claims in the reference supported by the retrieved context​
2.2 Context Entities Recall#
Definition: Evaluates how many common entities (keywords or concepts) are shared between the retrieved context and the reference response.Context Entities Recall=Number of entities in referenceNumber of entities in reference also found in retrieved data​
2.3 Context Precision#
Definition: Measures how much of the retrieved data is actually useful for generating the correct response.Context Precision=Number of total retrieved statementsNumber supporting retrieved statements​
2.4 Noise Sensitivity#
Definition: Measures how frequently the assistant generates incorrect responses due to irrelevant or misleading context.Range: 0 (better) to 1 (worse)Noise Sensitivity (Relevant)=∣Total number of claims in the response∣∣Total number of incorrect claims in response∣​
🔹 3. Ethics and Safety Metrics#
These are LLM-based critic metrics, evaluating the ethical quality of responses by asking specific safety-related questions and applying a majority vote mechanism.Workflow:#
2.
Make 3 independent LLM calls.
3.
Apply majority vote to determine the binary outcome (e.g., Harmful or Not Harmful).
Evaluated Categories:#
Harmful: Promotes or causes harm.
Malicious: Exploits or misleads for harmful purposes.
Bias: Reinforces stereotypes or unfair treatment.
Toxic: Uses abusive or aggressive language.
Hateful: Encourages discrimination or hate speech.
Sexual: Contains inappropriate or explicit sexual content.
Violent: Incites or glorifies violence.
Insensitive: Disrespectful to identities or situations.
Self-harm: Promotes suicidal or self-injurious behavior.
Manipulative: Deceptively influences user behavior.
Summary Table#
Metric | Category | Measures | Score Range / Type |
---|
Answer Correctness | Generation | Semantic + Factual Accuracy | 0–1 |
Answer Similarity | Generation | Semantic Similarity | 0–1 |
Answer Relevance | Generation | Relevance to Query | 0–1 |
BLEU Score | Generation | n-gram Match + Brevity | 0–1 |
ROUGE Score | Generation | Word Sequence Overlap | 0–1 |
Faithfulness | Generation | Factual consistency with retrieved context | 0–1 |
Context Recall | Retrieval | Data-Answer Overlap | 0–1 |
Context Entities Recall | Retrieval | Entity Overlap | 0–1 |
Context Precision | Retrieval | Relevant Context Snippets | 0–1 |
Noise Sensitivity | Retrieval | Errors Due to Noise | 0–1 (lower is better) |
Ethics and Safety | Safety | Binary Verdicts via LLM | Yes / No |