Evaluation Metrics

📊 Evaluation Metrics Guide#

Evaluation metrics assess the performance and quality of the assistant's responses. They are grouped into three themes: generation quality, retrieval quality, and ethics and safety. This guide outlines each metric in detail, with definitions and formulae where applicable.

🔹 1. Generation Quality Metrics#

1.1 Answer Correctness#

Definition: Measures how accurate the generated response is when compared to the ground truth.
Key Components:
- Semantic Similarity: how closely the meaning of the generated response aligns with the meaning of the ground truth, even if different words or phrasing are used.
- Factual Similarity: whether the facts or claims in the response are accurate and consistent with the ground truth.
Scoring: Higher scores indicate closer alignment with the ground truth, reflecting both semantic and factual correctness.
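For intuition only, the sketch below combines the two components as a weighted average. The factual_f1() and semantic_similarity() helpers and the 0.75/0.25 weighting are assumptions made for illustration, not the documented Ejento AI scoring.

```python
def factual_f1(response: str, ground_truth: str) -> float:
    """Hypothetical placeholder: F1 over claims in the response judged against
    the ground truth (typically computed with an LLM-based claim comparison)."""
    raise NotImplementedError

def semantic_similarity(response: str, ground_truth: str) -> float:
    """Hypothetical placeholder: embedding-based similarity in [0, 1]."""
    raise NotImplementedError

def answer_correctness(response: str, ground_truth: str,
                       factual_weight: float = 0.75) -> float:
    """Weighted combination of factual and semantic similarity.
    The weighting is an assumption, not the documented formula."""
    return (factual_weight * factual_f1(response, ground_truth)
            + (1 - factual_weight) * semantic_similarity(response, ground_truth))
```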

1.2 Answer Similarity#

Definition: Evaluates only the semantic similarity between the generated answer and the ground truth.
Note: Unlike answer correctness, this does not account for factual correctness.

1.3 Answer Relevance#

Definition: Answer Relevance measures how well the assistant's response directly addresses the user's input or question. It ensures that the response is pertinent, avoids irrelevant details, and fulfills the user's informational need.
Range: 0 to 1. Higher values indicate stronger alignment between the response and the original query.
- A value close to 1 means the response is highly relevant to the input.
- A lower value indicates that the response may include off-topic or incomplete information.
How It's Measured:
1. Convert each sentence or response segment into a vector using an embedding model.
2. Compute the cosine similarity between the embedding of each segment and the embedding of the user input.
3. Average the similarity scores across all segments.
Formula:

$$\text{Answer Relevance} = \frac{1}{N} \sum_{i=1}^{N} \text{cosine\_similarity}(E_{g_i}, E_o)$$

Where:
- N = number of segments in the response
- E_{g_i} = embedding of the i-th segment of the generated response
- E_o = embedding of the original query
- cosine_similarity(E_{g_i}, E_o) = semantic similarity between the i-th segment and the query
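A minimal Python sketch of this averaging follows. The embed_text() helper is a hypothetical placeholder for whatever embedding model is in use; it is not part of the Ejento AI API.

```python
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Hypothetical placeholder: return an embedding vector for `text`
    from whichever embedding model is available."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_relevance(response_segments: list[str], query: str) -> float:
    """Average cosine similarity between each response segment and the query."""
    query_embedding = embed_text(query)
    scores = [
        cosine_similarity(embed_text(segment), query_embedding)
        for segment in response_segments
    ]
    return sum(scores) / len(scores)
```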

1.4 BLEU Score#

Definition: A widely used metric based on n-gram precision and brevity penalty, used to compare generated text with reference responses.
Range:
0 (no match at all) to 1 (perfect match with the reference).
Formula:

$$\text{BLEU} = BP \cdot \exp\left(\sum_{n=1}^{N} w_n \cdot \log p_n\right)$$

Where:
- BP = brevity penalty; penalizes short outputs
- p_n = precision for n-grams (e.g., unigram, bigram, etc.)
- w_n = weight for each n-gram level (usually uniform, e.g., 0.25 for 1- to 4-grams)
- exp(·) = takes the geometric mean of the n-gram precisions rather than the arithmetic mean
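For a concrete computation, the sketch below uses NLTK's sentence-level BLEU with uniform 1- to 4-gram weights. It illustrates the metric only and is not necessarily the implementation used internally.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

score = sentence_bleu(
    [reference],                       # list of reference token lists
    candidate,                         # candidate token list
    weights=(0.25, 0.25, 0.25, 0.25),  # uniform weights for 1- to 4-grams
    smoothing_function=SmoothingFunction().method1,  # avoid zero n-gram counts
)
print(f"BLEU: {score:.3f}")
```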

1.5 ROUGE Score#

Definition: Measures overlap between n-grams of the generated text and reference text. Includes precision, recall, and F1-score.
Variants:
- ROUGE-N: overlap of n-grams
- ROUGE-L: longest common subsequence
Range: 0 to 1. A score of 0 means no overlap between the generated text and the reference (poor quality), while a score of 1 means perfect overlap (ideal match).
Formula (F1-score):

$$F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$
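As an illustration of the precision/recall/F1 computation, here is a minimal ROUGE-1 (unigram overlap) sketch; production evaluations typically rely on a dedicated ROUGE library.

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1 based on unigram overlap between candidate and reference."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Overlap counts each shared token up to its frequency in both texts.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f1("the cat sat on the mat", "the cat is on the mat"))
```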

1.6 Faithfulness#

Definition:
The Faithfulness metric measures how factually consistent a response is with the retrieved context. It ensures that the assistant does not hallucinate or fabricate information that is not grounded in the provided sources.
Range:
0 to 1 — Higher scores indicate better consistency with the retrieved context.
Steps to Calculate:
1. Identify all the claims made in the response.
2. For each claim, verify whether it is supported by or inferable from the retrieved context.
3. Compute the score using the formula below.
Formula:

$$\text{Faithfulness Score} = \frac{\text{Number of supported claims in the response}}{\text{Total number of claims in the response}}$$
Interpretation:
A score of 1 means all claims are backed by the retrieved context.
A lower score indicates that some claims are unsubstantiated or hallucinated.
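A minimal sketch of the scoring step is below. The extract_claims() and is_supported() helpers are hypothetical placeholders that would normally be implemented with LLM calls against the response and the retrieved context.

```python
def extract_claims(response: str) -> list[str]:
    """Hypothetical placeholder: split the response into individual factual
    claims (typically done with an LLM prompt)."""
    raise NotImplementedError

def is_supported(claim: str, retrieved_context: str) -> bool:
    """Hypothetical placeholder: check whether a claim is supported by or
    inferable from the retrieved context (typically an LLM judgment)."""
    raise NotImplementedError

def faithfulness(response: str, retrieved_context: str) -> float:
    claims = extract_claims(response)
    if not claims:
        return 1.0  # nothing in the response can contradict the context
    supported = sum(1 for claim in claims if is_supported(claim, retrieved_context))
    return supported / len(claims)
```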

🔹 2. Retrieval Quality Metrics#

2.1 Context Recall#

Definition: Measures how much of the benchmark response (reference answer) can be generated using the retrieved data points.
Formula:

$$\text{Context Recall} = \frac{\text{Number of claims in the reference supported by the retrieved context}}{\text{Total number of claims in the reference}}$$

2.2 Context Entities Recall#

Definition: Evaluates how many common entities (keywords or concepts) are shared between the retrieved context and the reference response.
Formula:

$$\text{Context Entities Recall} = \frac{\text{Number of entities in the reference also found in the retrieved data}}{\text{Total number of entities in the reference}}$$
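Since this is a simple set overlap, a minimal sketch can take pre-extracted entity sets as input. The example entities are illustrative only; entity extraction itself (e.g., an NER model or an LLM prompt) is assumed to happen upstream.

```python
def context_entities_recall(reference_entities: set[str],
                            retrieved_entities: set[str]) -> float:
    """Fraction of reference entities that also appear in the retrieved data."""
    if not reference_entities:
        return 0.0
    return len(reference_entities & retrieved_entities) / len(reference_entities)

# Example with illustrative, hand-picked entities.
reference_entities = {"Eiffel Tower", "Paris", "1889"}
retrieved_entities = {"Eiffel Tower", "Paris", "Gustave Eiffel"}
print(context_entities_recall(reference_entities, retrieved_entities))  # ~0.667
```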

2.3 Context Precision#

Definition: Measures how much of the retrieved data is actually useful for generating the correct response.
Formula:

$$\text{Context Precision} = \frac{\text{Number of retrieved statements that support the response}}{\text{Total number of retrieved statements}}$$

2.4 Noise Sensitivity#

Definition: Measures how frequently the assistant generates incorrect responses due to irrelevant or misleading context.
Range: 0 (better) to 1 (worse)
Formula:

$$\text{Noise Sensitivity (Relevant)} = \frac{\text{Number of incorrect claims in the response}}{\text{Total number of claims in the response}}$$
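Context Recall, Context Precision, and Noise Sensitivity are all simple claim- or statement-level ratios, so a single minimal sketch covers them. The counts are assumed to come from an upstream (typically LLM-based) claim-verification step.

```python
def ratio(numerator: int, denominator: int) -> float:
    """Guarded division used by the claim/statement-level retrieval metrics."""
    return numerator / denominator if denominator else 0.0

def context_recall(supported_reference_claims: int, total_reference_claims: int) -> float:
    # Share of reference claims that the retrieved context can support.
    return ratio(supported_reference_claims, total_reference_claims)

def context_precision(supporting_statements: int, total_retrieved_statements: int) -> float:
    # Share of retrieved statements that are actually useful for the answer.
    return ratio(supporting_statements, total_retrieved_statements)

def noise_sensitivity(incorrect_claims: int, total_response_claims: int) -> float:
    # Share of response claims that are incorrect; lower is better.
    return ratio(incorrect_claims, total_response_claims)

# Example with illustrative counts.
print(context_recall(8, 10), context_precision(6, 12), noise_sensitivity(1, 10))
```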

🔹 3. Ethics and Safety Metrics#

These are LLM-based critic metrics, evaluating the ethical quality of responses by asking specific safety-related questions and applying a majority vote mechanism.

Workflow:#

1. Define a critic prompt.
2. Make 3 independent LLM calls.
3. Apply a majority vote to determine the binary outcome (e.g., Harmful or Not Harmful).
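A minimal sketch of this workflow is shown below. The ask_critic() helper is a hypothetical placeholder for the LLM call carrying the critic prompt; it is not an Ejento AI API.

```python
from collections import Counter

def ask_critic(critic_prompt: str, response: str) -> str:
    """Hypothetical placeholder: send the critic prompt and the response to an
    LLM and return a binary verdict such as "Harmful" or "Not Harmful"."""
    raise NotImplementedError

def majority_verdict(critic_prompt: str, response: str, n_calls: int = 3) -> str:
    """Make n independent LLM calls and return the majority verdict."""
    verdicts = [ask_critic(critic_prompt, response) for _ in range(n_calls)]
    return Counter(verdicts).most_common(1)[0][0]
```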

Evaluated Categories:#

- Harmful: Promotes or causes harm.
- Malicious: Exploits or misleads for harmful purposes.
- Bias: Reinforces stereotypes or unfair treatment.
- Toxic: Uses abusive or aggressive language.
- Hateful: Encourages discrimination or hate speech.
- Sexual: Contains inappropriate or explicit sexual content.
- Violent: Incites or glorifies violence.
- Insensitive: Disrespectful to identities or situations.
- Self-harm: Promotes suicidal or self-injurious behavior.
- Manipulative: Deceptively influences user behavior.

Summary Table#

| Metric | Category | Measures | Score Range / Type |
|---|---|---|---|
| Answer Correctness | Generation | Semantic + Factual Accuracy | 0–1 |
| Answer Similarity | Generation | Semantic Similarity | 0–1 |
| Answer Relevance | Generation | Relevance to Query | 0–1 |
| BLEU Score | Generation | n-gram Match + Brevity | 0–1 |
| ROUGE Score | Generation | Word Sequence Overlap | 0–1 |
| Faithfulness | Generation | Factual consistency with retrieved context | 0–1 |
| Context Recall | Retrieval | Data-Answer Overlap | 0–1 |
| Context Entities Recall | Retrieval | Entity Overlap | 0–1 |
| Context Precision | Retrieval | Relevant Context Snippets | 0–1 |
| Noise Sensitivity | Retrieval | Errors Due to Noise | 0–1 (lower is better) |
| Ethics and Safety | Safety | Binary Verdicts via LLM | Yes / No |