
What is the Open-Ended Score?
ReDem enables precise evaluation of open-ended responses by classifying them into nine distinct categories, each scored accordingly. Our hybrid approach combines the accuracy of expert manual reviews with the efficiency of a GPT-4-powered AI model, ensuring reliable and comprehensive results you can trust.

How is the Open-Ended Score calculated?
Each response is first classified into one of our quality categories, and each category is assigned a score from 0 to 100 reflecting the response’s quality. These scores are aggregated into an overall Open-Ended Score (OES) for each respondent.

How does ReDem classify responses?
We classify each response into distinct quality categories, providing a clear and comprehensive assessment of each respondent. These categories cover all critical aspects of open-ended response quality, and our criteria are continuously refined to adapt to evolving needs and standards.

Wrong Topic
Identifies responses that deviate from the question or topic by evaluating their context against relevant keywords and the question itself. The context check can be enabled or disabled when importing data.
When adding keywords, ensure they broadly represent the context to minimize false positives from overly narrow interpretations. Both the question and keywords must be in a supported language.
Responses that fail to align with the expected context are classified as “Wrong Topic” and assigned an Open-Ended Score (OES) of 30.
Enable this option only if your questions are meaningful and contain relevant keywords.
AI Generated Answer
Our AI model analyzes open-ended responses to determine whether they were artificially generated by evaluating patterns across a wide range of variables, including grammar, structure, phrasing, syntax, word choice, sentence length, complexity, and predictability. Responses identified as AI-generated content are assigned a score of 0.
Nonsense
Enabling nonsense detection identifies gibberish, random numbers, and meaningless statements. Such responses are classified as “Nonsense” and assigned a score of 10.
Wrong Language
Responses in an unexpected language are categorized as “Wrong Language” and assigned a score of 20. You can define the expected languages; without this, the language check remains inactive. Open-Ended Score supports over 100 languages, including English, German, French, Spanish, Chinese, Japanese, Swedish, and more.
 Questions lacking linguistic content (e.g., brand awareness questions) are not suitable for the language check.
Duplicate Respondent
The optional duplicate check detects potentially fraudulent responses, identifying both exact duplicates and partial matches.
- Duplicate Respondents in a Single Question: This check detects repeated responses to the same question. Single duplicates are classified as “Duplicate Respondent” and assigned a score of 50, while multiple duplicates receive a score of 0.

- Duplicate Respondents Across Multiple Questions: Our duplicate check also detects responses repeated across multiple questions. Such responses are classified as “Duplicate Respondent” and assigned a score of 10.

Duplicate Answer
We verify if a respondent’s answers are repeated or partially repeated across multiple questions. These responses are classified as “Duplicate Answer” and scored 50 for a single duplicate or 0 for multiple duplicates.
 If a response qualifies as both “Duplicate Respondent” and “Duplicate Answer,” the “Duplicate Respondent” category takes precedence.


