
What is the Open-Ended Score?
ReDem enables precise analysis of open-ended responses by classifying them into eight distinct categories, each with a corresponding score. This process is powered by a GPT-based AI model, delivering reliable and comprehensive results you can trust.

How is the Open-Ended Score calculated?
Each response is first classified into one of our quality categories. Each category is then assigned a score from 0 to 100, reflecting the response’s quality. These question scores are aggregated to calculate an overall Open-Ended Score (OES) for each respondent; a minimal scoring sketch follows the category descriptions below.

The quality categories are:
Wrong Topic
Identifies responses that deviate from the question or topic by evaluating their context against relevant keywords and the question itself. The context check is always enabled.
Keywords are only necessary when the question lacks sufficient context. When adding them, ensure they broadly reflect the intended context to avoid false positives caused by overly narrow interpretations. Both the question and keywords must be in a supported language.
Responses failing to align with the expected context are classified as “Wrong Topic” and assigned a score of 30.
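The classification itself is performed by the GPT-based model, but the sketch below (plain Python with hypothetical names such as is_on_topic) illustrates the idea: optional keywords broaden the expected context, and a response that does not match it receives the Wrong Topic score of 30. It is a naive word-overlap illustration, not the actual check.

```python
# Naive illustration only: ReDem's actual check uses a GPT-based model to
# evaluate the response against the question context. This sketch merely
# shows how optional keywords can broaden the expected context and how a
# mismatch maps to the documented Wrong Topic score of 30.

WRONG_TOPIC_SCORE = 30

def is_on_topic(response: str, question: str, keywords: list[str] | None = None) -> bool:
    """Hypothetical helper: treat a response as on-topic if it shares any
    word with the question or with the (optionally supplied) keywords."""
    context_terms = set(question.lower().split()) | {k.lower() for k in (keywords or [])}
    response_terms = set(response.lower().split())
    return bool(context_terms & response_terms)

question = "Which features of your banking app do you use most often?"
keywords = ["banking", "app", "transfer", "payment"]  # kept broad to avoid false positives

response = "I really enjoy hiking in the mountains on weekends."
score = None if is_on_topic(response, question, keywords) else WRONG_TOPIC_SCORE
print(score)  # 30 -> classified as Wrong Topic
```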
AI Generated Answer
The model analyzes open-ended responses to assess whether they were artificially generated, evaluating patterns across a wide range of variables including grammar, structure, phrasing, syntax, word choice, sentence length, complexity, and predictability. Responses identified as AI-generated are assigned a score of 0.
Nonsense
When nonsense detection is enabled, it identifies gibberish, random numbers, and meaningless statements. Such responses are classified as “Nonsense” and assigned a score of 10.
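The real detection is model-based; the toy heuristic below only illustrates the kind of input this category covers (keyboard mashing, random digits) and how such responses map to the documented score of 10. The function name and thresholds are assumptions for the example.

```python
import re

# Toy heuristic, not the actual model-based detection: flag responses that
# contain no real words (only digits or symbols) or that have an implausibly
# low vowel ratio, and assign the documented Nonsense score of 10.

NONSENSE_SCORE = 10

def looks_like_nonsense(response: str) -> bool:
    tokens = re.findall(r"[a-zA-Z]{2,}", response)
    if not tokens:                                    # only digits, symbols, or single letters
        return True
    vowels = sum(ch in "aeiou" for ch in response.lower())
    return vowels / max(len(response), 1) < 0.15      # e.g. keyboard mashing

for text in ["asdfghjkl", "12345 678", "The checkout process was confusing."]:
    print(text, "->", NONSENSE_SCORE if looks_like_nonsense(text) else "ok")
```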
Wrong Language
Responses provided in an unexpected language are flagged as “Wrong Language” and assigned a score of 20. You can specify the expected language(s); if none are defined, the language check remains inactive. Open-Ended Score supports over 100 languages, including English, German, French, Spanish, Chinese, Japanese, Swedish, and more. The check is performed per respondent, and it is recommended to use the questionnaire language as the reference.
Questions lacking linguistic content (e.g., brand awareness questions) are not suitable for the language check.
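To illustrate the configuration side of this check, the sketch below uses the open-source langdetect package purely as a stand-in: it compares a detected language code against the expected questionnaire language(s) and assigns the documented score of 20 on a mismatch. The actual check is performed by ReDem per respondent; the function and variable names here are assumptions.

```python
# Illustration only: `langdetect` (pip install langdetect) stands in for the
# real per-respondent language check.
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0            # make langdetect's results deterministic

WRONG_LANGUAGE_SCORE = 20
expected_languages = {"en", "de"}   # questionnaire language(s); if empty, the check is inactive

def language_score(response: str) -> int | None:
    if not expected_languages:       # no expected languages defined -> check stays inactive
        return None
    detected = detect(response)      # ISO 639-1 code, e.g. "en", "sv"
    return None if detected in expected_languages else WRONG_LANGUAGE_SCORE

print(language_score("Jag tycker att appen är mycket enkel att använda."))  # 20 (Swedish, unexpected)
print(language_score("The app is very easy to use."))                       # None (expected language)
```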
Duplicate Respondent
The optional duplicate check detects potentially fraudulent responses, identifying both exact duplicates and partial matches.
- Duplicate Responses to the Same Question: The check detects repeated responses to the same question. Short or common responses are not penalized, but substantial duplication may lower the score. Scoring is continuous: minor overlap has a moderate impact, while extensive duplication leads to stronger penalties.

- Duplicate Respondents Across Multiple Questions: When repetition occurs across several questions, the penalty increases. In severe cases, such as near-complete repetition across multiple responses, a score of 0 may be assigned.

- Full Response Set Matches: If a respondent’s full set of answers matches another’s, all responses are marked as duplicates and scored 0.

- Whitelists: Standard or commonly accepted responses on a whitelist are excluded from being flagged.
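As a simplified illustration of the continuous scoring and the whitelist, the sketch below approximates overlap between two respondents’ answers with word-level Jaccard similarity. This is not ReDem’s actual algorithm; the similarity measure, the penalty scale, and the whitelist entries are assumptions.

```python
import re

# Simplified illustration, not ReDem's actual algorithm: approximate the
# continuous duplicate penalty with word-level Jaccard similarity between two
# respondents' answers to the same question, and skip whitelisted standard
# answers so they are never flagged. The whitelist entries are hypothetical.

WHITELIST = {"yes", "no", "don't know", "n/a"}

def words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def duplicate_penalty(answer: str, other_answer: str) -> float:
    """Return a penalty factor: 0.0 for no overlap, 1.0 for a full duplicate."""
    if answer.strip().lower() in WHITELIST:   # whitelisted answers are never penalized
        return 0.0
    a, b = words(answer), words(other_answer)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)            # Jaccard similarity as the overlap measure

# Minor overlap -> moderate penalty; identical answers -> maximum penalty.
print(duplicate_penalty("Good service and fast delivery", "Fast delivery, friendly staff"))   # ~0.29
print(duplicate_penalty("Good service and fast delivery", "Good service and fast delivery"))  # 1.0
print(duplicate_penalty("Yes", "Yes"))                                                        # 0.0 (whitelisted)
```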
Duplicate Answer
We check whether a respondent provides the same or similar answers across multiple questions. If duplication is limited to one question, a score of 50 is assigned; if duplicates are found in more than one question, the score is 0.
If a response qualifies as both “Duplicate Respondent” and “Duplicate Answer,” the “Duplicate Respondent” category takes precedence.
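The sketch below ties the category scores described above to the respondent-level Open-Ended Score. Two points are assumptions rather than documented facts: responses with no quality issue are treated as a “Valid” category scoring 100, and the OES is taken to be a simple mean of the per-question scores; the actual aggregation may differ. The precedence of “Duplicate Respondent” over “Duplicate Answer” follows the rule stated above.

```python
from statistics import mean

# Illustrative sketch only. Assumptions not stated in the documentation:
# unflagged responses score 100 ("Valid"), and the respondent-level OES is a
# simple mean of the per-question scores. The fixed category scores come from
# the text above; "Duplicate Respondent" is scored continuously and is
# therefore left out of the mapping. If a response is both "Duplicate
# Respondent" and "Duplicate Answer", "Duplicate Respondent" takes precedence.

CATEGORY_SCORES = {
    "AI Generated Answer": 0,
    "Nonsense": 10,
    "Wrong Language": 20,
    "Wrong Topic": 30,
    "Duplicate Answer": 50,   # 0 instead when duplicates appear in more than one question
    "Valid": 100,             # assumed score for unflagged responses
}

def open_ended_score(per_question_categories: list[str]) -> float:
    """Aggregate one respondent's question-level categories into an overall
    Open-Ended Score (assumed here to be a simple mean)."""
    return mean(CATEGORY_SCORES[c] for c in per_question_categories)

# One respondent answered three open-ended questions:
print(open_ended_score(["Valid", "Wrong Topic", "Nonsense"]))  # (100 + 30 + 10) / 3 ≈ 46.7
```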