Open-Ended Score
Discover why the Open-Ended Score is an effective method for detecting poor data quality.
What is the Open-Ended Score?
ReDem enables precise analysis of open-ended responses by classifying them into ten distinct categories, each with a corresponding score. This process is powered by a GPT-based AI model, delivering comprehensive, reliable results you can trust.
How is the Open-Ended Score calculated?
Each response is first classified into one of our quality categories. Each category carries a score from 0 to 100, reflecting the response’s quality. These per-question scores are then aggregated into an overall Open-Ended Score (OES) for each respondent.
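As an illustration, the aggregation step might look like the following sketch. The exact aggregation ReDem uses is not specified here; a simple mean over per-question scores is assumed.

```python
# Illustrative sketch only: the exact aggregation ReDem uses is not
# documented here; a simple mean over per-question scores is assumed.

def open_ended_score(question_scores: dict[str, int]) -> float:
    """Aggregate per-question quality scores (0-100) into one OES."""
    if not question_scores:
        raise ValueError("respondent has no open-ended responses")
    return sum(question_scores.values()) / len(question_scores)

# A respondent who gave one valid answer (85) and one generic answer (50):
print(open_ended_score({"q1": 85, "q2": 50}))  # 67.5
```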
The quality categories are:
Wrong Topic
Identifies responses that deviate from the question or topic by evaluating their context against relevant keywords and the question itself. The context check is always enabled.
Keywords are only necessary when the question lacks sufficient context. When adding them, ensure they broadly reflect the intended context to avoid false positives caused by overly narrow interpretations. Both the question and keywords must be in a supported language.
Responses failing to align with the expected context are classified as “Wrong Topic” and assigned a score of 30.
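To make the role of keywords concrete, here is a toy illustration of a context check. ReDem’s real check is GPT-based; this keyword-overlap heuristic only mimics the idea of comparing a response against broadly chosen context keywords, and the example keywords are invented for illustration.

```python
# Toy illustration only: ReDem's actual context check is GPT-based.
# This heuristic flags a response when it shares no words with the
# (hypothetical) context keywords supplied for the question.

def is_wrong_topic(response: str, keywords: set[str]) -> bool:
    tokens = {t.strip(".,!?").lower() for t in response.split()}
    return keywords.isdisjoint(tokens)

# Hypothetical keywords for a question about a coffee product:
keywords = {"coffee", "taste", "flavor", "aroma"}
print(is_wrong_topic("I love the rich aroma in the morning.", keywords))  # False
print(is_wrong_topic("My car broke down yesterday.", keywords))           # True
```

Note how narrow keywords would misfire: a valid answer mentioning only “smell” would be flagged, which is why the documentation recommends keywords that broadly reflect the intended context.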
AI Generated Answer
The model analyzes open-ended responses to assess whether they were artificially generated, evaluating patterns across a wide range of variables including grammar, structure, phrasing, syntax, word choice, sentence length, complexity, and predictability. Responses identified as AI-generated are assigned a score of 0.
Nonsense
When nonsense detection is enabled, gibberish, random numbers, and meaningless statements are identified. Such responses are classified as “Nonsense” and assigned a score of 10.
Wrong Language
Responses provided in an unexpected language are flagged as “Wrong Language” and assigned a score of 20. You can specify the expected language(s); if none are defined, the language check remains inactive. Open-Ended Score supports over 100 languages, including English, German, French, Spanish, Chinese, Japanese, Swedish, and more. The check is performed per respondent, and it is recommended to use the questionnaire language as the reference.
Duplicate Respondent
The optional duplicate check detects potentially fraudulent responses, identifying both exact duplicates and partial matches. Responses repeated across multiple questions are classified as “Duplicate Respondent” and assigned a score of 10.
Duplicate Answer
We check whether a respondent provides the same or similar answers across multiple questions. If duplication is limited to one question, a score of 50 is assigned; if duplicates are found in more than one question, the score is 0.
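Exact duplicates are trivial to spot; partial matches need a similarity measure. The sketch below uses a string-similarity ratio; the method and the 0.8 threshold are illustrative assumptions, not ReDem’s actual implementation.

```python
from difflib import SequenceMatcher

# Illustrative only: ReDem's matching method and threshold are not
# documented here; difflib's ratio with a 0.8 cut-off is assumed.

def is_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag exact duplicates and near-matches between two answers."""
    a, b = a.strip().lower(), b.strip().lower()
    if a == b:                      # exact duplicate
        return True
    return SequenceMatcher(None, a, b).ratio() >= threshold  # partial match

print(is_duplicate("Great product, fast delivery", "great product, fast delivery!"))  # True
print(is_duplicate("I like coffee", "The weather is nice"))                           # False
```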
Bad Language
Responses containing swear words or offensive language are classified as “Bad Language” and are assigned a score of 10.
Generic Answers
Responses containing vague or generic phrases like “good,” “ok,” “anything,” or “yes” are classified as Generic Answers and assigned a score of 50.
No Information
Responses that lack meaningful content—such as “no idea,” “nothing,” “no comment,” or “I don’t know”—are labeled as No Information and given a score of 60.
Valid Answer
Valid responses are those that do not fall into any other quality category. Each response is also evaluated for its level of detail. “Valid Answers” are assigned a score between 70 and 100, based on their detail quality.
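The category scores above can be summarized in a single lookup table. This is a sketch of the scores as described in this article; “Valid Answer” is a range (70–100, depending on detail quality) and is represented here by its lower bound, and “Duplicate Answer” drops to 0 when duplication spans several questions.

```python
# Score per quality category, as described above.
CATEGORY_SCORES = {
    "AI Generated Answer": 0,
    "Nonsense": 10,
    "Bad Language": 10,
    "Duplicate Respondent": 10,
    "Wrong Language": 20,
    "Wrong Topic": 30,
    "Generic Answer": 50,
    "Duplicate Answer": 50,   # 0 when duplicated across several questions
    "No Information": 60,
    "Valid Answer": 70,       # 70-100, scaled by level of detail
}

print(CATEGORY_SCORES["Wrong Topic"])  # 30
```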
Use of GPT for Open-Ended Score
ReDem uses the most advanced GPT large language models (LLMs) from OpenAI to analyze and categorize open-ended responses. This enables precise and reliable scoring by leveraging cutting-edge language understanding.
To ensure data privacy and compliance, GPT is integrated into the ReDem OES with strict safeguards:
Individual Responses & Anonymity
Each response is sent to OpenAI individually, using a fully anonymized ID. Only the single response is transmitted per API request—complete survey datasets are never shared.
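The per-response, anonymized transmission described above can be sketched as follows. The payload shape and ID scheme are assumptions for illustration only, and no actual API call is made.

```python
import uuid

# Sketch of the privacy pattern described above: one response per
# request, under a fully anonymized ID. The payload shape and ID scheme
# are illustrative assumptions, not ReDem's actual implementation.

def build_request(response_text: str) -> dict:
    """Package a single open-ended response for classification."""
    return {
        "id": str(uuid.uuid4()),   # random ID, no link to the respondent
        "text": response_text,     # only this one response is transmitted
    }

# One payload per response -- never the complete survey dataset:
requests = [build_request(r) for r in ["Great taste", "No idea"]]
```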
Exclusive Use by ReDem
Only ReDem communicates with OpenAI. The platform does not share any details about the origin or source of the responses.
Data Storage & Retention
OpenAI retains data for up to 30 days, after which it is permanently deleted. The data is never used to train AI models.
GDPR-Compliant Data Transfers
ReDem and OpenAI operate under a Data Processing Agreement based on EU Standard Contractual Clauses (SCCs). This ensures all data transfers—including those involving personal data—fully comply with the GDPR.
This setup gives you both the advanced capabilities of GPT and the data protection required for responsible AI use in survey research.