
Is It Worth the Hype? Synthetic Respondents vs. Human Insight

Louise Principe
Feb 27, 2024

If you’re here, you’ve probably heard a lot of buzz surrounding synthetic samples and their potential to revolutionize data collection. The big question is, will these AI respondents replace human panelists? In this blog post, we shed some light on what synthetic data is, how good it currently is, and whether it is viable for your market research.

What are Synthetic Respondents?

Synthetic respondents are a pool of artificial research participants that produce human-like qualitative or quantitative feedback. To create them, an AI model is trained on real-world data, learning its correlations, patterns, and statistical characteristics. From this, the trained model can generate responses that mirror the characteristics of your target audience.
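To make this concrete, here is a minimal sketch of how a single synthetic response might be generated, assuming the OpenAI Python client and GPT-4 access; the persona fields and survey question are illustrative placeholders rather than any vendor's prescribed setup.

```python
# Minimal sketch: a persona built from real-world demographic attributes is
# used to condition an LLM, which then answers a survey question "in character".
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# environment; persona fields and the question below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = {
    "age": 34,
    "gender": "female",
    "country": "UK",
    "category_usage": "buys ground coffee weekly",
}

question = (
    "On a scale of 1 (very unlikely) to 5 (very likely), how likely are you "
    "to try a new oat-milk latte product? Answer with a single number and "
    "one sentence of reasoning."
)

system_prompt = (
    "You are a survey respondent. Answer strictly in character:\n"
    + "\n".join(f"- {k}: {v}" for k, v in persona.items())
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
    temperature=1.0,  # higher temperature adds variety across synthetic respondents
)

print(response.choices[0].message.content)
```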

Human Insight vs. Synthetic Data

Kantar conducted two key experiments to test the performance of GPT-4-generated feedback against human responses. The first experiment involved Likert-scale questions, while the second required GPT-4 to generate verbatim responses to natural language questions.

Both experiments used survey data from around 5,000 respondents to ensure the synthetic samples generated by GPT-4 closely matched the demographic characteristics of the human survey respondents.
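Kantar has not shared its analysis code, but the comparison logic is simple to picture. The sketch below shows one common way to line up Likert-scale scores from the human and synthetic samples, overall and within demographic sub-groups, using pandas; the file name and column names (source, question, segment, score) are hypothetical.

```python
# Illustrative only: one way to compare Likert-scale answers from human and
# synthetic samples. A consistently higher synthetic mean points to positive
# bias; large sub-group gaps point to weak segment-level sensitivity.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file holding both samples

# Overall mean score per question, human vs. synthetic.
overall = df.pivot_table(index="question", columns="source",
                         values="score", aggfunc="mean")
overall["gap"] = overall["synthetic"] - overall["human"]

# The same comparison within demographic sub-groups, where Kantar found
# synthetic samples drifted furthest from real preferences.
by_segment = (
    df.pivot_table(index=["question", "segment"], columns="source",
                   values="score", aggfunc="mean")
      .assign(gap=lambda t: t["synthetic"] - t["human"])
      .sort_values("gap", ascending=False)
)

print(overall.round(2))
print(by_segment.head(10).round(2))
```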

What did we learn?

Positive Bias

GPT-4 showed an overall positive bias in its responses compared to human survey participants, especially on questions involving emotional aspects. While artificially generated answers to practical questions, such as product pricing, were similar to human responses, the synthetic data differed significantly from human feedback on more nuanced questions.

Sub-Group Analysis Challenges

Synthetic samples struggled to capture sub-group trends effectively. GPT-4's responses diverged from the actual preferences of specific demographic groups, indicating a lack of the sensitivity needed for more detailed segment-level analysis.

Lack of Variety in Qualitative Responses

In qualitative assessments, the answers given by synthetic respondents lacked variety and nuance compared to their human counterparts. The model tended to veer towards more stereotypical answers, showing a need for improvement in capturing the intricacies of human thinking.

Key Takeaway

The experiments with GPT-4 revealed challenges in generating synthetic datasets for market research, including a positive bias in emotionally nuanced questions and difficulties accurately reflecting the preferences of specific demographic groups. Furthermore, the qualitative responses from synthetic samples lacked the nuanced variety observed in human answers.

Is a Synthetic Sample Worth It?

Based on Kantar’s experiment, today’s synthetic data samples are not enough to replace human samples in market research. However, this doesn’t mean that we should totally scrap this technology from our research toolbox.

Rather than replacing human panelists outright, generative AI can serve as a proxy, offering advantages such as shorter interview times and supplementing your existing findings or customer data. You can also leverage its capabilities to conduct qualitative research at a quantitative scale.

Considerations for the Future

While the hype surrounding synthetic respondents is high, the current capabilities of this AI research tool fall short of the quality standards required for market research.

Despite these current limitations, blended models that combine human and synthetic samples might be a more viable solution for insight-gathering as machine learning models continue to evolve. Kantar suggests a strategy where detailed, proprietary training samples specific to the topic of interest are used to fine-tune your AI models, improving their performance in generating relevant answers to your queries.
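As a rough illustration of what that fine-tuning strategy could look like in practice (an assumption, not Kantar's published method), the sketch below converts proprietary human survey verbatims into chat-format training examples and submits them as a fine-tuning job via the OpenAI Python client; the model choice, file names, and example content are placeholders.

```python
# Sketch: turn real human survey verbatims into chat-format training examples,
# then launch a fine-tuning job so the model learns topic-specific response
# styles. All names and the sample verbatim below are illustrative.
import json
from openai import OpenAI

client = OpenAI()

# Each record pairs a persona + question with a *real* human answer drawn
# from your proprietary study.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a 34-year-old UK coffee buyer answering a survey."},
            {"role": "user", "content": "What would make you switch coffee brands?"},
            {"role": "assistant", "content": "Honestly, price first, but only if the taste is just as smooth."},
        ]
    },
    # ... more examples drawn from the human sample
]

with open("training_samples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

uploaded = client.files.create(file=open("training_samples.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print(job.id)
```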

But for now, reliable decision-making remains rooted in high-quality, original data drawn from real consumers. The human voice is still irreplaceable for gathering meaningful and nuanced responses, making synthetic sampling a useful tool to augment your study but not a be-all and end-all for today's research needs.

Key Takeaway

Blending human responses with data from synthetic samples is a potential solution for market research, especially as AI continues to advance. Fine-tuning AI models with your own training data can improve the performance of your AI model, but the human touch remains crucial for gathering actionable insights in today's research landscape.

Harness the Power of AI with a Human Touch Using Quillit ai™

Quillit is an AI report-writing tool developed by Civicom for qualitative marketing researchers. Cut the time to produce your report by 80%. Quillit enables you to accelerate your client reports by providing first-draft summaries and answers to specific questions, which you can enrich with your own research insights and perspectives. Quillit is GDPR, SOC2, and HIPAA compliant. Your content is partitioned to protect data privacy. Contact us to learn more about this leading-edge AI solution.
