Synthetic Respondents in UX Research: What Works and What Doesn't

A blunt, practical take on synthetic respondents: where AI genuinely helps in research, where it still falls short, and how to use it without fooling yourself.

Here is the short version: AI is already useful in research, but mostly as a way to move faster, not as a clean replacement for talking to real people.

That sounds obvious, but it gets lost very quickly once people start talking about synthetic respondents. There is a big difference between “I asked ChatGPT to pretend it was my customer” and an actual system designed to simulate respondents in a disciplined way. A lot of the noise in this space comes from blurring those two things together.

If you work in UX research or product research, AI can already save you time in meaningful places. What it still does not do well enough is remove the need for interviews, validation, or judgment. And honestly, that is where some of the hype still falls apart.

What synthetic respondents are

A synthetic respondent is not just one model with one clever prompt. In practice, a serious setup is usually some kind of multi-step system. It has to create different respondent profiles, keep them from collapsing into the same voice, and stop the model from inventing things that only sound believable.

Usually, that means it needs to:

  • generate several distinct personas;
  • keep them from drifting into fantasy;
  • validate answers for realism;
  • check that findings do not diverge from real-world observations;
  • train on actual interviews, not just generic language patterns.
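The first two requirements on that list can be made concrete in code. Here is a minimal, hypothetical sketch of how a pipeline might represent persona profiles and guard against them collapsing into the same voice; the `Persona` shape, the trait sets, and the overlap threshold are all illustrative assumptions, not any real product's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    name: str
    role: str
    traits: frozenset  # e.g. goals, frustrations, context markers

def trait_overlap(a: Persona, b: Persona) -> float:
    """Jaccard similarity of two personas' trait sets (1.0 = identical)."""
    union = a.traits | b.traits
    return len(a.traits & b.traits) / len(union) if union else 1.0

def distinct_enough(personas, max_overlap=0.5) -> bool:
    """Reject a panel in which any two personas share too many traits,
    i.e. have collapsed into the same voice."""
    for i, a in enumerate(personas):
        for b in personas[i + 1:]:
            if trait_overlap(a, b) > max_overlap:
                return False
    return True

panel = [
    Persona("Ana", "ops lead", frozenset({"time-poor", "spreadsheet-heavy", "risk-averse"})),
    Persona("Ben", "founder", frozenset({"price-sensitive", "growth-focused", "risk-tolerant"})),
]
print(distinct_enough(panel))  # True: no pair of personas overlaps above the threshold
```

A trait-set check like this is crude, but the point stands: a serious setup needs some explicit mechanism for keeping respondents distinct, rather than hoping the model stays varied on its own.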

That is why the output from a proper setup looks very different from the output of a casual “pretend you are my customer” prompt.

This is also where a lot of teams get tripped up. They get one surprisingly coherent answer, and from there it is very tempting to act like the method is already mature. That is usually the moment when generated language quietly starts getting treated like evidence.

Where AI is already useful

The most sensible way to use AI in research is to give it the parts of the process where speed matters and the cost of being slightly wrong is still manageable.

In practice, it is already pretty useful for:

  • framing research questions;
  • drafting interview guides and scripts;
  • initial transcript analysis;
  • clustering responses;
  • identifying recurring patterns;
  • rough summarization;
  • recruiting templates and communication;
  • research quality control.

That is already a meaningful gain. In a real team, shaving hours off prep and analysis is not trivial. It changes how much work you can actually get through in a week. But it is still very different from saying AI can replace research.

Where it still does not replace humans

This is where I think people still overreach.

Synthetic respondents are not reliable enough to replace organic interviews, at least not if the decision actually matters. If you are making a real product bet, changing pricing, repositioning a feature, or trying to understand why people are not converting, you still need contact with reality.

You should not:

  • make strategic decisions from AI answers alone;
  • treat synthetic data as full hypothesis validation;
  • ignore real interviews and actual sales conversations;
  • assume the model understands your market better than real users do.

If a team stops talking to customers and starts asking AI instead, that is not some advanced version of research. It is still guesswork. It just arrives in cleaner sentences.

Why people overestimate AI research

Part of it is simple FOMO. It is very easy to feel like everyone else has already figured this out and you are behind if you are still doing real interviews by hand.

But most of the polished examples people point to are not “one prompt and done.” They are usually enterprise products, internal tools, or heavier pipelines with a lot more going on under the hood than the demo suggests.

There is also a more subtle trap here: large models are extremely good at sounding plausible. In research, that is dangerous. A polished answer is not the same thing as a verified answer, and once you forget that, the whole process gets shaky fast.

How to validate AI outputs

The rule I would stick to is simple: AI-generated interviews are not hypothesis validation. At best, they are a way to generate hypotheses faster.

You still need to validate through:

  • follow-up real interviews;
  • quantitative research;
  • marketing tests;
  • sales attempts;
  • observation of actual product behavior.

The useful version of this workflow is not “trust the model.” It is “compare the model against reality and figure out where it is consistently helpful, where it drifts, and where it just starts making elegant nonsense.”

A workable workflow

If I were building a practical workflow around synthetic respondents, it would look something like this:

  1. Define the research task.
  2. Gather context about the product, audience, and market.
  3. Prepare the interview guide or hypotheses.
  4. Run them through AI for initial analysis and expansion.
  5. Validate the output against real interviews or real data.
  6. Refine the conclusions with human judgment.
  7. Record where AI helped and where it was wrong.
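Steps 5 and 7 are the ones teams most often skip, so it is worth making them mechanical. A hypothetical sketch of the comparison step, where the findings and the agreement check are placeholders (in practice "confirmed" is a human judgment, not set membership):

```python
def compare_findings(synthetic, real):
    """Split AI-generated findings into those confirmed by real-world data
    and those still unvalidated, so the record of where AI helped and where
    it was wrong accumulates over time."""
    confirmed = [f for f in synthetic if f in real]
    unconfirmed = [f for f in synthetic if f not in real]
    return confirmed, unconfirmed

# Hypothetical findings from an AI pass (step 4) vs. real interviews (step 5).
synthetic = [
    "pricing page confuses trial users",
    "onboarding email goes unread",
    "users want dark mode",
]
real = [
    "pricing page confuses trial users",
    "onboarding email goes unread",
]

confirmed, unconfirmed = compare_findings(synthetic, real)
print("AI helped:", confirmed)
print("AI was wrong or unvalidated:", unconfirmed)
```

The value is in the log itself: after a few cycles you know which kinds of questions the model answers reliably and which ones it consistently gets wrong.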

That is the version that actually works. AI becomes an amplifier for the research process instead of a theatrical replacement for it. That is a much healthier role for it right now.

How to write better prompts

A weak prompt gives you exactly what you would expect: something generic, smooth, and difficult to trust.

If you want useful output, the prompt needs more structure and more context. In most cases, one-line prompts produce one-line thinking.

Good practices:

  • separate the task from the context;
  • ask the model to gather missing information first;
  • provide real interview excerpts;
  • use source search when it makes sense;
  • ask for caveats, risks, and alternatives, not just an answer;
  • use newer models for harder tasks.

In other words, if you want more than polished filler, you have to do more than throw one sentence at the model and hope it will do the difficult thinking for you. Usually it will not.
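One way to enforce that structure is to assemble the prompt from named sections instead of one sentence. A minimal sketch along the lines of the practices above; the section headings and wording are my assumptions, not a recommended template:

```python
def build_prompt(task, context, excerpts=(), require_caveats=True):
    """Assemble a structured research prompt: task separated from context,
    real interview excerpts included when available, and caveats requested."""
    sections = [f"## Task\n{task}", f"## Context\n{context}"]
    if excerpts:
        quoted = "\n".join(f'- "{e}"' for e in excerpts)
        sections.append(f"## Real interview excerpts\n{quoted}")
    else:
        # Ask the model to surface gaps before answering.
        sections.append("## Missing information\nList what you would need to know before answering.")
    if require_caveats:
        sections.append("## Output\nGive the answer, then caveats, risks, and at least one alternative reading.")
    return "\n\n".join(sections)

prompt = build_prompt(
    task="Summarize why trial users churn before day 7.",
    context="B2B SaaS, self-serve trial, roughly 200 signups per month.",
    excerpts=["I forgot I even signed up.", "The setup asked for data I didn't have."],
)
print(prompt)
```

Even a template this simple forces you to notice when you have no real excerpts to include, which is usually the moment to go collect some before asking the model anything.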

What practitioners should do

If I had to reduce this to a few simple rules, they would be:

  • do not replace research with AI simulation;
  • use AI to speed up routine work;
  • validate findings with real users;
  • train the process on your own interviews;
  • treat synthetic respondents as a tool, not as truth.

That is the healthiest way to think about synthetic respondents right now. They are not magic, and they are not useless either. They are one more layer in the research stack. Their value depends a lot on how honestly you deal with their limits, and whether you are using them to sharpen your thinking or to avoid doing the uncomfortable parts of research in the first place.

Author

About Vadim Glazkov

Vadim Glazkov is the founder of Glasgow Research and a product research expert working with founders and B2B SaaS teams on customer interviews, JTBD, market validation, and decision-ready research.

Editorial process

This article was reviewed and edited before publication for clarity, structure, and accuracy.