How to Do Customer Research Without Mistaking Politeness for Signal
A practical guide to doing customer research: ask better questions, focus on past behavior, and stop mistaking polite feedback for real demand.
If you have ever walked out of a customer call feeling encouraged and then watched the prospect disappear, you already know how bad customer research usually plays out.
The call felt positive.
The signal was weak.
That pattern shows up all the time in founder-led and product-led teams. A prospect says the product is interesting. They praise the idea. They sound engaged. Everyone leaves the conversation feeling slightly relieved.
Then nothing happens.
No second meeting. No internal follow-up. No buying motion. No next step at all.
That is not a minor interpretation error. It is one of the clearest ways teams confuse politeness with demand.
So if you want the practical version of how to do customer research, start here: the job is not to collect positive conversations. The job is to reduce uncertainty around a decision.
Why most customer research feels useful but is not
Bad customer research is dangerous precisely because it does not look obviously bad.
The notes look clean. The calls feel thoughtful. The founder feels calmer. The team says it talked to customers.
Then reality breaks the story later.
Usually the process failed much earlier than anyone admits:
- the team asked vague questions;
- the conversation stayed in hypotheticals;
- the product got praised instead of pressure-tested;
- no one paid attention to what happened after the call;
- the real purpose of the research was reassurance, not learning.
There is also a newer version of the same mistake: teams skip the hard part and ask AI what customers want. That produces fast, polished language and very little grounded evidence. It removes discomfort, not uncertainty. I wrote about that trap more directly in /blog/synthetic-respondents-ai-research/.
Start with the decision, not with “talking to customers”
Most teams begin too vaguely.
They say:
- “We need to talk to customers.”
- “We should do discovery.”
- “Let’s get some feedback.”
That is not a research plan.
Customer research should start with a specific decision. For example:
- are we trying to understand whether this problem is painful enough to matter;
- are we trying to learn how people solve it today;
- are we trying to pressure-test a segment;
- are we trying to understand why deals stall;
- are we trying to diagnose why adoption stays weak?
If you cannot name the decision, the research will drift.
That is one reason customer research methods get chosen so badly. Teams start with the ritual instead of the question. They think the important part is “doing research” at all. It is not. The important part is whether the work can generate decision-grade signal.
Ask about past behavior, current workarounds, and real constraints
Once the decision is clear, the next job is to stop asking the kind of questions that invite politeness.
Weak questions sound like this:
- “Would you use this?”
- “Does this seem valuable?”
- “What do you think about this idea?”
- “If we built this, would it help?”
Those questions are easy to answer generously.
Better customer research asks about things that already happened:
- How are you dealing with this today?
- What did you try before?
- When did this become painful enough to matter?
- What happened the last time you tried to solve it?
- What is annoying enough that you still remember it clearly?
That shift matters because past behavior is harder to fake than future enthusiasm.
People are endlessly generous in theory. They are much more useful when they describe what they already do, already avoid, already pay for, or already postpone.
The goal is not to make the interview more “professional.” The goal is to get out of the performance layer of the conversation and into the evidence layer.
Treat what happens after the conversation as part of the evidence
A lot of teams stop interpreting the signal once the call ends.
That is a mistake.
What happens after the conversation is often one of the strongest pieces of evidence you have.
If someone says:
- “This is really interesting”
- “I can see teams using this”
- “This looks strong”
and then never takes the next step, that gap tells you something.
It does not automatically mean there is zero demand in every context. But it often means the signal is weaker than the founder wants it to be.
That is why I do not trust praise on its own. Praise is cheap. Movement is expensive.
If the conversation felt strong but nothing followed, you should investigate:
- was the problem actually urgent;
- was the buyer relevant;
- was the pain costly enough;
- was the person only being polite;
- did the team hear validation where there was only curiosity?
This is also why customer research quality depends so much on interview quality. If the person running the conversation cannot separate warmth from evidence, the team starts building on top of social comfort. That is one reason weak interviewing is so expensive, and why I wrote /blog/why-you-shouldnt-delegate-customer-interviews/ in the first place.
A practical workflow for doing customer research
If you want a simple workflow, use this:
1. Name the decision. Do not start with “we need feedback.” Start with the real business question.
2. Find the right people. Talk to people who genuinely sit close to the problem, not just people who are available or polite.
3. Ask about past behavior and current workarounds. Move the conversation away from ideas and toward evidence.
4. Capture what makes the problem matter. Look for urgency, cost, delay, friction, and what people already do to cope.
5. Watch what happens after the call. Follow-up behavior is part of the research, not a separate sales issue.
6. Treat praise carefully. If someone likes the idea but does nothing, record that as weak signal, not traction.
7. Look for patterns, not flattering quotes. Your job is not to build a slide deck full of encouraging lines. Your job is to improve the next decision.
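If it helps to make the “praise is not traction” step concrete, here is a minimal sketch of an interview evidence log in Python. Everything in it is an assumption for illustration: the field names, the weights, and the thresholds are not a validated rubric, just one way to force praise to count for nothing while past behavior and follow-up carry the weight.

```python
# Illustrative sketch of an evidence log for customer interviews.
# The fields, weights, and thresholds below are assumptions for
# illustration only, not a validated scoring rubric.

from dataclasses import dataclass


@dataclass
class Interview:
    praised_idea: bool             # "this is interesting", "looks strong"
    described_past_behavior: bool  # concrete workarounds, past attempts
    named_real_cost: bool          # time, money, or delay they can quantify
    took_next_step: bool           # follow-up meeting, intro, pilot, etc.


def signal_strength(i: Interview) -> str:
    # Praise alone contributes zero; behavior and movement do the work.
    score = 0
    score += 2 if i.described_past_behavior else 0
    score += 2 if i.named_real_cost else 0
    score += 3 if i.took_next_step else 0
    if score >= 5:
        return "strong"
    if score >= 2:
        return "weak"
    return "noise"


# A warm call full of praise but with no behavior and no follow-up:
polite_call = Interview(praised_idea=True, described_past_behavior=False,
                        named_real_cost=False, took_next_step=False)
print(signal_strength(polite_call))  # -> noise
```

The only design point worth copying is structural: the field that records praise never enters the score, so a flattering call cannot masquerade as traction in your notes.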
FAQ
Does positive feedback ever matter?
Yes, but only in context. Praise matters much less than evidence of real behavior, real pain, or real next-step movement.
Can AI help with customer research?
Yes, for preparation, synthesis, and question framing. No, if it is being used to replace customer conversations or avoid real evidence.
What is the simplest way to improve customer research quickly?
Stop asking hypothetical future questions and start asking about what people already do, already pay for, already avoid, and already postpone.
Final point
Good customer research is usually less comforting than teams expect.
That is part of the point.
If every conversation makes the founder feel better but nothing changes in the market afterward, the process is probably collecting reassurance instead of evidence.
If you want help auditing the way your team runs interviews or customer discovery before an expensive decision, that is exactly the kind of work Glasgow Research is built for.
Author
About Vadim Glazkov
Vadim Glazkov is the founder of Glasgow Research and a product research expert working with founders and B2B SaaS teams on customer interviews, JTBD, market validation, and decision-ready research.