Customer Research Methods: Choose the Method That Matches the Decision
A practical guide to customer research methods for B2B SaaS teams: when to use interviews, surveys, desk research, usability testing, expert interviews, and research inside sales.
Most teams do not choose customer research methods.
They choose habits.
They choose the method that feels familiar. The method that sounds respectable in a meeting. The method that creates the least friction. The method that lets them say, “We spoke to customers,” without asking whether that conversation could ever answer the real question.
That is why so much customer research produces weak signal.
The problem is usually not that the team has never heard of interviews, surveys, desk research, or usability testing. The problem is that method selection is being driven by comfort, panic, or internal politics instead of by the decision the team is actually trying to make.
If you work in B2B SaaS, this gets expensive fast. You are often dealing with narrow markets, messy buying groups, long decision cycles, and product bets that are too costly to steer with soft evidence.
So here is the practical version: customer research methods are decision tools, not a menu. The right method depends on the question, the timing, and the business risk. If you start anywhere else, the process gets noisy very quickly.
The first mistake is starting with the method
The usual conversation starts one step too late:
- “Should we run interviews?”
- “Should we send a survey?”
- “Should we do JTBD?”
- “Should we just ask AI and use that as a shortcut?”
None of those are good starting questions.
The first question should be: what decision are we trying to make?
Until that is clear, method choice is mostly theater. Teams start choosing methods because:
- interviews feel like “real research”;
- surveys look scalable;
- AI sounds faster than recruiting people;
- founders want proof more than they want surprise;
- the product is already built and everyone is hoping research will save the launch.
That is how teams end up with the wrong signal from the start.
I see this in several recurring patterns.
Some teams skip research completely and lean on intuition. Now that AI is everywhere, that same instinct often shows up as “let’s prompt ChatGPT and get a view of the market.” The language comes back polished, and people mistake plausibility for evidence. That is still guesswork. It just arrives in better prose. The same problem shows up in synthetic research shortcuts, which I covered in more depth in /blog/synthetic-respondents-ai-research/.
Other teams default to interviews for almost everything. Interviews are useful. They are not universal. Treating them as the answer to every question is not methodological rigor. It is just a more socially acceptable habit.
Then there is the panic version. A founder spends a year or two building a product, gets close to launch, and suddenly asks for research to figure out who actually needs it. The method category may still be directionally right, but the timing is wrong. At that stage, broad discovery often gives less value than tight hypothesis testing around the bets already in play.
That is why good method selection starts with the decision, not the method label.
Choose the method that matches the question
You do not need a giant taxonomy first. You need a clean way to match questions to methods.
This is the framework I find most useful:
| If the real question is… | Better method | Why this fits |
|---|---|---|
| How many? or How often? | Quantitative research, survey, or product data | You are trying to measure scale, not collect stories. |
| Could the answer already exist? | Desk research | There is no reason to recruit people if solid evidence is already available. |
| Can people complete this task? | Usability testing | This is about task behavior and interaction friction, not broad market demand. |
| What problem do people have, how do they describe it, and what do they do now? | Problem interviews | You need motivations, context, language, and workarounds. |
| What do specialists in this market already know? | Expert interviews | Sometimes experts help you frame the terrain before customer work. |
| Is the market tiny and hard to sample cleanly? | Research inside the sales process | In some B2B markets, live selling and discovery produce better signal than artificial survey logic. |
This is broadly compatible with the standard distinction between qualitative and quantitative work and with common method maps such as those published by Nielsen Norman Group. It also fits broader definitions of customer research, such as the Interaction Design Foundation’s explanation that customer research can include interviews, surveys, ethnographic work, and desk research depending on the question being asked.
The important part is not the taxonomy. The important part is the discipline behind it: let the question define the method.
Use interviews when you need motivations, context, and language
Interviews are strong when you need to understand:
- how people describe the problem in their own words;
- what they are doing today instead of using your solution;
- what makes the problem painful enough to matter;
- why prior attempts failed;
- how a role, context, or buying process shapes the decision.
This is where interviews shine. They are useful for problem discovery, positioning work, and early-stage product understanding.
Do not use interviews when the real question is quantitative
If the question is how many, how often, or what percentage, interviews are not the right tool.
If the question is about interface quality and task performance, usability testing is a better fit. Nielsen Norman Group’s usability testing primer is explicit about this: usability testing is about observing a participant trying to complete realistic tasks in a product or service, not about solving broad demand questions.
If the answer may already exist in desk research, analytics, call notes, or market material, do not turn custom research into a ritual just because it sounds more serious.
This is where a lot of teams waste time. They run interviews to answer questions that were never interview questions in the first place.
The failure patterns that ruin method choice
The wrong method creates the wrong evidence. And the wrong evidence does not just waste time. It creates false confidence.
Polite feedback is not demand
This is one of the most common founder mistakes.
Someone shows a demo. The prospect says the product looks interesting. The meeting feels positive. The founder hears encouragement and translates it into demand.
Then nothing happens.
No next meeting. No budget conversation. No buying motion. The prospect disappears.
That is not demand. That is politeness.
Once teams switch to more structured interviews focused on past behavior, current workarounds, and the cost of the problem, the signal usually gets better fast. People stop complimenting the product and start describing the actual reasons they do not move.
This is also why interview quality matters so much. If the person running the conversation cannot separate politeness from evidence, the company ends up making product decisions on top of social courtesy. If your problem is not method selection but interview quality, the deeper issue may be who is being allowed to run decision-grade interviews in the first place. I covered that problem more directly in /blog/why-you-shouldnt-delegate-customer-interviews/.
Research brought in too late changes the method
Timing is part of method choice.
If a founder comes in a week before launch asking, “Can we research who this is for?”, the answer is not simply “yes, let’s do discovery interviews.” That would ignore the actual situation.
At that point, the useful questions are narrower:
- which segment is most likely to have this problem now;
- what are they already doing instead;
- which of the current hypotheses survives contact with reality;
- is the evidence strong enough to justify launch, repositioning, or delay.
The same method can be good in one stage and badly timed in another. That is one reason research should not be treated as a last-minute rescue purchase.
Research used to prove a belief is not research
Another pattern I see too often is research being used as internal persuasion.
The team already believes something. The research is commissioned to support it. Then people start asking for findings that can strengthen the preferred narrative for stakeholders.
At that point, the method barely matters. The research function has already been bent toward confirmation.
This is one of the clearest signs that the team does not want signal. It wants cover.
Good customer research should make it easier to change direction, not easier to defend a weak direction in better-looking slides.
Shortcuts get more attractive when the truth is uncomfortable
Teams rarely avoid research because they hate methods.
They avoid research because real signal can be painful. It can show that:
- the problem is weaker than the founder hoped;
- the target segment is wrong;
- the product is harder to adopt than the team admits;
- the buying process is more complex than the pitch assumes.
That discomfort creates demand for substitutes:
- intuition;
- AI summaries;
- a few unstructured calls;
- a market report treated like an oracle;
- a survey sent before the team even knows what it is measuring.
None of those substitutes removes uncertainty in a reliable way.
How customer research methods change in B2B SaaS
B2B teams often make a second mistake on top of bad method selection: they describe the buyer as if the buyer were a company.
That is not how buying works.
A company does not buy software. A person in a role does. Sometimes several people in roles do.
That is why B2B research has to pay attention to the buying group:
- the champion;
- the end user;
- the manager who feels the operational pain;
- the budget owner;
- the person who can block the deal late.
If your research method assumes one clean buyer, the output may sound elegant while still being strategically wrong.
This also changes method choice. In B2B SaaS, expert interviews can be more useful than teams expect because domain knowledge is often concentrated. And in tiny markets, research inside the sales process can be more honest than pretending you have a large, survey-friendly population.
That is the real advantage of a decision-led approach. It adjusts the method to the market you actually have, not the one you wish you had.
A practical method-selection workflow
If you want a repeatable operating sequence, use this:
1. Name the decision. Are you choosing a segment, testing a problem hypothesis, diagnosing adoption, checking pricing logic, or evaluating usability?
2. Rewrite the decision as one concrete question. Do not say “we need research.” Say what the question actually is.
3. Classify the question. Is it about quantity, behavior in a task, motivations and language, existing market knowledge, or commercial reality in a small market?
4. Check whether the answer may already exist. If yes, start with desk research, analytics, or previous evidence before you recruit anyone.
5. Choose the smallest method that can still produce decision-grade signal. More elaborate is not automatically better.
6. Check whether the timing still makes the method useful. Good methods still fail when they arrive too late.
This sequence is deliberately simple. It is supposed to protect teams from choosing methods based on status, panic, or convenience.
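If it helps to see the logic laid bare, the workflow above can be sketched as a small decision function. This is purely illustrative: the category names, method labels, and flags are hypothetical stand-ins for judgment calls a real team would make in conversation, not an actual tool.

```python
# Illustrative sketch of the six-step workflow. The question categories
# and method names mirror the table earlier in the article; the function
# signature and flags are hypothetical, for explanation only.

QUESTION_TO_METHOD = {
    "scale": "survey or product data",        # how many / how often
    "task_behavior": "usability testing",     # can people complete this task
    "motivation": "problem interviews",       # problem, language, workarounds
    "domain_expertise": "expert interviews",  # what specialists already know
    "tiny_market": "research inside sales",   # small, hard-to-sample market
}

def choose_method(decision: str, question_type: str,
                  answer_may_exist: bool, timing_ok: bool) -> str:
    """Return the smallest method that can still produce decision-grade signal."""
    if not decision:
        # Step 1: no named decision, no method choice.
        raise ValueError("Name the decision before choosing a method.")
    if answer_may_exist:
        # Step 4: check desk research, analytics, prior evidence first.
        return "desk research"
    if not timing_ok:
        # Step 6: late timing narrows the question to the bets already in play.
        return "narrow hypothesis testing"
    # Steps 3 and 5: classify the question, pick the smallest matching method.
    return QUESTION_TO_METHOD.get(question_type, "reframe the question")
```

The point of writing it this way is the ordering: the decision comes first, existing evidence beats new fieldwork, and timing can override an otherwise correct method.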
FAQ
Are interviews always the best customer research method?
No. Interviews are best when you need motivations, context, problem language, and decision dynamics. They are weak when the real question is quantitative or when the answer probably already exists elsewhere.
When should you use a survey instead?
Use a survey when you already know what you need to measure and the real question is about scale, distribution, or prevalence. Do not use a survey as a substitute for problem understanding.
Can AI replace customer research?
No. AI can help frame questions, speed up analysis, or summarize material. It should not be treated as a substitute for talking to customers or validating hypotheses against reality.
Final point
Customer research methods are not there to make the team feel thoughtful. They are there to make the next decision less wrong.
That is the standard that matters.
The right method is not the one that feels sophisticated, efficient, or modern. It is the one that reduces uncertainty around the actual business decision before the mistake gets expensive.
If your team is about to make a product, positioning, or go-to-market decision and you are still arguing about methods before you have named the question, stop there first. That is usually where the real research problem begins.
If you want help choosing the right research method before an expensive decision, that is exactly the kind of work Glasgow Research is built for.
Author
About Vadim Glazkov
Vadim Glazkov is the founder of Glasgow Research and a product research expert working with founders and B2B SaaS teams on customer interviews, JTBD, market validation, and decision-ready research.