Juicy Talks
AI-powered product design. Made by Omer Frank, shaped by AI.
How to design assumption-killing interview questions
We reframe user interviews as assumption-killing tools that reduce risk and reveal real behavior. We walk through desirability, viability, and feasibility, then show how to design hypotheses, ask better questions, avoid bias, and synthesize patterns that matter.
• reframing interviews from validation to risk reduction
• the big three risks: desirability, viability, feasibility
• pipeline from belief to hypothesis to question
• behavior over opinions as the core signal
• four evidence types: past behavior, current workflow, emotional triggers, decision criteria
• direct vs indirect questions and when to use each
• avoiding leading language, hypotheticals, and founder energy bias
• synthesis signals: repeated pain, workarounds, delay, resistance
• designing interviews that expose gaps rather than soothe egos
SPEAKER_01:Welcome to Juicy Talks. Today we're doing a deep dive into something absolutely crucial for anyone building anything. We're talking about user interviews, but not just casual chats. We're talking about structured, rigorous design. Because, you know, so often we mistake a really pleasant conversation for true validation.
SPEAKER_00:Exactly. That feeling is so dangerous. You leave an interview thinking they love my idea, this is a home run. And that warm, fuzzy feeling, it's probably one of the riskiest traps in product development. So our mission today is to replace that feeling with, well, a framework for risk reduction to find verifiable truth, not just compliments.
SPEAKER_01:Okay, let's unpack that. We assume interviews are for learning, which sure they are. But the real goal you're talking about is systematic risk reduction. Yeah. So what are the most dangerous illusions, the biggest risks product teams carry, that interviews are supposed to kill?
SPEAKER_00:Right. These are the big three. The core beliefs you have to treat as assumptions, not facts. The first and the most common one is desirability. Just the basic assumption that people actually want what we're building.
SPEAKER_01:Okay, that makes sense. Do they want it?
SPEAKER_00:The second is viability, the assumption that people will pay for it. And, you know, pay isn't just money, it's time, it's effort, it's the hassle of changing a habit.
SPEAKER_01:Desirability, viability, got it. Those are about the user and the market. But the third one, feasibility, the idea that we can build it the way we imagine. I always thought of that as an internal engineering problem. Why is a user interview meant to challenge that?
SPEAKER_00:That's such a great question. Because feasibility isn't just about code or supply chains, it's about user context. Let's say we can build some super complex AI that automates a report, but our interviews show us that users are solving this problem with a simple shared spreadsheet and they're happy with it.
SPEAKER_01:Ah, so our complex solution is maybe technically feasible, but it's not contextually feasible. It doesn't fit their world.
SPEAKER_00:Precisely. It's not just can we build it, but should we build it this way? If you walk in trying to confirm you're right on all three, that it's wanted, they'll pay, and your idea fits their world, you'll hear what you want to hear. The shift is seeing the interview as an assumption-killing machine.
SPEAKER_01:That's a huge reframing. Okay, let's talk about the starting point, because this is where I think most of us get it wrong. We have an idea and we immediately start writing questions. Do you like this feature? Would this be useful? But you're saying the starting point isn't the question at all.
SPEAKER_00:That's exactly right. The question is the very last step in the process. The pipeline you have to follow is belief, then assumption, then hypothesis, and only then the question. Teams skip those middle two steps constantly.
SPEAKER_01:And that leads to useless data, I'm guessing. So a vague assumption just ruins everything.
SPEAKER_00:It does. The classic weak example is something like, users want to be more productive.
SPEAKER_01:Yeah.
SPEAKER_00:I mean, of course they do. Everyone wants to be more productive. You've learned absolutely nothing that you can build on.
SPEAKER_01:So how do we get from that vague belief to something we can actually test? What's the filter?
SPEAKER_00:You have to ask yourself one simple thing. If this assumption is true, what must also be true in the real world? Hmm. So if you assume users need a faster way to export reports, it must be true that they are currently frustrated by how slow it is now. There has to be observable pain.
SPEAKER_01:So we're looking for evidence of the problem today, not just their hopes for a solution tomorrow.
SPEAKER_00:Yes, that's the key. And once you have that, you can move to a testable hypothesis. And this needs three specific parts: the who, the situation, and the behavior.
SPEAKER_01:Okay, so the who isn't just users.
SPEAKER_00:No, never. It has to be specific. Senior developers. The situation is the context. Coding late at night. And the behavior is the most important part. It's what they will do. They'll switch to dark themes to reduce eye strain.
SPEAKER_01:That clarity is everything. Because now you're not asking for an opinion, you're looking for an action. Behavior is the only currency that really matters.
SPEAKER_00:Exactly. Contrast the weak hypothesis, which is users want a dark mode. You'll get a 90% yes rate on that.
SPEAKER_01:But it's meaningless.
SPEAKER_00:Right. It's a cheap compliment. The strong hypothesis is about what senior developers do under specific stress. That is something you can actually verify.
SPEAKER_01:And if they don't do it, you've just saved yourself months of building a feature that nobody would have actually used. You validated an opinion, not a need.
SPEAKER_00:You got it. The weak one gives you a wish list. The strong one gives you a scenario to test against reality.
SPEAKER_01:So before writing a single question, we're really designing for evidence. What kinds of evidence are we actually hunting for?
SPEAKER_00:Four key types. The first and most important is past behavior. What did they actually do last week? Not what they say they'll do tomorrow. Past action is the strongest proof you can get.
SPEAKER_01:And what's next? What they're doing right now.
SPEAKER_00:Yep. Their current workflows, how are they solving this problem today? We're looking for the messy workarounds, the spreadsheet, the copy pasting. Those are golden nuggets. They show you how much the problem really hurts.
SPEAKER_01:If they're using three different tools to do one thing, you know the pain is real.
SPEAKER_00:Absolutely. The third is emotional triggers, the specific moment of frustration, the sigh, the groan. You need to document that emotion.
SPEAKER_01:That's what separates a nice to have from a must-have, the actual pain.
SPEAKER_00:And finally, decision criteria. Why did they choose product X over product Y? This gets at viability. Maybe your product is faster, but if their main criterion is security, you'll still lose.
SPEAKER_01:Yeah, the difference between "I would totally use that," which is an opinion, and "I paid $50 for a tool that does this last week." That's the whole game. One is a promise, the other is evidence.
SPEAKER_00:For sure. And that brings us to how you phrase the questions. Yeah. This is where we all get a little nervous because we want reassurance. We want to ask direct questions.
SPEAKER_01:Right. When is it okay to ask directly and when do we have to go indirect?
SPEAKER_00:Direct questions are okay for really simple facts. Things like awareness: have you heard of Notion? Or language: what do you call this document? Or very specific near-term objections: is there any reason you wouldn't sign up today? But they break down the moment you start asking about the future or hinting at your solution.
SPEAKER_01:But what about hypotheticals? Sometimes if you're building something totally new, you kind of have to ask them to imagine a world where it exists, don't you?
SPEAKER_00:It takes them out of their reality where things have costs, time, money, effort. If you absolutely must use one, you have to ground it immediately. Ask, okay, if we built that, what tool that you pay for now would you stop using to make room for it?
SPEAKER_01:Ah, that brings the cost back in. It forces a trade-off. So for the real truth, we have to go indirect. What do indirect questions reveal?
SPEAKER_00:They reveal reality, friction, and workarounds. Because you're making them recall a specific memory, not invent an answer. So instead of the trap, is security important to you? Of course they'll say yes. You ask, tell me about the last time you rejected a software vendor for security reasons.
SPEAKER_01:That's a story. You're asking for a story which contains evidence.
SPEAKER_00:That's it. Let's make it practical. For that fitness app idea, don't ask, would you work out more if you could share your progress? Instead, ask, think about the last time you had a really successful streak of workouts. What was different about that time?
SPEAKER_01:And they might say it was a workout buddy. Or maybe it was a specific feature in another app. That's real data.
SPEAKER_00:And for that SaaS dashboard team that's obsessed with real-time data, don't ask, do you want real-time data? Ask: walk me through the first thing you did when you opened your laptop this morning. Did you check any numbers? How often?
SPEAKER_01:And you might find out they only need the numbers updated once an hour. You've just saved the engineering team a massive headache.
SPEAKER_00:You got it. But even a perfect question can fail if you contaminate the answer. You have to watch for leading the witness. The first trap is leading language. How much do you love this great new feature?
SPEAKER_01:Yeah, that's an obvious one. Or the implied solution. Would an export button help here?
SPEAKER_00:Right. You're just telling them the answer you want. Then there are future promises. If we built this, would you buy it? And as we said, hypotheticals.
SPEAKER_01:Okay, what's the last trap? The really sneaky one.
SPEAKER_00:It's the founder energy bias. This one is so subtle. If you, the interviewer, are just bursting with excitement, the user will often just lie to be nice. They mirror your energy. I once built a feature because a user I admired was so enthusiastic about it. He never used it. Not once. He just didn't want to hurt my feelings.
SPEAKER_01:Ouch. That is a painful lesson. So the takeaway is never ask, would you use this? Instead, you ask, how do you currently solve this problem?
SPEAKER_00:Exactly. One invites a lie, the other demands a memory. And one last thing: the interview itself validates nothing. Only the synthesis does. One interview is just an anecdote. You need to see patterns across many interviews.
SPEAKER_01:What are those patterns? What are we looking for in the synthesis?
SPEAKER_00:You're looking for four things. Repeated pain: do multiple people get frustrated at the exact same step? Repeated workaround: are they all using the same clunky spreadsheet to solve it?
SPEAKER_01:The same hack across different companies? That's a huge signal.
SPEAKER_00:Huge. Then repeated delay: do they all hesitate at the same price point? And finally, repeated resistance: do they all get confused by the same term? If you don't see those patterns repeating, the pain probably isn't big enough to build a business around.
SPEAKER_01:So what does this all mean for someone listening right now, getting ready for their next round of research?
SPEAKER_00:Well, the uncomfortable truth is that good research shouldn't make you feel warm and fuzzy. It should make you feel exposed. It should reveal the gaps in your thinking. If your team leaves a bunch of interviews feeling more excited than uneasy, you probably just spent a week gathering compliments.
SPEAKER_01:That's such a powerful takeaway. The next time you sit down to write an interview script, stop. Put the questions away and just ask yourself: am I here to be right, or am I here to find the truth?
SPEAKER_00:Something to mull over as you design your next project.
SPEAKER_01:Thanks for listening to Juicy Talks.