Juicy Talks

The Trust Builder framework

Omer Frank Season 1

We share a practical framework for building trustworthy AI products and apply it to Joy, an agent that plans a full concert trip. Five pillars—competence, transparency, predictability, alignment and resilience—turn hesitant users into confident delegators.

• the shift from usability to trust as the primary interface
• the origin story of the trust builder framework
• competence through clarification, confirmation and visible progress
• transparency with leading reasons, process breadcrumbs and trade-offs
• predictability via stable flows and user control
• alignment with honest upsells and labeled sponsorship
• resilience with plain-language errors, recovery paths and saved state
• a final provocation on radical honesty as competitive edge


SPEAKER_01:

Welcome to Juicy Talks. Today we're peeling back the curtain on something fundamental to modern product design: trust in artificial intelligence. It's not just about functionality anymore; it's about feeling safe letting the machine handle your life.

SPEAKER_00:

That really is the heart of it, isn't it? We've moved beyond just asking, can it do the job? Now the question is, are you, the user, actually willing to hand over control? Especially when real things are at stake.

SPEAKER_01:

Exactly. And that's a huge hurdle. So today we are diving into a practical framework, the trust builder framework. It's designed specifically for crafting AI experiences people can truly count on. We'll explore the five essential pillars that turn that initial uncertainty into real confidence when interacting with AI agents.

SPEAKER_00:

And it's interesting where this framework actually came from. It wasn't cooked up in a lab somewhere; it grew out of a real-world headache. Picture this: an AI platform that looked perfect. Technically flawless, fast, accurate.

SPEAKER_01:

Sounds ideal. What was the problem?

SPEAKER_00:

The problem was it dealt with serious stuff. Financial decisions, big purchases, supply chain moves, things with real consequences, often involving real money. And users, they just wouldn't delegate. They'd use it for insights, maybe, but when it came time to actually act, they hit manual override every time.

SPEAKER_01:

Wow. So technically a success, but practically a failure because nobody trusted it enough.

SPEAKER_00:

Precisely. They paid for automation, but ended up doing the work themselves because the trust wasn't there. It shows how much AI has shifted UX. It used to be all about usability, ease of use, maybe a little bit of delight. But now, trust becomes the interface. That's the core idea. Our job as designers, as builders, has changed. We're designing these subtle cues, the words, the visuals, the rhythm of the interaction that whisper reliability and safety.

SPEAKER_01:

So when does that hesitation finally vanish? The framework points to five key things. When the AI feels competent, transparent, predictable, aligned, and resilient, that's when people start to let go. Those are the five pillars.

SPEAKER_00:

And to make these ideas really concrete, we're going to use an example throughout our chat. Let's imagine an AI agent called Joy. Joy's job is to plan a concert trip for you, end-to-end. Find tickets for your favorite band, book a suitable hotel nearby, sort out the flights, all within your budget and style. We'll see how these five pillars, competence, transparency, and the rest, show up as actual design choices in how Joy works.

SPEAKER_01:

Okay, let's start with the absolute bedrock then. Competence, pillar one. Because if the AI can't reliably do what it's supposed to do, nothing else really matters, does it? Trust starts with that fundamental question: does this thing actually work?

SPEAKER_00:

It's completely non-negotiable. If the agent fails constantly at its main job, the whole system just crumbles. And it's not just about the easy path; it's about core reliability. Can it handle unexpected stuff? A sold-out show, an API suddenly going down.

SPEAKER_01:

So for Joy, our concert planner, competence means delivering the whole package without gaps. If I say, plan my trip, Oasis, London, October, under $1,500, Joy needs to handle a complex sequence: find real concert dates, check ticket availability (not just on one site, maybe, but several), compare seat quality versus price, find hotels nearby, flights, and make it all fit the budget.

SPEAKER_00:

And every single step it gets right builds a tiny bit more confidence. But that confidence is so fragile. Think about the easy mistakes. A flight time that clashes with the concert. A hotel that's miles away because it only looked at price. Suggesting a show that's already sold out. Any one of those drains trust instantly and forces you back to checking everything yourself.

SPEAKER_01:

So the design takeaway is almost humbling.

SPEAKER_00:

Exactly. Don't try to be flashy, don't optimize for speed above all else. Just get it right. Consistently. Reliability is competence's quiet superpower.

SPEAKER_01:

Okay, but reliability is the output. The framework also talks about competence in terms of managing the input, which I find interesting. This idea of building trust through inquiry, asking before acting.

SPEAKER_00:

Right. The best agents don't just guess or assume. They pause, they ask clarifying questions. We need to see that curiosity in an AI not as a flaw, but as a feature. A sign of competence, actually.

SPEAKER_01:

Because jumping in too fast leads to errors.

SPEAKER_00:

Leads to bad assumptions. If you just tell Joy, book me a trip to London, that's way too vague. A good human agent would ask more questions, wouldn't they? The AI has to do the same. It needs to understand what good means to you.

SPEAKER_01:

So how does that look in the interface? What does Joy actually do?

SPEAKER_00:

Instead of launching a massive, probably useless search, Joy might present simple, clickable options first. Things like: what's your absolute budget limit? Are your dates flexible at all? Any seating preferences, balcony or floor? And maybe an open box for anything specific. This shows it's listening before it acts. It avoids those frustrating downstream errors caused by bad guesses.
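
To ground that, here's a minimal TypeScript sketch of a clarifying-question step of the kind Joy might run before searching. The type and field names are illustrative assumptions, not anything from the episode:

```typescript
// Hypothetical sketch: structured clarifying questions the agent asks
// before acting. Shapes and names are illustrative, not a real API.
interface ClarifyingQuestion {
  id: string;
  prompt: string;
  options?: string[]; // clickable choices, if any
  freeText?: boolean; // open box for anything specific
}

const tripQuestions: ClarifyingQuestion[] = [
  { id: "budget", prompt: "What's your absolute budget limit?", freeText: true },
  { id: "dates", prompt: "Are your dates flexible at all?", options: ["Fixed", "Flexible by a day or two"] },
  { id: "seats", prompt: "Any seating preference?", options: ["Balcony", "Floor", "No preference"] },
  { id: "extras", prompt: "Anything specific I should know?", freeText: true },
];

// The agent holds off on its big search until the choice questions are answered.
function readyToSearch(answers: Map<string, string>): boolean {
  return tripQuestions
    .filter(q => q.options !== undefined) // treat multiple-choice questions as required
    .every(q => answers.has(q.id));
}
```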

SPEAKER_01:

It's like preemptive error correction. You involve the user up front to define success. Makes sense. And this leads to the next point about competence: confirming understanding before delegation.

SPEAKER_00:

Exactly. Once you've given Joy your preferences, it needs to prove it heard you correctly. It does this by mirroring your request back, but in plain language, not technical jargon.

SPEAKER_01:

So not just starting the search, but pausing to say...

SPEAKER_00:

Something like, okay, got it. Sounds like a great trip. Just to confirm: I'll handle finding balcony seats for Oasis, look for direct flights from JFK, book a hotel within two miles of the venue, and keep the total cost under $1,500. Sound right?

SPEAKER_01:

That mirroring is key. It proves it didn't miss anything. It turns your vague request into a concrete, shared plan.

SPEAKER_00:

And that moment where it confirms understanding is where safe delegation happens. You feel okay letting go because the AI has shown it gets it.
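
As a sketch of that mirroring step in code: the parsed request gets echoed back in plain language before anything is booked. The TripRequest shape here is an assumption for illustration, not a real schema:

```typescript
// Hypothetical sketch: mirror the parsed request back in plain language
// and wait for an explicit "sounds right" before acting.
interface TripRequest {
  artist: string;
  seatPreference: string;
  departureAirport: string;
  maxHotelDistanceMiles: number;
  budgetUsd: number;
}

function confirmationMessage(req: TripRequest): string {
  return (
    `Okay, got it. Just to confirm: I'll find ${req.seatPreference} seats for ` +
    `${req.artist}, look for direct flights from ${req.departureAirport}, book a ` +
    `hotel within ${req.maxHotelDistanceMiles} miles of the venue, and keep the ` +
    `total under $${req.budgetUsd.toLocaleString()}. Sound right?`
  );
}

// The example from the episode:
console.log(confirmationMessage({
  artist: "Oasis",
  seatPreference: "balcony",
  departureAirport: "JFK",
  maxHotelDistanceMiles: 2,
  budgetUsd: 1500,
}));
```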

SPEAKER_01:

Okay, so the AI understands, it starts working. But then there's often that period of waiting, the dreaded spinner. The framework calls this next part making the invisible visible or agent thinking aloud. Why is the silent spinner so bad for trust?

SPEAKER_00:

Because silence breeds anxiety, doesn't it? If the system goes quiet for more than a few seconds, your mind starts racing. Is it stuck? Did it forget my request? Is it searching for the wrong thing? That uncertainty is a trust killer.

SPEAKER_01:

So we replace uncertainty with what? Transparency about the effort.

SPEAKER_00:

Precisely. Instead of silence, Joy shows a live feed of what it's doing. It narrates its progress. Things like: checking concert dates, completed, found three relevant dates. Comparing ticket prices across five sites, in progress. Reviewing hotels near the venue, starting now.

SPEAKER_01:

Like little breadcrumbs.

SPEAKER_00:

Exactly. Concrete breadcrumbs. It confirms the agent is working, it's respecting your constraints, it hasn't crashed. It transforms passive, anxious waiting into active understanding. And understanding prevents doubt.
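
A minimal sketch of that narration as a stream of progress events, assuming a simple event shape; nothing here is a real framework API:

```typescript
// Hypothetical sketch: the agent emits progress events instead of going
// silent behind a spinner, so waiting becomes active understanding.
type StepStatus = "starting" | "in_progress" | "completed";

interface ProgressEvent {
  step: string;    // e.g. "Checking concert dates"
  status: StepStatus;
  detail?: string; // e.g. "found three relevant dates"
}

function render(event: ProgressEvent): string {
  const status = event.status.replace("_", " ");
  return event.detail ? `${event.step}: ${status} (${event.detail})` : `${event.step}: ${status}`;
}

// The live feed from the episode, as events:
const feed: ProgressEvent[] = [
  { step: "Checking concert dates", status: "completed", detail: "found three relevant dates" },
  { step: "Comparing ticket prices across five sites", status: "in_progress" },
  { step: "Reviewing hotels near the venue", status: "starting" },
];
feed.forEach(e => console.log(render(e)));
```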

SPEAKER_01:

Okay, so wrapping up competence. It sounds like four key things: core reliability, even when surprises hit, and asking the right clarifying questions up front.

SPEAKER_00:

Confirming understanding clearly before acting. And showing the work, making that progress visible and understandable.

SPEAKER_01:

If you get those four right, you've built a solid foundation users can actually see and feel.

SPEAKER_00:

Absolutely. That's the entry ticket. But it's not enough on its own. Which brings us to pillar two, transparency.

SPEAKER_01:

Transparency. Can I understand it? This feels like the why behind the what. Competence shows it works, transparency shows why it works the way it does or why it made a certain choice.

SPEAKER_00:

Exactly. It turns that raw performance into something you can actually trust long term. When you understand the reasoning behind an AI's decision, you stop seeing it as this unpredictable black box. You start believing in the process.

SPEAKER_01:

So for Joy, this means explaining things: why these seats over cheaper ones, why this hotel recommendation, how it stuck to my budget.

SPEAKER_00:

And the key is making that reasoning easily accessible, low friction, which leads to the first transparency tactic: leading with reason. Explain as you go. The rule is simple. Always start with the reason. People trust what they understand.

SPEAKER_01:

But isn't there a risk of overwhelming the user? Too much text, too much detail. It could slow things down.

SPEAKER_00:

That's the balancing act. The principle is clarity now, detail later. So instead of just showing Section C, Row 10, $150, Joy might lead with: I picked these seats because they offer great value and a clear view, rated 9/10 by fans for this venue, and they fit your balcony preference.

SPEAKER_01:

Ah, so the justification comes first, briefly.

SPEAKER_00:

Right. Short, honest, upfront. It answers the user's likely why before they even feel the need to ask suspiciously. If they're curious and want more, then you provide a little why-this button or link that expands to show the deeper data: review scores, price comparisons, whatever.
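
In code, "clarity now, detail later" might look like this minimal sketch: a one-line reason shown first, with the deeper evidence behind a why-this control. The field names and evidence strings are assumptions for illustration:

```typescript
// Hypothetical sketch: lead with the reason, keep the evidence one tap away.
interface Recommendation {
  summary: string;       // the bare result, e.g. "Section C, Row 10, $150"
  leadingReason: string; // short plain-language justification, shown first
  evidence: string[];    // deeper data, revealed only on "Why this?"
}

const seatPick: Recommendation = {
  summary: "Section C, Row 10, $150",
  leadingReason:
    "Great value and a clear view, rated 9/10 by fans for this venue, and fits your balcony preference.",
  evidence: [
    "Fan view rating for this section: 9/10",
    "Price compared across five ticket vendors",
    "Matches stated preference: balcony",
  ],
};

function renderCard(rec: Recommendation, expanded: boolean): string {
  const lines = [rec.leadingReason, rec.summary];
  lines.push(expanded ? rec.evidence.map(e => `  - ${e}`).join("\n") : "[Why this?]");
  return lines.join("\n");
}

console.log(renderCard(seatPick, false)); // reason first, detail on demand
```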

SPEAKER_01:

Respectful of their time, but the full info is there if needed.

SPEAKER_00:

I like that. What about the deeper transparency? This idea of unpacking the black box using the chain of thought sounds more involved.

SPEAKER_01:

It is, and it's crucial for higher stakes decisions. It means we stop treating the AI like some magic box and start showing its work. Think of it like a visible process log. When Joy presents the final trip plan, you don't just see the result, you see the key steps it took.

SPEAKER_00:

So what would Joy actually show? It might show a summarized journey: located three Oasis concert dates (confidence: high). Checked five ticket vendors (average response time: 1.5 seconds). Analyzed 12 seating options against budget and view preference. Final cost: $1,450, under the $1,500 budget.

SPEAKER_01:

You mentioned confidence score there. What kind of metrics are actually useful without being overwhelming?

SPEAKER_00:

Good question. You want metrics that directly address potential user worries. Confidence scores are great. If the AI is only 60% sure about hotel availability, you need to know that risk. Showing which sources it checked builds credibility, and that real-time budget tracker is vital for financial tasks. It turns potential bill shock into managed expectation.
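
A sketch of what such a process log could look like as data: each step carries a confidence level, the sources consulted, and a running budget figure. The shapes and placeholder vendor names are illustrative assumptions:

```typescript
// Hypothetical sketch: a visible process log that makes the chain of
// thought traceable: what was done, how sure the agent is, what it cost.
interface StepLogEntry {
  action: string;
  confidence: "high" | "medium" | "low";
  sources?: string[];       // which sites/vendors were checked
  runningTotalUsd?: number; // budget tracker after this step
}

const processLog: StepLogEntry[] = [
  { action: "Located 3 Oasis concert dates", confidence: "high" },
  { action: "Compared ticket prices", confidence: "high", sources: ["vendor1", "vendor2", "vendor3", "vendor4", "vendor5"] },
  { action: "Analyzed 12 seating options against budget and view preference", confidence: "medium", runningTotalUsd: 150 },
  { action: "Selected hotel and flights", confidence: "high", runningTotalUsd: 1450 },
];

// Surface the risky steps: anything below high confidence deserves a flag,
// like the 60%-sure hotel-availability example from the episode.
const needsAttention = processLog.filter(s => s.confidence !== "high");
console.log(`${needsAttention.length} step(s) flagged for review`);
```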

SPEAKER_01:

So the trade-offs become visible too, like choosing slightly more expensive seats because the view is rated much higher and it still fits the budget.

SPEAKER_00:

Exactly. You can trace the final output back to your original input and the AI's documented steps. It's not magic anymore. It's a logical process you can follow.

SPEAKER_01:

Okay, so the transparency checklist. Clear, jargon-free explanations up front, a visible process, that chain of thought idea, showing how input led to output, and being really open about any trade-offs made along the way.

SPEAKER_00:

Spot on. Hiding the why is a shortcut to eroding trust.

SPEAKER_01:

Moving on to pillar three. Predictability. The comfort of consistency. Will it do what I expect? This feels quieter, less obvious than competence or transparency, maybe.

SPEAKER_00:

It is quieter, but just as important. Predictability is that underlying feeling of safety you get when you know how an interaction is going to unfold. People find comfort in patterns. So the AI's behavior, the flow of interaction, needs to be consistent every time you use it, even if the results themselves change.

SPEAKER_01:

So for Joy, this means the overall process shouldn't change randomly.

SPEAKER_00:

Right. It should always follow a logical sequence. Maybe: gather your needs, suggest options, confirm your choice, book, and summarize. If one time it asks about hotels first, and the next time it asks about flights first, that inconsistency creates confusion, and confusion undermines trust.

SPEAKER_01:

How do you establish that predictability right from the start?

SPEAKER_00:

By setting expectations up front. The AI should begin by giving you a clear roadmap, like a little contract. So when you first interact with Joy, it might say, okay, let's plan your trip. Here's how we'll do it. First, concert tickets. Second, hotel. Third, flights. You're in control the whole time.

SPEAKER_01:

Just laying out the steps visually, maybe?

SPEAKER_00:

Yeah, a simple visual map. It does a couple of things. It reduces anxiety because you know it's coming, and it frames you as a collaborator, not just someone waiting for a result. Knowing what's next is a huge trust signal.
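
That up-front contract could be as simple as a fixed, ordered list the interface renders before any work starts. A minimal sketch, with the wording assumed for illustration:

```typescript
// Hypothetical sketch: publish the roadmap before doing anything, and keep
// the order stable across sessions so the flow stays predictable.
const roadmap = ["Concert tickets", "Hotel", "Flights", "Confirm and book"] as const;

function roadmapMessage(): string {
  const steps = roadmap.map((s, i) => `${i + 1}. ${s}`).join("\n");
  return `Okay, let's plan your trip. Here's how we'll do it:\n${steps}\nYou're in control the whole time.`;
}

console.log(roadmapMessage());
```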

SPEAKER_01:

But predictability doesn't mean rigidity, right? People's lives are messy. Plans change.

SPEAKER_00:

That's the crucial counterpoint. Predictability also means reinforcing the user's control. Users get really frustrated if they feel locked into a process.

SPEAKER_01:

So what if I start planning with Joy, but then my boss calls and I have to stop?

SPEAKER_00:

A predictable, user-centric AI handles that gracefully. It needs constant, visible reminders that you're in charge. So, clear options to pause and save for later. Or maybe you've already booked your flight; there should be an easy skip-flight-step option.

SPEAKER_01:

So things like go back and change budget, skip this step, save progress. Accessible controls.

SPEAKER_00:

Exactly. It shows the AI adapts to your reality, not the other way around. It prioritizes your needs over its own neat linear process. If it forces you down its path rigidly, it feels demanding. If it adapts, it feels like a helpful partner.
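
One way to sketch that control surface: a small, constant set of actions available at every step, with the flow adapting rather than resetting. The action names and state shape are assumptions for illustration:

```typescript
// Hypothetical sketch: the same few controls exist at every step, and each
// one preserves the user's work instead of restarting the flow.
type ControlAction = "pause_and_save" | "skip_step" | "go_back";

interface FlowState {
  currentStep: number; // index into the roadmap
  answers: Record<string, string>;
  savedAt?: Date;      // set when the user pauses
}

function applyAction(state: FlowState, action: ControlAction): FlowState {
  switch (action) {
    case "pause_and_save": // boss calls: save everything, resume later
      return { ...state, savedAt: new Date() };
    case "skip_step":      // e.g. flight already booked elsewhere
      return { ...state, currentStep: state.currentStep + 1 };
    case "go_back":        // e.g. go back and change the budget
      return { ...state, currentStep: Math.max(0, state.currentStep - 1) };
  }
}
```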

SPEAKER_01:

That flexibility actually reinforces the predictability of its helpful behavior. That's interesting. Okay, predictability checklist. Explain what's coming next. Keep the interaction flow consistent each time, and always support user control with easy ways to pause, skip, or go back.

SPEAKER_00:

Nailed it. Consistency builds that deep, quiet confidence.

SPEAKER_01:

All right. Pillar four, alignment. This feels like the big one in terms of ethics and, well, business model. Is it really on my side?

SPEAKER_00:

This is where trust truly becomes loyalty or where it shatters completely. Alignment means the AI is transparently working for you, the user, and not for some hidden agenda like pushing sponsored products, upselling sneakily, or optimizing for the company's profit over your needs.

SPEAKER_01:

And if you sense that misalignment even once, poof, trust gone.

SPEAKER_00:

Instantly, and probably permanently. So let's go back to Joy. Your constraints were under $1,500 and balcony seats. An aligned Joy sticks to those. It only suggests trade-offs after checking with you and explaining the pros and cons honestly.

SPEAKER_01:

How does it handle something like upselling, then? Or sponsored content, which most platforms have. This brings us to honest upsells and goal adherence.

SPEAKER_00:

The key is to lead with your goal first. Joy should present the option that perfectly matches your request and clearly label it as such. So maybe it shows Option A: Section B balcony, $1,440 total, aligned with your goals. Then, perhaps side by side, Option B: premium Section A, $1,560 total, a stretch option with wider seats.
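
A sketch of that labeling logic: the badge is computed from the user's own stated constraints, so an over-budget option can only ever appear as a clearly marked stretch. Thresholds and names are illustrative:

```typescript
// Hypothetical sketch: badge options against the user's constraints, with
// the aligned option leading and the stretch clearly marked, never hidden.
interface TicketOption {
  label: string;
  totalUsd: number;
  matchesSeatPreference: boolean;
}

function badge(option: TicketOption, budgetUsd: number): string {
  if (option.totalUsd > budgetUsd) return "Stretch option (over budget)";
  return option.matchesSeatPreference ? "Aligned with your goals" : "Alternative";
}

const budgetUsd = 1500;
const options: TicketOption[] = [
  { label: "Option A: Section B balcony", totalUsd: 1440, matchesSeatPreference: true },
  { label: "Option B: Premium Section A, wider seats", totalUsd: 1560, matchesSeatPreference: false },
];

// Within-budget options sort first; the stretch is shown, not the default.
options
  .sort((a, b) => Number(a.totalUsd > budgetUsd) - Number(b.totalUsd > budgetUsd))
  .forEach(o => console.log(`${o.label}, $${o.totalUsd} total. ${badge(o, budgetUsd)}`));
```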

SPEAKER_01:

That little aligned-with-your-goals badge seems critical.

SPEAKER_00:

It's massive. It's proof the AI listened and prioritized what you asked for. The stretch option is presented neutrally as an alternative, not the default. Imagine if it always showed the more expensive one first without comment. You'd immediately feel pushed, wouldn't you?

SPEAKER_01:

Yeah, you'd get suspicious fast. But doesn't being that honest, say, about sponsored hotels, potentially undermine the business or make the user feel constantly marketed to?

SPEAKER_00:

That's the tension, definitely. And the only way through it, according to this framework, is radical honesty and trade-off transparency. Hiding sponsored placements, or pretending they're the absolute best fit, just backfires when the user finds out. You have to separate commercial interest from genuine recommendations.

SPEAKER_01:

So how would Joy present hotel options, some of which might be sponsored?

SPEAKER_00:

It needs to set the stage before showing the list. Total transparency. It might say something like: okay, for hotels, I'm showing you two types of options to give you the best choice. First, the hotels that most closely match your preferences for location and price. Second, some options from our partners that might offer different perks.

SPEAKER_01:

And then clearly label them.

SPEAKER_00:

Absolutely. Best match, 0.8 miles from venue versus sponsored partner hotel, 1.5 miles from venue, includes breakfast. By explaining the categories up front, it maintains an open, respectful tone. It demonstrates its primary loyalty is to your needs.
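
In code, that separation could be an explicit category on every result, so a sponsored placement can never masquerade as the best match. A minimal sketch with assumed field names:

```typescript
// Hypothetical sketch: every hotel result carries an explicit category;
// the label is derived from data the user can verify.
interface HotelResult {
  name: string;
  distanceMiles: number;
  sponsored: boolean;
  perks?: string; // e.g. "includes breakfast"
}

function label(hotel: HotelResult): string {
  const base = hotel.sponsored ? "Sponsored partner hotel" : "Best match";
  const perks = hotel.perks ? `, ${hotel.perks}` : "";
  return `${base}: ${hotel.distanceMiles} miles from venue${perks}`;
}

const results: HotelResult[] = [
  { name: "Hotel near the venue", distanceMiles: 0.8, sponsored: false },
  { name: "Partner hotel", distanceMiles: 1.5, sponsored: true, perks: "includes breakfast" },
];
results.forEach(h => console.log(`${h.name}. ${label(h)}`));
```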

SPEAKER_01:

Okay. Alignment checklist. Always prioritize the user's stated needs first. Be crystal clear about the AI's intentions, no hidden agendas, and empower the user to make the final choice without pressure. Honesty builds loyalty.

SPEAKER_00:

Couldn't have said it better myself.

SPEAKER_01:

Which brings us to the final pillar, number five. Resilience. What happens when things go wrong? Because they inevitably will.

SPEAKER_00:

This is the ultimate stress test for trust. Resilience isn't about preventing failure; that's impossible. It's about how the system behaves during failure. It's about turning that oh-no moment into an opportunity to strengthen trust.

SPEAKER_01:

How so? What's the worst thing an AI can do when it fails?

SPEAKER_00:

Crash, freeze, go silent, or give a cryptic error message. That just confirms all your fears about relying on it. A resilient agent does three things immediately, admits the problem clearly, hands control back to you smoothly, and crucially protects the progress you've already made.

SPEAKER_01:

Let's look at that first UI spotlight: clear error and quick recovery. Say Joy tries to book the flight and the payment fails. High-stress moment.

SPEAKER_00:

Totally. Clarity comes first, even before apology. And no jargon. Instead of "Error 503, payment gateway timeout," Joy should say something calm and specific, like: oh, seems we couldn't process the payment for the flight right now. Sometimes systems get busy. No worries though, your chosen seats are held for the next 10 minutes.

SPEAKER_01:

Okay, that messaging does a few things. It normalizes the error, sometimes systems get busy, reduces panic, and it immediately reassures you your seats are held.

SPEAKER_00:

Exactly. Protection first, and then immediate clear recovery options. The UI instantly shows try payment again or choose a different flight option. And maybe that little countdown timer showing how long the hold lasts, it shifts the user from panic to problem-solving mode. It's designing for recovery, not for impossible perfection.
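
As a sketch, that recovery behavior is a translation layer: a raw technical failure goes in, and a calm message, a protected hold, and concrete next actions come out. The error code and wording are illustrative assumptions:

```typescript
// Hypothetical sketch: translate raw errors into calm, specific messages
// with the user's progress protected and recovery actions attached.
interface RecoveryPlan {
  message: string;           // plain language, no jargon
  holdExpiresInSec?: number; // e.g. seats held while the user decides
  actions: string[];         // immediate next steps in the UI
}

function translateError(code: string): RecoveryPlan {
  switch (code) {
    case "PAYMENT_GATEWAY_TIMEOUT": // what the user would otherwise see as "Error 503"
      return {
        message:
          "Seems we couldn't process the payment for the flight right now. " +
          "Sometimes systems get busy. No worries though: your chosen seats " +
          "are held for the next 10 minutes.",
        holdExpiresInSec: 600, // drives the countdown timer in the UI
        actions: ["Try payment again", "Choose a different flight"],
      };
    default:
      return {
        message: "Something went wrong on our side. Your progress is saved.",
        actions: ["Try again", "Contact support"],
      };
  }
}
```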

SPEAKER_01:

What about protecting progress? If the payment fails or the whole thing crashes midway, do I have to start all over again? That would destroy trust.

SPEAKER_00:

Absolutely destroy it. So resilience requires robust data persistence. The system has to save your state. If you've already confirmed the concert, the seats, the budget, that must be saved securely. If something fails at the hotel booking stage, a resilient design lets you jump right back to that point with all previous choices intact. No punishment for the system's failure.
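
A minimal sketch of that persistence idea: confirmed choices are checkpointed as they happen, so a failure at the hotel stage resumes from the hotel stage. The in-memory store is a stand-in; a real agent would persist durably server-side:

```typescript
// Hypothetical sketch: checkpoint confirmed choices so no failure ever
// sends the user back to the start.
interface TripCheckpoint {
  concert?: { date: string; seats: string };
  hotel?: { name: string };
  flight?: { flightNumber: string };
  budgetRemainingUsd: number;
}

const store = new Map<string, TripCheckpoint>(); // stand-in for durable storage

function saveCheckpoint(userId: string, checkpoint: TripCheckpoint): void {
  store.set(userId, checkpoint);
}

function resume(userId: string): TripCheckpoint | undefined {
  // After a crash or a payment failure, reload everything already confirmed
  // and jump straight back to the failed step.
  return store.get(userId);
}
```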

SPEAKER_01:

So resilience is fundamentally about managing the human reaction to failure. The AI owns the error, clarifies next steps, and respects the user's time and effort.

SPEAKER_00:

That's the core of it. The resilience checklist. Clear, honest, calming error messages, no jargon, easy, practical recovery options, and always, always protect the user's prior work. Every error tests trust. Resilience passes the test.

SPEAKER_01:

Okay. This has been incredibly insightful. We walked through all five pillars. Competence: does it work? Transparency: can I understand it? Predictability: will it act consistently? Alignment: is it on my side?

SPEAKER_00:

Right.

SPEAKER_01:

And resilience. What happens when it breaks?

SPEAKER_00:

And viewed together as a unified framework, they provide a powerful roadmap for designing AI experiences that people don't just use, but genuinely rely on. It applies across the board, really: simple chatbots, complex planning agents like Joy, even fully autonomous systems.

SPEAKER_01:

So for anyone listening who's building AI products, the takeaway seems to be run your concepts through this joy planner lens. Can your AI actually deliver? Can it explain itself clearly? Does it behave predictably? Does it put your needs first? And can it help you recover gracefully when things inevitably go wrong?

SPEAKER_00:

It's a fantastic design audit tool, really.

SPEAKER_01:

Okay, so final thought. We talked a lot about alignment and being honest about sponsored content or business needs. Here's something to chew on. If trust truly is the new interface, how much of a competitive edge could a company gain by designing its AI to be radically honest? To explicitly say, okay, normally I'd suggest X based on your needs, but my business goal requires me to offer Y. Here are both, you decide. Could that level of transparency actually build more loyalty in the long run?

SPEAKER_00:

That's a fascinating question. A really provocative thought to end on.

SPEAKER_01:

That wraps up our deep dive into the architecture of trust in AI. We hope this framework helps you spot where your own interactions with AI are building confidence and maybe where a little redesign could make a world of difference. Thanks for listening to Juicy Talks.