Good-looking and effective are not the same thing. Most design budgets fund one while the other bleeds revenue. The case for performance design.


Most companies that come to us think they have a design problem.
The diagnosis is almost always wrong. They see a visual identity that feels stale next to competitors' and assume that's where the conversion leak is. It usually isn't.
The brief typically says something like: "We need a refresh" or "It's starting to feel dated" or "Our competitors just launched something really slick." They commission a redesign. New visual language, updated typography, a hero section that looks great in a board presentation. It launches. Conversion doesn't move. Sometimes it drops.
Designers get blamed. The agency gets replaced. And nobody asks the actual question: where in the funnel are people already abandoning before they ever see the new visuals?
That's the part you need to fix first.

There's a version of design that's really about impressing other designers. You see it on portfolio sites, in award submissions, in the work that gets shared on Dribbble and LinkedIn. Experimental navigation. Elaborate scroll animations. Layouts that require a second to figure out. It's clever, it photographs well, and it has essentially nothing to do with whether the product performs.
The industry rewards this relentlessly. Clients get dazzled by it in presentations. Agencies build their reputation on it. And somewhere in the process, the actual question gets quietly dropped: does this help people do what they came here to do?
I've watched good designers get better at winning awards while their clients watch conversion rates flatten.
This isn't new. It's been true as long as design awards have existed. But the stakes have shifted. Beautiful design used to be a barrier to entry. Now it's a commodity. You can get a gorgeous website from any number of agencies, each one more polished than the last. What a gorgeous website won't do, on its own, is move the needle. The companies that win now are the ones where looking good and working well are the same thing. That's the only difference that actually matters anymore.
Performance design sits at the intersection of behavioural psychology, visual design, and conversion data. Leave out any one and you've got something that doesn't work. The hard part isn't the theory. It's that most teams are built to optimise for just one of those three things.
Behavioural psychology first. Understanding why people stop. Where they hesitate, where they abandon, what specific signal triggers doubt at the moment they're about to commit. I've watched teams spend weeks on a button colour when the real problem was three fields earlier in the form, where users genuinely didn't understand what information was being asked. You can't fix what you don't observe.
Visual design is still there. Its job changed, though. It's not about expressing a creative vision anymore. It's about guiding attention and reducing cognitive load. Every element earns its place by serving the user's next action. If it doesn't, it goes. That sounds simple until you have to defend removing something that looks polished.
Conversion data keeps everyone honest. Session recordings, funnel analysis, A/B tests. These aren't there to replace your judgment. They're there to check it, because confident design instincts are wrong often enough that you want the backup.
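The funnel-analysis part is mundane arithmetic, which is rather the point. Here's a minimal sketch in Python of the kind of step-to-step drop-off table we mean; the step names and the `(user_id, step)` event format are hypothetical stand-ins for whatever your analytics tool exports:

```python
def funnel_dropoff(events, steps):
    """Given (user_id, step) events and an ordered list of funnel steps,
    return (step, users_reached, dropoff_from_previous_step) tuples.

    Simplification: we only count whether a user ever reached a step,
    not the order they reached it in.
    """
    reached = {step: set() for step in steps}
    for user_id, step in events:
        if step in reached:
            reached[step].add(user_id)

    report = []
    prev_count = None
    for step in steps:
        count = len(reached[step])
        # Drop-off = share of the previous step's users who never got here.
        dropoff = None if prev_count in (None, 0) else 1 - count / prev_count
        report.append((step, count, dropoff))
        prev_count = count
    return report

# Hypothetical event stream: (user_id, funnel_step)
events = [
    (1, "landing"), (2, "landing"), (3, "landing"), (4, "landing"),
    (1, "form_start"), (2, "form_start"), (3, "form_start"),
    (1, "form_submit"),
]
for step, count, dropoff in funnel_dropoff(events, ["landing", "form_start", "form_submit"]):
    rate = "-" if dropoff is None else f"{dropoff:.0%}"
    print(f"{step}: {count} users, drop-off {rate}")
```

The useful property is that drop-off is measured per transition, not per page, so the argument starts at the worst transition in the table instead of at whichever screen a stakeholder happens to dislike.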

Some success signals in design are worth interrogating, especially when you're trying to actually move conversion numbers.
Award recognition doesn't predict customer behaviour. The criteria for design awards have nothing to do with whether a product converts or retains users or achieves anything for the business. Optimising for awards is a choice, and it comes with consequences. I've never met a customer who stayed longer because the interface won an award.
Stakeholder approval and customer outcomes are different things. A design that passes internal review can still bomb with real users. Stakeholders bring aesthetic preferences, brand opinions, and organisational politics to feedback in ways that don't match how customers actually behave. The design that survives the most meetings is often the design that performs worst.
Pixel perfection is the easiest thing to obsess over. Designers do it constantly: spacing, alignment, perfect corner radius. Meanwhile the page loads in five seconds on a mid-range Android phone and the user is already gone. The person who bounced at second three never saw the perfectly polished button. Google's mobile page speed research is clear on this: as load time goes from one second to three, the probability of a bounce increases by 32%. From one second to five, it increases by 90%. You can't design your way out of physics.
The practice shift isn't as dramatic as the framing suggests. You're still designing. You're still making things look good.
You just start with a different question. Instead of "what would look great here?" it's "what's actually stopping people from completing this?" That reframing pulls you toward different solutions entirely. Shorter forms. Clearer copy. Faster pages. Trust signals positioned where doubt actually appears in the session recording, not scattered decoratively across the footer because it looks balanced.
The reality is harder to present to stakeholders because it's less impressive. Removing a field doesn't look like much. Rewriting a headline doesn't. Surfacing pricing earlier usually makes the interface look messier, not cleaner. But these are the things that shift the conversion line.
Baymard Institute's checkout research quantified this: the average checkout flow contains 23.48 form elements, while the ideal is 12 to 14, and Baymard estimates a 35% conversion improvement is possible through checkout design changes alone, no product work required. That's not award-winning work. It's invisible work. It's the work that shows up in revenue.

Traditional agencies are optimised for delivery, not performance. You get paid for producing work, not for what that work actually does. The rational incentive is to ship something that looks good enough to get approved, satisfy the written brief, and move on to the next project. Whether it converts is someone else's problem.
I'm not being cynical. That's just how the economics work. And it means the people best positioned to fix a conversion leak are usually the last ones measured on whether they actually fixed it.
This is why performance design needs a different contract. You have to be measured on what happens after launch, not just on what you delivered before it. That one change breaks the entire traditional agency model because it means you can't move on. You have to watch the data. You have to defend decisions with session recordings instead of aesthetic judgment. You have to be willing to remove work you're proud of because it doesn't perform.
Most design briefs are written around outputs. "Deliver a new homepage." "Redesign the onboarding flow." "Create a visual system." Reasonable asks, but they measure the wrong thing.
You probably have a brief like this somewhere. Look at it. It's most likely written around what you're going to produce, not what you're trying to achieve.
Performance design briefs start with the outcome instead. Increase form completion by 25%. Reduce drop-off between step two and three of onboarding. Get users to the point where they actually experience why they signed up. Once you have that as the target, the design decisions follow. They have to, because you're going to be measured against whether you hit that number, not whether stakeholders liked the presentation.
This conversation is harder upfront. But it's the only one that makes the work accountable.
If you're not sure where the leak actually is in your funnel, let's look at the data together. Thirty minutes, we trace through session recordings and funnel analysis. The problem usually becomes obvious once you're looking at the right metrics instead of guessing based on board feedback.
We're also building a platform that does this friction audit automatically, at scale. You upload your funnel data and it surfaces where people are actually getting stuck, not where they think they're getting stuck. Private beta opening soon.
Related: Why we built flow-three around this approach · What 290% activation uplift looks like in practice

Let's Talk
First call is always diagnostic. You describe where the numbers feel wrong — most of the time, we can identify the cause before we’ve seen the product.
Not a pitch. A look at the problem together.