Design QA Is Not a Gate. It's a Habit
“If you think good design is expensive, you should look at the cost of bad design.”
Most design teams eventually arrive at the same problem: quality is inconsistent, and no one quite knows where the standard lives. Work looks great in Figma, then ships with misaligned spacing. Components drift from the design system. Edge cases that were flagged but never resolved quietly make it to production. It’s a world gone mad!
The instinctive response is to add a gate. A final “design QA” check before delivery. A sign-off step that ensures nothing slips through.
It's a reasonable idea. But if the gate is the only intervention, it's already too late.
Design QA isn't a destination at the end of the process. It's a discipline that runs through the whole design and development lifecycle.
How the "Gate" Model Fails
The appeal of a design QA gate is obvious: it creates accountability, it's visible in the workflow, and it gives design a formal moment to catch issues before they reach production.
However, the problem is what happens upstream. If designers know there's a gate at the end, there's an implicit assumption that quality will be verified then. Work gets delivered. Bugs and deficiencies surface. And suddenly the gate becomes a bottleneck, because there isn't enough time to fix what's been uncovered.
The real question isn't "did we catch it at the gate?" It's "why did it make it to the gate in the first place?"
"Validation needs to be happening continuously, so that there are no surprises, and bugs or deficiencies are uncovered with enough time for engineering to fix. This 'gate' is really just a sign off that we've done the latter."
That's the key reframe: the gate isn't where quality happens. It's where you confirm quality already happened.
What Continuous Design QA Actually Looks Like
Continuous design QA means building quality checks into the natural rhythm of the work, not as separate events, but as integrated moments throughout the epic and story lifecycle.
In practice, that looks like:
During design
Checking designs against the design system before sharing with the team, not as a final step, but as a working habit. Does this use the right components? Are tokens applied correctly? Is this consistent with adjacent flows?
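The "are tokens applied correctly?" check can even be partially automated. A minimal sketch, assuming a hypothetical export of the token names a design uses and an approved list from the design system (all token names here are illustrative, not from any real system):

```python
# Hypothetical design-lint sketch: flag token names used in a design
# that aren't part of the design system's approved set.
APPROVED_TOKENS = {
    "color.text.primary",
    "color.bg.surface",
    "space.200",
    "space.400",
}

def lint_tokens(used_tokens):
    """Return the tokens in `used_tokens` that drift from the approved set."""
    return sorted(set(used_tokens) - APPROVED_TOKENS)

# One approved token, one off-scale spacing value, one rogue color:
drifted = lint_tokens(["color.text.primary", "space.300", "color.brand.teal"])
# drifted -> ["color.brand.teal", "space.300"]
```

Even a crude check like this turns "is this consistent with the system?" from a judgment call into a habit that runs before the work is shared.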
During implementation
Reviewing work in a proper environment as stories move through development, not just checking the Figma file. Designers should be checking their own work against what's actually being built, on a real device, in a real environment, early enough that engineering has time to respond.
I used to bug developers before they committed work to Git, so they'd send me screenshots just to shut me the hell up. It was crude, but at least I knew they were on the right track.
During review/test
Design participates in the review stage, not just as an observer, but as an active verifier. If a story is in test and a designer hasn't looked at it in a proper environment, that's too late.
At the gate
The final sign-off is a confirmation, not a discovery. If continuous validation has happened, there should be no surprises. That “gate” becomes a lightweight verification that the work meets the standard, not a triage session.
Defining the Standard (So "Good Enough" Has a Meaning)
One of the hardest parts of design QA isn't the process; it's the standard. "High quality" means different things to different designers, different engineers, and different product managers. Without a shared definition, QA becomes subjective and inconsistent.
The central question your team has to answer is: What needs to be true for design QA to pass?
A working framework for answering this has to account for a few dimensions:
Consistency with the design system - Are the right components, tokens, and patterns being used? Is anything drifting from the established system?
Completeness of states - Are all interaction states covered: empty, loading, error, success, edge cases? Has anything been flagged and left unresolved?
Experience coherence - Does the implemented experience match the intended design? Not pixel-perfect in every case, but close enough that the user experience is not degraded.
Risk calibration - Not everything needs the same level of scrutiny. A story using entirely existing components and patterns is lower risk than one introducing a new interaction or a novel pattern. The standard stays constant — but how much verification effort is required scales with risk.
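One way to make the risk-calibration idea concrete is to treat the checklist itself as constant while scaling how much of it a story triggers. A sketch, with every category and check name hypothetical rather than a prescribed standard:

```python
# Sketch: constant standard, risk-scaled verification effort.
# All check names are illustrative examples, not a real team's checklist.
BASE_CHECKS = [
    "components match the design system",
    "tokens applied correctly",
    "spacing verified against spec",
]
STATE_CHECKS = ["empty", "loading", "error", "success", "edge cases"]
NOVEL_PATTERN_CHECKS = [
    "new pattern reviewed with the team",
    "verified on a real device in a real environment",
    "accessibility pass on the new interaction",
]

def qa_checklist(uses_new_pattern: bool, touches_states: bool) -> list:
    """Build the verification list for a story based on its risk profile."""
    checks = list(BASE_CHECKS)
    if touches_states:
        checks += ["state covered: " + s for s in STATE_CHECKS]
    if uses_new_pattern:
        checks += NOVEL_PATTERN_CHECKS
    return checks

# A story reusing only existing components gets a lightweight check (3 items);
# a novel interaction touching every state gets the full treatment (11 items).
light = qa_checklist(uses_new_pattern=False, touches_states=False)
heavy = qa_checklist(uses_new_pattern=True, touches_states=True)
```

The point isn't the code; it's that "how much scrutiny does this story need?" becomes a deliberate, inspectable decision rather than each designer's gut feel.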
Design QA and the Design System
Design QA and design system health are inseparable. A design system only works if it's actually being used, consistently, correctly, and with enough trust that designers don't go off-road to solve problems the system has already solved.
This creates a feedback loop. When designers do QA, they're also auditing the design system. They see where components are being misused, where the system has gaps, and where documentation is unclear. That signal, fed back into the system, makes future work more consistent and future QA faster.
The inverse is also true: a weak or underdocumented design system makes QA harder, because there's no clear reference point for what "correct" looks like. Teams that invest in their design system are also investing in the efficiency of their QA practice, whether they think of it that way or not.
Keeping It Alive When Teams Are Thin
The first thing deprioritized when designers are stretched is QA. Not because anyone decides it's not important. Because it's invisible. There's no ticket, no ceremony, no status update. It just quietly stops happening.
A few things prevent this:
Make QA trackable - subtasks, acceptance criteria, or a step in your workflow. Invisible labor doesn't get done when things get hard.
Calibrate effort to actual risk - a lightweight check for low-stakes stories, more thorough verification for high-risk work, not one uniform process.
Share it with engineering - designers catch visual and experience issues; engineers catch implementation divergence. Neither group is the backup for the other.
When recurring issues show up in QA, that's not just a defect to close. It's a signal about something upstream that needs to change.