The Product Designer’s Pre-Launch Shotgun List: 7 Things to Verify Before You Ship

Shipping a product feature without a final verification is like firing a shotgun blindfolded: you might hit something, but you’ll likely miss the target and create collateral damage. This guide provides a structured, actionable checklist of seven critical verifications every product designer should run before launch. We cover core interaction patterns, error state coverage, accessibility compliance, performance thresholds, localization readiness, analytics instrumentation, and rollback planning.

Introduction: Why Every Launch Needs a Shotgun List, Not a Wish List

Most product designers I have worked with — and I have collaborated with dozens across early-stage startups and mature product teams — share a common pain point: the final hours before launch feel chaotic. You have polished the mockups, handed off specs to engineering, and reviewed the first build. But when the feature hits staging, something always slips. A button label truncates on mobile. The empty state shows a raw error code. The loading spinner spins forever because the API call was never mocked. These small misses erode user trust and force emergency patches after launch. This guide exists to replace that chaos with a repeatable, structured checklist — what we call the shotgun list. The name is intentional: a shotgun spreads shot across a wide area, and this list spreads your attention across the seven most critical verification zones before you ship. By running through this list methodically, you reduce the chance of a post-launch fire drill. We are not covering every edge case in the universe — that is impossible. We are covering the high-impact areas that experienced teams consistently forget. Each section below gives you a concrete process, not just a reminder. Use this list as your final pre-launch ritual.

This overview reflects widely shared professional practices as of May 2026; verify critical details against current official guidance where applicable.

1. Verify Core Interaction Patterns: Do Users Actually Know What to Do?

The most polished UI is worthless if users cannot complete the primary task. Before you ship, you must verify that the core interaction flow — the main action a user takes — is intuitive, error-resistant, and forgiving. Teams often focus on visual polish and forget to test the flow with fresh eyes. I have seen a team spend two weeks perfecting a checkout page animation, only to discover that the "Continue" button was below the fold on a 13-inch laptop because they only tested on a 27-inch monitor. The fix took five minutes. The cost of that oversight was a delayed launch and a frustrated engineering team. To avoid this, start by mapping the primary user journey on paper or in a flow diagram. Identify every decision point, every input field, and every possible exit. Then, walk through the journey in the actual build — not in Figma. Check that each step has a clear, single next action. If a user has to guess what to do next, the interaction pattern is broken. Use the three-click heuristic: a user should be able to reach the core outcome within three clear clicks or taps. If they cannot, simplify the flow. Also verify that the back button or undo action works correctly. Users will make mistakes — your design must forgive them.

A Real-World Example: The Forgotten Confirmation Step

In one project I reviewed, a team built a new account deletion flow. The mockups showed a clean two-step process: confirm email, then click delete. In the staging build, the confirmation email was never sent because the backend integration was mocked. The team only discovered this when a beta tester tried to delete their account and got a generic error. The fix was straightforward — wire up the email service — but the oversight delayed the launch by three days. The lesson: always test the complete flow with real or mocked backend responses, not just the UI states you designed.

How to Verify Interaction Patterns Step by Step

First, create a task list of the top three user goals for this feature. Second, recruit one person who has never seen the feature — a colleague from another team works well — and ask them to complete each task without guidance. Watch where they hesitate, click incorrectly, or give up. Third, document every hesitation point and categorize it as a blocker, a nuisance, or a nice-to-fix. Blockers must be resolved before launch. Nuisances can be noted for the next iteration. Nice-to-fix items can wait. Finally, review the flow against platform conventions. For example, on iOS, swipe-to-delete is standard; if you build a custom gesture, users may not discover it. Stick with platform patterns unless you have strong evidence that a custom interaction improves task completion.

Verifying core interactions is not about perfection — it is about removing obvious barriers. Ship when the top three tasks are smooth and error-free.

2. Error State Coverage: Ship the Broken Experience Before Users Break It

Every digital product will fail at some point. A server goes down. A user types a malformed email. A network request times out on a subway. The difference between a trustworthy product and a frustrating one is how gracefully it handles those failures. Yet, error states are the most commonly skipped design artifacts in the pre-launch rush. I have audited dozens of staging builds where the "happy path" was pixel-perfect, but the error states were either missing entirely or showed raw JSON responses. This is a trust killer. When a user sees a cryptic error code, they do not blame the backend — they blame the product. To verify error state coverage, start by listing every possible failure point in your feature. This includes network timeouts, empty data sets, invalid input formats, expired sessions, permission denials, and server 500 errors. For each failure point, verify that the UI shows a human-readable message, a clear next action (retry, go back, contact support), and a visual indicator that the system is still responsive (not frozen). Use a checklist or a table to track coverage.
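To make this concrete, here is a minimal TypeScript sketch of a centralized error-copy table that maps each failure mode to a human-readable message and a recovery action. The ErrorKind names and UserError shape are illustrative, not from any particular framework.

```typescript
// Illustrative failure modes; extend with the ones specific to your feature.
type ErrorKind =
  | "network_timeout"
  | "empty_data"
  | "invalid_input"
  | "session_expired"
  | "permission_denied"
  | "server_error";

interface UserError {
  message: string; // human-readable, never a raw code or JSON blob
  action: "retry" | "go_back" | "sign_in" | "contact_support";
}

// One table for the whole feature makes coverage easy to audit.
const ERROR_COPY: Record<ErrorKind, UserError> = {
  network_timeout: { message: "We couldn't reach the server. Check your connection and try again.", action: "retry" },
  empty_data: { message: "Nothing here yet. Add your first item to get started.", action: "go_back" },
  invalid_input: { message: "Some fields need attention before you can continue.", action: "go_back" },
  session_expired: { message: "Your session has expired. Please sign in again.", action: "sign_in" },
  permission_denied: { message: "You don't have access to this page.", action: "contact_support" },
  server_error: { message: "Something went wrong on our end. Please try again in a moment.", action: "retry" },
};

function toUserError(kind: ErrorKind): UserError {
  return ERROR_COPY[kind];
}
```

Because the table is typed, adding a new ErrorKind without copy fails to compile, which turns the coverage checklist into something the build itself enforces.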

Common Error State Failure Modes

One common mistake is showing a generic "Something went wrong" toast for every error. This is better than a raw error code, but it is still unhelpful. Users need to know what went wrong and what they can do about it. For example, if a file upload fails because the file is too large, tell the user the maximum file size and let them try again with a smaller file. Another failure mode is hiding errors until the user submits a form. Instead, validate inputs inline as the user types. This reduces frustration and prevents the user from completing a form only to discover multiple errors at submission. A third failure mode is forgetting to handle the "no data" state. If a user navigates to a dashboard that has no data yet, show a helpful empty state with an illustration, a message explaining why the data is missing, and a clear call to action — like "Add your first item." Empty states are opportunities to guide users, not dead ends.
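Inline validation does not require much machinery. Below is a hedged sketch of one common approach, checking an email field when it loses focus; the #email input and #email-error hint element are hypothetical markup.

```typescript
// Returns a human-readable problem, or null when the value is acceptable.
function validateEmail(value: string): string | null {
  if (value.trim() === "") return "Email is required.";
  // Simple shape check; production code may prefer a vetted validation library.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)) {
    return "That doesn't look like a valid email address.";
  }
  return null;
}

const emailInput = document.querySelector<HTMLInputElement>("#email");
const emailHint = document.querySelector<HTMLElement>("#email-error");

if (emailInput && emailHint) {
  // Validate on blur so the user sees the problem at the field, not at submit.
  emailInput.addEventListener("blur", () => {
    const error = validateEmail(emailInput.value);
    emailHint.textContent = error ?? "";
    emailHint.hidden = error === null;
  });
}
```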

How to Audit Error States in Your Build

Set up a dedicated testing session where you intentionally trigger each failure mode. Block network requests in the browser dev tools to simulate timeouts. Submit forms with empty fields, invalid emails, and excessively long strings. Log out mid-session and try to access protected pages. For each scenario, take a screenshot and compare it to your design specs. If the error state does not match your design, or if it does not exist, add it to the fix list. Do not ship until all critical error states are covered with appropriate messaging and recovery actions. Your users will thank you when the inevitable failure occurs.

3. Accessibility Compliance: Ship for Everyone, Not Just the Ideal User

Accessibility is not a checkbox to tick after launch — it is a fundamental quality criterion. When you ship a feature that is inaccessible to users with visual, motor, or cognitive disabilities, you are excluding a significant portion of your audience and, in many jurisdictions, exposing your company to legal risk. Beyond compliance, accessible design improves the experience for all users. Captions help users in noisy environments. High contrast helps users in bright sunlight. Keyboard navigation helps power users who prefer shortcuts. Verifying accessibility before launch does not require a full audit by a specialist. You can catch the most common issues with a few automated and manual checks. Start with automated tools like axe DevTools or WAVE. These tools scan your page and flag issues such as missing alt text, low color contrast, missing form labels, and incorrect heading hierarchy. Run these tools on every page or screen in your feature. Fix all critical and serious issues before launch. However, automated tools catch only about 30% of accessibility issues. You must also perform manual checks.
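On the automated side, a short script can make the scan repeatable and fail the build on critical findings. This sketch assumes the axe-core npm package and its documented axe.run API, running in a browser context.

```typescript
import axe from "axe-core";

// Scan the current page and fail loudly on critical or serious violations.
async function auditPage(): Promise<void> {
  const results = await axe.run(document);
  const blockers = results.violations.filter(
    (v) => v.impact === "critical" || v.impact === "serious"
  );
  for (const violation of blockers) {
    console.warn(`${violation.id}: ${violation.description}`);
    console.warn(`  affected elements: ${violation.nodes.length}`);
  }
  if (blockers.length > 0) {
    throw new Error(`${blockers.length} critical/serious accessibility issues found`);
  }
}
```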

Manual Accessibility Checks You Can Run in 15 Minutes

First, navigate your entire feature using only the keyboard. Use Tab to move between interactive elements, Enter to activate buttons, and Escape to close modals. Can you reach every interactive element? Do you know where you are at all times? If the focus indicator is missing or hard to see, add a visible focus ring. Second, test with a screen reader. On macOS, VoiceOver is built in; on Windows, NVDA is free. Turn on the screen reader and navigate your feature without looking at the screen. Does the screen reader announce the page title, headings, form labels, and button actions? If it reads "unlabeled button" or skips content, fix the underlying code. Third, check color contrast. Use a contrast checker tool to verify that text meets the WCAG AA standard (4.5:1 for normal text, 3:1 for large text). Pay special attention to placeholder text, disabled buttons, and links within paragraphs — these are common contrast failures.
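The AA thresholds come from a concrete formula, so contrast is also checkable in code. This worked example implements the WCAG 2.x relative-luminance and contrast-ratio math for sRGB colors.

```typescript
// Relative luminance per the WCAG 2.x definition; channels are 0-255.
function relativeLuminance(r: number, g: number, b: number): number {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05), from 1:1 up to 21:1.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Mid-gray #767676 on white is roughly 4.54:1, just past the AA bar for normal text.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2));
```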

When to Delay a Launch for Accessibility Issues

If you discover that a core interaction — like filling out a form or completing a purchase — is impossible with a screen reader or keyboard, you should delay the launch. This is not a nice-to-have; it is a fundamental usability blocker. For less critical issues, such as missing alt text on a decorative image, you can ship and fix in a follow-up sprint. Use your judgment, but always prioritize flow-breaking issues. A good rule of thumb: if a user with a disability cannot complete the primary task, do not ship.

4. Performance Thresholds: Ship Fast, Not Just Pretty

Users are impatient. Industry studies have repeatedly linked each additional second of load time to measurable drops in conversion and engagement. Even if you are not building a consumer app, slow performance erodes trust and increases cognitive load. Before you ship, you must verify that your feature meets basic performance thresholds. This does not mean you need to run a full performance budget analysis every time — but you should check three things: time to interactive, perceived load time, and responsiveness to user input. Start by loading your feature on a throttled network connection. Use Chrome DevTools to simulate a slow 3G connection. Does the page load in under three seconds? If not, identify the heaviest assets — large images, unoptimized JavaScript, or excessive API calls — and optimize them. Lazy-load images below the fold. Compress and resize images to the display size. Minify JavaScript and CSS. If you are shipping a single-page application, verify that the initial bundle size is under 200 KB (gzipped) for a reasonable first-load experience.

Real-World Example: The Dashboard That Took 12 Seconds to Load

I worked with a team that built a data-rich dashboard with multiple charts, filters, and real-time updates. On a local development server, everything felt instant. But when they tested on a throttled connection, the dashboard took over 12 seconds to become interactive. The culprit was a single unoptimized chart library that loaded 2 MB of JavaScript before rendering anything. The team replaced it with a lightweight alternative, and the load time dropped to under three seconds. The fix took two hours but saved the launch from a likely wave of user complaints. The lesson: always test on a realistic network, not just localhost.
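A common fix for this failure pattern is code splitting: load the heavy library only when its screen actually needs it. A minimal sketch, where "./heavy-chart" and renderChart are hypothetical names standing in for whatever charting module you use:

```typescript
async function renderDashboard(container: HTMLElement): Promise<void> {
  // A cheap placeholder paints immediately, so the page feels responsive.
  container.textContent = "Loading chart…";

  // The chart code downloads only when this function runs,
  // keeping it out of the initial bundle.
  const { renderChart } = await import("./heavy-chart");
  renderChart(container);
}
```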

How to Set and Enforce Performance Budgets

Before you start designing, agree on performance budgets with your engineering team. For example: the feature must load in under three seconds on a 3G connection, the JavaScript bundle must be under 150 KB, and the first paint must happen within one second. Use tools like Lighthouse or WebPageTest to measure these metrics during development. If a design decision — such as adding a high-resolution hero image — pushes you over the budget, either optimize the asset or reconsider the design. Make performance a design constraint, not an afterthought.
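One concrete check you can wire up in the browser is watching Largest Contentful Paint with the standard PerformanceObserver API. The 2.5-second threshold below is an example budget, not a universal rule; set yours with your engineering team.

```typescript
const LCP_BUDGET_MS = 2500; // example budget; agree on the real number as a team

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the latest LCP candidate wins
  if (lcp && lcp.startTime > LCP_BUDGET_MS) {
    console.warn(
      `LCP ${Math.round(lcp.startTime)} ms exceeds the ${LCP_BUDGET_MS} ms budget`
    );
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
```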

5. Localization Readiness: Ship for the World, Not Just Your Locale

If your product supports multiple languages or regions, localization readiness is a critical pre-launch verification. Even if you are only shipping in English today, if you plan to expand later, designing with localization in mind saves massive rework. Common localization pitfalls include hardcoded strings, text that overflows containers in longer languages, date and number formats that assume US conventions, and images with embedded text that cannot be translated. To verify localization readiness, start by checking that all user-facing strings are externalized — meaning they are stored in a separate file or database, not hardcoded in the code. Work with your engineering team to confirm that the codebase uses a standard internationalization library. Next, test your UI with pseudo-localization. Pseudo-localization replaces each character with an accented version and expands the string length by 30-50%. This reveals layout issues before you invest in real translations. Run pseudo-localization on every screen and check for text truncation, overlapping elements, and broken layouts.
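Pseudo-localization is simple enough to sketch in a few lines. This illustrative version swaps common letters for accented look-alikes, pads the string by roughly 40%, and wraps it in brackets so any hardcoded (untransformed) copy stands out on screen.

```typescript
const ACCENTED: Record<string, string> = {
  a: "á", e: "é", i: "í", o: "ó", u: "ú",
  A: "Å", E: "É", I: "Î", O: "Ö", U: "Ü",
};

function pseudoLocalize(text: string): string {
  const accented = [...text].map((ch) => ACCENTED[ch] ?? ch).join("");
  const padding = "~".repeat(Math.ceil(text.length * 0.4)); // simulate longer languages
  return `[${accented}${padding}]`; // brackets reveal strings that skipped translation
}

console.log(pseudoLocalize("Search")); // "[Séárch~~~]"
```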

Common Localization Failures and How to Fix Them

One common failure is designing a component with a fixed width that works for a short English label like "Search" but breaks for longer translations such as the French "Rechercher", or for "Settings" next to the German "Einstellungen". The fix is to use flexible containers that grow or shrink based on content, or to set a generous max-width with text truncation only as a last resort. Another failure is using images with embedded text, such as a banner that says "Limited Time Offer" in English. When you localize, you would need a separate image for each language, which is expensive and error-prone. Instead, overlay text on the image using CSS or a dynamic text layer. A third failure is assuming a left-to-right layout. If you plan to support Arabic or Hebrew, your design must handle right-to-left text direction. Test with a right-to-left locale early to catch layout issues like misaligned icons or reversed button orders.

Step-by-Step Localization Verification Checklist

First, extract all strings and verify they are in the translation file. Second, run pseudo-localization and capture screenshots of every screen. Third, review date formats (DD/MM/YYYY vs. MM/DD/YYYY), number formats (comma vs. period as decimal separator), and currency symbols (placement before or after the number). Fourth, test with at least one real translation — even if it is machine-translated — to catch layout issues. Fifth, check that sorting and filtering logic works correctly for non-English characters (e.g., accented characters in French). Ship only when all screens display correctly with pseudo-localization and at least one real translation.
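For the date, number, and currency checks in step three, lean on the standard Intl API rather than hand-formatting values; the locales and figures below are examples.

```typescript
const releaseDate = new Date(2026, 4, 15); // 15 May 2026

console.log(new Intl.DateTimeFormat("en-US").format(releaseDate)); // 5/15/2026
console.log(new Intl.DateTimeFormat("de-DE").format(releaseDate)); // 15.5.2026

console.log(new Intl.NumberFormat("en-US").format(1234.5)); // 1,234.5
console.log(new Intl.NumberFormat("de-DE").format(1234.5)); // 1.234,5

// Currency symbol placement differs by locale, too.
const price = new Intl.NumberFormat("fr-FR", { style: "currency", currency: "EUR" });
console.log(price.format(9.99)); // 9,99 €
```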

6. Analytics Instrumentation: Ship with Visibility, Not Blindness

Launching a feature without analytics is like firing a shotgun in the dark — you have no idea if you hit the target. Before you ship, verify that your feature is properly instrumented to capture key events, user actions, and error signals. Without this data, you cannot measure success, diagnose issues, or iterate effectively. Start by defining the success metrics for your feature before you design it. For a new onboarding flow, the success metric might be completion rate. For a checkout redesign, it might be conversion rate or average cart value. For each metric, define the specific events you need to track: page views, button clicks, form submissions, error occurrences, and drop-off points. Work with your engineering team to implement these events using your analytics platform — whether that is Amplitude, Mixpanel, Google Analytics, or a custom solution. Then, verify that the events fire correctly in the staging environment. Use the analytics platform’s debug mode or a browser extension to inspect events as you interact with the feature. Do not assume the events work just because the code is merged — test them manually.

Common Analytics Verification Failures

One frequent issue is tracking the same event under different names in different parts of the app. For example, the login flow might fire a "sign_in" event while the checkout flow fires a "user_login" event. This makes it impossible to compare data across funnels. Standardize event naming before implementation. Another issue is missing properties. A "purchase_completed" event without a property for the product ID or price is nearly useless. Define required properties for each event and verify they are populated. A third issue is firing events on every keystroke instead of on blur or submission, which floods the analytics pipeline with noise. Ensure events fire at the right moment — typically on user intent, not on every interaction.
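One way to enforce consistent names and required properties is to encode the event schema in types, so the compiler rejects a stray "user_login" or a purchase event with no price. A hedged TypeScript sketch; the event names and the analytics object are illustrative, not any vendor's SDK.

```typescript
type AnalyticsEvent =
  | { name: "sign_in"; properties: { method: "email" | "google" | "sso" } }
  | { name: "purchase_completed"; properties: { productId: string; priceCents: number; currency: string } }
  | { name: "checkout_error"; properties: { step: string; message: string } };

// Stand-in for your real analytics client.
const analytics = {
  track(name: string, properties: object): void {
    console.log("track:", name, properties);
  },
};

function track(event: AnalyticsEvent): void {
  analytics.track(event.name, event.properties);
}

track({
  name: "purchase_completed",
  properties: { productId: "sku_123", priceCents: 4999, currency: "USD" },
});
```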

How to Run an Analytics Verification Session

Create a test script that walks through the primary user journey step by step. For each step, note the expected event name and properties. Then, execute the script in the staging environment while watching the analytics debugger. Mark each event as pass or fail. If any event is missing, has the wrong name, or lacks required properties, file a bug and do not ship until it is fixed. Also test edge cases: what happens when a user abandons the flow midway? Does the analytics capture the drop-off point? What happens when an error occurs? Does the error event include the error message and stack trace? Ship only when all critical events fire correctly.

7. Rollback Plan and Feature Flags: Ship with a Safety Net

No matter how thorough your pre-launch verification is, something will go wrong in production. A regression slips through. A third-party API rate-limits your traffic. A user discovers a security vulnerability. When the unexpected happens, you need a fast, reliable way to revert the feature without deploying a full code rollback. This is where feature flags and rollback plans come in. Before you ship, verify that your feature is wrapped in a feature flag that allows you to turn it off instantly without redeploying. The flag should be server-side, not client-side, so that even if a user has a cached version of the app, you can disable the feature for all users. Work with your engineering team to confirm that the feature flag is implemented correctly and that toggling it off does not cause errors or broken layouts. Test the rollback process in staging: turn the flag off and verify that the user sees the previous version of the product without any glitches. Also verify that the flag can be toggled for a specific user or a percentage of users, which allows for gradual rollouts and A/B testing.
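Percentage rollouts usually depend on deterministic bucketing, so each user stays in the same cohort as you ramp up. A minimal server-side sketch with an illustrative flag name and a toy FNV-1a hash; real systems typically use a vendor SDK or a stronger hash such as murmur3.

```typescript
// Deterministic bucket in 0..99: the same user + flag always lands in the same place.
function bucketOf(userId: string, flagName: string): number {
  let hash = 2166136261; // FNV-1a offset basis
  for (const ch of `${flagName}:${userId}`) {
    hash ^= ch.charCodeAt(0);
    hash = Math.imul(hash, 16777619); // FNV prime, kept in 32 bits
  }
  return Math.abs(hash) % 100;
}

function isEnabled(userId: string, flagName: string, rolloutPercent: number): boolean {
  return bucketOf(userId, flagName) < rolloutPercent;
}

// Ramp from 1 to 100 by changing rolloutPercent server-side;
// setting it to 0 is the instant kill switch described above.
console.log(isEnabled("user_42", "new_checkout", 10));
```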

Creating a Rollback Runbook

A rollback runbook is a document that outlines exactly what to do if the feature needs to be disabled. It should include: who has permission to toggle the flag, the exact steps to toggle it off, how to monitor the impact (e.g., check error rates and support tickets), and how to communicate the decision to stakeholders. Share this runbook with your team before launch. I have seen teams scramble during an incident because the person who knew how to toggle the flag was on vacation, and no one else had the credentials. Avoid this by documenting the process and granting access to at least two team members.

When to Use a Gradual Rollout vs. Full Launch

For high-risk features — such as changes to the checkout flow, authentication, or data storage — consider a gradual rollout. Start with 1% of users, monitor for 24 hours, then increase to 10%, then 50%, and finally 100%. This allows you to catch issues early without affecting all users. For low-risk features — such as a minor UI tweak or a new informational page — a full launch may be acceptable. Use your judgment based on the feature’s impact on core functionality and data integrity. A gradual rollout is always safer, but it requires more coordination and monitoring. Ship with a safety net, and you will sleep better after launch.

Conclusion: Make the Shotgun List Your Pre-Launch Ritual

These seven verifications — core interaction patterns, error state coverage, accessibility compliance, performance thresholds, localization readiness, analytics instrumentation, and rollback planning — form a comprehensive pre-launch checklist that covers the highest-risk areas. By running this list before every launch, you replace last-minute panic with structured confidence. You will catch issues early, reduce post-launch incidents, and build trust with your users and your team. The key is to make this list a ritual, not a one-time exercise. Print it out, add it to your project management tool, or pin it to your team’s wiki. Customize it for your product’s context — add items specific to your domain, such as security reviews for fintech apps or compliance checks for healthcare products. Over time, the shotgun list becomes second nature, and your launches will feel smoother, faster, and less stressful. Ship with confidence, but always verify first.

About the Author

This article was prepared by the editorial team for this publication. We focus on practical explanations and update articles when major practices change.

Last reviewed: May 2026
