"We Shipped It" Isn’t a WinSetting clear success criteria for product launches is paramount. But most PMs skip this step, often to their own detriment. Without clear success criteria, your launches are at the mercy of opinions. You’ll struggle to prove impact, defend your roadmap, or even explain what “done” means. Good success criteria protect your time, help you tell better stories, and earn trust. FAR too many Product folks confuse success with shipping. I say it all the time, but I'll drop it here again. Successful delivery should be assumed. You have software engineers. And let's be real. You're not exactly building spaceships. Should tech celebrate? Absolutely. But Product, nope. Not really something to pat yourself on the back over. Even many who do track metrics just chase vanity. They don’t set definitions of success early enough. And worst of all, they avoid clarity because it means everyone would see when something flops. If your definition of success is “we launched it,” you haven’t done your job as a Product Manager.Let's dive right in. Takeaway 1: Define Success Before You Start Building Most PMs wait until they're about to ship something to define whether it “worked.” But by then, it’s too late. Your team needs to know what success looks like before a single line of code is written. That means forcing a discussion early: What user behavior are we trying to change? What outcomes are we expecting? What would make us say “This launch was worth it”? You can’t skip this. If you do, everyone will default to easy answers after the fact. The designer will say it looks good. Engineering will say it’s technically sound. The stakeholder will say “I just don’t like it.” And you’ll be stuck in an argument with no common ground. Setting success criteria early gives you leverage. It anchors the conversation. It turns opinions into data. And it helps you say “no” to scope creep by reminding everyone what the original goal was. This doesn’t need to be complicated. A simple success statement like “We’ll consider this launch successful if 25% more users complete onboarding in under 5 minutes within 30 days” is enough. What matters is that it’s specific, measurable, and agreed upon before the work begins. Takeaway 2: Don’t Confuse Output With Outcomes This is where most teams get tripped up. They think the job is done when they ship. They write a release post, update some marketing materials, and celebrate. But no one checks to see if the thing they built actually made a difference. Shipping is not the finish line. It’s the starting point for impact. You didn’t build a feature so you could say you built it. You built it because you thought it would change something. Reduce churn, increase engagement, drive revenue, anything to make someone’s life easier. That’s what you should measure. The problem is, teams tend to over-focus on output because it’s easier to measure and easier to celebrate. “We shipped four features this quarter” sounds impressive to some people. But did those features solve a real problem? Did they move any meaningful metric? Did they even get adopted? If you don’t know, you can’t tell a compelling story about your work. And if you can’t tell a compelling story, you lose influence. You can avoid this trap by constantly asking: “What outcome are we chasing?” If your answer is about activity rather than impact, you're measuring the wrong thing. Takeaway 3: Choose Metrics That Actually Matter Not all metrics are created equal. 
Takeaway 2: Don't Confuse Output With Outcomes

This is where most teams get tripped up. They think the job is done when they ship. They write a release post, update some marketing materials, and celebrate. But no one checks whether the thing they built actually made a difference.

Shipping is not the finish line. It's the starting point for impact. You didn't build a feature so you could say you built it. You built it because you thought it would change something: reduce churn, increase engagement, drive revenue, make someone's life easier. That's what you should measure.

The problem is that teams tend to over-focus on output because it's easier to measure and easier to celebrate. "We shipped four features this quarter" sounds impressive to some people. But did those features solve a real problem? Did they move any meaningful metric? Did they even get adopted? If you don't know, you can't tell a compelling story about your work. And if you can't tell a compelling story, you lose influence.

You can avoid this trap by constantly asking: "What outcome are we chasing?" If your answer is about activity rather than impact, you're measuring the wrong thing.

Takeaway 3: Choose Metrics That Actually Matter

Not all metrics are created equal. Some make you feel good but don't tell you anything useful. Others are harder to move but give you a real signal of progress.

Vanity metrics are the most seductive. Page views. Clicks. Sign-ups. They look great in a deck, but they're often disconnected from real product value. If 10,000 people click on your feature and then immediately bounce, what did you really accomplish?

Good success metrics are tied to behavior change. Did users do something different as a result of what you launched? Did they adopt a new workflow, stick around longer, spend more money, or report fewer issues? You'll need to dig for these. They often require multiple data sources or coordination with analytics teams. But they're worth it. Because when leadership asks, "How did this launch perform?", you can answer with confidence instead of hiding behind data-rich charts that don't really prove anything.

Be wary of picking metrics you think you can move rather than metrics that actually matter. It's easy to game the system when the metric is soft. It's much harder, but more valuable, when the metric reflects actual user behavior.

Takeaway 4: Don't Measure Everything. Measure What's Actionable

One common mistake is trying to track everything. You end up with dozens of dashboards, each telling a slightly different story. The problem isn't that you don't have enough data. It's that you have too much of the wrong data. More metrics don't equal more clarity. They often create confusion.

Focus on a few key signals that help you make decisions. A good metric tells you what to do next. If a metric goes up or down and your only reaction is "interesting," it's probably not useful.

For example, if your goal is to reduce drop-off during checkout, then the only thing you should obsess over is where users are bouncing and why. Tracking session length or general page views might be interesting, but it won't help you fix the problem. (There's a quick sketch of what that kind of funnel analysis could look like at the end of this takeaway.)

This principle applies to both qualitative and quantitative signals. Customer feedback is a goldmine if you know what you're listening for. But collecting vague "how did we do?" surveys without context produces unhelpful noise.

Your time and your team's time are finite. Measure the things that guide action. Let go of the rest.
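Here's the promised sketch of an actionable drop-off report, assuming a hypothetical four-step checkout funnel and session records that capture the last step each session reached. The step names and data shape are illustrative assumptions:

```python
from collections import Counter

# Hypothetical checkout funnel, in order.
FUNNEL = ["cart", "shipping", "payment", "confirmation"]

def drop_off_report(last_steps):
    """For each step: how many sessions reached it, and what share of the
    previous step's sessions survived. The step with the worst survival
    rate is where to dig. Assumes every entry is a valid FUNNEL step."""
    reached = Counter()
    for last in last_steps:
        # A session whose last step was 'payment' also reached 'cart' and 'shipping'.
        for step in FUNNEL[: FUNNEL.index(last) + 1]:
            reached[step] += 1
    report, prev = [], None
    for step in FUNNEL:
        survival = reached[step] / reached[prev] if prev and reached[prev] else 1.0
        report.append((step, reached[step], round(survival, 2)))
        prev = step
    return report

# Example: most of the drop-off happens between 'shipping' and 'payment'.
print(drop_off_report(
    ["cart", "shipping", "shipping", "shipping", "payment", "confirmation"]
))
```

The point isn't the code. It's that the output answers one decision-relevant question (where are we losing people?) instead of adding yet another dashboard.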
Takeaway 5: Revisit Success Criteria After the Launch

It sounds a bit ridiculous, but I believe plenty of product folks avoid this step because it's uncomfortable. They have to go back and review the results with their boss, maybe even the CEO. What if you didn't hit the metric? What if the feature flopped? What if no one used it? Nobody wants to look like they suck in front of a group of people.

But you still have to go back and look. And trust me, leaders want an honest read on what's happening. They're not your parents gearing up to ground you for a bad report card. This is where real product learning happens. Post-launch analysis isn't just a box to check. It's how you become a better decision-maker over time.

If the launch didn't perform, that doesn't mean you failed. It means you learned something. Maybe your hypothesis was off. Maybe adoption took longer than expected. Maybe the metric wasn't the right one. The only real failure is skipping the reflection altogether.

The teams that get better over time are the ones who treat every launch like an experiment. They define a hypothesis, measure the outcome, and apply what they learned to the next thing. That's how compound impact is built.

Even better: share this reflection publicly. You'll build credibility with leadership, show that your team is accountable, and model a healthy product culture. No one expects perfection, but they do expect transparency. If you can't say whether your launch worked or not, people will start to question whether your roadmap is grounded in reality.

In Conclusion

It's easy to get swept up in the speed of delivery, to chase metrics that look good on paper, or to move on quickly from a launch without ever asking the hard questions. But that's not the job. The job is to solve problems that matter. To change behavior. To create impact. And you can't do any of that if you don't define what success looks like and then measure it with discipline.

So before your next launch, pause. Get clear on the outcome you want. Make it measurable. Align your team. Hold yourself accountable after the fact. You'll ship better things, tell better stories, and earn more trust along the way. That's what product management is really about.

Thanks for reading. See you next week.