A Leader's Guide to Metrics Reviews


Ada and I both had the privilege of working at two data-driven companies, LinkedIn and SurveyMonkey, led by two analytically rigorous leaders, Jeff Weiner and the late Dave Goldberg. Those experiences shaped the way we both now think about building an effective data-driven product culture. One practice both companies established was a weekly executive-level metrics review. LinkedIn had two such meetings: the first was a member value meeting focused on the consumer experience, and the second was a monetization meeting covering each of the company's business lines. SurveyMonkey had a single meeting called ACER, short for acquisition, conversion, engagement, retention, which covered those funnels along with the A/B tests running across the company. I've come to believe that establishing such a metrics review meeting is critical for developing an effective data-driven culture, and I wanted to share some best practices for doing so.

Why metrics reviews matter
There are three primary benefits to establishing weekly metrics reviews.

First, I'm a firm believer that the best leaders don't simply delegate and get out of the way. Instead, they establish strong systems of accountability as well as systems of inspection. A strong system of inspection enables leaders to regularly monitor progress, share best practices and learnings, and maintain a high quality bar. Weekly metrics reviews are a fantastic system of inspection that enables you to accomplish all of this. There are real downsides to not having such a system in place. Ada was recently talking to an investor about one of his portfolio companies, and he recounted how the marketing team there had decided to completely redesign the company's marketing site to give it a far more modern look and feel. Months later they noticed a meaningful decline in revenue, but it took them another couple of weeks to realize that the marketing site redesign was the culprit. Having a weekly metrics review in place helps ensure you never go that long before identifying and correcting such a mistake.

Another key benefit of establishing weekly metrics reviews is that it enforces a culture of data-driven decision-making. Too often companies say they are data-driven, but when I actually look inside to see how teams are operating, I hear anecdotes like the following:
  • "I look at dashboards when I get a chance, but there is so much going on that I'm not looking at them regularly."
  • "I can't trust the dashboards because we've had so many data quality issues in the past."
  • "Whenever I look at the data, there are just so many random spikes and drops that I can't make any sense of it."

All of these are symptoms of not truly having a data-driven culture. A weekly metrics review solves for this by requiring key stakeholders to review the metrics every single week and by creating a forum to discuss the challenges they encounter.

The third benefit is that well-executed metrics reviews help develop your team's collective product intuition. When you regularly review metrics, hear an analysis and explanation for each trend you discover, and internalize the results of A/B tests, you start to form opinions about what actually moves the needle and what doesn't. Doing this regularly and seeing hundreds of such tests and trends is what ultimately builds your product intuition. That intuition then helps you shortcut roadmap prioritization because you already have a feel for which types of initiatives will work for your business, helping you get far more leverage out of your existing team's resources.

Modeling your business
The best place to start is not by just throwing together a dashboard of metrics. Instead, you want to start by modeling out your business. This involves breaking down the ultimate outcome metrics you are trying to increase into the sub-metrics that are important drivers of each. You then recursively break down each of those sub-metrics into its own drivers, and continue doing so until you get to fairly atomic measures that are actionable.

Let's take a look at a quick example of this for a SaaS business with free trials. You're ultimately trying to maximize revenue, so that's your highest-level outcome metric. But the reality is that revenue is a trailing measure: by the time you notice a change in your revenue, the actions you could have taken to affect it were weeks if not months prior. Instead, we want to get to leading indicators that help us understand what's going on much earlier in the user journey. So let's get to work breaking revenue down into its sub-metrics. We start by dividing it into paid subscribers times ARPU (average revenue per subscriber). You can then further break down paid subscribers into existing subscribers + new subscribers - churned subscribers. New subscribers can be further broken down into trial starts times trial-to-paid conversion rate. You can keep going down to acquisition metrics, including trial starts by acquisition source. ARPU can similarly be broken down into ARPU per plan type and the percentage of subscribers purchasing each plan type. I hope this gives you a sense of how to go about modeling your business. You can perform a similar exercise for your own business, regardless of whether it's SaaS, consumer, e-commerce, or something else entirely.
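To make this concrete, here's a minimal sketch of that decomposition in code. The structure mirrors the breakdown above; the specific numbers are purely illustrative placeholders, not benchmarks.

```python
# A minimal sketch of the revenue decomposition described above.
# All numbers are illustrative placeholders, not benchmarks.

# Acquisition: trial starts by source roll up into total trial starts.
trial_starts_by_source = {"organic": 1200, "paid": 800, "referral": 300}
trial_starts = sum(trial_starts_by_source.values())

# Conversion: new subscribers = trial starts x trial-to-paid conversion rate.
trial_to_paid_rate = 0.12
new_subscribers = trial_starts * trial_to_paid_rate

# Retention: paid subscribers = existing + new - churned.
existing_subscribers = 10_000
monthly_churn_rate = 0.03
churned_subscribers = existing_subscribers * monthly_churn_rate
paid_subscribers = existing_subscribers + new_subscribers - churned_subscribers

# Monetization: ARPU is the mix-weighted average of revenue per plan type.
plan_mix = {"basic": 0.70, "pro": 0.25, "enterprise": 0.05}   # share of subscribers
plan_price = {"basic": 10, "pro": 30, "enterprise": 100}      # $/subscriber/month
arpu = sum(plan_mix[p] * plan_price[p] for p in plan_mix)

# Outcome: revenue = paid subscribers x ARPU.
revenue = paid_subscribers * arpu
print(f"Trial starts: {trial_starts:,}")
print(f"Paid subscribers: {paid_subscribers:,.0f}")
print(f"ARPU: ${arpu:.2f}, monthly revenue: ${revenue:,.0f}")
```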

The reason this exercise is so powerful is that it helps you get concrete about the levers that actually drive your business. By then filling in the model with the actual metric values, you can quickly see which of those levers have the most potential for impact.
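As a rough illustration of that kind of lever comparison, the sketch below nudges each leaf metric by the same relative amount and recomputes revenue, reusing the hypothetical baseline numbers from the model sketch above.

```python
def monthly_revenue(trial_starts=2_300, trial_to_paid=0.12,
                    existing=10_000, churn=0.03, arpu=19.5):
    """Recompute revenue from the model's leaf metrics (hypothetical baseline)."""
    new = trial_starts * trial_to_paid
    paid = existing + new - existing * churn
    return paid * arpu

baseline = monthly_revenue()

# Improve each lever by the same relative amount (10%) and compare the impact.
scenarios = {
    "trial starts +10%": monthly_revenue(trial_starts=2_300 * 1.10),
    "conversion +10%":   monthly_revenue(trial_to_paid=0.12 * 1.10),
    "churn -10%":        monthly_revenue(churn=0.03 * 0.90),
    "ARPU +10%":         monthly_revenue(arpu=19.5 * 1.10),
}

for name, revenue in scenarios.items():
    print(f"{name:>18}: {(revenue / baseline - 1) * 100:+.2f}% revenue")
```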

Constructing your dashboard
Picking the metrics to include in your metrics review dashboard is not as simple as including every metric you uncovered in your model. That would likely be far too many measures to look at. Metric selection is subject to the Goldilocks principle: with too few metrics you won't have enough visibility into what's going on with the business; with too many, it will all feel like noise and it will be hard to wrap your head around the key drivers. You have to find just the right number somewhere in the middle, and this often ends up being an iterative process.

You should include metrics that the team is actively trying to move, since you'll want to know whether your initiatives are moving the needle. This should include all of the key results you've defined as part of your OKRs. You'll also want to include metrics that tend to move, since you'll want to keep an eye on them. On the other hand, you might skip low-volume metrics that mostly just add noise. For example, if 85% of your customers come from the United States, you might include the percentage of subscribers from the US vs. the rest of the world as a valuable metric in the dashboard, but not break it down by every single country.

You may also choose to temporarily elevate metrics based on specific challenges the business is facing. For example, if you are working to improve your customer service response times, you might include customer service request volume and median response time in your dashboard. But you don't need them in there forever, since they may not be critical business drivers.

It's equally important to think through the presentation of your metrics. It's often helpful to lay them out similarly to your model so it's easy to see how one metric flows into another. I've found that color-coding the week-over-week and year-over-year percent changes green or red, based on whether each metric is up or down, is incredibly helpful for quickly scanning for areas to focus on. Sometimes trendlines next to a few key metrics are helpful for seeing how a measure is trending over a longer period of time. While the most data-savvy are comfortable with just a raw table of data, these presentation tweaks go a long way toward making the metrics dashboard accessible to a broader audience.
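As a rough sketch of what that week-over-week calculation and color-coding might look like, here's a small example using pandas; the metric names and figures are placeholders, and most BI tools offer equivalent conditional formatting out of the box.

```python
import pandas as pd

# Weekly values for each dashboard metric, most recent week last.
# The figures are placeholders purely for illustration.
weekly = pd.DataFrame(
    {"trial_starts": [2100, 2180, 2250, 2300],
     "trial_to_paid_rate": [0.115, 0.118, 0.120, 0.117],
     "paid_subscribers": [9800, 9900, 10050, 9976]},
    index=pd.date_range("2024-06-02", periods=4, freq="W"),
)

# Week-over-week percent change for the latest week
# (year-over-year works the same way with pct_change(52) on weekly data).
wow = (weekly.pct_change().iloc[-1] * 100).to_frame("WoW %").round(1)

def color(pct: float) -> str:
    """Green when a metric is up, red when it's down."""
    return "color: green" if pct >= 0 else "color: red"

# Renders a green/red-coded table in a notebook or HTML export.
# (On pandas older than 2.1, use .applymap instead of .map.)
styled = wow.style.map(color)
print(wow)
```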

There should be a single owner, usually someone on the data team, responsible for the dashboard. They are responsible for making sure it's always sent out well before the metrics review so everyone has time to read it. They are also responsible for ensuring its accuracy and making updates based on feedback identified during the metrics review.

Keep in mind that this shouldn't be the only dashboard that you have. This is simply the executive-level dashboard for metrics review. You'll likely have additional dashboards that various teams are using. For example, there should likely be a detailed dashboard showing the breakdown by every country even though you may have decided to not include it in this dashboard. These other dashboards will be helpful when some metric goes sideways and you need to investigate what may be causing it.

Running your metrics review meeting
In terms of attendees for your metrics review meeting, you'll ideally invite key R&D leaders from product, design, and engineering, as well as key GTM leaders from sales and marketing. If you'll be covering financial metrics, you'll likely want the finance team or even the CFO in attendance. And you'll want the key folks from the data team involved as well.

In terms of cadence, weekly is best. When reviews happen only monthly or quarterly, it becomes very difficult to create a data-driven culture, since it's easy to look at the metrics once a month and then not really drive your business based on them. Of course, this also depends on the cadence of your initiatives: if you review weekly, ideally you have initiatives being executed each week that can be discussed in the meeting.

The meeting itself should be run by the most senior person in the room, ideally the CEO. This was the case at both LinkedIn and SurveyMonkey. Executive-level sponsorship is critical for making metrics reviews successful. First, it makes it clear that this is an important meeting that shouldn't be skipped, that everyone should take seriously, and that everyone should carefully prepare for. It also ensures action items are actually followed up on in a timely manner.

The first part of the meeting should be spent discussing the metrics in the dashboard. You might go around by product, by funnel stage (growth, engagement, retention), or through recent A/B test results. Every metric on the dashboard should have a clear owner who speaks to the changes seen in the past week and gives their headline commentary. They might also mention upcoming initiatives or changes they expect in business performance. Everyone in the room then has an opportunity to ask questions about what they are seeing in the data or share their perspective. This discussion is critical because it lets teams share what they are seeing and lets the group collectively help explain what might be causing it, suggest further follow-up investigations, and so on. It's from these discussions that the real learning happens. For any question the team can't answer in the room, decide whether a follow-up action item should be taken to investigate. Those action items should be documented and then discussed in the subsequent meeting.

The team should also identify any issues with the dashboard in this meeting. Do any metrics look inaccurate? Do certain metrics no longer feel useful? Each of these should become an action item for the data team.

You might also choose to maintain a calendar of special topics as part of this meeting. Someone might present a look back at A/B test results for a large product launch one month after launch. Or the data team might present a deep dive into churn metrics if churn has been a critical issue, or discuss refinements they are planning to make to the business model.

I hope this makes it clear how you can bring effective metrics reviews to your organization and how they can inspire a truly data-driven product culture. By now you've probably realized that this is a non-trivial effort: it requires not only team members' time but also data resources and senior leadership sponsorship to be successful. But then again, building a data-driven product culture was never for the faint of heart.