A Leader's Guide to Implementing OKRs
Writing Effective OKRs
Start by defining each of your objectives. An objective details what you hope to accomplish with a given initiative. Great objectives are significant, concrete, finite, action-oriented, and inspirational.
Objective examples:
- Launch an edit button on Twitter
- Redesign Facebook's profile page
- Build the world's best web browser at Google
- Launch a redesigned signup flow that reduces the number of steps required on Amazon
You then define one or more key results associated with each objective. A key result is a quantitative expression of success or progress towards an objective. Great key results are specific, time-bound, measurable, and verifiable.
Key result examples:
- Increase weekly active users by 20%
- Achieve 20M activated users
- Launch the new feature by 7/31
- Conduct 50 customer interviews
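To make the structure concrete, here's a minimal sketch in Python of an objective paired with measurable key results. The names and fields are hypothetical illustrations, not part of any official OKR tooling:

```python
from dataclasses import dataclass, field


@dataclass
class KeyResult:
    description: str      # e.g. "Conduct 50 customer interviews"
    target: float         # the numeric target, e.g. 50
    current: float = 0.0  # progress to date

    def progress(self) -> float:
        # Fraction of the target achieved; may exceed 1.0 if you overshoot.
        return self.current / self.target if self.target else 0.0


@dataclass
class Objective:
    description: str  # what you hope to accomplish
    key_results: list[KeyResult] = field(default_factory=list)


# Example: one objective with an output and an outcome key result
obj = Objective(
    "Launch a redesigned signup flow",
    [
        KeyResult("Launch the new flow by 7/31", target=1),
        KeyResult("Increase weekly active users by 20%", target=20, current=5),
    ],
)
```

The point of the sketch is simply that every objective carries one or more numeric key results whose progress can be computed, which is what makes end-of-quarter scoring mechanical rather than subjective.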
Measure Outcomes, Not Output
The most valuable key results measure meaningful user or business outcomes rather than simply the output of your work. For example, "publish 15 blog posts" is a useful key result, but you'd also want to pair it with a key result that speaks to the outcome you hope those blog posts will drive, such as "add 1K new subscribers", "reach 10K monthly pageviews", or "drive 500 referral signups".
The reason this is so critical is that when you review your OKRs at the end of the quarter, you'll be able to reflect not just on whether you achieved the output you were seeking (the 15 blog posts), but on whether that output actually generated the expected outcome. And if it didn't, you can have a meaningful discussion on ways to improve the quality of the output to generate better outcomes in the future, or decide whether the initiative is even valuable for achieving those stated outcomes. It's this reflection during OKR reviews that helps calibrate which initiatives in fact get you closer to your intended outcomes, and thereby helps build your team's product intuition.
As another example, folks will often have a key result like "launch feature by 7/31". While that's a useful key result to have, it should again be paired with an outcome-oriented key result. How do you expect this feature will impact users? Maybe it is expected to "increase D7 retention by 5%" or "increase weekly active users by 10%" or "reduce monthly churn by 15%".
The reason teams often include only output-oriented key results is that it's much harder to estimate what the outcome will actually be than to estimate the output. The key is to accept that you're going to be wrong a lot, and that only through repeated cycles of OKR reviews will you improve your forecasting ability.
There are so many types of outcome-oriented key results that you could include. At LinkedIn, we tried our best to tie our OKRs back to one of the top 5 business metrics: signups, monthly active users, sessions, bookings, or EBITDA. Sometimes though, more specific measures associated with a given initiative are more appropriate. For example, maybe you are trying to reduce confusion associated with a given user flow and in that case, a reduction in the number of support tickets received on that issue could be an appropriate measure.
Now it's not always possible to have an outcome-oriented key result with every objective, especially for initiatives that span multiple quarters. But whenever it is possible, it's paramount to do so.
Define OKRs Quarterly
I've seen teams define new OKRs every month, every quarter, or every year. I've found quarterly OKRs to be the sweet spot. Annual OKRs are too infrequent: you'll want to update them more often based on what you're learning from customers and the market, and doing OKR reviews only once a year significantly slows down your learning cycle. At the same time, implementing an OKR program does introduce overhead, and paying that tax every month is often needlessly expensive, as you're rarely moving fast enough to achieve outcome-oriented key results in that timeframe. Quarterly OKRs let you update and reflect on your OKRs often enough while giving your initiatives enough time to actually achieve meaningful key results.
Limit to 3-5 OKRs
Any team within an organization is best served by having no more than 3-5 OKRs. Part of the value of OKRs is creating focus within a given team, which is only possible if you make the hard trade-offs of which OKRs to include and which to exclude. Keep in mind, OKRs are not a representation of your entire product roadmap, which is often longer; they simply represent the top 3-5 initiatives that are paramount to accomplish. Some folks take it to an extreme and have only 1 OKR for the entire team. I've found this far too simplistic in practice, as any ongoing business has at least several initiatives it needs to make progress on in any given quarter.
Publish Draft OKRs for Review
One of the most important benefits of OKRs is their ability to drive alignment, especially across partnering cross-functional teams. To achieve this benefit, however, alignment has to be baked into the process. I've found the best way to do this is for all teams to publish draft OKRs and then allow a one-week period for review and feedback. During this review period, it's important for every team to verify that any dependencies they have on other teams are reflected as a priority in those teams' OKRs. For example, if the product team has an OKR to launch a feature this quarter, you'll want to ensure the product marketing team also has an OKR to support its launch. Or if the product team is planning to put together a spec for a new product, you'll want to ensure the UX research team has an OKR to support any user research requirements.
Too often I've seen teams just publish all their OKRs at once without this draft period, missing the entire benefit of ensuring alignment across cross-functional teams.
Consistent Scoring Guideline
It's important that you establish a single consistent scoring guideline across the entire team so that anyone can reliably score OKRs and so the definitions are consistent and clear.
I like to use the following scoring guideline based on achieving each OKR's key results:
- Green = 100%+ achieved
- Yellow = 70-99% achieved
- Red = below 70% achieved
The specific scoring guidelines don't matter as much as ensuring consistency throughout your organization. So feel free to modify these guidelines to what works best for the way you've defined your key results.
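As a sketch, the guideline above can be expressed as a simple scoring function. The thresholds here are the ones from this post; swap in your own if you've modified the guideline:

```python
def score_key_result(achieved_fraction: float) -> str:
    """Map a key result's achievement fraction (1.0 = 100%) to a color.

    Thresholds follow the guideline above: Green at 100%+,
    Yellow at 70-99%, Red below 70%.
    """
    if achieved_fraction >= 1.0:
        return "Green"
    if achieved_fraction >= 0.7:
        return "Yellow"
    return "Red"


print(score_key_result(0.85))  # Yellow
```

Encoding the thresholds once, in one place, is the programmatic equivalent of having a single consistent guideline: anyone on the team scores an OKR the same way.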
Set Expectations Upfront
It's important to set clear expectations with the entire team on OKRs from the very beginning.
I like to set the expectation that OKRs are supposed to be stretch goals, i.e., ambitious and aggressive. We should be setting our sights on what we aspire to achieve. This means I consider it an absolute success if in a given quarter all of our OKRs are yellow, because that means we hit at least 70% of our ambitious targets across all goals. In fact, if everything is green, I tell the team we haven't set our ambitions high enough.
It's equally important to tell the team that OKR performance does not directly determine performance reviews and promotions. While it's certainly an input, a whole host of other factors go into them. This matters because if people feel their OKR scores will affect their own evaluations, they'll end up sandbagging their key results to ensure they always achieve them, which prevents you from ever setting ambitious targets.
Team vs Personal OKRs
For many teams, simply setting team OKRs is sufficient. For example, a marketing team may have a set of OKRs, and the R&D team may have a collective set of its own. But I've also seen OKRs applied all the way down to the individual level. In this case, not only does the team have a set of OKRs, but each individual has their own as well. Their personal OKRs likely include a subset of the team OKRs for which they have direct responsibility, but also often include a few personal OKRs that don't appear on the team's list. These may include initiatives they are focused on that didn't bubble up to a top 3-5 team OKR. I've also seen teams use them to capture personal development and career goals that are important for the individual's career aspirations.
I'd say if your team is new to OKRs, simply start with shared team OKRs. The additional burden of managing personal OKRs significantly increases the tax associated with this tool. If your team stays small, you may choose to never pursue personal OKRs. We still don't do personal OKRs at Notejoy, for example.
Invest in OKR Reviews
Probably the most important, but often overlooked, aspect of implementing a successful OKR program is investing heavily in OKR reviews at the end of the quarter. The first step is to score the OKRs and ensure everyone agrees on the scores. With clearly-defined metric key results and a consistent scoring guideline, this should be easy. But nonetheless, I've always encountered debates here, so it's helpful to first get on the same page.
Then the far more important part starts: reflecting on why you achieved or didn't achieve each key result. Maybe you discover that your forecast was off, even though the initiative was considered a success. That's a really helpful insight for the next time you prepare a forecast. Or maybe you delivered the intended output, but it fell short of the expected outcome. Were there things you could do next quarter to bolster the initiative and help it achieve its desired results? Or maybe it's an important learning that a given type of initiative doesn't actually deliver the intended outcome. Maybe you didn't get to an initiative at all. Was it because of delays? Was it because you re-prioritized something else? And was that the right decision?
The insights you derive from an OKR review should directly influence which OKRs you sign up for in the subsequent quarter, as well as create opportunities for process improvements throughout your team. These reviews are where the learning actually happens, so make sure you don't short-change this effort.
Overcoming the Downsides of OKRs
While OKRs are a powerful tool, they do have substantial downsides that need to be understood and addressed.
OKRs, by the very way key results are defined, focus you on improving what you can explicitly measure. In general, this is a good thing, because most organizations can benefit from a far stronger metrics orientation. However, there are often elements of the user experience, the brand, and the business that can't be easily measured and therefore don't end up being optimized. Sometimes you can get creative with your key result measures to get at some of these less tangible aspects of your business; some people like using NPS, for example, to capture user delight with the product. So it's worth brainstorming whether better measures exist. But even then, there are aspects that can never be explicitly measured, and the solve for this lies outside the OKR system. I find that developing a set of product principles to guide product development efforts is a way to optimize for these immeasurables.
The quarterly focus on defining and achieving outcome-oriented key results can also result in a team biasing towards a short-term mindset. While this is beneficial from the perspective of helping to put wins on the board quickly, some of the most important initiatives are long-term in nature and require more than a quarter to accomplish. Or they may just take time before they turn into positive metric wins as you navigate the messy middle of an initiative to make it successful. The OKR process could short-change these initiatives and not give them the room they need to explore and thrive. One of the ways I like to solve for this is by explicitly calling out initiatives as core, strategic, or venture. Core initiatives support the bread and butter of your business. Strategic initiatives are new products and initiatives that are clear adjacencies to your core business. And venture initiatives are entirely new opportunities that you are looking to pursue with the highest risk. Explicitly calling initiatives out in each of these categories allows you to set appropriate expectations with the team on what realistic timelines look like to evaluate these initiatives and set key results accordingly.
Another drawback of OKRs is they focus explicitly on the outcomes that you are looking to achieve, yet pay no attention to the process for achieving those outcomes. One of my favorite books, The Score Takes Care of Itself by Bill Walsh, taught me that an over-fixation on outcomes is rarely the best way to achieve greatness. Instead one needs to optimize the very process of executing each initiative, which Bill Walsh calls focusing on the inputs. This is entirely true and OKRs themselves provide no solve for this. That being said, I don't think that means we need to abandon OKRs entirely. Instead, we need to supplement OKRs with additional tools more focused on improving the quality of our work. The very best teams are constantly improving the ways they do customer discovery, analyze metrics, define their roadmap, execute sprints, and so much more. Becoming world-class at the craft is an entirely independent and worthy pursuit.
I hope these best practices empower you to create a highly effective OKR program within your team, enabling your team to achieve new heights in terms of focus, alignment, accountability, and outcome-orientation.
Update: I received so many thoughtful questions on this post, so I decided to publish a follow-up post answering your most frequently asked questions on OKRs.
Jan 26, 2020