Which product management frameworks do successful teams use?

It’s always a good idea to learn from the product management masters. Here are a few frameworks that guide successful teams making important decisions.
Classify
March 25, 2022 · Updated June 9, 2022

Every successful product was once a pipe dream with too wide a scope and too optimistic a timeline until a good product manager tethered it to reality through a diligent process. At times, it can be difficult to keep in mind that all popular products and features started as sketchy prototypes.

But they did. And finding out how exactly successful companies took their product from an idea to a revenue stream can help overwhelmed PMs tackle their own roadmaps.

Vision and strategy vary from company to company, so for some it’s best to create a totally unique product management process.

But there’s a lot to be said for taking an established, proven framework and applying it to your team’s specific use case or product. Why reinvent the wheel when the wheel helped build a billion-dollar company?

We’re going to run through the product management frameworks used by successful teams at Amazon, Google, Basecamp, Intercom, and Dropbox. Nothing is one-size-fits-all – take what you need and leave the rest.

Amazon: Working backwards

Let’s start with Amazon, since they’re basically taking over the world. They clearly know how to pivot and successfully launch new products, considering this trillion-dollar company started as an online bookstore.

In 2012, then-Director at Amazon Ian McAllister offered a rare window into Amazon’s internal processes via a Quora post:


[Image: Ian McAllister’s Quora post]


The framework, “working backwards,” is especially useful for product managers developing new products or features, because it starts from the customer rather than from square one.

It makes perfect sense fundamentally, and it avoids a common trap startups fall into: building a product or set of features and then shaping an ideal customer to fit that product.

Instead, this framework first identifies the customer, views the product in terms of what excites the customer, and then shapes the product to fit that customer's practical needs.

The first step in this process is to write an internal press release for the theoretical finished product. Following the standard components of any press release, the PM states how this new product solves a customer problem that existing solutions can’t, then identifies the benefits the new product or feature offers.
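To make that concrete, here’s one possible skeleton (a hypothetical sketch for illustration – McAllister’s actual template appears in his Quora post):

```
Heading:        Name the product in words the customer understands
Subheading:     One sentence on who the customer is and what they gain
Problem:        The customer problem existing solutions can't solve
Solution:       How the new product or feature solves it
Benefits:       The two or three benefits that matter most to the customer
Customer quote: A hypothetical customer explaining why they love it
Call to action: How the customer can get started today
```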

“If the benefits listed don’t sound very interesting or exciting to customers, then perhaps they’re not (and shouldn’t be built),” wrote McAllister. “Instead, the product manager should keep iterating on the press release until they’ve come up with benefits that actually sound like benefits.”

“Iterating on a press release is a lot less expensive than iterating on the product itself (and quicker!)” he added.

Here’s McAllister’s example of a press release that would frame a potential product for PMs:

[Image: McAllister’s example press release, from his Quora post]

McAllister advises that the press release should be simple, and that PMs can write an FAQ to answer additional business or execution questions if necessary.

“My rule of thumb is that if the press release is hard to write, then the product is probably going to suck,” he wrote.

The press release serves as a north star during development. The product team can refer back to it and make sure what they’re building delivers on the promises they made to the customer at the outset.

This keeps product development focused on customer benefits, reduces the chance of overbuilding (or introducing features not included in the original press release), and eliminates anything that doesn’t provide real customer value.

Google: The HEART framework

On to the G in FAANG: Google, now a subsidiary of Alphabet.

The HEART framework stands for: Happiness, Engagement, Adoption, Retention, and Task Success. It’s designed to measure user-centered metrics for web applications to track progress toward goals during product testing and after launch.

In a four-page note on the HEART framework, former Senior Staff User Experience Researcher Kerry Rodden and co-authors Hilary Hutchinson and Xin Fu describe the process for mapping product goals to metrics and provide examples of how HEART metrics have helped product teams make decisions that are data-driven and user-centered.

This framework serves as a complementary metrics framework to PULSE, which is Google’s set of metrics for overall product health: Page views, Uptime, Latency, Seven-day active users, and Earnings.

The HEART framework focuses less on indirect metrics for user experience and more on contextualized direct metrics for user retention, adoption, and success.


[Image: A chart demonstrating how to measure the HEART framework. Source: Google Inc.]

“We use the term ‘Happiness’ to describe metrics that are attitudinal in nature,” wrote Rodden et al. “These relate to subjective aspects of user experience, like satisfaction, visual appeal, likelihood to recommend, and perceived ease of use.”

These insights are gleaned through general user surveys disseminated at a regular cadence over time as product changes are made.

Engagement refers to the user’s level of involvement with the product. Per the HEART model, the measurements that inform engagement need to be precise – basically, traditional vanity metrics like MAU should be replaced with precise measurements that provide a more granular look at user habits.

“For example, the Gmail team wanted to understand more about the level of engagement of their users than was possible with the PULSE metric of seven-day active users (which simply counts how many users visited the product at least once within the last week),” wrote the report authors. “With the reasoning that engaged users should check their email account regularly, as part of their daily routine, our chosen metric was the percentage of active users who visited the product on five or more days during the last week.”
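As a rough sketch of how a metric like that might be computed from raw visit logs (the data and function here are hypothetical, not Google’s actual pipeline):

```python
from datetime import date, timedelta

# Hypothetical visit log: one (user_id, visit_date) pair per day a user showed up.
visits = [
    ("alice", date(2022, 6, 1)), ("alice", date(2022, 6, 2)),
    ("alice", date(2022, 6, 3)), ("alice", date(2022, 6, 4)),
    ("alice", date(2022, 6, 5)),
    ("bob",   date(2022, 6, 2)),
]

def engaged_share(visits, week_end, min_days=5):
    """Percent of weekly active users who visited on min_days or more distinct days."""
    week_start = week_end - timedelta(days=6)
    days_per_user = {}
    for user, day in visits:
        if week_start <= day <= week_end:
            days_per_user.setdefault(user, set()).add(day)
    active = len(days_per_user)
    engaged = sum(1 for days in days_per_user.values() if len(days) >= min_days)
    return 100.0 * engaged / active if active else 0.0

print(engaged_share(visits, week_end=date(2022, 6, 5)))  # 50.0 – alice qualifies, bob doesn't
```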

Adoption and retention metrics zero in on unique users in a given time period. A distinction between new and existing users is necessary to discern the rate of adoption (e.g., the number of accounts created in the last seven days) from retention (e.g., the percentage of seven-day active users who are still seven-day active users three months later).
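A similarly rough sketch of that retention example, assuming you can already produce the two cohorts as sets of user IDs (the cohorts below are made up):

```python
def retention_rate(active_then, active_now):
    """Percent of users active in the earlier window who are still active now."""
    if not active_then:
        return 0.0
    return 100.0 * len(active_then & active_now) / len(active_then)

# Hypothetical cohorts: seven-day actives three months ago vs. today.
print(retention_rate({"alice", "bob", "carol"}, {"alice", "carol", "dave"}))  # ~66.7
```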

Analyzing adoption and retention is key for gauging user interest in new products, new features, or products undergoing redesigns, Rodden noted.

Finally, Task Success denotes traditional behavioral metrics including efficiency, effectiveness, and error rate. Remote usability or benchmarking studies where researchers assign tasks to users and measure how long it takes them to complete a task correctly are helpful in tracking this metric.

Researchers also take note of how many users follow the “optimal path” to completing a task, or the user journey the product team had in mind during development.

PMs also must plot out the goals of the product or feature for this framework to be useful.

Basically, they must define the tasks users need to accomplish, and what the product, feature, or redesign is trying to achieve.

“Use the HEART framework to prompt articulation of goals (e.g., is it more important to attract new users, or to encourage existing users to become more engaged?)” wrote the report authors.

The product team must also address the signals that will indicate whether a product, feature, or redesign has succeeded or failed. Here, product managers should think about the feelings or perceptions associated with success or failure, the logs-based behavioral signals that would indicate task completion or abandonment, etc.
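One lightweight way to keep those goals, signals, and metrics explicit is to write them down per HEART category before any instrumentation work starts. A minimal sketch, with made-up entries for an imaginary sharing feature:

```python
# Hypothetical goals -> signals -> metrics map for an imaginary sharing feature.
heart_plan = {
    "Engagement":   {"goal":   "Users share documents regularly",
                     "signal": "Share events in the activity logs",
                     "metric": "Shares per active user per week"},
    "Task Success": {"goal":   "Sharing a document is easy",
                     "signal": "Abandoned share flows in the logs",
                     "metric": "Share-flow completion rate"},
}

for category, row in heart_plan.items():
    print(f"{category}: {row['goal']} -> measured by {row['metric']}")
```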

This framework provides a holistic evaluation of user experience and feature success. It works best for companies with a significant number of existing users who must do large-scale assessments.

Basecamp: Bets

Basecamp uses bets to decide which product or feature pitches to develop within a defined time frame. This allows for fast, nimble development and pivoting. Classify is a big fan of bets – we use a similar framework.

Basecamp delved into the way they leverage bets – and why – in their ebook Shape Up.

Basecamp works in six-week cycles. Before the start of a new cycle, they hold a betting table where stakeholders decide which features to prioritize.

“At the betting table, they look at pitches from the last six weeks – or any pitches somebody purposefully revived and lobbied for again,” the writers at Basecamp wrote.

“Support can keep a list of requests or issues that come up more often than others,” they clarified. “Product tracks ideas they hope to be able to shape in a future cycle. Programmers maintain a list of bugs they’d like to fix when they have some time. There’s no one backlog or central list and none of these lists are direct inputs to the betting process.”

Why? Because backlogs are annoying and they waste time. Reviewing old ideas going back six months or more that are likely no longer relevant is rarely productive.

In theory, backlogs serve as a reminder of things the product and engineering teams should get to when they have time. In reality, they’re a massive gnarl of low-priority tasks that gives everyone anxiety.

Bets, on the other hand, are purposeful, timely, evidence-based recommendations with the potential to deliver value now.

The cross-department, round-table approach of bet-selection meetings also helps maintain accountability without overburdening any one individual or team with the responsibility of carrying a bet all the way through to the finish line.

“This approach spreads out the responsibility for prioritizing and tracking what to do and makes it manageable,” said Basecamp writers. “People from different departments can advocate for whatever they think is important and use whatever method works for them to track those things – or not.”

For a step-by-step guide to building and presenting solid bets to your team, click here.

Intercom: The RICE Model

The RICE Model is designed to help product managers with crowded product roadmaps simplify prioritization.

Intercom developed the RICE Model so that the conflicting factors that play into every prioritization decision can be scored consistently. The model comprises four factors: Reach, Impact, Confidence, and Effort.


[Image: A graph showing how the RICE model functions. Source: Intercom]

In this case, “Reach” means the number of people and events a set of features will impact in a given time period. Product teams will define these cohorts, events, and time periods themselves: it could be customers per month, transactions per quarter, whatever makes sense for your team.

Impact is pretty straightforward: how much will these features impact customers or the business? Measuring impact, however, is a bit more nebulous. At Intercom, they frame impact in different ways depending on their goals – a set of features might be judged on how much it improves conversion rates, how much more customers like the product, how much it lifts adoption, and so on.

“Impact is difficult to measure,” wrote Sean McBride, former Product Manager at Intercom. “So, I choose from a multiple-choice scale: 3 for ‘massive impact’, 2 for ‘high’, 1 for ‘medium’, 0.5 for ‘low’, and finally 0.25 for ‘minimal’.”

“These numbers get multiplied into the final score to scale it up or down,” he added.

Confidence gauges your team’s intuition that your estimates are accurate. This intuition is valuable: it’s based on the extensive user research and experience each product team member brings to the table.

“Confidence is a percentage,” said McBride, “and I use another multiple-choice scale to help avoid decision paralysis.”

In this case, 100% means “high confidence”, 80% is “medium”, 50% is “low”.

Finally, Intercom factors effort into the equation. The goal is to choose new features or product changes that have the greatest amount of impact with the least amount of effort. This factor requires estimating the amount of time a project will take across product, design, and engineering.

McBride and his team measured estimates in terms of “person-months”, or the work that one team member can do in a month.

The RICE score, which Intercom ultimately uses to compare each feature to other possible options, combines each of these factors into one number.

The calculation looks like this:

RICE score = (Reach × Impact × Confidence) / Effort

Source: Intercom

This score represents the total impact per time worked.
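As a sketch of the arithmetic in practice, here’s how two made-up features might be scored and ranked (the feature names and numbers are invented for illustration):

```python
def rice(reach, impact, confidence, effort):
    """RICE score = (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical features: reach in customers per quarter, impact on Intercom's
# 0.25-3 scale, confidence as a fraction, effort in person-months.
scores = {
    "bulk export": rice(reach=500,  impact=2.0, confidence=0.8, effort=2),  # 400.0
    "dark mode":   rice(reach=2000, impact=0.5, confidence=1.0, effort=4),  # 250.0
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.0f}")
```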

This model is useful for teams who find themselves at a stalemate when deciding which features to tackle next. So, most teams, probably.

Simplifying prioritization into a scorecard ensures your team doesn’t feel like it’s comparing apples to oranges when deciding between projects with different goals.

Dropbox: Phase framework

Dropbox knows a thing or two about growing quickly and consequently needing to scale key roles – including product management – just as fast.

Scaling is exciting but stressful, a process in which certain methods for moving projects along, getting approval for features or redesigns, and sharing knowledge are quickly revealed to be inefficient.

When scaling its own product teams, Dropbox decided to create a ridiculously simple framework to label the phases of each project’s lifecycle. This made it easier to introduce, assess, and prioritize new features or product changes as the team grew.

This is it:

[Image: A Google Slide showing the phase framework]


As you can see, it is small but mighty.

“There were times when teams would combine Phase 0 and Phase 1 for smaller projects or do multiple reviews in Phase 2 as they approached a launch,” clarified Sean Lynch, former Group Product Manager at Dropbox.

So, the way teams followed or incorporated the framework into prioritization and planning was pretty fluid.

But the important thing about walking through this framework is that it invites PMs to ask the right questions, agree on the problem they need to solve, and communicate the goals for a certain product concisely to the entire company.

Some may consider this framework to be too reductive, but Lynch found that attempts to expand the framework only served to muddle the goals of new features or add obstructive complexity.

Here’s how it went for Dropbox after implementing the phase framework:

“We quickly saw results: Reviews got shorter as the feedback became more targeted for the project. Our CTO was able to delegate some reviews because he knew when his feedback wouldn’t be helpful. The larger company could understand progress and felt more in the loop. Within a matter of weeks, the organization that had felt overwhelmed with the number of projects was both moving faster and comfortable with undertaking even more.”

So goes the argument for simplifying processes.

Regardless of whether you choose to implement the phase framework, it raises important questions: Are my processes needlessly complex? Are there certain steps, meetings, or metrics I could eliminate that don’t actually serve us as a team or company?

The answer is almost always yes. Too often, teams find themselves adopting new metrics, new apps, new systems, and new processes without eliminating old ones that either don’t yield measurable results or no longer fit the goals of a certain initiative.

It’s a good idea to take a hard look at your workflow and your calendar to see if there are any redundant check-ins that could be handled over email, or spreadsheets you and your team always seem to update but never turn to for answers.

See if any of these frameworks could complement or replace an existing process that isn’t as fast, precise, or effective for your teams as it should be.

Good luck out there.

Oh, and if you’re feeling frazzled and overwhelmed at work, try Classify. We built it specifically for product managers. It’ll help you stress less.
