RICE | A Framework Overview for Product Managers


Imagine you are a Product Manager at a company and your goal is to drive user engagement. The engineering team has encouraged you to develop “feature A” since they feel that users will like it. The design team says a website revamp is the best possible thing to do at the moment. And a key stakeholder insists that neither of those things should be done; instead, a different feature should be worked on first.

How do you sort out the noise?  We recommend using a prioritization framework.

Everyone who came to you with advice did so with good intentions, but they were basing their recommendations on gut feelings and emotions. Your job as a product manager is to turn this emotion-based decision-making process into one that is grounded in fact. Currently, you have many different ideas that could help you reach your one true goal. Frameworks like the RICE method can help you sort those ideas and put the product in the best spot it can be going forward.

What Is a RICE Score?

RICE stands for Reach, Impact, Confidence, and Effort. It is a fairly simple and incredibly popular prioritization framework for determining the relative importance of the various features, ideas, and initiatives that people may have for a product. A RICE score lets the PM quantify the importance of a specific feature and compare it against many others.

The formula for calculating RICE score is as follows:

RICE = (Reach x Impact x Confidence) / Effort

Due to the nature of a product manager’s job, they have many different ideas they could work on at any moment. Prioritization frameworks like RICE help to systematize the process and ensure that the product manager always knows what to work on next.
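The formula above is simple enough to sketch in a few lines of code. Here is a minimal Python version; the example numbers in the comment are purely illustrative, not taken from any real product.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score: (Reach x Impact x Confidence) / Effort.

    reach:      people affected in the chosen time period
    impact:     scale value (e.g. 0.25 to 3)
    confidence: fraction between 0 and 1 (e.g. 0.8 for 80%)
    effort:     person-months, rounded up
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Illustrative: reach of 1500, high impact (2), 80% confidence, 2 person-months
print(rice_score(1500, 2, 0.8, 2))  # 1200.0
```

Expressing confidence as a fraction (0.8 rather than 80) keeps the multiplication straightforward.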


Reach

The reach score is the number of people who will be affected by a feature in a given timeframe. One useful aspect of this framework is that you can choose both the timeframe and the type of user you plan to reach. Reach can be quantified via internal metrics gathered during the product-building process and via external surveys of your target audience.

An example of calculating reach is as follows: let’s say you are the product manager for a file transfer application. To calculate the reach for a feature that tells people when they are close to running out of storage, you would first find out how many people approach that storage cap in a given period (let’s say a day) and decide how long your reach time period will be (let’s say one month). If 50 people a day get close to the storage cap and your reach time period is one month, then the reach for the feature is 50 x 30 = 1,500 people.
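The storage-cap example works out like this (the user counts are the assumed figures from the example, not real metrics):

```python
# Illustrative reach calculation for the storage-cap alert feature
daily_users_near_cap = 50   # users approaching the storage cap per day (assumed)
period_days = 30            # chosen reach time period: one month

reach = daily_users_near_cap * period_days
print(reach)  # 1500
```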

Calculating reach can be difficult if a product is very new and does not yet have many users, or if the internal tools for measuring reach are inaccurate or unavailable. In that case it may be useful to look for statistics from competing companies in your field, but even that isn’t perfect, since those companies may serve target audiences quite different from your own.

Another way around this is to combine the reach and impact scores into one combined score, since at the end of the day what matters most is how much impact a feature will have and how much revenue it will generate.


Impact

This is the impact that the feature, once implemented, will have on your users. In short, reach is about how many people a feature touches; impact is about how much it affects each customer.

A useful way to conceptualize impact is to think about how much implementing a feature helps you reach a specific goal. One common goal for most companies is increasing “the likelihood of converting someone into a repeat, long-term customer.”

The best way to standardize impact is to rate features on a fixed scale. A common scale is 3 for “massive impact,” 2 for “high impact,” 1 for “medium impact,” 0.5 for “low impact,” and 0.25 for “minimal impact.”
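The common scale above can be kept as a simple lookup table so everyone rating features uses the same numbers:

```python
# The common impact scale described above, as a lookup table
IMPACT_SCALE = {
    "massive": 3,
    "high": 2,
    "medium": 1,
    "low": 0.5,
    "minimal": 0.25,
}

print(IMPACT_SCALE["low"])  # 0.5
```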

Without a well-defined goal to aim for, the effectiveness of the impact score, and of RICE as a whole, is greatly reduced. It is worth communicating closely with all teams and stakeholders to determine the goal for a given time period so that prioritization can be more effective.


Confidence

This number represents how certain you are about the reach and impact values you entered and the corresponding benefit once the feature is implemented. Effectively, it acts as a fail-safe against reach and impact scores that are inflated by accidental bias in the planning process.

Once you have a good understanding of what the reach, impact, and effort scores for a feature should be, but feel there are still gaps, add a confidence score to take that uncertainty into account.

Just like the impact score, confidence is measured against a fixed scale. A common scale is 100% for high confidence, 80% for medium confidence, and 50% for low confidence.
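Confidence can be stored the same way as impact. Expressing the percentages as fractions lets them multiply directly into the RICE formula:

```python
# Confidence expressed as a fraction so it can multiply directly into the formula
CONFIDENCE_SCALE = {
    "high": 1.0,    # 100%
    "medium": 0.8,  # 80%
    "low": 0.5,     # 50%
}

print(CONFIDENCE_SCALE["medium"])  # 0.8
```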

Whenever a confidence score is low, a product manager should research as much as possible to raise both the confidence score and the accuracy of the rating. A low confidence score should be a last resort when there is no alternative, not a default option when one is too lazy to seek out more information.

Similarly, a high confidence score should reflect the fact that the product manager is confident in the success of a feature based on multiple data points they can point to, as well as mockups and designs of the actual feature.


Effort

This represents how much time it will take the product, design, and engineering teams to implement a specific feature. While the other three components of RICE capture the upside of a feature, the effort score captures the cost of implementing it.

By far the most common way to quantify effort is in “person-months”: the total amount of work a feature requires, measured in months of one person’s work and rounded up. In the file transfer application example, if the feature took 7 people 1 week to build, that would be 7 person-weeks, or roughly 2 person-months once rounded up.
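The conversion from a team-size-and-duration estimate to rounded-up person-months can be sketched like this (the 4-weeks-per-month figure is a simplifying assumption):

```python
import math

def effort_person_months(people: int, weeks: float, weeks_per_month: float = 4.0) -> int:
    """Convert a team-size/duration estimate into person-months, rounded up."""
    return math.ceil(people * weeks / weeks_per_month)

# 7 people for 1 week = 7 person-weeks, roughly 2 person-months rounded up
print(effort_person_months(7, 1))  # 2
```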

Standardizing effort in this manner allows people to quickly and easily compare how long various features, and even entire products, will take. It does, however, require tight communication with the engineering and design teams to ensure that the estimates are as accurate as possible, and effort can be hard to gauge at first glance.
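Putting all four components together, a backlog can be ranked by RICE score. The feature names and numbers below are hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical backlog: (name, reach, impact, confidence, effort) -- illustrative numbers
features = [
    ("storage-cap alert", 1500, 2, 0.8, 2),
    ("website revamp",    5000, 1, 0.5, 8),
    ("feature A",          800, 3, 1.0, 4),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Highest RICE score first: the next thing to work on sits at the top
ranked = sorted(features, key=lambda f: rice(*f[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice(*scores):.1f}")
```

With these assumed numbers, the storage-cap alert (1200.0) outranks feature A (600.0) and the website revamp (312.5), which is exactly the kind of fact-grounded ordering the framework is meant to produce.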

The Power of RICE

RICE is not just an equation for calculating the effect of a feature. It is an exercise that encourages communication between stakeholders and lowers the barrier to sharing ideas by standardizing language and removing jargon.

Crafting a great product requires great tools. Try Chisel today, it's free forever.