Michael Berliner (Senior Product Manager of Growth at MasterClass) shares his high-velocity, methodical, battle-tested process for identifying and executing on business growth opportunities through product features and marketing channels.
The theory of constraints (discussed in the book The Goal) states that any single bottleneck or roadblock in the system limits the system as a whole; conversely, a piece that isn't the constraint can be heavily optimized without improving overall throughput.
Companies and executive teams often don't look at how growth metrics work together across different teams. Mapping them out lets you understand what a +10% increase in a given metric means in terms of revenue.
Health and growth can often be at odds with each other because you can trade health for more growth. Looking at the CAC/LTV ratio in combination with revenue is healthy because it considers both. It also gives you a common reference across the whole business.
MasterClass uses ICE scoring (Impact, Confidence, Effort, with a score of 1 to 5 for each) to rank initiatives for whatever they are optimizing.
The throughput funnel for a team is a function of velocity, success rate, and average impact.
- Velocity: how many experiments are you putting out per engineer per week? Example at Airbnb: 1 test per engineer per week.
- Success rate: out of 100 experiments, how many are statistically significant winners that you roll out? A good benchmark is a 20-30% success rate, which means you should expect 70-80% of your tests to be flat or negative.
- Average impact: if you have a 30% success rate, what is the average lift across those winners? A more established company like Facebook might have a +1% average impact that still drives a lot of revenue; earlier-stage companies should expect a much higher average impact.
You can drastically improve your testing velocity if you are able to change how your team is structured and your processes. MasterClass doubled their testing velocity in one quarter by changing processes to give more autonomy, empowering engineers to be "mini-PMs".
Each time engineers bring an experiment, challenge them to find one thing they could get rid of that would allow them to launch faster.
Think about the testing medium: how can you get the same answer differently (e.g. changing the front-end UI with Optimizely) or with something more hacky?
Working in a pod structure can be the most impactful change; otherwise you might be waiting in the backlogs of other teams (e.g. design, content, data, etc.).
There are usually some quick wins or winning themes that will improve your testing win rate (e.g. MasterClass knows that video converts well for them and can double down on that), but the rest comes from better understanding your users over time.
Focus is critical and solves most of the problems.
[💎@04:03] The theory of constraints (discussed in the book The Goal) states that any single bottleneck or roadblock in the system limits the system as a whole; conversely, a piece that isn't the constraint can be heavily optimized without improving overall throughput.
The goal of any company is to make money.
There are a lot of ways companies can grow revenue. How do you know where to focus?
[Slide: the metrics you can affect]
[💎@07:35] Companies and executive teams often don't look at how growth metrics work together across different teams. Mapping them out lets you understand what a +10% increase in a given metric means in terms of revenue.
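As a rough illustration (not from the talk; the metric names and numbers below are made-up assumptions), a minimal metric-model sketch of how a single metric's lift translates to revenue:

```python
# Hypothetical, simplified metric model -- illustrative numbers, not MasterClass figures.
# Revenue is modeled as the product of funnel metrics owned by different teams.
def revenue(visitors: float, signup_rate: float, purchase_rate: float, aov: float) -> float:
    return visitors * signup_rate * purchase_rate * aov

visitors = 1_000_000        # acquisition/marketing
signup_rate = 0.05          # top-of-funnel conversion
purchase_rate = 0.20        # monetization
avg_order_value = 180.0     # pricing

baseline = revenue(visitors, signup_rate, purchase_rate, avg_order_value)
with_lift = revenue(visitors, signup_rate * 1.10, purchase_rate, avg_order_value)

print(f"Baseline revenue: ${baseline:,.0f}")
print(f"With +10% signup rate: ${with_lift:,.0f} (+${with_lift - baseline:,.0f})")
```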
[💎@09:12] Business health and growth can often be at odds with each other because you can trade health for more growth. Looking at the CAC/LTV ratio in combination with revenue is healthy because it considers both. It also gives you a common reference across the whole business.
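A hedged sketch of that combined health-plus-growth view (the unit-economics formulas and numbers are assumptions for illustration, not figures from the talk):

```python
# Illustrative unit economics: track growth (revenue) alongside health (LTV vs. CAC).
marketing_spend = 500_000.0
new_customers = 10_000
monthly_revenue_per_customer = 15.0
expected_lifetime_months = 24

cac = marketing_spend / new_customers                          # cost to acquire a customer
ltv = monthly_revenue_per_customer * expected_lifetime_months  # simple, undiscounted lifetime value
cohort_revenue = new_customers * ltv

print(f"CAC: ${cac:.0f}  LTV: ${ltv:.0f}  LTV/CAC: {ltv / cac:.1f}x")
print(f"Cohort revenue: ${cohort_revenue:,.0f}")
# Revenue can keep growing while the LTV/CAC ratio degrades; watching both together
# surfaces when growth is being bought at the expense of business health.
```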
[💎@10:40] MasterClass uses ICE scoring (Impact, Confidence, Effort, with a score of 1 to 5 for each) to rank initiatives for whatever they are optimizing.
The confidence score is a reflection of the insights you have been able to gather.
Once they have the value score, they separate initiatives into three categories.
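A minimal sketch of ICE-based ranking under the 1-5 scoring described above (the initiative names and the exact scoring formula are assumptions; the talk does not specify how the three scores are combined):

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """Hypothetical backlog item scored with ICE (each dimension 1-5)."""
    name: str
    impact: int       # expected size of the win
    confidence: int   # strength of the supporting insight
    effort: int       # cost to build and test (higher = more effort)

    def score(self) -> float:
        # Assumed convention: reward impact and confidence, penalize effort.
        return self.impact * self.confidence / self.effort

backlog = [
    Initiative("Trailer autoplay on landing page", impact=3, confidence=4, effort=2),
    Initiative("New checkout flow", impact=4, confidence=3, effort=5),
    Initiative("Annual plan upsell banner", impact=2, confidence=5, effort=1),
]

for item in sorted(backlog, key=lambda i: i.score(), reverse=True):
    print(f"{item.score():>5.1f}  {item.name}")
```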
[💎@20:07] The throughput funnel for a team is a function of velocity, success rate and average impact.
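A back-of-the-envelope sketch of that throughput math, using the benchmarks mentioned in the summary above (team size and average lift per win are made-up assumptions):

```python
# Illustrative throughput estimate: velocity x success rate x average impact.
engineers = 8
tests_per_engineer_per_week = 1.0   # velocity (the Airbnb benchmark cited above)
win_rate = 0.25                     # success rate (20-30% cited as a good benchmark)
avg_lift_per_win = 0.01             # average impact per winning test (assumed)

weekly_tests = engineers * tests_per_engineer_per_week
expected_weekly_lift = weekly_tests * win_rate * avg_lift_per_win
print(f"{weekly_tests:.0f} tests/week -> ~{weekly_tests * win_rate:.0f} wins/week "
      f"-> ~{expected_weekly_lift:.1%} expected lift/week")
```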
Testing velocity usually depends on how your team is structured organizationally, on your resource allocation, and on your processes.
[💎@22:37] You can drastically improve your testing velocity if you are able to change how your team is structured and your processes. MasterClass doubled their testing velocity in one quarter by changing processes to give more autonomy, empowering engineers to be "mini-PMs".
[💎@23:34] Each time engineers bring an experiment, challenge them to find one thing they could get rid of that would allow them to launch faster.
[💎@23:47] Think about the testing medium: how can you get the same answer differently (e.g. changing the front-end UI with Optimizely) or with something more hacky?
[💎@24:04] Working in a pod structure can be the most impactful change; otherwise you might be waiting in the backlogs of other teams (e.g. design, content, data, etc.).
[💎@24:45] There are usually some quick wins or winning themes that will improve your testing win rate (e.g. MasterClass knows that video converts well for them and can double down on that), but the rest comes from better understanding your users over time.
Your test win rate is also affected by the size and quality of your backlog. It comes down to the confidence aspect of the ICE framework: do you have 10 or 50 things in your backlog? Do ideas come from diverse teams or just your own?
There will be some outliers that bring your average impact up, but as you optimize, it will shrink. Advanced data analysis can help determine which upstream actions will benefit the business downstream.
MasterClass evaluates the three metrics (velocity, win rate, and average lift per win) every quarter to reflect on how things went.
This gives you a hypothesis-driven growth strategy to maximize revenue.