Robert Magyar (Data Science Lead at SuperScale - Mobile Games) discusses how LTV predictions can help you and how to optimize LiveOps/offers, and Ivan Kozyev (Head of Analytics at Crazy Panda - Mobile Games) explains how to develop an effective LTV model for each stage of your game.
Chart your creatives on an X axis and check their D3/D7/D28 ROAS to quickly spot outlier creatives (both good and bad) so you can act on them (by reallocating spend, for example). Example chart here.
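The takeaway above can be sketched as a simple z-score check over per-creative ROAS. Creative names, the data, and the 1.2 threshold below are all illustrative:

```python
from statistics import mean, stdev

def flag_outlier_creatives(roas_by_creative, z_threshold=1.2):
    """Flag creatives whose D7 ROAS deviates strongly from the portfolio mean.

    roas_by_creative: dict of creative name -> D7 ROAS (hypothetical data).
    Returns (good, bad) lists of creative names to act on.
    """
    values = list(roas_by_creative.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all creatives identical: nothing stands out
        return [], []
    good = [c for c, r in roas_by_creative.items() if (r - mu) / sigma > z_threshold]
    bad = [c for c, r in roas_by_creative.items() if (r - mu) / sigma < -z_threshold]
    return good, bad

# Hypothetical D7 ROAS per creative
roas = {"video_a": 0.42, "video_b": 0.40, "playable_c": 0.95,
        "banner_d": 0.05, "video_e": 0.41}
good, bad = flag_outlier_creatives(roas)
```

In practice you would run this per ROAS horizon (D3/D7/D28) and shift spend toward the `good` list while pausing or reworking the `bad` one.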
A benchmark comparing ROAS (e.g. D7/D28) for each week (X axis) with success thresholds allows you to evaluate your UA strategies. Example chart here.
Understanding the CPI to spend relationship is a key factor in understanding UA payback and how you can scale your campaign. Example chart here.
Questions you need to ask yourself to find the most profitable players (for LAL on Android for example - changes coming to iOS):
1. Is this group great at buying IAPs, do they do it frequently?
2. Is this group heavily engaged, does their engagement grow over time?
3. Does this group of players watch ads frequently? Make sure you have a benchmark to compare these new LAL/audiences to.
Do not create too many groups/segments of players when looking at LTV. You need to make sure they are different so you can understand which group is better. It is not enough to segment based on how much they purchase, however; you need to use other attributes too.
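A minimal sketch of multi-attribute segmentation: spend is one axis, engagement and ad behavior are others, and the total number of segments stays small. Attribute names and thresholds are hypothetical:

```python
def segment_player(player):
    """Assign a player to a coarse segment using several attributes, not
    just spend. Keeping each axis binary caps the segment count at 8,
    so groups stay large enough to compare. Thresholds are illustrative."""
    spend = "payer" if player["iap_revenue"] > 0 else "non_payer"
    engagement = "engaged" if player["sessions_d7"] >= 10 else "casual"
    ads = "ad_watcher" if player["rewarded_ads_d7"] >= 5 else "ad_light"
    return f"{spend}/{engagement}/{ads}"

players = [
    {"iap_revenue": 4.99, "sessions_d7": 15, "rewarded_ads_d7": 0},
    {"iap_revenue": 0.0, "sessions_d7": 3, "rewarded_ads_d7": 12},
]
segments = [segment_player(p) for p in players]
```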
If your LTV curve looks like a step function with jumps, either your game is relying mainly on LiveOps offers (not ideal design) or the number of payers/players is too low. Example chart here.
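One way to spot the step-function shape programmatically is to check whether any single day's increment dominates the curve's total growth. The 30% threshold and the sample curves are arbitrary illustrations:

```python
def has_step_jumps(ltv_curve, jump_ratio=0.3):
    """Detect step-function behavior in a cumulative LTV curve: flag the
    curve if one daily increment carries more than jump_ratio of the
    total growth. Threshold is illustrative, not a standard."""
    increments = [b - a for a, b in zip(ltv_curve, ltv_curve[1:])]
    total = ltv_curve[-1] - ltv_curve[0]
    return any(inc / total > jump_ratio for inc in increments)

# Hypothetical cumulative LTV per day ($)
smooth = [0.10, 0.18, 0.24, 0.29, 0.33, 0.36, 0.39]
steppy = [0.10, 0.11, 0.25, 0.26, 0.27, 0.41, 0.42]  # jumps on offer days
```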
Your special offers are great if you can increase revenue per user while minimizing the discount. The value is in the personalization: showing the right offer (with relevant content and price) at the right time. Example of special offer delivery system.
Predicting LTV is different at different game stages: soft launch, some time after global launch or when the game is at maturity. See characteristics and suggested approach here.
In soft launch we do not have the whole LTV curve, but we still need to estimate the lifetime length, so product knowledge is crucial because you are extrapolating. You have to know:
- Monetization limits (depth)
- User behavior
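A rough extrapolation in the spirit of the notes above might fit early cumulative ARPU to a log curve and cap the projection at the game's monetization depth. This is only one of several simple curve shapes, and every number below is invented:

```python
import math

def extrapolate_ltv(observed, horizon_day, ltv_cap):
    """Soft-launch sketch: least-squares fit of cumulative ARPU to
    a*ln(day) + b, projected to horizon_day and capped by the game's
    monetization depth (the cap encodes product knowledge)."""
    xs = [math.log(day) for day, _ in observed]
    ys = [arpu for _, arpu in observed]
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    b = y_mean - a * x_mean
    return min(a * math.log(horizon_day) + b, ltv_cap)

# Hypothetical cumulative ARPU from a soft-launch cohort: (day, $)
observed = [(1, 0.10), (3, 0.21), (7, 0.30), (14, 0.37)]
d180 = extrapolate_ltv(observed, horizon_day=180, ltv_cap=2.0)
```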
The most important step in LTV model development is validating the model and its forecasts. Always have a validation sample to test the model against, and it must be representative. Make sure you do not build the model to work especially well against your validation sample (i.e. "overfit").
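The validation step can be sketched as scoring forecasts against a held-out cohort with a simple error metric such as MAPE. The data below is invented:

```python
def mape(actual, predicted):
    """Mean absolute percentage error of LTV forecasts, scored on a
    held-out, representative validation cohort that was never used to
    fit the model (so a good score is not just overfitting)."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical D90 LTV: validation-cohort actuals vs. model forecasts ($)
actual = [1.00, 0.80, 1.20, 0.90]
predicted = [0.90, 0.85, 1.10, 1.00]
error = mape(actual, predicted)
```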
Some time after global launch you need different LTV models: country groups (Tier 1 vs. Tier 2 vs. Tier 3), acquisition sources and optimization types (Google Ads vs. video networks) and monetization types (in-apps vs. ad-based, or live ops vs. regular purchases).
Always think about how the LTV model will be used. Example: an LTV model for the UA team needs to work with a small sample size so decisions can be made at the campaign level, whereas an LTV model for strategic decisions needs to be more accurate and can be more thorough.
If you are encountering issues when leveraging machine learning, build a quick model with rough "soft launch techniques" for quick validation. Have a few models using a very limited amount of data so you can retrain the machine learning models as soon as possible.
Understanding the impact of LiveOps events on LTV is difficult when only 3 or 4 LiveOps have been done. You can avoid having to wait by looking at peaks during the LiveOps event. "Slice" the LTV curve into smaller periods, define a validation cohort for each slice and calculate the impact on LTV over the period. Then normalize the impact and calculate the final improvement. Example chart here.
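The slicing idea could be sketched as pairing each sliced period with its validation (no-event) cohort and normalizing the combined uplift. Numbers are illustrative:

```python
def liveops_uplift(slices):
    """Estimate the LiveOps impact on LTV without waiting for full curves.
    Each slice pairs revenue-per-user during an event window with the same
    window in a validation cohort that did not see the event. Returns the
    normalized overall uplift across all slices."""
    total_event = sum(event for event, _ in slices)
    total_base = sum(base for _, base in slices)
    return total_event / total_base - 1.0

# (event_rev_per_user, baseline_rev_per_user) per sliced period ($)
slices = [(0.30, 0.25), (0.22, 0.20), (0.28, 0.25)]
uplift = liveops_uplift(slices)
```

Remember to discount early measurements for the novelty effect before treating the uplift as a steady-state improvement.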
When evaluating the impact of LiveOps on LTV, do not forget to take into account the novelty effect: peaks tend to be higher during the first LiveOps events.
For special-offer LTV prediction, start with a rule-based system, then a probabilistic system, and finally a machine learning system. A rule-based system is less risky and more transparent, which also helps you identify the impact of the changes you make.
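A rule-based first version might look like the sketch below. Every threshold, segment, and offer ID is hypothetical; the point is that each rule's impact is easy to inspect before moving to probabilistic or ML systems:

```python
def pick_offer(player):
    """Rule-based special-offer selection: transparent and low-risk.
    Each branch is a human-readable hypothesis whose revenue impact can
    be measured in isolation. All names and thresholds are illustrative."""
    if player["iap_revenue"] >= 20 and player["days_since_purchase"] > 14:
        return "whale_winback_20pct"       # high spender going quiet: small discount
    if player["iap_revenue"] == 0 and player["sessions_d7"] >= 10:
        return "engaged_nonpayer_starter"  # engaged non-payer: cheap starter pack
    return None                            # no offer: avoid discounting unnecessarily

offer = pick_offer({"iap_revenue": 35.0, "days_since_purchase": 20, "sessions_d7": 2})
```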
Intermediate and sophisticated approaches to LTV measurement. Examples of custom and advanced solutions built to understand the LTV of users. What is easy and what is more challenging when building such solutions. How analytics and Growth teams can work together on the final outcome. Presentation based on examples and case studies.
How to combat scaling issues
[💎 @10:15] Chart your creatives on an X axis and check their D3/D7/D28 ROAS to quickly spot outlier creatives (both good and bad) so you can act on them (by reallocating spend, for example).
[💎 @11:29] A benchmark comparing ROAS (e.g. D7/D28) for each week (X axis) with success thresholds allows you to evaluate your UA strategies.
[💎 @13:54] Understanding the CPI to spend relationship is a key factor in understanding how you can scale your campaign.
[💎 @19:13] Questions you need to ask yourself to find the most profitable players (for LAL on Android for example - changes coming to iOS):
1. Is this group great at buying IAPs, do they do it frequently?
2. Is this group heavily engaged, does their engagement grow over time?
3. Does this group of players watch ads frequently?
Make sure you have a benchmark to compare these new LAL/audiences to.
[💎 @21:45] Do not create too many groups/segments of players. You need to make sure they are different so you can understand which group is better. It is not enough to segment based on how much they purchase; you need to use other attributes too.
[💎 @24:12] If your LTV curve looks like a step function with jumps, either your game is relying mainly on LiveOps offers (not ideal design) or the amount of payers/players is too low.
[💎 @30:50] Your special offers are great if you can increase revenue per user while minimizing the discount. The value is in the personalization and showing the right offer (with a relevant content and price) at the right time. Example of special offer delivery system.
Differences in LTV prediction calculations for hypercasual games vs. IAP based games?
Same model but you change the data. Leverage more ad attributes if you rely more on ads.
We will follow the whole journey of developing and maintaining an LTV model for our game, starting from very rough extrapolation models at soft launch and step by step reaching accurate user-based machine learning models for mature products. We will also look at the main obstacles along the way and how to overcome them.
Works with games with different monetization models
[💎 @42:58] Predicting LTV is different at different game stages: soft launch, some time after global launch or when the game is at maturity. See characteristics and suggested approach below.
[💎 @52:00] In soft launch we do not have the whole LTV curve, but we still need to estimate the lifetime length, so product knowledge is crucial because you are extrapolating. You have to know your monetization limits (depth) and user behavior.
[💎 @54:39] The most important step in LTV model development is your LTV model and forecast validation. Always have a validation sample to test the model against, and it must be representative. Make sure you do not build the model to work especially well against your validation sample (i.e. "overfit").
[💎 @58:10] Some time after global launch you need different LTV models: country groups (Tier 1 vs. Tier 2 vs. Tier 3), acquisition sources and optimization types (Google Ads vs. video networks) and monetization types (in-apps vs. ad-based, or live ops vs. regular purchases).
[💎 @01:07:02] Always think about how the LTV model will be used. Example: an LTV model for the UA team needs to work with a small sample size so decisions can be made at the campaign level, whereas an LTV model for strategic decisions needs to be more accurate and can be more thorough.
[💎 @01:10:18] If you are encountering issues when leveraging machine learning, build a quick model with rough "soft launch techniques" for quick validation. Have a few models using a very limited amount of data so you can retrain the machine learning models as soon as possible.
[💎 @01:12:21] Understanding the impact of LiveOps events on LTV is difficult when only 3 or 4 LiveOps have been done. You can avoid waiting by looking at peaks during the LiveOps event. "Slice" the LTV curve into smaller periods, define a validation cohort for each slice and calculate the impact on LTV over the period. Then normalize the impact and calculate the final improvement. Example chart below.
[💎 @01:13:45] When evaluating the impact of LiveOps on LTV, do not forget to take into account the novelty effect: peaks tend to be higher during the first LiveOps events.
Impact of iOS 14?
Ivan:
Robert
In Google App Campaigns, is it possible to track D3/D7 ROAS at the creative level?
Do you use reinforcement learning for offers?
Optimizing LTV models conservatively vs. aggressively and taking on a bigger risk of error?
Robert
Different models between markets/geos?
Ivan
LTV calculations for ad monetized games
Robert
Ivan
A/B testing for monetization
Robert
Ivan
Robert