Sharath Kowligi (Director of Ad Monetization at GameHouse - game publisher - and advisor to RocketShip HQ) joins Shamanth Rao to share the learnings from having run over 200 app store conversion experiments.
There is a lot of evidence showing that the 2 main things to test on the app stores are:
1. Icon
2. Feature graphic (video thumbnail).
The way to approach testing is to be systematic: it's to think about the testing cadence instead of trying to find THE big idea. Because once a team gets into the rhythm of testing, that's when you get better at it.
Putting testing cadence first is a great way to build a creative team. It lowers the ego threshold of being right and being wrong and puts the focus on doing things that are meaningful.
For most B2C, you have to give the test at least 7 days. It allows you to run about 50 tests a year for one app.
If you want to do more tests you can also split between markets that you know behave the same way for YOU (e.g. could be US/UK) to run tests in parallel.
At least once a week, report on experiments whether they are done on the paid advertising side, the product side or growth in general and even on the CRM side.
Once you run the experiments 1/2/3 times and try on iOS what works on Android, you'll know if this is something you can do. If a killer creative on Android doesn't make an impact on iOS, then you can't carry results from Android to iOS.
Historically it was very hard to track changes and perform A/B tests. Now:
Google Play Store has become much more important to publishers and Google provides: more data, more granularity, better identification throughout the funnel.
→ even if the tests are done only on Google Play, it can still have a huge impact.
Visuals (vs. copy) are where you should start.
[💎@05:40] There is a lot of evidence showing that the 2 main things to test are: 1. Icon 2. Feature graphic (video thumbnail).
"The app stores really is just good packaging for your app"
For a Doctor game they tried an out-of-the-box idea for the icon that nobody thought would work, and they saw a +40% increase for that variant.
In this case the variant touched a very emotional part of the game story, although they didn't realize it at the time.
A/B testing when done right (or when you get lucky) makes a huge difference. It changed the top of the funnel, and even the game story.
The biggest thing is the testing process vs. generating ideas, because you never know which one is the big idea.
[💎@13:35] The way to approach testing is to be systematic: it's to think about the testing cadence instead of trying to find THE big idea. Because once a team gets into the rhythm of testing, that's when you get better at it.
Some variants that appear very different can also make no difference at all. Sometimes the audience just doesn't care.
[💎@15:30] Putting testing cadence first is a great way to build a creative team. Because any good creative team has people who believe in what they're doing. It lowers the ego threshold of being right and being wrong and puts the focus on doing things that are meaningful.
The people most involved in the testing process: the growth team (selling the app) and the studio (making the app). Whoever has the responsibility for a specific part is the one who tests it.
[💎@18:17] For most B2C, you have to give the test at least 7 days. It allows you to run about 50 tests a year for one app.
[💎@18:32] If you want to do more tests you can also split between markets that you know behave the same way for YOU (e.g. could be US/UK) to run tests in parallel.
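The cadence math above can be sketched as a quick back-of-the-envelope calculation (the function name and the two-market example are illustrative, not from the episode): at roughly 7 days per test run sequentially, one app supports about 50 tests a year, and each group of similarly-behaving markets run in parallel multiplies that capacity.

```python
# Back-of-the-envelope test capacity, assuming ~7 days per test
# and tests running sequentially within each market group.
DAYS_PER_TEST = 7
DAYS_PER_YEAR = 365

def annual_test_capacity(parallel_market_groups: int = 1) -> int:
    """Rough number of store tests per year for one app."""
    sequential_tests = DAYS_PER_YEAR // DAYS_PER_TEST  # ~52
    return sequential_tests * parallel_market_groups

print(annual_test_capacity())   # one market group: ~52 tests/year
print(annual_test_capacity(2))  # e.g. US and UK in parallel: ~104
```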
Stop the test after 7 or 8 days if you have no result.
[💎@19:40] At least once a week, report on experiments whether they are done on the paid advertising side, the product side or growth in general and even on the CRM side.
Some companies have totally different results on iOS vs. Google.
For the US, Sharath doesn't see a huge difference between iOS and Android (for the apps he's seen) when it comes to something being a killer creative.
[💎@21:20] Once you run the experiments 1/2/3 times and try on iOS what works on Android, you'll know if this is something you can do. If a killer creative on Android doesn't make an impact on iOS, then you can't carry results from Android to iOS.
Cost of implementation on iOS is higher: developing, submitting, waiting, etc. Also, using a 3rd party tool ("fake app store page") adds a layer/step to the funnel.
For their casual segment, a huge win on Android typically translates into a win on Apple. So they deploy on iOS.
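One way to sketch the Android-to-iOS carry-over check as a simple decision rule (the function, thresholds, and numbers here are all hypothetical illustrations, not from the episode): port a few Android winners to iOS first, and only trust cross-platform results once the iOS lift has shown up consistently.

```python
# Hypothetical decision rule: after trying a few Android winners on iOS,
# carry results across platforms only if the iOS lift kept showing up.
def can_carry_android_to_ios(ios_lifts: list[float],
                             min_trials: int = 2,
                             min_lift: float = 0.05) -> bool:
    """ios_lifts: observed iOS conversion lifts for creatives that won on Android.

    Returns True only if there are at least min_trials observations and
    every one of them showed a meaningful lift.
    """
    if len(ios_lifts) < min_trials:
        return False  # not enough evidence yet
    return all(lift >= min_lift for lift in ios_lifts)

# Two Android winners also lifted iOS conversion (by 8% and 12%):
print(can_carry_android_to_ios([0.08, 0.12]))        # True
# A killer Android creative made no impact on iOS:
print(can_carry_android_to_ios([0.10, 0.00, 0.09]))  # False
```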
A D7 retention benchmark of 20% is a good number for the kind of apps he works on.