Creative Management Is the New Campaign Management

⛏️ Guest Miner: Sylvain Gauchet
💎 x 11

Rick Grunewald (Digital Marketing Analyst at PerBlue, a mobile games studio) shares the studio's campaign management approach, structure and challenges, and Bidalgo presents how their tooling helps tackle most of these challenges.

Source: Creative Management Is the New Campaign Management
Type: Webinar
Publication date: July 18, 2020
Added to the Vault on: July 26, 2020
These insights were shared through the free Growth Gems newsletter.

💎 #1 @09:28
PerBlue starts with a CBO campaign where they can run different types of audiences and different types of ad sets (example: a worldwide CBO campaign with 1 broad, 1 LAL and 1 custom audience ad set). It gives you a better idea of how creatives are performing and what Facebook considers a performing creative.

💎 #2 @12:28
They set up a dedicated creative dashboard in the Bidalgo platform for their creative team: it can check at any time which characteristics perform better.

💎 #3 @13:30
A challenge of switching to a creative-centric approach is producing more creatives. Anything you can change in your current creatives helps feed the platforms with iterated creatives, including testing multiple thumbnails and ad copy variations. Make as many changes as you can.

💎 #4 @15:08
If you find a really high-performing creative, make sure you give it as much runway as possible. You need to keep a good balance between what's performing well and testing new creatives.

💎 #5 @16:50
Pull out the specific trends that stand out in your creatives and align them with specific players in your app. This way you can build your own "best practices" for future creatives. Example: showing hero characters for RPG games. They use Bidalgo's labelling tool and creative tab for that.

💎 #6 @22:39
To identify creative fatigue, sort creatives by spend %: you can see whether a top-spending creative is actually not the best performer, or spot a high-performing creative that could be used more (e.g. on new channels).

💎 #7 @23:35
An "overtime graph" is a trend analysis of a specific creative that helps you find the early signals of creative fatigue. You can look at different metrics (e.g. D7 ROAS, IPM, etc.) to see when a gap is forming vs. spend (due to retargeting the same users, for example).

💎 #8 @34:41
Alignment is needed not only between the UA team and the creative team but also with the monetization and product teams. You can carry some of the concepts from the creatives into monetization. Example: if a user comes in on a creative featuring a certain character, make that character more visible on the monetization side.

💎 #9 @37:14
Most of PerBlue's performance comes from the more expensive campaign types (AEO/VO), so you can't necessarily test with these. They therefore start with a more cost-effective campaign structure, which for them is MAI campaigns, to draw some conclusions. They don't see ROAS from these campaigns but get good creative insights. If you run these broad, CPIs will be even lower. Another option is running LAL campaigns.

💎 #10 @41:05
An MAI audience is going to be very different from a high-value audience. If you're showing a creative to the latter, you want to make sure it is optimized.

💎 #11 @Post-webinar
To cross-test audiences and creatives, take a given audience and break it down into different ad sets based on characteristics of the ads (creative type, dimension, creative concepts, different characters, etc.). Using CBO, you can split-test the ad sets and see which ad groupings work best with that given audience.

Gems are the key bite-size insights "mined" from a specific mobile marketing resource, like a webinar, a panel or a podcast. They allow you to save time by grasping the most important information in a couple of minutes, and each one includes the timestamp from the source.


Context

There has been a shift: you now start by choosing your creative and then align it with the different channels. We're moving towards a creative-centric approach.


Creative Strategy at PerBlue


Performance gains at PerBlue due only to creative changes.

Performance vs. brand: strategy driven by IAP purchases. Try to have brand consistency across games but also do not hesitate to push the boundaries a bit.

Once they find a winner, they also iterate on thumbnails and ad copy variations to get marginal gains.


Campaign management

Changes in campaign management since the "blackboxing"? They moved from a model where they tried to be very detailed in the targeting to one that, at least at the start, is very broad in terms of audiences. They really like CBOs.


[💎 @09:28] PerBlue starts with a CBO campaign where they can run different types of audiences and different types of ad sets (example: a worldwide CBO campaign with 1 broad, 1 LAL and 1 custom audience ad set). It gives you a better idea of how creatives are performing and what Facebook considers a performing creative.

There are very consistent trends over very different audiences.
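To make the structure concrete, here is a purely illustrative sketch of the starting setup described above; it is not PerBlue's actual account configuration and is not tied to any ad platform SDK (all names and budget figures are invented):

```python
# Hypothetical representation of the starting CBO test structure: one campaign-level
# budget, three ad sets that differ only by audience, all sharing the same creatives.
cbo_campaign = {
    "name": "WW_CBO_creative_testing",
    "budget_level": "campaign",          # CBO: budget is optimized across ad sets
    "daily_budget_usd": 300,
    "ad_sets": [
        {"name": "broad",           "audience": {"type": "broad",  "geo": "worldwide"}},
        {"name": "lookalike",       "audience": {"type": "LAL",    "source": "purchasers"}},
        {"name": "custom_audience", "audience": {"type": "custom", "source": "engaged_players"}},
    ],
    # The same creatives run in every ad set, so creative-level performance
    # can be compared across very different audiences.
    "creatives": ["hero_character_video", "gameplay_static", "ugc_style_video"],
}

for ad_set in cbo_campaign["ad_sets"]:
    print(ad_set["name"], "runs", cbo_campaign["creatives"])
```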


Team structure

  • They try to integrate the teams more: the UA and data teams pull out creative performance trends, and a creative performance sync with the creative team helps it understand why something is performing (the characteristics) and which KPIs to look at (beyond the concept).
  • [💎 @12:28] They set up a dedicated creative dashboard in the Bidalgo platform for their creative team: it can check at any time which characteristics perform better.


Creative challenges

They've been on multiple sides of the spectrum:

  • When the focus was on targeting/settings, they had fewer creatives.
  • [💎 @13:30] A challenge of switching to a creative-centric approach is producing more creatives. Anything you can change in your current creatives helps feed the platforms with iterated creatives, including testing multiple thumbnails and ad copy variations. Make as many changes as you can.
  • The Bidalgo platform can help with some of these smaller changes.
  • When you have a lot of creatives, you face a new challenge because you might not be using high-performing creatives as much as you should.
  • [💎 @15:08] If you find a really high-performing creative, make sure you give it as much runway as possible. You need to keep a good balance between what's performing well and testing new creatives.


Creative best practices for gaming apps

They've noticed that they line up with some industry standards but also defy others. Example: video tends to scale better in the industry, but for them static is working really well.


Titles are very individual and there are different reasons they work well.


[💎 @16:50] Pull out the specific trends that stand out in your creatives and align them with specific players in your app. This way you can build your own "best practices" for future creatives. Example: showing hero characters for RPG games. They use Bidalgo's labelling tool and creative tab for that.


Something as small as a character or a background might be the difference between hitting your KPIs or not.
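Bidalgo's labelling tool and creative tab do this natively, but as a hedged illustration of the underlying analysis (all column names and numbers below are invented), you can tag each creative with its characteristics and aggregate performance per label:

```python
import pandas as pd

# Invented example: one row per creative, tagged with its characteristics ("labels").
creatives = pd.DataFrame([
    {"creative": "vid_hero_A", "format": "video",  "character": "hero_A", "spend": 1200, "installs": 950, "d7_revenue": 800},
    {"creative": "vid_hero_B", "format": "video",  "character": "hero_B", "spend": 900,  "installs": 600, "d7_revenue": 450},
    {"creative": "img_hero_A", "format": "static", "character": "hero_A", "spend": 700,  "installs": 640, "d7_revenue": 560},
    {"creative": "img_castle", "format": "static", "character": "none",   "spend": 500,  "installs": 300, "d7_revenue": 180},
])

# Aggregate by label to see which characteristics perform better.
for label in ["format", "character"]:
    agg = creatives.groupby(label)[["spend", "installs", "d7_revenue"]].sum()
    agg["cpi"] = agg["spend"] / agg["installs"]
    agg["d7_roas"] = agg["d7_revenue"] / agg["spend"]
    print(f"\nPerformance by {label}:")
    print(agg.round(2))
```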


Creative center on Bidalgo


Bidalgo built the "creative center".

Creative tab

  • Creative-centric approach, showing your creative performance from all channels. UA managers often check performance at the campaign level first, then look at performance at the creative level. You can also break down by channel, creative "labels", etc., so you can drill down to identify creative fatigue.
  • [💎 @22:39] To identify creative fatigue, sort creatives by spend %: you can see whether a top-spending creative is actually not the best performer, or spot a high-performing creative that could be used more (e.g. on new channels).
  • [💎 @23:35] An "overtime graph" is a trend analysis of a specific creative that helps you find the early signals of creative fatigue. You can look at different metrics (e.g. D7 ROAS, IPM, etc.) to see when a gap is forming vs. spend (due to retargeting the same users, for example). A rough sketch of both checks follows this list.
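Outside of Bidalgo's creative tab, a minimal pandas version of both checks (the spend-share ranking and the "overtime" trend per creative) could look like the sketch below; the data, column names and thresholds are invented for illustration:

```python
import pandas as pd

# Invented daily, creative-level data.
df = pd.DataFrame({
    "date":        pd.to_datetime(["2020-07-01", "2020-07-08", "2020-07-01", "2020-07-08"]),
    "creative":    ["hero_video", "hero_video", "gameplay_static", "gameplay_static"],
    "spend":       [1000, 1400, 400, 450],
    "installs":    [800, 900, 380, 400],
    "impressions": [200000, 320000, 90000, 100000],
    "d7_revenue":  [700, 750, 300, 340],
})

# 1) Spend-share check: is a top-spending creative actually not the best performer?
totals = df.groupby("creative")[["spend", "installs", "d7_revenue"]].sum()
totals["spend_share"] = totals["spend"] / totals["spend"].sum()
totals["d7_roas"] = totals["d7_revenue"] / totals["spend"]
print(totals.sort_values("spend_share", ascending=False).round(3))

# 2) "Overtime" trend: weekly IPM per creative, to spot a gap forming vs. spend.
weekly = df.groupby(["creative", pd.Grouper(key="date", freq="W")])[["spend", "installs", "impressions"]].sum()
weekly["ipm"] = weekly["installs"] / weekly["impressions"] * 1000
print(weekly[["spend", "ipm"]].round(2))
```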

Creative Auto-Production

  • It takes your best-performing assets, looks at your historical data, then creates hundreds of iterations based on your top concepts.
  • It also analyzes the iterations and recommends the ones most likely to perform, because you can't necessarily test hundreds of creatives.
  • The webinar shows an example of creatives that can be mixed and matched (a rough illustration of the idea follows this list).
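Purely as an illustration of the mix-and-match idea (this is not Bidalgo's actual implementation, and every asset name is invented), iterations can be thought of as combinations of interchangeable elements from top concepts, capped to what you can realistically test:

```python
from itertools import product

# Invented pools of interchangeable elements taken from top-performing concepts.
backgrounds = ["castle", "forest", "battle_arena"]
characters  = ["hero_A", "hero_B"]
hooks       = ["level_up_cta", "collect_heroes_cta"]

iterations = [
    {"background": bg, "character": ch, "hook": hk}
    for bg, ch, hk in product(backgrounds, characters, hooks)
]
print(f"{len(iterations)} possible iterations")  # 3 * 2 * 2 = 12

# You can't test everything, so keep a shortlist (here simply the first 4 combinations).
for combo in iterations[:4]:
    print(combo)
```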


Q&A

  • How do you communicate with the art team and guide creative ideation?
  • The art team is actually part of the UA team. There is a monthly cadence to all get together and review performance, mock up new concepts, etc.: how the channels are doing, the specifics of the creatives (dimensions that work, video vs. static, characters doing well, etc.). They attach concepts to creatives based on players' "desires".
  • They spend time educating the team on performance, why they cycle through creatives (taking out creatives that are performing well), etc.
  • [💎 @34:41] Alignment is needed not only between the UA team and the creative team but also with the monetization and product teams. You can carry some of the concepts from the creatives into monetization. Example: if a user comes in on a creative featuring a certain character, make that character more visible on the monetization side.
  • What kind of testing strategies can be successful especially if you have lower budgets?
  • Getting performance rankings (through the creative center) helps you only run creatives that have a higher chance of performing.
  • [💎 @37:14] Most of PerBlue's performance comes from the more expensive campaign types (AEO/VO), so you can't necessarily test with these. They therefore start with a more cost-effective campaign structure, which for them is MAI campaigns, to draw some conclusions. They don't see ROAS from these campaigns but get good creative insights. If you run these broad, CPIs will be even lower; another option is running LAL campaigns.
  • This has been particularly important now that they have to test a lot of creatives.
  • Know your user and connect your whole funnel for each app, because different apps might require different creative testing.
  • [💎 @41:05] An MAI audience is going to be very different from a high-value audience. If you're showing a creative to the latter, you want to make sure it is optimized.
  • Relationship between testing on IPM and the deeper funnel activity?
  • The main variable PerBlue looks at when they're testing is install rate. They give each creative the same amount of spend and then look at the CPI and the install rate (a small calculation sketch follows this list).
  • They also sometimes just want to see what the FB algorithm likes, doing "scale testing": take 4 creatives, throw them on Facebook and see which one Facebook thinks is going to be the best-performing creative.
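As a minimal sketch of that comparison (the numbers are invented, and install rate is assumed here to mean installs per click), this is the arithmetic behind CPI, install rate and IPM when every test creative gets the same spend:

```python
# Invented results of a creative test where each ad received the same spend.
tests = [
    {"creative": "hero_video",      "spend": 200.0, "impressions": 40000, "clicks": 1200, "installs": 150},
    {"creative": "gameplay_static", "spend": 200.0, "impressions": 52000, "clicks": 900,  "installs": 110},
]

for t in tests:
    cpi = t["spend"] / t["installs"]               # cost per install
    install_rate = t["installs"] / t["clicks"]     # assumption: installs per click
    ipm = t["installs"] / t["impressions"] * 1000  # installs per 1,000 impressions
    print(f'{t["creative"]}: CPI ${cpi:.2f}, install rate {install_rate:.1%}, IPM {ipm:.1f}')
```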

Process from ideation to final concept?

  • A monthly review where they're concepting.
  • Batches are released within a 2-week period.
    - They always run with high performers that the creative team builds iterations on. After a couple of days of testing through MAI campaigns, they know which ones perform better.
    - They also work on new concepts and run them against each other, giving them a bit more time in testing (7-10 days) and testing more different things.


Post-webinar Q&As


  • Q1: How do you see AI/machine learning impacting creative testing and production over the next 12-24 months?
  • A1: The increasing move toward AI/ML will continue highlighting the importance of pushing a higher volume of creative, particularly when it comes to iterations of top performers. High volumes of creative are going to be the ammunition these systems need to rapidly test and identify the best performing ads.
  • Q2: When the data is so granular, with spend per execution being so low, how are you able to determine a clear winner when testing?
  • A2:
  • PerBlue: If your executions are too low to identify performance differences, aim to structure your test so that you can lower your cost per execution. In a perfect world, we could run all of our tests on high value audiences. But this would be expensive and take way too long to gain enough data. Instead, we use cheaper traffic with a shorter funnel event (install instead of IAP) to rack up testing data cheaply and efficiently. Take a look at your KPIs and see which metrics correlate closest with your goal KPI (for us at PerBlue, that is purchases). If it will help to control costs, structure your tests around these events (a rough correlation sketch follows this Q&A).
  • Bidalgo: Rick makes a great point about finding correlations between your main KPI(s) and upper funnel metrics to make determinations more quickly and cost effectively. Also keep in mind that there’s a human element to the analysis, so determining what makes your winner successful can be just as important. Is it about different placements? Different ages or genders? Is it truly just the different elements?
  • Q3: Do you find that it's more important to test new/different concepts vs. dynamic elements? Typically minor element variations don't drastically move the needle, right?
  • A3:
  • PerBlue: The majority of our creative production goes into proven concepts with dynamic elements. We also allocate about 20-30% of our production to new concepts. We've found that campaign performance is higher and more stable when running bread-and-butter ads. We can continue to make performance improvements by introducing new and better iterations of the same concepts.
  • But, just like ad fatigue, it is possible for concepts to fatigue as well, especially in a more narrowly defined audience. This is why it's important to allocate some production to trying new things with creative.
  • Bidalgo: Once you have a concept or two that are proven to be successful, the majority of production should move to iterating on those concepts. New concepts are important for avoiding concept fatigue, but there are also ways to make different iterations look like completely new creatives. (E.g. changing the background and character/element but leaving the same video flow that was successful.) And, compared to making a brand new creative, combining multiple successful iterations into one promotional cut only takes a fraction of designers' time.
  • Q4: Here's a question for Rick: What are your main KPIs to manage creative performance? CTR? IPM? ROAS? Others? How do you use them for your decisioning?
  • A4: At PerBlue, we focus on IPM as one of our most important KPIs. Since we try to test in lower cost/CPI campaigns, we don’t put much emphasis on ROAS. IPM is also a great way to measure the full-funnel efficiency of a given creative and has been one of our top correlations for ads performing well in VO or AEO campaigns. IPM also correlates strongly with a low CPI and higher scale in our campaigns.
  • Q5: What is your suggestion on cross-testing audience testing / creative testing? Do you test different creatives for different audiences?
  • A5:
  • PerBlue: We have done some audience cross-testing in the past. We tend to see the same creatives perform among different audiences. One approach we have found to be successful is trying to test at a broader concept level rather than the ad level, since there can be so much variation between individual ads.
  • [💎 @Post-webinar] To cross-test audiences and creatives, take a given audience and break it down into different ad sets based on characteristics of the ads (creative type, dimension, creative concepts, different characters, etc.). Using CBO, you can split-test the ad sets and see which ad groupings work best with that given audience.
  • Bidalgo: It's definitely smart to look at different creative sizes/video lengths to test separately, since delivery is different based on these kinds of specs. This is especially important for things like Placement Asset Customization adopters. Overall, the method Rick presented utilizing LAL, Broad, and CA in one CBO is great for creative testing, as it can help cover both scenarios and give you insight into many different behaviors.
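To make the "find the upper-funnel metric that correlates with your goal KPI" advice from A2 and A4 concrete, here is a hedged sketch (the data is invented; the point is simply to check which cheap metric tracks purchases best and use it as the proxy KPI for low-cost creative tests):

```python
import pandas as pd

# Invented per-creative results from past campaigns.
history = pd.DataFrame({
    "creative":  ["a", "b", "c", "d", "e", "f"],
    "ctr":       [0.012, 0.018, 0.010, 0.022, 0.015, 0.009],
    "ipm":       [4.1, 6.3, 3.2, 7.0, 5.1, 2.8],
    "cpi":       [2.4, 1.6, 3.1, 1.4, 1.9, 3.5],
    "purchases": [12, 25, 8, 30, 18, 6],   # goal KPI
})

# Which upper-funnel metric correlates most strongly with the goal KPI?
correlations = history[["ctr", "ipm", "cpi"]].corrwith(history["purchases"])
print(correlations.sort_values(ascending=False))
# The metric with the strongest (absolute) correlation can serve as the cheap
# proxy KPI when testing creatives on lower-cost MAI traffic.
```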

