Evidence Based Creative and the Mysterious Case of the Iridescent Birds


Before I dive into what I mean when I talk about “Evidence Based Creative,” let me begin by addressing the elephant in the room.

I swear I can hear a collective groan rising from the throats of thousands of Creative Directors every time I write the words “Evidence” and “Creative” in the same sentence.

Even for me, an ad-tech geek who obsesses over data, it seems like an overreach to claim that we could ever fully measure what makes a given piece of ad creative good, great or game-changing.

The problem is not that we can’t tell when certain creative performs better than other creative; we can, and we do so regularly. What we often cannot explain is why one version works and the others less so.

Why not? Because good creative is more than just copy and images; it draws us in on a human level. Like good art, good creative work provides meaning, ideally shared meaning, that unites people around a brand. While meaning is what gives texture to our lives, it remains forever subjective; you just can’t quantify meaning.

So if you can’t measure meaning, then what do we mean by Evidence Based Creative?

Let me start with an example.  

My team at Teads Studio recently ran a large campaign where we were trying to suss out which elements of a piece of video creative – actually many different videos – were driving user action. In this particular case we were using video completion and click-through as the metrics.

As we pored over fresh performance data a week into the campaign, we noticed a pattern. Specifically, two versions of the video were driving significantly higher click-through rates. When we went back to compare those videos scene-by-scene with the other, lower-performing videos, we found it. There was one – arguably random – scene that set these particular versions apart.

What was in the one unique scene that drove better performance?  

Birds.  Loads of birds.

In one 2.5-second sequence of those two videos, a kaleidoscope of thousands of iridescent emerald hummingbirds bursts on screen. Those birds apparently struck a chord and drove user action well above the rest of the lot. Perhaps those birds triggered some unknown neural pathway that made the ad simply irresistible. My best guess is that something about the aesthetics of the shining birds – it is a beautiful image – created some sort of connection, coupled with the other visuals in the ad, that resonated more deeply with the audience. It was something beautiful and extraordinary. At some level, we might say that the beauty of the image actually meant something to people.

Whatever the underlying cause, there was no explanation for the performance boost other than the presence of those birds. It was the only major difference from the other, almost identical videos, and the only element shared by the two top performers.

Now, I still can’t explain why that scene worked as well as it did. Nor can I explain how some creative mind decided that an exploding fractal of CGI birds was just the thing to connect with users, but WOW! Hand that person a Gold Lion and I’ll be the first to raise a rosé toast to them on the Croisette in Cannes!

The point is, we can measure what works and riff on that to our heart’s content….  

“What happens if we make the birds magenta?  Can we make ‘em twinkle, just a little, not too much?!”

…but we cannot use data to design the perfect creative ex nihilo.  

Until the day we witness the emergence of superintelligent machines, if we ever do, no amount of data, research or A.I. will get you to those birds. We still need creative minds to come up with the original idea. Creative Directors the world over can now breathe a sigh of relief. We’re not here to measure your work out of existence.

We can, however, use the tools of the data scientist, the researcher or the A.I. wizard to improve already great creative and to make less-than-great creative work a whole lot harder. This is what I’m calling “Evidence Based Creative”: the idea that we can use performance data, not just post-campaign but live, in near-real-time, to identify what’s working, design tests, validate hypotheses and iterate.

This notion of A/B, or often multivariate, testing has long been a staple of performance marketers angling to squeeze a few extra dollars of LTV out of their user acquisition campaigns. Today we’re also seeing many top-tier brands use it extensively as a strategy to build smarter awareness and consideration campaigns.
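
To make that concrete, here’s a minimal sketch, in Python and with made-up impression and click counts, of the kind of significance check we’d run before declaring one version the winner: a simple two-proportion z-test on click-through rates.

```python
from math import sqrt

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test comparing the click-through rates of two creative versions."""
    ctr_a, ctr_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)            # pooled CTR under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))  # standard error of the CTR difference
    return ctr_a, ctr_b, (ctr_a - ctr_b) / se

# Hypothetical tallies: version A (the "birds" cut) vs. version B (a control cut)
ctr_a, ctr_b, z = ctr_z_test(clicks_a=620, imps_a=50_000, clicks_b=450, imps_b=50_000)
print(f"CTR A: {ctr_a:.2%}  CTR B: {ctr_b:.2%}  z = {z:.2f}")     # |z| > 1.96 ~ significant at 95%
```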

Whatever their brand objectives, this notion of Evidence Based Creative, or iterative creative testing and optimization, is emerging as a key tool in the marketer’s arsenal. The fact is, according to Nielsen research, up to 65% of sales lift in digital is driven by the creative. As media people, there are only so many levers we can pull to improve campaign performance. Adding feedback loops between media metrics and creative optimizations is a critical step in bridging the gap and moving to the next level of campaign performance.

—–

As I’ve been involved in more of these endeavors, I’ve been lucky enough to learn a thing or two about what works and what doesn’t when it comes to running Evidence Based Creative campaigns for brands.

Before I get to those key principles, it bears stating the overarching importance of mindset. Everyone involved in this kind of test needs to come at it with curiosity and an openness to learning, an intellectual posture that says, “I don’t know what I don’t know, but I intend to find out.” It’s the mindset of the scientist, the explorer, people in search of truth who are willing to take risks to find it. If you want certainty and predetermined outcomes, this may not be for you.

Of course, most brands conduct extensive research already. Brand studies, sales lift, store traffic and ROAS are just a few examples. Don’t get me wrong, these are valuable documents that measure the aggregate impact of a campaign and remain a key tool in a marketer’s bag of tricks. But what got you here won’t get you there, and these reports all have two things in common: they are purely retrospective, and they arrive weeks, often months, after the campaign has ended and the creative has already been retired or changed.

On top of those traditional methodologies that run a full campaign cycle, marketers also have access to real-time metrics in the digital media arena, metrics that make it possible to take the temperature of a given campaign on the fly, mid-flight.  You just have to look!

Perhaps most importantly, digital creative tools exist today that make it far more realistic to scale creative across a large number of versions. Combine real-time or near-real-time media metrics with scalable creative versioning and you’ve got all the technical ingredients you need for an effective Evidence Based Creative strategy.

So what does a curious marketer need to consider when gearing up for their first Evidence Based Creative execution? It boils down to six key principles that form a jumping-off point to get brands started rapidly testing and iterating on their creative.

  1. High-Performing Creative Rises Above the Crowd

As we discussed already, creative is subjective and can only be judged in relation to the performance of other creative. The more versions or iterations you can put in front of an audience, the more you can learn about what is working and what is not. So cut two versions of that video, or five, or fifty. Run them simultaneously across the same audience segments and see what happens.

Again, many tools exist to accomplish this. Don’t have the budget to cut endless video iterations? Sometimes we’ll run versioning only on the interactive elements of a piece of video creative. Say we have a single 15-second video ad. We may layer that video with 20 or 30 different variations of skins, overlays, end screens, CTAs, messaging, product images, etc. These static elements are far easier to adapt and scale than cutting multiple videos, and though they won’t get you to our famous bird epiphany, the variability in performance from something as simple as swapping headline copy can be massive.
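
As a rough illustration of how quickly those static layers multiply, here’s a small Python sketch. The overlay, headline and CTA values are invented, but the mechanics are the point: every combination of assets becomes one trackable creative version.

```python
from itertools import product

# Hypothetical asset libraries layered on top of a single 15-second video
overlays  = ["price_badge", "seasonal_skin", "none"]
headlines = ["Feel the rush", "Built for the city", "Now 0% APR"]
ctas      = ["Learn more", "Book a test drive"]

# Each combination becomes one version with its own ID, so performance can be tracked per variant
versions = [
    {"id": f"v{i:03d}", "overlay": o, "headline": h, "cta": c}
    for i, (o, h, c) in enumerate(product(overlays, headlines, ctas), start=1)
]
print(len(versions), "versions")   # 3 x 3 x 2 = 18 variants from one base video
```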

  2. Automate to Iterate

We use our own proprietary platforms to build creative iterations, automate versioning, simplify approvals and run analytics. We’ve developed an entire methodology for managing campaign flighting on complex tests.

A number of similar tools are available in the market and we’ve explored many of them. The key here is automation: identifying tools that allow you to build more versions, faster, and to manage a campaign without getting lost in the complexity.

Creative editors that allow for bulk asset libraries, which can automatically generate multiple versions from a repository of available pieces, are a great start. Dynamic Creative Optimization is another way to automate versioning. A number of platforms today even offer machine learning algorithms that take all of those pieces, quickly identify which versions are working best against which audiences, and then upweight media budget behind them.
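
To show what that upweighting looks like under the hood, here’s a toy Thompson-sampling sketch in Python. It is not any particular platform’s algorithm, and the version IDs and tallies are invented, but it captures the idea: impressions drift toward the versions with the strongest observed CTR while the weaker ones still get enough traffic to keep learning.

```python
import random

# Beta-Bernoulli Thompson sampling over creative versions.
# Hypothetical click/impression tallies observed so far:
stats = {
    "v01": {"clicks": 120, "imps": 10_000},
    "v02": {"clicks": 95,  "imps": 10_000},
    "v03": {"clicks": 140, "imps": 10_000},
}

def pick_version():
    """Sample a plausible CTR for each version and serve the one with the highest draw."""
    draws = {
        vid: random.betavariate(s["clicks"] + 1, s["imps"] - s["clicks"] + 1)
        for vid, s in stats.items()
    }
    return max(draws, key=draws.get)

served = [pick_version() for _ in range(1_000)]
print({vid: served.count(vid) for vid in stats})   # most of the next 1,000 impressions go to the strongest version
```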

These are all tools for Evidence Based Creative, and more are arriving every day. I, for one, am in the market for an image-recognition tool that allows us to “tag” specific “scenes” in a video to aid in analyzing large sets of video assets. Imagine if we’d had a simple interface that could have tagged each scene in the videos with the birds; we wouldn’t have needed to comb through every video to identify scenes with common visuals because we could have just cross-referenced the tags. Now that would be cool! It would’ve saved time and money. Maybe one of you reading this is working on that now; if so, let’s chat!
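
The tagging itself is the hard part; once the tags exist, the cross-referencing is almost trivial. Here’s a quick sketch, with invented version IDs and scene tags, of how a scene like the birds would surface automatically.

```python
# Hypothetical scene tags per video version (generating these automatically from
# the footage is the tool I'm wishing for; the tags below are invented).
scene_tags = {
    "v07": {"city_night", "driver_closeup", "hummingbirds", "logo_endcard"},
    "v12": {"mountain_road", "hummingbirds", "logo_endcard"},
    "v03": {"city_night", "driver_closeup", "logo_endcard"},
    "v09": {"mountain_road", "product_interior", "logo_endcard"},
}
top_performers = ["v07", "v12"]

# Scenes shared by every top performer but absent from all the rest
shared_by_winners = set.intersection(*(scene_tags[v] for v in top_performers))
seen_in_the_rest  = set.union(*(tags for v, tags in scene_tags.items() if v not in top_performers))

print(shared_by_winners - seen_in_the_rest)   # -> {'hummingbirds'}
```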

  3. Build (One) Clear Hypothesis Per Test

It’s tempting to want to test every variable, and we can indeed conduct multivariate tests that compare multiple elements in concert.  However, your hypothesis needs to be simple and allow for mutually exclusive outcomes if you want to actually learn anything meaningful.  This is why we tend to run tests in ‘sprints.’ Each sprint tends to test only one variable to get clear results, then we test another variable in the next sprint.  We will often compare the outcomes across sprints to try to determine if we’ve missed anything from one to the next.

It’s also possible to run this type of campaign in one go using DCO, but I’ve found this frustrating at times: one version gets served more often to a specific audience because it’s performing well, and it then becomes harder to isolate the discrete elements that are actually impacting performance.

  4. Move Fast

There’s a lot you can learn from comparing performance across a set of creatives.  Want to learn more? Run another test. The quicker you can get the scale for meaningful data across multiple versions of creative, the quicker you can draw conclusions, conceive a new hypothesis and test new iterations.  

As mentioned above, we typically run in one- or two-week ‘sprints’ where we’ll isolate variables and test up to 100 versions simultaneously across multiple audiences. Then lather, rinse and repeat in each consecutive sprint. The faster you move, the more you’ll learn and the quicker you’ll see performance lift.

  5. Scale Matters

My grandmother Becky is my one-woman fan club. I check in periodically to see how she’s doing, but I also do it selfishly to get a little ego boost because, for her, everything I do is unequivocally “Amaaaaazing!” But Becky is a sample size of one. Though her positive reinforcement is welcome, it doesn’t tell me much about how I’m actually navigating the world. Maybe my work is horrendous, maybe I’m a total failure, but if I relied on Becky as my only data point, everything would seem, well, amaaaaazing!

If you want meaningful insights you need meaningful scale! We find that 50K impressions per version is about the right minimum to drown out noise in the data. So if you’re testing 10 creatives, it’s best to run at least 500K impressions in your test, or 50K per version. More is better, but the key here is to ensure no single data point is skewing your results.
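
One way to sanity-check that rule of thumb is to look at the 95% margin of error on a measured CTR at different impression volumes. A quick sketch, assuming a hypothetical baseline CTR of around 1%:

```python
from math import sqrt

def ctr_margin_of_error(ctr, impressions, z=1.96):
    """95% margin of error on a measured click-through rate."""
    return z * sqrt(ctr * (1 - ctr) / impressions)

baseline_ctr = 0.01   # hypothetical ~1% CTR
for imps in (5_000, 50_000, 500_000):
    moe = ctr_margin_of_error(baseline_ctr, imps)
    print(f"{imps:>7,} impressions: {baseline_ctr:.2%} ± {moe:.2%}")
# At 5K impressions per version the noise can swamp a modest lift between versions;
# at 50K the interval tightens enough to trust the comparison.
```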

  6. Bust Those Silos

With all this talk of data and analysis, we might forget that it takes a team of highly skilled humans to actually make this work. This can’t be outsourced to your technology vendor. The entire ecosystem of creative agency, media agency, brand and technology partners needs to be on board, aligned and in constant communication.

For example, on one recent project, our analysis/creative-refresh/approvals process had to fit into a 72-hour window every 10 days to keep up with the sprints. It works, but only if everyone involved in the chain is committed to the project, respects very tight deadlines and talks regularly with the others.

Of course, there is far more to consider here, but much of it is best learned by doing. We’re certainly not the only people out there conducting this type of iterative testing; the key is finding partners who can help. My recommendation is to start with an initial hypothesis and run a simple test, then ramp up the complexity from there.

Evidence Based Creative might sound like a lot of work, and in some ways it is. But whatever your brand, the insights gained from integrating an iterative testing initiative into your creative strategy can be the ticket to a much deeper understanding of your creative and, ultimately, to unlocking what works best for your brand.

Far from representing a menace to the creative-agency world, the techniques of Evidence Based Creative are wonderful tools for insight and for improving creative performance. They help us get beyond the raw data and drill down to which elements of a piece of creative are really impacting performance. Sometimes we even find surprises, like the beauty and shared meaning to be found in 2.5 seconds of brilliantly hued hummingbirds.

Grant Gudgel is VP, Head of Teads Studio. His sandbox is at the intersection of creative, data and media, working with brands to build smart, dynamic creative strategies and executions that are personalized to each user over the course of the customer journey.

Grant came to the digital advertising world through a series of fortunate accidents. Drawing on an insatiable curiosity, he has at various times been an entrepreneur, a ski bum, a consultant and a carpenter. He has started, sold and/or advised businesses spanning digital advertising, IoT, mobile apps, robotics, machine learning, education and non-profits.

