Creative analysis is a hot topic. Some people swear by it, pouring hours into dissecting every ad, but there’s a growing number of experts beginning to dismiss its importance for strong paid social performance. The argument is that with so much noise and chance, there’s no point in trying to understand what makes a winner a winner.
So it raises the question: are we wasting time analyzing creative performance? Or is it still critical to scaling winners?
I suppose we should dive in.
Creative is king, but creative analysis is critical
We’re all in agreement: creative is the primary lever we have to pull on paid social channels like Meta and TikTok. Every UA manager, Head of Marketing, and founder will tell you the same thing.
But if creative is king, then every step of the process that goes into producing a creative is at least somewhat important. Right?
Coming up with new creative concepts is important. Testing them is important. And understanding why certain creatives work and others flop is important too. That’s where creative analysis comes in.
When it comes to creative analysis, there are really two camps:
Camp one: The we-analyze-everything people
Camp one is the data-obsessed folk. They’ll dig into every element of a creative. Not just concepts, hooks, and formats, but down to font size, typography, background hue, the speed of transitions — even voiceover volume.
Every variable is fair game.
Picture huge spreadsheets, overwhelmingly detailed naming conventions, and, of course, a helping hand from our good friend ChatGPT.
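For a sense of what that looks like in practice: the naming-convention approach usually boils down to encoding every variable into the ad name so performance can be sliced by any of them later. Here’s a minimal sketch in Python, assuming a hypothetical naming convention (the field order and the example name are made up for illustration):

```python
# Minimal sketch of the "tag everything" workflow, assuming a hypothetical
# naming convention of the form:
#   concept-hook-creator-format-angle-variant
# e.g. "gymchallenge-funnyhook-creatora-ugc-weightloss-v03"
from dataclasses import dataclass


@dataclass
class CreativeTags:
    concept: str
    hook: str
    creator: str
    format: str
    angle: str
    variant: str


def parse_ad_name(ad_name: str) -> CreativeTags:
    """Split a structured ad name into its tagged variables so results
    can later be grouped by any one of them in a spreadsheet or script."""
    parts = ad_name.lower().split("-")
    if len(parts) != 6:
        raise ValueError(f"Unexpected ad name format: {ad_name}")
    return CreativeTags(*parts)


print(parse_ad_name("gymchallenge-funnyhook-creatora-ugc-weightloss-v03"))
```

The more fields you add (font, hue, transition speed), the more slices you can produce, and the more tempting it becomes to read meaning into every one of them.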
The idea is: “If we unpack and understand every single detail, we’ll uncover the formula for what makes a winning creative”.
But is this true?
Well, first let’s look at this approach at a high level. A lot of good comes from it. It forces rigor: you build processes, generate hypotheses, and feed learnings back into your creative flywheel. Without that final step of feeding learnings back into ideation, you run the risk of turning creative testing into throwing stuff at the wall and seeing what sticks.
But there are downsides. If you spend too much time obsessing over minute variables, you risk miscategorizing randomness or trend-driven luck as insights. That meme ad didn’t win because the font was blue; it won because the trend was hot and timing was perfect. Next week, the trend dies, and you’re left chasing ghosts.
It’s also ridiculously time-consuming. You can spend hours dissecting every last detail only to find yourself no closer to better results. Or worse, you start believing that more analysis means a higher win rate, when in practice that’s often not the case.
👇 A note on randomness
Randomness plays a bigger role in ad wins than we’d like to admit. Sometimes an intern throws together a quick ad inspired by a new TikTok trend, and that two-hour effort ends up being your top performer for months. Meanwhile, the hundreds of micro-insights you spent hours on barely move the needle.
It’s not even the 80/20 rule — think more like a 99.9/0.1 rule:
99.9% of what you’re analyzing won’t change performance in a meaningful way.
That said, this approach can pay off. At Perceptycs, we’ve found countless winning iterations for our clients by following a rigorous (albeit pretty painful) post-experiment analysis process.
Camp two: The why-bother-speculating people
By comparison, the second camp looks at all this and says: forget it.
“There are millions of variables in play. You’ll never isolate why something worked. Just keep producing new ideas.”
There’s truth here too. The Meta algorithm processes billions of signals, more than we can ever meaningfully analyze. Even if you think you’ve cracked the code, you may just be stifling creativity with arbitrary rules.
Generally, the people in this camp prioritize big swings: fresh concepts, wild angles, better creators, newer trends.
This combats the over-analysis trap of camp one, and often leads to higher win rates. Fresh ideas drive breakthroughs more often than endless iterations do.
Yes, you can turn an ‘okay’ creative into a winner by creating variations and improving over time, but this still isn’t guaranteed. And nothing beats a unicorn concept rising to the top overnight.
However, there are of course downsides to this approach too. If you never look back, you miss chances to turn ‘okay’ ads into winners. There have been many, many occasions where careful iteration (refining a promising format, swapping angles, tweaking hooks) transformed average performers into consistent long-term winners. Without analysis, you leave that opportunity on the table.
A balanced approach to creative testing
Like anything in the app world, the answer isn’t clear-cut. Neither of these camps is ‘correct’. For most teams, I think the best approach sits in the middle: avoid analysis paralysis without ignoring the lessons you can learn from creative retros.
Start by asking yourself three questions:
1. How much can we act on immediately?
2. What is likely to have the largest impact if we do act on it?
3. What new hypotheses do we have?
1. How much can we act on immediately?
Any single insight can usually be acted on pretty quickly. Spot a new hook that’s outperforming others? Spin up more variants. Notice a static format that’s working? Brief a few iterations this week. Easy.
But when you stack up dozens (or hundreds) of micro-insights, things get unmanageable. You’ll never test them all. Which raises the question: why spend hours digging into variables you’ll never act on?
Chances are, if you’re doing things right, you’ll naturally default to the highest-impact variables anyway: concepts, hooks, creators, formats, angles.
Of course, how much you can act on depends heavily on resources. If you’re running thousands of active ads at any one time, you can afford to get more granular because you actually have the bandwidth to follow through. But if you’re testing 30 creatives a month, you’ll drown if you try to chase every variable. In that scenario, you’re better off sticking to the core drivers and leaving the font-size-and-color debates for someone with unlimited production capacity.
Pro tip
Check out this blog from David Vargas for three well-rounded creative testing frameworks, based on different resource levels.
2. What is likely to have the largest impact?
This is where prioritization comes in. For most teams, the big levers are:
- Concepts
- Hooks
- Creators
- Formats
- Angles (value proposition)
These are the variables you can reliably experiment with to drive consistent improvements over time.
Yes, smaller tweaks (font color, typography, background music volume, etc.) can sometimes produce a bump. And if you’ve already seen a granular change make a clear difference in your own data, it’s probably worth testing further. But those cases are the exception, not the rule. There are only so many font tests you can run before you hit diminishing or zero returns.
You don’t want to waste cycles chasing micro-variables when it’s the big levers that consistently shape outcomes. While it’s tempting to get lost in the weeds, for the majority of teams time is better spent sticking to the levers that reliably move performance.
3. What new hypotheses do we have?
Creative analysis only matters if it produces testable hypotheses. Otherwise, you’re just collecting trivia.
Your thought process might look like this:
- Funny hooks outperformed serious ones in the last test → let’s produce a batch of ads with different funny hooks and see if this holds true
- A creative with a bold font color suddenly spiked performance → let’s test a few more color variations to validate whether it’s a fluke or a lever
- Every winning ad last month featured a bright yellow t-shirt → let’s hypothesize that high-contrast clothing improves thumb-stopping power and run variants to test it
That’s how you turn observation into structured experimentation.
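To make the validation step concrete, here’s a minimal sketch of one way to check whether, say, the funny-vs-serious hook gap is bigger than randomness alone would produce. It uses a standard two-proportion z-test; the numbers, and the idea that you’d pool clicks and installs per hook type, are illustrative assumptions rather than output from any real ad export:

```python
# Rough sketch: is the gap between two hook types bigger than noise?
# All numbers below are made up for illustration.
from statistics import NormalDist


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test on conversion rates: returns the z-score and
    two-sided p-value for the difference between group A and group B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Funny hooks: 180 installs from 12,000 clicks; serious: 140 from 12,500
z, p = two_proportion_z(180, 12_000, 140, 12_500)
print(f"z = {z:.2f}, p = {p:.3f}")  # small p -> the gap is unlikely to be pure noise
```

Even then, a small p-value only tells you the gap probably isn’t noise; it says nothing about why the gap exists.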
But be warned: there’s a slippery slope in leaning too heavily on the ‘because’. Maybe a yellow t-shirt works because it’s high-contrast, sure. But maybe it was another reason, one of a thousand possible reasons, or pure coincidence.
Unless you’ve seen something validated repeatedly across multiple tests, treat your ‘because’ as speculation, not gospel.
I recommend writing down your assumptions each time, and even gathering multiple interpretations from your team. Different people will be inclined to favor different explanations, and that diversity of thought can spark stronger hypotheses.
Document, hypothesize, test. But keep your conclusions humble until the data proves them out.
Ad creative retros in practice
Not all retros are created equal. I recommend teams have two types of retros.
Short-term retros (at the end of each test cycle) stay focused on the big levers: concepts, hooks, creators, formats, and angles. These are the variables you can act on immediately: get new briefs written, spin up new iterations, and roll into the next testing cycle. Keep it tight, keep it actionable.
Larger retros (monthly, quarterly, even biannual) are deep-dives that give you the chance to zoom out across dozens or even hundreds of ads — revealing emerging patterns that are invisible in the short term.
Maybe all your sustained winners from the past three months shared the same storyboard structure. Maybe every underperformer came from the same creator. Maybe funny hooks consistently lag behind serious ones. These aren’t one-off quirks; they’re validated trends.
The bigger the dataset, the stronger the conclusion. With more data, your insights carry more weight and are less likely to be the result of randomness or a fleeting trend. This is where you can form more solid, evidence-backed opinions about why something works, and bake those into strategy with confidence.
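In practice, much of this deep-dive can come from a simple roll-up over an ad-level export. Here’s a rough sketch, assuming a CSV with one row per ad and illustrative column names (ad_name, hook_type, creator, format, spend, installs) rather than any real Meta or TikTok export schema:

```python
# Rough sketch of a quarterly retro roll-up over an ad-level export.
# File name and column names are assumptions for illustration.
import pandas as pd

ads = pd.read_csv("q3_ads.csv")  # one row per ad


def rollup(df: pd.DataFrame, lever: str) -> pd.DataFrame:
    """Aggregate spend and installs by a single lever and compute cost per install."""
    grouped = df.groupby(lever).agg(
        ads=("ad_name", "count"),
        spend=("spend", "sum"),
        installs=("installs", "sum"),
    )
    grouped["cpi"] = grouped["spend"] / grouped["installs"]
    return grouped.sort_values("cpi")


for lever in ["hook_type", "creator", "format"]:
    print(rollup(ads, lever), "\n")
```

Sorting each roll-up by cost per install makes the ‘every underperformer came from the same creator’ kind of pattern jump out quickly.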
Short-term retros keep your testing cycle sharp. Larger retros help you identify structural truths about your creative performance and guide long-term strategy. You need both.
Are you wasting time analyzing creative tests?
The short answer is no. The slightly longer answer is: not if you strike the right balance.
Analyze too little, and you’re just throwing ideas at the wall. Analyze too much, and you risk drowning in data that doesn’t matter.
Instead, aim for insight-driven efficiency:
- Only consider what’s actionable based on your available resources
- Prioritize variables that are most likely to drive big swings
- Generate hypotheses and test them, but don’t over-trust your ‘because’
By anchoring your process like this, you’ll build a creative loop that produces winners consistently, without burning your team out or wasting weeks chasing the wrong variables.

