The creative volume trap in Meta ads

Why more Meta ads won't win you better results (and what to do instead)

Nathan Hudson

Summary

Excessive creative testing on Meta ads can reduce effectiveness by causing decreased creative diversity, strategist burnout, weaker experimentation, and account complexity. Sustainable growth comes from focusing on finding net-new winners, tracking win rate and cost-per-winner, prioritizing creative diversity, and fostering creativity over arbitrary ad volume.

Fair warning, this is perhaps slightly controversial for creative volume advocates: I'm about to go to war with creative volume on Meta ads! The irony is that I run an agency producing an ungodly number of ad creatives for our clients.

Just over six months back I spoke to an app founder who pushes 500 new ad creatives on Meta every single day. That’s ~15,000 ads tested per month!

Since that day, I’ve spoken to dozens upon dozens of app founders, UA Managers and Heads of Performance who have all taken the same stance: “In order to take our Meta account to the next level, we need to test more ad creatives”.

But I disagree. More volume isn’t always the answer.

Volume is overrated

In short, a lot of teams are putting creative volume above everything else when it comes to Meta — at times setting a metric like # of ads tested per month as the primary measure of input. But this is a slippery slope. Not only are there some harmful, unintended consequences to be aware of, but strategically this can position the entire team to sprint off in the wrong direction.

The goal of creative testing is to find new winners. It’s not about hitting an arbitrary number of creatives tested. We want to find ad creatives that enable us to scale spend, improve performance metrics and unlock new audiences in our ad accounts. Pumping out as many creatives as humanly possible isn’t the best way to go about that.

Now I know what you’re thinking: 

“But Nathan! The more creatives we test, the higher likelihood we’ll find new winners. And the faster we test, the faster we’ll find new winners.”

Hmm.

I get where that line of thinking comes from, and in theory… I agree. If all other variables remain constant this would be true 100% of the time. But in practice, these variables hardly ever remain constant, and when they do, there’s a ceiling to hit and negative returns to follow. Like so ⤵️

The volume game is actually a trap

Still with me? Okay, let's dig into why more creative volume doesn't equal better results.

Decreased creative diversity

When sheer volume becomes the headline KPI, the quality of creative tests tends to drop. Every creative team naturally bends toward churning out ever-smaller tweaks. Five-pixel colour shifts, copy changes that barely register, trivial format flips — just to hit targets. Having to hit a high number of creatives makes you ration creativity.

At that point, you’re not testing hypotheses or uncovering genuinely fresh insights; you’re playing a numbers game that makes big swings a thing of the past, scatters your learnings across a flood of low-impact variants, and ultimately erodes your chances of finding new winners.

Quick case study

We recently onboarded a new client at Perceptycs who ran into this exact problem with a previous creative agency: the agency was commissioned to deliver a certain number of creatives each month. At first, things were great. New concepts, some nice iterations and a healthy win rate. But over time, the win rate started to decrease and the new concepts weren't taking off like they used to. Why?

The creative agency started delivering more and more iterations of historical winners, and fewer new concepts. They started playing it safe. 

At first, performance improved. Happy days. The iterations extended the life of winning concepts, the win rate technically went up. Things looked healthy again. Until they didn’t. Inevitably, the concepts fatigued and no amount of iterating could bolster performance. The client was forced to scale back — now we’re helping them rebuild the right way.

Honestly, it’s pretty easy to avoid something like that happening:

  • Cap the number of variants
  • Add a quota for iterations of winners
  • Ensure a high percentage of testing budget is pushed to new concepts

Bear in mind: if volume is still the focus, these measures will just open the door to even bigger problems…

Creative burnout

Creative teams, whether in-house or agency side, don’t resort to banking on iterations because they’re lazy. At least not in most cases (I hope!). Often it’s because of creative burnout. Over time, not only does it become harder to come up with fresh creative concepts, angles, and formats, but teams have to do so at an increasing rate. 

Sooner or later, win rate will start to drop, performance will get shaky, and that’s when the wheels come off. Finding genuine new winners becomes like drawing blood from a stone. Then:

  • Teams get demotivated
  • Downward pressure increases as performance drops
  • Everyone is back to playing it safe
  • Everyone is burnt out

Quotas or no quotas, when metrics are in the red month after month, most folk will take iterations over new concepts if it means stronger performance.

Poorer experimentation rigour

Perhaps one of the most harmful side effects of a volume-first approach to creative testing is the collapse of structured experimentation methodology.

By this I mean:

  1. Hypothesis development takes a back seat: "There's no time to waste" becomes "How many corners can we cut and still push out enough ads?" Teams rush straight into creative production without first articulating clear, testable hypotheses, and end up tinkering, not learning.
  2. Impact vs. effort prioritisation starts to erode: It would be a flat out lie to say that high effort always equates to high impact. Often we see simple, ugly creatives that took minutes to produce outperform Spielberg-esque creatives. But when that’s the case, there’s now a performance justification for quick and easy. All of a sudden, we prioritise based on production speed as opposed to likelihood-to-succeed. Long-term, this just doesn’t work. We need creative diversity, which means a mix of high-production and low-production creatives.
  3. Corners are cut when it comes to post-test analysis: When you’re staring at 200+ ad creatives, each with 20+ data points, and you have another 200 briefs to create this week… Trust me, you’re not feeling great about the task ahead! And that means more corners are cut. Placement analysis? Maybe next time. All of a sudden, you’re missing out on key insights, ignoring crucial learnings, and creative testing has become a matter of throwing stuff at the wall and seeing what sticks.
  4. Experiment documentation gets overlooked: Let's not forget the impact on your processes and documentation. You can forget keeping logs or writing up experiment docs. When there's barely capacity for the testing itself, keeping documentation updated feels pointless. "No one even looks at those anyway."

Maybe these things seem small. But when you’re scaling an account from five to six to seven figures in ad spend, you need some degree of structure and systematic process to consistently see success.

Now. Am I saying that it’s impossible to run a high volume of creative tests and maintain a rigorous approach to experimentation?

Of course not! But it’s a lot harder than if you adjust your volume. 

Ad account chaos

This is where the fun begins!

Have you ever pushed 100+ creatives live in a single day?

Scratch that. Have you ever tried structuring 300+ creatives per week, consistently, across different formats, concepts, angles, creators and languages for an iOS 14+ app, where you're limited to 18 campaigns with five ad sets each? Well?

Trust me. It’s uncomfortably frustrating.

Granted, if you're testing on Android first or leaning into web2app, these limitations aren't an issue. But as Uncle Ben says, with great volume comes great structural complexity (or something like that).

Sure, you could just throw 50 creatives into an Advantage+ campaign and let the winners rise to the top. But you're telling me that I overcame creative burnout, put together hundreds of briefs, and forced myself to follow a rigorous experimentation process, only for the Meta gods to decide that 90% of ads shouldn't get any spend?

Uh uh! Nope.

Assuming there is a hypothesis behind your creative or a reason you made that ad, you want to see it tested. When a creative gets spend and then fails, we should dig into why. We can look at on-platform metrics, dive into breakdowns, pay attention to placements etc.

But when the creative doesn't get spend and we just say, "Oh, Meta didn't push this creative because it's not a winner, and usually they're right", we now have 280+ losing ads and no indication of why they didn't perform. Soon I'll have thousands of losing ads, multiple failed concepts or formats, and zero data.

It’s almost as if I should have just tested fewer ads…

What should you focus on instead?

Okay Nathan, I hear you say, what should I be doing? 

I’m glad you asked!

1. Set the right North Star

I’ll say it again: the point of creative testing is to find new winners! So that’s what we should be tracking:

  • # of winning creatives in a given period
  • Win rate (winning creatives ÷ total creatives tested) in the same period

If you can increase the absolute number of winners over time without your win rate collapsing, you’re doing high-volume testing the right way. I recommend plotting win rate against volume week-over-week (or month-over-month): when win rate starts to dip as volume climbs, you may be pushing things too far.

The goal is to generate net-new winners with maximum efficiency.

A key efficiency metric I like to track is Cost-per-winner (CPW):

(Total testing spend + total production costs) ÷ # winning creatives

This shows exactly how much you’re paying, on average, to uncover each new winner. If CPW drifts upward, you’re spending more to find less.
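
To make those two metrics concrete, here's a minimal sketch in Python of how you might track win rate and cost-per-winner week over week and flag the moment volume climbs while the metrics deteriorate. The TestingPeriod structure, field names and numbers are all made-up assumptions for illustration, not a prescribed tool.

```python
# Minimal sketch of the North Star metrics described above.
# All numbers and field names here are hypothetical, purely for illustration.

from dataclasses import dataclass

@dataclass
class TestingPeriod:
    label: str              # e.g. "Week 1"
    creatives_tested: int   # total creatives launched in the period
    winners: int            # net-new winning creatives found
    testing_spend: float    # media spend on creative tests
    production_cost: float  # cost to produce the creatives

    @property
    def win_rate(self) -> float:
        """Winning creatives ÷ total creatives tested."""
        return self.winners / self.creatives_tested if self.creatives_tested else 0.0

    @property
    def cost_per_winner(self) -> float:
        """(Total testing spend + total production costs) ÷ # winning creatives."""
        total_cost = self.testing_spend + self.production_cost
        return total_cost / self.winners if self.winners else float("inf")

# Hypothetical week-over-week data: volume climbs, win rate starts to dip.
weeks = [
    TestingPeriod("Week 1", creatives_tested=40, winners=6, testing_spend=8_000, production_cost=4_000),
    TestingPeriod("Week 2", creatives_tested=80, winners=9, testing_spend=14_000, production_cost=7_000),
    TestingPeriod("Week 3", creatives_tested=160, winners=10, testing_spend=24_000, production_cost=12_000),
]

for prev, curr in zip(weeks, weeks[1:]):
    volume_up = curr.creatives_tested > prev.creatives_tested
    win_rate_down = curr.win_rate < prev.win_rate
    cpw_up = curr.cost_per_winner > prev.cost_per_winner
    flag = "dial back volume / revisit hypotheses" if volume_up and (win_rate_down or cpw_up) else "OK"
    print(f"{curr.label}: win rate {curr.win_rate:.0%}, CPW {curr.cost_per_winner:,.0f} -> {flag}")
```

Run against those made-up numbers, Weeks 2 and 3 both get flagged: volume doubles each week while win rate slides and cost-per-winner climbs, exactly the pattern that should trigger the three steps below.
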

All of these metrics share the same North Star: more real winners, less wasted spend. 

If you see cost-per-winner climbing or win rate falling, it’s a signal to:

  1. Dial back volume
  2. Revisit hypotheses and creative diversity
  3. Double-down on quality guardrails

2. Focus on diversity as much as volume

Not only does creative diversity prevent ad fatigue, but it unlocks new growth in your ad account too.

By creative diversity, I mean:

  • Testing statics, videos and carousels
  • Mixing it up with high production value and low production value creatives
  • Pushing out different creative concepts and trends
  • Working with different creators
  • Trying out different editing styles
  • Focusing on different angles and value propositions
  • Experimenting with different AI creative formats
  • Balancing new concepts with iterations
  • Crafting scripts and briefs around different JTBD (jobs-to-be-done)

I like to think of it like this:  

For every JTBD we’ve identified, we want multiple winners. For every placement we advertise in, we want multiple winners. For every demographic we deem as relevant, we want multiple winners.

The only way to do that is to focus on creative diversity and ensure we document our hypotheses and learnings to make these connections.

This also forces you to produce new concepts as opposed to iterating on historical winners. 

Tip: If you really want to be cautious, try adding quotas for iterations of historical winners and direct at least 60% of your creative testing budget towards new concepts.

3. Reward creativity and celebrate big swings (not just wins)

Earlier this year, Deeksha, our Growth Lead, came up with a killer creative concept. It was funny, engaging and all round a great ad. But the first variant didn’t do that well at all. In fact, it flopped. But the ad still made our creative hall of fame and got celebrated on Slack. 

Now, on one hand, who cares? It flopped. After all, we want winners, right?

But the concept was super smart, and it’s that type of thinking and creativity that enables us to find new winners. In fact, it was her creativity that eventually turned that concept into a winning creative.

Treasure that creativity and celebrate big swings. No, they won’t all pay off. But if we don’t foster a culture where creativity can breathe, you just end up copying competitors off Ad Library 👀. That would make us a ‘not so Creative Agency’. And the same goes for you and your team!  

4. Don't burn out your creative strategists

Finally, make an active effort not to burn out your creative strategists! In a world where creative is the biggest lever you can pull, creative strategists are your engine.

If you’re expecting one person or even a few individuals to come up with dozens and dozens of completely new concepts each month on their own, at an increasing rate, to greater success… You need to rethink your expectations.

More and more frequently we've started supporting teams who already have strong in-house creative teams, but are looking to ensure diversity and increase volume without running into creative burnout. Bringing in an additional creative agency partner to buff up your creative efforts can let you reap all the rewards of high-volume testing with very few of the drawbacks. (If that creative partner prioritises finding new winners and isn't just playing a volume game, of course 😉.)

Have I convinced you?

High-volume creative testing can work.

But increasing volume isn't always the answer to your performance plateau. There are a million and one ways to tank your Meta ads performance by focusing too much on creative volume without the proper guardrails.

Instead, build a systematic creative testing strategy around a single North Star: new winners, delivered efficiently. Track your win rate alongside volume, keep an eye on cost-per-winner, and double down on creative diversity with quotas and hypothesised JTBD.

By swapping volume-chasing for insight-chasing, you’ll preserve your team’s creativity, maintain rigour in your experimentation and unlock real, sustainable lift in your Meta accounts. 

