Webinar: How to optimize your ad campaigns with signal engineering
[00:00:00] David Barnard: Hello, welcome to this session on signal engineering. A term I had never heard before until Thomas said, hey, David, I’ve got this really great idea for a webinar, and a topic that I think a lot more people should learn about. So that’s what we’re gonna do today. I did wanna let Thomas and Marcus both introduce yourselves.
Why don’t we start with Marcus?
[00:00:22] Marcus Burke: Yeah. Hi everyone. I’m Marcus, a Meta ads app growth consultant. I’ve been in the industry for 13 years, but doing consulting for the past three. Quite active on LinkedIn; maybe some of you have seen my posts. If not, go and follow me.
[00:00:40] Thomas Petit: Yeah. So I’m Thomas. I’m also a growth consultant. I follow everything Marcus is saying about Meta ads. I focus mainly on subscription businesses, on a broad range of topics. A lot of them on UA, but also a lot on broader growth topics: monetization, retention, data management, and so on.
And here, I think we’re a bit at the intersection of UA and data management. So hopefully an exciting one, even if the topic is a little bit niche. Yeah, very happy to share this talk with both Marcus and David. I think it’s gonna be great.
[00:01:09] David Barnard: Yeah, and I’m David Barnard, Growth Advocate at RevenueCat.
I host the Sub Club podcast, where I’ve had both Marcus and Thomas on. It’s probably about time to get you back on, Marcus. And Thomas, our annual summer session will be coming up soon. And of course, RevenueCat is the sponsor slash organizer slash benefactor of all of this stuff.
We’re a subscription app platform with a mission to help developers make more money. This is an area, and you’ll love this, Thomas and Marcus: our head of product sent a message to all the product managers inside RevenueCat saying, you should all go listen to this webinar. We’re a big, growing team now and wanna do more stuff in this direction.
Hopefully you’ll see aspects of this tie into the product. We actually have been revamping our Meta integration to make some of this easier, and we’ll keep working on stuff like this over time. So that’s RevenueCat. A little bit of housekeeping: if you have a question during the panel, Marcus and Thomas are probably better at this than me, but the chat will probably get pretty busy.
There will be RevenueCat folks in the chat, and Marcus and Thomas will probably jump in here and there. But if you have a very specific question, we will have a Q&A at the end, and I’ll maybe try to work in a few questions here and there if I can. So put it in the questions tab, and vote up the questions that you want answered, and we’ll answer those either at the end or I’ll find a way to intersperse ’em.
This will be recorded. We livestream now to YouTube, so it’ll be on YouTube immediately after this on the RevenueCat channel. So if you do need to bounce: although, I find this, and I say this now, a lot of times after I say it’s gonna be recorded, you’re not gonna come back and watch it.
So if you’re here, stick around and watch it now. You’re not gonna watch it later, even with the best intentions. All right, let’s get into it. Thomas, you are the one who proposed this, so tell me, what is signal engineering?
[00:03:04] Thomas Petit: I should have come up with a perfect sentence to reply to that, because I knew it was coming, but I’ll make it up on the go.
What we’re talking about here is thinking about and designing the data that you’re sharing with ad networks to optimize campaigns towards. So the signal is what you tell Facebook, Google, TikTok: hey, I want more of this. And I bring up the topic first because I think it has layers in it and it’s quite interesting, but also because I’m seeing most people take a very basic approach to it:
hey, here are all my subscriptions, bring me more of that. And I think there are ways to get smarter here and get better performance. And also because I had a couple of interesting discussions with Marcus about this, with some complementary experience. So I thought it was a great chat to have.
[00:03:48] David Barnard: And then Marcus, in pitching the webinar on LinkedIn, you had a great post offering your own spin on that.
So what can you add to, or clarify from, what Thomas said?
[00:04:00] Marcus Burke: Not too much to add here, I’d say. The designing and thinking about what you’re sharing is the main part. You wanna do this intentionally, and it’s not set in stone that one app needs to optimize for one event just because everyone else is doing that.
So as you will see, there’s quite some thought that can go into this, and quite a few different strategies. And yeah, that’s what we’re gonna talk about today.
[00:04:25] David Barnard: Cool. Let’s start with the basics of campaign optimization then. So Thomas, why don’t you run us through those? Like Marcus was saying, when you just default set up a Meta campaign, it’s, oh yeah, just optimize to clicks.
That’s great. Yeah.
[00:04:40] Thomas Petit: I think one of the reasons this topic has become quite relevant is that over time, the bigger platforms have become more and more sophisticated. They put AI into it, and basically it’s very hands-off. Google would be the maximum of it, but Meta is not very far behind, where you basically arrive and say:
I have this amount of money, those are my creative assets, bring me stuff. And as time goes on, the fewer levers we have on those campaigns. Five or 10 years ago, we’d be trading on small audiences and refining targeting in many sophisticated ways. A lot of this has been automated in most places, though not all.
For example, Apple Search Ads doesn’t use a lot of signals; it’s still mostly bidding on keywords and such. But over time a lot of it has been automated, and Google and Meta are very far along in it. And you may read here and there that creative is the last lever we have left, that basically marketers only have the creative assets and all the rest is automated.
It’s hands-off. And that’s partly true, but partly not true. The way I see it, we’ve got three main levers. Obviously creative is a huge one, and there’s a ton of things to do there; lots to talk about on format, content, and more, but not for today. One is the budget, and potentially bidding.
Bidding is a bit more complicated than that. But obviously you can still tell Google whether you have 500 or 5 million to spend. And the last one is actually the signal: the data that you’re sharing, where you’re telling the platform, I would like to generate more of this. More installs, or more clicks, or more trials, or more revenue, or more whatnot.
And I think there’s been very little conversation here, even though it is one of the last levers that we have. There are a lot of conversations on creative; there is very little conversation on signals. So I thought it would be a good one to bring.
[00:06:28] David Barnard: Anything to add there, Marcus?
[00:06:30] Marcus Burke: Yeah, what’s up with Apple, that they don’t optimize for any down funnel signal? I never got that. They’re focusing so much on their ads business, and you’re still putting in your CPC targets. It’s just weird. Other than that, just to close off the creative topic, I also like to think of creative as a signal. In the end, Meta and any other platform,
in the early days when you upload a new campaign, ad set, or creative, will firstly use on-platform signal to target and really figure out whom to show this to. So I think it’s also important to think of that when you’re doing creative testing: this is signal, and if you try five different hooks, you’re doing that so that you generate signal from a relevant audience for targeting.
It’s not just trying random stuff you found in some Airtable vault with the hundred coolest hooks to use on Facebook. But let’s rather talk about the down funnel signal part, I’d say.
[00:07:27] Thomas Petit: Yeah, I agree, it’s a good point to add. And maybe we can reformulate this for people who are not very much in the weeds of how ad networks and advertising platforms work.
Very often I get questions from people outside this industry; they think as marketers we’re running targeting like, oh, are you using interests? Are you targeting women between 20 and 40 in Colorado who like fashion? I’m like, no, I’m never doing that. And I think the definition of what we’re talking about is quite interesting here, which is: as marketers, the main levers we have to tell the platform who to target, meaning which users to show ads to, are two things.
There’s the content of the ads, where the platform is smart enough to figure out who’s gonna be interested based on who interacts with it. And there’s the signal, which is: what do I want as a business goal? And they make a mix of it. Between the creative and what I call the signal, the event or the value we’re sharing, the platform then builds the targeting from these two elements.
So the way we marketers do targeting is by selecting creatives and data signals, and then the platform does all the rest in very sophisticated ways, which can also go sideways. Yeah.
[00:08:45] David Barnard: I don’t know if either of you saw tweets about this, or on Reddit, but in an interview with Ben Thompson, Mark Zuckerberg actually shared his vision of the future of advertising:
you essentially just tell Meta, here’s my product, here’s my goal, here’s my budget, and then it will eventually just, magic AI, do everything. And maybe in three or five years we’ll be there. It feels like we’re starting to get there with AI-generated UGC-style things.
But then to your point, Thomas, even if we get to that point, you still wanna be checking the math, making sure it’s not degrading your brand. There should hopefully still be some levers of control that smart marketers will be paying attention to, not just letting the AI go completely amok.
And I know it’s been a progression for the industry. Like 10 years ago you were super deep in it, and then especially when Google switched to UAC, Thomas, you complained a lot. And fair enough: it’s hard to hand it off to the algorithm, and the algorithm doesn’t always make good choices.
The algorithm optimizes for Google as much as it optimizes for you. It will put your ad in bad placements, in bad inventory, because as long as it meets your goals, it’s fine for them, and more profitable for them. So anyways, yeah, I think this will continue to evolve. But at the stage we’re at now, you’re right: the two most important things you control are creative,
and then what signal you pass back.
[00:10:17] Thomas Petit: So, let’s go through...
[00:10:19] David Barnard: ...the next one?
[00:10:20] Thomas Petit: No, it’s interesting. Before you switch: I’m not anti-AI, I just want to drive it properly. What I want with UAC is the ability to prompt it, like, don’t do that with my brand. But I still want the AI to do that job, basically, because they’re doing it much better than us, given the scale and the complexity.
Mark had a very good point, though, in what you quoted. It was a little bit provocative, but we can see where it’s going, with a lot more automation on the creative. And you just said it: give me your product, your budget, and your goals, and I’ll do everything. And today is all about this goal.
So telling Facebook and the rest what your right goal is becomes a very critical point, even as the creative gets more automated.
[00:11:01] Thomas Petit: So I think you made my point for making this webinar. So thanks, Z.
[00:11:05] David Barnard: Yeah, good point. Good point. All right, so let’s talk through the sources of signal. We’ve talked about sending the signal back to the ad networks, but what signal? Where does that signal come from?
And Thomas, I’ll let you take it away.
[00:11:17] Thomas Petit: Yeah. A very interesting part of the conversation about the signal is what you send: what is this conversion? But actually, regardless of what it is, whether it’s a trial, a value, this or that, how you send it also matters a lot.
Different platforms give you different options to share this data signal, and some don’t offer so many options, so it depends a little bit. Usually you can send it through the platform’s SDK: Meta has the Meta SDK, for Google that’s Firebase, TikTok has a TikTok SDK.
Not every platform has their own SDK, so for those that don’t, you’d have to go to the other methods. But in many cases it’s a really good option: if you don’t have an MMP, you can just work with the Meta and Google SDKs.
It’s a fantastic option, one, because it’s free, so that’s one point for them. We’ll get into benefits later, but yeah, that option is free; you just have to implement it. Many advertisers would send the signal not through those SDKs, but through an MMP, a mobile measurement partner. That’s AppsFlyer, Adjust, Singular, Branch, and others. There are pros and cons in doing this. One of the pros is that you just have to define your setup once and they share it with all the platforms. So that’s a good one. The other is that some platforms don’t have an SDK, so if, for example, you want to advertise with AppLovin, you’re gonna have to go through that other option.
You can also send the signal through Apple’s SKAdNetwork. That’s more of a complement than a replacement for the rest; it comes in parallel. And then some people would have a direct API, where you don’t need the SDK and you don’t need an MMP. That’s a bit of an edge case. I don’t think it’s gonna be the main conversation of the day.
[00:13:00] David Barnard: Can you give examples of those?
[00:13:03] Thomas Petit: Meta has one...
[00:13:04] David Barnard: Like AppLovin, maybe, as one example?
[00:13:07] Thomas Petit: To my knowledge, you can’t advertise on AppLovin without an MMP, so I don’t think that’s the case, but maybe I missed it. On Facebook, you can send the data through an API without having the SDK or an MMP, so that would be an option. For me,
that’s the most common one that I’ve seen. But there might be others. It is a bit of an edge case; this one is less common.
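That Facebook direct path is the Conversions API: a plain HTTPS POST to the Graph API with no SDK and no MMP in between. Below is a minimal sketch in Swift of what such a call can look like; the dataset ID, access token, API version, and exact payload fields are assumptions to verify against Meta’s current Conversions API documentation, and a real app event also needs user and app matching fields (hashed identifiers and so on) that are omitted here.
```swift
import Foundation

// Minimal sketch: one app event sent server-to-server via Meta's
// Conversions API, with no Meta SDK and no MMP involved.
struct CAPIEvent: Codable {
    let event_name: String    // e.g. "StartTrial"
    let event_time: Int       // Unix timestamp in seconds
    let event_id: String      // stable per-conversion ID, used for deduplication
    let action_source: String // "app" for app-originated events
}

struct CAPIPayload: Codable {
    let data: [CAPIEvent]
}

func sendTrialEvent(datasetID: String, accessToken: String) {
    let event = CAPIEvent(
        event_name: "StartTrial",
        event_time: Int(Date().timeIntervalSince1970),
        event_id: UUID().uuidString,
        action_source: "app"
    )
    // Endpoint shape per Meta's Graph API conventions; verify the current version.
    let url = URL(string: "https://graph.facebook.com/v19.0/\(datasetID)/events?access_token=\(accessToken)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(CAPIPayload(data: [event]))

    URLSession.shared.dataTask(with: request) { _, _, error in
        // A production sender would inspect the response and retry on failure.
        if let error = error { print("CAPI send failed: \(error)") }
    }.resume()
}
```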
[00:13:30] David Barnard: Gotcha. And then the last point on here that you made in our notes was that mixing sources can be problematic. What did you mean by that?
[00:13:38] Thomas Petit: Ha, let’s talk about this.
[00:13:40] Thomas Petit: Yeah. So I work with a bunch of other people who see accounts, and many people would have the SDK and an MMP both sending conversions at the same time. There are ways to make this work great, and there is deduplication and so on. But very often, especially early stage, when you start mixing sources and it’s not the same thing arriving under the same name, there’s filtering happening, and the deduplication is not necessarily perfect.
Different networks are good or bad with deduplication as well. On Google, you have to set it up properly: what is a primary conversion, what is a secondary conversion. There are ways to send signals from different sources and have it set up well, but what I see very often are people having problems because they’re sending from different sources and things get mixed up.
It gets messed up when the platforms receive different things that they don’t necessarily interpret properly. SKAdNetwork is an exception here: because it comes in parallel and doesn’t blend with the rest, it’s not so much of a problem. So typically on Facebook, you can have either the MMP or the SDK sending data and have SKAdNetwork in parallel.
Usually that’s not a problem. Using the SDK and the MMP at the same time requires a little bit more knowledge about how to do it. And I wanted to mention it in the notes because it’s so common to see this problem before we start working together:
on Facebook, you see this data signal in Events Manager, and you open Events Manager and there are red flags everywhere, events that are named bizarrely. This is quite common. So I think the practical advice is: if you’re not super sophisticated, pick one of the two and get started from there.
You’ll build up later maybe, but mixing them is probably gonna create more havoc than good.
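A quick illustration of that deduplication point: if two sources report the same conversion, they have to agree on both the event name and a unique event ID, or the platform counts it twice. A sketch of the idea, where `logSDKEvent` and `sendServerEvent` are hypothetical stand-ins for your SDK call and your server-side (e.g. Conversions API) call:
```swift
import Foundation

// The same trial reported from two sources must carry the SAME name and
// the SAME unique ID so the platform can drop the duplicate copy.
func reportTrial(transactionID: String) {
    // One stable ID per conversion (not per send attempt). A store
    // transaction ID works well because client and server both see it.
    let eventID = "trial-\(transactionID)"

    logSDKEvent(name: "StartTrial", eventID: eventID)     // device path
    sendServerEvent(name: "StartTrial", eventID: eventID) // server path
    // If the names or IDs differ ("StartTrial" vs "trial_started"),
    // the platform sees two distinct conversions and your counts inflate.
}

// Hypothetical wrappers around whichever SDK / API you actually use.
func logSDKEvent(name: String, eventID: String) { /* e.g. Meta SDK call */ }
func sendServerEvent(name: String, eventID: String) { /* e.g. CAPI POST */ }
```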
[00:15:27] David Barnard: Marcus, you seem to have thoughts there. And let me interject here as well with our first question, ’cause I think you’ll enjoy speaking to this, Marcus, while you also speak to everything else Thomas said.
The question is: which do you prefer for attribution, AEM via an MMP like AppsFlyer, or using SKAN directly? AEM would have the shorter reporting delay. So yeah, why don’t you mix in that question while you also give your thoughts on mixing signals.
[00:15:53] Marcus Burke: Yeah. Let’s start with that question from a participant.
Generally, I would advise starting with AEM, just because it doesn’t have all the drawbacks of SKAN, meaning the delay in events arriving and then having to pass privacy thresholds. Especially as you’re starting out, that can be a real pain if you’re at small scale: figuring out what campaign performance actually looks like, and getting
enough signals arriving for your campaigns in order to then optimize targeting properly. So my default is AEM, while I do see value in trying SKAN, especially now. I haven’t actually tried the new version yet, but it’s supposedly getting better, and I’ve seen accounts where SKAN, due to how that signal arrives differently,
as an aggregate on campaign level, also targets completely differently. So you could use it as a way to broaden your targeting into different audiences. Of course, it comes with a whole lot of reporting headaches, so in the end it would have to be really worth it for you to open up that bag.
You specifically said AEM via an MMP here. I’d say you might just as well do it through the Meta SDK, especially early on. It comes for free; you don’t need to implement an MMP right away. With clients that have both, I tend to also test which one seems to be attributing more efficiently, because I’ve definitely seen differences there.
In the end, I’m sure that just comes down to how the code is written for each of these, and one of them is just better at catching some of the identifiers they need for fingerprinting. But yeah, I found that sometimes the Meta SDK works better, sometimes the MMP works better. So there is a lot to figure out there.
Generally, what I found is you want to have as few third parties involved as possible. I’m sure people are working with additional tools that they need to run their subscriptions; the more logic is built in, and the more points where this could break, the clunkier it’s gonna get.
And that just means fewer of the events that are actually driven by Meta are gonna be attributed, which is usually the case: you will never see a one-to-one match between what Meta is actually doing and what they’re reporting. It’s usually quite a bit below, which also comes down to view-through not being reported by AEM.
But that’s something you need to figure out in the early days: when you turn on another campaign, how many events are attributed to the campaign versus how many are actually driven, which you see in your backend, so that you can layer on a multiplier and know the true performance of a campaign.
When it comes to having multiple sources: I share all the headaches, and I also hate messy Events Manager setups with 20 data sets, an MMP, an SDK, and things coming in from all ends. But I do definitely see value in using different sources for diversification. These days, I think it’s important to have at least one campaign optimizing for web events just as much as for app events.
So in the end, if you’re running a web quiz, then you will be sending that data through CAPI, and that will allow you to run a sales campaign instead of an app promotion campaign. And things happen in the algorithm. I just recently had a case again where placement targeting got messed up.
I had 10, 15 apps reporting the same thing, and I’m sure it happened to more: prices spiked, and it was just an issue on Meta’s end with the app promotion product. So if you have your budget across two campaign types, you’re more likely not to see it on both, and you can easily de-risk and shift budgets.
If all your eggs are in one basket, that can sometimes make for a chaotic week, as I’ve experienced.
[00:19:42] David Barnard: Yeah, totally. All right, Thomas, let’s get into the mechanics of the actual optimization events. You get your standard optimization events, optimizing for impressions, clicks, installs, but each of those has its own benefits and drawbacks.
So talk us through the standard events, and then what you propose as a better option.
[00:20:06] Thomas Petit: So basically, the way I see it, the most advanced platforms, that would be Meta and Google, are really good at giving you what you ask for. Meaning, let’s say you want to optimize for clicks: they give you a ton of clicks for a very good price per click, but they’re gonna cherry-pick the clicks that are least likely to convert from a click to an install, from a click to a sale.
Why? Because you asked for a click. So they’re gonna give you the clicks that other people don’t buy, which are the ones that never convert. Why would they give you the high value clicks if all you asked for are just generic clicks? And I’m not completely against running higher funnel campaigns that optimize for impressions or views, because they might have a completely different goal, like bringing awareness to a new feature.
You’re not expecting sales to happen immediately, or you want to generate noise around a particular topic. So some people can have perfectly good logic for optimizing very high in the funnel. But when you are looking for direct return, for the revenue that’s gonna come from this, and you’re optimizing towards that, it’s most likely a very bad idea to optimize for impressions or clicks.
Because Facebook is gonna give you just what you ask for. And you can apply this logic a little bit lower in the funnel. For a very long time there wasn’t this complex optimization, and like we mentioned before, Apple is still at this stage. Let’s say you say, okay, I want installs:
they’re gonna give you installs that don’t convert, because the same logic holds at every step. If you ask for onboarding completes, they’re gonna give you onboarding completes that never start a trial. And if you ask for a trial, they’re gonna give you trials that are less likely to convert than if somebody else asks for, oh, what I want is a trial that is likely to convert, and also likely to complete my activation, and also likely to not refund, and also likely to renew, and also likely to, I don’t know, opt in to push notifications and invite their friends and rate my app.
That would be beautiful, but there’s always a trade-off; it would be too easy otherwise. If I optimized for the perfect event, for the eight users of my app that behave exactly in my interest, there’s never gonna be enough signal for the platform to optimize for. And this is the main trade-off here: volume versus quality.
It’s extremely likely that even if you’re a small advertiser, you’ve got enough volume to optimize beyond an install. There might also be reasons to run install-optimized campaigns, and maybe Marcus can quote one or two, but in most cases, even with a small budget and not a lot of volume, you can find an event that is lower down the funnel than the install.
Even if you’re a very large advertiser, you might not be able to go to extreme depth in the funnel. So there is this trade-off of quantity versus quality: I want an event that correlates with my business goal, something that really aligns with what creates value for me.
Assuming the value here is revenue, you want something as close as possible to the money that will come in the future. And that’s the problem: it is in the future, so you are making bets. But you want enough of them that the platform is able to run the campaign properly. There’s no magical number, but you definitely want this event to happen around at least 10 times per day per campaign.
Some people say 50 per week, some people say a hundred per week; I think 10 per day is a good ballpark as a minimum. If you’re not sending at least those 10 per day, or five, or 20, the campaign is very likely to be unstable even if your event is perfect. So there’s no perfect event.
The perfect event depends on your user behavior, but it also depends on your capacity to send enough of those for the campaign to run properly. And that varies: you might have a lot of fragmentation in your account because you need to run locally, so you need a higher-volume event, or you might have such a huge budget that you can afford to say, oh, I’m gonna have one campaign running towards this event
and another campaign running towards that event. With that trade-off, I’m voluntarily saying there’s no single good solution, because the good solution depends on your ability to identify the correlation with your business goal while also maintaining an acceptable volume on the campaign. Even for the same advertiser, it’s happened to me that we moved the goal higher or lower in the funnel because we had less budget, or because we were spreading campaigns across, for many reasons.
So it’s not set in stone. There’s a lot of experimentation to run around it, and it’s not because you find one that works great that it’s gonna work great forever. And typically, as the company grows, you can get a little bit smarter and go a little bit deeper.
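That volume side of the trade-off is easy to sanity-check on paper before committing a campaign to an event: estimate how often the candidate event would fire at your budget. A back-of-envelope sketch with purely illustrative numbers:
```swift
// Does a candidate optimization event clear the ~10-per-day ballpark?
// All inputs below are illustrative assumptions, not benchmarks.
func eventsPerDay(dailyBudget: Double,
                  costPerInstall: Double,
                  installToEventRate: Double) -> Double {
    (dailyBudget / costPerInstall) * installToEventRate
}

let budget = 500.0, cpi = 5.0  // ~100 installs/day at these assumptions
let trials = eventsPerDay(dailyBudget: budget, costPerInstall: cpi,
                          installToEventRate: 0.15)   // 15/day: workable
let payers = eventsPerDay(dailyBudget: budget, costPerInstall: cpi,
                          installToEventRate: 0.015)  // 1.5/day: too sparse

print(trials, payers)
// At this budget, trial optimization clears the threshold, but optimizing
// directly on payments would starve the campaign of signal.
```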
[00:24:55] David Barnard: Yeah. And to add onto that, business goal can mean different things to different companies, and we’re primarily talking to subscription apps here.
But in the notes you also gave the example of a dating app, where a dating app relies on a lot of people who don’t pay to be the inventory for the people who do pay. So maybe if you’re a dating app, or a freemium app, your business outcome is not just to get a subscription, but to get more active users, because that then drives the business goal.
And there’s some other tangible...
[00:25:26] Thomas Petit: It sometimes can go even further: you can have a secondary intention. For example, I’m gonna keep going on your dating example. If you know that the high payers...
[00:25:36] David Barnard: It was your dating example, to be fair. Yeah, fair enough.
[00:25:38] Thomas Petit: If you know that your high payers are 45-plus men with high purchasing power, but they’re never gonna pay until they see a lot of profiles of hot chicks,
maybe you want to run a campaign that optimizes towards that other audience target, so that the payers have what they need. I had a very extreme example once in the gaming sector: an iOS app targeting wealthy countries, selling extremely expensive IAPs, weapons with which you can basically kill other players.
And what they were doing is bringing hordes of players in on inventory that cost very little, so that these high payers could have somebody to kill with the super big weapon. And obviously the two campaigns they were running were not optimizing towards the same thing, and in this case not even the same countries.
They had different countries as well, but obviously not the same goal. One campaign was like, oh, I need people who will just live in the game for 10 minutes so that the other one has somebody to kill, and the other campaign would optimize towards the purchase. So see, even within one company you can have multiple goals. In dating, in marketplaces, in gaming,
there are many examples where you’re not trying to drive just one kind of user to your app. There can be a variety of users.
[00:26:57] Marcus Burke: Yeah. And even if you’re not multiplayer, and your app is more of a classic subscription app, there can still be value in having multiple events to optimize for, to broaden whom you’re attracting.
Some of them might convert quicker and monetize quickly; others do it in the long run. So over time, as you learn which signals convert into which behavior, you can also make the decision of saying, hey, let’s optimize for both, because in the long run that’s gonna help us address a bigger total addressable market.
[00:27:30] David Barnard: Okay, let’s dig deeper into exactly how you qualify those folks. Especially since we’re mostly talking to an audience of subscription app developers here, and most subscription apps these days do have a free trial: how do you engineer the signal for trials that are likely to convert when, as you were saying, Thomas, there is a time component, and you want the signal fed back into the algorithm as quickly as possible?
You also want as much signal as possible. Waiting three days on a three-day free trial to get that conversion is often not practical. So how do you get the kind of signals that you need back into the platform: who actually is a good trial, who is a good user?
[00:28:15] Marcus Burke: Maybe one quick question on the previous topic first.
Thomas, install optimization for creative testing: yes or no?
[00:28:22] Thomas Petit: I used to do it. I don’t do it anymore, and for the same reason we just discussed: different events can attract different audiences. It’s obviously cheaper and you get the signal faster, so it’s tempting to do creative testing optimizing for installs.
If you can validate that this audience looks the same as the one you’re actually trying to attract, and not only in demographics, age, placement, gender, and device, but actually in behavior, then yeah, it would be a great option. But in practice, I’ve very often seen those install campaigns bring a completely different audience.
Sometimes that’s good for audience expansion, especially when I’m starting to see fatigue inside the core audience and we’re asking: what creative works for other people? And then we’ll figure out how to monetize them. But in most cases, I’d rather have my creative testing run on a very similar event, so the audiences are more alike.
I need the creative to work on the audience that works for me before I expand towards other audiences. So in most cases, I avoid running those these days.
[00:29:30] Marcus Burke: Same here. And yeah, as close to business value as you can. Because in the end, if you want all these AI applications to do everything for you, or rather, Meta and the other platforms are forcing you to let them, then you wanna train them on something that has value to you.
You don’t want to tell them, hey, I want clicks, and only I know what happens afterwards. So I think that’s usually the way to go when you start. Anything else is definitely a lot more sophisticated, and I know there are people that need to go to other events, because in the end, optimizing for a deep event also creates more of a niche targeting,
while if you go upper funnel, more people will be likely to take that action. Any Meta marketing pro will always tell you: build a consideration funnel and run upper funnel awareness to feed it. I think that’s just for big brands that can actually measure how that stuff is doing. When you’re early on, optimize for what’s actually gonna convert.
[00:30:30] Thomas Petit: So sometimes there is a little bit of experimentation, like when we want to see whether a completely new feature might attract a different audience, and then we figure out how this audience might be monetized. That can be an option. But in the field, 95% of the time, that’s not what people are working on.
This expansion happens later, at specific times, around a very specific release in most cases. The problem is usually the opposite: you’ve already found some kind of traction on an audience and you just want to scale it. You want it to monetize better, you want to acquire more traffic, you want to acquire cheaper traffic that still monetizes.
So the typical question is elsewhere; I just don’t want to discard it a hundred percent. And yeah, looking back at David’s question, which was pretty deep, a little bit complex: I thought, oh, I’m gonna push that to Marcus, but that didn’t work. So let’s go at it, and I’m gonna approach it a little bit differently so that we can circle back to it. Marcus already hinted at this: it depends very much on where you are.
We said the event needs to happen often enough. And then: do you actually see a major problem that is related to the signal? One could be, oh, I’m optimizing for trials, which is a very common case, one of the most common for subscriptions, because the trial happens so often and so early that it’s very convenient to optimize for all trials.
And I still have clients who optimize for this and have very good success. I’m not saying don’t do it. But let’s assume you’re optimizing for this very standard setup and you’re seeing very low trial-to-paid conversion
coming from it. Maybe it’s just because your app is terrible or you’re extremely expensive, but let’s say that just from Facebook, the trial-to-paid conversion is a problem.
Then maybe that’s the time to work on the data signal. If you’re optimizing for trials and you’re seeing great trial-to-paid conversion, don’t bother qualifying it. Just get along with it; that’s completely fine. So it also has to start from a problem. Oh, Thomas mentioned going for qualified trials,
let’s do that? No, you don’t need to do it. Maybe it’s a terrible idea, actually; it depends on the case. But if you’re seeing that the traffic you’re attracting from these campaigns is not as good quality as you would expect, or as the rest of your traffic, or it tanks at some point in time,
like it worked for a while and suddenly it doesn’t work anymore, and you identify that this step was the problem: any step that tanks after the optimization event is an opportunity to improve through this signal engineering, through changing the event a little bit. Because you’re gonna tell the platform: stop sending me all the trials,
start sending me the trials that matter to me. And I saw there was a question from Jacqueline; we’ll probably get to it. But I think part of your question was: how do I qualify the trial?
[00:33:13] David Barnard: Yeah. Oh, he’s cutting out. I think we lost Thomas.
[00:33:20] Marcus Burke: Then let me answer the question.
What I really like about Meta is that oftentimes you can already find answers to why your down funnel metrics don’t look as good in the data breakdowns. Oftentimes, what happens when you optimize for a trial is that Meta gives you very young traffic, because they love to start a trial:
it’s free when they start, but then they cancel quickly, and young traffic is cheap. So you often see very low cost per trial from young audiences. Meta pushes for them because they think, hey, this is great; they’re not looking at what comes after. And that’s what’s driving you towards younger users who don’t convert, which makes your product metrics look bad.
Of course, it doesn’t necessarily have to be that way; it might just be due to that audience targeting. So you have a few options here. You can just solve it through targeting in Meta: you could very well exclude younger audiences. I mostly do that, just targeting 25-plus. But you can also tackle this from a qualified trial perspective:
what events do I wanna send back to the platform so that it goes after higher quality users? For example, if you ask for age in onboarding, then you could very well say: if someone is aged 18 to 24, and I know they have poor trial conversion, I’m just not gonna send a trial event for 18-to-24-year-olds to Meta.
That’s one option. In Google App campaigns, for example, you can’t do that kind of age targeting, so I know Thomas is a big fan of qualifying his trials there by age group, which then allows you to optimize for that higher quality audience. And that way you can always think about what signal you already have in your onboarding and first session. As we said, usually you want your conversion event to happen within the first 24 hours.
So don’t try to optimize for a trial conversion or something great happening on day five; you still want it to be early. But there is gonna be signal already happening where you can see: hey, if someone took this action, the likelihood of them converting is 50% higher than for anyone else.
That’s the kind of thing you want to look into, and then figure out whether it happens often enough that you could optimize for it. It might be that these steps a user can take are already there, but you can also design your app to implement them. So if you’re working in the marketing team, don’t just take what’s there; talk to product and think about what that could look like, so that a user actually does trigger an event which then leads to higher value.
I haven’t tested much with it yet, but I really like how, in B2B sales, leads are usually qualified by asking them about the urgency and the severity of their problem. In any kind of sales funnel, there would be a question like: how long has this issue been going on?
Have you tried other solutions (looking into how well-informed and high-intent they are)? And then: how severe is it, how much is this impacting your daily life? I think that’s a way to think about this trial event engineering through onboarding too: what are the questions and events you can put into that first session that will likely tell you who the better audience is?
And then, if you’re actually seeing them convert better, start optimizing for them. Back to Thomas.
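A minimal sketch of the qualification idea Marcus describes: derive a separate "qualified trial" event from first-session signals, age plus B2B-style severity and intent answers, and only forward that one to the ad network. The thresholds, field names, and the `sendToAdNetwork` wrapper are all hypothetical; the real cutoffs should come from your own conversion data.
```swift
import Foundation

// Qualify a trial from first-session onboarding answers before sharing it.
struct OnboardingProfile {
    let age: Int
    let problemSeverity: Int      // e.g. "how much does this affect you?" (1-5)
    let triedOtherSolutions: Bool // well-informed, high-intent indicator
}

func didStartTrial(profile: OnboardingProfile) {
    // Always keep the raw trial event for your own analytics.
    logAnalyticsEvent("trial_started")

    // Only surface the ad-network event for profiles that historically
    // convert better: here, 25+ with a severe, researched problem.
    let qualifies = profile.age >= 25
        && profile.problemSeverity >= 4
        && profile.triedOtherSolutions

    if qualifies {
        sendToAdNetwork(event: "qualified_trial")
    }
}

// Hypothetical wrappers for product analytics and the SDK / MMP call.
func logAnalyticsEvent(_ name: String) { /* analytics call */ }
func sendToAdNetwork(event: String) { /* Meta SDK / MMP call */ }
```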
[00:36:28] David Barnard: Yeah, back to Thomas. I don’t know how much of that you heard, but it’s okay to repeat some of it. So why don’t you give us your take on signal engineering for qualifying a trial?
[00:36:39] Thomas Petit: I got part of it, just the end.
Sorry about the disconnection. I caught the part where you said a lot of the onboarding questions are actually useful. And it’s true that the data users give you without saying it, like the device they have and where they’re from and so on, can help, but there’s a point where the onboarding questions are gonna inform you best.
At the beginning, I was over-reliant on those other factors. I know people who go even further, in the sense that they design the onboarding to capture this information. But you start from: what do I have, and does it make a difference? I’m gonna take every step in my onboarding and look: does it create variance in the revenue per trial?
That would be mainly trial-to-paid conversion, but also refunds, renewals, and so on. What creates variance? Oh, that question doesn’t create variance: I’m not gonna use it. And then you basically go and look for which question is discriminating, and then you might use it.
The second thing I might add, and I’m gonna take a guess that Marcus didn’t say it: at the beginning I made the mistake of trying to identify the best users, the best audience, and send that to the network. The problem is that very often, there are not enough of your best users.
So I was filtering for, let’s say, the top 10% of users, which is also great research to run anyway: what differentiates them, how they behave differently, what they answer. It’s gonna inform a lot of things you can do in your product. But from a UA point of view, I’d rather think negatively. Instead of looking for the best users, I ask: what characteristics do the worst users have?
The ones that always cancel, no matter what. Because on the platforms, as we mentioned, you need a minimum amount of volume. Let’s say your budget today only allows you to send 20 events per day. If you select the top 10% and only send two per day, it’s not gonna work well. But if you’re able to remove the three, four, five that are the least likely to convert, filtering negatively rather than positively, that’s often a bit more efficient, because you’re just removing the worst 10%, 20%, 30%, rather than looking for just the best.
And also because if you filter from the top, you might restrict the audience: okay, there is a great audience there, but it’s a very small one, so the campaign will be stuck at some point. So that would be my advice: look at your onboarding questions for discrimination, maybe come up with additional questions, but look at it from a negative standpoint.
What do users who convert the least have in common? One very common thing: you realize that users below 18 know exactly how to cancel a trial, and they don’t have a lot of money, so they never convert. So one typical case of filtering or qualifying trials: if I’ve asked users their age, and I realize that users below 18 or 21 have a terrible trial-to-paid conversion,
then maybe I’m gonna start sending the network only trials from users over 18, or over 21, or whatever your data is telling you; I don’t want to pinpoint a specific age. Or maybe you run campaigns specifically on that audience that optimize towards another event: okay, for everybody below 25, I’m gonna optimize towards something else, because I see potential there, but not with a trial-optimizing campaign.
[00:40:07] David Barnard: Technical question from me, since I’m the naive one here, but maybe some folks in our audience are asking themselves the same thing. I’m assuming you can’t send a negative signal to the ad network saying, don’t send me folks like this. Okay, you’re shaking your head, so the answer is no.
So then where are you doing this filtering? Is this logic inside the app itself, where you are taking these signals and then feeding them back? And does that depend on whether you’re using an MMP or AEM or the SDK or whatever? Talk me through some of the different setups that you would use to actually do this filtering of which events to send.
[00:40:49] Thomas Petit: So obviously, depending on how you share the signal, you have different possibilities. The Facebook SDK is not super flexible, you can’t tweak it much; it’s a little bit more complicated than that. I personally have one case where we’re using an MMP, but through the MMP’s API, not the SDK.
And this offers a lot of flexibility, because I’m able to filter whatever I want before the MMP even sees it. That’s very convenient and very flexible, because a lot happens ahead of it. But it doesn’t mean it’s not possible elsewhere: especially with Firebase, there’s a lot you can do.
Even with the MMP SDKs there’s a lot, and even with the Meta SDK there’s a lot you can do, though it might be a little trickier. And there’s a layer on top of all of this, which is what’s actually happening on the device itself. For example, an onboarding question happens on the device:
I don’t need to touch any API for that. I’m just gonna program: only share this event if these conditions are met. Which could be: if the device is not an iPhone 8, or if the user answered X or Y and not Z at this question. So you can do it from the device, and then the SDK, whether it’s the Meta SDK or the MMP SDK, would pick it up and share it with the ad platform. Now, I see there’s a question related to this, so I’m gonna go for it.
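A minimal sketch of that device-side gating, combining the negative-filtering advice above with the conditions Thomas lists here (device model, onboarding answers). The criteria are placeholders for whatever your own data shows is discriminating, and `sendToAdNetwork` is a hypothetical wrapper around the Meta or MMP SDK call:
```swift
// Device-side negative filter: share the trial with the ad network only
// if the user is NOT in a segment you know never converts. All three
// rules below are illustrative placeholders, not recommendations.
struct TrialUser {
    let age: Int
    let deviceModel: String
    let onboardingAnswer: String
}

func reportTrialIfQualified(_ user: TrialUser) {
    // Filter negatively: remove the worst segments, keep the volume.
    guard user.age >= 21 else { return }                  // e.g. under-21s rarely convert
    guard user.deviceModel != "iPhone 8" else { return }  // e.g. segment that never pays
    guard user.onboardingAnswer != "just browsing" else { return }

    sendToAdNetwork(event: "qualified_trial") // hypothetical SDK / MMP wrapper
}

func sendToAdNetwork(event: String) { /* Meta SDK / MMP call */ }
```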
Some of these criteria may not come from the device but from a server, and there is one that is extremely interesting: users who don’t immediately auto-cancel. A user goes through the onboarding and starts a trial, and there’s a number, not trivial at all, of people who just hit next, go to their App Store settings, and cancel the auto-renewal because they don’t know yet whether they’re gonna want to pay,
and then come back to the app and use it. A lot of us want to use this signal. But this information is not coming from the device. Usually we get it from RevenueCat, because it’s messy to get it from Apple; you can, but it’s very messy. It’s coming from a server.
So you have to have it come back to the device and then send it to the network. And as Marcus said before, the more intermediaries involved, the more can break. I’m not saying RevenueCat doesn’t do the job, but compared to filtering on an onboarding question that lives on the device, where the likelihood of breaking is low,
with a server event the likelihood of breaking is medium: you depend on a server response, the app needs to still be open, a bunch of things. So it’s workable, it’s just not a hundred percent perfect. Let’s say it still works. Then you’ve got SKAdNetwork, which is even more complicated.
Sorry, which never works. No, it never works. I had a conversation last week about that, about the auto-cancels: yeah, we can send the auto-cancel, no problem, but it breaks big time. I’ll go with a funny story about the auto-cancel, because for me it was a no-brainer filter: oh, I’ve got 30% of people who auto-cancel the trial,
so I’m just gonna send only the trials that didn’t auto-cancel in the first 20 minutes or whatever. So I’m delaying the event a bit, which comes with some complexity, because I need to withhold the event until I’ve verified, 30 minutes later, that the auto-renewal was still on.
And by that time, the user might not have reopened the app, so you can’t send it back. This is where it gets messy. And it’s funny, because I stopped using the auto-cancel filter not for this technical reason but for another one, which I’m gonna get into: I realized later that a lot of people who auto-canceled actually ended up converting, but I wasn’t seeing them, because they were converting on another product.
I have a subscription product in the onboarding, and if you don’t convert there, later in the app I’ve got different subscription products; we offer different types of subscriptions. And we never realized that a lot of the auto-cancelers, yeah, they don’t want to be caught by a purchase happening without notice,
but a bunch of them love the app and end up converting on another product. So yes, I was filtering out a lot of people who would never pay, but I was also filtering out extremely valuable users who didn’t want to commit immediately, yet used the app and decided to pay. Those users were not the majority, but they were very high value.
And I was like, oh, I’m filtering very high value users out of what goes back to the platform; this is really a situation I want to avoid. So: can you filter for the auto-cancelers? Yes. Does it come with a couple of drawbacks, both technical and misfiring? Yes. But in some cases I was still doing it.
And this is why, from the very beginning, we said different cases apply to different apps. Here I stopped doing it in a couple of cases where it looked like the most obvious thing: on paper this one looks obvious, but it’s not as obvious.
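For what it’s worth, the delayed-send pattern Thomas describes sketches out roughly like this: withhold the trial event, check the auto-renew flag against your server after the waiting window, and only then forward it. `fetchAutoRenewStatus` is a hypothetical stand-in for a call to your own backend (which might in turn read subscription state from RevenueCat), and the comments note the caveats he just listed.
```swift
import Foundation

// Withhold the trial event ~30 minutes, then send it only if the user
// has NOT disabled auto-renew in the meantime.
func onTrialStarted(userID: String) {
    DispatchQueue.main.asyncAfter(deadline: .now() + 30 * 60) {
        fetchAutoRenewStatus(userID: userID) { stillAutoRenewing in
            if stillAutoRenewing {
                sendToAdNetwork(event: "retained_trial")
            }
            // Caveats: the app must still be running 30 minutes later for
            // this closure to fire at all, and immediate cancellers who
            // later buy a different product get filtered out too.
        }
    }
}

// Hypothetical backend lookup of the subscription's auto-renew flag.
func fetchAutoRenewStatus(userID: String, completion: @escaping (Bool) -> Void) {
    completion(true) // placeholder response
}
func sendToAdNetwork(event: String) { /* SDK / MMP call */ }
```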
[00:45:44] David Barnard: That actually leads me to another question I wanted to ask, which is: where are you looking for these signals?
What’s your preferred place to do that? Are you in Amplitude or Mixpanel, correlating who starts a free trial, cancels, and then comes back, and what percentage do that? Where are you actually analyzing these signals: in the MMP, in product analytics, in a data warehouse? What does that look like?
[00:46:12] Thomas Petit: That typically wouldn’t be in the MMP, because they don’t have all the layers of data I want to add; they don’t have the answers to the onboarding questions unless I program that in. You can filter a few things there, like device and such, but that wouldn’t be the normal place.
The most normal place is product analytics, so that would be Amplitude or Mixpanel. Hopefully you’re well connected with RevenueCat, so you have all the behavioral information but also how the subscriptions behave, because it’s not a given that Amplitude is gonna have all the renewals, the cancels; that’s not in the product, and so on.
So if you have things set up properly between RevenueCat and product analytics, product analytics is definitely the place to go. There are a bunch of cases where we actually do it in our own data warehouse, but those are sophisticated clients that have a very big data warehouse where everything lives. That’s probably the way Duolingo would do it,
and it’s the way a maximum of one or two percent of the people in the chat are gonna do it. I’m not saying it’s not a recommendable way, if you can, but the most common answer is gonna be Amplitude or Mixpanel. The small reality behind it is that I’m never the one doing it: I say what I want, and I have a data analyst doing it.
So the truth is, I very rarely do this myself. I sometimes go in because I see the data analyst didn’t look at one or two criteria that I’ve seen elsewhere be discriminating, so I’ll get into Amplitude and make my filter. But in most cases, I’m not even the one running it.
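The variance analysis Thomas hands to his analyst reduces to a simple grouping: for each answer to an onboarding question, compute trial-to-paid conversion and look at the spread. A self-contained sketch, with made-up answer values for illustration:
```swift
// Which onboarding question discriminates? Group trials by answer and
// compare conversion rates; a flat spread means the question carries
// no signal and isn't worth filtering on.
struct TrialRecord {
    let answer: String   // the user's answer to one onboarding question
    let converted: Bool  // did the trial convert to paid?
}

func conversionByAnswer(_ records: [TrialRecord]) -> [String: Double] {
    var grouped: [String: (paid: Int, total: Int)] = [:]
    for r in records {
        var g = grouped[r.answer] ?? (0, 0)
        g.total += 1
        if r.converted { g.paid += 1 }
        grouped[r.answer] = g
    }
    return grouped.mapValues { Double($0.paid) / Double($0.total) }
}

// If "lose weight" converts at 22% and "just browsing" at 3%, the question
// creates variance and is a candidate filter; if every answer sits near
// the average, drop it and test the next question.
```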
[00:47:41] David Barnard: Marcus, we’ve been talking a long time.
You must have a lot of thoughts.
[00:47:46] Marcus Burke: I’m still using the "still active on trial after X minutes or hours" kind of event. And yeah, whether a person comes back really depends on the app in the end. If you have really strong retention and people do keep using it a lot, it’s probably more likely to happen than when you’re a little bit earlier and there’s maybe not as much going on in the long run.
As we know, apps tend to not have the best retention. For example, I recently looked into this with a client, and we saw that within 10 minutes, 15% of all cancellations had happened, and by hour 10 it was already 30% of them. That’s the cutoff point we chose to now try optimizing against.
Again, what I like about Meta is that we can quickly crosscheck our assumptions based on data breakdowns. For example, now I’m running campaigns against these goals, and I can look at the audience targeting of these campaigns and see how my age targeting gets older the deeper the event is in the funnel. That already shows, hey, Meta picked up on this and is going after a higher quality audience, which I know because I also have data on older people converting better.
So that’s a very good early sign that we’re actually driving the audience we’re trying to go after. That’s why I’m just a big fan of Meta: you’re able to dig into this and figure out, at least for now, still, what’s the traffic you’re bringing in, and do these assumptions actually make sense?
[00:49:12] David Barnard: Meta ads fanboy. I wouldn’t have expected that. That’s why you bill yourself as a Meta ads app growth consultant.
[00:49:19] Marcus Burke: If the others ever start optimizing for down funnel signal, maybe I would advertise there, but yeah.
[00:49:25] David Barnard: Was there anything else you were gonna say on those topics?
[00:49:28] Marcus Burke: The other thing, and Thomas has already mentioned it: things just tend to be a lot easier with the Conversions API, if you can trigger server-to-server events. Then you don’t have to figure out all the nitty gritty of
when the SDK tracks and how efficiently, and with SKAN even more so. I’m actually currently validating with a client that you can even run an app promotion campaign on Meta and optimize for a Conversions API event. However, in my incrementality testing, I’ve seen that, in the end, many fewer events get attributed that way.
The SDK caught way more of the incremental value, which in the end means more signal and better targeting. So yeah, even despite us sending as many identifiers as we could through the Conversions API, it didn’t get there in terms of match rate. And of course, that’s gonna be an issue in the long run:
we’re gonna choose the more efficient setup. My client, of course, would love to send everything through the Conversions API, handle it on their end, and not even have to deal with the Meta SDK at all. But at least for them, it didn’t seem like it would be benefiting us. I wish.
[00:50:34] Thomas Petit: Yeah.
[00:50:35] David Barnard: Yeah. All right, one more topic before we get to the Q&A, which we’ll probably run over a little bit on. It seems like the magic solution here is: somebody opens the app, goes through onboarding, you predict their LTV, and you just send the predicted LTV.
When does that work, when does it not, and why?
[00:50:54] Thomas Petit: That’s a good one. So yeah, we’ve talked about events the whole time, but I specifically called it signals because beyond the event there’s also the value that comes with it, whether it’s real or predicted. I hate sending the real value, because with a trial it comes so late that it’s useless by the time it arrives.
So that’s where you want predicted value, and that’s super powerful. It obviously comes with some trade-offs. Events are very binary, and what we’re trying to do with this engineering is make the binary case a little bit better: I’m gonna qualify this, I’m gonna restrict those, and so on.
The value is an even more sophisticated way of doing it, because instead of saying, oh yes, this user is a one, this user is a zero, I’m gonna say: okay, this user is zero, that one is five, that one is 20, that one is 50. But first, you do need enough data to build these predictions, and these predictions can go sideways pretty fast.
They can be wrong even if you have very good systems, because you’re changing the product all the time, because you’re changing monetization, you’re introducing new subscription products that you don’t know how they’ll react. Especially when you’re trying to predict very long term value like LTV. If you’re trying to predict trial-to-paid, it’s probably a little easier, because you don’t have to deal with all the renewals that come so much later that you don’t have many of them yet.
So the first requirement is that you need quite some sophistication in place to be able to predict whether a user is gonna convert or not. And no matter how sophisticated you are, it’s not as easy as it looks. There are a few criteria that determine whether there’s more or less chance for someone to convert, but there’s still a lot of hit and miss in there.
And so we end up sending the wrong value. I’ve got an anecdote, and I’m not gonna name them, but one ad network, not Google or Facebook, actually shut us down. We can’t advertise with them anymore on value, because we were sending a predicted value for users who never generated any purchase, and they claimed it was messing up their system, because they then believed these users had a value and started sending them to other clients.
I was like, yeah, that’s great. I want this.
[00:53:01] David Barnard: That's my old strategy.
[00:53:02] Thomas Petit: But they kicked us out. I don't think we're big enough to mess up their models, but you see why they would do it, because I'm sending a wrong value, and no matter the sophistication, it's gonna be wrong. So one: you need to be able to predict the value, and it's not that simple.
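Editor's note: to make the prediction step concrete, here's a minimal sketch of the kind of model Thomas is describing: score a user right after onboarding and send expected revenue instead of waiting weeks for the real conversion. The features, plan price, and model choice are illustrative assumptions, not anything specified in the session.

```python
# Minimal predicted-LTV sketch: estimate P(trial converts) from onboarding
# signals, then send expected revenue as the value signal. Toy data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per historical trial user:
# [completed_onboarding, sessions_on_day_0, age_25_plus]  (assumed features)
X = np.array([[1, 3, 1], [0, 1, 0], [1, 5, 1], [0, 2, 1], [1, 1, 0], [0, 0, 0]])
y = np.array([1, 0, 1, 0, 0, 0])  # 1 = converted trial to paid

model = LogisticRegression().fit(X, y)

ANNUAL_PLAN_PRICE = 59.99  # assumed single-SKU price

def predicted_value(features: list[int]) -> float:
    """Expected revenue = P(convert) * plan price -- the number you'd send
    to the ad platform right after onboarding, instead of the real value."""
    p_convert = model.predict_proba([features])[0][1]
    return round(p_convert * ANNUAL_PLAN_PRICE, 2)

print(predicted_value([1, 4, 1]))  # engaged user -> higher predicted value
print(predicted_value([0, 0, 0]))  # low-signal user still gets a nonzero value
```

Note the second print: every non-converter still carries a small nonzero expected value, which is exactly the behavior Thomas says got his account shut off on that network.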
The second one is that value-optimized campaigns can be very powerful, and I've found really strong scalability with them outside of Meta, curiously, but that's another question, it doesn't matter. Google has them, for example, and on Android, on UAC, they're very powerful.
On Google on iOS they're terrible, don't even try; Google even removed them for a while, and they're bringing them back now. But on Google on Android they're extremely powerful. On some networks on Apple, on ironSource, they're also very powerful. But one, they require more data, obviously.
They need more data points to figure out who the high-value users are and who the medium-value users are. So you don't need ten conversions per day, you need more than that, because you're gonna need five of the high value, five of the medium value, ten of the low value. It just takes more data. So that is a drawback: you can only run it on bigger campaigns. And two, they have more volatility. They tend to react in unpredictable ways. They bring a lot of value, both in scale and in returns, but they're harder to manage.
They go sideways very fast. The algorithm will make bizarre decisions, even more than on the other campaigns. So they come with a bunch of risk, because you're operating at a higher scale with a higher volatility, and sometimes that's hard to handle. In a perfect world, I'd want to have both, because they may attract different people, and like Marcus mentioned before, when one goes sideways, you have the other one.
So in a perfect setup, I want to have value campaigns and run them in parallel, and have a little bit of both worlds. But no, they're not a silver bullet, it's not "let's all move to value, it's great." They're very volatile, and that's a problem.
[00:55:01] Marcus Burke: I also see them as an addition, and I haven't cracked them for subscription apps yet.
We tried; the last time I really tried was at a company where I would've said we have enough data, at Blinkist. In the end, what's tough with this is that they're designed to work with a broad variety of values, and that's something you need to fake with a subscription app, because your SKUs don't differ that much.
In a gaming company, there are people that are gonna purchase stuff for a thousand bucks in the first days and others that purchase for $1. And that's of course way more interesting data for an algorithm like this to work with. In a subscription app, by nature, everything is at roughly the same price point.
So then you need to figure out how to add this kind of predictive LTV on top: what multiplier do we give someone that's older, what multiplier do we give someone that's on a new iOS version versus an older one? So you can try to fake it. I haven't seen it working yet, but I know some people are running it.
But it comes as an addition to your app-event-optimized campaigns. And yeah, there's a whole bunch of companies in the space operating here. The guys at Journey from Denmark, for example, are doing predictive LTV modeling for subscription apps, and there are other companies too.
And they actually use, I think, value-based lookalikes, feed that signal to a lookalike, and then use that to adjust the multipliers on the platform. So it's quite advanced. But in the end, what they do for you is take all your data and tell you who should get what multiplier, because some users are X times more valuable than others, so that an algorithm can actually be fed with it.
But advanced territory for sure.
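Editor's note: a hypothetical sketch of the multiplier approach Marcus describes, widening a flat subscription price into a spread of values from attributes observed at signup. The attributes and multiplier values are invented for illustration; in practice they'd come from a pLTV model or a provider like the ones mentioned above.

```python
# "Fake" a value spread for a flat-priced subscription by applying multipliers
# from signup attributes. All names and numbers here are illustrative.
BASE_TRIAL_VALUE = 10.0  # arbitrary base; the relative spread is what matters

MULTIPLIERS = {
    "age_35_plus": 1.8,    # e.g. older users retain better in your data
    "recent_ios": 1.3,     # newer iOS version as a rough device-value proxy
    "finished_quiz": 1.5,  # completed the onboarding quiz
}

def trial_value(user: dict) -> float:
    """Turn one flat-priced trial event into a differentiated value signal."""
    value = BASE_TRIAL_VALUE
    for attribute, multiplier in MULTIPLIERS.items():
        if user.get(attribute):
            value *= multiplier
    return round(value, 2)

# Same SKU, very different signal:
print(trial_value({"age_35_plus": True, "finished_quiz": True}))  # 27.0
print(trial_value({"recent_ios": True}))                          # 13.0
```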
[00:56:44] David Barnard: Yeah. I think we're at the top of the hour, and we've gone through all the notes we were gonna cover in this conversation. But Thomas or Marcus, if either of you need to leave, that's fine; I did want to spend a few minutes going through and answering audience questions.
So if you have any questions that haven't been answered, go ahead and throw 'em in now, and we'll go through them in order of upvotes. Upvote any questions you wanna make sure we answer. I'll start with one: for iOS Google Ads, would you use Firebase or an MMP?
[00:57:22] Thomas Petit: We had this question before on Meta, and we didn't mention the case of Google. So Google claims that Firebase is much better than an MMP for Google Ads. I fought this claim for a very long time. The thing is, and this is a difference from Meta, Google is gating some features behind sending through Firebase.
So for example, value campaigns: you can only do them through Firebase, you can't with an MMP. So if you want to do that, you use Firebase. And when you send via Firebase, you can use some kinds of audience layering that you can't through an MMP. So there's a bunch of whitelisted features with Firebase, which leads me to think that, yeah, it's great to send through Firebase, because then I have access to these features.
Marcus also mentioned before: the fewer intermediaries, the better. So that's actually a good reason to believe Google when they say it might be better to have the signal directly from Firebase, because there are just fewer intermediaries. So far, personally, I haven't found as much difference in performance when you run event optimization. I know a couple of people have seen it; it's not so easy to determine precisely. There's one very recent development coming, a new feature called ODM, on-device measurement, where Google will officially fingerprint with the IP address, but it's legit because they do it on the device, not on their server. We can debate that, but that's not the question.
I think this could really be something that gives Firebase an edge over MMPs, but that's a future thing. I have zero evidence today that using an MMP is detrimental to your performance. But if you have the choice and you're setting up from zero, Firebase is definitely not a bad choice.
[00:59:04] David Barnard: Cool. Marcus, do you have any thoughts, or do you just not even like Google?
[00:59:09] Marcus Burke: I’m not gonna challenge Thomas on Google ads. I’m just gonna say I use meta ads.
[00:59:18] David Barnard: Yeah, that's a really good point, and Thomas alluded to it earlier: Google Ads perform way better on Android because they still have full signal. That may change, as Thomas was saying, with Google doing more on-device fingerprinting. But yeah, Meta... they also have more inventory.
[00:59:37] Thomas Petit: They also have more inventory, right? And they've got better data. But there's also a bunch of inventory from the Play Store, Play Store search, but also all the featuring-style ad placements across the Play Store, that is extremely valuable, and no matter how good measurement gets on iOS, they'll never have that there.
Yeah, on Android, Google is a great channel, and the market share they have on the Android market is a reflection of how good the platform is. I think on Meta's side, the very high market share Meta has in iOS advertising is also a reflection that it just tends to be a platform that works well for advertisers on iOS.
[01:00:14] David Barnard: Yeah. Alright, next question. When implementing Meta app event tracking for mobile apps, should we only send the event we're optimizing for, for example purchases? Or should we also send additional events such as app install, activate app, add to cart, complete registration, and level achieved, to provide broader behavioral insights?
So yeah, do you degrade signal by sending too many signals? What do you do, Marcus?
[01:00:42] Marcus Burke: I would usually say send more, because in the end that allows you to do deeper analysis on your campaign performance. There are always gonna be these little quirks where you see a campaign with a good cost per the action you're optimizing for, but then the funnel seems to break at some other point compared to other campaigns.
And that's the stuff you're looking for in terms of how you're going to inform your creative strategy. You can then look into audience breakdowns: do older people drop off at a certain point? Which again can inform other parts of your product development as well.
So I'd say send more. The campaigns are going to use the signal that you're optimizing for, so from that perspective it shouldn't break anything. But then, don't overdo it. Meta isn't your product analytics, just like your MMP isn't your product analytics, and you don't need to be sending a hundred events in there, because that's not a level of granularity you're gonna look at for your ad performance.
So a few key ones along your funnel can be super insightful.
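Editor's note: Marcus's "a few key ones along your funnel" advice boils down to keeping a small, explicit whitelist of ad-relevant events and refusing to forward anything else. A sketch of that gatekeeping, with invented event names and a placeholder send function:

```python
# Forward only a handful of funnel events to the ad platform; everything else
# stays in product analytics. Event names here are illustrative.
FUNNEL_EVENTS = {
    "app_install",
    "complete_registration",
    "start_trial",
    "qualified_trial",  # the event the campaign actually optimizes on
    "purchase",
}

def send_to_meta(event_name: str, params: dict) -> None:
    """Placeholder for your actual Meta SDK, MMP, or Conversions API call."""
    print(f"-> meta: {event_name} {params}")

def track(event_name: str, params: dict) -> bool:
    """Gatekeeper: drop fine-grained product events before the ad platform."""
    if event_name not in FUNNEL_EVENTS:
        return False  # e.g. 'lesson_page_scrolled' never leaves product analytics
    send_to_meta(event_name, params)
    return True

track("start_trial", {"plan": "annual"})    # forwarded
track("lesson_page_scrolled", {"page": 4})  # dropped
```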
[01:01:48] David Barnard: Gotcha. Next question. Have you started any campaigns with SKAN 4 on Meta? If so, have you noticed any significant changes in your events? Do you have different events configured for each of the postback windows, or the same for all of them?
[01:02:04] Marcus Burke: I haven't. Thomas, have you?
[01:02:06] Thomas Petit: No, I haven't. But to answer this: I'm personally entirely convinced that postbacks two and three, since that's in the question, are there for you to understand the quality of the traffic; Facebook is not gonna optimize towards them. In the SKAN 4 setups I've done, it's true that in several cases I've configured the same events on the different postbacks.
But that doesn't mean it's necessarily the best setup; it's just that it made sense for me, and it makes the results easier to read that way. It's got to be something you can report on, understand, and actually extract value from, but keep in mind that the optimization event is gonna be the one from the first postback.
That's the important one.
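Editor's note: a hypothetical sketch of the postback layout Thomas describes for SKAN 4: the fine-grained value in the first postback carries the optimization event, while postbacks two and three only get coarse values and serve reporting. The thresholds and encodings below are invented.

```python
# SKAN 4 sketch: window 1 gets a fine value (0-63) plus a coarse value;
# windows 2 and 3 only get coarse values. Mappings are illustrative.
def first_postback_values(trial_started: bool, qualified: bool) -> tuple[int, str]:
    """Window 1: encode the event you optimize on in the fine value."""
    if qualified:
        return 40, "high"
    if trial_started:
        return 20, "medium"
    return 0, "low"

def later_postback_value(revenue_so_far: float) -> str:
    """Windows 2 and 3: coarse only -- read traffic quality, don't optimize."""
    if revenue_so_far >= 50:
        return "high"
    if revenue_so_far > 0:
        return "medium"
    return "low"

print(first_postback_values(trial_started=True, qualified=True))  # (40, 'high')
print(later_postback_value(59.99))                                # 'high'
```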
[01:02:53] David Barnard: I'll resist the urge to rant for 20 minutes on Apple building SKAN and not actually providing a useful solution. But this is something I'm very frustrated about and have bugged Apple quite a bit about, because as the platform provider, they should be able to provide better signal and better tooling, and do it in a more privacy-friendly way.
And instead we just get all the fingerprinting, and it works. And so nobody uses SKAN, and that's a shame.
[01:03:20] Thomas Petit: I can rant harder, David. I work with a kids app. As a kids app, we can't use the IDFA and we can't use the Google Advertising ID, the GAID, right? So on Apple we can use SKAdNetwork.
We've got a replacement system that we agree is not perfect, but at least it exists. On the Google platform, the privacy equivalent doesn't exist, doesn't run on their network. So without the Google Advertising ID, there's actually no alternative at all. So my rant is: a bad one is preferable to none. But yeah, if we could have a better one on both platforms, that would be amazing.
[01:03:54] David Barnard: Yeah, kids apps are maybe the only exception. Or are there other exceptions where SKAdNetwork does provide value?
[01:04:02] Thomas Petit: I know one case, I'm not sure I have the right to mention the name, but let's say it's a very big brand with some energy drinks that, for legal reasons, does not want to share anything at the user level with Meta and Google.
And so for them, having SKAdNetwork is actually the only alternative, just because someone higher up in legal decided that the alternative was not an option. But that's pretty much an edge case.
[01:04:29] David Barnard: Yeah. Unfortunate, though, that Apple's not done better with all this. All right, next question.
Have you seen successful examples of multiple qualified trial events, or even trial starts, depending on conversion confidence, i.e. demographics, time, or depth of engagement? I think we covered this one, but are there any additional thoughts?
[01:04:54] Thomas Petit: I thought Marcus explained this question very well, especially when we said we filter people out, and younger people tend not to convert and so on: maybe I'm gonna optimize towards anyone over 25.
But I might run a campaign specifically for the audience below that age; it's just that I'm gonna optimize towards a very different goal. So let's say on my main campaign on qualified trials, I'm gonna try to achieve $20 per trial. On the other campaign, because I know the trial conversion is so low, I'm gonna try to achieve five or ten.
So it's perfectly useful to have the non-filtered event, as long as you optimize it differently. It's not about finding the one best signal ever, but about utilizing them in different ways. We mentioned events and value, but here's a very good case as well: the qualified trial on one audience and the plain trial on the other audience, with a different bid level, for example.
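Editor's note: the split Thomas just described, same product, two campaigns optimizing on different signals at different cost targets, can be summarized in a toy config. All names and numbers are invented.

```python
# Two campaigns, two signals, two targets -- not one "best" signal.
campaigns = [
    {
        "name": "core_25_plus",
        "audience": "age 25+",
        "optimization_event": "qualified_trial",  # filtered, higher-intent signal
        "target_cost_per_trial": 20.0,
    },
    {
        "name": "younger_audience",
        "audience": "age < 25",
        "optimization_event": "trial_start",      # unfiltered signal
        "target_cost_per_trial": 7.5,             # lower target, lower expected conversion
    },
]

for c in campaigns:
    print(f'{c["name"]}: optimize on {c["optimization_event"]} '
          f'at ${c["target_cost_per_trial"]:.2f} per trial')
```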
[01:05:51] Marcus Burke: Yeah, gotcha, agree. And I'd say I have seen cases where people did optimize for a number of events, like a trial start and then a qualified one and maybe another one. But in the end, oftentimes really finding that event and proving that it does better is not that easy if you run app promotion campaigns, because it's not like you're gonna get a campaign identifier in the app that then tells you, hey, user quality is higher.
You have to figure out, through incrementality testing and the data you collect in onboarding, whether we actually brought in an audience that now serves us better, and whether that's worth potentially even paying more upper-funnel, because that can happen. So it's definitely not like you can turn on five events today and that's gonna make you scale.
It will be hard to even just find one.
[01:06:41] David Barnard: All right, another SKAdNetwork question. Does SKAN help with the AEM signal? Facebook's documentation officially states it doesn't use it, but we've heard it can help. Any hints as to whether SKAN does help, or do you think Facebook's documentation is correct that it's not actually taking that into account?
[01:07:04] Thomas Petit: Even if it doesn't help, it can't hurt, so I see no reason not to have a proper SKAN setup. Let's say it has zero impact on the optimization, like Facebook claims, and that's a very real possibility. At least it gives you the option to compare: you have the attribution settings where you can look at the SKAN numbers, and maybe you can compare a SKAN campaign and an AEM campaign on the same criteria, because then you have the second column.
So personally, I don't see any case where it's gonna hurt, so why not have it? It gives you potential reporting and potential other campaigns to run, even if it doesn't improve the AEM performance. So I don't see why not do it.
[01:07:45] Marcus Burke: I mean, I know AEM is working without SKAN, that's for sure.
I'm not sure if it helps. But SKAN, in the end, also has view-through attribution, and AEM doesn't, so it can be a really good indicator of how many users are coming in view-through. And the younger your audience, the higher that share can be. So basically, just having SKAN as an additional reporting point is super interesting, and it's not a big deal to set it up.
[01:08:10] David Barnard: Cool. Next question. Have you seen performance improvements with web-to-app campaigns on Meta or other platforms by implementing the respective platform's CAPI? And then, define for me: what is CAPI, the acronym?
[01:08:25] Thomas Petit: So CAPI means Conversions API. In the old times, if you had a website, you would implement the Facebook pixel on the website, and if you had an app, you would install the Facebook SDK to send your conversions.
What happened on the web side of things is that the pixel got less interesting, as it gets filtered out by a bunch of browsers and people with ad blockers and cookie banners and so on. So the pixel only reports a part of the conversions. CAPI, the Conversions API, enables you to send all the conversions through the backend instead.
Obviously, from a legal point of view, you're only supposed to send what you have consent for; in practice, that's not necessarily how it happens. So CAPI is basically the backend version of the pixel. And if you're running a website, it's obvious that you should be using the Conversions API and not just the pixel. Maybe you can use the two in parallel, but you need the Conversions API.
On apps, we're usually using the SDK and/or an MMP. Marcus did mention before that there are ways to send your in-app conversions through the Conversions API as well, which in practice gives a lot of flexibility, but is not as reliable as the SDK and MMP; it can create glitches and stuff.
So, to relate to the question: I don't think you can compare web-to-app campaigns directly with app-to-app campaigns. But you can run campaigns where you're not doing a long onboarding, just a short landing page, and then it goes to the app, and then you can send the events that happen in the app on a web campaign.
And that's a very interesting one to run. For me, they're also complements. I find it very hard to compare the performance of one against the other side by side, just because it's measured differently, the audience is different, it's just another auction pool. So it's very hard. But definitely it's a scheme that's working very well for some people.
I've also failed at making it work in a couple of cases, so it's not like the ultimate thing. But if you're gonna run campaigns that are not app promotion, you need the Conversions API. If you run an app campaign, Marcus already answered that: it comes with some glitches.
[01:10:37] Marcus Burke: Yeah, and if you have a quiz and you convert people on the web, then you should definitely have CAPI in place, because the pixel is not gonna capture a lot of those purchases.
And you can actually check your event match quality for web in Events Manager. Facebook has really good reporting around it, and they tell you, hey, we think this is your match score, and we recommend sending these additional things so that we can increase it. With the pixel alone you might get up to a three out of ten, while with CAPI you can go up to nine.
So definitely add those identifiers, so that Meta knows who's purchasing and can then attribute back to the campaigns.
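Editor's note: for readers who want to see the mechanics, here's a minimal sketch of a server-side Conversions API purchase event in Python. The pixel ID and access token are placeholders, the requests library is assumed, and field names and the API version should be checked against Meta's current CAPI documentation.

```python
import hashlib
import time

import requests  # third-party HTTP client, assumed installed

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def sha256(value: str) -> str:
    """Meta expects identifiers normalized (trimmed, lowercased), then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def send_purchase(email: str, value: float, currency: str = "USD") -> dict:
    """Send one purchase event server-side, e.g. from a web quiz funnel."""
    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {
                # More hashed identifiers (email, phone, external ID) generally
                # means a higher event match quality score.
                "em": [sha256(email)],
            },
            "custom_data": {"currency": currency, "value": value},
        }]
    }
    resp = requests.post(
        f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",
        params={"access_token": ACCESS_TOKEN},
        json=payload,
        timeout=10,
    )
    return resp.json()
```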
[01:11:14] David Barnard: All right, here's a fun one that won't be controversial at all: what is your favorite MMP? And I'll add: and why. Or, maybe a better question so you can be more politic: what are the benefits and drawbacks, or are there specific setups that work better with certain MMPs?
What are your thoughts on selecting an MMP to do all this?
[01:11:37] Marcus Burke: I don't really have a favorite. I really liked Singular for the longest time. They had good creative analytics as well, and they were a thought leader from the beginning on the whole ATT rollout and SKAN, so they really doubled down on that, and I liked that standpoint back then.
I'm not using SKAN anymore, but anyway. And then Adjust: I'm from Germany, and many German apps used to work with Adjust; all the companies I worked for before did. What I never liked about them is that they didn't have a nice user interface. You had to pull data from the API and then visualize it on your own.
If you just wanted to go in there, it was all just a big table, and that was about it. They've changed that by now. So I don't really have a preference these days: AppsFlyer, Adjust, Singular, I'm open to any.
[01:12:26] Thomas Petit: For the record, I collaborate with three of them, so... I personally believe that on the pure, hard, technical attribution side, the differences between them are not that significant.
They all work as intended; it's not like one of them is a complete failure or whatever. There is one that is a little bit less meant for UA than the others, but I'm not gonna go too deep. The ones you mentioned, Adjust, Singular, AppsFlyer: I think they offer a reliability of attribution that is very similar, and running your campaigns on Meta through either of those three SDKs is probably gonna yield the same performance.
So then there are differentiators, and I think the interface is one. Singular's was really good for a very long time; Adjust has changed theirs. Some people are very used to the Adjust interface. There's also a very personal feel of, oh, I like this dashboard, I understand this dashboard better, I'm gonna pick that one.
So UI is one, definitely. Pricing is one, definitely, because those tools, as you scale, are not cheap. And you might find out through negotiation, the prices are not completely public, that there are sometimes significant differences. I'm also seeing significant differences within the same MMP depending on how you negotiate.
So pricing makes a very big difference for what you can get, in my opinion, but it's one that's hard to assess beforehand. Support is a very big one. I've been a long-time advocate of Adjust and I've loved their team for a very long time, but at least for the European support, I think the quality has gone from above average to below average, and lately it's been very disappointing. It really depends where you are: I have one team in Europe that's still very responsive and very technical, but overall my experience has gotten a little bit worse over time. And in these days, the role of the MMP is to pass through data, but also to help you figure out the SKAN environment and figure out what's best; the education and the accompanying you through it is actually an important part of the role.
So the feeling you have, or feedback from other people, and I'm not saying Adjust's support is terrible, it's very geography-dependent, is a very important thing to figure out at the beginning. I've had great and poor experiences with every single one of the MMPs. But locally, there are countries where I would pick one rather than another, just because I know the support there is great or not. Yeah.
[01:14:58] David Barnard: That's a great answer, and nicely navigated around all the complexities of that. Alright, we're pretty deep in here; we'll do one more question, because we've just gone super deep. Let's see, this is the highest-upvoted one, so we'll do it: when mapping events in an MMP, I have the option to send all media sources, including organic. Should I choose this option?
And then the same question for web-to-app as well.
[01:15:25] Thomas Petit: Yes. Yeah.
[01:15:29] David Barnard: Okay. And that just gives you more data to break down and analyze, if you collect all that?
[01:15:39] Thomas Petit: The whole concept behind these signals you send is that you're gonna show the platform all your users with a bunch of parameters: ah, these are my users.
You're gonna let the platform decide which ones they think are coming from their campaigns. So the more users you restrict, the more parameters you restrict, the more restrictions you put in there, the more signal loss there is on the way, and the less the platform can actually allocate properly. There is an argument about the extent to which they're actually honest with attribution and not trying to claim users that aren't coming from them, and so on.
But that's not the question. If you restrict, what is organic? It's very hard to define. If you tell Facebook, no, those you can't even see, then somebody else has made the decision that they were organic, but maybe they were not as organic as somebody else might say.
So in my opinion, you shouldn't restrict. Then you should be careful and not necessarily trust what the platform is telling you, but at least you should give it the opportunity to attribute, and then make your decisions. So I'm not restricting, and that's why I say the answer is yes.
[01:16:44] Marcus Burke: Yeah, I do too. In the end, they need that signal.
Also, someone not converting is a signal they can base decisions on. So the more there is, the better. I did have cases recently where, while attributing through the Meta SDK within our campaign setup, because we were sending through three different sources, and as Thomas said, don't restrict sources, it got a bit messy. There was duplication happening, and we had to build in some logic so that not every campaign receives all the signal, because that was creating some chaos.
But other than that, I'd say yeah, don't let your MMP restrict each network, because then the MMP is calling the shots, and they don't necessarily need to be.
And in the end, what you wanna do is, again, cross-check what's attributed through incrementality testing, making sure that across all your networks you don't attribute twice as many conversions as are actually happening in the app.
[01:17:35] David Barnard: Cool. This has been a ton of fun. Anything either of you wanna say as we’re wrapping up?
[01:17:42] Thomas Petit: No, it was cool. Very good to hear Marcus's takes, and I hope the audience found value. I saw there were a bunch of questions that may have gone unanswered, but we've got the Sub Club forum to continue this conversation for the questions we couldn't answer in time.
[01:17:58] Marcus Burke: Haven't been there in forever, but yeah, let's send them there and do an AMA. That was fun the last time I did it.
[01:18:05] David Barnard: Yeah, totally. Alright, thank you everybody for joining, and Thomas and Marcus, thank you both so much for sharing. Both of you share so much on LinkedIn and Twitter and forums like this and in the Sub Club community.
So thanks for sharing so much knowledge to help other folks in the industry who don't get to see as much as you see. I appreciate you coming on.