We started running paid acquisition campaigns about two months ago across two apps. Here is what we set up, what we observed, and where we still have open questions.
AI has collapsed the skill barrier in software. Tasks that used to require a team of engineers now require a few good prompts and a weekend. The result shows up everywhere, but nowhere as clearly as the App Store. Apple is processing more submissions than ever, in every category, at a pace that keeps accelerating.
Our first reaction was that this would make distribution harder. More competition means fewer users for each app, right? We have since changed our minds on that.
When a category fills up with apps, that category gets more search traffic, more editorial attention, more cultural surface area. People discover they have a problem they need solved, and they find ten apps trying to solve it. The market does not divide. It expands. The challenge we kept running into was not the competition. It was visibility. Getting discovered at all. That is the problem paid acquisition is designed to solve, and why we decided to run it.
There is a version of the app launch story that sounds very clean. You build something great, you release it, people find it, word spreads. That story exists. It just almost never applies to a brand new product with zero reviews, zero ratings, and no historical signal in the algorithm.
The App Store search algorithm, like any recommendation system, feeds on data. It wants to see engagement, installs, people coming back. A new app has none of that. So the algorithm ignores it, which means fewer installs, which means less data, which means continued invisibility. It is a cold start problem and it is brutal.
Paid acquisition was our way out. The primary goal is simple: spend less than you earn. Buy a user for €X, generate €X+ in subscription revenue from that user. That is what publishers have been doing for years, and it is a legitimate business model for an independent studio if you can get the unit economics right.
But profitability is not the only thing you get from running paid campaigns, and that is what made it compelling to us beyond the revenue angle. When you buy users and track them properly, you find out whether your product actually converts. Without that, you are guessing. You have no idea what percentage of people hitting the paywall subscribe. You do not know your D1 retention. You do not know what your trial-to-paid rate looks like at scale. The first few weeks of paid acquisition were as much a measurement exercise as a growth one. Those users also left reviews, generated engagement data, and gave the App Store algorithm something to work with, which matters a lot for organic visibility on a new app.
Before spending a single euro, we wanted to make sure the ad platforms could optimize on what was actually happening inside the apps. So we spent time setting up tracking pixels for Meta and TikTok, and integrating AppsFlyer as our Mobile Measurement Partner to handle attribution and relay events back to the platforms. We also instrumented D1 and D7 retention properly, because those two numbers ended up telling us more about app health than almost anything else.
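To make that concrete, here is a minimal sketch of the two events on iOS. The `logEvent(_:withValues:)` call is AppsFlyer's SDK method and `af_start_trial` is its predefined trial event name; the parameter values, the retention-ping convention, and the type name are illustrative rather than a copy of our production code.

```swift
import AppsFlyerLib

// Minimal sketch of the event instrumentation, not production code.
// Only logEvent(_:withValues:) is the AppsFlyer SDK call; the retention
// ping and its naming are our own convention here.
enum Tracking {

    // Fired when a user starts the free trial at the paywall.
    // AppsFlyer relays this event back to Meta and TikTok for optimization.
    static func logTrialStart(plan: String) {
        AppsFlyerLib.shared().logEvent(
            "af_start_trial",              // AppsFlyer's predefined trial event name
            withValues: [
                "af_content_id": plan,     // e.g. "yearly_29" (illustrative)
                "af_currency": "EUR"
            ]
        )
    }

    // Fired on every launch; emits an event on day 1 and day 7 after install,
    // one way to instrument the D1/D7 retention mentioned above
    // (per-day deduplication omitted for brevity).
    static func logRetentionPingIfNeeded(installDate: Date) {
        let days = Calendar.current.dateComponents([.day],
                                                   from: installDate,
                                                   to: Date()).day ?? 0
        guard days == 1 || days == 7 else { return }
        AppsFlyerLib.shared().logEvent("retention_d\(days)", withValues: nil)
    }
}
```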
One thing we learned early: over 80% of our conversions happen at the paywall on the very first session. Attention peaks the moment someone installs and drops fast after that. We had built our onboarding around conversion from the start, with the paywall early and a review prompt inside the onboarding flow. That decision held up well in practice.
The goal of the first campaign was not to be profitable. The goal was to generate installs cheaply and learn what kind of creative worked.
We started at €20 per day per platform, running Meta and TikTok simultaneously from day one. They reach different audiences through different content formats, and we found early on that what converts on TikTok looks completely different from what converts on Meta. We wanted to understand both rather than pick one and extrapolate.
The metric we tracked was CPI, Cost Per Install. For US users, we were targeting below €1.20. US users cost more to acquire because they spend more. A €0.60 install from a market where subscriptions do not convert is worth less than a €1.10 install from someone with five active subscriptions on their phone. Cheap installs from the wrong market are not cheap at all. On Provenance, our install campaign settled at €0.78 per US install, well inside that threshold.
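To make the threshold concrete, here is a small sketch of checking per-market CPI against a target. Only the €1.20 US ceiling comes from the text; the struct, field names, and sample figures are made up (and chosen to reproduce the €0.78 Provenance number).

```swift
import Foundation

// Illustrative per-market CPI check; not our dashboard code.
struct MarketStats {
    let country: String
    let spend: Double   // EUR spent in this market
    let installs: Int
}

func cpi(for market: MarketStats) -> Double {
    market.spend / Double(max(market.installs, 1))
}

// Per-market CPI ceilings; only the US value comes from the campaign setup above.
let targets = ["US": 1.20]
let markets = [MarketStats(country: "US", spend: 390, installs: 500)]  // sample numbers

for m in markets {
    let value = cpi(for: m)
    let overTarget = value > (targets[m.country] ?? .infinity)
    print("\(m.country): €\(String(format: "%.2f", value)) per install\(overTarget ? " (over target)" : "")")
}
```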
We ran each creative for a minimum of 48 hours before drawing conclusions, with a cap at around 72. The temptation to kill a creative after a bad first day is real and almost always wrong. Algorithms need time to learn. We made that mistake early and stopped.
Something Phase 1 taught us quickly: creatives burn out. An ad that performed brilliantly in week one starts losing its edge in week three. The audience has seen it. The algorithm has extracted what it can. Creative attrition is natural and relentless, and it means the work of finding and testing new material never stops. Phase 1 was not a launch event. It turned into permanent infrastructure.
The signal we were looking for at the end of Phase 1 was roughly 7 to 10% of users starting a trial at the paywall. Below that, something in the onboarding or paywall needed work before going further.
After a solid week of Phase 1, we had trial start events flowing through AppsFlyer. We fed those events back to Meta and TikTok and launched a second campaign, this time optimized for trial starts rather than installs.
We seeded Phase 2 with the best creatives from Phase 1, specifically the ones with the lowest CPI. If they drove cheap installs, they were the most likely to drive cheap trials too. We were not starting from scratch; we were promoting our winners.
We did not turn off the install campaign when we launched the trial campaign. It took us a while to understand why keeping both mattered. The two campaigns serve different functions and complement each other. The trial campaign is more expensive per event and generates fewer users overall. The install campaign keeps conversion data flowing into the product while staying cheap to run at €20 per day.
More importantly, the install campaign became what we now think of as a creative academy. Here is how the loop worked: we tested a new creative for 48 hours on the install campaign. If the CPI was excellent, we graduated that creative into the trial campaign where real money gets made. Then we went back to the install campaign and started iterating to find the next winner. The install campaign was our testing ground, our nursery, our early warning system for creative fatigue.
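We ran that loop by hand rather than in code, but the decision rule is simple enough to write down. A sketch, assuming the €1.20 US CPI bar from earlier and the 48-to-72-hour windows described above; the function and labels are ours and nothing here talks to the ad platforms.

```swift
// The creative "academy" loop as a decision rule, applied manually in practice.
enum CreativeDecision {
    case keepRunning              // not enough data yet
    case graduateToTrialCampaign  // promote the winner to the trial campaign
    case retire                   // rotate out, start testing the next creative
}

func evaluateCreative(hoursLive: Double,
                      spend: Double,
                      installs: Int,
                      cpiTarget: Double = 1.20) -> CreativeDecision {
    // Give the algorithm at least 48 hours to learn before judging anything.
    guard hoursLive >= 48 else { return .keepRunning }
    let cpi = spend / Double(max(installs, 1))
    if cpi <= cpiTarget { return .graduateToTrialCampaign }
    // Past roughly 72 hours with a weak CPI, rotate it out.
    return hoursLive >= 72 ? .retire : .keepRunning
}
```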
On budget: we landed at €20/day for the install campaign and €100/day minimum for the trial campaign. That is €120/day, or roughly €3,600/month, per platform. We started with a single platform at that level and added the second once we had a working creative. The lower the daily spend, the longer you need to leave campaigns running before drawing conclusions. Algorithms need volume to learn, and we learned that cutting spend when results looked slow usually made things worse.
Before thinking about scaling, we needed to know whether what we were running was profitable at all. This is a step that is easy to skip, and it is an expensive skip.
The metric is ROAS: Return on Ad Spend. Revenue generated divided by money spent. A ROAS of 1.0x means break-even. Above it you are making money. Below it you are subsidizing your users, and you need to understand why before going further.
Calculating ROAS for a subscription app requires care because we are not selling at a fixed price. We are selling a trial that may or may not convert. The formula we settled on:
Expected revenue per trial start = conversion rate × net revenue per subscription
ROAS = expected revenue per trial start / cost per trial start
Here are the real Provenance numbers at the time of writing. Our Meta trial campaign is running at €13.64 per trial start. The yearly plan is €29. We enrolled in Apple's Small Business Program which drops commission from 30% to 15%, so net revenue per subscription is €29 × 0.85 = €24.65. Our observed trial-to-paid conversion rate over the past few weeks is 51.9%.
Expected revenue per trial = 51.9% × €24.65 = €12.79
ROAS = €12.79 / €13.64 = 0.94x
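Written as a small helper, with the Provenance numbers just quoted as defaults; the type and property names are ours, and the struct is just a convenient way to rerun the math with different inputs.

```swift
// The trial-campaign ROAS arithmetic from above.
struct TrialCampaignEconomics {
    var costPerTrial: Double              // €13.64 on Meta at the time of writing
    var planPrice: Double = 29.0          // yearly plan
    var netShare: Double = 0.85           // after Apple's Small Business Program cut
    var trialToPaidRate: Double = 0.519   // observed trial-to-paid conversion

    var expectedRevenuePerTrial: Double { trialToPaidRate * planPrice * netShare }  // ≈ €12.79
    var roas: Double { expectedRevenuePerTrial / costPerTrial }
}

print(TrialCampaignEconomics(costPerTrial: 13.64).roas)  // ≈ 0.94, today's number
print(TrialCampaignEconomics(costPerTrial: 12.00).roas)  // ≈ 1.07, the break-even target mentioned below
```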
We are just below break-even after one month of testing. Essentially at equilibrium. That is actually a reasonable place to be at this stage. The conversion rate is solid, the creative is finding the right audience, and the main lever we have identified is bringing the cost per trial down through better creatives. Getting from €13.64 to €12 would push ROAS above 1.0x.
One framing that helped us think about the budget: at 0.94x ROAS on a €3,600/month spend, the campaign generates roughly €3,384 in revenue. The actual net cost out of pocket is around €216 per month, not €3,600. The gross spend is the working capital we need available, but the real burn is much smaller when the campaign is close to break-even.
The harder problem we ran into was visibility. With a 7-day free trial, we were blind for at least a week after any campaign change. Money spent, trials accumulating, but no signal on how many would convert until the period expired. And that was just the short-term blindness. The longer-term picture was more complex.
When running campaigns continuously and rotating creatives regularly, we were effectively buying different batches of users every week. Some acquired in a week where a great creative was running, some in a weaker week, some on Meta, some on TikTok. These users did not behave identically. A yearly subscriber from January might renew twelve months later. A weekly subscriber who churned after three weeks generated very different lifetime value from one who stayed six months.
The question we kept asking ourselves: for a specific group of users acquired during a specific period, what is the total revenue generated, and how does it compare to what we spent to acquire them?
The answer we landed on was cohort monitoring. We grouped users by the week they first opened the app, and tracked what happened to each group over time independently. For each cohort: spend, trial starts, conversions, churn, and total revenue as the cohort ages.
To get signal on recent cohorts before trials settled, we built a predicted ROAS calculation into our dashboard. For cohorts with trials still running, we applied a benchmark from the last fully settled cohort: their conversion rate and net revenue per subscriber. We multiplied active trial count by that conversion rate to estimate future paid subscribers, multiplied by average revenue, added collected revenue, and divided by total spend. It is directional, not a guarantee, but it gave us a signal within days of a campaign launching.
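A sketch of that calculation, with names of our own choosing; in practice the inputs come out of RevenueCat and the ad platforms' reporting APIs rather than being typed in by hand.

```swift
// Predicted ROAS for a cohort whose trials have not settled yet.
// The benchmark comes from the last fully settled cohort.
struct WeeklyCohort {
    var spend: Double             // ad spend attributed to this cohort
    var collectedRevenue: Double  // net revenue already collected
    var activeTrials: Int         // trials still inside their 7-day window
}

struct SettledBenchmark {
    var trialToPaidRate: Double          // e.g. 0.519
    var netRevenuePerSubscriber: Double  // e.g. €24.65
}

func predictedROAS(_ cohort: WeeklyCohort, benchmark: SettledBenchmark) -> Double {
    // Estimate how many of the still-running trials will convert,
    // value them at the benchmark's net revenue per subscriber,
    // add revenue already collected, and divide by total spend.
    let futurePaid = Double(cohort.activeTrials) * benchmark.trialToPaidRate
    let futureRevenue = futurePaid * benchmark.netRevenuePerSubscriber
    return (cohort.collectedRevenue + futureRevenue) / cohort.spend
}
```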
Once ROAS stays consistently above 1.0x and cohorts trend in the right direction, the question becomes whether to increase the budget.
The logic seems straightforward. If a campaign is profitable at €3,600/month, it should be profitable at €10,000/month. In practice it is more nuanced. The algorithm serves the best-converting users first. As budget increases, it reaches deeper into a broader audience pool and unit economics soften. This is expected and manageable, but scaling means watching numbers at a different level of attention.
We have not reached this phase yet on either app. It is the next step once ROAS stabilizes above 1.0x, which is where the creative work matters most.
The stack: Meta Ads and TikTok Ads for campaigns; AppsFlyer for attribution, event management, and relaying trial events back to the platforms; RevenueCat for subscription management and revenue data; and a custom cohort dashboard we built on top of RevenueCat's API and the ad platform reporting APIs.
We are not a growth consultancy. We are a small independent studio running real campaigns with real money and sharing what we observe. The numbers in this post will shift as we collect more data. The setup has held up.
The things that stuck: buying data early instead of waiting for organic. Measuring trial conversion specifically, not installs. Building a creative loop that does not stop. Monitoring cohorts instead of aggregate metrics so each week's spend has its own P&L.
More posts to come on each of these.