Stop Building Features. Start Reading Your Data.

Analytics for indie developers who think they need more users. I had 599 installs and no idea what was happening inside my app. 92 users in PostHog rewrote my entire roadmap.


The Builder's Blind Spot

Six weeks after launch, I had 599 users and absolutely no idea what they were doing inside my app.

Downloads were climbing. Daily active users looked healthy. Fifty-nine people had subscribed to Pro. I was deep in the code, building AI features I was genuinely excited about. The roadmap felt obvious.

Then I looked at my analytics, and everything I thought I knew turned out to be wrong.

The problem was simple but devastating: I was building based on how I experienced my own product. My account had months of data, dozens of photos, every feature unlocked. When I opened the app, everything worked beautifully. But I was not my users. I never could be. And the decisions I was making from that perspective were leading me in exactly the wrong direction.

I instrumented specific events and funnels, watched patterns emerge from just 92 users, and that small-sample data completely rewrote my roadmap. App retention metrics do not require massive scale to be useful. The framework I used to segment users by engagement applies to any app, and you can implement the same approach this week.


You Have Context Your Users Never Will

When you build something, you accumulate context that real users simply do not have. You know where every button leads. You understand why features exist. You have patience to explore because you already know the payoff.

New users have none of this. They download your app, open it once, and make a decision in seconds about whether it deserves their attention. The empty states you skip past during development are the exact screens where real users decide your app is not worth their time.


The Roadmap I Built Was Almost Entirely Wrong

GainFrame is a progress photo comparison app. I had spent weeks building features I was proud of: AI deep dive reports, prediction models showing what users could look like in 4, 8, or 12 weeks, weekly summaries that automatically generated trend reports. The roadmap felt clear. More AI features, more analysis, more sophistication.

Then I added PostHog for deeper event tracking. Within five days, with data from just 92 users, three things became immediately obvious; I walk through each one in the sections below.


Common Instrumentation Gaps in App Retention Metrics

Most indie developers have analytics installed. Firebase, Mixpanel, Amplitude, PostHog. The dashboard exists. Events are firing. But the uncomfortable truth is that your implementation may be misleading you — not because the tools are broken, but because the instrumentation is incomplete.

A Note on Methodology: The metrics throughout this piece come from my internal PostHog and Firebase GA4 dashboards for GainFrame, covering the period from launch through the first six weeks. When I say "user", I mean someone who completed app installation and opened the app at least once. "Bounce" means a user who triggered only one engagement signal (typically opening a single feature tab) before churning. Cohorts were formed based on composite engagement scores, which I explain in detail below. All numbers are rounded and should be read as approximate figures from my dashboard, not statistically rigorous research.

My Activation Metric Only Fired on 2 of 8 Code Paths

I had a first_score_received event that was supposed to fire when a user got their first AI physique analysis. This was my primary success metric.

For weeks, my dashboard said about 16% of users were getting scored. The real number was closer to 45%. The event was only firing on 2 of 8 code paths that could trigger a score. I had been making product decisions based on fiction.

If you are not testing your analytics events the same way you test your code, your implementation may have similar gaps.
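One way to close that kind of gap is to route every code path through a single tracking helper, so a new scoring path cannot quietly skip the event. Here is a minimal sketch assuming PostHog's iOS SDK; the helper name, the source values, and the property key are illustrative, not GainFrame's actual implementation:

```swift
import PostHog  // assuming posthog-ios v3; swap in whatever SDK you use

/// One funnel point for the activation event. Every code path that can
/// deliver a first score calls this helper instead of capturing inline,
/// so a new scoring path cannot silently skip the event.
enum ActivationTracking {
    static func firstScoreReceived(source: String) {
        PostHogSDK.shared.capture(
            "first_score_received",
            properties: ["source": source] // which path produced the score
        )
    }
}

// At each scoring entry point (names are hypothetical):
// ActivationTracking.firstScoreReceived(source: "onboarding_baseline")
// ActivationTracking.firstScoreReceived(source: "deep_dive")
```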


Over a Third Never Experienced the Core Value

This was the stat that rewired everything.

GainFrame is a progress photo comparison app. The entire value proposition requires photos. Without photos, there is nothing to compare, nothing to analyze, nothing to score. Photos are the app.

Yet roughly a third of tracked users had zero photos. Not one photo. Not a blurry gym selfie they uploaded and forgot about. Zero.

That is like building a note-taking app where a third of your users have never written a note. The product literally cannot provide value to these people.

User photo count (91 tracked users):
0 photos: 34 users (37%)
1-5 photos: 31 users (34%)
6-20 photos: 16 users (18%)
20+ photos: 10 users (11%)

The Onboarding Step That Lost 15% of Users

Once I saw the zero-photo problem, I pulled the full onboarding funnel from PostHog. Step by step, where every user dropped.

The funnel from Welcome through Finish had roughly a 36% cumulative drop-off. But the drops were not evenly distributed. Most steps lost one or two users. One step lost twelve.

Import Photos. Step 11 of 17.

Users got through goal-setting, body stats, and pose setup. Then the app asked them to import or take a photo. And about 15% of remaining users closed the app and never came back. No error. No crash. They hit a screen that asked for something they did not have ready, and they left.

Onboarding funnel (92 users, 17 steps):
Welcome 92 → Goal 86 → Desired Outcome 83 → Obstacle 81 → Expectation 81 → Gender 81 → HealthKit 81 → Body Stats 79 → Age 78 → Pose Setup 78 → Import Photos 66 (-15.4%) → Baseline 65 → Goal Weight 65 → Notifications 61 → Account 60 → Widget 60 → Finish 59

Import Photos lost 12 users in a single step, more than all other drops combined. That is where 15.4% of remaining users silently disappeared: they were asked for photos before they had seen any value from the app.

The Features You Built vs The Features They Use

There is a painful moment in every builder's journey when the data reveals that the features you are proudest of are not the ones your users care about.

I had spent weeks on sophisticated AI analysis. Deep dive reports. Prediction models. Weekly summaries. These were the features I talked about when describing the app. They were the technical achievements I was excited to ship. And they were being used by a tiny fraction of my user base.

App retention metrics are not just about finding bugs or measuring growth. They are about discovering the gap between what you think matters and what actually matters to the people using your product.

Compare: 10x More Users Than My Flagship AI Feature

Feature usage (unique users, all time):
Compare 519 · Check-in 107 · Deep Dive 71 · Throwback 24 · Weekly Summary 10

Compare, the simplest feature, dwarfs everything else, with a 4.8x gap to the next most-used feature. 70 weekly summaries were generated but only 10 were ever viewed.

The simplest feature in the app was used by 10x more people than the most complex one.

Hundreds of Weekly Summaries Created, Most Never Viewed

The weekly summary feature automatically generated trend reports for users. The system was working perfectly from a technical perspective. Summaries were being created on schedule. The AI was analyzing progress and writing personalized insights.

There was just one problem: almost nobody was reading them.

Of the users who had summaries automatically generated, only about 15% ever viewed one. I was generating content nobody asked for, solving a problem nobody had, and feeling productive while doing it.


The Conversion Plot Twist

The data told a story I did not expect.

I had assumed the problem was conversion. Users were not seeing enough value in Pro to subscribe. The paywall needed better copy. The pricing needed adjustment.

I was wrong.

Among users who actually received their first AI score, roughly half subscribed to Pro. In the more recent cohort, it was closer to 60%. Those are strong conversion rates for an indie app.

The problem was not that users saw the value and decided it was not worth paying for. The problem was that most users who started onboarding never received a score at all. They never reached the moment where the app demonstrates its value. I was optimizing the paywall when I should have been optimizing onboarding.

Install to subscription (Mar 12-30, 2026):
Started Onboarding 342 (100%) → Completed 219 (64.0%, a 36% drop) → Imported Photos 118 (34.5% of starters, a further 46% drop) → First Score 97 (44.3% of completers, 82% of importers kept) → Subscribed 59 (60.8% of scored users)

The 36% drop during onboarding is the leaky top; the 60.8% score-to-subscribe rate is the strong bottom. The product converts well. The problem is getting users to experience it.

Building a Composite Engagement Framework

Raw percentages are useful but they did not help me prioritize. Knowing that a third of users had zero photos told me there was a problem. It did not tell me how to segment users or where to focus my limited time.

So I built a composite engagement framework using six signals:

  1. Completed onboarding
  2. Has photos
  3. Used Compare
  4. Used Check-in
  5. Used an AI feature
  6. Viewed a score breakdown

Each user gets a score from 0 to 6. This approach tracks actual behaviour patterns rather than surface-level metrics like downloads or session counts.
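For concreteness, here is a minimal sketch of that scoring in code. The struct is an illustration for this post, and the score-to-cohort cutoffs are my rough mapping onto the categories described in the next section, not the exact logic behind the dashboard:

```swift
/// The six engagement signals, one boolean per signal.
struct EngagementSignals {
    var completedOnboarding = false
    var hasPhotos = false
    var usedCompare = false
    var usedCheckIn = false
    var usedAIFeature = false
    var viewedScoreBreakdown = false

    /// Composite score: the count of signals present, 0 through 6.
    var score: Int {
        [completedOnboarding, hasPhotos, usedCompare,
         usedCheckIn, usedAIFeature, viewedScoreBreakdown]
            .filter { $0 }
            .count
    }

    /// Illustrative cutoffs for the cohorts described in the next section.
    var cohort: String {
        switch score {
        case 0, 1: return "Bounced"
        case 2:    return "Exploring"
        case 3:    return "Activated"
        case 4:    return "Engaged"
        default:   return "Power"
        }
    }
}
```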

Over 40% Did Exactly One Thing and Left

The framework revealed five distinct user categories:

User engagement cohorts (123 tracked users):
Bounced: 41.5% (51 users). 50 of the 51 did exactly one thing: opened Compare with no photos and left.
Exploring: 20.3%. Onboarded and looked around, but never imported photos.
Activated: 22.0%. Found the path: onboarding, photos, compare.
Engaged: 10.6%. Multiple features, regular usage.
Power: 5.7%. Five or six signals. Your subscribers and advocates.

62% of users are Bounced or Exploring; they never reached the core value.

Users Who Skipped Onboarding Almost Always Bounced

I had included a skip button in onboarding because I thought some users would prefer to explore on their own.

The data told a different story. Of the users who skipped onboarding entirely, 98% became bounced users. They did exactly one thing — opened Compare, saw an empty screen with no photos to compare, and left forever.

The skip button I added for user convenience was destroying retention. Twenty-one percent of all users were hitting it, and virtually none of them survived.

Only 16% Became Engaged or Power Users

Once I could see the cohorts, the roadmap rewrote itself.

I did not need more Pro features for the 16% who were already engaged. I needed to move the 62% who were bouncing or exploring into the activated tier.

The math made this painfully clear: if I fixed onboarding completion from 64% to 85%, I would get roughly a third more subscribers from the exact same install volume. No new marketing spend. No new features. Just getting more people to the value they already would have paid for.


92 Users Is Enough to Rewrite Your Roadmap

One objection I hear from other indie devs is that analytics are not useful until you have significant volume.

In my experience with early-stage apps, this is often wrong.

I had 92 users in PostHog when these patterns became clear. That was enough to catch a broken activation event, find the single onboarding step that was bleeding users, see which features people actually used, and rebuild the roadmap around the funnel.

App retention metrics do not require millions of users. They require asking the right questions and instrumenting the right events.

Finding the Biggest Leaks With Small Numbers

I cross-validated the PostHog data against Firebase GA4 and the proportions held. PostHog showed about 42% bounce rate; GA4 showed 50%. PostHog showed roughly 39% of onboarders never importing photos; GA4 showed 46%.

The small sample was directionally correct and actionable. You do not need statistical significance to notice that a third of your users are not doing the one thing your app requires.


The New Roadmap: Fixing Funnels Before Building Features

The data pointed to specific decisions, not vague intentions.

Built:

Killed:

Deprioritized:

Every single one of these decisions came from the data.


What These Fixes Should Actually Produce

Identifying the problems is one thing. Projecting the impact is what turns analysis into accountability. If the fixes do not move these numbers, then either the diagnosis was wrong or the execution was off. Either way, I will know.

Here is what I expect each change to produce and how I will measure whether it worked:

Fix | Current | Target
Onboarding completion | 64% | 80-85%
Import Photos drop-off | 15% lost | <5%
Users with zero photos | ~37% | <15%
First score activation | 44% of completers | 60%+
Bounce cohort (1 signal) | 40%+ | <25%
Weekly summary views | ~15% of generated | 40%+

The Revenue Math

The projection is straightforward. With current numbers, roughly 17% of onboarding starters eventually subscribe. If I fix onboarding completion to 85% and scoring activation to 60%, the same install volume should produce roughly 30-45% more subscribers. No new marketing spend. No new features. Just fewer people falling out of a broken funnel.

I will run the comparison over a 30-day window after each fix ships. The metric that matters most is the end-to-end conversion: onboarding_start to subscription_started. If that percentage does not move, none of the intermediate fixes mattered.
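If you want to compute that end-to-end number from a raw event export rather than a dashboard funnel, the logic is small. A sketch in Swift; the row shape is an assumption about the export format, and the two event names are the ones mentioned above:

```swift
import Foundation

/// Minimal event record; the shape I assume for a flat export of analytics events.
struct AnalyticsRow {
    let userId: String
    let event: String
    let timestamp: Date
}

/// Of the users who fired "onboarding_start" inside the cohort window,
/// what share also fired "subscription_started" within 30 days of starting?
func endToEndConversion(rows: [AnalyticsRow],
                        cohortStart: Date,
                        cohortEnd: Date) -> Double {
    let byUser = Dictionary(grouping: rows, by: \.userId)
    var starters = 0
    var converters = 0

    for (_, userRows) in byUser {
        // First onboarding_start inside the cohort window, if any.
        let starts = userRows
            .filter { $0.event == "onboarding_start"
                      && $0.timestamp >= cohortStart
                      && $0.timestamp <= cohortEnd }
            .map(\.timestamp)
        guard let firstStart = starts.min() else { continue }
        starters += 1

        // Did a subscription begin within 30 days of that start?
        let deadline = firstStart.addingTimeInterval(30 * 24 * 60 * 60)
        let converted = userRows.contains {
            $0.event == "subscription_started"
                && $0.timestamp >= firstStart
                && $0.timestamp <= deadline
        }
        if converted { converters += 1 }
    }

    return starters == 0 ? 0 : Double(converters) / Double(starters)
}
```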

The Benchmarks I Am Setting

I am committing to publishing a follow-up with actual results. Not projections. Not "we expect to see improvements." The real numbers, compared against these targets, with an honest accounting of what worked and what did not.

If I learned anything from this exercise, it is that the uncomfortable data is the useful data. The same applies to publishing outcomes. If a fix did not move the number, that is worth knowing too.


The Implementation Checklist

The pattern that emerged from this analysis was clear: complete instrumentation reveals your biggest leaks, funnel visualization shows you exactly where users drop, and event QA ensures you are not making decisions based on broken data.

If you are building an app and have not yet implemented analytics with tested event tracking, you are guessing which screen is losing users.

Minimum Viable Analytics: The Events You Actually Need

Start with a small set of core events before adding anything else: one event per onboarding step, an onboarding completion event, your core value moment (for GainFrame, first_score_received), a usage event for each core feature, and subscription start.

For each event, define it clearly before implementation. What triggers it? What properties does it include? On which code paths should it fire? Document this in a simple spreadsheet so you can validate against it later.
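One lightweight way to keep those definitions next to the code is a single enum that owns every event name, with the trigger, properties, and code paths noted in doc comments. A sketch; other than the names already mentioned in this post, the event and property names are placeholders:

```swift
/// One place to define every analytics event before any of them are implemented.
/// The doc comments stand in for the "spreadsheet": trigger, properties, code paths.
enum AnalyticsEvent: String {
    /// Trigger: user lands on any onboarding screen.
    /// Properties: step_name, step_index. Paths: onboarding flow only.
    case onboardingStepViewed = "onboarding_step_viewed"

    /// Trigger: user finishes the last onboarding screen.
    case onboardingCompleted = "onboarding_completed"

    /// Trigger: the first AI physique analysis is delivered.
    /// Paths: every flow that can produce a score, not just the main one.
    case firstScoreReceived = "first_score_received"

    /// Trigger: a core feature screen is opened with usable content.
    /// Properties: feature ("compare", "check_in", "deep_dive", ...).
    case featureUsed = "feature_used"

    /// Trigger: a Pro subscription is started.
    case subscriptionStarted = "subscription_started"
}
```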

Instrument Before You Build

Add event tracking for every step of your onboarding funnel and every core feature interaction before you ship. Not after. The earlier you have data, the earlier you can adjust direction based on verified funnel drop-offs.
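In practice that can be one small helper called from each onboarding screen as it appears. Another sketch assuming PostHog's iOS SDK; the step names and property keys are illustrative:

```swift
import PostHog  // assuming posthog-ios v3; swap in your analytics SDK

/// Fire one event per onboarding step as the user reaches it.
func trackOnboardingStep(_ name: String, index: Int, total: Int) {
    PostHogSDK.shared.capture("onboarding_step_viewed", properties: [
        "step_name": name,   // e.g. "import_photos"
        "step_index": index, // 1-based position in the flow
        "step_count": total  // so funnels survive reordering the flow
    ])
}

// In each onboarding screen's onAppear / viewDidAppear:
// trackOnboardingStep("import_photos", index: 11, total: 17)
```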

Your most important metric is not downloads. It is the percentage of users who reach your core value moment.

For GainFrame, that is receiving a first AI score. For your app, it is whatever moment makes a user think "I need this". Measure that percentage. It will likely be lower than you expect.

Test Events Like Code and Build Funnel Visualizations First

My activation metric was wrong for weeks because I never verified the event was firing on all code paths. Treat analytics events like any other code that needs testing. Verify they fire when they should. Verify they do not fire when they should not.
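One cheap way to make events testable is to put the analytics SDK behind a small protocol and assert against a spy in unit tests. A sketch using XCTest; ScoringService is a hypothetical stand-in for whatever code actually produces a score:

```swift
import XCTest

/// Abstract the analytics sink so production code can use PostHog
/// and tests can observe exactly what fires.
protocol AnalyticsSink {
    func capture(_ event: String, properties: [String: Any])
}

final class AnalyticsSpy: AnalyticsSink {
    private(set) var captured: [String] = []
    func capture(_ event: String, properties: [String: Any]) {
        captured.append(event)
    }
}

/// Hypothetical stand-in for the real scoring service; yours would run
/// the actual scoring flow for a given entry point.
struct ScoringService {
    let analytics: any AnalyticsSink
    func score(from source: String) {
        // ... real scoring work ...
        analytics.capture("first_score_received", properties: ["source": source])
    }
}

final class ActivationEventTests: XCTestCase {
    func testFirstScoreFiresFromEveryScoringPath() {
        // Enumerate every code path that can produce a first score.
        let paths = ["onboarding_baseline", "compare", "check_in", "deep_dive"]
        for path in paths {
            let spy = AnalyticsSpy()
            ScoringService(analytics: spy).score(from: path)
            XCTAssertTrue(spy.captured.contains("first_score_received"),
                          "Activation event did not fire for path: \(path)")
        }
    }
}
```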

Build a funnel visualization for your onboarding flow and your core feature adoption path. Look at it weekly. The step where users silently disappear is almost never the step you would guess.


Conclusion

The features you are proudest of are probably not the ones your users care about most. And that is fine. Your job is to serve the user, not to ship the most technically impressive thing you can build.

App retention metrics are not about dashboards or metrics for their own sake. They are about closing the gap between what you think is happening in your app and what is actually happening.

Start with the basics:

  1. Instrument your onboarding funnel
  2. Track your core feature usage
  3. Test your events like you test your code

You do not need millions of users to make decisions based on verified funnel drop-offs. You need 50 users and the willingness to accept that your assumptions might be wrong.

This week, pick one funnel to fix. Instrument your onboarding flow with one event per step if you have not already. Validate that your core value moment event fires on every code path. Build one funnel visualization and review where users actually drop.

The single metric to watch: what percentage of users who start onboarding actually reach the moment where your app delivers its promise?

That number will tell you whether to keep building features or fix the funnel first.

In my case, the answer was obvious. Funnels before features. Every time.


Coming Next: Running Ads Without Attribution

Fixing your funnel is step one. Step two is figuring out where your best users actually come from — and that problem is harder than it sounds.

In the next post, I will share what happened when I started running paid ads across TikTok, Instagram, and Apple Search Ads without proper attribution. The platform dashboards told me one story. The cohort data told a completely different one. Clicks and installs are vanity metrics when you cannot see which ad platform produces users who actually activate, retain, and subscribe.

The difference between a $2 install that churns in 24 hours and a $6 install that converts to a $50/year subscriber is invisible in every default ad dashboard. I will break down how I built cohort-level attribution to see which channels produce valuable users versus which ones just inflate install counts. If you are spending money on ads without this, you are spending blind.