Most early-stage founders have a roadmap full of features they want to build. It feels productive; you’re shipping, you’re making progress.

And while I love fast shipping over perfection, when that speed is focused purely on features, it can be dangerous — especially when you start planning months ahead. You’re assuming you know what users need before validating it, and you’re also committing to features and improvements in a space that is still highly volatile.

Then the roadmap becomes a to-do list of guesses; outputs rather than outcomes. You ship features with great excitement, but you don’t necessarily learn whether they actually matter. Or, at the very least, learning happens much more slowly than it should. 

I’ve seen new subscription apps spend months building features that nobody actually uses, because they never paused to ask: what do we need to learn right now?

For early-stage subscription apps, roadmaps shouldn’t be about what to build, as you really can’t reliably predict more than a quarter ahead anyway.

Instead, they should be about what to learn.

That’s exactly what we’ll focus on: building a learning roadmap.

A learning roadmap is a structured plan built around questions and hypotheses rather than features — designed to validate assumptions before committing to a build.

Focus on validating over shipping features

There’s a difference between a shipping mindset and a validating mindset.

  • Shipping mindset: What features can we release this quarter?
  • Validating mindset: What’s the most important question we need to answer right now?

Pre-product-market fit, you’re not trying to build a complete app that does everything. Instead, you’re trying to figure out whether you’re building the right thing.

The goal is to learn what actually matters to your early users, and what drives them to pay.

The beauty of switching to a validating mindset is that it often doesn’t require building a full feature to generate insight. You focus on the smallest possible test that lets you learn.

How Robinhood validated pre-launch

Before building their trading app, Robinhood launched a waitlist landing page. The value prop was simple: ‘Commission-free trading.’ The main goal was a basic email sign-up, plus a referral loop that encouraged people to move up the list by sharing.

The result was over 1 million sign-ups before the app even existed.

But sign-ups alone weren’t the signal they focused on. They tracked several behaviors to understand the quality of that early audience:

  • Referral behavior: Were people sharing unprompted? (Yes, the viral coefficient was strong)
  • Email engagement: Did waitlist users open updates and ask about launch timing?
  • Willingness to act: When early access was offered to top referrers, did people actually work to earn it?
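As a rough illustration of what a “strong viral coefficient” means: the k-factor is commonly calculated as invites sent per user multiplied by the invite conversion rate. The numbers below are hypothetical — Robinhood’s actual figures aren’t given here.

```python
def viral_coefficient(invites_per_user: float, invite_conversion_rate: float) -> float:
    """k-factor: how many new sign-ups each existing sign-up generates."""
    return invites_per_user * invite_conversion_rate

# Hypothetical example: each waitlist member invites 3 friends, 40% convert.
k = viral_coefficient(invites_per_user=3.0, invite_conversion_rate=0.4)
print(f"k = {k:.2f}")  # k > 1 means the waitlist grows on its own
```

If k stays above 1, each cohort of sign-ups recruits a larger one — the self-sustaining growth Robinhood’s referral loop was designed to test for.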

When you do eventually build a feature, it should usually be because something has been validated, and even then, you’re building it to answer a question, not just to add another piece to the product.

What does a learning roadmap look like?

A learning roadmap is structured around questions rather than features.

For each question, you should be clear on what you think the answer might be, how you’ll test it, and what would indicate whether you’re right or wrong.

The structure looks like this: Strategic Goal → Question → Hypothesis → Test → Success Criteria → Next Step.

It’s important that every question you try to answer is directly tied back to a broader strategic goal.

This forces you to be specific about what you’re trying to learn rather than just shipping and hoping.

For example, imagine you aren’t sure whether users understand the app’s value before they reach the paywall.

  • Strategic Goal: Improve the download-to-trial-started conversion rate.
  • Question: Do users understand the value before they hit the paywall?
  • Hypothesis: Users will better understand the app’s value if we show more visualizations of what the app looks like, directly linked to the benefits we’re measuring.
  • Test: Add a screenshot of the app to the benefits section instead of illustrations.
  • Success Criteria: A significant increase in the conversion rate at the first paywall.
  • Next Step: If confirmed, explore other ways to communicate value more strongly. If disproven, test alternatives; for example, adding a short video before onboarding that shows the outcomes users can achieve.
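If it helps to make the template concrete, an entry like the one above can be captured as a small record. This is just one possible sketch — the field names simply mirror the template, and the values come from the paywall example:

```python
from dataclasses import dataclass

@dataclass
class LearningRoadmapEntry:
    # Fields mirror the template:
    # Strategic Goal -> Question -> Hypothesis -> Test -> Success Criteria -> Next Step
    strategic_goal: str
    question: str
    hypothesis: str
    test: str
    success_criteria: str
    next_step: str

paywall_entry = LearningRoadmapEntry(
    strategic_goal="Improve the download-to-trial-started conversion rate",
    question="Do users understand the value before they hit the paywall?",
    hypothesis="More visualizations of the app, tied to benefits, improve understanding",
    test="Add a screenshot of the app to the benefits section instead of illustrations",
    success_criteria="Significant increase in conversion at the first paywall",
    next_step="If confirmed, communicate value more strongly; if not, test alternatives",
)
```

Writing entries down in a fixed shape like this makes it obvious when one is missing a hypothesis or success criteria before the test starts.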

Each question you’re trying to answer is essentially an opportunity to create value for your users.

The key insight, drawn from the work of Teresa Torres on continuous discovery, is to think about opportunities the way your customers would describe them, rather than in internal product language.

For example, you can reframe the question from the user’s perspective: “I don’t understand what this app does before I have to pay.”

Customer-centric framing like this helps you consider multiple possible solutions, and that’s why it’s important not to jump to the first test idea. 

If a test, such as “adding a screenshot instead of illustrations,” fails, it doesn’t necessarily mean the underlying question is answered. You may need to explore several different ways to test the same hypothesis before reaching clarity.

The way I approach this is by noting down a few things first:

  • The quantitative data behind the question or hypothesis, for example, we see a very low trial-start rate while onboarding completion is already high.
  • The qualitative data that helps us understand the behavior further, for example, during user testing, we observed that people who clicked away from the paywall were still trying to figure out what the app actually does.
  • Relevant benchmark data: according to SOSA 2026, 55% of all 3-day trial cancellations happen on Day 0, which makes the question of whether users understand your value before the paywall one of the most critical assumptions to test early.

From there, I work out multiple possible test ideas. For instance, in this example, you could also try:

  1. Adding a short video to the onboarding that shows the app in action
  2. Testing a feature carousel at the start
  3. Showing a simplified “what you can do in the app” overview during onboarding

You can then return to the original question and evaluate which approach is most likely to help you gain clarity, given the insights you’ve gathered.

The three buckets: Now, Next, Later

If you’re wondering what you should be building 2–3 months from now, don’t worry: you really don’t need to know.

I’ve always said that a quarter in a startup can feel like a year in a corporation. Honestly, a month in an early-stage startup can feel like a quarter in a later-stage one. Things move fast, and thank goodness for expensive skincare and blonde hair to help hide the grey hairs I’m convinced startups are responsible for.

While your vision and strategy should be long-term, your roadmap can stay short-term. Trying to plan much beyond a quarter usually just leads to endless rework.

I like structuring a learning roadmap into three time horizons: Now, Next, and Later. This framework, popularized by Janna Bastow, gives you permission not to do everything at once.

Your buckets can look like this:

  • Now (1–2 weeks): The most critical question you need to answer right now. Just one question. Maximum focus.
  • Next (2–4 weeks): The questions that are likely to come next, depending on what you learn from what you’re testing now. These are tentative rather than fully committed — for example: “If this happens, then we will do X”, based on the signals you’re seeing.
  • Later (backlog): Ideas and questions you want to explore eventually, but aren’t scheduling yet. These can and will change as you learn.
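A minimal sketch of what the three buckets might look like in practice — the questions are illustrative, and the check simply enforces the single-question “now” rule described above:

```python
# Illustrative contents only; the structure is the point.
roadmap = {
    "now": [
        "Do users understand the value before they hit the paywall?",  # exactly one
    ],
    "next": [
        "If paywall comprehension improves, does trial-to-paid improve too?",
    ],
    "later": [
        "Would a community feature increase retention?",
        "Is annual pricing viable for this audience?",
    ],
}

def validate_focus(roadmap: dict) -> None:
    # 'Maximum focus' rule: only one question in the 'now' bucket.
    assert len(roadmap["now"]) == 1, "The 'now' bucket should hold exactly one question"

validate_focus(roadmap)
```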

This approach keeps you focused while still allowing you to capture ideas without getting distracted.

If you’re anything like me, you probably generate a million ideas during the early phase. While long-term planning is hard, ideation usually isn’t, so having a backlog helps you park ideas and return to them later.

And honestly, a lot of ideas won’t survive that process (I have so many post-it notes with article ideas that, days later, make me wonder what I was thinking). So make a habit of tidying the backlog regularly.

How to prioritize what to learn

You can’t test everything at once; all those ideas and questions you want to explore will need to be prioritized ruthlessly.

This is where assumptions come in. Start by testing the assumptions that would undermine your strategy if they turned out to be wrong.

I usually think about three types of assumptions:

  1. Problem assumptions. Do users actually have this problem and care enough to change behavior?
  2. Value assumptions. Does our solution genuinely help in a way that users recognize?
  3. Willingness to pay assumptions. Will enough users pay enough at this price point for the product to be sustainable?

It’s very simplified, but as a new subscription app, these are essentially the questions you’re trying to answer pre-product-market fit.

My suggestion is to test in that order. If the problem isn’t real, there’s no point in testing around value assumptions. If the solution doesn’t work, there’s no point in testing pricing.

From there, group related questions together; you’ll often find that many of them are connected. For example, returning to the paywall scenario, the question of “understanding value before the paywall” might break down into sub-questions such as:

  • Do they see what the app looks like?
  • Do they understand the outcomes they can achieve?
  • Do they know how long it will take to see results?

Grouping questions this way helps you identify the parent question to prioritize, and also reveals which sub-questions are likely to be answered together.
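To illustrate, the paywall example’s parent question and sub-questions could be grouped like this — the nested structure is what matters, not any particular format:

```python
# Sub-questions grouped under a parent question (paywall example from above).
question_tree = {
    "Do users understand the value before they hit the paywall?": [
        "Do they see what the app looks like?",
        "Do they understand the outcomes they can achieve?",
        "Do they know how long it will take to see results?",
    ],
}

for parent, subs in question_tree.items():
    # Sub-questions under one parent are often answered by the same test.
    print(f"{parent} ({len(subs)} sub-questions)")
```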

For example, if you’re building a sleep app and you’re unsure whether your audience struggles more with falling asleep or with staying asleep, that’s a problem assumption you can validate through user research.

Smaller tests, faster learning

Hopefully, you’re convinced by now that you don’t need to build a full feature to learn — and that, as a result, a traditional feature roadmap is somewhat limiting. The goal, instead, is to test assumptions with the least possible effort.

You’ve already seen how the waitlist approach worked for Robinhood, but there are many other options, too:

  • Prototypes. It’s never been easier to mock up prototypes using AI tools like Lovable.
  • Painted door tests. Here, you basically fake it till you make it, literally. You could show a fake feature button that isn’t yet functional.
  • Manual versions of features. If you eventually want to automate something, start by testing the human version first. For instance, you might want to build automated skin analysis and recommendations, but initially test having someone manually provide the advice.
  • Start with the first part only. For example, if you’re building a community feedback loop, you might begin by testing whether a simple “like” interaction is used, even if the downstream functionality isn’t built yet. In one case, the like action didn’t even trigger any visible response at the start, but it helped validate that users were willing to engage with that behavior.

If you want to go even lighter, you can start with research, such as browsing forums to validate the problem space or conducting user interviews before creating any mockups. For early-stage startups, user research is almost always a good investment of time.

What happens when you learn

Every test will lead to one of three outcomes:

  1. Confidence goes up
  2. Confidence goes down
  3. The result is inconclusive

I’ll be honest, that last one kind of sucks, but I promise it can still teach you something. Each outcome gives you clarity on what to do next:

  1. If confidence goes up, decide whether you need further validation, whether there’s more to learn, or whether you can move on to the next question.
  2. If confidence goes down, decide whether the hypothesis needs revising, whether a different test would answer the question better, or whether it’s time to drop this direction.
  3. If the result is inconclusive, it could mean several things: perhaps the change didn’t have the expected impact, there wasn’t enough data, or the test itself wasn’t the right way to answer the question. You can then decide the best next step based on the cause.
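The three outcomes above can be sketched as a simple decision helper. The mapping is a paraphrase of the guidance in this section, not a fixed rule:

```python
def next_step(outcome: str) -> str:
    """Map a test outcome to the kind of decision it calls for."""
    if outcome == "confidence_up":
        return "decide: validate further, dig deeper, or move to the next question"
    if outcome == "confidence_down":
        return "decide: revise the hypothesis, test differently, or drop the direction"
    if outcome == "inconclusive":
        return "diagnose: weak effect, too little data, or wrong test design"
    raise ValueError(f"unknown outcome: {outcome}")
```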

The goal isn’t to be right. In early-stage work, you will be wrong quite often, and that’s completely normal.

The real objective is to learn quickly enough that being wrong doesn’t become costly.

How to know you’re making progress

A learning roadmap can sometimes feel slower because you’re not shipping visible features. This is where trust can start to waver, for founders and teams alike.

Progress simply looks different, and part of adopting a validating mindset is redefining what progress means.

Progress can be seen in several ways:

  • You can articulate what you now know that you didn’t know before
  • You’ve eliminated solutions that don’t work (negative results are progress)
  • Your questions are getting more specific and focused
  • You’re converging on a smaller set of high-confidence opportunities
  • Your success criteria are becoming more measurable over time

Regularly discussing in sprint meetings what you’ve learned, along with small wins, helps the team see progress that might not yet show up in your dashboards.

Common traps with planning your roadmap

Across the startups I’ve worked with, the same roadmap mistakes keep repeating. They’re rarely about effort or intent; they’re about how decisions are made. 

1. Building before validating

This often shows up in disguise.

Teams will sometimes say they aren’t following a feature roadmap, yet a specific feature or update suddenly starts feeling inevitable. Everyone becomes convinced it’s important. When you ask what that belief is based on, the answer is usually something like, “we just know” or “we have to do this.”

That’s still building before validating.

Confidence is not the same as evidence. Even strong intuition needs to be grounded in research, data, or real user signals. The more excited you are about an idea, the more disciplined you need to be about pressure-testing it.

2. Too many priorities

If everything is a priority, nothing is.

A strategy is not ten parallel projects; it’s a clear commitment to one or two things that matter right now. Your roadmap should reflect that focus.

Be ruthless with your “now” bucket. If you’re trying to answer several big questions at once, you’ll likely make shallow progress on all of them and meaningful progress on none.

3. Vague success criteria

“We’ll see if users like it” is not a success criterion.

“It’s slightly better” isn’t either.

Early on, statistical significance isn’t always possible, and that’s fine. What matters is being explicit about what success looks like before you start.

Ask yourself:

  • What signal would make you confident enough to continue?
  • What outcome would clearly tell you to stop?

Also, think through secondary metrics in advance. For example:

  • If the main metric doesn’t improve but another one does, what would you do?
  • If the main metric improves but something else worsens, is that acceptable?

Planning these scenarios early prevents a lot of post-hoc rationalization.
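One lightweight way to pre-register criteria is to write them down as data before the test starts. The metric names and thresholds below are hypothetical, purely to show the shape:

```python
# Pre-registered before the test launches; all numbers are hypothetical.
experiment_plan = {
    "name": "screenshot-vs-illustrations",
    "primary_metric": "trial_start_rate",
    "continue_if": "trial_start_rate improves by >= 2 percentage points",
    "stop_if": "trial_start_rate is flat or down after the full test period",
    "secondary_metrics": {
        "onboarding_completion": "must not drop by more than 1 point",
        "day0_cancellation_rate": "a rise here would offset a trial-start win",
    },
}

def evaluate(baseline: float, variant: float, min_lift_pp: float = 2.0) -> str:
    """Hypothetical decision rule for the primary metric (rates in percent)."""
    lift = variant - baseline
    if lift >= min_lift_pp:
        return "continue"
    if lift <= 0:
        return "stop"
    return "inconclusive"

print(evaluate(baseline=8.0, variant=11.0))  # a 3pp lift clears the 2pp bar
```

Because the rule is written before the results come in, there’s far less room for the post-hoc rationalization the next section warns about.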

4. Ignoring negative results

It’s tempting to explain away data that doesn’t confirm what you want to believe. As the saying (often attributed to the economist Ronald Coase) goes, “If you torture the data enough, it will always confess.” That idea has stuck with me because it’s painfully true.

There will almost always be a way to frame results as “not that bad” or to cherry-pick supporting slices of data. Clear success criteria help protect against this, but mindset matters too.

Negative results aren’t something to be ashamed of. Most things won’t work; that’s completely normal. Treat failed experiments as learning rather than something to justify away. That’s what actually moves teams forward.

5. Never moving on

The final trap is getting stuck in testing mode.

Testing can feel safe because there’s always one more variant to try or one more week of data to collect. But at some point, you need enough signal to make a decision and move forward.

Roadmaps are not about infinite validation — they’re about building confidence and then committing.

Be honest with yourself about whether more testing will actually change your decision, or whether it’s just a way to avoid making one. 

Some signs you’re ready to move on:

  • You’ve tested across 2–3 cohorts, and the pattern is consistent
  • The signal is strong enough to justify allocating resources (not just “slightly positive”)
  • Further testing is unlikely to change your decision
  • You’re delaying commitment rather than genuinely learning

Use this list of common mistakes to keep yourself in check; you can even use it as part of a sprint retrospective, going through the following questions as a team:

  1. Were there any cases where we built before validating?
  2. What are our top priorities? Do we need to narrow them down?
  3. Are the success criteria of all the experiments clear enough?
  4. What were the ‘negative’ results this sprint? What did we learn from them?
  5. Is it time to move on from any focus areas?

Stay the course, refine, or pivot?

In Eric Ries’s Lean Startup framework, there’s a recommendation to hold regular (often monthly) meetings to decide whether to pivot or persevere: essentially, whether to change direction or stay the course.

I agree with the methodology, but the binary framing can feel too black-and-white. The decision is usually not as simple as either killing an idea or continuing unchanged. Sometimes something is working partially, and the pressure to choose between pivoting and persevering can push teams toward perseverance because pivoting feels scarier. That’s where the sunk cost fallacy starts creeping in: you’ve invested so much that it’s hard to step away.

Instead, I prefer thinking about it in three options:

  1. Stay the course (persevere). When the core hypothesis is confirmed, focus on execution and iteration.
  2. Refine. The direction is generally right, but some details are wrong. This isn’t about small optimization; it’s about making a meaningful adjustment, such as changing the target audience, refining positioning, or altering features.
  3. Pivot (or kill). The core hypothesis is disproven, meaning you need to fundamentally rethink the problem, the user, or the solution.

Most launches don’t need a pivot; they need refinement. A true pivot should only happen when evidence clearly contradicts your core assumption about the problem or user, such as when target users don’t perceive value or when there are no meaningful signals of monetization.

A well-known example of a major pivot is that of Instagram. Originally launched as Burbn, the product started as a check-in app with many features.

After launch, the data showed that:

  • Most features were being ignored
  • Photo sharing had unusually strong engagement

The founders stripped away everything except photos because they realized their core hypothesis was wrong: users weren’t primarily looking for a check-in app or the broader feature set. Instead, they doubled down on the signal with the strongest user engagement and pivoted around that.

When can you move on from a learning roadmap?

A learning-style roadmap isn’t meant to last forever.

At some point, it can start to lose its usefulness. Instead of providing clarity and direction, it may begin to feel like a long list of unresolved questions. As teams scale and the number of initiatives grows, a pure learning roadmap can sometimes create more noise than focus.

This is usually a sign that you’re getting closer to product–market fit.

As confidence increases, the balance naturally shifts. Post-product-market fit, you tend to have a clearer understanding of who your users are, what they want, and which problems truly matter. At that stage, a more traditional, feature-oriented roadmap (though still focused on outcomes) often becomes more practical.

That doesn’t mean learning stops.

Markets evolve. User behavior changes. Competitors improve. The difference is that the focus moves from mostly learning with some building to mostly building with some learning.

You’re no longer questioning everything, but you still leave room to validate key assumptions and challenge strategic direction when necessary.

In that sense, a learning roadmap is what helps you reach the point where a feature roadmap actually starts working.

Building roadmaps around questions, not outputs

So remember, in early-stage subscription apps, strong roadmaps are built around questions — not features, not incremental improvement points, and definitely not growth for growth’s sake.

A simple structure works well:

  • What are we doing now?
  • What are we doing next?
  • What comes later?

Your backlog belongs firmly in the later bucket.

The real work is being ruthless about identifying the assumptions that could undermine your idea if they turn out to be wrong, and prioritizing those first.

Start by testing small and moving quickly. Only then should you expand and build with greater confidence.

This approach can sometimes feel slower because you’re shipping less. In practice, it’s usually faster. Teams that default to constant building often spend months delivering features that nobody actually wants. Starting with learning helps avoid the waste of building things just for the sake of building.

If you want to go deeper into defining your strategy and identifying the right questions to answer, you can explore my full course on building an app that people will pay for.