The complete beginner’s guide to app user testing
Everything you need to kickstart app user testing—tips, tools, and techniques.
When it comes to user research, it’s easy to hide behind surveys and, if you’re feeling daring, some heatmaps and user recordings. However, to understand both potential and existing customers, no technique is more impactful than user testing.
I’ve run hundreds of user tests over the last ten years and have never left a conversation without new insights and ideas. Whether it was with a superfan or a lost customer, I’ve come away with a bunch of ways to improve a brand.
Yet, many people don’t run user tests, fearing it’s either too expensive or too time-consuming.
In this blog, I’m going to show you this doesn’t have to be the case. I’ll help you to understand some of the nuances of testing for apps versus websites, and I’ll take you through my own experiences as well as top tips from leading experts.
Once you understand the differences, we will talk about tools (spoiler alert: you might not need much!), as well as a step-by-step guide to user testing. Let’s dive in.
What makes user testing apps different?
Before we get into what’s different, let’s start with what’s the same. The foundations are similar: you must recruit and speak to relevant users and have a structured approach, from task to documentation.
The basic process is the same, but you need to consider the nuances, especially the following five points.
1. Platform-specific considerations
When testing a website, I’ll usually opt for mobile first but try to get a few desktop cases and, when relevant, tablet.
With a mobile app, depending on the platforms you’re on, mobile isn’t just mobile: it’s also iOS and Android. This is made more complex when you have a web app as well, or if users frequently use your app on a tablet.
The impact? If you…
- Support a wider range of platforms
- Also have a web app
- Notice a wide range of devices in your analytics (e.g., mobile and tablet, or different device types)
…you’ll need to increase the number of users you’re testing. I tend to start with 8-10 users, but with apps, this may increase to 30 depending on how much variation we need to account for.
2. Behavior can differ
Don’t assume that behavior will be the same as a web version with similar features. Shilpa Manikanteswaran, an Insight Analyst and Personalization Specialist at Decathlon (a sports retail company), saw a significant difference in the number of targetable users, with web offering the best quantity and quality of users.
For Decathlon, even though customers’ knowledge of and trust in the brand was high on the app, the consent rate was still low compared to web, both for opting in to tracking and for personalized notifications.
What does this mean in practice? Decathlon had to understand each target audience separately and customize its approach for both audiences. So if you have a website/web app, don’t just assume you can directly translate learnings from there to your app; you’ll still need to user test and research it separately.
3. Testing remotely can be more challenging
This comes down to two reasons. The first is that not everyone finds it easy to share their screen on mobile, especially older demographics.
“It’s key to get the app in peoples’ hands. Many people are not that tech-literate, so trying to screenshare can be tricky when testing asynchronously. If you’re super low on resources, screen-sharing app mockups can work. However, the best tests are watching people interact with their phones in person, in real life. Try cafes, the train station, coworking spaces – anywhere you can find some testers.” – Rosie Hoggmascall, Product Growth Consultant & author of the newsletter Growth Dives
Secondly, and just as importantly, is context. As you’ll see in a later example, people rarely use your app relaxed with no distractions. Apps are commonly used on the go and in distracting situations, so testing in person, as Rosie suggests, also helps to create a more realistic scenario. In-person testing can be scary, so Rosie suggests the following to get comfortable with in-person testing:
“My biggest learning is little and often. Try to build your user research muscle over time. At a family dinner? Get people to try apps in front of you. During Christmas? See how people interact with their phones. Building this muscle over time can help you make better decisions when designing features.”
4. Apps need to be installed and updated
With an app, there is a pre-step that you don’t find in the majority of web user testing: installation. If users don’t have the app, they will need to download and install it; existing users may need to update it.
This means if you are user-testing the first part of the journey, you will require additional time or clear instructions to install or update the app pre-testing. Usually, I account for 25-30 minutes to conduct user testing. This is shorter than ideal, but it makes a big difference in the uptake rate. If you additionally need to go through the install and sign-up journey, I’d recommend aiming for 40-45 minutes.
5. Push / in-app notifications
While web push notifications do exist, they are less commonly used. Notifications add an extra layer to app testing: it can be interesting to see them arrive in real time, but they are more challenging to control. It’s worth having a separate section in your documentation to note down any pushes testers receive and how they interact with them.
App user testing tooling
I’m a massive believer in guerrilla user testing; just getting out there with no additional tools necessary and keeping it very simple.
I once did user testing for an app startup called VidiVet. The concept is simple: when something is wrong with your pet (their main focus is dog owners), you leave a voice message or video for a vet and receive personalized advice within minutes.
Now, as a dog owner, I can assure you that your dog never gets sick or eats something strange at a convenient moment. It’s usually when you are rushing to squeeze in a quick walk before work or when you are away on holiday.
So what did I do? I took my dog for a walk and approached random dog owners in the park. In return for a free year membership, would they be willing to answer a few questions and test the app?
I had zero tooling beyond using a standard iPhone voice recorder to document their responses. They’d take out their phone and start installing and trying out the app, while my dog, Loki, and theirs usually played and caused havoc around us.
It was perfect, as this realistically mimicked the situation in which they’d usually download it and all the distractions that would be taking place around them.
All this to say, a realistic context trumps fancy tooling any day. Don’t get me wrong, tooling can be helpful, but it can only support a few specific challenges.
1. Recruiting users
If I do use a tool, it’s Respondent, especially if I have a very niche target audience. I usually rely on it as it has a huge pool of users worldwide, a quick response rate, and a professional audience. With the recruitment costs and rewards, this usually works out to about $60 – $80 per user. Alternative options are Userfeel or Userbrain for unmoderated user testing.
2. Scheduling the sessions
If you use a tool like Respondent, scheduling is often built in. If not, I recommend a tool like Calendly to make it super easy for users to schedule a call. You don’t want to risk losing participants before they’ve even started through a complicated system or lack of availability.
3. Building prototypes
If your app isn’t ready to be tested or you want to test a new setup, tooling can also be helpful in building prototypes. One of the most common tools is Figma, but if you lack design resources, Rosie Hoggmascall has another helpful recommendation:
“One of my favorite tools is Marvel App – a super easy prototyping tool that even non-designers can use. Figma prototypes can be tricky and take a while to get going. Marvel is a lot easier to create something to test with users in less than half a day.”
4. Recording user testing sessions
Tooling is also helpful when it comes to recording sessions. This is especially true when you are user testing alone or want to share the findings with others afterward.
I would use Zoom or Google Meet, as both allow for recording (note that you need a premium account to record).
An AI notetaker that records, like Fireflies, can also be helpful, as you not only have a recording but also notes to help you review the session’s findings.
If you conduct user testing more frequently, investing in a more premium tool like Lookback can be worthwhile. It allows for additional note-taking, extra viewers, and the organization of findings.
Please note you must consider GDPR or other requirements for storing or sharing data.
I’ve focused on the basics; you can get even more advanced with eye-tracking tools like Tobii, but for 99% of organizations, that won’t be necessary.
Step-by-step guide to app user testing
Step 1: Determine your goal
Before you rush off to recruit users, you must determine what you hope to learn.
I recommend using quantitative data to determine where to make the biggest impact. Let’s say you see a vast drop off during the first two steps of onboarding — your goal may be to understand what’s driving it.
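Finding where to focus can be as simple as comparing the number of users who reach each step. Here's a minimal sketch of that funnel arithmetic, using hypothetical event counts (the numbers and function name are illustrative, not from any specific analytics tool):

```python
# Minimal funnel analysis sketch: find the biggest drop-off between
# onboarding steps from raw event counts (hypothetical numbers).
def drop_off_rates(step_counts):
    """Return the fractional drop-off between each consecutive step."""
    rates = []
    for prev, curr in zip(step_counts, step_counts[1:]):
        rates.append(round(1 - curr / prev, 2))
    return rates

onboarding = [1000, 620, 310, 280]  # users reaching each onboarding step
rates = drop_off_rates(onboarding)
print(rates)  # drop-off after steps 1, 2, 3 → [0.38, 0.5, 0.1]
print("Worst drop-off after step", rates.index(max(rates)) + 1)
```

In this example, half the remaining users disappear between steps 2 and 3, so that transition would be the natural focus of your user-testing goal.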
From there, you’ll find it helpful to write out your assumptions. You don’t want to focus too heavily on them and get caught up in a confirmation bias, but it can help you formulate the right task and questions to learn more.
Then, you can define the task itself. The number one rule is to tell them what to do but not how to do it.
Step 2: Recruit users
Once you have prepared your task and a script of a few questions you want to ask, you can start recruiting. We’ve already talked about using tools to recruit users. The other options are:
- Contacting a specific segment of your email base
- Asking friends for acquaintances who match the target audience
- Going out and simply asking people
- Using online groups or communities of relevant users (just be conscious of community rules)
I screen users upfront with a user testing survey, especially for non-customer user testing. It’s critical that you avoid bias.
For example, let’s imagine that you are testing a workout app focused on women who want to build strength.
A poor question would be:
Would you like to improve your strength?
- Yes
- No
A better question would be:
What is your most important fitness goal?
- Improve endurance
- Lose weight
- Get stronger
- Improve flexibility
- Feel better
- Other ____
A screening question specific to apps is asking upfront which device and operating system they use, to ensure you get a good, relevant mix.
A final part of recruitment is to offer a reward for participants’ time. It doesn’t have to be monetary, but some compensation goes a long way. I usually aim for at least £30/$40 for 30 minutes. When speaking to existing customers, I tend to avoid offering only a complimentary subscription to the app, as that entices only superfans. Instead, I’ll often offer a gift card, e.g., for Amazon or another relevant website that I know most customers will use. This is also an option for non-customers if you aren’t using a platform to recruit them (most have built-in monetary rewards).
One final tip: Send reminders to your booked sessions! I use Calendly and automate it to send a reminder one day before the session. Calendly also offers a way to screen users if you aren’t using a specific tool to recruit users.
Step 3: Prepare your user testing
There are a few other aspects you want to consider beforehand:
- Do you want to record the session, and with which tool?
- Will anyone join the call to take notes?
- What is your process for no-shows? Personally, I message after five minutes to check in, and after fifteen minutes to ask if we should reschedule.
- Where and how will you document your findings?
When it comes to documenting findings, I recommend a rainbow sheet. These are great for creating a visual overview of your notes and identifying patterns across user tests.
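A rainbow sheet is essentially a grid: one row per observation, one column per participant, with cells marked when a tester exhibited that behavior. If you prefer working programmatically, the same pattern-spotting can be sketched in a few lines (the observations and tester IDs here are hypothetical):

```python
# Sketch of a "rainbow sheet" as a grid: each observation maps to the set
# of participants who exhibited it (hypothetical data for illustration).
observations = {
    "Missed the skip button":        {"P1", "P3", "P4", "P5"},
    "Confused by permission prompt": {"P2", "P3"},
    "Scrolled past pricing":         {"P1", "P2", "P4", "P5", "P6"},
}

# Surface patterns: observations shared by at least half of six testers.
threshold = 3
patterns = {obs: len(who) for obs, who in observations.items() if len(who) >= threshold}
print(patterns)
```

Anything crossing your threshold becomes a candidate for the testing backlog; one-off observations stay on the sheet in case later rounds confirm them.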
Step 4: Time to have some conversations
The start of a user test is purely about getting people comfortable. I start with a small introduction that explains who I am and clarifies the session’s focus.
I also start with some Jobs to Be Done questions to understand the user better, but ensure that you respect the time you’ve set for the call and what you’ve already covered in the screening beforehand.
Then, after introducing the task, it is over to them. You are there for guidance and note-taking. Here are some examples of questions you can ask during testing:
- Could you explain why you clicked there?
- What made you stop scrolling?
- What is going through your mind?
- I noticed you reacted <X>. Can you please tell me more about it?
Sometimes, users will ask you questions. With VidiVet, people got stuck working out how to download the app (it was a web-to-app signup flow). As painful as it was to watch them struggling, I’d just encourage them to keep looking. I only stepped in for one user after five minutes of struggling, as they seemed ready to grab their dog and run.
You also want to encourage them to share their thoughts without feeling judged. I usually tell them before the task, “You know that inner monologue we all have at times? This is your chance to say it out loud. I may ask you questions at times, but that isn’t because you are doing anything wrong, it is purely to understand what you are thinking.”
Step 5: Analyze results & design your testing backlog
After the user tests, I finalize my rainbow sheet. I update and tidy it up after every user test while the findings are fresh in my mind.
Usually, I’ll have ten or more observations that apply to five or more of the testers.
These are still hypotheses that need to be tested and not immediately implemented. I organize them around an opportunity area. For example, let’s say you observe a lack of clarity on what an app does during the onboarding phase, which leads to a drop-off. There are multiple reasons why it might not be clear and various ways to test it.
Apps differ once more from the web because they can be harder to A/B test. However, certain tools are making this easier, as Hannah Parvaz, Founder of Aperture, shares:
“Every time you want to update your app, you need to go through a hefty approvals process in the App Store or Play Store. But there is a shortcut. There are now third-party tools like RevenueCat, which provide a no code, no-approvals-needed system to test and optimize parts of your product, including paywalls.”
RevenueCat mainly focuses on A/B testing for pricing, packaging, and paywalls; if you’re looking for A/B tests for other aspects of your app, both Optimizely and Apptimize are great options.
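Whichever tool you use, the core mechanic of an A/B test is the same: each user is stably assigned to one variant so they always see the same experience. Here's a generic sketch of deterministic bucketing (this is an illustration, not the actual API of RevenueCat, Optimizely, or Apptimize):

```python
import hashlib

# Generic deterministic A/B bucketing sketch (not any specific tool's API):
# hash a stable user ID together with the experiment name so each user
# always lands in the same variant, with no server-side state needed.
def assign_variant(user_id: str, experiment: str, variants=("A", "B")):
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always gets the same paywall variant for this experiment.
print(assign_variant("user-42", "ugly_paywall_test"))
```

Hashing on the experiment name as well as the user ID means assignments are independent across experiments, which keeps one test from biasing another.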
Hanna Grevelius, Chief Product Officer at Golf GameBook (and a RevenueCat customer), shared the following on why A/B testing your findings first is key:
“User testing in the mobile space can be unpredictable. We once ran an A/B test called ‘The Ugly Paywall Test’. No one was a fan of the design or thought it would perform well, but our qualitative research consistently showed that for our target audience, less is more across the app. And, to our surprise, the less attractive paywall won! This insight continues to shape our approach, highlighting that a straightforward, value-focused presentation can often be more persuasive than something visually polished or loaded with relevant, but ultimately excessive, information.”
If you don’t have enough users to A/B test, you can also prototype a solution and run a new round of user tests (for iterative user testing like this, you can work with a smaller base of users each time, e.g., five users).
Time to get testing
All that’s left is to get started. As I mentioned previously, people tend to find user testing to be a daunting task. It can feel overwhelming, complex, and out of your control. But user testing is the key to discovering what people want from you and, just as importantly, what they don’t. Any negativity that arises in the process is a chance to streamline your brand for your users and create superfans by the end of it.
With this guide, you have everything you need to get out there and start testing. And I genuinely mean to get out there, as you shouldn’t wait for users to come to you. You’re ready to navigate the nuances of testing for apps, and I have no doubt you’ll find the insights worth the time and effort.