An app is a standalone installation, rather than a website that can simply be navigated to via a browser. Application testing therefore has some unique challenges beyond website testing, which is detailed here in the complete guide to website testing.
Mobile application install
When installing an app, you need to consider how much space it requires and ensure end users are aware before they install.
Does the app install correctly on all devices considered in scope? Companies often state that the app works on as low an OS version as possible to reach a wider user base, but frequently don't test at the lower end of the OS scale to confirm the app installs without issues.
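One way to keep the lower end of the OS scale from being skipped is to drive install testing from an explicit device matrix, ordered so the oldest supported OS versions are tested first. This is a minimal sketch in Python; the device names, SDK levels and `MIN_SDK` value are illustrative assumptions, not real data.

```python
# Hypothetical in-scope device matrix; "sdk" is the Android API level.
MIN_SDK = 24  # assumed minimum OS version the app claims to support

DEVICE_MATRIX = [
    {"model": "Budget-A", "sdk": 24},    # lowest supported OS: highest install risk
    {"model": "Mid-B", "sdk": 29},
    {"model": "Flagship-C", "sdk": 34},
]

def install_test_priority(devices, min_sdk):
    """Return in-scope devices ordered oldest OS first, since those
    are the installs most often left untested."""
    supported = [d for d in devices if d["sdk"] >= min_sdk]
    return sorted(supported, key=lambda d: d["sdk"])

for device in install_test_priority(DEVICE_MATRIX, MIN_SDK):
    print(f'{device["model"]} (SDK {device["sdk"]})')
```

Running the install suite in this order surfaces minimum-OS failures before release, rather than after a one-star review.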
If the app fails to install correctly, what behaviour can be expected, e.g. error codes, phone crashes? If you know what the possible failures are, FAQs can guide the user to a successful outcome.
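Once the possible failures are catalogued, mapping each failure code to its FAQ guidance becomes trivial to automate, whether in a support tool or in the app itself. A small sketch, assuming hypothetical error codes and messages rather than any real platform's error set:

```python
# Hypothetical install-failure codes mapped to FAQ guidance.
INSTALL_FAQ = {
    "INSUFFICIENT_STORAGE": "Free up space on your device and retry the install.",
    "INCOMPATIBLE_OS": "Your OS version is below the supported minimum; update your device.",
    "DOWNLOAD_INTERRUPTED": "Check your network connection and retry.",
}

def faq_for_failure(code):
    # Undiagnosed failures fall back to a generic support route.
    return INSTALL_FAQ.get(code, "Contact support and quote the error code shown.")
```

The fallback matters: an unknown code should still give the user a next step, not a dead end.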
How much battery does the app use when actively running or running in the background? With so many apps vying for a user's attention, power-hungry apps are often uninstalled. Testing an app over a period of time and monitoring it regularly ensures your app's battery usage stays at levels acceptable to an end user.
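Assessing battery usage over time can be as simple as sampling the battery level at intervals during a soak test and computing the drain rate. A minimal sketch, where the sample readings and any acceptability threshold you compare against are assumptions for illustration:

```python
def drain_per_hour(samples):
    """samples: list of (hours_elapsed, battery_percent) readings taken
    while the app runs. Returns the average drain in % per hour."""
    (t0, b0) = samples[0]
    (t1, b1) = samples[-1]
    return (b0 - b1) / (t1 - t0)

# Hypothetical readings from a 6-hour background soak test.
soak_samples = [(0, 100), (2, 97), (4, 93), (6, 90)]
rate = drain_per_hour(soak_samples)
print(f"{rate:.2f}% per hour")  # 1.67% per hour for this data
```

On Android the readings themselves can come from `adb shell dumpsys batterystats`; the point of the sketch is simply that the measurement must run over hours, not minutes, to be meaningful.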
Getting real user feedback
The approach to receiving and managing user feedback is very important. Although you can set up an app in a test environment so it's available to a limited number of users for feedback prior to going live, it can be difficult to coordinate results or get users to proactively test the app for you. If the test version of the app crashes or impacts the user's device, they will be concerned about its quality: end users are not testers and therefore, even during testing, have an expectation of quality.

If you launch an app live to a limited number of customers and they give feedback via reviews in the stores, this can have a negative impact on installations when you roll out to a wider audience. So although soft launches have benefits, they can seriously damage a release if not managed correctly. Any soft launch should have its expectations, both positive and negative, documented so the senior management team understands what the results may look like, i.e. good to go live, or the release needs to be delayed and more testing is required.
Cross-device and cross-platform testing
There are thousands of different screen sizes, operating systems and launcher options available to customers. A user quickly becomes a subject-matter expert (SME) on their own device and understands how it behaves. Replicating all of these expert users and understanding their behaviours can therefore be very difficult.
If your app is going to be used in multiple countries, you need to test different languages and the impact they have on content: whether the language reads from left to right or right to left, and whether translated words are shorter or longer.
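A cheap way to catch layout problems before real translations arrive is pseudo-localization: pad English strings to mimic the typical expansion of longer languages and flag anything that no longer fits. A sketch of the idea, where the 30% expansion factor and the character limits are illustrative assumptions:

```python
def pseudo_localize(text, expansion=1.3):
    """Wrap the string in markers and pad it to the length a longer
    translation might occupy (assumed 30% expansion)."""
    padded_len = int(len(text) * expansion)
    return "[" + text + "~" * (padded_len - len(text)) + "]"

def fits(text, max_chars):
    """True if the pseudo-localized string stays within a UI field limit."""
    return len(pseudo_localize(text)) <= max_chars

print(pseudo_localize("Add to basket"))
```

The brackets also make truncation visible at a glance: if a screen shows `[Add to bask` with no closing bracket, the field is clipping text.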
Regression testing ensures existing functionality doesn't break or deteriorate over time under the constant demand for new features. Balancing new-feature testing with regression testing is often difficult: regression packs are frequently neglected until they no longer accommodate recent changes or run consistently.
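A regression pack doesn't have to be elaborate to stay useful; it just has to pin existing behaviour and be trivial to run on every change. A minimal sketch in Python, where `basket_total` is a hypothetical stand-in for real app logic:

```python
def basket_total(prices, discount=0.0):
    """Existing behaviour the regression pack protects (illustrative)."""
    return round(sum(prices) * (1 - discount), 2)

# Each named check pins behaviour that new-feature work must not break.
REGRESSION_CHECKS = {
    "total_without_discount": lambda: basket_total([1.50, 2.25]) == 3.75,
    "total_with_discount": lambda: basket_total([10.0], discount=0.1) == 9.0,
}

def run_pack():
    """Run every check; any False result means a regression."""
    return {name: check() for name, check in REGRESSION_CHECKS.items()}

print(run_pack())
```

Because the pack is just code, it can run in CI on every build, which is what stops it drifting out of date alongside the features it covers.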
The importance of early detection
Fixing defects late in your development process is very expensive and complex.
Why? Let’s look at the lifecycle of a defect from identification to fix when it’s in production.
If a defect is reported by a customer, that customer must report it to a call centre. The call centre operator takes down the necessary details and passes them to their manager. The defect is then passed to the tech department, where a developer works on it. But the developer often cannot reproduce the defect in their test environment, so it goes all the way back to the source and the cycle starts again.
What’s more, an undiagnosed defect in your live app may cause ongoing instability and you could lose customers without understanding the root cause. Any such defects could also cause a domino effect, where you fix one thing only to unleash a raft of new defects.