With the ever-increasing complexity of websites, the huge number of plugins and integrations and sheer amount of devices and browsers now being used by website users, exploratory testing has never been more essential.
Having carried out thousands of exploratory tests over the last nine years, Digivante has seen some remarkable business outcomes as a result. These range from doubling the conversion rate of a travel company to finding eight critical defects that had cost a high street retailer a seven-figure loss.
This post will explain what exploratory testing is, the key considerations when conducting it, and how exploratory testing can help you mitigate risk and maximise the commercial return from your digital assets.
How does exploratory testing work?
The most effective way to execute high-quality exploratory testing is to use many professional testers simultaneously, along with a review methodology that provides a quality control process for the test. Both the testing and the review can be managed internally; however, there is a capacity challenge, as in-house QA teams are rarely larger than a couple of people.
Done right, it can rapidly deliver huge coverage in terms of the volume of testing, the number of unique journeys undertaken and the wide array of devices/browsers/operating systems tested. This delivers huge amounts of value as non-standard journeys will be explored, as well as those deemed to be core and critical – we all know website users don’t stick just to the happy paths!
By its very nature, exploratory testing is conducted with little guidance being provided to the testers involved. This benefits the test as it results in a small amount of simultaneous ‘user experience testing’ also being undertaken. As the testers are not necessarily your target demographic, they will use the platform in the manner that they would expect based on their experience with more familiar platforms.
While this certainly is no substitute for a professionally conducted usability study, it does mean that customer experience defects are identified in addition to those affecting core functionality. This can rapidly improve the ease of use of your platform and, subsequently, your conversion rates. In Digivante’s experience, this is especially true for companies that have not carried out exploratory testing before or have been using testers familiar with their platform.
Partnering with a company like Digivante, which provides on-demand testing at scale, can help achieve the volume of testing that makes this style of testing so beneficial. With a community of over 55,000 testers, Digivante can deploy two hundred testers per test to interrogate a website, application or system, using the testers’ experience and ingenuity to find functional or journey defects.
How could exploratory testing help your business you ask? Read our business case for exploratory testing.
The challenge and complexity involved in executing a professional exploratory test is often underestimated, as on the surface the process seems quite simple.
The key here is to focus on business outcomes. Is the purpose of the test simply to tick a box to say a test has been executed? If so, two people on two devices will suffice. If, however, the objective is really to deliver a better user experience and ensure that the investment made in the digital assets of the business yields its expected return, then the test has to be executed in a professional manner. This means adopting processes that ensure a quality, fast and cost-effective test that will deliver measurable business outcomes.
Here we’ve outlined some of the key considerations when looking to undertake an exploratory test:
- How many testers and/or workdays of testing do you feel you need so that you’re comfortable all defects that can affect your users will be found?
It is important to consider that every user will navigate the site or app in their own way, on different devices and browsers. This results in hundreds of potential configurations and unique user journeys. The less familiar the testers are with the website being tested, the better their testing will be.
- The only way to achieve scale within a time frame that takes days not months, is to use large numbers of professional testers simultaneously – a nearly impossible challenge for any business to fulfil internally.
- The skills and experience required can be a challenge for organisations to find in-house. Release schedules are more demanding and frequent than ever, and the simple fact is that high-quality testers are a significant resource cost on the balance sheet, so large companies rarely have more than five QAs in-house.
- As a result, compromises are often made: in the number of workdays dedicated to testing, the number of devices and browsers tested, or the quality of the testing itself, as internal staff or, worse, customer user groups are used to execute user testing on just two or three devices!
The outcome? A test that is pointless and not worth the investment. With poor coverage and a weak process, users will still encounter frustrating or critical errors, resulting in continued site abandonment, interrupted journeys and conversion rates that decline at an alarming rate.
- The main limitation of exploratory testing is that whilst it can mitigate lots of risk through the sheer volume of testing, it cannot guarantee execution of every possible user journey.
- The more testers you have and the longer they spend testing, the greater the number of user journeys that will be covered, thus mitigating more risk.
This is why an exploratory test should define in-scope and out-of-scope restrictions, eliminating areas of little or no concern. This focuses the testers on the areas of most concern and, in turn, the journeys that matter most to users and the business.
- When testing at scale using a group of professional testers, the likelihood that more than one tester will find the same defect is very high.
- Even if the testers have visibility of each other’s defects, they may still log their own version, as it will differ slightly due to their individual journey, browser or interpretation.
- To mitigate this, a review process should be implemented: consolidating all defects into one report, discarding duplicates and, in some cases, known defects that won’t be fixed (commonly referred to as the backlog). These could also simply be placed out of scope.
- Despite adopting best practice by specifying what is in-scope and out-of-scope, sometimes even professional testers may raise something that the business would not see as a worthy defect.
- The role of the review extends beyond just quality control. Defects that fall outside the test specification are identified and removed, reducing unnecessary noise and ensuring development resource can be utilised effectively and cost efficiently.
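As an illustration, the consolidation step of a review like this can be sketched in a few lines. This is a simplified example, not Digivante’s actual process; the field names and the idea of keying duplicates on summary plus page are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Defect:
    summary: str   # short description of the failure
    page: str      # where it was observed

# Known, accepted defects (the backlog) are treated as out of scope.
BACKLOG = {("Footer logo misaligned", "/home")}

def consolidate(raised: list[Defect]) -> list[Defect]:
    """Merge duplicates and drop backlog items from a raw defect list."""
    seen, report = set(), []
    for d in raised:
        key = (d.summary, d.page)
        if key in seen or key in BACKLOG:
            continue  # duplicate or known backlog issue: exclude from report
        seen.add(key)
        report.append(d)
    return report
```

In practice, two testers rarely describe the same defect identically, so real reviews rely on human judgement rather than exact matching; the sketch only shows where duplicates and backlog items drop out of the final report.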
Reproducing a Defect
- One of the most frustrating things when a defect has been raised, logged and investigated is for development teams to discover that it cannot be reproduced. To avoid this, first and foremost, the instructions to reproduce must be clear.
- A written step by step guide on how to reproduce it should be provided, along with an explanation of what was expected versus what happened.
- However, nothing is appreciated by development teams more than a screen recording of the user journey showing exactly how the defect was arrived at. These videos often hold critical information that enables dev teams to undertake a root cause analysis and arrive at a fix quickly and easily.
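To make the above concrete, a reproducible defect record might capture the following fields. The structure and values here are invented for illustration; they are not a prescribed Digivante format.

```python
# A sketch of the minimum a reproducible defect report should carry.
# All field names and values are illustrative, not a prescribed format.
defect_report = {
    "title": "Discount code rejected at checkout",
    "environment": "iPhone 13, iOS 16, Safari",
    "steps_to_reproduce": [
        "Add any item to the basket",
        "Proceed to checkout",
        "Enter the code SAVE10 and press Apply",
    ],
    "expected": "10% discount applied to the order total",
    "actual": "Error message 'invalid code' is shown",
    "screen_recording": "https://example.com/recordings/checkout-bug.mp4",
}
```

The expected-versus-actual pair and the numbered steps are what let a developer confirm they have reproduced the same failure the tester saw.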
So many defects, where do I start?
- The triage process can look like a daunting task, especially when a test with 200 professional testers can deliver over 150 defects. The best way to manage this sudden influx is to undertake a triage analysis once the test is complete.
- To achieve this, defects need to be assessed against a set of rules or criteria, then categorised and prioritised accordingly. It is recommended to have at least two tiers of categorisation; for example, P1, P2 and P3, ranked from highest to lowest impact on the user journey.
- This tiering enables defects to be filtered so that they can be fixed in accordance with the demands of the business, existing workflows or the development and release schedules.
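The tiering and filtering described above can be sketched as a simple priority mapping. The criteria used here (whether a defect blocks checkout, whether it sits on a core journey) are invented examples; each business defines its own rules.

```python
def triage(defect: dict) -> str:
    """Assign a priority tier from highest (P1) to lowest (P3) impact.
    The criteria below are illustrative; real rules are business-specific."""
    if defect["blocks_checkout"]:      # user cannot complete a purchase
        return "P1"
    if defect["on_core_journey"]:      # core journey degraded but usable
        return "P2"
    return "P3"                        # cosmetic or edge-case issue

defects = [
    {"id": 1, "blocks_checkout": True,  "on_core_journey": True},
    {"id": 2, "blocks_checkout": False, "on_core_journey": True},
    {"id": 3, "blocks_checkout": False, "on_core_journey": False},
]

# Sorting on the tier label puts the highest-impact defects first,
# so the fix queue matches the business priority order.
queue = sorted(defects, key=lambda d: triage(d))
```

Once every defect carries a tier label, filtering the backlog to “P1 only” for an imminent release, or scheduling P3s into a later sprint, becomes a one-line query.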
Exploratory testing, executed in the right way using the right processes, can provide a regular audit of your site, app, software or system, enabling it to perform to the highest standard. This ensures users’ expectations are met, resulting in high adoption rates, utilisation and, in the case of ecommerce sites, conversions. Even in environments with a high level of automation, exploratory testing still proves invaluable for identifying and fixing ‘new’ issues at speed and uncovering the ‘unknown unknowns’, providing peace of mind and high levels of risk mitigation. Exploratory testing coupled with regression testing is industry best practice in supporting development teams on a release-by-release basis in an agile test environment.
For more information on our approach to exploratory testing book a call with a Solutions Consultant here.