Who would have thought that one of the most important aspects of software development is the writing of test cases? It’ll come as no surprise that I’m one of those who thinks it’s very important. I’ll even go so far as to say that the writing of a test case can make or, if done badly, break a release.
I’ve been in software testing for over twenty years and have seen a plethora of test cases. Some have been near-perfectly written. Others, not so much. And for the latter group, the end result is poor-quality testing, poor tracking of issues, and frustration between QA, product and development teams, because it wasn’t clear at the outset what was being tested and why.
But before I go into how we write test cases at Digivante, let’s cover a bit of background.
What exactly is a test case?
A test case is a detailed written explanation of a specific scenario that you want to test, written step by step with expected outcomes at each stage. Say, for example, you wanted to test a new visitor to your site purchasing a specific item via PayPal – you’d detail those steps starting with the very basic, “Go to [URL].com” all the way through to “Complete payment on PayPal”.
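For that PayPal scenario, a simplified sketch of the test case might look like this (the steps and expected outcomes are invented for illustration):

1. Go to [URL].com. Expected: the home page loads.
2. Select an item and choose a colour and size. Expected: the selections are shown and ‘Add to shopping basket’ is available.
3. Add the item to the basket and proceed to checkout. Expected: the basket shows the correct item and price.
4. Select PayPal as the payment method. Expected: the user is redirected to PayPal.
5. Complete payment on PayPal. Expected: an order confirmation page is displayed.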
Test cases are used to test any type of software. From applications like Microsoft Office, to ecommerce websites, to popular iOS or Android apps – all will have hundreds if not thousands of test cases to run.
Test cases are used pre-release to ensure that the release is ready to go into the world, but they can also be run post-launch. We’ve worked with clients to monitor their live sites by testing key user journeys daily and reporting issues that impact availability or customer experience.
Test cases always have an outcome. If the test case runs successfully, i.e. the steps were followed without incident at any stage, it passes. If it fails at any stage, it’s logged as an issue with steps to reproduce it and related image or video evidence. There’s a whole range of issue types too: functional, usability, accessibility and security are just some.
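If, say, step 4 of that PayPal journey failed, the logged issue might look something like this (invented for illustration):

Issue: Selecting PayPal at checkout returns an error page.
Type: Functional.
Steps to reproduce: steps 1–4 of the test case above.
Evidence: screenshot of the error page and a screen recording of the journey.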
What test cases are not
To be clear, test cases are not the same as test scripts. Nor are they scenarios or use cases or plans for that matter.
Test scripts are very similar to test cases – and arguably could be the same thing – but we more commonly use the term test script when referring to test automation. Because automated tests involve a machine running the test rather than a human, a test script needs to be written so the machine knows how to run the test. At Digivante, when we create automated scripts, they are always written from a test case. We write our test cases the way we do so that automation testers can refer directly to the test case rather than the requirement, avoiding duplicated effort and prep work for the automation script. And if an automation script is proving too difficult or taking longer than expected to create, we can always run the test case manually.
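To make that concrete, here’s a minimal sketch of what an automated script written from a test case might look like, using Python and Selenium. The URL, locators and product name are all invented for illustration; a real script would use the application’s own.

```python
# A hypothetical "add to basket" test case, automated with Selenium.
# Each step mirrors a line of the written test case, with the expected
# outcome checked by an assertion.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Step 1: Go to the site. Expected: home page loads.
    driver.get("https://example.com")

    # Step 2: Open a product page. Expected: product details displayed.
    driver.find_element(By.LINK_TEXT, "Blue T-shirt").click()

    # Step 3: Add the item to the basket.
    driver.find_element(By.ID, "add-to-basket").click()

    # Expected outcome: the basket count shows one item. If this fails,
    # the run is logged as an issue rather than a pass.
    basket_count = driver.find_element(By.CSS_SELECTOR, ".basket-count").text
    assert basket_count == "1"
finally:
    driver.quit()
```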
Test scenarios are what test cases are created from. For example, a scenario might be, “Check behaviour when a valid email but invalid password is entered.” This is stating the main reason for the test but is not outlining the steps to run that test.
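For instance, that scenario might expand into a test case along these lines (steps invented for illustration):

1. Go to [URL].com/login. Expected: the login form is displayed.
2. Enter a valid, registered email address. Expected: the field accepts the input.
3. Enter an invalid password and select ‘Log in’. Expected: an error message is displayed and the user remains on the login page.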
A use case is a description of how software should perform under given conditions. It describes how a product will or should work. Let’s take an ecommerce fashion company as an example. A use case might be, “A customer clicks on an item to select the specific colour and size they wish to purchase. The ‘Add to shopping basket’ button should only appear if the item is in stock and available at the warehouse.” Like a scenario, a use case is what you can build your test case from.
A test plan is a very detailed, dynamic document that outlines your overall test approach. It includes your objectives for testing, the schedule and estimated timeline, and the resources required to deliver the plan.
The different types of test cases
There are lots of different types of test cases to cover requirements such as:
Functionality – to test whether software functions perform in the way expected.
User interface – to check grammar and spelling, spot visual inconsistencies (e.g. colour) and find broken links.
Security – to check whether the system protects data, so test cases will look at authentication and/or encryption.
Integration – to check how different software systems interact with one another. This might include your website interacting with an order processing system or a payment system, for example.
Usability – to check how users might use your application to complete a particular task e.g. purchase a pair of shoes.
User acceptance – swiftly follows usability; here, business users test the system to ensure it works as per users’ requirements.
Regression – to check whether new code changes have affected any existing features.
Those are just a few of the different types. This TechTarget article covers the different types of test cases in more detail.
The art of test case writing. What’s in a name?
Now we’ve covered the background to test cases, I’d like to delve into how we do things here at Digivante.
When it comes to test case writing, a lot can be derived from the name. When my team writes test cases, the first step is always the name, and while this might seem trivial, in my view it’s a critical point: it shows you understand the aim of the test case and the coverage it provides.
Through a static review of the requirements, you start to form an understanding of the changes, the coverage your test case needs to provide, and any questions for product owners and developers, so that a detailed step-by-step test case can be written.
Structuring test case names
At Digivante we structure our test case names in a very specific way.
We start with the type of test, i.e. regression (Reg), new functionality (NF), cross browser (CB) etc.
Then comes the test case ID. We agree its format upfront; usually it’s a numerical value, e.g. 0001, 0002 and so on.
A location code is added if the test is being run in different countries (e.g. for payment tests).
Next, we consider the functional area. This is the high-level area that a requirement exists under.
Often product owners are responsible for specific features, so it’s important to be able to quickly identify test cases that provide coverage for them, and it also means related issues are traceable back to these specific features.
Product owners can use the results to identify high- and low-risk areas and assess how sensitive each is to change. Each feature will contain functional areas, which are a more granular breakdown.
Finally, we add the detail of the test case and the aim of what it covers.
To illustrate what this looks like in practice, here’s an invented name that follows the structure above:
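Reg_0023_UK_Checkout_Verify a new visitor can complete a purchase via PayPal

Here, Reg identifies a regression test, 0023 is the test case ID, UK is the location code, Checkout is the functional area, and the rest states the detail and aim of the test.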
Client-specific format
Sometimes a client requires additional information in the test case titles, so we adapt the format on a client-specific basis.
All of the above allows us to quickly assess test coverage via a related coverage matrix.
What’s a coverage matrix?
We use a Coverage Matrix to ensure the test cases we write cover all the areas adequately. Once complete, it allows all stakeholders to understand the coverage when a set of test cases is executed.
The Coverage Matrix lists each variable to be considered in testing and the test case’s status for each. This helps ensure that all relevant test scenarios are included, and working through it is a great exercise for ensuring full, efficient coverage when reviewing existing test packs.
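As a simplified, invented example, a slice of a matrix for checkout payments might look like this:

Scenario                    Card        PayPal      Gift card
Guest checkout              Reg_0021    Reg_0023    –
Registered user checkout    Reg_0022    Reg_0024    NF_0031

Each cell shows the test case covering that combination, so a gap immediately flags a combination with no coverage.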
Using the Coverage Matrix can also help identify which test cases to use for different test approaches, as you may want to run only a subset of the test pack based on the new functionality being introduced.
The Coverage Summary is used in conjunction with this to provide a high-level overview, typically at a reporting level rather than a technical level, to ensure the right level of focus is on the right areas.
Having the name underscore-separated allows us to quickly split it into columns in Excel, which is where the coverage matrix is created. Pivot tables then give a volumetric breakdown of the coverage.
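As a sketch of that Excel step in code form, here’s how the same split-and-pivot could look in Python with pandas; the test case names are invented for illustration:

```python
# Split underscore-separated test case names into their components and
# pivot them into a volumetric coverage breakdown, mirroring the
# Excel pivot-table approach described above.
import pandas as pd

names = [
    "Reg_0001_UK_Checkout_Verify PayPal payment completes",
    "Reg_0002_UK_Checkout_Verify card payment completes",
    "NF_0003_UK_Search_Verify filters narrow results",
    "CB_0004_DE_Checkout_Verify basket totals show EUR",
]

# Each name splits into five parts: type, ID, location, area, detail.
df = pd.DataFrame(
    [n.split("_", 4) for n in names],
    columns=["type", "id", "location", "area", "detail"],
)

# Count test cases per functional area and test type.
coverage = df.groupby(["area", "type"]).size().unstack(fill_value=0)
print(coverage)
```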