A recent article published by SD Times mentions that 85% of US CEOs do not see an issue with releasing poorly tested software, as long as it’s “patch tested” later on (essentially, testing in production). Ironically, it also goes on to mention that 95% of CEOs and 76% of testers surveyed reported concerns about losing their jobs in the wake of a software failure. On the face of it, that sounds contradictory – how can they be comfortable doing something that increases the chances of the thing they fear most? In any case, it highlights that software testing isn’t taken seriously by many CEOs – and that, quite frankly, is a risky game to play.
So, why should a CEO care about software testing?
Let’s look at that question from two angles.
If a CEO is asking that question, that’s not a good sign. They are the proverbial commander-in-chief, the person everyone looks to for direction and inspiration. If they are not insisting on uncompromising quality above all else, their company will likely fail in the modern age of unabated competition and innovation.
Some CEOs – and boards – also look to automation as their saving grace – why do humans take days to run something a computer can run in minutes or hours? But automation is no silver bullet. You generally won’t automate everything (on average, around 50% of a regression pack is automated) – if a test has passed in the last five releases, it’s an ideal candidate for automation. But you’ll never be able to replace the tester’s eye for detail, and their ability to notice something that simply doesn’t feel right. Automation only sees what it’s told to look at. It doesn’t provide an accurate and wide-ranging view of whether your product possesses the quality you hope it has. You need to embrace a pragmatic approach to automation.
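The “passed in the last five releases” heuristic can be sketched in a few lines. This is a minimal, hypothetical illustration – the test names, result history and threshold are all invented for the example, not taken from any real tool:

```python
STABLE_RELEASES = 5  # assumption: a test is a candidate after 5 consecutive passes

def automation_candidates(history):
    """history maps test name -> list of recent results, newest last.

    Returns the tests stable enough to be worth automating.
    """
    candidates = []
    for name, results in history.items():
        recent = results[-STABLE_RELEASES:]
        # Require a full window of passes: new or flaky tests are excluded.
        if len(recent) == STABLE_RELEASES and all(r == "pass" for r in recent):
            candidates.append(name)
    return candidates

history = {
    "login_happy_path": ["pass"] * 6,                                       # stable
    "checkout_discount": ["pass", "fail", "pass", "pass", "pass", "pass"],  # flaky
    "new_feature_flow": ["pass", "pass"],                                   # too new
}
print(automation_candidates(history))  # → ['login_happy_path']
```

The point of the sketch is the pragmatism: only the stable, repetitive checks get handed to the machine, while anything new or flaky stays with a human tester.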
On the flipside, some have quality front and centre of their minds – 27% of CEOs in the Fortune 100 have degrees in engineering and science – and these CEOs understand the importance of safety-critical systems, and the level of testing needed to ensure they work as designed (functionally) and can support the level of demand required (non-functionally) when the time comes. In some cases the quality of a system influences whether someone lives or dies. However, not all CEOs will have such degrees to afford them that approach and discipline towards quality. Are those CEOs at a disadvantage?
The short answer is no. Companies can and should take software testing seriously.
Making testing integral to an organisation
Some companies are more capable at this than others, but generally that’s because they’ve made testing part of the fabric of their organisation. They’ve embedded it as far left in their development processes as possible (also known as shift-left testing), they’ve ensured it’s measurable, and they constantly review and refine their processes. They ensure their testing is not merely testing, but something elevated into the realms of ‘quality assurance’. For them it’s not just about checking what they’ve tested, but how they’ve tested. These companies do not rest on their laurels, they do not get complacent, and they do not accept being merely “good enough”. There is always room for improvement.
CEOs in this space require the utmost commitment and dedication to that cause from their teams; they insist that corners are not cut, and that if it’s not ready, it’s not shipped. An approach that underpins and supports that is Agile development. The only things delivered in each sprint are the things that are ready – the things that meet the acceptance criteria. If they don’t, it’s back into the backlog for a future sprint. It’s as simple as that.
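That “ready or back to the backlog” rule is simple enough to express directly. Here is a toy sketch of it – the story names and acceptance criteria are illustrative, not from any real project tracker:

```python
def triage_sprint(stories):
    """Split sprint stories: ship only those whose acceptance criteria all pass."""
    shipped, backlog = [], []
    for story in stories:
        # A story with no criteria, or any failing criterion, goes back.
        if story["criteria"] and all(story["criteria"].values()):
            shipped.append(story["name"])
        else:
            backlog.append(story["name"])
    return shipped, backlog

stories = [
    {"name": "export-to-csv",
     "criteria": {"works on sample data": True, "handles empty file": True}},
    {"name": "bulk-delete",
     "criteria": {"confirmation shown": True, "audit log written": False}},
]
print(triage_sprint(stories))  # → (['export-to-csv'], ['bulk-delete'])
```

There is deliberately no “ship it anyway” branch: the whole point is that a partially met story has exactly one destination, the backlog.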
With that said, if you use more traditional methodologies such as waterfall, you risk issues being baked in from the very outset. A high number of defects on waterfall projects can be traced all the way back to the requirements themselves – and here’s the irony: when they are simply requirements, before any development or testing has been undertaken, they are at their cheapest to fix. But given the nature of traditional projects, the issue often won’t be noticed until it’s too late, often months down the line – no longer a cheap fix, but a substantial cost in every regard: time, money and resource.
It really does underpin the importance of software testing, quality standards and mature software testing processes as absolutely vital to anyone (or any CEO) who wants to swim rather than sink in a world of unabated competition and innovation. If you don’t do something well, someone else will.
If you “do not see an issue with releasing poorly tested software, as long as it’s “patch tested” later on”, you’re doing nothing more than engineering failure.
We’ve all heard Jeremy Clarkson utter his famous line; “How hard can it be?”
Well, we’ve all heard of “Instagram vs Reality” – let’s do “Idea vs Reality”.
“I’d like to develop and release this application by September.”
- Requirements/User Stories – what do you want to build? How should it work? How many users should be able to use it concurrently? How secure does it need to be? Are there any regulatory standards to meet? What kinds of users will use it? What platforms does it need to work on (desktop, mobile, tablet), and in what way? What does it integrate into or from? What problem does it address?
- Stakeholders – do we know who our subject matter experts are? Who is our project sponsor/accountable executive?
- Definition/Measure of Success – how will you know this has been a success, and how can you measure that? (this is often overlooked but hugely important…)
- Timeline – when do you need it by and why? Is there a key date tied to something else? Is it a regulatory requirement to be live by a given date?
- Budget – how much budget do you have? A lot can influence this – from timeline, to planning, to technical requirements, to resourcing etc.
- Planning – based on the requirements, how long would it take a single resource (1 in each of the respective areas) to deliver this? Does this match with your aspirational timeline?
- Resourcing – based on the planning, do you need more resource (you planned for one of each area – how many of each do you actually need) to meet your deadline? Can you even afford more resource?
- Environments and Infrastructure – do you have the necessary infrastructure to support the development of this? Do you have environments to develop and test it within? Do those environments support and enable the necessary end-to-end flows of this system? These of course come with requisite costs. Again, this feeds into the budget discussion.
- Development – do we have a plan for developing this? In what order are items required? What is the logical order (what dependencies are there)? Have you estimated the effort (this feeds into the planning, resource and timeline discussions)?
- Testing – how long will it take to analyse the requirements or stories? What scenarios or stories need to be covered? How long will it take to script this? How long will it take to execute? Are we testing this traditionally (often phased), and if so, what phases of testing do we need? Or are we testing in an Agile/Scrum manner, in which case it’s sprints? There are of course different types of testing in software development, across functional and non-functional testing. Do we need accessibility testing to ensure the product is usable by people with impairments such as hearing loss or colour blindness, as well as elderly users and other disadvantaged groups? Do we need cross-browser/digital testing to ensure it works on our key devices (often informed by Google Analytics data)? If it’s a secure application, do we need to security test it? All of this feeds into the software test strategy or test planning you’re undertaking – and in turn into the planning, timeline and budget discussions.
- Defects – when issues are raised, do we have clear definitions of what constitutes the various priority and severity levels? Do we have a capable defect manager that ensures the data within these defects is the best it can be (so time is not wasted by development going back and forth with the tester), and is able to ensure the right people see them, and act upon them in a timely and ordered manner?
- Reporting – who does progress need to be reported to? In what form does that reporting need to be presented? What kind of test metrics do you need?
- Traceability – a defect should be linked to a test case which should be linked to a requirement or user story – full end-to-end traceability allows a stakeholder to evidence coverage, or conversely, where something has gone wrong to trace it back through its coverage to a requirement/user story.
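The traceability chain in that last point can be sketched very simply. This is a hypothetical illustration – the IDs, requirements and link structure are invented for the example, not drawn from any particular test management tool:

```python
# Each test case links back to a requirement; each defect links to a test case.
requirements = {"REQ-1": "User can log in", "REQ-2": "User can reset password"}
test_cases = {"TC-10": "REQ-1", "TC-11": "REQ-1", "TC-20": "REQ-2"}  # test -> req
defects = {"DEF-7": "TC-11"}                                         # defect -> test

def trace_defect(defect_id):
    """Walk a defect back through its test case to the requirement it covers."""
    tc = defects[defect_id]
    req = test_cases[tc]
    return defect_id, tc, req, requirements[req]

def uncovered_requirements():
    """Requirements with no test case at all - a gap in coverage."""
    covered = set(test_cases.values())
    return [r for r in requirements if r not in covered]

print(trace_defect("DEF-7"))     # ('DEF-7', 'TC-11', 'REQ-1', 'User can log in')
print(uncovered_requirements())  # [] - every requirement here has a test case
```

With links in place, a stakeholder can answer both questions the paragraph raises: “is everything covered?” (walk requirements forward) and “what went wrong, and where?” (walk a defect backward).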
If software testing is not embraced and given the due care and consideration it needs, the consequences can be grave.
The infamous TSB migration failure
A prime example of this came on April 22nd, 2018 when TSB, who were divesting from Lloyds Banking Group, took the decision to go live on their own platforms. What ensued was one of the largest IT meltdowns seen in recent history, which went on to cost the bank £330 million; £125 million of which related to resolving customer issues, £122 million to hiring additional staff and contractors (to augment the resource required), with the remainder covering fraud, operational issues, and the loss of waived fees and charges, to name but a few.
Customers were affected for a number of weeks following the migration, both at home (via web and mobile channels) and in branch offices. Some 80,000 customers chose to switch their accounts away to other banks with a reported 200,000+ complaints raised.
It is a prime example of how an executive function – and not just a CEO – should not operate when it comes to testing and more widely, delivering big change.
TSB themselves commissioned a third party – Slaughter and May – to investigate and report on the what and why of the failure. The findings are stark and make for some very sobering reading.
Whilst we won’t ever know the exact thinking behind the decisions the TSB executive team made, what we do know is that the testing was determined to be ‘inadequate’, and that “TSB did not give sufficient consideration to whether a largely single event migration was the right choice, what the risks of this approach would be, or how those risks would be mitigated”. Additionally, despite the migration project involving 70 third-party suppliers and 1,400 people, TSB’s owner (Sabadell) decided to conduct the migration over a single weekend – and TSB’s discussions around this ‘big-bang’ approach were not “substantial”. I won’t go into much more detail about those failings, as this could be an article (or even a series of articles) all on its own! But it’s clear that the decisions here were not driven by quality, and were likely driven by time or money. And more often than not, that approach will lead to failure.
Ignore testing at your peril
Overlooking, undervaluing or under-appreciating the necessity for a robust test approach and planning is something you do at your peril. The risks are simply too high – in TSB’s case it will have undoubtedly had an impact with:
- Legal and Compliance Risk – the CEO was interviewed by a Treasury Select Committee, and the failure will no doubt have attracted the eye of the Financial Conduct Authority
- Strategic and Reputational Risk – objectives will have been impacted, as will the bank’s reputation throughout the fiasco
- Operational Risk – a huge impact, no doubt, departmentally, and within the various internal channels that run the bank
- Human Risk – the wellbeing of staff during a massively troubling period
- Security Risk – huge amounts of fraud occurred as a result of the issues over that fateful weekend, and in the weeks and months that followed
- Competition Risk – some 80,000 customers moved away to other banks
At the end of the day, testing is all about identifying and mitigating risk, not introducing it.
If your CEO cares about testing, they care about delivering a product to their customers that is the best it can be – and this in turn can have a positive impact on reputation and profits. You don’t want to be a company synonymous with poor user experience or unstable services. If quality is compromised in the name of saving time or money, it will do the exact opposite – it will cost you more of both down the line, when you need to remedy the shortcomings and issues that arise.
It’s also worth noting that this doesn’t just apply to big releases – if you nail down a solid, considered and robust approach to your testing (and quality in general), even the smallest pieces of work will receive the same quality standards as their bigger counterparts.
If you start with quality in mind, you’ll end with quality delivered. Underpin that with sound and considered processes – ensure you review, refine and refactor as you go, and quality is very much assured.