It turns out, poor web quality can be pretty expensive.

Even more so if you find those pesky bugs after launch. According to the Systems Sciences Institute at IBM, the cost of fixing an error found after product release is four to five times higher than fixing one uncovered during design and development. Staggeringly, an error that surfaces in the maintenance phase can cost up to 100 times more than one caught at the design stage.

To bring those calculations to life, CISQ research from 2018 found that poor software quality was costing organisations $2.8 trillion in the US alone.

And as DeepSource reports, if bugs aren’t found early enough, the drain on developers is as exponential as the expense.

Here’s why:

  1. Firstly, it’s much easier for developers to find problems when they’re still writing the code – it’s fresh in their minds.
  2. Next, reproducing problems in the testing phase is often a tricky and time-consuming task. Time’s never on your side in development sprints, right? That’s the point.
  3. And when the issue is out there and live? Unfortunately, the problem becomes compounded. At this stage, with all the development time and resources needed, the cost can be up to one hundred times higher than if the problem had been fixed before release.

Simply put, poor quality has pricey repercussions. And if they aren’t caught early, external failure costs can end up draining your internal resources. But don’t take our word for it, even some of the world’s largest organisations have borne the brunt of those pesky bugs.

Poor web quality examples

1. CNN’s new-look website

News travels fast – at least, it should do.

In 2015, CNN updated its website in an attempt to bring the brand up-to-date. Unfortunately, although the design was cleaner and less cluttered, it didn’t perform well.

In fact, according to W3-Lab, the site’s loading time was a staggering 20 seconds. To put this figure into context, Google’s recommended page load time is under 2 seconds.

The main issue? The sheer size of the images. A quick performance test beforehand – checking that the images were compressed and could be cached – would have caught this.
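To make that concrete, here's a minimal sketch of a pre-launch performance-budget check. The asset names, sizes and budget figure below are hypothetical – not CNN's real numbers – but the idea is exactly this simple:

```python
# Sketch of a simple performance-budget check (hypothetical assets and budget).

PAGE_WEIGHT_BUDGET_KB = 1500  # hypothetical budget for total page weight


def over_budget_assets(assets, budget_kb=PAGE_WEIGHT_BUDGET_KB):
    """Return assets to flag if the page exceeds its weight budget.

    `assets` maps asset name -> size in KB. Returns an empty list when the
    page is within budget; otherwise the assets, heaviest first, so
    developers know where to start optimising.
    """
    total_kb = sum(assets.values())
    if total_kb <= budget_kb:
        return []  # within budget: nothing to flag
    return sorted(assets, key=assets.get, reverse=True)


# Example: one uncompressed hero image blows the whole budget.
page_assets = {
    "hero.jpg": 4200,   # hypothetical oversized image
    "app.js": 350,
    "styles.css": 80,
}
flagged = over_budget_assets(page_assets)
```

Wired into a CI pipeline, a check like this fails the build the moment someone commits an oversized image – long before users ever see a 20-second load.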

But the issues didn’t stop there.

Users were forced to scroll to see the headlines and the new design displayed only half as many clickable stories. For an international news hub, that’s not ideal. Even a single usability test prior to launch would have uncovered these major problems.

Simply put: first impressions on websites can prove critical. Because according to Peep Laja, founder of CXL, it only takes 0.05 seconds for a user to form an opinion about your website, determining whether they’ll stay or look elsewhere. And with SEO and UX going hand-in-hand these days, businesses can’t afford to chance it without thorough testing.

2. Google Nest’s thermostat glitch

Even companies that seemingly wrote the rules on UX and web quality can slip up – especially when they don’t follow their own advice.

Back in 2016, Google left many feeling frosty when a glitch in its Nest thermostat left users unable to control the temperature. This was due to a buggy software update that drained the device’s battery. To make matters worse, this happened in January.

The company put the issue down to a combination of a firmware update, old air filters and incompatible boilers. But this really should have been tested – especially before one of the coldest weekends of the year.

Anyway, lessons learned for the future – right?


Fast-forward five years and it happened again. This time members of the Google Nest Community reported a disconnect between the device and Alexa. When users asked Alexa to turn off the thermostat it failed to do so – despite saying it had. Google knew of no possible workaround at the time.

So, when did the issues start?

When Google updated the software.

However, these issues aren’t inevitable. In fact, issues like this could be avoided with thorough integration and regression testing around each software update.

The Nest and Alexa glitch is just one example of the communication breakdowns that can occur without thorough testing; when multiple sites or apps depend on one another, a single outage can have catastrophic consequences.

3. Facebook, WhatsApp and Instagram fail – the social dilemma

When Facebook decided to stitch together the infrastructures powering Messenger, WhatsApp and Instagram, it seemed like the perfect way to keep people engaged inside their ecosystem. Not only would there be more advertising opportunities; it would stop users looking elsewhere for messenger services, too.

However, this would be much more of a technical challenge than they anticipated.

For example, unlike Facebook Messenger and Instagram, WhatsApp does not store messages and keeps minimal user data. At the time, it was the only one of the services to use end-to-end encryption by default.

The work was technically tricky, and it didn’t take long for the bugs to bite.

Users were struck by service issues in March and July of 2019, just as the developers began trying to tie the apps together. These included not being able to post photos on Facebook, send messages on WhatsApp or view stories on Instagram – core functions of each service.

Facebook put this down to “routine maintenance” and the issues lasted a couple of hours each time.

Unfortunately, this wasn’t the end of it.

In October 2021, all three of the apps went down again. But this time, for six hours.

As reported by Sky News, Facebook blamed the outage on a “faulty configuration change”, which was later rectified. But on top of the fact that 3.5 billion users were left disconnected, these issues have further consequences for Facebook: financial, through lost ad revenue, and reputational, given Facebook’s history of data breaches.

One possible way to help avoid a crisis like this is end-to-end regression testing. By running functional and non-functional tests, you can check that no newly developed code – for example, code that ties multiple apps together – is going to cause bugs or breaks in the software.
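As a rough illustration of the regression idea – the function, baseline and data below are entirely hypothetical, not Facebook's actual code – you capture a known-good output from a working release and fail the build whenever new code changes it:

```python
# Minimal golden-baseline regression check (all names and data illustrative).

def render_feed(posts):
    """The 'feature under test': formats a list of posts for display."""
    return [f"{p['user']}: {p['text']}" for p in posts]


# Baseline captured from a known-good release; in practice this would be
# stored alongside the test suite and updated deliberately, never silently.
BASELINE = ["alice: hello", "bob: hi"]


def regression_passed(posts, baseline=BASELINE):
    """True if the newly built code still reproduces the known-good output."""
    return render_feed(posts) == baseline


posts = [
    {"user": "alice", "text": "hello"},
    {"user": "bob", "text": "hi"},
]
```

If a refactor – say, one that merges messaging back-ends – accidentally changes how `render_feed` behaves, the comparison against the baseline fails immediately, before the change ships.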

Without a doubt, it’s a time-consuming task when you’re facing release deadlines. But doing so will also save development resources and, more importantly, budget down the line.

So, what’s the real cost of poor web quality?

Undoubtedly, quality issues can come at a cost to businesses. Especially when potential customers end up bouncing and seeking services elsewhere – and on other devices.

According to a study by Perficient, 68.1% of all global website visits in 2020 came from mobile devices – an increase from 63.3% in 2019. With that in mind – and also considering Google’s ranking factors – businesses just can’t afford to ignore responsive design anymore.

Because at its most severe, this mistake can cost organisations millions.

In 2019, IT consultancy Accenture was sued for a whopping $32 million by Hertz for not delivering a web solution that worked on all standard devices.

According to the lawsuit, Accenture neglected “medium” displays for tablets, e.g. iPads. Consequently, the Hertz site couldn’t automatically resize for these types of devices. To make matters worse, the firm then “demanded hundreds of thousands of dollars in additional fees to deliver the promised medium-sized layout.”

In reality, the whole situation could have been avoided with thorough cross-browser and cross-device testing. This is crucial before release because even the best emulator can’t guarantee an identical experience to the real thing; testing on actual devices and installed browsers gives you far greater confidence in the results.
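To show why that “medium” band matters, here's a hedged sketch of the kind of breakpoint logic the Hertz site needed – the pixel widths and layout names are hypothetical, not Accenture's actual values. The point is that a cross-device test asserts every band, not just phone and desktop:

```python
# Hypothetical breakpoint mapping: viewport width (px) -> layout name.

def layout_for_width(width_px):
    """Pick a layout for a given viewport width (hypothetical breakpoints)."""
    if width_px < 768:
        return "small"   # phones
    if width_px < 1024:
        return "medium"  # tablets such as iPads - the band Hertz's site lacked
    return "large"       # desktops


# A cross-device test sweeps representative widths across ALL bands;
# skipping the middle row is exactly how the tablet gap slips through.
for width, expected in [(375, "small"), (820, "medium"), (1440, "large")]:
    assert layout_for_width(width) == expected
```

The same sweep-every-band discipline applies whether the breakpoints live in CSS media queries or in JavaScript: every declared band gets at least one real-device check before release.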

OK, so what’s the key to keeping costs down?

It’s simple.

Always make room for testing.

When testing timelines were squeezed on the Transport for London (TfL) Crossrail project, developers found a bug that would delay the launch. Consequently, customers had to put up with the overcrowded existing service, without the benefits of Crossrail. The delay is estimated to be costing around £30 million a week.


Sure, some errors can’t be avoided. But a project of this scale must have a robust workaround in place. Because when a business’s revenue and reputation are hit, the situation becomes much more severe.

Big brands like Netflix, Target and Disney have all been hit with huge lawsuits citing the Americans with Disabilities Act, after their websites were deemed unfit for the browsing needs of disabled customers.

But inaccessibility doesn’t discriminate; it affects everyone.

Along with the fact that Google penalises websites that aren’t accessible, the pandemic caused a surge in online traffic. With that in mind, it’s more important than ever that sites provide an intuitive experience as standard.

“When designers build digital experiences with accessibility in mind, all of us benefit. For example, an interface that can be tabbed through quickly and logically isn’t just helpful for people who have trouble operating a mouse – it’s also the fastest and easiest way for anyone to navigate most sites,” argues Jonathan Hensley for The Guardian.

A round of accessibility testing can ensure that your site complies with the internationally recognised benchmark, the Web Content Accessibility Guidelines (WCAG 2.2), and is well worth setting some time aside for.
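As one small, automatable example – a sketch, not a full WCAG audit – you can flag `<img>` elements that are missing the alt text screen readers rely on, using nothing but Python's standard library:

```python
# Sketch: find <img> tags missing alt text, using only the standard library.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects <img> tags that have no alt attribute at all.

    Note: alt="" is treated as present, since empty alt text is the correct
    markup for purely decorative images.
    """

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing.append(attr_map.get("src", "(no src)"))


def imgs_missing_alt(html):
    """Return the src of every <img> in `html` lacking an alt attribute."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing


sample = '<img src="logo.png" alt="Company logo"><img src="hero.jpg">'
```

A check like this only scratches the surface of WCAG, of course – it won't judge colour contrast or keyboard navigation – but running it on every page template costs seconds and catches one of the most common accessibility failures automatically.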

So, what’s the moral of the story?

Poor web quality comes at a heavy price – to both revenue and reputation. Weighing up whether to run another round of tests in the next sprint? Yeah, better to be safe than sorry.

Published On: January 13th, 2022 / Categories: Website testing /