You wouldn’t serve a dish without tasting it, right?
That’s because no matter how closely you follow the recipe, there’ll always be variables you can’t account for.
Well, the same can be said for the release testing process. The in-house testing environment isn’t a real-world “kitchen”, and your product manager’s palate probably differs from your users’.
But what’s the difference between this and cooking? Rectifying software issues after service affects more customers and can also be incredibly costly.
In fact, Cambridge Judge Business School research found developers spend 620 million hours a year debugging software failures – that’s 13 hours per issue, on average. Even more alarming is that this process is costing companies around $61 billion per year.
Sound like a lot? The plot thickens.
Because this isn’t a one-off cost. Software issues that haven’t been picked up in the release testing process can cause long-term reputational damage, too:
- According to research from Upland, 21% of users will abandon an app after a single use if they encounter an issue.
- And with 2.87 million apps available for download on the Google Play Store alone, businesses just can’t afford to leave those vital first impressions to chance.
So, what’s the answer?
It’s simple. Make post-release testing an essential part of your release process.
Why is post-release testing important?
Essentially, post-release testing helps you reveal previously overlooked issues and unintended outcomes. These often arise from the fact you’ve used sandboxes and stubs for integrations, making assumptions about the data received. What’s more, it’s likely that you would not have deployed the full code release into a test environment. Instead, the team would have applied multiple patches over a period of time.
The thing is that go-live brings all these core releases and patches into one build; manual config is applied (subject to human error), content is created and delivered, live integrations are connected and new business processes are all applied.
But it does so for the first time.
That’s why a solid round of post-release testing is so important.
Because if you do it fast and thoroughly, you can correct faults before conversion rates drop.
And if you don’t?
Well, you could end up undermining the customer experience and damaging your brand. Important to avoid when you’ve spent so much time and budget making it what it is.
And your digital products are no different.
Building these involves weeks of design, development, upgrades and migrations. In fact, a single development cycle could include countless patches, config updates or content changes. And for larger projects like migrations or transformations, it could take months before the team is ready to go live.
But go-live is often the first time the complete build is deployed in full. And moving from version 2.3 to 2.4 in a test environment is a much smaller job than jumping from version 1.5 to 2.4 in the real world.
Essentially, there’s a lot more room for error in the live environment.
And after all that hard work? Well, you definitely don’t want to fall at the last hurdle.
How to make post-launch part of the release testing process
Release testing doesn’t leave anything to chance. It’s about answers – not assumptions. And this is why it’s so important to factor it into your post-launch plan.
Here’s our advice on what your release testing process should look like:
Plan for post-release testing at the development stage
You don’t need to wait for the product launch to start thinking about post-release testing. Instead, start plotting out your test strategy at the very start of development.
If you’ve done your due diligence around QA and testing throughout the process, this shouldn’t be too much of a task. In theory, the post-production test plan should be a reflection of the full test plan for that release.
To cover the maximum amount of functionality, your post-release test plan could include steps to:
- prioritise test cases so the highest-risk journeys are verified first
- test new features as well as major existing features
- verify major impact areas
- address any critical bugs found in the test environment
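To make the first of those steps concrete, here’s a minimal sketch of prioritising test cases by risk – the journeys, impact scores and likelihood scores are all hypothetical:

```python
# Sketch: rank post-release test cases so the highest-risk journeys run
# first in a limited deployment window. Scores are illustrative only.

def prioritise(test_cases):
    """Sort test cases by impact * likelihood, highest risk first."""
    return sorted(
        test_cases,
        key=lambda c: c["impact"] * c["likelihood"],
        reverse=True,
    )

cases = [
    {"name": "checkout payment", "impact": 5, "likelihood": 4},
    {"name": "newsletter signup", "impact": 1, "likelihood": 2},
    {"name": "login", "impact": 5, "likelihood": 3},
    {"name": "new search filters", "impact": 3, "likelihood": 5},
]

for case in prioritise(cases):
    print(case["name"])
```

Even a rough scoring like this keeps the deployment window focused on what would hurt conversion most if it broke.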
OK, this may sound like a lot of work. But if it’s planned correctly and carried out by a professional team of testers, post-release testing can save a ton of time and resources in the long run.
One thing we would suggest is that the deployment window is as long as possible. This is to allow for setup, testing and any urgent patches. In our experience, a 4-hour window is ideal – but you can book in a longer period if required.
Because however realistic the test environment is, you can never be 100% sure how your product will act in the real world.
Don’t assume anything
More than likely your team will use stubs and sandboxes when testing integrations during development.
And these techniques are fine, in theory.
After all, you don’t want to involve objects that would answer with real data – or expose host devices and operating systems to threats.
However, they also lead you to make a lot of assumptions about how data mapping works when it’s live.
Unfortunately, it doesn’t matter how realistic the test environment is – it’s always going to have its differences.
For example, the development team often needs to do some extra coding to get certain features working specifically for the test environment. But accidentally leave that code in the final release and there are going to be headaches.
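One way to reduce that risk is to gate test-environment-only behaviour behind an explicit flag rather than scattering ad-hoc test code through the build. A minimal sketch, assuming a hypothetical `APP_ENV` variable and made-up endpoints:

```python
# Sketch: environment-gated configuration. APP_ENV and the endpoints
# below are hypothetical; the point is that test-only behaviour sits
# behind a flag, so the live path is always the default.
import os

def payment_endpoint():
    """Pick the integration endpoint for the current environment."""
    env = os.environ.get("APP_ENV", "production")
    if env == "test":
        # Test-only behaviour lives here and can never accidentally
        # ship as the default path.
        return "https://sandbox.payments.example.com"
    return "https://api.payments.example.com"
```

Because production is the fallback, forgetting to set the flag fails safe rather than quietly shipping test behaviour.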
Speaking of which, another cause of pain during the process is passwords.
Check passwords and third-party integrations
So, you’ve been testing in a simulated environment for months. At this stage, you’ve got so used to the passwords you’re using that you could probably log in with your eyes closed.
But when you do at last connect to live integrations, your account logins and passwords are more than likely going to be completely different to the ones you’ve been testing with. It seems like a small detail, but it’s easy to miss. And the last thing you want to be doing is scrambling around looking for – or even resetting – passwords when there are real-world consequences.
But these aren’t even the biggest problems with testing in simulated environments.
Compared to the real versions, third-party integrations and sandboxes in dev environments might not be fully up to date.
Everything could be ticking along nicely until you release the live version. Then suddenly you notice that the real version was updated a month ago and you didn’t even realise; it’s completely incompatible with your code.
In our experience, this often happens with payment integrations. For example, the payment provider might add a new field weeks before you go live, and either they haven’t informed you or you’ve not tracked third-party changes. Either way, it’s easily missed when you’ve got a hundred other things to check.
But when your API doesn’t include those tiny updates, it fails – and in a big way. An entire delivery process and a heap of effort undermined – all due to a couple of seemingly insignificant fields.
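To illustrate, here’s a minimal sketch of defensive parsing for a third-party response – the field names are hypothetical, not any specific provider’s schema. It ignores unknown extra fields (so a provider adding one doesn’t break you) and fails fast when a field your code relies on goes missing:

```python
# Sketch: validate a third-party payment response against only the
# fields our code actually needs. Field names are hypothetical.
REQUIRED_FIELDS = {"transaction_id", "status", "amount"}

def parse_payment_response(payload):
    """Return just the required fields, tolerating any extras."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        # Fail fast with a clear message instead of breaking downstream.
        raise ValueError(f"Payment response missing fields: {sorted(missing)}")
    return {k: payload[k] for k in REQUIRED_FIELDS}
```

Tolerating extras and checking only what you depend on is exactly the posture that turns a surprise schema change into a logged error instead of a broken checkout.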
Test at times that work for you (and your customers)
The good thing about post-release testing with Digivante is that we can deploy it on the days and at the times when the fewest customers are using your website or app.
We can do so because our testing community is based around the world and available around the clock. This allows us to work to your business’s and your customers’ schedules, ensuring a seamless experience for everyone involved. So whether it’s the small hours or the crack of dawn, we’ll run concentrated testing in the very first moments of going live – as if it never happened.
But it’s not just customers that you need to consider.
To limit disruption in-house, make sure any changes that affect business processes are documented and confirmed before release. Also, make sure business users know the release plan and when (or if) they’ll need training or process changes.
With implications for both colleagues and customers, there’s a lot on the line to get things right. But don’t worry, we’ve got the experience to guide your post-release process from start to finish.
An expert perspective on post-release testing
At Digivante, we take a strategic and consultative approach to post-release testing. Essentially, that means we won’t just carry out the tests, we’ll help you define and scope the requirements too. That means there’s no doubling up on work or resources – vital when the project budget is rapidly getting squeezed.
But it’s not just ‘issues’ we’ll be keeping our eyes on; instead, our testing team will carefully monitor anything that could have an impact on the launch. These could be features that made the release, the ones that didn’t and, crucially, any workarounds that have been applied.
And, importantly, we’ll keep you posted in real-time.
The difference with Digivante’s release testing process
By including Digivante in your release testing process, nothing is left to chance. That’s because we use tools that keep your team and your stakeholders on the same page throughout the entire process.
For example, our testing portal acts as an accessible gateway for the product owner, live support and software testers to see exactly where the post-release process is at. It allows everyone involved to view the latest results, immediately presenting actionable insights.
Haven’t got a live support or testing team to check on the portal? No problem.
We can manage all issues that have been raised and progress them right through to development. Our main objective is to quickly identify any deployment or configuration issues in the live environment. We’ll also be on the lookout for functional problems that were obscured by setup issues in the test environment.
To ensure we don’t miss anything, our comprehensive process includes two rounds of exploratory testing:
- The first is carried out by our testing community. The good thing is, Digivante has tens of thousands of experienced software testers in almost 150 countries – so the scope can be as large as you need it to be. On average, our professional pool of software testers uncovers around 100 defects when testing a live environment.
- The second is looked after by the Digivante Test Lead. Reassuringly, this experienced tester is allocated during the first four hours of going live to manage any issues and all communications.
OK, so that’s the testers sorted out.
But one major concern with going live is making sure your release works on every major browser and device. It’s even more of a worry when there are now over 60,000 operating system, device and browser combinations to test on. No single traditional software testing company can provide that number of devices or software platforms to ensure compatibility for every visitor – it’s just not realistic.
That’s why we deploy a cross-browser test with the crowd to ensure all the key customer journeys are run against your top converting browsers and devices.
But you don’t have to wait until it’s time to go live; our consultants can work with you to define your test cases from the outset. That way, you can test on a greater range of devices during the development process as well.
And speaking of test cases, we can use an API to trigger these runs automatically. Crucially, the API calls can be built into development code, so when a release is deployed, test execution is triggered automatically.
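As an illustration, here’s a minimal sketch of what such a trigger could look like – the endpoint, payload fields and platform API are all hypothetical, so you’d substitute your test platform’s real API:

```python
# Sketch: build the HTTP request a deploy script would send to kick off
# post-release test execution. URL and payload shape are hypothetical.
import json
from urllib import request

def trigger_test_run(base_url, release_version):
    """Construct the POST request that starts a post-release test run."""
    payload = json.dumps(
        {"suite": "post-release", "release": release_version}
    ).encode("utf-8")
    return request.Request(
        f"{base_url}/api/test-runs",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A deploy script would then send it, e.g.:
#   request.urlopen(trigger_test_run("https://tests.example.com", "2.4"))
```

Wiring this into the last step of deployment means the testing window starts the moment the release lands, not when someone remembers to ask for it.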
So, there you go. Post-release testing doesn’t have to be a last-minute dash.
Planned correctly, it can become a seamless and essential part of your release process. One that’s guaranteed to ensure whatever it is you’re cooking up leaves customers hungry for more – not struggling with the aftertaste.
Have you got business requirements or software releases you need support with? Our professional testing team is waiting and ready to help. Get in touch.