Most ecommerce teams recognise the pattern: a release has been through regression, key journeys work, and nothing looks obviously broken. And yet, once it reaches production, something doesn’t land quite as expected. Conversion dips, users hesitate, or a journey behaves unpredictably on certain devices or under certain conditions.

This tension was the focus of a recent live webinar discussion between Digivante, QE Babble, and Specsavers, in which we compared notes on where traditional QA approaches start to struggle in real ecommerce environments, and how teams are adapting.

The conclusion wasn’t that regression testing is broken. Far from it. It’s that, on its own, it can’t surface every risk modern ecommerce teams are exposed to.

What regression testing does well (and where it stops)

Regression testing is the backbone of reliable ecommerce delivery. It protects known functionality, prevents repeat issues, and gives teams confidence to release frequently without reintroducing old problems.

The limitation is simple and structural: regression is based on what you already know to test.

Ecommerce users rarely behave in neat, predictable ways. They switch devices mid-journey, browse on unreliable connections, interact with features in combinations no one anticipated, and take paths that feel minor internally but matter hugely when real customers and real revenue are involved.

In high-traffic environments, small points of friction can have outsized consequences.

Exploratory testing and the value of the unknown

Exploratory testing exists to address that gap. Instead of confirming expected behaviour, it focuses on uncovering unexpected behaviour: the things teams didn’t realise they needed to test for.

This distinction came up repeatedly during the discussion, particularly in examples shared by Jack Duncan, Principal Quality Engineer – Acceptance at Specsavers, who spoke candidly about how regression and exploratory testing play different roles in practice. As Jack explained, regression validates the known. Exploratory testing reveals how users actually behave, often surfacing issues that internal teams didn’t know existed.

Crucially, this isn’t work that sits outside existing QA processes. When done properly, exploratory findings feed directly back into regression:

  • new behaviours become new test cases
  • unexpected friction points become known risks
  • future releases become more resilient

Over time, the unknowns shrink, and the regression suite grows to reflect how customers actually behave.

Why internal QA teams can’t do this alone (even when they’re excellent)

For most internal QA teams, the challenge isn’t capability. It’s constraint. Even strong teams are limited by:

  • time and release pressure
  • access to a wide range of real devices and environments
  • the cost of scaling exploratory testing on demand
  • natural bias that comes from being close to the product

None of this reflects a lack of skill. It’s simply the reality of operating inside fast-moving ecommerce organisations. This is where mature teams bring in external testing support, not to replace internal QA, but to extend it.

How crowdtesting fits into a mature QA strategy

Crowdtesting has a mixed reputation, largely because, when poorly managed, it creates noise rather than insight. Used properly, it does the opposite.

In effective setups:

  • internal teams define scope, priorities, and risk boundaries
  • exploratory testing is targeted, not indiscriminate
  • real users test on real devices under real conditions
  • issues are validated and reproduced before being shared

The result isn’t a flood of bugs. It’s clear, actionable insight that teams can trust and act on quickly.

This model works particularly well in ecommerce, where device fragmentation, browser variation, and unpredictable user behaviour are part of everyday reality.

A real-world ecommerce example from Specsavers

One of the most practical moments in the conversation came when Jack shared how Specsavers has used exploratory crowdtesting to diagnose production issues that internal teams struggled to reproduce reliably.

In one case, the issue was visible to customers but inconsistent internally. By introducing real users on real devices, the team was able not only to reproduce the original problem but also to uncover additional issues that hadn’t been visible before.

Once fixed, the same testing approach was rerun to validate the solution.

That loop – explore, learn, feed back into regression – is what turns crowdtesting from a safety net into a strategic capability.

It also reflects a longer-term way of working. Digivante has supported Specsavers over time by helping to scale test coverage, reduce regression cycles, and increase confidence in ecommerce releases.

Why this matters more in ecommerce than most sectors

Ecommerce platforms sit at the intersection of speed, scale, and customer expectation. Releases are frequent, traffic is unpredictable, and user patience is thin. Journeys can technically “work” while quietly undermining trust, confidence, and conversion, which is why some of the most damaging issues are also the hardest to detect.

This was a recurring theme in the live discussion between Digivante, QE Babble, and Specsavers. Rather than focusing on tools or tactics in isolation, the conversation centred on how mature ecommerce teams balance regression, exploratory testing, and real-world risk when the cost of getting it wrong is high.

If this article resonates, the full webinar is worth watching. It goes deeper into how these decisions are made day-to-day, with practical examples from Specsavers and Digivante’s perspective on applying exploratory crowdtesting in live ecommerce environments at scale.

Watch the webinar: Crowd Testing 101 with QE Babble, Digivante, and Specsavers

If you’d like to see how this way of working has played out over time, you can also read our Specsavers case study, which details how Digivante has supported increased test coverage, faster regression cycles, and greater confidence in ecommerce releases.

Read the Digivante × Specsavers case study