In the earlier days, when most companies used a waterfall style of development, the industry joked that Google had its products in beta forever. In fact, Google has been a pioneer in building the case for testing in production. Traditionally, a tester has been responsible for testing all scenarios, both defined and exploratory, in a test or staging environment before the build could go live. But today, this premise is changing on several fronts.
For instance, the tester is no longer alone in testing. Developers, designers, build engineers, other stakeholders, and end-users, both within and outside the product team, are testing the application to provide feedback.
The test environment, product under test, underlying technologies, and test combinations (devices, platforms, browsers, etc.) are all far more complex today. The services mindset now dominates; gone are the days of a local setup for testing the product. Cloud environments offer the scale and ease needed to test complex interfaces, and the production environment offers unique testing opportunities that cannot be fully replicated before release.
And the constraints the test team works within, including time, cost, and availability of niche testers, are more prominent than ever before.
With all these factors at play, testing in production is inevitable. However, instead of looking at testing in production as an option that is being thrust on the teams, if one closely looks at the inherent benefits it holds, this exercise can greatly help in beefing up the quality of the product.
So, what does it really mean to test in production, what are some of the ways in which it can be done, and how does it help?
What It Means to Test in Production
Testing in production is an exercise where the quality function of validating and verifying an application is taken up in the live environment, after release, either by the testers themselves or by end-users. There could also be others, such as businesspeople, marketing teams, or analysts, who share feedback with the product team. Testing in production gives more realistic opportunities to test, increases application transparency between the core product team and users, and supports the idea of continuous development through continuous testing. In this mobile-first world, testing in production is a core technique to embrace in your testing process.
Of course, there is a negative connotation to issues reported from the field: traditionally, they meant the tester had not effectively and comprehensively tested the product. An issue reported from the field is handled with very high priority and fixed as soon as possible. While issues that show up in production can still leave a black mark on the tester's effort, a test-in-production initiative nowadays has an expanded scope. It is not just reactive action in response to user-reported issues, but also proactive, planned test efforts that are taken up before the product is officially rolled out.
Techniques for Testing in Production
The core techniques for testing in production encompass active and passive monitoring (involving real data and synthetic transactions), experimentation (both controlled and uncontrolled) with real users, and stress tests to monitor system response. While these may sound simple, one has to be extremely sensitive in all of these tasks, as live users' data is often involved and has to be protected (the sensitivity concerns not just data loss, but also data privacy and security). Also, the volume of transactions in live environments is so high that any test effort here has to be adequately monitored and followed up on.
Techniques such as telemetry, or diagnostics around software usage, bring visibility into the outcomes of testing in production. While active monitoring focuses more on user-generated outcomes, passive monitoring relates to test effort from the test teams with synthetic data. An example of active monitoring would be engaging a beta crowd of users to provide feedback, whereas passive monitoring is when the test team initiates monitoring either by itself or through the rest of the product teams. This method can also include the operations and support team executing a set of automated sanity tests on an ongoing basis to keep tabs on the health of the application in the live environment.
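To make the idea of ongoing automated sanity tests concrete, here is a minimal sketch of a synthetic-check runner. The `run_sanity_suite` helper, the check names, and the simulated pass/fail lambdas are all hypothetical; in a real setup each check would exercise a live endpoint using clearly marked synthetic accounts and data, never real users' information.

```python
import time
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class CheckResult:
    name: str
    passed: bool
    latency_ms: float


def run_sanity_suite(checks: Dict[str, Callable[[], bool]]) -> List[CheckResult]:
    """Run each synthetic check against the live environment and record the outcome.

    A raised exception counts as a failure, so a broken endpoint
    never aborts the rest of the suite.
    """
    results = []
    for name, check in checks.items():
        start = time.perf_counter()
        try:
            ok = check()
        except Exception:
            ok = False
        latency = (time.perf_counter() - start) * 1000
        results.append(CheckResult(name, ok, latency))
    return results


# Hypothetical synthetic checks; real ones would hit production endpoints
# with dedicated test accounts and canned (synthetic) transactions.
checks = {
    "login": lambda: True,      # e.g. sign in with a synthetic test user
    "search": lambda: True,     # e.g. run a canned query and validate results
    "checkout": lambda: False,  # e.g. a dry-run checkout that is currently failing
}

results = run_sanity_suite(checks)
failures = [r.name for r in results if not r.passed]
```

A scheduler (cron, CI, or an operations dashboard) would run such a suite on an interval and page the team whenever `failures` is non-empty.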
Experimentation through a controlled form often involves techniques such as A/B testing to gauge user feedback on specific scenarios, while uncontrolled experimentation relates to beta testing and crowdsourced testing. Stress tests in a live environment need to be closely monitored, especially ones performed during peak load seasons for the application. For example, a shopping application’s peak season is during the holiday season or specific sale offers they introduce. Instead of passively waiting to hear issues from the field during such peak seasons, testers should monitor loads at such times proactively and have a ready team to address issues.
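The controlled-experimentation idea above can be sketched with a deterministic bucketing function. The `ab_bucket` helper and its parameters are illustrative assumptions rather than a standard API, but hashing the user ID together with the experiment name is a common way to keep a user's A/B assignment stable across sessions while giving independent splits per experiment.

```python
import hashlib


def ab_bucket(user_id: str, experiment: str, treatment_pct: int = 50) -> str:
    """Deterministically assign a user to 'A' (control) or 'B' (treatment).

    The same (user, experiment) pair always hashes to the same bucket,
    so a returning user sees a consistent variant.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash onto 0..99
    return "B" if bucket < treatment_pct else "A"


# A given user lands in the same bucket every time for a given experiment,
# but may fall into different buckets for different experiments.
assignment = ab_bucket("user-42", "new-checkout-flow")
```

Downstream, the team would log `assignment` alongside the user's behavior and compare conversion or engagement metrics between the two groups.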
Relating to the above, products today also have a persistent social media presence. Retail stores and even desktop applications have their own dedicated pages on Facebook, Twitter, and LinkedIn. The tester has to proactively watch for discussions on these forums to see what users have to say about the product—usability, performance, overall functionality, and UI are some of the top areas to seek feedback. User field studies, visits to enterprise deployments (in the case of enterprise applications), and booths at events and conferences are all great places to get live feedback, which ultimately flows into the bucket of testing in production.
Words of Warning
While testing in production has ample scope and potential, it is not an invitation for testers to delay their testing responsibilities until after release. The collaboration and collective ownership that quality has evolved toward require others on the team, such as developers, designers, architects, and build engineers, to also take up testing in all environments, including live ones. The truly passionate tester will take advantage of this to free up cycles and take on bigger and better things.
It may also be tempting to adopt testing in production at an organizational level to promote faster time to market at lower cost, but such a strategy should not be encouraged. Product quality, user loyalty, brand acceptance in the marketplace, and the overall positioning of the test team can all suffer.
Testing in production should be seen as a double-edged sword: very effective when used correctly, but harmful when entered into unprepared. With the bounds well defined and the value proposition outlined, testing in production has a lot to offer in the coming years, especially as the lines between the product team and end-users get increasingly blurred.