Testing the Unexpected: A Shift Right in DevOps Testing

Summary:
When it comes to testing in DevOps, more than simple regression checking can be automated. By shifting right in the lifecycle and testing in production, you can analyze the undefined, unknown, and unexpected by relying on real traffic and unpredictable test input. With shorter implementation and release cycles, testing and production come closer together.

DevOps is not only about developing and releasing applications; testing is also an essential part of the practice. But the role of testing in DevOps is not as visible as the other practices, and there are often disputes about how testing should be performed.

Due to the increasingly automated nature of the software delivery lifecycle in DevOps, continuous testing has become a popular movement. However, test automation gives many testers pause. Though automation can be helpful, particularly in a DevOps environment, it cannot (and should not) replace testers entirely.

In particular, the context-driven testing community emphasizes that testing and automated checking are different and that validating expected results is not testing. Through exploring and experimenting, testers play a crucial role in ensuring the quality of software products.

However, by insisting that real testing activities cannot be automated at all, testers are left out of the continuous testing conversation. That introduces the risk that important improvements in test automation will be shaped more by software engineers than by the testing community. There is a need to find an adequate response to the demand to orient testing toward an accelerating pace of development, with shorter implementation and release cycles. The question is not whether testing will change, but rather who will drive the innovations in testing.

A Case for Shifting Right

You’ve probably heard of the “shift left” movement in software testing, describing the trend toward teams starting testing as early as possible, testing often throughout the lifecycle, and working on problem prevention instead of detection. The goals are to increase quality, shorten test cycles, and reduce the possibility of unpleasant surprises at the end of the development cycle or in production.

But there’s another new idea: shift right. In this approach, testing moves right into production, examining functionality, performance, failure tolerance, and user experience through controlled experiments. By waiting to test in production, you may find new and unexpected usage scenarios.
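To make the idea of a controlled experiment in production concrete, here is a minimal, purely illustrative sketch in Python: a canary-style split that routes a small share of real traffic to a new version and compares error rates before widening the rollout. The endpoints, the 5 percent split, and the error threshold are assumptions for the example, not details from any particular team.

import random

import requests

# Illustrative endpoints and parameters -- assumptions, not prescriptions.
STABLE_URL = "https://api.example.com/v1/checkout"   # current production version
CANARY_URL = "https://api.example.com/v2/checkout"   # new version under test
CANARY_SHARE = 0.05                                  # route about 5% of real traffic to the canary

stats = {"stable": {"calls": 0, "errors": 0},
         "canary": {"calls": 0, "errors": 0}}

def route(payload):
    """Send one real production request to either the stable or the canary version."""
    variant = "canary" if random.random() < CANARY_SHARE else "stable"
    url = CANARY_URL if variant == "canary" else STABLE_URL
    stats[variant]["calls"] += 1
    response = requests.post(url, json=payload, timeout=5)
    if response.status_code >= 500:
        stats[variant]["errors"] += 1
    return response

def canary_is_healthy(max_error_rate=0.01):
    """Keep widening the rollout only while the canary's error rate stays under the threshold."""
    calls = stats["canary"]["calls"]
    return calls == 0 or stats["canary"]["errors"] / calls <= max_error_rate

In practice this routing would live in a load balancer or feature-flag service rather than in application code, but the principle is the same: real users generate the test input, and the experiment is judged by observed behavior rather than by predefined expected results.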

The popular agile testing quadrants model classifies tests based on whether they are business- or technology-facing and whether they primarily guide development or critique the product. However, Gojko Adzic observed that "with shorter iterations and continuous delivery, it’s difficult to draw the line between activities that support the team and those that critique the product." As an alternative, he suggested distinguishing between checking for expected outputs and looking to analyze the undefined, unknown, and unexpected:

Gojko Adzic's alternative to the agile testing quadrants model
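To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. The first test is an automated check: a known input must produce a known, expected output. The second feeds unpredictable input and asserts only broad properties, looking for the undefined and unexpected. The normalize_username function and the test data are assumptions invented for the example.

import random
import string
import unittest

def normalize_username(name):
    """Example system under test (illustrative only)."""
    return name.strip().lower()

class CheckingVersusExploring(unittest.TestCase):
    def test_expected_output(self):
        # Automated checking: a known input, a known expected result.
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_unpredictable_input(self):
        # Analyzing the unexpected: random input, only broad properties asserted
        # (no crash; result is trimmed and lower-case).
        alphabet = string.printable
        for _ in range(1000):
            raw = "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))
            result = normalize_username(raw)
            self.assertEqual(result, result.strip().lower())

if __name__ == "__main__":
    unittest.main()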

Let’s look at some perhaps unfamiliar testing approaches that focus on the unknown and unexpected.

User Comments

Venkata Rama Satish Nyayapati

Excellent, Stefan! I see some value in these approaches. I understand that certain risks would also manifest themselves when testing in production, but the idea is still to foresee them and reduce their impact as early as possible, and what better way to explore them than by testing in production?

October 10, 2016 - 2:15am
Myrna Bittner

Thank you for the great article, Stefan! What if you could test production in development? I think about your shift right - when I present, I talk about shifting completely left - maybe it forms a complete circle. :) We use bots and our continuous dynamic field simulation framework to run completely realistic production activity at scale against entire tech stacks in development. Sometimes we even use metrics from production activities for bot goal seeking. We instrument the systems completely so everyone can see everything and test, fail, and fix before any customer pain. We are having a lot of success with clients ranging from SaaS IoT (in-vehicle devices, TV set-tops) to mobile apps and web systems. There is just so much value to be found in minding the gap.

October 24, 2016 - 3:05pm
Stefan Friese

Hi Myrna,

The boundaries between development, testing, and releasing/operating get blurred, so you could argue about which trends lean toward development and which toward production. Testing with bots is a great idea - what we are doing is to some extent comparable: we use production traffic as test input, duplicating production API calls and using them as input for performance tests - see also: http://tinyurl.com/zfhlzem

October 25, 2016 - 8:04am
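The traffic-duplication approach described in the reply above could look roughly like the following sketch, which mirrors incoming API calls to a separate performance-test system while still serving users from production. The Flask front end, the backend URLs, and the thread pool are illustrative assumptions, not the author's actual implementation.

from concurrent.futures import ThreadPoolExecutor

import requests
from flask import Flask, Response, request

app = Flask(__name__)
PRODUCTION_BACKEND = "https://prod.internal.example.com/search"        # assumed URL
SHADOW_TEST_BACKEND = "https://perf-test.internal.example.com/search"  # assumed URL
mirror_pool = ThreadPoolExecutor(max_workers=8)

def mirror_to_test(params):
    """Replay the call against the test system; ignore its response and any failure."""
    try:
        requests.get(SHADOW_TEST_BACKEND, params=params, timeout=2)
    except requests.RequestException:
        pass  # shadow traffic must never affect production behavior

@app.route("/search")
def search():
    params = request.args.to_dict()
    # Duplicate the call asynchronously so production latency is unaffected.
    mirror_pool.submit(mirror_to_test, params)
    # Serve the user from the real production backend as usual.
    prod = requests.get(PRODUCTION_BACKEND, params=params, timeout=5)
    return Response(prod.content, status=prod.status_code)

Because the duplicated calls carry real production traffic, the performance-test system sees realistic, unpredictable input without any customer-facing risk.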
