Shifting Your Testing: When to Switch Gears

Shifting your testing either left or right can meet different needs and improve different aspects. How do you know whether to make a change? Let your test cycles be your guide. Just like when driving a car with a manual transmission, if the engine starts to whine or you’re afraid you’re about to stall out, switching gears may be just what you need.

Everyone keeps talking about shifting testing left. A few weeks ago, it was shifting right. But where are you now?

Before you shift your testing practices, it’s important to evaluate your current position. Let’s look at the benefits of shifting either left or right, including what needs can be met and what can be improved by shifting.

It can be helpful to have a visual, so let’s use something already associated with shifting: a car’s gearbox! As we’re talking about testing, imagine the set of gears below.

Gearbox for a car with a manual transmission

The Middle: Third Gear

This is the current situation for most companies. In development, only basic and technical checks are made. You have a dedicated testing system where your quality checks are done. In general, developers develop and do the technical stuff, and testers check the functions at different levels and in combination with other applications.

Apps are packaged so that the connections between them are dedicated to their individual functions and possibilities. This setup is rather static and requires a lot of maintenance effort and knowledge of each product, technology, and workflow. Every application belongs to a certain team, developer, or company, and those people take care of it. If you run into a problem, you have to ask that one person, and if they are not available, you are stuck. Information stored in human silos lengthens your time to market significantly.

I worked with a bank that employed this setup. They had dedicated systems with straightforward testing in every stage. Test cases were written for the dev or test systems, and real production problems had to be recreated in test to reproduce them.

Problems included a long time to market, inconsistent test data across the systems, differences between the environments of each system and stage, and some errors that occurred only when certain systems were connected. Most of the testing time went into setup: aligning all the systems and approximating how they would run in production.

A database of test data that could just be imported was a help, because users and accounts could be created on the fly with any date on them; otherwise, testing would have taken ages. But is there a better way?
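To make the idea of on-the-fly test data concrete, here is a minimal sketch of such a factory. Everything in it is illustrative: the `Account` fields, the `create_account` helper, and the IBAN format are invented for this example, not taken from any real banking system.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical test-data factory; names and fields are illustrative only.
@dataclass
class Account:
    owner: str
    iban: str
    opened_on: date
    balance: float = 0.0

def create_account(owner: str, opened_on: date, balance: float = 0.0) -> Account:
    """Create a throwaway test account with any opening date we need."""
    # Derive a fake-but-stable-looking IBAN for this test run.
    iban = f"DE00TEST{abs(hash(owner)) % 10**10:010d}"
    return Account(owner=owner, iban=iban, opened_on=opened_on, balance=balance)

# A backdated account for testing date-sensitive logic, created on the fly:
acct = create_account("Test User", opened_on=date(2015, 1, 1), balance=500.0)
```

The point is not the implementation but the capability: instead of hunting for a production-like record in a shared system, a tester asks the factory for exactly the account state the test needs.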

Shifting Left: First Gear

Why do we test so late in our development lifecycle? If we manage to find bugs earlier, we can solve them more easily, not to mention more inexpensively. So let’s shift testing left along the software development timeline, moving it into the dev environment.

This is possible because there was a shift in setup, too. Systems have migrated to the cloud, and teams have moved away from dedicated connections for every application toward an API. In an environment controlled by an API layer, every app calls a function over a service, either generic or dedicated, enabling earlier testing. Checks are done over the services: data is passed in the request and received in the response. If a system is not available, service virtualization makes it reachable.

Service virtualization is just a way of mocking any kind of connected system so testers can experiment with it. The simulation works even for complex environments, and people can train and practice as they would in the released setup, with much less risk. Data streams are recorded and replayed when needed: service calls go to the API layer, and the virtualized service answers the same way a productive system would. The application remains part of the whole environment, and a huge part of the landscape is covered already in development.
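The record-and-replay idea above can be sketched in a few lines. This is a toy illustration of the concept, not a real service virtualization tool; the `VirtualService` class and its methods are invented for this example.

```python
import json

# Toy record/replay stub illustrating service virtualization.
class VirtualService:
    def __init__(self):
        self._recordings = {}  # serialized request -> recorded response

    def record(self, request: dict, response: dict) -> None:
        """Capture a request/response pair observed on the live system."""
        self._recordings[json.dumps(request, sort_keys=True)] = response

    def call(self, request: dict) -> dict:
        """Replay the recorded response, as the productive system would answer."""
        key = json.dumps(request, sort_keys=True)
        if key not in self._recordings:
            raise LookupError("no recording for this request")
        return self._recordings[key]

# Record once against the real system, replay later in dev:
svc = VirtualService()
svc.record({"op": "getBalance", "account": "123"}, {"balance": 42.0})
assert svc.call({"op": "getBalance", "account": "123"}) == {"balance": 42.0}
```

Real service virtualization products add matching rules, latency simulation, and stateful behavior on top of this basic pattern, but the core exchange is the same: the app under test calls the API layer and cannot tell the recording from the real system.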

A module-based testing approach also changes the game. Developers create modules that testers use later. Think of the different perspectives on the system under test: developers look only at their part, while testers look at the combination of systems and see the bigger picture across the system landscape. Reusing these artifacts opens a new way of building process chains, or they can be used to extend the simulation, which is just a matter of changing the start and end points. Developers can define connection details like users, passwords, endpoints, and security up front, and testers simply take them for the test environment. (Yes, just a test environment. It is not meant to be a dedicated testing area without virtualization, where the systems really interact with each other.)
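A minimal sketch of what such a reusable module might look like, assuming a hypothetical "get customer" service call; the endpoints, user names, and field names are made up for illustration. The module encodes the call once, and only the environment configuration changes between dev and test.

```python
# Per-environment connection details, defined up front by developers.
# Hostnames and users are placeholders, not real systems.
DEV = {"endpoint": "https://dev.example.internal/api", "user": "dev_bot"}
TEST = {"endpoint": "https://test.example.internal/api", "user": "test_bot"}

def get_customer_module(env: dict, customer_id: str) -> dict:
    """Build the request for the 'get customer' service call in a given environment."""
    return {
        "url": f"{env['endpoint']}/customers/{customer_id}",
        "auth_user": env["user"],
    }

# Developers define the module once; testers just swap the environment:
dev_req = get_customer_module(DEV, "42")
test_req = get_customer_module(TEST, "42")
```

Because the connection details live in one place, a change to an endpoint or a credential is maintained once instead of being hunted down in every test case.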

Test data and variants of all tests can be shared between dev and test, so a useful test can already be run in dev, depending on its importance and the recorded data flow. Multiple people can share knowledge about how to drive the tests. The service call can be very generic and is highly dependent on the data provided. This means finding bugs earlier, which reduces testing effort in later stages and shortens your time to market.

I also worked with a credit card company in the process of shifting their testing left in this manner. They had already managed to get rid of the static staging and move to APIs, and even though the APIs were not yet generic enough and were still specific to certain functions, the team detected many errors early. They had a dedicated testing team focusing on the API layer and checking the connections between it and the applications. Testers created modules together with developers, representing the requests and responses. The same modules could also be reused in testing with different connection parameters, which saved a huge amount of time because everything was predefined, and the dev team knew what parameters the testers needed in case of any change. Big issues and data inconsistencies were already caught in the dev environment and could be solved before the deployment to test.

They had just started to mock apps with service virtualization, but for the one app they had already virtualized, the developers told me how much further their checks could now go because of the simulated application behavior, which decreased maintenance and testing time later on.

Shifting Right: Fifth Gear

This method is special because we mix things up and try to get the best of both worlds. Our number of testers increases significantly, but our time to market needs to be very short, because testing happens in production.

One of the best examples of shifting testing right is Amazon. Its customers, the users of its product, are the testers. Operations are highly integrated and gain significance in QA. This works best if you have already gone through shifting your testing left and have just one big product that moves through an automated build.

To ensure testing in production works, more effort is needed than in the other gears. Testing processes have to be defined close to perfectly, and all work should be automated, favoring the API layer. Whatever you forget to check will be present in production, so an area that wasn't covered in test could lead to real damage. Your product should already be super stable and perhaps not have too many connected systems, or at least everything should be tested with above 90 percent risk coverage; otherwise, operations will not be able to deploy safely.

Every part should be handled independently. One deployment to production should consist of small bits and pieces, because this ensures better test coverage. Huge features can cause more side effects than expected, so you want to have them included in your tests as early as possible. The final test itself will be handled by the users, and their feedback will surface the bugs.

Nevertheless, a fully implemented continuous integration QA pipeline is required, triggered during the build. Most of the tests happen automatically within the CI tool, and the build only completes if the tests are green. Ops takes care of the delivery pipeline, so testers need to create and maintain tests and check the results. Ideally, testers should be able to recognize false positives and assess whether test or dev involvement is needed.
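The gating step, "the build only completes if the tests are green," boils down to checking the test suite's exit code before triggering the build. Here is a toy sketch of that gate; the test command is a stand-in placeholder, since in a real pipeline this logic lives inside your CI tool.

```python
import subprocess
import sys

def run_tests() -> bool:
    """Return True only if the whole suite passes (exit code 0).
    The command below is a placeholder that always succeeds; a real
    pipeline would invoke its actual test runner here."""
    result = subprocess.run([sys.executable, "-c", "print('tests ok')"])
    return result.returncode == 0

def pipeline() -> str:
    """Gate the build step on the test result."""
    if not run_tests():
        return "build blocked: red tests"
    return "build triggered"
```

CI tools express the same idea declaratively (a build stage that depends on a passing test stage), but the contract is identical: a nonzero exit code anywhere in the test stage blocks everything downstream.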

If you have walked through the whole gearbox, you gain an advantage from first gear: the API modules from dev can be reused here, and maintenance on a technical level is done just once. The final execution state is based on feedback. There is no need anymore for go-live discussions after your tests; it's more about getting emails from upset customers.

It is a good option to increase user acceptance and build features based on user experience, but your product needs to be at a certain level to go for it. If your change does not carry much business risk (meaning both the damage in case of failure and the frequency of use are low), you can easily trust your gear. In every other case, make sure that all tests cover as much as possible and all side effects are detected. The higher your risk identification and test coverage, the higher the trust in your pipeline and your setup for delivering your product quickly and securely.

I’m currently setting up this process for a client that’s a mobile provider. Their application is ready, and they are just adding feature after feature in smaller parts. Every third-party application is virtualized, and the connections run through APIs. Every build passes through a CI pipeline and is checked on the dev side up front with automated API and JUnit tests.

If this is fine, it’s moved automatically to a dedicated test environment. There, a CI tool reimages the machines, and the solution, including the environment and testing tool, is built. Once done, automated functional tests are run via the UI, via the API, and on real devices. If all tests are green, the build is passed to production and built there, without any manual input. This requires trust in the systems and applications, but it saves time and money.

Changes to tests happen across all stages just by maintaining a single object: one test is created and affects multiple environments, thanks to prebuilt modules. The biggest challenge here is aligning the virtualization of the different stages. During test case creation, we introduced a staging step and used the four-eyes principle to check that everything was tested and covered.

Don’t Be Afraid to Switch Gears

Every environment is dependent on many factors, including application, team, and infrastructure setup. Different testing strategies and approaches can be used for different stages, and if your process moves you to a different stage—or if you think a different stage could serve you better—then shift in that direction.

Every shift is a process, not a moment; your journey will create the moments for you. Don’t hesitate to shift your testing through the different phases of the software lifecycle if you think it will benefit your processes. And don’t let the idea of having to create an entirely new test plan hold you back: Test reusability is high, and something defined in the third gear may save you time and money in the fifth.

Let your test cycles be your guide. Just like when driving a car with a manual transmission, if the engine starts to whine or you’re afraid you’re about to stall out, shifting gears may be just what you need.

StickyMinds is a TechWell community.