Where Your Money Is Lost in Testing

Summary:
Companies that want to reduce testing costs usually try working with fewer people, or even cutting back on the amount of testing done. With those approaches, quality usually suffers, and releasing a critical bug and suffering the subsequent pain usually costs multiple times what testing would. There are better ways to save money, simply by being smarter about our test cases and their structure.

Testing costs money, obviously. But releasing a critical bug and suffering the subsequent pain usually costs multiple times what testing would, and that is before taking marketing and brand damage into consideration.

Testing is a must and there is no way around it, but people still want it to be more efficient and cheaper than before.

In the first iteration, testing was moved from business testers to offshore teams. Then testing came nearshore, and now I see testers working in agile teams together with developers. The lesson learned: keeping testing as close as possible saves money, because the team can react to changes more quickly.

On the one hand, companies want to be agile, to be faster through DevOps, to shorten their release cycle, to go to market as fast as possible, to integrate all tests in the CI pipeline, and to be generic enough that they can test multiple environments, whether they are real or simulated. But on the other hand, these same companies do not want to spend money on tools and services to get an overview of their quality and the highest risk coverage possible.

What is the benefit of migrating to the latest enterprise cloud and rebuilding the system completely if you do not check that all the processes are working like they did before? What if you spent time and money on test cases that cover only half your risk, but you don’t know that because no one thought to check test coverage? Those questions may tell another story.

We need to think more about saving costs through our test cases themselves and their structure.

Saving Money on Test Cases

As systems change over time, we need to adapt our test cases to keep them stable and executable. If you change something on the technical side—a new control, a changed control, a new page, a new feature, new environment variables, etc.—you need to reflect that change in your test case.

Adaptation and maintenance consume most of the money spent in testing, but there are simple changes that can reduce these efforts.

For instance, it will save you a lot of money to separate the technical objects from the rest and centralize them, so that adjusting a single object maintains all connected tests. If you have a login window with the username field, password field, and login button, you could group those controls into one object, which you could call a module, or define a collection of all application controls together, like a base class, and reuse them everywhere.

This approach mirrors object-oriented programming and creates reusable artifacts. You can take one page, list all the controls on that page, map each of them in your test case, and reuse those objects every time. Technical changes will have less impact on your tests, the tests will be more stable, and maintenance for all test cases can be done with a single change, because you only have to modify the one module containing the changed control.
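Here is a minimal sketch of that idea in Python, assuming Selenium WebDriver; the page URL and the locator IDs are hypothetical and would come from your own application.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """One module for the login window: all of its controls live here.

    If a locator changes, this is the only place to update;
    every test that uses the module stays untouched.
    """

    # Hypothetical locators -- adjust to your application
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    LOGIN_BUTTON = (By.ID, "login")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.LOGIN_BUTTON).click()


# Any test reuses the module instead of repeating locators:
def test_login_succeeds():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # hypothetical URL
        LoginPage(driver).login("alice", "secret")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

When the login button's locator changes, only LoginPage has to be touched, no matter how many tests log in.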

Typical testing today means writing code in code-based frameworks, and most of the money is spent maintaining tests against the technical changes that arrive with each new version of a feature. Apply object orientation on the technical side and you will save costs that can be reinvested into innovation and growth.

Technical issues are still the main reason tests break in current setups, but what about test data? I have seen multiple customers use the same test data repeatedly, creating a new test case just to run another set of data.

You can change that with a database and a methodical approach that separates the data from the test case, and partially even from the tool or code. Reading a prepared file from the business is another valid way to solve it. The benefits will change your testing game immediately!

From then on, business users can change the tested data on their own, without involving the testers. They can add more data sets or remove outdated ones. The SQL queries change dynamically depending on the input data, and even masked production data can be used, because a change affects only the data in the database that the query pulls.
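A minimal sketch in Python with SQLite can illustrate the separation; the table and column names are hypothetical. The test only reads whatever rows the business currently maintains:

```python
import sqlite3


def fetch_test_data(db_path, test_case_id):
    """Pull the data rows linked to a test case from a shared table.

    Business users maintain the rows; the test just consumes them.
    """
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT username, password, expected_result "
            "FROM test_data WHERE test_case_id = ?",
            (test_case_id,),
        ).fetchall()
    finally:
        con.close()


# A data-driven test iterates over whatever rows exist today:
for username, password, expected in fetch_test_data("testdata.db", "TC-LOGIN-01"):
    print(username, expected)  # run the login block once per row instead
```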

Business users also gain visibility into which data each test consumes, and they can easily react to changed requirements and added features if the data is already connected with a test case. A randomized test data approach can also be used, where the query always returns a random data set that meets certain defined criteria. Depending on the data set, this is actually closer to a production environment than a testing one.
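The randomized variant can be as simple as putting the criteria in a WHERE clause and letting the database pick the row. This sketch assumes SQLite, where ORDER BY RANDOM() exists (other databases have equivalents, such as RAND() in MySQL); the materials table is hypothetical:

```python
import sqlite3


def random_material(con):
    """Return one random row that meets the defined criteria,
    so every run exercises a different, but always valid, data set."""
    return con.execute(
        "SELECT material_id, plant FROM materials "
        "WHERE status = 'ACTIVE' AND stock > 0 "
        "ORDER BY RANDOM() LIMIT 1"
    ).fetchone()
```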

Do you already see how this affects your costs? The business can find errors easily, because they know the data that was used and can reproduce the failure. Randomized test data increases test coverage and can be pulled from production data, which is already there and just needs to be anonymized. Risk coverage increases with this masked production data, because you test what users are actually using. Whenever users gravitate toward a certain feature, you can already test it with the correspondingly trended data in your test environments.

Additionally, the workflow through your application is another place to shrink maintenance to a minimum and save money. How easy would it be to generate a data-driven test case structure using the data in the way I described above?

Create blocks of predefined test steps and reuse them like objects. Make them as generic as possible and dependent on the data you use, with simple conditions. Whenever the workflow changes, you again have a single point of change, and adding new variants to your test case will be easy. One change will affect all your test cases, with all the consumed data for each specific use and need.
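In Python, such blocks could look like the following sketch; the step functions, the data keys, and the condition are hypothetical stand-ins for real test steps:

```python
# Reusable, data-driven step blocks. Each block is generic and only
# looks at the data it is given.

def login(data):
    print(f"log in as {data['user']}")

def create_order(data):
    print(f"create order for material {data['material']}")

def verify_error(data):
    print(f"expect error message: {data['expected_error']}")

def verify_success(data):
    print("expect success confirmation")

def build_test(data):
    """Assemble the test from generic blocks; a simple condition on
    the input data decides which verification block is pulled in."""
    steps = [login, create_order]
    steps.append(verify_error if data.get("expected_error") else verify_success)
    return steps

# Two data rows drive two variants of the same workflow:
for data in (
    {"user": "alice", "material": "M-1000", "expected_error": None},
    {"user": "alice", "material": "GHOST",
     "expected_error": "material does not exist"},
):
    for step in build_test(data):
        step(data)
```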

For one worldwide business, my team and I changed their testing completely just by clustering the test cases together and extending them with a data-driven approach. Conditions drove how the test cases behaved during execution against the system under test. Imagine marking a material as nonexistent in your system: the test case knows automatically, just from the input data, that it needs to verify the error message instead of the success. If another error needs to be checked in another system, you just pull in that block and add its data dependency to your structured test case.

Another option is to cut test cases down into small pieces and treat the sequence of those pieces as the real test case. You define a list of test cases that are parts of the whole. Newly created data can be passed through a database and forwarded dynamically by a predefined status. This approach may lead to multiple reruns of several data-generating test cases, but it increases the reusability of test case blocks, enables a single point of change, and reduces duplicates throughout your test portfolio.
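Here is a sketch of that status-driven handoff, again in Python with SQLite and a hypothetical schema: each part writes its output data with a status, and the next part picks up whatever has reached the status it needs.

```python
import sqlite3


def setup(con):
    con.execute(
        "CREATE TABLE IF NOT EXISTS flow_data (order_id TEXT, status TEXT)"
    )

def create_order(con):
    """Part 1: generates data and marks it ready for the next part."""
    con.execute("INSERT INTO flow_data VALUES ('ORD-1', 'CREATED')")

def approve_order(con):
    """Part 2: consumes any order that has reached status CREATED."""
    row = con.execute(
        "SELECT order_id FROM flow_data WHERE status = 'CREATED' LIMIT 1"
    ).fetchone()
    assert row, "no order ready for approval"
    con.execute(
        "UPDATE flow_data SET status = 'APPROVED' WHERE order_id = ?", row
    )


con = sqlite3.connect(":memory:")
setup(con)
create_order(con)   # the sequence of small parts is the real test case
approve_order(con)
```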

It is no longer necessary to spend so much money on the creation of new test cases. Testers just pull their test cases together from the predefined blocks and only have to ensure the data flow is intact. Only completely new features need to be created as new structured blocks, which can then be added to your library of test steps. Whenever another test case passes through your new feature (or app in the middle), the new block is pulled into that test case, and whenever it changes, only a single change in this specific block is needed.

Saving Money on Test Structure

Moving away from the test cases themselves, you can also save a large amount of money in the planning and initial setup phases of your tests.

Imagine how much faster it is to have test data created up front, outside the user interface, instead of creating it manually or through UI-based test cases. The data can simply be loaded into a database and picked up by the test cases, whether it is linked to a specific case or selected by certain criteria.

If you run a microservices environment, you can fire API calls automatically, unattended and in parallel, to set up a test-ready environment for automated test case runs. This means creating non-UI test cases that create or gather test data; depending on your setup, you can either save the data in a database or call the services directly during your test.
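As a sketch of that idea in Python, using only the standard library; the endpoint and the payloads are hypothetical:

```python
import concurrent.futures
import json
import urllib.request


def create_customer(payload):
    """Seed one customer record through the service API, not the UI."""
    req = urllib.request.Request(
        "https://api.example.test/customers",  # hypothetical service
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Unattended and in parallel, before the UI tests start:
payloads = [{"name": f"customer-{i}"} for i in range(20)]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    created = list(pool.map(create_customer, payloads))
```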

Service virtualization can also save you costs on hardware or third-party vendors while shifting your testing to the left and increasing your test environment's uptime. Check dependencies early in your testing process to avoid downtime, and set up substitutes for unstable apps, features, or web services. Standardize tools and methods to reduce the cost of ramping up people, and adopt simple reporting that does not require integrating multiple tools.
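A virtualized service can be as small as a local stub that answers like the real dependency while that dependency is down or metered. This sketch uses Python's standard library; the path and the canned response are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubHandler(BaseHTTPRequestHandler):
    """Answers like the real third-party service, with canned data."""

    def do_GET(self):
        if self.path == "/rates/EUR":  # hypothetical vendor endpoint
            body = b'{"currency": "EUR", "rate": 1.08}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Point the system under test at http://localhost:8080 instead of
    # the real vendor, and the tests run independently of the vendor.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```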

There are many ways of saving costs in testing, but thinking up front about what you want to do is the best course of action, rather than having to switch later to a more efficient approach. Clear structures and strategic testing, supported by an object- or model-based approach, should lead to success.

You will save money during testing through artifact reuse: data-driven test step blocks and generic technical elements steered by masked production data. The technical layer, the workflow, and the test data need to be separated and adapted to enable a single point of change for every object within your testing. Evaluate tools to standardize on, and take advantage of built-in frameworks that may already solve your problems and fulfill your needs.

Think twice, think bigger, and think end to end to save costs and build a framework up front that is reliable and reusable.
