When it comes to testing, there tends to be a differentiation between “production software” and everything else. But our ideas and principles about testing software are true for all software, not merely the code that will run in front of customers or the APIs that make things happen. Any software built for a purpose needs to be tested against that purpose, including the software running our test automation.
There appears to be a great divide, or at least a misunderstanding, when it comes to software. There tends to be a differentiation between “production software” and everything else. Let’s try to shed some light on this notion and the fallacy around it.
Most people in software believe that production software should get tested. What this testing looks like varies and is often based on each person’s background, training, and experience.
There are those who demand that everything be tested by hand: the person executing tests must walk through the various steps one at a time, observing and noting the results of each action. Then there are those who assert that this is a waste of time and that all testing should be automated.
There are some who believe a mix of automated testing and hands-on testing is best. There are a few who believe testing is a good thing in itself and should be done. They also tend to believe the testing should be done by someone else.
There are some problems in each of these firm beliefs, aside from the obvious takes of “I don’t want to do this, but someone should” and “I know better than everyone how this should be done in all circumstances.” The challenge is finding where a reasonable person could land and feel comfortable.
Let’s consider the one thing these beliefs (and their believers) have in common: software.
Most people can agree that software should be tested. To have confidence in how a given piece of software operates, whether it gives us the results we expect, and whether it can do whatever it is we need it for, we need some level of certainty. At the very least, we need some level of understanding as to how it behaves.
It is a good idea to know under what circumstances the software does what we expect. It is likewise good to know what it takes to provoke unusual or unexpected behavior.
To get that information, we need to do some amount of testing.
The challenge of testing is not filling out forms. It is not creating test plans or scripts or cases. The challenge is not building the tool to test the software.
The challenge is the hard thinking around what questions need to be considered. Someone needs to dive in and look at expectations set, promises made, and the problems the software intends to solve.
These things are usually done by someone in the role of “tester,” and what they do is “testing.”
The problem with the term “testing” is the baggage that tends to come with it. There are those who say testing is about breaking software. There are people who seem to gloat over finding a problem in someone else’s work. These views presume a rivalry or contest between the people developing the code and those exercising it.
At best, this is counterproductive and calls into question how well teams work together.
I define testing as a systematic evaluation of the behavior of a piece of software, based on some model.
The focus here is on the behavior. How does it act?
By keeping focus there, we can find behavior that is not what we expected. It may not be wrong or incorrect; it is simply not what we thought we’d see. This gives the tester an opportunity to clarify understanding around that particular function.
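One way to make “evaluation against a model” concrete is a test that encodes the model as an explicit expectation, so any deviation in behavior surfaces as a question to investigate rather than a vague surprise. The sketch below is entirely hypothetical (the function, names, and values are invented for illustration): the model says monetary totals round half-up to cents, and the test checks the behavior against that model.

```python
# Hypothetical example: our "model" says order totals round to two
# decimal places, half-up, the way an accountant would expect.
from decimal import Decimal, ROUND_HALF_UP

def round_total(amount: float) -> Decimal:
    """Round a monetary amount to cents using half-up rounding."""
    return Decimal(str(amount)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_round_total_matches_model():
    # Expected behavior, taken straight from the model.
    assert round_total(19.995) == Decimal("20.00")
    # Note: the float-based built-in round() would give 19.99 here,
    # because 19.995 cannot be stored exactly in binary floating point.
    # That is behavior worth noticing: not necessarily wrong, but not
    # what someone holding the half-up model would expect.
    assert round_total(2.675) == Decimal("2.68")
```

The point is not the rounding itself but the shape of the test: the model is written down, the behavior is observed, and a mismatch becomes a prompt to clarify understanding.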
This becomes challenging when the people charged with testing the software are not actively involved in discovering and documenting requirements and expectations. It becomes harder still when people retreat into a void and come out some time later with a “finished” design or software product.
One of the strengths of integrated, collaborative teams is people can support and learn from each other. Questions can be asked and answered without waiting for approval from some level of bosses. Ideas can be shared. Suggestions can be made and acted upon.
For that to happen, people need to collaborate and work together.
The practice of handoffs, of passing code over to testing, needs to end. People need to support the overall project. Those who are expected to contribute to the product’s quality need to be involved in, and participate throughout, the entire project.
What does this have to do with testing software? Everything.
When you are developing and testing software, the problem the software is supposed to address must be the focal point of the effort. Everyone involved needs to have a shared, common understanding of the problem space and the intended solution. For an effective solution, people need to work and contribute as equals.
To test software effectively, this understanding must be part and parcel of the approach.
The software that has been developed as a solution for a business problem or need is an obvious consideration. As responsible professionals, we want to be certain that code will do what it needs to do. It must demonstrate its suitability for purpose.
To do that, we must approach all aspects of the development process the same way. We must be able to show that the work we are doing contributes to the suitability for purpose of the finished product. This includes the user stories, the documented requirements, and the design work. It also includes the code itself; the testing references, documentation, and approaches; any tooling used; any software written for test automation; in short, anything that contributes to the final product.
Software for Testing
When people talk about testing software, the majority focus on the obvious cases. Happy paths and requirement confirmation tend to lead the way in many people’s minds. Very seldom do they consider the conditions or aspects that might impact their testing. Rarely do they consider the environment they are testing in or the tools they are testing with, until one of them obviously fails.
This is a problem.
The preceding cautions about testing software are true for all software, not merely the code that will run in front of customers or the APIs that make things happen. Any software built for a purpose needs to be tested for its suitability to that purpose.
Software written to drive test automation needs the same level of scrutiny as the software that gets put in front of customers, whether those customers are internal or external. Customers rely on that software to do their jobs, inform their decisions, and place orders for products and services. The myriad functions software performs for business and leisure would crumble without a good level of reliability.
The customers of test automation software are the development teams who rely on that software to relieve them of mundane tasks so they can work on more complex missions. They rely on that software to work as they expect and understand it to work so they can meet the needs of their customers.
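To make that concrete, here is a hedged sketch of the kind of helper that lives in many automation suites (all names and URLs are invented), together with a unit test for the helper itself. The development team depends on this code, so it earns the same scrutiny as product code:

```python
# Hypothetical automation helper: normalize an environment label
# before it is used to pick a base URL in UI or API tests.
BASE_URLS = {
    "dev": "https://dev.example.test",
    "staging": "https://staging.example.test",
    "prod": "https://www.example.test",
}

def base_url_for(env: str) -> str:
    """Return the base URL for an environment label, tolerant of case and spaces."""
    key = env.strip().lower()
    if key not in BASE_URLS:
        raise ValueError(f"unknown environment: {env!r}")
    return BASE_URLS[key]

# The helper itself gets tested: a typo in an environment label should
# fail loudly here, not silently point an entire suite at the wrong host.
def test_base_url_for():
    assert base_url_for(" Staging ") == "https://staging.example.test"
    try:
        base_url_for("qa2")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for unknown environment")
```

A bug in a helper like this does not fail one test; it quietly undermines every test that relies on it, which is exactly why the automation code deserves its own tests.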
The great challenge for many organizations is that this same software tends to be fire-and-forget, with no further assessment after release. Once it is written and implemented, people trust that it will always be relevant. They fail to recognize that they have just created another body of legacy code, one that needs to be maintained with as much vigor as the code used by customers.
When we fail to review and update the code running in our CI/CD platforms or in our regression suite, we ultimately are failing our end customers.
When any piece of code currently running in production is modified for a new need, the automation code must likewise be reviewed and modified appropriately. One would not assess the impact of a change on one section of production code without looking at how the sections that interact with it might be affected. We must extend the same discipline to our test automation code.
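One way teams guard against this drift, sketched here with invented names and an assumed schema, is a “contract” check: the canned data a regression suite uses is compared against the fields production actually requires, so a production change that the automation code missed fails visibly instead of letting stale stubs pass silently.

```python
# Hypothetical sketch: a stub used by the automation suite must track
# the production payload it imitates. The field set below stands in for
# whatever the real production schema defines.
PRODUCTION_ORDER_FIELDS = {"id", "sku", "quantity", "currency"}  # assumed schema

def stub_order():
    """Canned order used throughout the regression suite."""
    return {"id": 1, "sku": "ABC-123", "quantity": 2, "currency": "USD"}

def test_stub_matches_production_contract():
    # If production adds a required field, this fails immediately,
    # prompting a review of the automation code alongside the change.
    missing = PRODUCTION_ORDER_FIELDS - stub_order().keys()
    assert not missing, f"stub is stale; missing fields: {missing}"
```

The check is deliberately small; its value is that it ties the automation code's review cycle to production's, which is the discipline argued for above.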
The software running our test automation is production software. It needs to be treated and respected as such.