Have you ever heard someone say, “We can’t test this—it’s out of our scope”? I worked on a team for a while where I heard this often. I’m glad I heard it, because it changed my career direction and gave me a new vision to follow.
Ensuring the quality of the product is a joint effort, but it is the testers’ job to make sure it’s done properly. When someone says a feature is not testable through the methods we use, that does not absolve us of the responsibility of testing it; that is still our job. But for many reasons, testers have felt that being somewhat nontechnical is okay, which has created a whole new set of problems.
The story I’m sharing here was a similar case: a major feature of an upcoming application was not being tested properly because of a lack of technical skills and the limitations of the testing techniques in use. But instead of playing the blame game, we introduced a new testing technique to the team, got it institutionalized, and made sure the most important feature of the product was tested properly.
Moving beyond Functional Testing
My company was developing a new embedded product that was sort of revolutionary in the internet of things (IoT) industry, and the most important piece of it was the algorithm we were designing.
Until that time we had done only functional testing, but the new product’s algorithm was impossible to test through functional testing alone. We explored gray box testing as an option, since static and dynamic analysis were already part of the testing we did for another product, but it did not work out for this product line—the cost versus benefit did not add up.
The root problem, I felt, was a lack of general testing capabilities. The company was pinning its hopes on the algorithm of this product, but the testing team had no existing mechanism or skill set to test that portion. That’s when the quest began.
Learning the Source Code
Because functional testing was not an option, I decided to understand the source code and see how we could test it. But as is unfortunately sometimes the case, the source code was not directly accessible by anyone other than the dev team, and going to development to understand the code was not an option.
While going through the architecture and different modules, I realized debugging the code to understand the control flow would make things quicker. For software products this is relatively simple, but for embedded devices, it’s not that easy. The IDE was configured in emulation mode, so I connected the debugger to the actual device and configured the IDE to run the code on the product itself. It took a while to get working, but it was worth it: We were able to run the code base on the device while stepping through the complete control flow, reading and editing values along the way, which was a great help.
Development was big on maintainability, so the code was readable. In a few days the control flow and general architecture were clear. Wrapping our heads around the algorithm took a while, but we got there. Once we knew the conditions used, it was easy to reverse-engineer the algorithm.
This is when I realized we could use the debugger as a white box testing tool to check the algorithm’s correctness. We finally reached a solution that could make these tests possible.
It’s important to remember that you should not look for set testing patterns every time. The objective is to test, using any means necessary to get the job done—even if that means introducing errors into the code, a technique called fault injection.
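As a minimal illustration of the fault injection idea (the variable names, decision logic, and thresholds here are hypothetical, not from the actual product), a test can feed a deliberately faulty sensor reading into the decision logic and check how it reacts:

```python
def classify_temperature(reading_c):
    """Hypothetical decision logic: map a sensor reading to a display state."""
    if reading_c < -40 or reading_c > 125:
        return "SENSOR_FAULT"  # outside the sensor's physical range
    if reading_c > 85:
        return "OVERHEAT"
    return "NORMAL"

def test_fault_injection():
    # Inject values the hardware would rarely produce on its own,
    # the way a debugger can overwrite a recorded reading in place.
    assert classify_temperature(200) == "SENSOR_FAULT"
    assert classify_temperature(-100) == "SENSOR_FAULT"
    assert classify_temperature(90) == "OVERHEAT"
    assert classify_temperature(25) == "NORMAL"

test_fault_injection()
```

In our case the “injection” happened through the debugger rather than a test harness, but the principle is the same: force a state the system would rarely reach naturally and observe whether it is handled correctly.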
Boundary Value Analysis in Action
The algorithm decided what to show on the screen depending on a handful of variables. We put all the variables and their cutoff points into a spreadsheet, which gave a clear picture of the boundaries for each variable. Once we had tests above and below each threshold, it was just a matter of manipulating the variables in the code base at the right time.
Writing the right value was important, and equally important was when in the control flow to inject it. Because we were running these tests from within the code, we tried to keep them as close to an actual user environment as we could. Values were changed in the code as soon as the device sensed and recorded a reading for any of the variables, which made the process realistic. Even when testing at the business logic tier, it’s a good idea to stay as close to a real-life scenario as possible.
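A spreadsheet of cutoff points like the one described can be turned into boundary test values mechanically. This sketch (the variable names and thresholds are invented for illustration) generates a value just below, at, and just above each cutoff—the classic boundary value analysis pattern:

```python
# Hypothetical cutoff points for each variable the algorithm reads,
# standing in for the spreadsheet described above.
cutoffs = {
    "temperature_c": [0, 85],
    "battery_pct": [20, 50],
}

def boundary_values(threshold, step=1):
    """Return test values just below, at, and just above a threshold."""
    return [threshold - step, threshold, threshold + step]

# One flat list of test values per variable.
test_matrix = {
    var: [v for t in thresholds for v in boundary_values(t)]
    for var, thresholds in cutoffs.items()
}

# Each of these values would then be written into the running code via
# the debugger, right after the device records a reading for that variable.
print(test_matrix["temperature_c"])  # [-1, 0, 1, 84, 85, 86]
```

The step size would depend on each variable’s resolution; for a sensor reporting tenths of a degree, for example, the step would be 0.1 rather than 1.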
Demonstrating Testing Value
The idea of testers debugging source code and executing tests within the code base was not going to be easy to get approved by the larger group. Therefore, the strategy was to carry out the entire research effort under the radar, which meant completing all our planned activities alongside it. The end game was to capture issues in the algorithm and demonstrate the effectiveness of the process. We figured that getting approval to embed these tests into the regular testing process would be easy once the results were evident.
This was the risk we took: We were betting on finding actual issues in the algorithm. Once we found a few, the project was brought into the limelight. It was well received, and everyone saw the clear value in it. There was nothing more to debate, and our process was institutionalized immediately. It has been an integral part of the testing process for the product since then.
Showing instead of telling is a powerful concept. Sometimes, to get things done, you have to just do it and then show up with the results.
A Tester’s Job Is to Test
It is the tester’s role to make sure the product has been tested adequately. That measure should not consider what we can test, but rather what must be tested.
Naturally, we must prioritize our tests, keeping the application under test’s competitive advantage the highest priority because that will have the greatest impact on customers. If that feature is not testable, don’t hesitate to find a new way to test it. The testing techniques out there are guidelines to help with testing, not hard and fast rules to be followed to the letter.
While testing, keep your tests as close to real end-user conditions as possible. And finally, to cut through the red tape and speed up acceptance of your new process, show the benefits instead of just stating them—it makes all the difference.