Sometimes we get so focused on solving the problem in front of us that it doesn't occur to us to ask if we are solving the right problem. Linda Hayes finds that starting a new year makes her think less about what has been and more about what could be. In this column, she offers her thoughts on the validity of the way we approach the most variable of all factors: the user.
I was discussing test design with the test manager for a large enterprise application. We were looking at a particular screen whose layout was completely dynamic depending on the actions of the user. She was bemoaning the fact that there were so many possibilities it took forever to test them all. I suggested that we identify the most common or likely scenarios and test only those. She countered that the software had to support any eventuality, so instead we should figure out how to use automation to test every possible outcome. Since she was the customer, her method prevailed, and much was invested in trying to develop a sophisticated approach to the problem.
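To see why "every possible outcome" is such a tall order for a fully dynamic screen, consider a quick back-of-the-envelope sketch. The numbers here are hypothetical (the column doesn't describe the actual screen), but the arithmetic shows how fast the space grows:

```python
# Hypothetical dynamic screen: 10 optional elements, each shown or
# hidden depending on earlier user actions.
optional_elements = 10
layouts = 2 ** optional_elements        # every show/hide combination

# Add 4 input fields with 5 meaningful values apiece, and exhaustive
# coverage multiplies the layouts by every value combination.
fields = 4
values_per_field = 5
exhaustive = layouts * values_per_field ** fields

print(layouts)     # 1024 distinct layouts
print(exhaustive)  # 640000 exhaustive test cases
```

Even these modest assumptions yield hundreds of thousands of cases, which is why testing a handful of common scenarios is usually the only tractable path.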
Aside from the fact that my experience told me this was a losing battle (after a great deal of time and effort, she eventually agreed), something about the whole discussion was unsettling, but I couldn't put my finger on it until a later and seemingly unrelated incident occurred.
I was working with a different customer to help them document their business processes for testing, and it quickly became clear that there was no standard approach for performing many of their most critical processes. For example, one user navigated through five extra screens that her peers did not use. It turned out she was taught to do that by someone who used some of the same screens as part of a different process—like someone giving you directions starting from his own house. Because of these kinds of inconsistencies, the company had over a thousand test cases that, when analyzed, yielded frequent duplication resulting from multiple approaches to the same process.
Reflecting on this, I realized what these two incidents had in common: The users were uncontrolled variables. The essential randomness of user behavior, in turn, vastly complicated the test process. How can you test for interactions you can't predict?
That's when I realized we were solving the wrong problem. The answer is not to develop ever more test cases or more complicated test automation approaches. The real solution is to control the user.
Technically, it may be easier than you think. Organizationally, it is another matter.