New Year, New Level: What's Next in Automation

Summary:

Sometimes we get so focused on solving the problem in front of us that it doesn't occur to us to ask if we are solving the right problem. Linda Hayes finds that starting a new year makes her think less about what has been and more about what could be. In this column, she offers her thoughts on the validity of the way we approach the most variable of all factors: the user.

I was discussing test design with the test manager for a large enterprise application. We were looking at a particular screen whose layout was completely dynamic, depending on the actions of the user. She was bemoaning the fact that there were so many possibilities that it took forever to test them all. I suggested that we identify the most common or likely scenarios and test only those. She countered that the software had to support any eventuality, so instead we should figure out how to use automation to test every possible outcome. Since she was the customer, her method prevailed, and much was invested in trying to develop a sophisticated approach to solving the problem.
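To put "every possible outcome" in perspective, here is a minimal sketch; the screen, its choice points, and their options are hypothetical stand-ins, not the customer's actual application. Even a handful of independent layout choices multiply quickly, and a dozen of them push the count into the tens of thousands.

    from itertools import product

    # Hypothetical choice points that drive the screen's layout; the real
    # application's options are not described in this column.
    choices = {
        "account_type": ["retail", "corporate", "government"],
        "discount_applied": [True, False],
        "region": ["NA", "EMEA", "APAC"],
        "prior_history": [True, False],
    }

    layouts = list(product(*choices.values()))
    print(f"{len(choices)} choice points -> {len(layouts)} distinct layouts")
    # 3 * 2 * 3 * 2 = 36 already; a dozen such choices yields tens of
    # thousands, which is why testing every possible outcome is a losing battle.

    # The alternative: test only the handful of scenarios users actually hit,
    # ideally chosen from production usage data rather than by enumeration.
    likely_scenarios = layouts[:5]  # stand-in for "most common or likely"
    print(f"Likely scenarios actually tested: {len(likely_scenarios)}")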

Aside from the fact that my experience told me this was a losing battle (after a great deal of time and effort, she eventually agreed), something about the whole discussion was unsettling, but I couldn't put my finger on it until a later, seemingly unrelated incident occurred.

I was working with a different customer to help them document their business processes for testing, and it quickly became clear that there was no standard approach for performing many of their most critical processes. For example, one user navigated through five extra screens that her peers did not use. It turned out she was taught to do that by someone who used some of the same screens as part of a different process—like someone giving you directions starting from his own house. Because of these kinds of inconsistencies, the company had over a thousand test cases that, when analyzed, yielded frequent duplication resulting from multiple approaches to the same process.
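That kind of analysis can be sketched in a few lines. The screens, the essential-screen list, and the test case IDs below are hypothetical, but the idea is simply to strip the optional detours from each recorded navigation path and compare what remains.

    # Each test case is represented as the sequence of screens it navigates.
    # ESSENTIAL_SCREENS marks the screens that actually define the business
    # process; everything else is a detour someone happened to be taught.
    ESSENTIAL_SCREENS = {"order_entry", "pricing", "confirmation"}

    test_cases = {
        "TC-001": ["login", "order_entry", "pricing", "confirmation"],
        "TC-017": ["login", "customer_lookup", "order_entry", "notes",
                   "pricing", "audit_trail", "confirmation"],  # extra screens
        "TC-342": ["login", "order_entry", "pricing", "confirmation", "logout"],
    }

    def core_path(screens):
        """Strip screens that are not essential to the business process."""
        return tuple(s for s in screens if s in ESSENTIAL_SCREENS)

    # Group test cases by their core path; groups with more than one member
    # are duplicates that cover the same process by different routes.
    groups = {}
    for name, screens in test_cases.items():
        groups.setdefault(core_path(screens), []).append(name)

    for path, names in groups.items():
        if len(names) > 1:
            print(f"Same process ({' -> '.join(path)}): {', '.join(names)}")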

Reflecting on this, I realized what these two incidents had in common: The users were uncontrolled variables. The essential randomness of user behavior, in turn, vastly complicated the test process. How can you test for interactions you can't predict?

That's when I realized we were solving the wrong problem. The answer is not to develop ever more test cases or more complicated test automation approaches. The real solution is to control the user.

Technically, it may be easier than you think. Organizationally, it's another matter.

About the author


Linda G. Hayes is a founder of Worksoft, Inc., developer of next-generation test automation solutions. Linda is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine's People to Watch and one of the Top 40 Under 40 by the Dallas Business Journal. She is a regular columnist and contributor to StickyMinds.com and Better Software magazine, as well as a columnist for Computerworld and Datamation, author of the Automated Testing Handbook, and co-editor, with Alka Jarvis, of Dare To Be Excellent, a book on best practices in the software industry. You can contact Linda at lhayes@worksoft.com.
