You Don't Need Superb Technical Skills to Be a Valuable Tester


Testers who question their own technical ability have more to contribute to technical tasks than they realize, because such tasks often don’t require as much technical ability as they think. Strong technical ability can produce slick solutions, but oftentimes those solutions are overkill, taking too much time to create and proving too complex to maintain. A sledgehammer isn’t needed to knock a nail into drywall, and testers don’t need developer-like technical ability to post big wins for the software-building team.

One of the most valuable things a tester can do is reduce the amount of time it takes to accomplish a task that doesn’t require human sapience. For instance, build installs often require multiple steps that are generally the same every time: figure out which build is needed (which can be daunting in a busy configuration management environment), discover where the build lives on the network, learn where and how the build needs to be installed, copy it to the testing environment, unzip it, modify configs, install, reboot the server, and so on. Automating the material pieces of this process can save multiple hours every week. Do you have to be a code ninja to automate something like build installs? Do you have to wait until you get all your technical ducks in a row as a tester to tackle a task like this? A less-technical tester can create a less-technical solution to a problem. What matters more than design and spiffiness is that it works.
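To make the point concrete, the repeatable middle of that process—find the newest build, copy it over, unzip it—can be sketched in a few lines of Python. This is an illustrative sketch only; every path, file name, and function here is invented, and a real install script would also handle configs, the install itself, and the reboot:

```python
# Illustrative sketch only: paths, names, and the "newest .zip wins" rule
# are hypothetical stand-ins for whatever your build share actually uses.
import shutil
import zipfile
from pathlib import Path


def find_latest_build(build_share: Path) -> Path:
    """Return the most recently modified build archive on the share."""
    builds = sorted(build_share.glob("*.zip"), key=lambda p: p.stat().st_mtime)
    if not builds:
        raise FileNotFoundError(f"No builds found in {build_share}")
    return builds[-1]


def install_build(build_share: Path, test_env: Path) -> Path:
    """Copy the newest build into the test environment and unzip it."""
    latest = find_latest_build(build_share)
    local_copy = test_env / latest.name
    shutil.copy2(latest, local_copy)        # copy to the testing environment
    install_dir = test_env / latest.stem
    with zipfile.ZipFile(local_copy) as archive:
        archive.extractall(install_dir)     # unzip next to the copy
    return install_dir
```

Nothing here is clever; it just captures steps a person would otherwise repeat by hand, which is exactly the kind of win a less-technical tester can deliver.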

Having joined a few teams that handled build installs completely manually, repeating the same steps time after time, I suspect that many less-technical testers don’t realize how easy it is to decrease overhead and improve efficiency with some basic knowledge that anyone could acquire. We look at what the more technical-minded among us accomplish and discount our own abilities, thinking we could never do that. But the perfectionist in us fibs when it says what we’re capable of isn’t good enough and that we should stick to doing what we know until we learn to program, or until we become database wizards, or until we get some certification. Testers need to gauge their successes based on whether or not what they implement works, not on what a more technical person could have designed.

Some time ago, I joined a team in which the testers had not automated any of the build install process. An experienced tester would spend about twenty-five minutes going through the whole process manually each time he needed to get a new build up and running, which could be multiple times per day. While working with the product, I noticed that much of the actual install was accomplished with PowerShell, a task automation framework developed by Microsoft. I knew nothing about PowerShell at the time, but some cursory research revealed the framework’s powerful scripting abilities. I tucked this information away while I focused on getting a fast win: automating as much of the build installs as I could using simple batch files and scheduled tasks. It was crude and unsophisticated but did most of what I wanted it to do. Most importantly, the automation immediately improved the test team’s timeline, and my boss was happy. Later, with other important tasks out of the way and with all my newfound time, I learned PowerShell and used it to smarten up the build install process and overcome the limitations of batch scripting. If I’d waited until I had time to learn PowerShell in the first place, automating the build process would have taken much longer.
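The heart of that crude first pass is nothing more than a check a scheduled task can run: has a build appeared that I haven’t installed yet? A hedged sketch of that idea, in Python for illustration (my actual version was batch files; the file names and state-file convention here are invented):

```python
# Hypothetical sketch of a scheduled-task check: compare the newest build
# on the share against a note of the last build we installed.
from pathlib import Path

STATE_FILE = "last_installed.txt"  # invented convention for this sketch


def newer_build_available(build_share: Path, state_dir: Path) -> bool:
    """True if the newest archive on the share differs from the last one installed."""
    builds = sorted(build_share.glob("*.zip"))  # names sort in release order here
    if not builds:
        return False
    newest = builds[-1].name
    state = state_dir / STATE_FILE
    last = state.read_text().strip() if state.exists() else ""
    return newest != last


def record_install(build_name: str, state_dir: Path) -> None:
    """Remember which build was installed so the next run can skip duplicates."""
    (state_dir / STATE_FILE).write_text(build_name)
```

A scheduled task that runs this check and kicks off the install when it returns true is unsophisticated, but it removes a person from the loop, which was the whole point.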

Brian Yoss, a senior software QA engineer at McAfee, describes how he iteratively built a tool to automate build installs for the product he tests using AutoIt, a free scripting language for automating the Windows GUI and creating GUIs. Using a batch file to build a cluster environment on demand, he cut his setup time from four hours to forty-five minutes. Notably, he writes, “It doesn’t take a computer science degree to build a tool like this that will help increase efficiency and reduce overall cost to your organization.”

A former boss of mine got a test dashboard up and running within a couple hours one day by installing WampServer on a virtual machine, setting up an Open Database Connectivity (ODBC) connection, and copying some sample PHP from the Internet to create a website. Without more knowledge than a basic tutorial could provide, he was quickly able to see the latest test results, which testers were lagging behind in automating test cases, and how end-game testing was charting. My boss gained a clearer picture of where the testing effort was at any point, and he saved time by pointing management to the dashboard when they wanted information instead of digging up the information himself each time. His solution wasn’t pretty or elegant, and no one would have oohed and aahed over the code, but it worked.
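The same "quick and unglamorous" dashboard idea can be sketched in any language. His version was WampServer, an ODBC connection, and sample PHP copied from the Internet; a minimal Python stand-in, with invented result fields and layout, might look like this:

```python
# Illustrative sketch: turn raw test results into a bare-bones HTML status
# page. The result fields ("name", "passed") and layout are invented; the
# real data would come from whatever database holds your test results.
from html import escape


def render_dashboard(results: list[dict]) -> str:
    """Render the latest test results as a simple HTML summary table."""
    passed = sum(1 for r in results if r["passed"])
    failed = len(results) - passed
    rows = "".join(
        f"<tr><td>{escape(r['name'])}</td>"
        f"<td>{'PASS' if r['passed'] else 'FAIL'}</td></tr>"
        for r in results
    )
    return (
        "<html><body>"
        f"<h1>Test Results: {passed} passed, {failed} failed</h1>"
        f"<table>{rows}</table>"
        "</body></html>"
    )
```

No one would ooh and aah over this either, but a page like it answers management’s "where are we?" question without anyone digging through raw results.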

In the movie October Sky, four high school boys in Coalwood, West Virginia, take up rocketry after learning of the Soviet Union’s launch of Sputnik 1. Their first launches are colossal failures. The rockets blow up on the launch pad or fly in the direction of spectators. The boys begin experimenting with new fuels and alternative designs and eventually get it right; the rockets are crude, but they fly. All it took was the boys’ willingness to start with the knowledge they had, build something, and see what happened.

Likewise, testers need not fear any lack of ability or be driven by perfection, for such things only stunt progress. Growth comes through trial and error and plenty of reorienting around dead ends. Any tester who perseveres will solve big problems.

StickyMinds is a TechWell community.

Through conferences, training, consulting, and online resources, TechWell helps you develop and deliver great software every day.