From One Expert to Another: Catherine Powell


Catherine Powell has been testing and managing testers and development teams for about ten years. She has worked with a broad range of software and focuses primarily on the realities of shipping software in small and mid-size companies. Catherine’s published works, blog, and contact information can be found at www.abakas.com.

Dave Liebreich: What excites you about testing? Why do you keep doing it, and what frustrates you about it?

Catherine Powell: The exciting thing about testing is also the frustrating thing about testing: the uncertainty. We can gain knowledge, disseminate information, and decrease risk, but we can never guarantee that something will work, and we’re never done. Every time we ship software, I hold my breath until we see it working, and then there’s a gleeful moment. It’s like working on a roller coaster without seat belts—boy, what a ride!

DL: How would you describe your ideal co-tester?

CP: I’ve worked on a lot of test teams—some more successful than others. The most successful test teams I worked on were very diverse; each of us had very different skills and approaches. My ideal co-tester, then, is someone who is not like me. I want someone I get along with and who has strong skills in areas I’m less good at.

DL: Speaking of skills, we’ve both tested computing infrastructure systems. What testing skills do you think transfer between that domain and other domains, such as enterprise software, web applications, and such? What skills don’t?

CP: There are so many things to test, and many of us will test a lot of different kinds of things! The good news is that, at some base level, testing is testing and most of our testing skills apply:

  • The ability to clearly communicate behavior and its effects
  • An understanding of how to work with a large variety of people
  • The ability to learn jargon and use it effectively
  • Mental modeling of a system and how all the pieces fit together
  • “Peripheral vision” to see defects and system behaviors, even when that’s not what we meant to look for

Tools and domain knowledge do not translate between different types of products, but don’t be afraid of that. Your ability to learn quickly will help you overcome any difficulties.

DL: So, what kind of things do you think about when you are testing on a project in a new or unfamiliar domain?

CP: Different kinds of projects have the same needs, but in different quantities. For example, I worked on one project where the system could be slow, and it could be ugly, but the calculations had to be accurate and system downtime was a very big problem. We did a lot of functional and reliability testing, but no performance or usability testing. I worked on another project that was being sold as “like its competitors but easy to use,” and on that project we spent a lot of time on usability testing while skimping a bit on reliability testing. Downtime was not good, but it was more OK than being unusable.

One of the most important skills a tester can develop is the ability to gather information about what’s important. It’s impossible to test everything, so figuring out what matters—and what doesn’t—is a strong predictor of success.

DL: As a manager, have you felt the need to do hands-on testing with your teams? If so, what were the goals?

CP: I’m a big fan of managers testing. As an engineering manager I’ll make better decisions when I know more about what the system actually does, and getting my hands dirty gives me a good sense of that for the following reasons:

  • It’s fun!
  • It helps me understand better what teams are saying about the general state of the software, because I’ve seen it, too.
  • I keep my testing skills up, so I’m able to help out at crunch time.
  • There’s a lot of value, in conversations with other managers, in being able to say, “Have you seen this? Let me show you what I mean.” My words are more compelling because I can speak from direct experience.

DL: What have been the biggest mistakes you have seen a test manager make?

CP: One of the first mistakes I made as a test manager was around estimation. The program manager was putting together a release timeline, and the development manager stood up and said, “We will be done June 1.” I stood up and said, “We think we need about six weeks to test sufficiently.” So, the program manager set an August 15 release date and I said, “OK!”

Right there, I made two mistakes:

  1. August 15 happened to be a Sunday. I should have at least checked that it was a workday!
  2. I didn’t account for re-testing. If coding had truly finished on June 1, then we would have hit the release date. But, this was a normal project: We found some bugs, development fixed a few things, a beta customer had some requests, and we simply didn’t hit the date.

The lesson I learned is that you cannot assume you will only test once. You will have to test more than you think, or retest things that change after your first test. Keep track of how much testing you add or redo in a project and, over time, you’ll learn how to accommodate that in your estimates. The test estimate includes not only the testing, but the rework and the retesting.

DL: What has been something interesting you learned at a conference and were able to apply in your work? 

CP: Working with the same group all the time, it’s easy to get into a rut. Conferences are a great place to walk up to a stranger and say, “Here’s a problem we’re testing at work. How do you, or how would you, approach it?” Frequently, I’ll walk away with a brand-new way to approach a problem.

For example, my test team was struggling with finding new ways to test documentation. We’d all read it so much we were skimming it and missing errors in documentation changes. I mentioned this at a conference, and the person I was talking to said, “Do you speak another language?”

“Sure,” I said.

“So, translate the documentation.” It was a great idea! The next release, we translated the documentation and found a lot of errors we had previously missed.

DL: What trends in testing have you seen and wished were not happening? 

CP: There’s a real struggle going on today with how to blend testing into engineering teams as those engineering teams change structure and function. The agile movement is a huge driver of this, but cost-cutting measures, the decrease in average tenure at a job, and the increasing complexity of systems are also contributors. There are a lot more people out there performing test functions—developers and customers are now “testers” in a lot of places.

A quite vocal portion of the testing community has reacted to this by throwing up walls and finding ever more ways to declare testers special and different, which just makes me cringe. Yes, a good tester has a lot of great skills, but there’s no reason to hoard those skills or be defensive about them.

I think the testing community should be embracing customers, developers, and other opportunistic testers and finding ways to provide them with skills, techniques, and guidance to perform testing. Let’s teach others and show them how to gather information.

DL: What trends in testing do you wish were happening? 

CP: I’d like to see testers spending more time understanding business needs. Most testers right now come from a very software-centric mindset. They have information about the software, and they disseminate knowledge about the software, but all that information is expressed about and around the software.

The salesman just wants to know if there’s a software configuration that fits within the customer’s budget. He doesn’t care that “with four servers, the mean time to failure is fifty years, but with five servers that failure rate drops by half.” Instead, he wants to say that the customer will kick us out at the first failure, and he wants to know if he can sell them four servers or if he should really push for five. Answer the need, don’t just describe the software.

DL: What advice would you give to testers? 

CP: Learn everything you can. Testers are expected to do so many different things. We’re system administrators, coders, requirements analysts, customer mind-readers, and bearers of bad news and of glad tidings. We’re psychiatrists and bullies who nitpick everyone’s work. We can only do this if we’re learning constantly—new tools, new techniques, and new skills, whether they’re statistical analysis techniques, new languages, or better ways to earn trust. Learn whatever you can, in software and in other domains.

DL: Thanks Catherine for taking the time to answer my questions. I look forward to “talking shop” with you again. 

CP: You’re welcome, Dave. I enjoyed it.
