Noel: Your upcoming session, "Maybe We Don't Have to Test It," addresses the idea that testers may not have to test everything, and that trying to do so could actually be harmful. How is this so?
Eric: On most software development teams there are a variety of bits coming from those crazy programmers. Some bits never go to production, some are too awkward for testers to test, some are not important enough for testers to test, some are so straightforward they need not be tested, and so on. The tester who treats testing like a factory routine, “I must test everything,” may be weakening our reputation as testers. That thinking values testing as a thoughtless process rather than as a service that provides the most valuable information to someone.
Noel: What are some of the benefits that not testing can bring to a team's success, and what are some that can be felt at a tester's own individual level?
Eric: As a tester, have you ever gone through an entire day without finding bugs or any new information to share? Then later found yourself on a gold mine of interesting test results with little time left to continue exploration? Which scenario made you feel more valuable to your team? I think testers know a lot about where to spend their time and where not to. I’m hoping to show that it is okay to speak up and say, “hey, you guys okay with me not testing this so I can spend more time on that?”.
Noel: You've mentioned that testers have been taught that they're responsible for all testing. Who taught them this, and do you see it as a difficult task to teach testers otherwise?
Eric: Who taught them? Process hawks, test managers, programmers, and lazy testers. Yes, it’s difficult to un-teach this dogma. It’s like telling your oil change guy not to check the air filter unless you ask. Eventually, they realize they can get more oil changes done.
Noel: You're currently managing a test team at Turner Broadcasting, and have been involved in testing the "traffic schedule" for Turner's numerous networks. What kinds of challenges does that type of testing pose that may not be found in what some may consider a traditional testing environment?
Eric: We deal with millions of commercials that are constantly moving to better commercial breaks to optimize thousands of rules. The landscape of our data is mostly virtual and not saved to a database. In addition, broadcast schedules are full of time dependencies. Our tests often involve simulating a 30-hour broadcast day, across multiple time zones, before, during, and after it airs. Yes, we have 30-hour days in the broadcast world…seriously creepy.
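[Editor's note: In broadcast scheduling, a "broadcast day" conventionally runs past midnight, so hours can exceed 24 (e.g. "26:30" means 2:30 AM the next calendar day). The kind of time arithmetic Eric's team must simulate can be sketched as below; this is a generic illustration with hypothetical names, not Turner's actual traffic system.]

```python
from datetime import datetime, timedelta

def broadcast_to_wall_clock(broadcast_date: str, broadcast_time: str) -> datetime:
    """Convert a broadcast-day timestamp, whose hour may exceed 24,
    into a normal calendar datetime.

    Example: "29:15" on the 2024-01-05 broadcast day is 05:15 AM
    on 2024-01-06 by the wall clock.
    """
    hours, minutes = (int(part) for part in broadcast_time.split(":"))
    day_start = datetime.strptime(broadcast_date, "%Y-%m-%d")
    # Rolling the extra hours forward is what makes a "30-hour day" work:
    # late-night spots stay attached to the previous day's schedule.
    return day_start + timedelta(hours=hours, minutes=minutes)

# A spot scheduled at broadcast time "29:15" of the Jan 5 day:
print(broadcast_to_wall_clock("2024-01-05", "29:15"))  # 2024-01-06 05:15:00
```

A test that simulates an entire 30-hour day would sweep such timestamps from "00:00" through "30:00" and verify scheduling rules hold before, during, and after air time.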
Noel: For someone who attends your session, and who may be unconvinced that they could convince their team or management back home to try "not testing" - what would you suggest they use in their argument?
Eric: Instead of arguing, it may be easier to change your definition of “testing”. Mark it “Verified” if you must, and attach an artifact that explains the testing you didn’t do and why. If your artifact explains it responsibly, I suspect most people on the team will never know the difference. But I don’t think you’ll have a problem explaining this concept after attending my session. See you there!
Quality assurance manager for Turner Broadcasting System’s Audience & Multi-Platform Technologies (AMPT) group, Eric Jacobson manages the test team responsible for Turner’s sales and strategic planning data warehouse, and its broadcast traffic system. Eric was previously a test lead at Turner Broadcasting, responsible for testing the traffic system that schedules all commercials and programming on Turner’s ten domestic cable networks, including CNN, TNT, TBS, and Cartoon Network.