Streamlining the Test Process

Building Efficiency into Test Execution
Summary:

When building large test suites, one problem that crops up is test case redundancy. Test suites are especially vulnerable to this when many members of the test team are writing test cases: the likelihood that one engineer will write test cases already partly covered by another engineer's is very high, and the result is duplicated effort when the tests are executed. I will present some strategies for avoiding this problem when constructing a test suite, as well as methods for getting the most out of the suite you already have.

I don't know about you, but the way we did things at our start-up company wasn't exactly ISO 9000 compliant. Often, when the entire team was testing the product just weeks before release, trying to get through that last bit of functional and regression testing before shipping, people would be saying things like, "Hey, did you run that test on Win2k? Good, I can pass my test case" or "Oh, you're downloading that content too? That's what I'm doing!"

Exchanges like these indicate test case duplication in your test suite. Redundancy is but one problem that plagues inefficient test suites. Others include lengthy, laborious test cases that return very little added value, and system-level test cases that don't validate the system properly. I will look at each of these problems individually and suggest techniques that may help you when designing a test suite, or when fixing your current one.

Overlapping Functional Areas
Let's approach testing a system from a conceptual level first. Functionality in and of itself is useless to any system. A "find" feature is only useful when you have a document that needs searching. A "download" is only useful when you have a server or peers from which to acquire content. Viewing email is only possible when a mail server delivers it. These examples illustrate the necessary presence of integration points. All of us are well aware of integration testing, and we understand that functionality doesn't exist in a void. Functionality happens when it is enabled, so to speak, by integration points with other functionality. Integration of functionality means that data or communication is somehow shared between the pieces of functionality; that's what makes it work! But, as we shall see, that's also what makes it hard to test.

Sometimes functional areas are so closely tied by their integration points that it's very difficult to distinguish between the two (or three, or four). This can make it hard to determine where a functional test should begin, where it should end, and at what point another functional test should pick up and continue testing other functionality. In other words, when the line between functional areas is blurred, so is the line between test cases, and that can lead to test case overlap and, in some cases, outright test case duplication.

Let me illustrate this with an example. Let's say our crack test team is testing a brand new media player named FooPlayer. FooPlayer is a simple media player with basic functionality such as playback of multiple audio file formats, volume controls, and the like. One logical breakdown of test ownership among our team members might be:

  • Bob: Install, launch, shutdown
  • Lucy: File-format support
  • Suzy: GUI controls

Now, let's look at what each tester might do when performing a very simple test of their functional area. Bob runs a simple test:

  1. Install FooPlayer: verify installs properly
  2. Launch FooPlayer: verify launches properly
  3. Shutdown FooPlayer: verify shuts down properly
  4. Uninstall FooPlayer: verify uninstalls properly

Now, what would Lucy's test look like if she were testing a media type?

  1. Launch FooPlayer: verify launches properly
  2. Open an .MP3 file: verify FooPlayer accepts file
  3. Play .MP3 file: verify FooPlayer plays the file
  4. Stop .MP3 file: verify file stops playing
  5. Shutdown FooPlayer: verify shuts down properly

And what would Suzy's test look like to test a GUI control like volume?

  1. Launch FooPlayer: verify launches properly
  2. Open an .MP3 file: verify FooPlayer accepts file
  3. Play .MP3 file: verify FooPlayer plays the file
  4. Adjust volume: verify volume changes according to adjustment
  5. Stop .MP3 file: verify file stops playing
  6. Shutdown FooPlayer: verify shuts down properly

Notice the overlap. Lucy's test repeats Bob's launch and shutdown steps, and Suzy's test repeats every one of Lucy's steps, adding only the volume adjustment. If all three testers run their cases independently, FooPlayer is launched and shut down three times, and the same .MP3 file is opened, played, and stopped twice, all to gain a single new verification in each case.
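
To make the duplication concrete, here is a minimal sketch in Python (purely illustrative; the step strings are taken from the example above, and the comparison logic is just one simple way to flag shared steps, not a prescribed tool) that treats each test case as an ordered list of steps and reports the steps any two cases have in common:

  from itertools import combinations

  # Each test case is just an ordered list of the steps written above.
  test_cases = {
      "Bob: install/launch/shutdown": [
          "install FooPlayer",
          "launch FooPlayer",
          "shutdown FooPlayer",
          "uninstall FooPlayer",
      ],
      "Lucy: file-format support": [
          "launch FooPlayer",
          "open .MP3 file",
          "play .MP3 file",
          "stop .MP3 file",
          "shutdown FooPlayer",
      ],
      "Suzy: GUI controls": [
          "launch FooPlayer",
          "open .MP3 file",
          "play .MP3 file",
          "adjust volume",
          "stop .MP3 file",
          "shutdown FooPlayer",
      ],
  }

  # Compare every pair of test cases and print the steps they share.
  for (name_a, steps_a), (name_b, steps_b) in combinations(test_cases.items(), 2):
      shared = set(steps_a) & set(steps_b)
      if shared:
          print(name_a, "<->", name_b)
          for step in sorted(shared):
              print("  duplicated step:", step)

Run against the three cases above, this flags "launch FooPlayer" and "shutdown FooPlayer" as common to every pair, and shows that all five of Lucy's steps reappear inside Suzy's case.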

About the author


Andrew Lance (andrew@centerspan.com) is a senior quality assurance engineer and technical lead at CenterSpan Communications, a company based in Hillsboro, Oregon, that develops cutting-edge content delivery solutions. Andrew has worked with test automation technologies for more than five years and has participated in every major phase of automated testing, from design and implementation to maintenance and support.
