There are two distinct roles in many software projects that are involved with testing: developers and testers. Should they take the same approach to testing, or are there some principles that apply to only one of the roles? What should they do to coordinate their work? Danny Faught went through an exercise to compare and contrast and found that the questions he couldn't answer were as interesting as the questions he could answer.
Tied into the idea of roles are the different levels of testing. In this article, I focus on the top and bottom levels: unit tests, which help us overcome the fear of change, and system tests, which help us overcome the fear of release.
First, let's explore some common ground.
Developers and testers want to do a good job.
This is important. Though it seems obvious when you read it, it's easy to lose sight of this idea when you're getting frustrated working with someone who's on the other side of the fence. For example, I frequently get frustrated when developers don't write automated unit tests. However, many developers want to do better unit testing but can't convince their manager to let them spend the time necessary to do it, even with the prospect of greatly reducing the debugging effort later in the project.
Testing in isolation makes it easier to isolate bugs.
When you have unit tests that test code in extreme isolation, it's easy to figure out the cause when the tests fail, and it's relatively easy to thoroughly test the paths through the code. If we test at the subsystem level when we can, rather than exercising the entire system, we get the same benefit. But it's also important to make sure that all the parts play well together.
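As a minimal sketch of what testing in extreme isolation looks like, here is a hypothetical `PriceCalculator` whose real tax-rate lookup is replaced by a fake, so a failing test points at the calculator itself rather than at its dependency (all names here are invented for illustration):

```python
class FakeTaxService:
    """Stands in for a real tax-rate service so the test exercises
    only the calculator's own logic."""
    def rate_for(self, region):
        return 0.08  # fixed, predictable rate

class PriceCalculator:
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, subtotal, region):
        # Apply the tax rate supplied by the (possibly fake) service.
        return round(subtotal * (1 + self.tax_service.rate_for(region)), 2)

# The unit test: if this assertion fails, the bug is in
# PriceCalculator, not in the tax service or the network.
calc = PriceCalculator(FakeTaxService())
assert calc.total(100.00, "TX") == 108.00
```

A system-level test would wire in the real tax service instead, confirming that all the parts play well together.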
Both kinds of tests require maintenance.
Tests at all levels need to be maintained when the system under test or the platform you're testing on changes. And at all levels, you can apply techniques to reduce the burden of maintenance.
What's the Difference?
All testing shares many common principles. But we can find a few differences if we dig for them.
Unit testing is best done by a developer who knows the product code.
Even if the developer isn't an experienced tester, it's usually better for someone who is intimately familiar with the product code to write the unit tests. For system testing, projects often benefit from having a test design specialist make the decisions. So the unit tester is often a development specialist with limited knowledge of test design, and the system tester is often a testing specialist with limited design and coding skills.
A unit test suite may need to run 100 times a day.
But a system test suite may be run only a few times a day, week, or month. The speed of the unit tests can therefore be more critical than the speed of the system tests.
You might have more unit test code than product code.
Depending on how thorough and concise your unit tests are, you may have as much code in your unit tests as in the product you're testing, or more. This doesn't tend to be the case for automated system-level tests. Scripted manual tests, though, when you have to have them, can easily exceed the length of the requirements and design documents.
Some items are harder to put either in the "same" or "different" categories.
At what level do we do boundary testing?
Boundary tests can be done at the unit level, where it's easier to stimulate many different kinds of inputs. And boundary errors (like using < instead of <=) are often unit-level errors. But do developers commonly have the test design skills to design boundary tests that are both thorough and efficient? If not, the boundary tests can be done at a higher level. Or, if you aren't coordinating all the various levels, you might be doing it unnecessarily at more than one level.
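A sketch of what a unit-level boundary test looks like, using a hypothetical `is_passing` function that should accept scores of 60 and above:

```python
def is_passing(score):
    # The classic boundary bug would be writing `score > 60` here,
    # which wrongly fails a student who scored exactly 60.
    return score >= 60

# Test below, on, and above the boundary; the middle case is the one
# that distinguishes >= from >.
assert is_passing(59) is False
assert is_passing(60) is True
assert is_passing(61) is True
```

The same three-value pattern applies at any boundary, whatever the level at which you decide to test it.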
Who should veer off the happy path?
Who is doing negative testing? Tests whose expected result is an error can make up half of your tests, or even 90 percent of them. Inexperienced test designers at all levels of testing often focus too much on the "happy path" positive tests and don't adequately test the robustness of the system when it encounters errors. You also have to watch out for people who like the instant gratification they get from negative tests and who don't pay attention to the mundane but more important happy path tests.
Unit tests are good for making sure you exercise your hard-to-reach, error-handling code. But you also want to make sure the error handling works all the way through the system. It's not easy to decide how much negative testing should be done at each level.
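Here is a minimal sketch of a negative test alongside its happy-path counterpart, assuming a hypothetical `withdraw` function that must reject overdrafts rather than silently allowing them:

```python
class InsufficientFunds(Exception):
    """Raised when a withdrawal exceeds the available balance."""
    pass

def withdraw(balance, amount):
    if amount > balance:
        raise InsufficientFunds(f"balance {balance} < amount {amount}")
    return balance - amount

# Happy-path test: the normal case works.
assert withdraw(100, 30) == 70

# Negative test: the expected result IS the error.
try:
    withdraw(100, 150)
except InsufficientFunds:
    pass  # the error-handling path was exercised, as intended
else:
    raise AssertionError("overdraft should have raised InsufficientFunds")
```

A system-level negative test would then check that the same error surfaces correctly to the user, all the way through the system.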
Write one assertion per test?
Some developers say that the best way to write unit tests is with only one assertion per test, and anything more complex should be split into more than one test. Some system test designers, especially those who are testing against formal requirements, say the same thing. But I've found that complex tests that are more like user scenarios are much more likely to find bugs. So the challenge is balancing these two ideas and coordinating which test levels use the nasty but productive complex test scenarios.
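The two styles can be contrasted on a small, hypothetical shopping cart (all names invented for illustration). The single-assertion test checks one fact and localizes failures precisely; the scenario test chains several operations the way a user would, which is where interaction bugs tend to hide:

```python
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def remove(self, name):
        self.items = [(n, p) for n, p in self.items if n != name]

    def total(self):
        return sum(p for _, p in self.items)

# One-assertion style: a single fact, easy to diagnose when it fails.
cart = Cart()
cart.add("book", 12.50)
assert cart.total() == 12.50

# Scenario style: a longer user-like sequence. Failures are harder to
# localize, but the interactions between steps can flush out more bugs
# (e.g., does remove() handle duplicates the way we expect?).
cart = Cart()
cart.add("book", 12.50)
cart.add("pen", 2.00)
cart.remove("book")
cart.add("pen", 2.00)
assert cart.total() == 4.00
```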
At what level do the bug-fix regression tests belong?
Some organizations add new regression tests when they find a bug that wasn't caught by an existing test suite. If you have unit tests and system tests, where do you put the new regression tests? Do you analyze whether it's a simple unit-level bug or something that can only be found with a higher-level test? Most teams probably send it off to a system test team and don't think about where the test should go.
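When the bug really is a simple unit-level bug, the new regression test can be pinned at the unit level. A sketch, assuming a hypothetical `parse_quantity` function where empty input once caused a crash:

```python
def parse_quantity(text):
    text = text.strip()
    if not text:
        # The fix for the original bug: reject empty input explicitly
        # instead of crashing inside int("").
        raise ValueError("quantity is required")
    return int(text)

# Regression test added when the bug was fixed; it reproduces the
# original failing input and pins down the corrected behavior.
try:
    parse_quantity("   ")
except ValueError:
    pass
else:
    raise AssertionError("empty input should raise ValueError")

# And the happy path still works.
assert parse_quantity(" 42 ") == 42
```

If the bug could only be provoked through the whole system, the regression test belongs in the system suite instead; the point is to make that choice deliberately.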
Maybe you have some challenges to share about how to coordinate the testing at the various levels. Just keep in mind that the first challenge is to open the dialog between the people who are doing the different levels of testing.