With Extreme Programming, programmers are taking responsibility for writing their own unit tests. What work does this leave for testers? Some people think that XP saves costs by eliminating the need for testers. Does programmer testing really take the place of tester testing? In this column, Bret Pettichord offers ways for testers to provide value to XP teams.
I've found that there are at least three things that people can mean by "XP":
- What the books say [1]
- What teams coached and trained by the XP founders actually do
- What people who've only read the books are doing
According to the books, XP includes unit testing (done by the programmers) and acceptance testing (done by the "customers"). Programmers use unit tests merely to verify that the software works as they intended. The acceptance tests are necessary to validate that the software actually works the way the "customer" wants it to. [2]
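The distinction can be made concrete with a small sketch. The business rule and function below are hypothetical, invented for illustration: the unit tests check that the code does what the programmer intended, while the acceptance test checks a behavior the "customer" asked for in a user story.

```python
import unittest

# Hypothetical business rule: orders of $100 or more get a 10% discount.
def discounted_total(subtotal):
    """Return the order total after any volume discount."""
    if subtotal >= 100:
        return round(subtotal * 0.9, 2)
    return subtotal

class ProgrammerUnitTests(unittest.TestCase):
    """Verification: does the code behave as the programmer intended?"""

    def test_discount_applied_at_threshold(self):
        self.assertEqual(discounted_total(100), 90.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(discounted_total(99.99), 99.99)

class CustomerAcceptanceTests(unittest.TestCase):
    """Validation: does the behavior match the user story, e.g.
    'a $150 order should cost me $135'?"""

    def test_story_large_order(self):
        self.assertEqual(discounted_total(150), 135.0)

if __name__ == "__main__":
    unittest.main()
```

Both kinds of test can pass while the product is still wrong: if the "customer" actually wanted the discount only above $200, the unit tests would keep passing and only the acceptance test, once corrected, would catch it.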
Specifically, the "customer" is expected to write the user stories (similar to use cases) and then write tests for the user stories. The books are a little vague on who the "customer" is, which is why I've put it in quotes. The "customer" is the person who makes the business decisions. Outside XP, this person is often called the product manager or business analyst. [3]
Even teams well trained in XP often have trouble getting this "customer" to take the time to write the acceptance tests. The tests end up being written by programmers or testers instead, and often late. Indeed, studies have shown that acceptance testing is one of the least-well-followed of the practices that make up XP. [4] To help repair this deficiency, XP co-founder Ward Cunningham has recently developed an open-source testing framework for acceptance testing. [5]
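Cunningham's framework expresses acceptance tests as tables of inputs and expected outputs that a non-programmer "customer" can read and edit. The sketch below captures that table-driven idea in plain Python; it is not the framework's actual API, and the discount rule is a hypothetical example.

```python
# Toy table-driven acceptance check, in the spirit of a framework
# that lets "customers" specify tests as tables. The business rule
# below is hypothetical: orders of $100 or more get a 10% discount.

def discounted_total(subtotal):
    """System under test: order total after any volume discount."""
    return round(subtotal * 0.9, 2) if subtotal >= 100 else subtotal

# Each row, as the "customer" might write it:
# (order subtotal, total the customer expects to pay)
acceptance_table = [
    (50.00, 50.00),
    (100.00, 90.00),
    (150.00, 135.00),
]

def run_table(table):
    """Run every row and collect (subtotal, expected, actual, passed)."""
    results = []
    for subtotal, expected in table:
        actual = discounted_total(subtotal)
        results.append((subtotal, expected, actual, actual == expected))
    return results

if __name__ == "__main__":
    for subtotal, expected, actual, ok in run_table(acceptance_table):
        print(f"{subtotal:>8.2f} expected {expected:>8.2f} "
              f"got {actual:>8.2f} " + ("pass" if ok else "FAIL"))
```

The point of the table format is social, not technical: the "customer" can add a row without writing code, which lowers the barrier that keeps acceptance tests from being written at all.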
The XP books don't claim that programmer testing supplants tester testing. Rather, they claim that the total cost of testing is reduced, largely by avoiding the surprises of the interminable testing phases that often plague the final stage of software projects. But testing still must be done, from both programmer and customer perspectives.
There were some early claims that XP would put testers out of business. This was partly hyperbole, meant to inspire programmers to do better unit testing. But I suspect it may also have been a response to experiences with testers who were obstreperous and unconcerned with the success of the larger project. Indeed, Lisa Crispin fought some early battles to win a place of respect for testers on XP teams; she later wrote a book describing her methods. [6]
With test-last programming, there was no doubt of the need for testers. But the XP context forces testers to articulate how they can be of service to a team. Many testers, trained in unsuitable methodologies, will fail.
Indeed, some believe XP tries to minimize the role of the black-box tester because of bad experiences with ineffective testers. Specific complaints are that testers
- Object to process improvement that doesn't conform to their traditionalist views
- Focus on bugs that are of little importance to the project
- Lack technical skills needed for serious contribution, and
- Lack education and insight into modern development methods and architectures