Deliberate Testing in an Agile World: An Interview with Dan North

[interview]
Summary:

In this interview, technology and organizational consultant Dan North discusses deliberate testing in an agile world. He talks about how testing was perceived before agile became such a big part of the industry, and whether or not we've lulled ourselves into a false sense of testing security.

Josiah Renaudin: Today I'm joined by Dan North, who's a keynote speaker at our upcoming STAREAST conference held in Orlando. First, could you tell us a bit about your experience in the industry?

Dan North: I’ve been working in IT as a developer, coach, consultant, and various other roles for about twenty-five years, in a varied mix of organizations and industries. In terms of agile experience, I first came across Extreme Programming (XP) around 2000 and joined agile pioneers ThoughtWorks in 2002 as their first UK technical hire. I spent eight years there, helping to grow the London office to around 250 people, and along the way I developed behavior-driven development, which I describe as a second-generation agile method, inspired by the work of Kent Beck, Ward Cunningham, Martin Fowler, and other Agile Manifesto signatories.

Since leaving ThoughtWorks at the end of 2009, I’ve been exploring other software delivery methods, as an employee of an electronic trading firm and then as an independent, which has led to my current “Software, Faster” body of work.

Josiah Renaudin: Before agile became a mainstream methodology, how was testing treated or perceived within a standard organization?

Dan North: Well, “agile” is really a blanket term for a whole family of methodologies. The industry seems to have adopted agile as a synonym for Scrum, but that’s a historical accident. In any case, traditional plan-driven software delivery methods tend to view testing as a separate stage near the end of development, and informally, testers were treated like second-rate programmers. Testing was viewed as something you did to learn your technical chops so that one day you would graduate to programming.

Josiah Renaudin: Was it difficult to maintain project cohesion within an integrated development team with testing often being outsourced?

Dan North: I think it’s difficult to maintain cohesion with anything being outsourced, unless the outsourcing partner is genuinely a partner and is treated as a first-class player. Usually we outsource things we think are commodity activities, as a cost-saving strategy. Outsourcing something as critical as testing has never made sense to me.

Josiah Renaudin: Why do you think we’ve lulled ourselves into a false sense of security with agile testing?

Dan North: Most of the teams I work with who would describe themselves as agile tend to have two types of testing: automated feature and unit testing, and manual exploratory testing. When you look at the rich and varied landscape of software testing, it’s almost embarrassing how many types of testing we aren’t even aware of, never mind whether we are choosing to do them.

Josiah Renaudin: Do you think we automate too much or too little in our current testing climate?

Dan North: Yes! I believe we automate both too much and too little, or rather, we tend to automate indiscriminately, which leads to both of these. This is a result of having an arbitrary goal of “automation,” driven either by a test coverage metric or just the received wisdom that “Automation Is Good.” Automation is just a technique, and like any other technique, it can be used well or poorly, and can provide benefit or hindrance.

User Comments

1 comment
Tim Thompson

Testing is essential, as is automating the right tests, and exploratory testing. What I am missing as a key testing type in the interview is a systematic approach. Just yesterday I tested an application that moves data from system A to system B. That app was supposedly already tested, yet there were no test cases and no test results. By the looks of it all worked well until I conceived of a way to compare all the records on both sides. As it turns out, a key element of data was not always moved over properly. While it worked in 99% of the cases in my test data, the 1% where it failed is unacceptable. In production the failure rate might have been significantly higher (or lower) depending on the nature of the data. That showed that the exploratory, undocumented testing was not sufficient. I am sure that defect would have been found eventually...by a customer. The other aspect of the problem is that management thought all was well and fully tested, which is why the app was already deployed in production, fortunately only on early adopter/pilot sites.
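The cross-system check the commenter describes, comparing every record on both sides by key, can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual test: the function name, the `id` key, and the field names are all hypothetical.

```python
def compare_records(source, target, key="id"):
    """Compare records migrated from system A (source) to system B (target).

    Returns (missing, mismatched): keys of records absent from the target,
    and keys of records whose fields differ between the two systems.
    """
    target_by_key = {rec[key]: rec for rec in target}
    missing, mismatched = [], []
    for rec in source:
        other = target_by_key.get(rec[key])
        if other is None:
            missing.append(rec[key])      # record never arrived in system B
        elif other != rec:
            mismatched.append(rec[key])   # record arrived, but data differs
    return missing, mismatched


# Illustrative data: one record was not moved over properly.
system_a = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
system_b = [{"id": 1, "amount": 100}, {"id": 2, "amount": 25}]
missing, mismatched = compare_records(system_a, system_b)
print(missing, mismatched)  # → [] [2]
```

A spot check of a few records would likely miss the one bad row; comparing the full set is what surfaces the 1% failure the comment mentions.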

Another important aspect of testing is not just to find the defects and document them, but also to get the defects fixed. That is becoming more and more of a struggle as features always trump quality. Adding features, as bad as they may be, makes it possible for companies to deliver a product at the promised delivery date. Making it work right becomes more and more of an afterthought. To me THAT is the new reality where testing/QA is again a second-class citizen even with Agile methods in place. In fact, Agile makes this even worse because it invites decision makers to dismiss issues with the argument that the fix will be "backlogged and hit the next iteration." Often the story gets pushed to the next iteration and eventually put on the postponed list until a customer complains; then it is all of a sudden front and center and we all have to drop everything to fix it.

One solution might be to determine quality metrics and acceptable levels of those metrics for "ready to ship." Quality in the software world is a very subjective thing and difficult to measure aside from counting open bug reports or other obvious metrics that do not carry much information.

May 21, 2016 - 7:24am
