StickyMinds technical editor Matthew Heusser is a consulting software tester and software process naturalist. In this interview, Matthew shares his thoughts on tester and programmer relationships, the impact of Acceptance Test Driven Development, and benefits of "lean coffee" gatherings.
Noel: First off, if you could introduce yourself, and maybe go into what you're going to be doing for StickyMinds and SQE—that would be great.
Matt: Thanks, Noel. For starters, I've been in technology ever since I graduated from college with a Math/CS degree in 1997. After a couple of years of programming, I went back to school at night to get a master's degree, which is where I ran into extreme programming, agile development, and testing. During college, I wrote a couple of pieces for the school paper and published an article on AngryCoder.com, but my first real media experience was writing for StickyMinds. Likewise, my first "real" test conference presentation was at STAREAST in 2004. Fast-forward almost ten years, and I've done some more interesting stuff: teaching IS at night at Calvin College, working from home for Socialtext, and going independent to build a practice around consulting, training, and writing.
Ten years after that first presentation at STAREAST, when I had the opportunity to come back as interim editor of StickyMinds...how could I say no? So here I am.
Other interesting stuff? I am very interested in building and sharing knowledge in testing. In my spare time, I organize open-space test conferences and other events.
Noel: You and I were talking before doing this interview, and you brought up "the testing metaphor." What is that exactly?
Matt: Most folks agree that "just" development isn't enough to get to our end result, which is high-quality delivered software. The question is how we get there. Most of us agree that some part of that is testing. The second question is: what does testing mean?
- Is testing evaluating the software to build confidence it is fit for use?
- Is it checking the software against some external standard, a 'spec'?
- Is it trying to break the software, or looking for problems?
- Is it reporting on the status of the software or the test project? (The "headlights" of the project?)
All of these position testing as sort of the opposite of programming. Programming is the creative process; testing is the destructive one. Testers and programmers are “enemies.” We never say it, but these ideas do influence our behavior.
Now imagine a different world, where pair programming is not two programmers but a programmer and a tester. During the programming phase, the programmer is in charge, 'driving' the development. The tester needs to be technical enough to say, "Hmm...this function is getting mighty long," or, "We sure are passing in a lot of variables here," or perhaps, "Isn't it about time to write a unit test?" But he doesn't have to be a production programmer.
When things switch to "test" for the story, the tester takes charge and the programmer becomes a sort of navigator or helper. Now think about that two or three iterations in: the tester is going to have his exploratory test ideas while the code is being written in the first place, so we're going to get stronger code and fewer defects, which reduces rework. Reducing rework means we get to production faster with higher quality.
I'm not saying that example is right; it feels very code-centric to me. I'm saying that we haven't figured this out yet, and I want to experiment with different ways of doing it.
Noel: We were also discussing the impact that Acceptance Test Driven Development (ATDD) has had on the software development world. I was curious as to how large you see this impact as being. I interviewed Jeff "Cheezy" Morgan a while back, and he mentioned that it's easy to demonstrate its direct relation to higher-quality software.
Matt: Well, I can tell you a few things I have observed. For example, say you take five software teams and give them a specification for a simple Fahrenheit-to-Celsius converter. Say the spec is in plain English and about as long and complex as you would expect for a program that translates one simple formula. It will give advice about input, limits, rounding, that sort of thing.
Now take an input that converts to exactly 2. The output should show two decimals to the right. Should it appear as 2.00? Some of the programmers will guess one thing and some of the testers another, and you'll have a stupid argument, which requires a decision maker to intercede, plus a lot of waiting, re-coding, and re-testing.
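To make the idea concrete, here is a minimal sketch (my own, not from the interview) of how an ATDD-style acceptance test could pin down that formatting decision before coding begins; the function name fahrenheit_to_celsius and the two-decimal rule are assumptions for illustration.

```python
# Hypothetical acceptance examples for the Fahrenheit-to-Celsius converter.
# The open question from the spec: should an exact result display as "2.00"?
# Writing the agreed answer down as executable examples settles the argument
# up front, before any programmer or tester has to guess.

def fahrenheit_to_celsius(f):
    """Convert Fahrenheit to Celsius, formatted to two decimal places."""
    celsius = (f - 32) * 5 / 9
    return f"{celsius:.2f}"

# Examples agreed on by the product owner, programmer, and tester:
assert fahrenheit_to_celsius(35.6) == "2.00"    # exact result still shows two decimals
assert fahrenheit_to_celsius(32) == "0.00"      # freezing point
assert fahrenheit_to_celsius(212) == "100.00"   # boiling point
```

The point is not the conversion arithmetic; it is that the team commits to the observable behavior (here, the "2.00" formatting) as a shared, checkable example before the argument can happen.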
That waiting, arguing, and rework is additional work on the system. It is demand, but it is accidental, not essential. One term for that is "failure demand." Prevent it, and you are preventing rework. I'm not sure what Cheezy was getting at, but most of the teams I work with have significant opportunities to reduce failure demand, and ATDD is one way to do it. Getting the software "more righter" on the first go reduces the number of hand-offs and pass-backs in the process.
Noel: In what areas might ATDD not be as effective a practice?
Matt: This is a great question, and it lines up with the way I think. I don't just recommend practices to teams I have never met. My preference is to study the existing system and what problems it has. If the problem of the existing system is one that ATDD offers to solve, then it might be time to make a recommendation. Sadly, in a text format, that doesn't always come through, so I might appear to be an "ATDD guy." I sure hope not. I'm more of a "software guy." Right now, many of my clients have seen benefits, or could benefit, from ATDD, so I'm excited about it.
Now to your question.
If your product owner, developer, and tester are all the same person, you might not need anything like ATDD. If there is little room for ambiguity in the requirements - say you are attempting a system rewrite, and you can know correct behavior because it matches the old behavior - then you might not need ATDD. If nobody really knows what they need until someone takes a crack and fails, then I'll probably try for some rapid prototyping method, or 'tracer bullet' approach over ATDD. Those are just a few offhand. I'm sure with a little time we could come up with a nice handful.
Noel: You and I met at STARWEST in Anaheim, California, and I saw that you were leading a lean coffee exercise one morning. How did that go? I'm fascinated by the whole "agenda-less meeting" concept, but I can see where there may be some skeptics who wonder just how much can you accomplish without an agenda. What's your take on the benefits of "lean coffee" gatherings?
Matt: Are you kidding? I spent most of the conference without an agenda, hanging out in the hallway!
Hanging out in the hallway actually makes sense if you come to the conference with problems to solve, processes you want to change, resistance, schedule pressure, that sort of thing. You could go to the sessions, but those are about solutions other people had to their problems; you want something custom, for you.
But hanging out in the hallway has limits. For those new to the conference scene, meeting people is awkward: figuring out how long to talk, finding people interested in the same things as you. So you go to the sessions and hope for the best. And the sessions have value, don't get me wrong.
Lean coffee is a chance to have both.
We start at 7:30 a.m., which gives us a few extremely motivated and engaged participants. The folks at the conference to party, er, team-build, don't get up for lean coffee, and that's fine. After introductions, we write our session ideas on index cards, dot-vote (everyone gets two votes), and sort. After five minutes on a topic, we vote on whether to continue or move on. It's an incredibly powerful way to steer a meeting according to what people want to talk about, not what somebody else thought might matter. At STARWEST, we scoped to testing, but you can scope the meeting to solve any particular problem and then flow to what people want to talk about. I remember on the last day we talked about centers of excellence, interviewing testers, and interviewing vendors, something I think is important but that we often give short shrift.
Earlier this year, at the Agile Conference, Adam Yuret ran lean coffee five days in a row. It was fascinating to watch the topics move from introductions and what you want to learn, to what you are learning, to what you'll do differently back at the office and how to keep the learning flowing. I should credit Adam with much of the popularization of lean coffee within the software community. He is in Seattle, and the concept was invented within the Lean community up there. You can learn more at http://leancoffee.org/.
Matthew Heusser is a consulting software tester and software process naturalist, who has spent his entire adult life developing, testing, and managing software projects. He has served as the lead organizer of the Great Lakes Software Excellence Conference, organized a workshop on technical debt, and taught information systems at Calvin College. Matthew blogs at Creative Chaos, is a contributing editor to Software Test & Quality Assurance magazine, and is on the board of directors of the Association for Software Testing. Matthew recently served as lead editor for How to Reduce the Cost of Software Testing (Taylor and Francis, 2011). Follow Matthew on Twitter at @mheusser or email him at [email protected].