There's a common misconception that test-driven development is a testing technique when in fact it's a design technique. In this column, Jeff Patton explains why, and shows how you might use your unit tests to explicitly guide and describe the design of your software.
For those already doing test-driven development, you probably know that test-driven development is not testing. So go ahead and grab a cup of coffee, but keep reading to see whether you agree with everything I'm saying.
For a few years I've been using unit-testing frameworks and test-driven development and encouraging others to do the same. The common, predictable objections are "Writing unit tests takes too much time" and "How could I write tests first if I don't know what it does yet?" And then there's the popular excuse: "Unit tests won't catch all the bugs." The sad misconception is that test-driven development is testing, which is understandable given the unfortunate name of the technique.
If I were designing a bit of code the old-fashioned way, I'd think about the object I was about to create, what it would do, and maybe even write it down or draw a UML diagram. Then I'd write the code. But now I'd start by writing a unit test in my favorite unit-testing framework. Here's where I get to have fun and pretend I've already written the code for my object. So, while still imagining my object, I'd ask, "What would its class name be? What other objects would it collaborate with? What methods would I call on it? What would they return?" In my unit test, I instantiate the imaginary object and ask it to do a bit of work for me, then assert that it did the work flawlessly. Afterward, I'm left with a little bit of code that uses the object I have yet to build and describes exactly how that object will be used. I've designed the object—now I just need to write it.
If you're a Java developer using a newer IDE, then you know the tests you write give your IDE the information it needs to write much of the code for you. For example, the IDE might offer to create the class and any methods you used in the test case, saving you lots of typing. After creating the class, I implement the methods I've described. Here is where it gets fun: I know I've implemented the code as designed by running the unit test!
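As a sketch of this rhythm, here's what a test-first design and the code that satisfies it might look like. The FooProcessor, Foo, and Bar names are hypothetical (chosen to match the naming example later in this column), and plain throws stand in for a unit-testing framework such as JUnit so the example is self-contained:

```java
// Hypothetical sketch of test-driven design: the test below is "written
// first," while FooProcessor exists only in the programmer's imagination.
public class FooProcessorExample {

    // The design document: exactly how I intend the object to be used.
    static void testProcessingAFooProducesABar() {
        FooProcessor processor = new FooProcessor();
        Bar bar = processor.process(new Foo("some input"));
        if (bar == null) {
            throw new AssertionError("processing a Foo should produce a Bar");
        }
    }

    // The code an IDE could largely generate from the test above.
    static class Foo {
        final String data;
        Foo(String data) { this.data = data; }
    }

    static class Bar {}

    static class FooProcessor {
        Bar process(Foo foo) {
            // Simplest implementation that makes the test pass.
            return new Bar();
        }
    }

    public static void main(String[] args) {
        testProcessingAFooProducesABar();
        System.out.println("test passed");
    }
}
```

Running this validates that the code matches the design the test describes; in a real project a framework like JUnit would discover and run the test for you.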
Once you start looking at your test cases as a description of the software design, they start to look different. Pay attention to the names of your tests. Instead of writing tests named after a method on an object you're testing, try using the test name to capture the intended consequences.
Instead of a single test for the processFoo() method on your object:
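you might write several tests whose names each capture one intended consequence, along these lines (the names here are reconstructed from the sentences that follow; bodies are omitted):

```java
// Test names that describe intended consequences rather than
// the method under test.
public class FooProcessorTest {
    public void testProcessingAFooProducesABar() { /* ... */ }
    public void testProcessingAFooLogsTimingsToTheFooLog() { /* ... */ }
    public void testProcessingAFooWithBadInputThrowsExceptionContainingTheBadData() { /* ... */ }
}
```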
If I convert those test names to sentences, I might end up with something like this:
- Processing a Foo produces a Bar.
- Processing a Foo logs timings to the Foo Log.
- Processing a Foo with bad input throws an Exception containing the bad data.
Now my names look more like design specifications! Folks who've been doing test-driven development for years see it this way. In his book Crystal Clear, Alistair Cockburn specifically points out the importance of the test case's readability as a design document. Nifty open source tools like TestDox from Chris Stevenson actually let you browse and write test cases in your IDE as little bulleted design documents.

Now that we've established that test cases can describe design, let's talk about how they can morph into tests.
Following the rhythm described above:
- Describe design in the unit test.
- Write the code to make it work.
- Validate that it works by running the test.
If we repeat this enough times, we'll have a lot of these little tests that describe the design. In their book Pragmatic Unit Testing, Hunt and Thomas suggest asking some questions at this point:
- What else can go wrong?
- Could this same kind of problem happen anywhere else?
The answers to these kinds of questions flesh out my design around edge cases that could cause problems. Thus I begin explicitly designing for, and validating the design of, the code under a variety of tough conditions. Now this starts to feel more like testing.
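Continuing the hypothetical FooProcessor sketch (the names and exception type are mine, not from any real library), asking "what else can go wrong?" might yield an edge-case test like this one, which pins down behavior for bad input:

```java
// Hypothetical edge-case design: processing a Foo with bad input should
// throw an exception that carries the offending data for diagnosis.
public class FooEdgeCaseExample {

    static class BadFooException extends RuntimeException {
        BadFooException(String badData) {
            super("bad Foo input: " + badData);
        }
    }

    static class FooProcessor {
        String process(String input) {
            if (input == null || input.isEmpty()) {
                throw new BadFooException(String.valueOf(input));
            }
            return "bar:" + input;
        }
    }

    // Design-as-test: the exception must contain the bad data.
    static void testProcessingAFooWithBadInputThrowsExceptionContainingTheBadData() {
        try {
            new FooProcessor().process(null);
            throw new AssertionError("expected a BadFooException");
        } catch (BadFooException e) {
            if (!e.getMessage().contains("null")) {
                throw new AssertionError("exception should contain the bad data");
            }
        }
    }

    public static void main(String[] args) {
        testProcessingAFooWithBadInputThrowsExceptionContainingTheBadData();
        System.out.println("edge-case test passed");
    }
}
```

Note that the test specifies not just that an exception is thrown, but what the exception must carry—a design decision the edge-case question forced me to make.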
At the end of the day, my fattened test case includes all the tests I wrote while imagining the design of my object, and all the tests I wrote while putting my newborn object through a workout. There's at least as much code in the test case as in the actual class being tested.
I guess this makes it sort of "design and test," but in the end we still haven't ensured our object is bug-free. We just know where the bugs aren't—and where the design is. Unit tests have been compared with shining a flashlight into a dark room in search of a monster. Shine the light into the room and then into all the scary corners. It doesn't mean the room is monster free—just that the monster isn't standing where you've shined your flashlight.
Looking back at the three common reasons for not writing unit tests, let's substitute design for test:
- Designing takes too much time.
- How could I design first if I don't know what it does yet?
- Designing won't catch all the bugs.
Those statements sound a bit different now, don't they? For those making these sorts of statements about unit testing, be aware that those proficient in test-driven development might be rolling their eyes, because the design statements above are what they hear. For those already doing test-driven development, put down that coffee and go help others understand your cool design technique.

Sources on the Web:
Pragmatic Unit Testing
Alistair Cockburn's Crystal Clear
Editor's Note: For more on this topic, look to the February issue of Better Software magazine, featuring "Double Duty: Using the Unit Tests You're Doing to Help with the Documentation You're Not," by Brian Button. Brian will also facilitate a StickyMinds RoundTable discussion on the topic from Feb 7 - Mar 18, 2005.