When writing quality code, Tod Golding feels like an ultra-paranoid customs agent. Every parameter clients try to pass is scrutinized for illegal data. Because we all want to write robust code, we become defensive programmers. In this Code Craft, Tod shows how you can write defensive code today to protect yourself tomorrow.
Writing quality code sometimes makes me feel like I'm an ultra-paranoid customs agent. There's an air of suspicion about each client that tries to cross the border and execute any method in my class. I look over each parameter passed to me, inspecting its content for any illegal or shady data others might be trying to bring into my environment. If someone even looks at my code the wrong way, I'll send him packing with a clear explanation of how he has violated the rules of entry.
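The border-inspection mindset can be sketched in a few lines of Java. The class and method names here are my own invention for illustration, not something from the column:

```java
// A sketch of the customs-agent mindset: every parameter is inspected
// at the border, and illegal data is turned away with a clear
// explanation of the rule it violated.
public class TransferService {
    public void transfer(String accountId, double amount) {
        if (accountId == null || accountId.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "accountId must be a non-empty string");
        }
        if (amount <= 0) {
            throw new IllegalArgumentException(
                "amount must be positive, got: " + amount);
        }
        // Input has cleared customs; proceed with the real work.
        System.out.println("Transferring " + amount + " from " + accountId);
    }
}
```

The point isn't the specific checks; it's that the method states its rules of entry explicitly and enforces them on every caller, not just the suspicious-looking ones.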
This is the mindset of the defensive programmer. And, in an ideal world, this would be the mindset we'd all have as we write every method of every class. We'd all like our code to be this robust. Still, even though defensive programming clearly adds value, we don't always see it showing up in our code. In fact, there's a lot of code out there right now that contains no defensive checks at all.
Even though we typically understand the value of defensive programming, there are still times when we will choose to selectively relax the restrictions we have placed upon our borders. It's almost as if we've signed treaties with certain clients, and these clients are allowed to wander freely within our code as "trusted citizens." Our relationship with them is so strong that we're willing to allow them to enter and leave without even once inspecting their baggage.
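One common way to formalize this two-tier treatment (a convention of my own choosing here, not something the column prescribes) is to throw exceptions at the public border while using assertions on internal, trusted paths:

```java
public class Ledger {
    private double balance;

    // Public border: baggage is always inspected, no matter who calls.
    public void credit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        applyDelta(amount);
    }

    // Internal "trusted citizen" path: the caller is our own code, so a
    // plain assert documents the treaty. It costs nothing when assertions
    // are disabled (the JVM default) and fails loudly under -ea.
    private void applyDelta(double delta) {
        assert delta > 0 : "applyDelta expects a positive delta";
        balance += delta;
    }

    public double getBalance() {
        return balance;
    }
}
```

The treaty is still written down, in other words; it's just enforced more cheaply for citizens we've chosen to trust.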
This leaves me wondering why there is so much variability in our approach. Why do we treat one block of code with so much paranoia, only to wave the next client through uncontested? Why are these selected clients given such special treatment? There are a number of common factors that might lead a programmer down this path.
The most common rationale is what I call "self-defense." In this scenario, you're writing both the class and the client that calls it. Developers here sometimes find it silly to write defensive code. After all, if you can't trust yourself, whom can you trust? "C'mon," you'll say, rolling your eyes. "I'm the only one calling that code, and I know what's valid and what's not."
There are also times when developers think the number of possible clients is so limited that there's little need to be defensive. Imagine a scenario where you've got a user interface that calls your code when a button is pressed, and the press of that button currently represents the only invocation of your code. In this controlled environment, you might wonder if it makes sense to code defensively. The only thing that can break this is the call from the UI and, if that's going to fail, system testing is going to find that bug right away. So, why bother writing extra error detection code just to capture a scenario that the UI will already catch for me?
The problem is that the list of scenarios in which we don't code defensively seems to get longer and longer. Though we may have the best intentions, we continually rationalize away the need for adding this extra level of robustness to our code. It's as if there were an 80-20 rule for which code deserves this scrutiny and, conveniently, ours always lands in the 80 percent that doesn't end up behaving defensively.
I think the whole idea of defensive programming somehow has the side effect of turning us into defensive people. Each time we sit down to write new code we seem to start looking for reasons to explain how this particular chunk of code doesn't warrant the