Automated tools are essential to software development. Tools can take the drudgery out of the more tedious development and testing tasks and let us get back to what we love: writing code (or in the tester's case, breaking code). This is especially true for security testing, where the goal is not to prove that the software does what it is supposed to do, but rather that it doesn't do what it's not supposed to do. This is a much more difficult, if not actually impossible, task, but thankfully we have some great tools to help us out. In this week's column, Bryan Sullivan covers one of the most valuable of these tools: the fuzzer.
Fuzzers are tools that feed random input to software programs for the purpose of revealing defects in those programs' input-processing logic. For example, if you create a malformed data file and open it in a word processor, will the word processor fail gracefully and inform the user that the file appears to be corrupt or will it crash? Does it start executing the data in the file as if it were assembly code, potentially leading to a complete compromise of the system's security? This scenario is not as impossible as you might think if the program contains a buffer-overflow vulnerability. Fuzzers can reveal vulnerabilities like this by automatically generating variations of malformed input and feeding those variations to the program being tested.
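The core of such a mutation fuzzer can be sketched in a few lines of Python. This is an illustrative example only, not a tool described in the column; the mutation rate and replace-a-random-byte strategy are arbitrary choices made here for the sketch:

```python
import random

def fuzz_bytes(data: bytes, mutation_rate: float = 0.01) -> bytes:
    """Return a copy of `data` with a small fraction of bytes replaced at random.

    Each fuzzed variant produced this way is a malformed version of the
    original input, suitable for feeding to the program under test.
    """
    out = bytearray(data)
    num_mutations = max(1, int(len(out) * mutation_rate))
    for _ in range(num_mutations):
        pos = random.randrange(len(out))       # pick a random offset
        out[pos] = random.randrange(256)       # overwrite it with a random byte
    return bytes(out)

# Example: generate one malformed variant of a seed document.
seed = b"example word processor document contents"
variant = fuzz_bytes(seed)
```

A real fuzzing run would generate many such variants in a loop and open each one in the target program, watching for crashes or other misbehavior.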
You might ask, "How much fuzzing is enough?" It's unlikely that you will find a critical security vulnerability with just a single fuzzed input variation. If you do, it's extremely unlikely that you've found the only one in the code and, therefore, one iteration is clearly not sufficient. On the other hand, you could easily let the fuzzer run for days or weeks, testing millions of iterations, but you still wouldn't be assured that every possibility has been covered. The rule that we use in the Microsoft Security Development Lifecycle (SDL) process is that every input must pass 100,000 fuzzing iterations without failure. If a defect is found in the input-processing code, then the module must start the fuzzing process over again after the defect has been fixed and then pass another 100,000 iterations without failure.
Another good question is, "How much of the data should be malformed?" Let's return to our word processor example. Suppose that the word processor's data-file format includes a checksum to ensure that the file hasn't been corrupted or tampered with. Unless the fuzzer is aware of this and updates the checksums of its fuzzed data files accordingly, the vast majority of the fuzz iterations it creates will be immediately discarded by the word processor due to checksum failure. This is not only a waste of time, but it also leads to a false sense of security, as the input-handling logic is never tested beyond the very shallow first level (the checksum logic).

The opposite scenario, however, can be a problem as well. Suppose that our fuzzer is aware of the checksum and modifies its fuzzed output data files so that the checksum is always valid. What happens now if there is a defect or security vulnerability in the checksum logic itself? That logic is never tested, because the fuzzer has implicitly assumed it is always correct, an assumption that may not be valid. To put it in higher-level terms, by giving the fuzzer knowledge of the expected input format, you're telling it to create input in the way the program expects to receive it. The whole point of fuzzing, though, is to test the program by giving it unexpected input.

This is the classic debate between "dumb" fuzzers, which have no knowledge of the expected input format, and "smart" fuzzers, which do. The reality is that neither type is sufficient on its own; to achieve the best results, both types of fuzzers should be used to test the application.
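The checksum fix-up that distinguishes a "smart" fuzzer can be sketched as follows. The file layout here is a hypothetical one invented for the example (payload bytes followed by a little-endian CRC32 trailer), not any real word processor format:

```python
import random
import struct
import zlib

def smart_fuzz(data: bytes) -> bytes:
    """Mutate the payload, then recompute the trailing checksum so the
    variant passes the parser's integrity check and exercises the deeper
    input-handling logic behind it.

    Assumed (hypothetical) layout: payload bytes + 4-byte little-endian CRC32.
    """
    payload = bytearray(data[:-4])             # strip the old checksum
    pos = random.randrange(len(payload))
    payload[pos] = random.randrange(256)       # corrupt one payload byte
    checksum = zlib.crc32(bytes(payload))      # recompute a valid CRC32
    return bytes(payload) + struct.pack("<I", checksum)
```

Note the trade-off the column describes: because this fuzzer always emits a valid checksum, the checksum-verification code itself is never stressed, which is exactly why a "dumb" fuzzer is still needed alongside it.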
If you haven't started already, I strongly suggest that you make fuzz testing part of your testing process. The material presented here doesn't even scratch the surface of the power and possibilities of fuzzing. If you're interested in learning more, I recommend reviewing the book Fuzzing: Brute Force Vulnerability Discovery by Michael Sutton, Adam Greene, and Pedram Amini.