Manual vs. Automated Code Review

The Fight for Superiority
Summary:

It's a battle between human and machine, a theme that could be ripped straight from a science-fiction story, but this one is real. It's the question many testers face when trying to determine whether human expertise and intuition can detect more security flaws than automated tests can. In this week's column, security expert Bryan Sullivan weighs both sides and offers his verdict.

Recently, I had the privilege of viewing a great presentation on security-testing strategies given by Vinnie Liu, Managing Director of Stach and Liu. The crux of Vinnie's argument was that, while many professional code reviewers and penetration testers claim that manual code review is always the best and most accurate way to find security defects, there are, in fact, situations in which automated analysis tools (either white box or black box) will outperform an expert human reviewer.

This is not to say that expert reviewers don't have their place; on the contrary, most design-level security issues cannot be found by automated tools at all. One good example of this type of vulnerability is improperly designed forgotten-password functionality. On some Web sites, when a user has forgotten his password, the application prompts him to answer questions that verify his identity, such as "What was the name of your first pet?" This is not a security problem in and of itself, but not all identity-verification questions are equally secure. One verification question I've seen on a number of Web sites is "What was the make of your first car?" The problem with this particular question is that it has only a handful of plausible answers. There aren't that many auto manufacturers in the first place, and it's unlikely that a first-time car buyer purchased a Rolls-Royce or an Aston Martin. Without knowing anything about the user, an attacker could guess Ford, Toyota, Honda, Jeep, and so on, and in most cases stumble onto the right answer within a dozen tries.
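To make the arithmetic concrete, a brute-force attack on such a question is trivial to script. The sketch below is purely illustrative: verify_answer stands in for a hypothetical forgotten-password endpoint, and the list of makes is a plausible sample I made up, not real data.

    # Illustration only: brute-forcing a low-entropy verification question.
    # verify_answer() stands in for a hypothetical forgotten-password endpoint.
    COMMON_MAKES = [
        "Ford", "Toyota", "Honda", "Chevrolet", "Nissan", "Jeep",
        "Dodge", "Volkswagen", "Hyundai", "Mazda", "Subaru", "Kia",
    ]

    def guess_first_car(verify_answer):
        """Try each common make until the verifier accepts one."""
        for attempt, make in enumerate(COMMON_MAKES, start=1):
            if verify_answer(make):
                return make, attempt
        return None, len(COMMON_MAKES)

    # If the victim's real answer is "Honda", the attacker succeeds on the
    # third guess without knowing anything about the victim.
    answer, tries = guess_first_car(lambda guess: guess == "Honda")
    print(f"Guessed {answer!r} in {tries} tries")

Twelve guesses cover the overwhelming majority of first cars, which is exactly why a question with so few plausible answers offers almost no protection unless attempts are strictly rate limited.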

The point of all this is that no automated tool could determine whether the make of your first car is a good identity-verification question. That doesn't mean humans are always better than tools, though. Once we start looking at implementation-level defects, or at vulnerabilities that arise through configuration mistakes, we see a number of cases in which a scanning tool will beat a human reviewer. I wrote a September 2008 article for StickyMinds.com titled "Warm and Fuzzy" that extolled the benefits of fuzzing for finding obscure implementation-level defects.
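To give a flavor of what fuzzing means in practice (this is my own minimal sketch, not the tooling discussed in that article), the idea is simply to mutate known-good input at random and watch the code under test for failures it was never designed to handle. Here parse_record is a hypothetical target with a deliberately planted bug:

    import random

    def parse_record(data: bytes) -> int:
        """Hypothetical code under test: the first byte is a payload count.
        The bug: the count is never checked against the input's real length."""
        count = data[0]
        return sum(data[1 + i] for i in range(count))  # IndexError if count lies

    def fuzz(seed: bytes, iterations: int = 1000) -> None:
        """Flip one random byte of a known-good input per iteration and
        report any crash in the target."""
        for i in range(iterations):
            mutated = bytearray(seed)
            pos = random.randrange(len(mutated))
            mutated[pos] = random.randrange(256)
            try:
                parse_record(bytes(mutated))
            except IndexError as exc:  # the obscure defect the fuzzer surfaces
                print(f"iteration {i}: crash {exc!r} on input {bytes(mutated)!r}")
                return

    fuzz(b"\x03abc")  # valid seed: a count of 3 followed by three payload bytes

A human reviewer might skim right past that missing length check, but even this naive single-byte mutator trips over it within a few dozen iterations.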

About the author

Bryan Sullivan

Bryan Sullivan is a security program manager on the Security Development Lifecycle (SDL) team at Microsoft. He is a frequent speaker at industry events, including Black Hat, BlueHat, and RSA Conference. Bryan is also a published author on Web application security topics. His first book, Ajax Security, was published by Addison-Wesley in 2007.
