I don't know about you, but I have tried my best to avoid security testing. For one thing, it is hard enough—dare I say impossible—to thoroughly test everything a normal user needs to do without trying to test what a malicious hacker might try to do. Even the vendor who develops the operating system and development tools that power the majority of applications has to release security patches on a daily basis. And the news keeps getting worse.
I used to think security testing meant making sure unauthorized users could not log into the application, access functionality without the proper rights, or enter bad data. But now it turns out that the very code that supports the appropriate use of the application can be hijacked behind the scenes to perform evil acts. It's bad enough for developers who have to double-think every line of code they write—and that assumes they've been trained on security risks. For testers examining applications for vulnerabilities, security testing opens up a bottomless pit of risk and responsibility.
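To make that "hijacked behind the scenes" idea concrete, here is a minimal sketch of my own (not from any particular product) showing the classic case: SQL injection. The login query is perfectly ordinary application code; the attacker never breaks in so much as gets the application's own code to do the work. The `sqlite3` in-memory database and the function names are just illustration.

```python
# Illustrative only: how ordinary, well-intentioned application code can be
# hijacked. The attacker supplies input that the code treats as part of the
# SQL statement itself.
import sqlite3

def find_user_unsafe(db, username, password):
    # VULNERABLE: user input is pasted directly into the SQL text.
    query = ("SELECT name FROM users "
             f"WHERE name = '{username}' AND password = '{password}'")
    return db.execute(query).fetchall()

def find_user_safe(db, username, password):
    # SAFE: parameterized query; input is treated as data, never as code.
    query = "SELECT name FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (username, password)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# The classic payload "' OR '1'='1" makes the WHERE clause always true,
# so the "login" succeeds with no valid password at all.
print(find_user_unsafe(db, "alice", "' OR '1'='1"))  # [('alice',)]
print(find_user_safe(db, "alice", "' OR '1'='1"))    # []
```

Note that both functions pass ordinary functional tests with valid input; only the malicious input distinguishes them, which is exactly why this class of bug slips past traditional QA.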
Frankly, it makes me want to trade in my QA career for an easy job, like running an obedience school for cats. But denial is getting harder to maintain. It's not practical for most companies to invest in a completely separate security testing team. The skills mix, which requires security, development, QA, and application subject-matter expertise, is too rare, and the additional costs are hard to sell. More and more, companies are looking to integrate security testing into development and QA.
Fortunately, there are tools that can help integrate security testing. For developers, source code analysis tools scan for known risks, holes, and traps. For operations support, firewalls, application shields, virus scanners, and other ubiquitous barriers provide a protective shell for applications. Automated test tools simulate hacker activity. These penetration tools (pen tools or app scanners for short) offer a combination of automatic and scripted capabilities that can be run against test or live sites.
The automatic functionality of these tools acts as a "crawler," navigating the application to attempt attacks. But while this technique has the advantage of being automatic in the sense that no customization or special skill is required, it has the disadvantage of low coverage, probably 20 percent or less. Automatic tools simply don't have the application awareness to access—let alone exercise—all the layers of a typical application interface.
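A toy sketch of my own (real scanners are far more sophisticated) shows the principle and its blind spot. The crawler parses a page for links and form fields, then plans an attack payload for every field it found. The payload list and names here are invented for illustration; the point is that anything not reachable by a link or visible as a form field is never probed, which is where the low coverage comes from.

```python
# Toy illustration of a pen tool's "crawler" mode: harvest what one page
# exposes, then plan a probe for every input found.
from html.parser import HTMLParser

PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]  # sample probes

class FormFinder(HTMLParser):
    """Collect links and form-input names from one page of HTML."""
    def __init__(self):
        super().__init__()
        self.links, self.inputs = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])   # pages to crawl next
        elif tag == "input" and "name" in attrs:
            self.inputs.append(attrs["name"])  # fields to attack

def plan_attacks(page_html):
    """Return the (field, payload) probes the crawler would attempt."""
    finder = FormFinder()
    finder.feed(page_html)
    return [(field, p) for field in finder.inputs for p in PAYLOADS]

page = ('<a href="/account"></a>'
        '<form><input name="user"><input name="pwd"></form>')
print(plan_attacks(page))  # 2 fields x 2 payloads = 4 planned probes
```

Any screen hidden behind JavaScript navigation, multi-step workflows, or application-specific logic never shows up in `finder.links`, so the tool never even knows it exists.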
To get acceptable coverage, these tools require customization in the form of application-specific scripting. Furthermore, they require the user to be knowledgeable about what the risks might be, which excludes most traditional QA testers. And if you think developing and maintaining automated scripts for your functional and load testing is already overtaking your miserly resources and schedule, just wait until you acquire yet another tool and its baggage of scripts.
Even if a pen tool exposes a problem, the most information it usually offers is the URL at fault. The developer still has to trawl through the code trying to spot the weakness and correct it—again, assuming the developer has the skills to recognize and remedy the problem. In QA, we try to control the risks we introduce ourselves: missing requirements, poor design, and sloppy coding. In security testing, we are trying to read the minds of people we've never met and whose motives and methods we can't predict. All of this begs for more training, more resources, and more time, all of which are in famously short supply in every organization I know.