Security Testing: What Fresh Hell Is This?

I don't know about you, but I have tried my best to avoid security testing. For one thing, it is hard enough—dare I say impossible—to thoroughly test everything a normal user needs to do without trying to test what a malicious hacker might try to do. Even the vendor who develops the operating system and development tools that power the majority of applications has to release security patches on a daily basis. And the news keeps getting worse.

I used to think security testing meant making sure unauthorized users could not log into the application, access functionality without the proper rights, or enter bad data. But now it turns out that the very code that supports the appropriate use of the application can be hijacked behind the scenes to perform evil acts. It's bad enough for developers who have to double-think every line of code they write—and that assumes they've been trained on security risks. For testers examining applications for vulnerabilities, security testing opens up a bottomless pit of risk and responsibility.

Frankly, it makes me want to trade in my QA career for an easy job, like running an obedience school for cats. But denial is getting harder to maintain. It's not practical for most companies to invest in a completely separate security testing team: the skills mix, spanning security, development, QA, and application subject matter expertise, is too rare, and the additional costs are hard to sell. More and more, companies are looking to integrate security testing into development and QA.

Fortunately, there are tools that can help integrate security testing. For developers, source code analysis tools scan for known risks, holes, and traps. For operations support, firewalls, application shields, virus scanners, and other ubiquitous barriers provide a protective shell for applications. Automated test tools simulate hacker activity. These penetration tools (pen tools or app scanners for short) offer a combination of automatic and scripted capabilities that can be run against test or live sites.

The automatic functionality of these tools acts as a "crawler," navigating the application and attempting attacks as it goes. But while this technique has the advantage of requiring no customization or special skill, it has the disadvantage of low coverage, probably 20 percent or less. Automatic tools simply don't have the application awareness to access, let alone exercise, all the layers of a typical application interface.
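To picture what that crawler is doing, here is a minimal sketch, not any vendor's actual scanner: it replays a couple of canned attack payloads against each query parameter of a page and checks whether the payload comes back in the response. The URL, parameter names, and payload list are all hypothetical stand-ins.

```python
# Toy illustration of the "crawler" approach: replay canned attack payloads
# against each query parameter and see if the payload is echoed back.
# A real scanner crawls links and forms and knows thousands of payload
# variants; the URL and parameters below are purely hypothetical.
import requests

PAYLOADS = [
    "<script>alert(1)</script>",   # naive reflected-XSS probe
    "' OR '1'='1",                 # naive SQL-injection probe
]

def probe(url, params):
    findings = []
    for name in params:
        for payload in PAYLOADS:
            attempt = dict(params, **{name: payload})
            resp = requests.get(url, params=attempt, timeout=10)
            if payload in resp.text:
                findings.append(f"{url} parameter '{name}' echoes {payload!r}")
    return findings

if __name__ == "__main__":
    # Point this only at a test site you own.
    for finding in probe("https://test.example.com/search", {"q": "widgets"}):
        print(finding)
```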

To get acceptable coverage, these tools require customization in the form of application-specific scripting. Furthermore, they require the user to be knowledgeable about what the risks might be, which excludes most traditional QA testers. And if you think developing and maintaining automated scripts for your functional and load testing is already overtaking your miserly resources and schedule, just wait until you acquire yet another tool and its baggage of scripts.
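What that application-specific scripting boils down to, in a rough sketch where every endpoint, field name, and credential is made up, is teaching the tool enough about the application to get past the front door before the generic attacks start:

```python
# Sketch of the application knowledge a scanner has to be given before its
# generic attacks reach anything interesting: authenticate and walk to the
# screen that should be probed. All endpoints and form fields are hypothetical.
import requests

def authenticated_session(base_url):
    session = requests.Session()
    session.post(f"{base_url}/login",
                 data={"user": "qa_tester", "password": "s3cret"})
    session.get(f"{base_url}/orders/new")   # navigate to the page under test
    return session
```

That session, cookies and all, can then be handed to probing logic like the sketch above, and every change to the login page or workflow means another round of script maintenance.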

Even if a pen tool exposes a problem, the most information it usually offers is the URL at fault. The developer still has to trawl through the code trying to spot the weakness and correct it—again, assuming the developer has the skills to recognize and remedy the problem. In QA, we try to control the risks we introduce ourselves: missing requirements, poor design, and sloppy coding. In security testing, we are trying to read the minds of people we've never met and whose motives and methods we can't predict. All of this begs for more training, more resources, and more time, all of which are in famously short supply in every organization I know.

Companies invest heavily to open their doors to the world on the Web but rarely focus on hiring enough bouncers to keep out the riffraff, even though the risks are real and can be devastating.

Are you depressed yet? As I said, this whole area is so overwhelming that I've worked overtime to avoid it. And while denial is not usually a successful strategy for something this important, it may have served me well in this particular case. Now there are tools that combine the skills needed to test code for weaknesses.

These new tools hitchhike onto existing test tools, even manual testing, and run in the background, watching everything the application does and looking for security holes. They can also trace the flow of information through the application on the server side and figure out whether it offers the potential for mischief. I don't have to see it or even understand it; I just have to invoke it. These tools also provide security test coverage metrics so I can tell when I need to expand my scope.
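One common way such tools catch that kind of mischief is taint tracking: tag any data that arrives from a user as untrusted, and flag it if it reaches a sensitive operation without passing through a sanitizer. The sketch below is only a toy of that idea; real tools do this by instrumenting the runtime, and every name in it (Tainted, run_sql, sanitize) is invented for illustration.

```python
# Toy illustration of taint tracking: strings that arrive from a user carry
# a "tainted" marker, and a sensitive sink reports any query still tainted.
# Real tools instrument the runtime; nothing here is a real API.

class Tainted(str):
    """A string that remembers it came from an untrusted source."""
    def __add__(self, other):
        return Tainted(str.__add__(self, str(other)))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def sanitize(value):
    # Stand-in for real escaping or parameterization; returning a plain str
    # drops the taint marker.
    return str(value).replace("'", "''")

def run_sql(query):
    # The "sink": report instead of executing if untrusted data got this far.
    if isinstance(query, Tainted):
        print("FINDING: unsanitized user data reached the SQL sink:", repr(query))
        return
    print("executing:", query)

user_input = Tainted("42' OR '1'='1")  # pretend this came in on the request
run_sql("SELECT * FROM orders WHERE id = '" + user_input + "'")            # flagged
run_sql("SELECT * FROM orders WHERE id = '" + sanitize(user_input) + "'")  # passes
```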

Even better, when such tools spot an issue they don't just announce it; they give me the exact line of code and a detailed description of why it is a problem and, get this, how to fix it. I can just paste that into a defect report for my developer and come off looking like a security genius. This technology exists; you just have to seek it out.
