The Case for Cooperation between White-Box and Black-Box Test Tools


Although white-box and black-box testing both produce good results, they are more reliable when done together. Bryan Sullivan lists the strengths and weaknesses of each testing approach and explains how "gray box" testing should fit into your testing strategy.

Whether its focus is on detecting functional defects, performance bottlenecks, scalability issues, or security vulnerabilities, virtually every automated Web-application-testing tool on the market today follows one of two separate testing methodologies:  the tool either analyzes the source code of the target application (white-box analysis) or it runs the application and interacts with it through its UI (black-box analysis). The relative merits of each of these approaches are hotly debated within the testing community, but the truth is that neither approach is adequate separate from the other. Black-box analysis is like chocolate and white-box analysis is like peanut butter: each is good on its own, but by combining the two we can create a much more delicious outcome.

The main problem with using each type of tool individually is that one alone doesn't provide adequate code coverage. In this context, code coverage refers to the ability of the testing tool to identify the entire testing surface while thoroughly examining every path of execution of the application. Unless you can be sure that your entire Web application has been tested, you cannot rely on the results of the analysis. This assurance that the entire application has been examined is especially important for security analysis tools. A single functional defect in a page visited by only one user in a thousand may have little impact on the overall perceived quality of the application, but a single security vulnerability on that same page may open the entire site to a devastating attack.

Between white-box and black-box analysis, which offers better code coverage? Source analyzers definitely have the edge at finding every page in the application: they already have a list of pages from the source code. They don't need to resort to fragile techniques such as spidering the site or prompting the user to provide a list of pages, as black-box analysis tools do. Pages like administrator portals that are not linked from any other page in the site may be completely missed by black-box tools but would be found by white-box tools. White-box tools also have the advantage at finding unusual edge cases in the code. An online shopping site might process orders made on leap days (February 29) differently from orders made every other day of the year, and a black-box tool might never test this condition.
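The leap-day scenario can be made concrete with a minimal sketch, shown here in Python for brevity. The function and discount logic are invented for illustration; the point is that a source analyzer sees both branches immediately, while a black-box tool would have to happen to submit an order dated February 29 to exercise the second one.

```python
from datetime import date

# Hypothetical order-processing logic with a rarely executed leap-day branch.
def order_discount(order_date: date, subtotal: float) -> float:
    if order_date.month == 2 and order_date.day == 29:
        # Leap-day promotion: a path black-box testing is unlikely to reach
        return subtotal * 0.5
    return subtotal

print(order_discount(date(2024, 2, 29), 100.0))  # leap-day path
print(order_discount(date(2024, 3, 1), 100.0))   # everyday path
```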

On the other hand, source analyzers cannot analyze modules for which they don't have the source code. For Web applications, these modules could include controls or libraries purchased from third-party vendors, database-stored procedures, external Web services, or modules native to the underlying Web server or application framework. Also consider that newer Web technologies, such as Ajax, rely heavily on client-side JavaScript. JavaScript code can programmatically modify itself at runtime. This makes reliable analysis of a complex JavaScript application virtually impossible when using only white-box tools, but a black-box analysis tool would handle this situation easily.
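The problem of code assembled at runtime is not unique to JavaScript; the following sketch illustrates it in Python, using an invented `dispatch` function. Because the code string is built and evaluated while the program runs, a purely static white-box analyzer has nothing concrete to inspect, whereas a black-box tool simply runs the application and observes the actual behavior.

```python
# Hypothetical sketch: the handler's source does not exist until runtime,
# so a source analyzer cannot trace the path that will actually execute.
def dispatch(action: str) -> str:
    handler_source = "lambda: 'handled:" + action + "'"  # code built on the fly
    handler = eval(handler_source)  # invisible to static analysis
    return handler()

print(dispatch("login"))
```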

Given that neither type of tool provides assurance of complete code coverage on its own, your first instinct might be to use both a white-box and a black-box analysis tool and then combine the results. This is a great idea; when it comes to testing, two heads are almost always better than one. Unfortunately, it still doesn't solve our problem. Remember the relative weaknesses of each tool: black-box tools can have trouble finding unlinked pages and unusual execution paths in the site, while white-box tools can have trouble with pages that use third-party controls or rely heavily on JavaScript. What if an application contained a page with both of these troublesome attributes, like an Ajax-enabled, unlinked administrator portal? Using two tools won't help in this situation: the white-box tool will find the portal but won't be able to test it, while the black-box tool won't be able to find the portal at all. This is not as uncommon a scenario as it might seem, especially given the exponential growth of interest in Ajax.

The most complete method of ensuring code coverage in Web applications would be to use a hybrid white-box/black-box analysis tool. This approach is sometimes referred to as gray-box testing. Gray-box testing usually implies that the tester does not have complete access to the source code of the application, though. In this case, however, the hybrid tool will have complete access to the source and, ultimately, will perform both a complete white-box and a complete black-box analysis. Additionally, instead of just running each method independently, the tool will apply both techniques in cooperation with each other. First, the white-box component executes, finding defects in the code and compiling a complete list of the site's pages and execution cases (like the leap-day exception). When it finishes, the white-box component passes this list to the black-box component, which ensures that each page and execution case in the list is thoroughly scanned.
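The cooperative workflow can be sketched as a simple two-stage pipeline. Everything here is hypothetical and heavily stubbed; no real tool's API is implied. The essential idea is only the hand-off: the white-box stage enumerates targets from source, and the black-box stage scans exactly that list.

```python
# Hypothetical sketch of the gray-box hand-off: white-box enumeration
# feeding black-box scanning. All names and behavior are invented.

def white_box_analyze(source_files):
    """Enumerate pages and special execution cases from source (stubbed)."""
    pages = [f.replace(".py", ".html") for f in source_files]
    cases = ["leap-day order path"]  # edge cases spotted in the code
    return pages, cases

def black_box_scan(pages, cases):
    """Exercise every page and case the white-box pass reported (stubbed)."""
    results = {}
    for target in pages + cases:
        results[target] = "scanned"  # a real scanner would send requests here
    return results

pages, cases = white_box_analyze(["admin_portal.py", "checkout.py"])
report = black_box_scan(pages, cases)
print(len(report))  # every discovered target, even the unlinked portal, is scanned
```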

In this way, the strength of each type of tool is used to overcome the other's weakness. The cooperation between the separate components is the key that makes code coverage more complete. Hopefully the white-box and black-box analysis camps will take inspiration from the peanut butter cup and start working together to create more reliable and more trustworthy Web-application analysis tools.

StickyMinds is a TechWell community.