As Web services increase in complexity and connectivity, security is growing as a major concern. Many security breaches have been the result of poorly tested software that allows unexpected inputs to pass and weaken security measures. Such inputs can create conditions in which intruders can obtain access to parts of the system that would otherwise be secure. One effective way for development teams to prevent unexpected inputs is to perform thorough white-box testing at the unit level. Unlike specification testing (which tests that code behaves as it was intended), white-box testing checks for the conditions and inputs that are not expected, thereby enabling developers to more thoroughly test for what they cannot foresee. By performing such testing at the unit level, developers can quickly and easily identify and correct any weaknesses before security breaches have the opportunity to occur.
Web service security is much more critical and complicated than most people in the industry seem to realize. Most current security discussions address identity authentication and message exchange privacy. These are undoubtedly critical security issues, but solving them will not guarantee security. In fact, I expect security breaches to remain a serious threat even after these authentication and privacy issues are solved. Why? Because Web services' fundamental architecture opens the door for serious security breaches. Anyone who passes a Web service's first layer of defense can reach the parts of your application you made available, but also might be able to access and manipulate parts that you thought were private. Fortunately, practices such as unit testing can help you create a multilayer defense that prevents authorized visitors from performing unauthorized actions.
Establishing the First Layer of Defense
Most current discussions of Web service security focus on the mechanics of the following fundamental security issues:
- Privacy: For many services, it is important that messages are not visible to anyone except the two parties involved. This means traffic will need to be encrypted so that machines in the middle cannot read the messages.
- Message Integrity: This provides assurance that the message received has not been tampered with during transit.
- Authentication: This provides assurance that the message actually originated at the source from which it claims to have originated. You might need to not only authenticate a message, but also prove the message origin to others.
- Authorization: Clients should only be allowed to access services they are authorized to access. Authorization requires authentication because without authentication, hostile parties can masquerade as users with the desired access.
For a detailed discussion of these issues, see the article I wrote with Jim Clune, "Security Issues with SOAP," Crosstalk, July 2002.
By dealing with these four fundamental security issues in whatever manner makes most sense for your Web service, you will establish a critical first line of defense against security breaches. If you don't have at least this layer of defense, your Web service (and all of the data that passes through it) will be wide open to anyone who wants to access and manipulate it.
Understanding the Potential for Danger
The very nature of Web services provides clients who pass the first layer of defense with unprecedented access to the system's inner parts. While other types of applications have executables that act as a skin that covers and protects the application's inner functionality, Web services peel back this skin and actually expose the system's inner functionality to outside Web service clients. This is done by providing a public interface through which clients can invoke the service's methods. However, through this interface, clients can access and manipulate not only the exposed methods, but also any part of the application that can be accessed from the exposed methods.
If it's possible to wreak havoc on your system by executing methods anywhere within your Web service, you'd better be 100 percent certain that clients cannot reach these methods through the designated service entry points. Often, unexpected paths through an application provide clients access to areas that you thought were private. If the service is implemented in C or C++, these unexpected paths can stem from obvious sources such as buffer overruns or general data corruption. However, even "safer" languages such as Java can be tricked into providing unexpected access to supposedly private methods.
For an example of how Java provides opportunities for security breaches, imagine that you are programming in Java and the names of the methods you are invoking are specified dynamically (and are thus unknown until runtime) because you are using reflection to invoke methods. If clients pass certain parameters in this situation, they might be able to invoke methods that you would not expect to be accessible (based on a typical analysis of the code).
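To make this concrete, here is a minimal sketch of the reflection risk described above. The class name, method names, and return values are all hypothetical; the point is that a dispatcher which looks up methods by a client-supplied name can reach methods the designer never intended to expose:

```java
import java.lang.reflect.Method;

// Hypothetical service that dispatches to methods named at runtime.
public class ReflectiveDispatcher {
    // The method the designer intended clients to reach.
    public String getGreeting(String name) { return "Hello, " + name; }

    // Supposedly internal -- yet reachable through dispatch() below,
    // because dispatch() will invoke ANY public one-String-arg method.
    public String dumpConfiguration(String ignored) { return "db-password=secret"; }

    // Looks up and invokes whatever public method the client names.
    public String dispatch(String methodName, String arg) {
        try {
            Method m = getClass().getMethod(methodName, String.class);
            return (String) m.invoke(this, arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A static analysis of the call sites would never show `dumpConfiguration` being invoked, yet a client who guesses (or probes for) the name can call it through `dispatch`.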
You can still get into trouble with Java-based services even if you're not using reflection and all your method invocations are explicit. One way to get into trouble is to leave an opening through which a hacker can insert a jar file onto your machine or into your classpath. If the hacker successfully adds a jar file, client method invocations will call the hacker's methods instead of the service's original methods. Runtime exception handling code can open another security hole. For an example of how runtime exceptions can lead to security breaches, imagine that a runtime exception propagates up several layers of your call stack before it is handled, and the handling code exposes some functionality that you did not want exposed. It would have been difficult for you to predict this behavior because it stems from exception handling code that is far removed from the service code you are working on.
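The exception-handling hole can be sketched in a few lines. Everything here is hypothetical (the class, the "debug" handler, and what it leaks); the pattern to notice is that a catch-all handler written far from the throwing code quietly exposes internal state to the client:

```java
public class DiagnosticService {
    // Deep in the call stack: throws a runtime exception on malformed input.
    static int parsePositive(String s) {
        int v = Integer.parseInt(s);                  // may throw NumberFormatException
        if (v <= 0) throw new IllegalArgumentException("not positive");
        return v;
    }

    // Several layers up: a catch-all handler added for "convenience".
    public static String handleRequest(String input) {
        try {
            return "ok:" + parsePositive(input);
        } catch (RuntimeException e) {
            // Security hole: internal details escape to the caller,
            // even though no service method deliberately exposes them.
            return "debug: workingDir=" + System.getProperty("user.dir");
        }
    }
}
```

A client who sends malformed input never sees an error; instead the handler hands back information the service's public interface was never meant to reveal.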
While hackers can occasionally access the inner workings of a traditional application (for example, by causing memory overwrites or exceptions), it is markedly easier for hackers to do so with Web services because Web services allow the initial access into the application. If you have a traditional application, hackers trying to access the parts of the program you want to protect would have to do something comparable to picking the lock on your home's front door, then locating your private cash stash. With Web services, you hand the crook the key to the house and hope that he doesn't stumble upon something you don't want him to take. Fortunately, you can cut off access to private areas of the application by establishing security boundaries within the Web service. A solid security boundary will protect the private areas of the application like a vault protects the items locked within it; when you have such a boundary/vault, you can rest assured that whoever gains access to your service/house will not be able to touch the methods/items you are trying to protect.
Verifying Inner Security Boundaries with Unit Testing
I've found that unit testing is one of the best ways to ensure that the parts of your application that you intend to be protected are actually protected. By "unit testing," I mean "testing the smallest unit of an application (a class in Java or a function in C), module, or submodule apart from the rest of the system."
Unit testing is helpful for this type of security testing because when developers and testers test at the unit level, it is considerably easier for them to test all of the possible paths that hackers could take during their attempts to reach unexposed methods or perform illegal operations. Developers sometimes make dangerous assumptions such as "There is no way to reach Method D through Method A." Unit testing—in particular, white-box testing (trying to fully exercise all paths through the unit with a wide range of unexpected inputs)—is probably the best way to verify that these assumptions are correct.
White-box unit testing involves designing inputs that thoroughly exercise the exposed methods, then examining how the application handles the test inputs. For example, if you wanted to check if any possible uncaught runtime exceptions cause a service to expose "protected" methods, you would flood the service's exposed methods with a wide variety of inputs to try to flush out all possible exceptions, then examine how the service responds to each exception. If you wanted to verify that hackers could not place Java .jar files in your application and/or CLASSPATH, you would design test cases that attempt to add such files through every possible service entry point, then see whether these attempts fail. If you find these or other security holes during the testing phase, you have the opportunity to fix the problem before an actual security breach occurs.
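A white-box "flood" of this kind can be sketched as follows. The method under test and the hostile inputs are hypothetical, and plain assertions stand in for a JUnit harness; the idea is to hammer an entry-point validation routine with the malformed inputs an attacker would try:

```java
public class BoundaryTest {
    // Hypothetical entry-point check: only simple identifiers should pass.
    static boolean isSafeId(String id) {
        return id != null && id.matches("[A-Za-z0-9_]{1,32}");
    }

    // White-box flood: unexpected inputs aimed at every rejection path.
    static boolean floodWithHostileInputs() {
        String[] hostile = {
            null,                              // missing value
            "",                                // empty value
            "../../etc/passwd",                // path traversal
            "id; rm -rf /",                    // command injection
            "a".repeat(1000),                  // oversized input (Java 11+)
            "name\u0000hidden",                // embedded NUL
            "<script>alert(1)</script>"        // markup injection
        };
        for (String s : hostile) {
            if (isSafeId(s)) return false;     // a hostile input slipped through
        }
        return true;
    }
}
```

If any hostile input slips through, the test fails during development, long before an attacker gets the chance to find the same hole in production.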
How do you determine how much testing is enough? Ideally, you want to check whether any possible input causes unexpected access, but testing every possible input to a method is typically not feasible. A more practical goal is to try to cover each path through the unit at least once.

Establishing a Final Defense with Design by Contract
If a security breach would have very serious consequences for your application, you might want to consider building a final layer of security into the code itself. If your Web service is built using Java, you can do this with Design by Contract (DbC).
DbC was designed to express and enforce a contract between a piece of code and its caller. This contract specifies what the callee expects and what the caller can expect (for example, expectations about what inputs will be passed to a method or conditions that a particular class or method should always satisfy). DbC tools generally have developers incorporate contract information into comment tags and then instrument the code with a special compiler to create assertion-like expressions out of the contract keywords. When the instrumented code is run, contract violations are typically sent to a monitor or logged to a file. The degree of program interference varies. You can often choose 1) nonintrusive monitoring (problems are reported, but program execution is not affected), 2) having the program throw an exception when a contract is violated, or 3) having the program perform a user-defined action when a contract is violated.
DbC can enforce security boundaries by ensuring that the application never accepts inputs known to lead to security problems or enters a state known to compromise security. The first step in creating an infrastructure that provides these safeguards is to use unit testing to determine what inputs and situations can make the service vulnerable to security breaches, then to write contracts that explicitly forbid these inputs and situations. Next, you configure the program so that whenever the conditions specified in the contract are not satisfied, the code fires an exception and the requested action (for example, a method invocation) is not allowed to occur. When this infrastructure is developed after thorough unit testing, it provides a very effective last layer of defense.
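The step above can be sketched like this. The service, its contract conditions, and the comment-tag syntax are all hypothetical (a DbC tool such as the comment-tag instrumenters described earlier would generate the checks from the tags; here the equivalent guards are written by hand so the sketch is self-contained):

```java
public class AccountService {
    /**
     * Contract a DbC tool would instrument from comment tags:
     * @pre accountId != null && accountId.matches("[0-9]{8}")
     * @pre amount > 0 && amount <= 10000
     */
    public static String withdraw(String accountId, int amount) {
        // Hand-written equivalent of the instrumented contract: forbid
        // the inputs unit testing showed could compromise the service.
        if (accountId == null || !accountId.matches("[0-9]{8}"))
            throw new IllegalArgumentException("contract violated: bad accountId");
        if (amount <= 0 || amount > 10000)
            throw new IllegalArgumentException("contract violated: bad amount");
        return "withdrew " + amount + " from " + accountId;
    }
}
```

When a contract is violated, the exception fires and the requested action never occurs, which is exactly the last-layer behavior described above.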
Unless the industry develops an easy way to ensure Web service security, I fear that the security issues inherent in the very nature of Web services will make it difficult (though not impossible) to apply them in situations where security is of utmost importance. However, Web services can nevertheless be applied easily and profitably in many situations where security concerns are irrelevant. I predict that Web services will enjoy the most success and acceptance in the variety of possible implementations that do not involve security issues.