Uncovering Hidden Boundary Values in Testing

Boundary value analysis is a staple of test design, but sometimes the boundaries are not obvious to the black-box tester. These are called hidden boundaries. This article provides several examples of hidden boundaries, along with some tips for designing your test plan to reveal them.

Using boundary value analysis, coupled with equivalence partitioning, is one of the fundamental practices of test design. The theory is that for a particular input, the most interesting values to test are those at the boundaries of the input range. The values in between are often considered “equivalent” when it comes to testing.

For example, if your app has a feature to enter a discount for a price, the valid range is probably 0 percent to 100 percent. The values to test would be those around the boundaries: -1 percent, 0 percent, 100 percent, and 101 percent. You would probably also choose to test a “normal” discount of 20 percent. If those work correctly, you are not likely to find bugs by testing every other possible discount. You are looking for correct behavior at the boundaries and just outside them.
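
The discount example can be sketched as a small table-driven test. The validator below is hypothetical, standing in for whatever input check the app actually performs:

```python
def is_valid_discount(percent: int) -> bool:
    """Hypothetical validator: a discount is valid from 0 to 100 percent."""
    return 0 <= percent <= 100

# Boundary values just inside and outside the valid range,
# plus one "normal" value from the middle of the equivalence partition.
cases = {-1: False, 0: True, 20: True, 100: True, 101: False}
for value, expected in cases.items():
    assert is_valid_discount(value) == expected
```

Keeping the cases in a table like this makes it easy to add the hidden boundaries discussed later without rewriting the test.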

However, sometimes there are boundaries that are not apparent. I’ve been bitten by hidden boundaries, resulting in bugs that escaped to customers. This doesn’t happen very often, but when it does, I get to hear that magical phrase: “Didn’t you test this?” 

In this article, I’m going to detail several causes of hidden boundaries: data-type boundaries in the underlying implementation (for example, 16-bit to 32-bit data types); trust boundaries, especially in a redesign or refactor; data values with special meaning; and Easter eggs.

Data-Type Boundaries

One telecommunications management platform I tested had a feature to configure various parameters on a remote device. One parameter had a range between 1 and 1,024,000. The UI stored this parameter in a 32-bit integer and correctly handled the input validation and system behavior for the expected boundary values (0, 1, 1,024,000, and 1,024,001) and several normal values expected to be used by customers. We did not exhaustively test every value, as the values in between were seen as equivalent to each other.

However, there was a protocol in the communications network between the management platform and the remote device that limited the data types to 16 bits. The communications module split the 32-bit value into two 16-bit values and transmitted them separately. This is where the bug lay.

When the customer set that parameter to 65,536, it was instead set to 0. The code was supposed to send the parameter as two values, 65,535 and 1. The value of 65,535 is the maximum number represented by 16 bits. The code to split the values was off by 1; it tried to put 65,536 into that 16-bit value.
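
A common, safe way to carry a 32-bit value across 16-bit fields is a shift-and-mask split with a round-trip check. This is a sketch of that general technique, not the protocol from the story (which appears to have sent two 16-bit addends); the point is that 65,535/65,536 is exactly where such code breaks if it is off by one:

```python
def split_u32(value: int) -> tuple[int, int]:
    """Split a 32-bit unsigned value into high and low 16-bit words."""
    assert 0 <= value <= 0xFFFFFFFF, "value must fit in 32 bits"
    high = (value >> 16) & 0xFFFF
    low = value & 0xFFFF
    return high, low

def join_u32(high: int, low: int) -> int:
    """Recombine the two 16-bit words into the original 32-bit value."""
    return (high << 16) | low

# The hidden boundary: 65,535 fits in one 16-bit word; 65,536 does not.
assert split_u32(65_535) == (0, 65_535)
assert split_u32(65_536) == (1, 0)
assert join_u32(*split_u32(1_024_000)) == 1_024_000
```

A round-trip assertion like the last line, run over the 16-bit boundary values, would have caught this class of bug before it reached a customer.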

The boundary between 65,535 and 65,536 was not apparent in the UI. It didn’t show up in the specifications or the test plan. It was only apparent if one understood the whole system involved with making that setting.

Now, when I see a numeric input that accepts larger numbers, I test the boundaries that may be imposed by the underlying data representation: around 255, 65,535, and 4,294,967,295, the maximums of 8-, 16-, and 32-bit unsigned integers.
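
Those probes can be generated mechanically. This is a sketch; the values would feed whatever interface your system under test exposes:

```python
# Maximums of common unsigned integer widths: 8-, 16-, and 32-bit.
DATA_TYPE_LIMITS = [2**8 - 1, 2**16 - 1, 2**32 - 1]

def boundary_probes(limit: int) -> list[int]:
    """Values just below, at, and just above a suspected hidden boundary."""
    return [limit - 1, limit, limit + 1]

# All probe values to try against a numeric input, in addition to the
# documented boundaries of its stated range.
probes = [v for limit in DATA_TYPE_LIMITS for v in boundary_probes(limit)]
```

Note that `limit + 1` (e.g., 65,536) is included deliberately: as the story above shows, the value just past the representable maximum is often the one that misbehaves.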

Trust Boundaries

One tenet of software architecture is to define trust zones. If a client request comes from within the trusted zone, the receiving API can assume the call has no malicious intent and can bypass some of the input validation. This can improve system performance and developer productivity.

On one project, we were making a concerted, rapid effort to expose our underlying business logic to external companies through web services. This project was called service enablement and was largely putting a web services wrapper around our existing API calls.

These API calls were formerly available only to modules internal to our application, and they were designed with that in mind. Consequently, the inputs were assumed to come from trusted sources, and input validation was kept lightweight to save coding and processing time.

However, with the service enablement project, some of those API calls were exposed to external callers. The “happy path” test cases used the same inputs as our application, and all was well—until we started testing from a web services perspective and being skeptical about the inputs.

One particular field was readable and writable in the API, but logically, it was meant to be read-only. The business logic calculated that value based on data from the database. The UI only read that value, but once the developers exposed that API, external callers were able to write it, putting inconsistent data into the database.
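
One way to close that gap is to validate at the new trust boundary rather than relying on callers to behave. This is a minimal sketch with an invented field name, not the project’s actual code:

```python
# Hypothetical request validation at the exposed service boundary.
# Fields calculated by business logic must not be writable by callers,
# even if the underlying internal API would technically accept them.
READ_ONLY_FIELDS = {"account_balance"}  # invented example of a derived field

def validate_update(payload: dict) -> None:
    """Reject any update request that tries to write a derived field."""
    forbidden = READ_ONLY_FIELDS & payload.keys()
    if forbidden:
        raise ValueError(f"read-only fields in request: {sorted(forbidden)}")
```

A test plan for a service-enablement project would include exactly these cases: attempt to write every field the UI treats as read-only, and expect a rejection.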

Any time the trust boundary is moved or assumed, there is the potential for serious bugs. It’s a good practice to explicitly check the trust boundaries with the developers and design your test plan accordingly.

Special Data Values

Many apps have special behavior for specific values of data. These behaviors might not be visible from the specification or the interface, but they could cause bugs. 

One system had a special provision for testing billing: a unique combination of credit card number and other data in the user’s profile. It was meant to keep testers and test accounts from being billed. However—you guessed it—a bug in that special credit card check caused a number of customers to get our service for free.

This wasn’t widespread or a devastating outcome, and certainly those customers didn’t mind, but it exposed the danger of using special data handling to infer that an account was a test account. A better design would have been to explicitly designate those accounts as test accounts.
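
The contrast between the two designs can be sketched briefly. Both the sentinel card number and the field names here are invented for illustration:

```python
# Fragile design: infer "test account" from a magic data value.
MAGIC_TEST_CARD = "4111111111111111"  # hypothetical sentinel value

def is_test_account_by_magic(card_number: str) -> bool:
    # Any bug in this comparison (or any customer who happens to match)
    # silently changes billing behavior.
    return card_number == MAGIC_TEST_CARD

# Better design: an explicit designation, independent of customer data.
def is_test_account_by_flag(account: dict) -> bool:
    return account.get("is_test_account", False)
```

With an explicit flag, the billing decision no longer hinges on special handling of data that real customers also supply.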

Special handling for specific data is a good area to explore when looking for bugs.

Easter Eggs

An Easter egg is some special functionality built into an app—perhaps something fun, like a mini game or a credits scroll with the team name. The first Easter egg in software was in the Atari 2600 game Adventure. If players picked up a certain invisible object, they would enter a secret room and see a message from the game creator. This room did not appear in any of the documentation or tests, because no one else knew about it.

Easter eggs are fun, but they might expose vulnerabilities, especially if not tested thoroughly. One buggy Easter egg I’ve seen was a special logo that replaced our normal brand on certain days. For example, on a holiday like Independence Day, the logo would change colors and include a fireworks graphic. That’s pretty fun—except for the time it didn’t work and users saw a 404 error instead. The graphic was corrupted, and we hadn’t tested that particular holiday date prior to release.

What to Do about Hidden Boundaries

By their nature, hidden boundaries are hard to find, especially in black-box testing. Randomly searching for them is difficult and has a low probability of success.

However, not all is lost. Here are a few tips to deal with hidden boundaries. 

  • Attend the design reviews and look for mismatches in data types and any data transformations. Inquire about the transformations and design tests as appropriate.
  • For numeric data input, test the potential boundaries defined by the underlying data types.
  • Study the code packages and look for any files or resources that may be part of an Easter egg.
  • Ask the developers about boundaries that might exist and about any Easter eggs.

The potential existence of hidden boundaries gives good reason to peer inside the black box to examine some of the underlying architecture and implementation for your application. Hopefully, this article inspires you to explore the application and expose some potential problems.

StickyMinds is a TechWell community.