Designing Logical Load Tests

[article]
Summary:

The notion of "load testing" is often ambiguous, meaning different things to different people. Identifying exactly what the load parameters are for a given test is a crucial part of quantifying the goal of a load test. In this article the author explores the idea of load testing and offers several techniques for achieving good data points and quantifiable results through stress and load tests.

Load testing is an ambiguous and strikingly imprecise technique used by quality assurance engineers to validate functionality when the product under test experiences abnormally high amounts of traffic or usage. Here are some reasons that load testing tends to be difficult to quantify:

  1. We're dealing with potentially very high numbers to convey peak usage of systems, and many times the numbers will be inaccurate. For example,
    "Technically, max users on our Web site figures out to 103,894 but we'll just round down to 100,000."
  2. Frequently, load testing is at best a scaled-down simulation, rather than a reproduction of the actual load in a real-world environment. For example, "Production has ten Web servers behind a load balancer. At 100,000 hits an hour, that's 10,000 hits per Web server. Let's simulate 10,000 hits on a single Web server, and we'll have validated the system by calculation."
  3. Load testing is frequently performed at the system level, thus touching potentially hundreds of data validation points, but not actually testing any one of them.
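The arithmetic behind the scaled-down simulation in point 2 is worth spelling out, because the division itself is exactly where the "validation by calculation" fallacy hides. This is a minimal sketch using the hypothetical numbers from the example above:

```python
# Hypothetical numbers from the example: scale a production load down
# to a single server by simple division.
total_hits_per_hour = 100_000
web_servers = 10

hits_per_server = total_hits_per_hour // web_servers  # 10,000

# The division is correct arithmetic, but hitting one server at this
# rate says nothing about shared resources (database, load balancer,
# network) that only saturate under the full combined load.
print(hits_per_server)
```

The calculation validates only the per-server arithmetic, not the system; any component shared by all ten servers is never exercised at its real load.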

The result is that load testing often becomes synonymous with "loading up the system and seeing what breaks." This illogical, and frankly inconclusive, approach to load testing does nothing but mask real potential problems. Moreover, such tests are very difficult to quantify, and they often lead to inaccurate or misleading conclusions.

I want to emphasize a few basic points to keep in mind when designing load tests. These points don't cover how to execute or run load tests, but they should be considered before the tests are actually run. They may give you a way of designing the plan by which you will conduct load testing.

The first step in constructing a logical stress test plan is to pick a goal that you want to validate. You must have a specific hypothesis in mind that you are testing with your load tests. Some examples of bad test goals are

  • to make sure our system handles heavy load
  • to see what the breaking point of our system is
  • to find the bottleneck in our system

Examples of good, accurate test goals are

  • to verify that the database is not the bottleneck of system capacity
  • to explore what happens when 10,000 people try to navigate our Web site's most popular path simultaneously
  • to verify our Web server load balancer is appropriately routing connections
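The second goal above can be phrased directly as a test harness. The following is an illustrative sketch, not a real harness: the path, the user count, and the `fetch` callable are all assumptions standing in for whatever tooling you actually use.

```python
import concurrent.futures

# Hypothetical "most popular path" through the site.
POPULAR_PATH = ["/", "/products", "/products/42", "/cart"]

def navigate(fetch):
    """One simulated user walking the most popular path in order."""
    return [fetch(page) for page in POPULAR_PATH]

def simulate_users(fetch, users):
    """Run `users` simulated visitors concurrently and collect results.

    `fetch` is a placeholder for a real HTTP call; injecting it keeps
    the harness testable without a live site.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=min(users, 100)) as pool:
        futures = [pool.submit(navigate, fetch) for _ in range(users)]
        return [f.result() for f in futures]
```

Because the goal names a concrete number (10,000 simultaneous users) and a concrete path, the result of such a run is directly comparable across test iterations, which is exactly what the vague goals on the first list cannot offer.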

Second, it is vital to pick distinct test parameters and validation points. We all learned in science classes that when conducting an experiment, one must pick control parameters and a test parameter. Load testing is very much like conducting an experiment, in that you may not really know what the system is going to do when you apply your test coverage. Thus, you need to be able to identify distinct parameters that you will hold constant, and preferably a single test parameter to which your load will apply.
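The control/test-parameter discipline described above can be made concrete in a test plan structure. This is a hypothetical sketch: the parameter names, values, and the injected `run_one` callable are assumptions, not a prescribed format.

```python
# Hypothetical load-test plan: every control parameter is held constant,
# and exactly one test parameter is varied across runs.
control_parameters = {
    "hardware": "single web server, 4 CPUs",
    "scenario": "home page request",
    "think_time_s": 0,
}

test_parameter = "concurrent_users"
test_values = [100, 500, 1000, 5000, 10000]

def run_plan(run_one):
    """Execute one load test per value of the single test parameter.

    `run_one` is a placeholder for the actual test execution; it receives
    the full configuration (controls plus the varied parameter) and
    returns whatever measurement the test produces.
    """
    return {
        value: run_one({**control_parameters, test_parameter: value})
        for value in test_values
    }
```

Keeping a single varied parameter means any change in the measurements can be attributed to that parameter, just as in a controlled experiment.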

Once you have identified what you are going to test (i.e., your test parameters), it's time to decide how to test: not what tool to use or what hardware will be required, but by what methodology you are going to conduct the exact load test. Before you can assign numbers and values to the parameters you identified previously, however, you need to establish the assumptions from which you are operating, and from which you will make your calculations of the test and control parameters. For example, if our test goal is "to verify our Web server load balancer is appropriately routing connections," our assumption is "a connection simply represents a literal TCP/IP connection request to the load balancer that is sustained over time." Obviously,
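Under that assumption, "a connection" can be sketched quite literally. This is illustrative only: the host and port are placeholders, and a real routing test would additionally need some way to observe which back-end server each connection actually reached.

```python
import socket

def open_sustained_connections(host, port, count):
    """Open `count` TCP connections to the load balancer and keep them open.

    Each connection is a literal TCP/IP connection request, matching the
    stated assumption; the sockets are returned still open so the load
    is sustained over time until the caller closes them.
    """
    connections = []
    for _ in range(count):
        conn = socket.create_connection((host, port), timeout=5)
        connections.append(conn)
    return connections
```

Pinning down the assumption first matters: if "connection" instead meant a full HTTP request/response cycle, the test design, the tooling, and the numbers would all be different.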

About the author

Andrew Lance

Andrew Lance (andrew@centerspan.com) is a senior quality assurance engineer, technical lead for CenterSpan Communications, a company developing cutting-edge content delivery solutions based in Hillsboro, Oregon. Andrew has worked with test automation technologies for more than five years and has participated in every major phase of automated testing, from design and implementation to maintenance and support.
