Understanding Software Performance Testing, Part 3

Summary:

Most people don't fully understand the complexities and scope of a software performance test. Too often, performance testing is assessed in the same manner as functional testing and, as a result, fails miserably. In this four-part series we will examine what it takes to properly plan, analyze, design, and implement a basic performance test. This is not a discussion of advanced performance techniques or analytical methods; these are the basic problems that must be addressed in software performance testing.

This is the third part of a four-part series addressing software performance testing.

The general outline of topics is as follows:

Part 1
- The role of the tester in the process
- Understanding performance and business goals
- How software performance testing fits into the development process
Part 2
- The system architecture and infrastructure
- Selecting the types of tests to run
Part 3
- Understanding load (operational profiles, or OPs)
- Quantifying and defining measurements
Part 4
- Understanding tools
- Executing the tests and reporting results

Understanding Load
Performance testing refers to the behavior of the system under some form of load, so we need to examine exactly what we mean when we use the term "load." In the simplest view, load can be expressed as a formula: Load = Volume/Time. The problem is, what does the volume comprise, and over what time period? The answer is captured in a simple concept known as an "operational profile" (OP).
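As a simple illustration, the same total volume can produce very different loads depending on the interval you measure over. Here is a minimal sketch in Python; the transaction counts and window lengths are hypothetical:

    # Minimal sketch: Load = Volume / Time over different windows.
    # All transaction counts and window lengths are hypothetical.

    def load(volume, minutes):
        """Transactions per minute for a given volume and window."""
        return volume / minutes

    # The same application, measured two ways:
    print(load(9600, 8 * 60))  # 9,600 tx over an 8-hour day -> 20.0 tx/min
    print(load(2400, 60))      # 2,400 tx in the peak hour   -> 40.0 tx/min

An average computed over the whole day (20 transactions per minute) badly understates the peak-hour load (40 transactions per minute), which is why the choice of interval matters.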

Load is always relative to a specified time interval. The volume and mix of activities can change dramatically depending on when you look. To assess load (volume) properly, we must first define which time interval matters.

First, we need to determine the time period we are interested in, sample it, and analyze the content of those samples. People, devices, etc., tend to work in patterns of activities. Those patterns have a fixed or limited duration. Each sequence of events forms an activity pattern that typically completes some piece of work.

To analyze behavior, we typically look at short time periods: one day, one hour, fifteen minutes, and so on. I have found that most people work in an environment that is in constant flux (chaos, if you prefer). Embedded application testers have a slight advantage here: devices, mechanical or electronic, are not as easily distracted or interrupted as humans.

The selection of the time period depends to some degree on the type of application being tested. I use an hour as a base and then make adjustments as needed. If a typical session for a user or event in our sample is fifteen minutes, then each one-hour sample contains four session samples. We would use this information to create the necessary behavior patterns in our load-generation tool, as shown in the sketch below. For a mainframe or client/server application, this would have to be adjusted to a general activity rate (per user, per hour), as those devices typically connect to the application (network) and remain connected even when idle. The same problem occurs with certain types of communications devices, such as BlackBerry handhelds, which are always on but may be doing no activity other than polling.
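To make this concrete, here is a minimal sketch of how sampled session data might be turned into an hourly operational profile for a load-generation tool. The activity names, mix percentages, and user count are hypothetical, not measured data:

    # Minimal sketch: turning sampled sessions into an hourly
    # operational profile. The activity mix below is hypothetical.

    SESSION_MINUTES = 15                        # typical session from our samples
    SESSIONS_PER_HOUR = 60 // SESSION_MINUTES   # four sessions per sampled hour

    # Fraction of sessions performing each activity (from sample analysis)
    activity_mix = {"search": 0.50, "create_order": 0.30, "update_account": 0.20}

    def hourly_profile(users):
        """Expected activity counts per hour for a given user population."""
        total_sessions = users * SESSIONS_PER_HOUR
        return {name: round(total_sessions * pct)
                for name, pct in activity_mix.items()}

    print(hourly_profile(100))
    # {'search': 200, 'create_order': 120, 'update_account': 80}

Each activity count then becomes a rate for the corresponding script in the load-generation tool.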

Most elements we want to measure vary based on known and unknown factors. This means we cannot sample or measure something once; we must sample it several times in order to compute a meaningful set of values.

How many samples or measures do we need? Table 1 shows a rough rule of thumb for the relationship between sample size and the accuracy of the resulting estimates. You will notice this is very similar to what you see in commercial polls (sample more than 1,000 people and the margin of error is about +/- 3 percent).

Sample Size    Margin of Error (roughly)
     30                 50%
    100                 25%
    200                 10%
  1,000                  3%

Table 1

These numbers represent the number of times you have to sample something to gain the appropriate level of confidence in the data (margin of error). If you sample the specified time period thirty times, you should expect a margin of error of roughly 50 percent; driving the margin down to about 3 percent takes on the order of 1,000 samples.
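Once the samples are collected, standard statistics give the margin of error for a measured value. Here is a minimal sketch; the response-time samples are hypothetical, and the 1.96 multiplier is the conventional 95 percent confidence factor:

    # Minimal sketch: mean response time and its margin of error
    # from repeated samples. The sample values are hypothetical.

    import math
    import statistics

    samples = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 2.2]  # seconds

    mean = statistics.mean(samples)
    stderr = statistics.stdev(samples) / math.sqrt(len(samples))
    margin = 1.96 * stderr  # 95 percent confidence

    print(f"mean {mean:.2f}s, margin of error +/- {margin:.2f}s")
    # mean 2.10s, margin of error +/- 0.11s

The more samples you take, the smaller the margin of error becomes, which is exactly the relationship Table 1 summarizes.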


About the author

Dale Perry

With more than thirty years of experience in information technology, Dale Perry has been a programmer/analyst, database administrator, project manager, development manager, tester, and test manager. Dale's project experience includes large-systems development and conversions, distributed systems, and online applications, both client/server and Web based. He has been a professional instructor for more than fifteen years and has presented at numerous industry conferences on development and testing. With Software Quality Engineering for eleven years, Dale has specialized in training and consulting on testing, inspections and reviews, and other testing and quality-related topics.
