Most people don't fully understand the complexities and scope of a software performance test. Too often, performance testing is assessed in the same manner as functional testing and, as a result, fails miserably. In the final installment of this four-part series, Dale examines what it takes to properly plan, analyze, design, and implement a basic performance test. This is not a discussion of advanced performance techniques or analytical methods; rather, it covers the basic problems that must be addressed in any software performance test.
This is the final part of a four-part series addressing software performance testing. The general outline of topics is as follows:
- Part 1
  - The role of the tester in the process
  - Understanding performance and business goals
  - How software performance testing fits into the development process
- Part 2
  - The system architecture and infrastructure
  - Selecting the types of tests to run
- Part 3
  - Understanding load (operational profiles, or OPs)
  - Quantifying and defining measurements
- Part 4
  - Understanding tools
  - Executing the tests and reporting results
As previously noted, tools provide answers to questions. The first parts of this paper addressed the problem of defining the questions to be asked of the tools and examined those elements of a performance test that need to be in place before you should even be concerned with tools. When we talk about performance tools, we are talking about three specific categories: load generators, end-to-end response time tools and monitors (probes).
A successful performance test requires all three types of tools, properly configured and working together.
Load-generation tools cannot measure true end-to-end timing of a user experience, as they use virtual users or clients and do not include the workstation overhead in their measurements. If the user experience is critical (heavy graphics usage, a large amount of business logic on the client, i.e., a fat versus a thin client), then an end-to-end tool is essential.
Monitors (probes) are essential to solving any performance problems identified during testing. Performance refers to how something behaves (performs) under a defined set of circumstances. When running a test, measurements must be taken in many areas: memory, CPU, threads, network statistics, database measures, internal application measures, etc. Depending on the type of application under test, these measurements may have to be taken from several platforms or servers.
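The core of any monitor is the same regardless of what it measures: sample a metric at a fixed interval while the test runs, and keep the readings for later analysis. The sketch below (Python; the `Probe` class and its metric are hypothetical, not any particular commercial monitor) illustrates that sampling loop. A real probe would read platform counters such as CPU utilization, memory, or database statistics rather than the stand-in metric used here.

```python
import threading
import time

class Probe:
    """A minimal monitor: samples a metric function at a fixed
    interval on a background thread and records the readings."""

    def __init__(self, name, read_metric, interval=0.05):
        self.name = name
        self.read_metric = read_metric   # callable returning a number
        self.interval = interval
        self.samples = []                # list of (timestamp, value)
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.samples.append((time.time(), self.read_metric()))
            self._stop.wait(self.interval)   # sleep, but wake early on stop()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

if __name__ == "__main__":
    # Stand-in metric: the active thread count of this process.
    probe = Probe("threads", lambda: threading.active_count())
    probe.start()
    time.sleep(0.3)          # the performance test would run here
    probe.stop()
    print(f"{probe.name}: {len(probe.samples)} samples, "
          f"peak={max(v for _, v in probe.samples)}")
```

In practice you would run one such probe per measurement area (memory, CPU, network, database) and correlate their timestamps with the load generator's log.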
Getting the right set of tools is essential to a successful test. The combined tool set provides the tester with the answers to those questions.
When looking at load generation, there are multiple possibilities:
- Generate the load manually employing actual users
- Build your own tool
- Purchase or lease a load tool
- Use a third-party application service provider (ASP)
Manual generation of load is probably not going to work very well. Unlike tools, real people get bored, tired, and distracted from the task at hand. Although some types of tests, such as a sudden ramp-up of users, are possible using this method, the results are generally not very reliable. Employing actual users also presents a scaling problem: there are only so many people and so many workstations available, so tests that stress the system beyond normal limits may be impossible. It can also be very expensive to bring in a sufficient number of people, usually on a weekend, to generate the necessary volume of activity for the test.
Building your own tool is sometimes the only possible choice. The majority of commercial tool vendors focus their tools on specific areas of the market, preferably an area with many possible clients. As such, there may not be any commercial tools available for the architecture or infrastructure on which you are testing.
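A homegrown load generator can start very small. The sketch below (Python; all names are hypothetical, and the transaction is a stand-in rather than a real protocol driver) shows the essential idea: a handful of threaded virtual users each execute the operation under test repeatedly while per-request response times are recorded. A production-quality tool would add ramp-up control, think times, parameterized data, and support for the system's actual protocol.

```python
import statistics
import threading
import time

def generate_load(target, virtual_users=5, requests_per_user=20):
    """A bare-bones load generator: each virtual user is a thread
    that calls `target` repeatedly and records its response times."""
    timings = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            target()                       # the operation under test
            elapsed = time.perf_counter() - start
            with lock:
                timings.append(elapsed)

    users = [threading.Thread(target=virtual_user)
             for _ in range(virtual_users)]
    for u in users:
        u.start()
    for u in users:
        u.join()
    return timings

if __name__ == "__main__":
    # Stand-in for a real transaction (an HTTP request, a DB query, ...).
    def fake_transaction():
        time.sleep(0.001)

    t = generate_load(fake_transaction)
    print(f"{len(t)} requests, mean={statistics.mean(t) * 1000:.2f} ms, "
          f"max={max(t) * 1000:.2f} ms")
```

Note that, as discussed above, timings captured this way reflect only the virtual user's view of the transaction, not the true end-to-end user experience.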
There are advantages and disadvantages to building your own performance tools, as shown in table 1.
Table 1: Advantages and disadvantages of building your own performance tools

| Disadvantages | Advantages |
| --- | --- |
| It takes time to engineer and create the tool | The tool will contain the features you require |
| The tool will require initial testing to ensure it functions as expected | The tool can be modified to provide all necessary trace and internal measure information as the architecture changes |
| Once built, the tool has to be maintained internally. All new functions have to be created | |