An effective test program, incorporating the automation of Internet, intranet, and component testing, involves a full project life cycle dedicated to the effort. Automated testing with pre-developed tools amounts to a development effort in its own right, involving strategy and goal planning, test requirement definition, analysis, design, development, execution, and evaluation activities.
The use of automated test tools for performance testing and analysis of Internet, intranet, and component software has proven beneficial in terms of product quality and of minimizing project schedule and effort. Achieving these benefits, however, requires that the application development team and the testing team establish a rapport early in the application's development life cycle, not the day before testing is to begin. Early involvement allows a greater understanding of the application's functionality and of the needs of the customer (i.e., the application development team), which pays off in an appropriate test environment and a more thorough test design.
To keep the quality of its products and services competitive, Performance Testing focuses on increasing first-pass test success and reducing downtime across all of its operations, rather than on typical cost-of-quality analysis.
Software testing is a method of detecting problems before an application is promoted into production. The testing performed on Internet and intranet applications for NIS, and the component testing that supports this development, is not intended as unit testing. We therefore anticipate that our business partners and customers will deliver applications to Performance Testing in a condition ready for benchmark, load, and stress testing, so that we can verify that the application or new release is ready for promotion and activation.
Load testing requires tight focus. A comprehensive test program consists of knowing what might be tested, what should be tested, and what can be tested. In an Internet or intranet application, what might be tested is every untested area that falls within the purview of the test organization; for these applications, that is any area identified by the business in the design documentation or in the Requirements Tree created for the application or project. What should be tested are those untested areas that directly affect the customer's experience of quality. What can be tested are those untested, critical areas that fit within the resources of the group.
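The might/should/can narrowing above can be sketched as a simple prioritization exercise. All area names, effort figures, and the budget below are hypothetical, invented purely for illustration:

```python
# Hedged sketch of narrowing test scope from "might" to "should" to "can".
# Area names, costs, and the budget are illustrative assumptions, not taken
# from any real project documentation.
might_be_tested = {"login", "quote", "bind", "reports", "admin"}

# "Should": untested areas that directly affect the customer's experience.
affects_customer_quality = {"login", "quote", "bind", "reports"}

# Estimated effort per area, in tester-days (assumed figures).
cost_in_days = {"login": 2, "quote": 5, "bind": 5, "reports": 8, "admin": 3}
budget_days = 12  # resources available to the group

should_be_tested = might_be_tested & affects_customer_quality

# "Can": greedily fit the cheapest customer-facing areas into the budget.
can_be_tested = []
remaining = budget_days
for area in sorted(should_be_tested, key=lambda a: (cost_in_days[a], a)):
    if cost_in_days[area] <= remaining:
        can_be_tested.append(area)
        remaining -= cost_in_days[area]

print(can_be_tested)  # the areas selected within the available resources
```

The point of the sketch is only that each narrowing step drops candidates for a different reason: scope, customer impact, then resources.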
Testing involves operating a system, application, or project under controlled conditions and evaluating the results (e.g., "if the user is in interface A of the application while using hardware B, and does C, then D should happen"). The controlled conditions should include both normal and abnormal conditions. Testing should be an intentional attempt to cause errors, to determine whether things happen when they should not or fail to happen when they should. Testing is oriented toward detection. The system, application, or project under test should be exercised to provide responses in three areas:
- Performance,
- Functionality, and
- Production Reliability
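The "does C, then D should happen" pattern above can be sketched as a small detection-oriented check that exercises both normal and abnormal conditions. `parse_policy_number` is a hypothetical function invented for illustration, not part of any real application under test:

```python
# Minimal sketch of detection-oriented testing: drive the code under both
# normal and abnormal conditions and assert the expected outcome for each.
# parse_policy_number is a hypothetical function used only for illustration.

def parse_policy_number(raw: str) -> int:
    """Return the numeric policy ID, rejecting malformed input."""
    text = raw.strip()
    if not text.isdigit():
        raise ValueError(f"malformed policy number: {raw!r}")
    return int(text)

def run_checks() -> None:
    # Normal condition: well-formed input should parse ("D should happen").
    assert parse_policy_number(" 12345 ") == 12345

    # Abnormal condition: malformed input must be rejected, not silently
    # accepted -- an intentional attempt to cause an error.
    try:
        parse_policy_number("12-345")
    except ValueError:
        pass
    else:
        raise AssertionError("malformed input was accepted")

if __name__ == "__main__":
    run_checks()
    print("all checks passed")
```

Note that the abnormal-condition branch fails the test if the error does *not* occur, which is exactly the "things happen when they should not" orientation described above.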
Because the application is designed, developed, and installed by an application team not associated with Performance Testing, Performance Testing should be prepared to test the application as a whole (integration testing), with the application in place and ready for testing, to verify and validate that the application will perform, function, and provide reliability as designed.
In each test case, risks have been identified and incorporated into the test design. In the areas described above, the risks are:
- Access methodology
- Competitive/Industry Baseline
- Load Balancing
- Existing Middleware
- Database Metrics
- Initial Operability
- Browser UI Functionality
- End-to-End Functionality
- Cross Browser compatibility
- DCOM Technology
- JVM Compatibility
- Integrity Under Load
- Performance Predictions
- Server Utilization
- Availability across Geographies
- Application-wide Visibility
Test Plan Objectives
The objectives of Performance Testing are to:
- Ensure the quality of Internet, Intranet, and Components submitted by Nationwide Insurance Application Development Teams for testing;
- Validate the benchmark test times for the software submitted;
- Validate that the software submitted for testing will operate under determined loads using data and information provided by the Application Teams;
- Validate that the software submitted for testing is tested in an environment that mirrors the production environment;
- Validate that the software submitted for testing will scale as determined by business application documents and business needs.
To assist the testing team and to ensure that all criteria for the test are complete, a Pre-Test Checklist has been developed. This checklist should be considered a work in progress rather than a static document.
Conduct of Tests
Web application testing should never be performed on the production version of the application. High test loads might cause the production system to fail, and fictional transactions performed during a load test might result in incorrect orders being booked and shipped (e.g., quote and bind). It is therefore imperative that a separate test environment be established. How closely this environment mirrors production depends on the resources of the business.
In the next article, I will present the four types of performance tests that should be considered by Application/Development teams.