Everyone agrees that application performance matters. The more difficult issue is defining application performance. Is it how fast the servers respond? The time from user click to server response? How well the system handles five hundred or ten thousand users? Depending on the application, the answer can be any or all of those. What matters most, therefore, is the ability to test applications against whichever of those definitions of performance apply to them.
Understanding whether performance matches needs requires testing. While many areas help define testing parameters, three overarching testing concepts must be addressed in order to provide appropriate performance for modern applications: your users, your data, and your environment.
What Are Your Users Doing?
While almost everyone recognizes that it’s important to understand your users, that knowledge is not often formalized. When it is, the formalization often happens within the testing groups, almost completely separately from development and operations. Performance testers have a unique opportunity to work with development and operations teams to fully understand how the application is used in production.
Therefore, it’s important to conduct a production traffic analysis early in order to build use cases and performance simulations that model how people actually do the work the application will assist or replace. The analysis can be completed using logs, custom metrics, or monitoring tools and should involve the operations team, as they know the real-world issues that need to be included in testing. Performance test models should be assessed annually, at a minimum, and compared to production so that performance tests grow and mature along with the application. This activity opens the lines of communication among all teams involved in the software development lifecycle and helps create a feedback loop, which is the cornerstone of the DevOps movement.
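As a minimal sketch of such a traffic analysis (assuming Apache-style access logs; the endpoint paths and log lines shown are hypothetical), relative transaction frequencies can be pulled from the logs and used to weight a performance-test workload model:

```python
import re
from collections import Counter

# Matches the request portion of a common/combined-format access log line.
REQUEST_RE = re.compile(r'"(?:GET|POST|PUT|DELETE) (\S+) HTTP')

def workload_model(log_lines):
    """Count requests per endpoint and convert the counts into the
    relative weights a load-test script would assign each transaction."""
    counts = Counter()
    for line in log_lines:
        match = REQUEST_RE.search(line)
        if match:
            # Strip query strings so /search?q=a and /search?q=b group together.
            counts[match.group(1).split("?")[0]] += 1
    total = sum(counts.values())
    return {path: count / total for path, count in counts.items()}

sample_log = [
    '10.0.0.1 - - [01/Jan/2024:10:00:00] "GET /search?q=widgets HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024:10:00:01] "GET /search?q=gears HTTP/1.1" 200 498',
    '10.0.0.3 - - [01/Jan/2024:10:00:02] "POST /checkout HTTP/1.1" 200 120',
    '10.0.0.1 - - [01/Jan/2024:10:00:03] "GET /search?q=bolts HTTP/1.1" 200 644',
]

model = workload_model(sample_log)
print(model)  # {'/search': 0.75, '/checkout': 0.25}
```

The resulting weights (here, three searches for every checkout) give the test team a production-derived ratio to replicate, rather than a guess about how users divide their time.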
The goal is to ensure a clear definition of what performance metrics matter to the business and to guarantee that your performance metrics are aligned with ever-changing real-world needs.
Leveraging Test Data Management
Data has always been important, but the growing complexity and volume of data in modern applications means that test data management is gaining focus. Formalizing what data is available as well as how it is accessed from sources and delivered to customers is critical to providing applications that meet performance objectives.
In an attempt to keep on schedule and minimize the impact on production, test data is often generated behind the scenes. That can result in clean, generic data: “unimportant” fields left empty, little variation, too few outliers. Other times, data is generated to exercise certain validation routines in unit testing and then left in the performance-testing data sets, providing nonrepresentative data and introducing a large risk to the validity of any testing completed. It is critical to get full data sets from operational systems so that system testing replicates, as closely as possible, the real data the system will manage.
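When generated data cannot be avoided, it should at least carry production-like messiness. The sketch below illustrates the idea with a hypothetical customer-record schema (the field names, distributions, and percentages are assumptions, not a real system's profile):

```python
import random

def generate_customer(rng):
    """Generate one test customer record with realistic messiness:
    occasional empty fields, name edge cases, and outlier order totals,
    rather than uniformly clean, generic data."""
    return {
        # Names include accents, apostrophes, and embedded commas on purpose.
        "name": rng.choice(["Ann Lee", "José Núñez", "Ng Wei", "O'Brian, Pat"]),
        # Roughly 10% of records leave this "unimportant" field empty,
        # because production data does too.
        "middle_name": "" if rng.random() < 0.10 else rng.choice(["A.", "Marie"]),
        # Mostly typical order totals, with a heavy tail of large outliers.
        "order_total": round(rng.lognormvariate(3.5, 1.2), 2),
    }

rng = random.Random(7)  # fixed seed so the example is reproducible
data_set = [generate_customer(rng) for _ in range(1000)]
empties = sum(1 for r in data_set if r["middle_name"] == "")
outliers = sum(1 for r in data_set if r["order_total"] > 200)
print(f"{empties} blank middle names, {outliers} outlier orders")
```

The point is not the specific distributions but the discipline: deliberately inject the blanks, edge-case strings, and outliers that production data is known to contain, ideally with rates taken from profiling the operational system.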
The size of the data sets is also critical. The volume of data in use by applications is increasing rapidly, and performance doesn’t change in a linear fashion. It’s important to make as few assumptions as possible about scaling to larger data loads, and to test with a data set that is as large and realistic as possible. Even with larger test sets, the differences between test and production data volumes and variability are such that you must always be prepared to continue testing in production to understand true application performance.
Servers, networks, and clients are all important to the success of an application, but it’s the data that is the reason for the application. This is why all teams involved in the software development lifecycle need to be concerned about test data, from the inception of a product through go-live. Each project and its requirements will dictate which solution is the best fit, whether that means using database copies, mirrors from production, a test data management tool, or actual in-production testing.
How Test and Production Environments Compare
IT executives often make test assumptions, such as, “Our test environment is half the size of production, so we’ll run half the load.” In past years that has been good enough, but the growth of data sizes and increases in system complexity and analytic capabilities mean that such simple math is no longer accurate. Twice the data can require far more than twice the server load to process complex analytics. Doubling the number of client connections can lock certain areas of data for more than twice the time, and others for far less, and those effects don’t average out.
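A small experiment shows why the simple arithmetic fails. The pairwise loop below is a stand-in for any super-linear operation such as an unindexed join or an all-pairs analytic (an illustrative assumption, not a measurement of any real system): doubling the input roughly quadruples the runtime, so extrapolating linearly from a half-size test bed systematically understates production cost.

```python
import time

def pairwise_work(n):
    """A stand-in for a super-linear operation: O(n^2) in the data size,
    like an unindexed join or an all-pairs comparison."""
    data = list(range(n))
    total = 0
    for x in data:
        for y in data:
            total += x ^ y
    return total

def measure(n):
    start = time.perf_counter()
    pairwise_work(n)
    return time.perf_counter() - start

measure(300)  # warm-up run so timing noise doesn't skew the comparison
small, large = measure(1500), measure(3000)
print(f"2x data -> {large / small:.1f}x time")  # roughly 4x, not 2x
```

The same reasoning applies in reverse: observing that the half-size environment handles half the load comfortably says little about how production will behave at full volume.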
Unless a business can afford a test environment identical to production (including load balancers, content delivery networks, firewalls, etc.), the development, QA, or staging test beds will not surface every issue. That’s why testing must always continue in production: analyzing performance and system changes is a necessary part of operating it. A robust test environment can minimize how much production testing is required, but it can never eliminate it.
Another issue just beginning to be addressed in testing is the growth of mobile applications. Mobile users are everywhere, and someone will try to access your application through a mobile device whether you plan for it or not. Adding a thousand mobile users has a different impact on application performance than adding a thousand wired users. An application can look fine over a LAN yet perform unacceptably when its traffic passes through a busy mobile tower. Performance degradation from mobile clients doesn’t only affect the client side; slower response times can also keep data and server processes locked longer than they would be over a hardwired network connection.
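The server-side cost of slower clients can be estimated with Little’s law: the average number of requests in flight equals arrival rate times the time each request spends in the system. The rates and latencies below are illustrative assumptions, not measurements; the point is that the same request rate holds far more requests, and whatever they have locked, open at once when it arrives over a slow mobile path.

```python
def concurrent_requests(rate_per_s, service_ms, network_rtt_ms):
    """Little's law: average requests in flight = arrival rate x time in
    system, where time in system = server work + network transfer."""
    return rate_per_s * (service_ms + network_rtt_ms) / 1000

RATE = 1000        # requests per second (assumed identical for both paths)
SERVICE_MS = 50    # assumed server-side work per request, in milliseconds

lan_in_flight = concurrent_requests(RATE, SERVICE_MS, 2)      # ~2 ms LAN RTT
mobile_in_flight = concurrent_requests(RATE, SERVICE_MS, 300) # ~300 ms cell RTT

print(lan_in_flight, mobile_in_flight)  # 52.0 vs 350.0 requests in flight
```

With these assumed numbers, the mobile path keeps roughly seven times as many requests open concurrently at the same throughput, which is why load tests that only model wired clients can miss lock contention entirely.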
As test beds become more like production in configuration and data, they become more valuable to the organizations they support, creating provable value and delivering reliable test results.
Performance Testing: The Big Picture
Testing has historically been the stepchild of the development process. But changes in data volumes, in how we manage both server and client technology, and in how users access applications are increasing the complexity of systems, and many companies now realize that performance testing must take a more prominent place in the application development cycle.
To improve the quality and performance of your software applications, focus on three high-level concepts:
- Know your users: Performance testing only works if you know not only what your users want, but also how they do their work.
- Leverage test data management: The growth in both complexity and volume of data means that a clear focus needs to be on providing the right data for testing.
- Concentrate on test and production: As much as possible, mirror production in test, but know there’s no clean formula and be prepared to test production environments.
With these concepts in mind, testing can be improved in order to provide applications that meet the needs of modern business users.