When you do the same thing many times, you can start to make false assumptions about your work process—and testing is no exception. Sofía Palamarchuk discusses some common fallacies about performance tests specifically, and how they can end up costing testers and developers significantly more than they should.
It’s always interesting to find out the many ways in which we can be wrong. In his book Perfect Software and Other Illusions about Testing, Jerry Weinberg explained a number of fallacies regarding testing in general. Here, I’m going to discuss some that relate specifically to performance tests—and how they can end up costing testers and developers significantly more money down the line.
1. The Planning Fallacy
We often think that performance tests take place only at the end of a development project, just before rollout, in case we need some fine-tuning to make sure everything goes smoothly. Seen that way, performance testing becomes a solution to performance problems, when in fact it is about detecting and anticipating problems so we can start working on their solutions. The trouble is that if we leave performance testing until the end of the project, the serious problems we find there come with much higher costs.
So, to keep costs low, it’s best to consider performance from the early stages of development. We should carry out intermediate tests throughout the development lifecycle in order to detect significant problems early, before they spiral out of control.
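As a rough illustration, something as lightweight as the following Java sketch could run against every intermediate build and flag a regression long before rollout. The endpoint URL and the 500 ms budget are hypothetical placeholders, not values from any particular project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Lightweight performance "smoke check" meant to run with intermediate builds.
// The endpoint and the 500 ms budget below are placeholders; use your own
// service URL and the response-time target agreed upon for your project.
public class PerformanceSmokeCheck {

    private static final String ENDPOINT = "http://localhost:8080/api/health"; // hypothetical endpoint
    private static final long MAX_ACCEPTABLE_MILLIS = 500;                     // hypothetical budget

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT)).GET().build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("Status %d in %d ms%n", response.statusCode(), elapsedMillis);

        // Failing the build here turns a creeping regression into a visible
        // problem while it is still cheap to fix.
        if (elapsedMillis > MAX_ACCEPTABLE_MILLIS) {
            throw new IllegalStateException("Response time exceeded " + MAX_ACCEPTABLE_MILLIS + " ms");
        }
    }
}
```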
2. The “Just Add More Hardware” Fallacy
It’s typical to hear that performance testing is not necessary because any problems detected may be solved by simply adding more hardware, such as additional servers, memory, etc. This assumption is quite mistaken. Consider the case of a memory leak. If we add more memory, we might keep the server active for five hours instead of three, but we won’t be solving the problem. It also doesn’t make any sense to increase infrastructure costs when we can be more effective with what we already have and reduce fixed costs in the long run.
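To picture why more memory only postpones the failure, consider a deliberately simplified Java sketch of a leak; the class and the per-request handler are hypothetical, not taken from any real system.

```java
import java.util.ArrayList;
import java.util.List;

// A cache that only ever grows: the kind of defect more hardware cannot fix.
public class LeakyRequestCache {

    // Entries are added on every request but never evicted.
    private static final List<byte[]> CACHE = new ArrayList<>();

    // Hypothetical handler invoked once per incoming request.
    public static void handleRequest(int requestId) {
        CACHE.add(new byte[1024 * 1024]); // retains roughly 1 MB per request, forever
    }

    public static void main(String[] args) {
        // Starting the JVM with a larger -Xmx only lets this loop run longer
        // before the OutOfMemoryError; the underlying leak is untouched.
        for (int i = 0; ; i++) {
            handleRequest(i);
        }
    }
}
```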
In short, adding more hardware is not a good substitute for performance testing. Instead, we should be finding the root of the problem and applying a real solution instead of a patch.
3. The Testing Environment Fallacy
There is another hardware fallacy: the assumption that we can run tests in an environment that does not resemble the actual production environment. For example, we test for a client on Windows and assume the application will perform just as well for another client who will deploy the system on Linux. We must test in an environment as similar to production as possible, because many environmental factors affect a system’s performance: the hardware components, the operating system settings, and the other applications running at the same time.
Even the database is an important part of the performance testing environment. Some think performance tests can be carried out against a small test database, but then problems with SQL queries go unnoticed: a query that responds instantly against a few hundred test records can perform terribly once the real database holds thousands of records, and those unoptimized response times will surely cause tremendous issues.
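One way to keep such problems visible, sketched below in plain JDBC, is to seed the test database with a production-like volume of rows before timing the queries the application actually runs. The connection string, credentials, table, and row count are hypothetical placeholders, and the sketch assumes the schema already exists and the matching JDBC driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Seed the test database to a realistic size, then measure a real query,
// so that slow SQL cannot hide behind a tiny data set.
public class QueryTimingCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point these at your own test database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/perftest", "tester", "secret")) {

            // Load a production-like number of rows into a hypothetical table.
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO orders (customer_id, total) VALUES (?, ?)")) {
                for (int i = 0; i < 500_000; i++) {
                    insert.setInt(1, i % 10_000);
                    insert.setDouble(2, Math.random() * 100);
                    insert.addBatch();
                    if (i % 10_000 == 0) {
                        insert.executeBatch(); // flush in chunks to keep memory bounded
                    }
                }
                insert.executeBatch();
            }

            // Time the same query the application runs in production.
            long start = System.nanoTime();
            try (PreparedStatement query = conn.prepareStatement(
                    "SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?")) {
                query.setInt(1, 42);
                try (ResultSet rs = query.executeQuery()) {
                    rs.next();
                }
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Query took " + elapsedMillis + " ms against 500,000 rows");
        }
    }
}
```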
4. The Comparison Fallacy
It’s one thing to assume you can use a performance testing environment that does not resemble production, but it’s another to draw conclusions about one environment based on results from another. We should never extrapolate results. For instance, you cannot duplicate the servers and expect to duplicate the speed, nor simply increase memory and expect to support proportionally more users. In general, many elements affect overall performance. A chain breaks at its weakest link: if we strengthen two or three links, the rest remain just as fragile. In other words, if we fix some of the elements that limit a system’s performance, the bottleneck simply moves to another element. The only way to make sure everything is functioning as it should is to keep on testing performance.
Extrapolating in the other direction is not valid, either. Imagine a client whose system runs perfectly with a thousand users on an AS/400. From that, we cannot deduce the minimum hardware needed to support ten users; we must verify it through testing.
5. The Thorough-Testing Fallacy
Thinking that one performance test will prevent all problems is, in itself, a problem. When performance testing, we must aim to detect the riskiest problems, the ones that would have the greatest negative impact, while taking time and resource restrictions into account. It’s practical to limit the number of test cases (usually to no more than fifteen) because it is very costly to carry out a performance test that includes every functionality, alternative flow, data set, and so on. This means there will always be situations that go untested and may produce problems, such as blocking in the database or response times that are longer than acceptable.
The most important thing is to cover the cases that are both most likely and most risky. Every time a problem is detected, we should try to apply its solution to every part of the system where it could have an impact. For example, if we detect that database connections are managed inappropriately in the functionalities being tested, then once a solution is found, it should be applied at every point where connections are involved. Solutions are often global, such as the configuration of a connection pool’s size or the memory assigned to the Java virtual machine.
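As one possible illustration of such a global fix, a single shared connection pool (sketched here with HikariCP, one widely used pool library, and placeholder values) replaces per-feature connection handling, and the JVM heap is tuned once for the whole application.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// One pool configuration shared by the whole application: fixing it here
// fixes it everywhere connections are used.
public class SharedConnectionPool {

    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb"); // hypothetical URL
        config.setUsername("app");                                   // hypothetical credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(20);      // sized from load-test results, not guesswork
        config.setConnectionTimeout(3_000); // fail fast rather than queue forever
        return new HikariDataSource(config);
    }

    // The JVM's memory is adjusted the same way, once and globally,
    // for example by starting the application with: java -Xms2g -Xmx2g -jar app.jar
}
```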
Another valid and reassuring approach to performance testing is monitoring the system under production conditions. This lets us promptly detect and correct any problems that were outside the scope of the tests. Remember, running a performance test does not mean you are permanently clear of any possible problems, but there are several ways to ensure that you minimize that risk.
6. The Neighbor Fallacy
We tend to think that applications in use by others with no complications will not cause us any problems when we use them ourselves. Why should we carry out performance tests, when our neighbor has been using the same product and it works for them just fine?
Even when the system works well for someone else with a given load of users, we must still tune it, adjust the platform, ensure the correct configuration of the various components, and consider the other factors that will affect how our own users experience the system. Our load, data volumes, and infrastructure are rarely identical to our neighbor’s.
7. The Overconfidence Fallacy
There is a belief that performance problems only show up in systems developed by programmers who regularly make mistakes and lack experience. Some managers think their engineers are all highly experienced, so there is no need to test performance, especially if they have developed large-scale systems before without any issues.
No. We must not forget that programming is a complex activity, and regardless of how experienced we may be, it is common to make mistakes. This is even more true when we develop systems that are exposed to multiple concurrent users (which is the most common case) and whose performance can be affected by so many variables. In those cases we must consider the environment, the platform, the virtual machine, the shared resources, hardware failures, and more.
These are just some of the fallacies I have come across in my experience as a professional software tester. Can you think of any others you have had to dissuade people from believing?
User Comments
Great Write up!
Best article I read yet... Thank you Sofia
Thanks so much Robert! I really appreciate it. You might like the posts I wrote for my company blog as well as the posts by my colleagues. You can find them here: http://abstracta.us/knowledge-center/
Happy Monday!
Thank you for this article, Sofia, and for sharing your company's blog. Still consider myself a newbie to the world wide labyrinth of testing. In the continuous effort to learn, grow and improve I am your newest fan!
Hi,
Nice article!
Can you please guide further: how can we create a performance test environment similar to production?
I need more information on this, with an example if you can give one.
Thanks,
vidula
Such an interesting read! Thank you for sharing this, Sofia. Performance testing is just one aspect of the overall software QA function, but yet holds massive importance, and rightly so. And nowadays, the conversation about performance testing often pivots toward performance engineering.