Performance testing is an important procedure to be carried out before approving any software product for shipment. You’ve probably heard some horror stories from senior colleagues about a time when a system was shipped without any performance testing, so now it is an essential part of your testing process. There are various tools for performance testing non-GUI middleware systems, but there are times when we don't have the liberty to choose from the existing set of tools.
Why Not Choose an Existing Tool?
The following are some of the reasons that keep us from choosing a tool already available on the market.
There are no suitable request firing options in the tool. Some middleware systems have performance requirements that no commercial tool can completely fulfill. For example, the Telecom Service Delivery Platforms that I have worked with use the SIGTRAN protocol, and it is pretty hard to find a performance tool that supports it. A few other protocols, such as Universal Computer Protocol (UCP) and Computer Interface to Message Distribution (CIMD), are also unsupported by typical performance tools. If the existing tools do not support our important performance requirements, we may be forced to build a custom performance tool.
The testing tool's performance may not be enough. Commercial tools may be rich in features, but not all of those features are useful at any given time, and the extra baggage can keep the tool from performing at the expected level. We may also have higher expectations: firing requests at a high rate, such as 2,000 transactions per second (TPS), while using fewer system resources (memory, CPU, I/O).
Tools that provide more features also tend to consume more system resources. Commercial tools usually recommend firing from multiple machines to achieve a higher request rate, but that adds more cost to the project. If we care most about high firing rates with low system resource usage, we may have to build our own tool that fires efficiently at a high TPS.
The tool is not commercially viable. Some tools might provide enough options, but their prices may not fit the project budget. There are times when it is not worth paying for a tool, and free open source tools may not provide enough features. Building our own performance tool does not come for free either, so we should estimate the cost of building it and compare that with the cost of the existing tools before making a decision.
In our company, we were using some Telecom-related protocols and couldn't find a suitable tool, so we ended up building the performance tool ourselves. Once that became successful, we started using the tool in most of our projects.
Advantages of Building Your Own Performance Tool
Due to one or more of the aforementioned reasons, we may be forced to write our own tools to do performance testing. This gives us more freedom to decide on how our performance tool can be designed and what features to include. Below are some of the advantages of building your own customized tool.
We can freely enhance the performance testing tool's features. If we choose an existing tool, we are limited to that tool's abilities. For instance, we may choose the tool that suits most of our scenarios, but if clients complain about random response delays a few months later, we are forced to measure those delays. If our chosen tool does not support this, we have to look for another option. With our own tool, it is much easier to expand its scope to support such new requirements.
We can reuse existing monitoring tools rather than building monitoring support into our tool. Operating systems usually come with enough utilities to monitor system resources such as memory, server load, and CPU. Further, Java ships with tools such as Flight Recorder, GC logging, jstack, and jconsole, which can complement our own performance tool. We only have to build the simple request firing tool, and for monitoring, we can make use of these existing tools.
We can build a reusable performance tool to justify the business decision. As an organization, we might have several similar products, and a reusable tool helps justify the decision at the business level. As tech people, it is also fun to build a tool; it demands expertise in writing good code while being mindful of concurrency issues. If we can reuse the tool across projects, it cuts costs for the organization.
We can become experts in using JDK and operating system-based monitoring tools. If we use JDK and operating system-based tools for performance monitoring, we become experts in using them. Later, this experience is useful when investigating performance issues in production systems. Ninety-nine percent of the time you will not be allowed to install third-party performance software on production machines, because it may introduce security issues and add overhead to the production traffic. Therefore, it is good to have expertise with these basic system and JDK tools, which can always help you troubleshoot production performance issues.
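As a small illustration of leaning on the JDK instead of building monitoring support, the standard java.lang.management beans can report heap usage and thread counts from inside the firing tool itself, with no external agent. This is only a sketch; the class name JvmSnapshot and the output format are my own choices, not part of any particular tool.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmSnapshot {
    // One-line snapshot of heap usage and live thread count, taken from the
    // standard JDK management beans -- no external agent or library required.
    public static String snapshot() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        return String.format("heapUsedMB=%d threads=%d",
                heap.getUsed() / (1024 * 1024), threads);
    }

    public static void main(String[] args) {
        // Could be logged once per second during a firing run.
        System.out.println(snapshot());
    }
}
```

For anything deeper (allocation profiles, lock contention), jconsole, jstack, or Flight Recorder can be attached externally, keeping the firing tool itself lean.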
Disadvantages of Building Your Own Performance Tool
It is important to carefully analyze the need for writing your own tool. Generally, it is recommended to reuse well-established tools for typical performance testing, but there are exceptions. A clear analysis is strongly recommended before deciding to write your own tool. Here are some of the disadvantages of building your own performance tool.
Building the tool will require a lot of expertise and knowledge. To write a good tool that meets your expectations, you need strong knowledge of concurrency, efficient connection handling, and efficient memory usage. It is not advisable to build your own tool if your team lacks strong knowledge of the needed technologies.
Building a tool can be expensive. Without proper estimation, you might end up spending more than you would on an off-the-shelf tool. Proper analysis and estimation are recommended before deciding to write your own tool.
A performance tool's own performance issues are dangerous. This is the classic "who watches the watchmen" problem. If your tool itself has performance problems, you might wrongly suspect the system under test. So your tool needs to be properly reviewed for performance issues.
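To make the concurrency expertise point concrete, below is a minimal sketch of the kind of code a firing tool needs to get right: a thread pool dispatches requests, an AtomicLong counts successes without locks, and a CountDownLatch makes the wait-for-completion safe even when a request fails. The class name ConcurrentFirer and the Runnable stub are illustrative assumptions, not code from any real tool.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class ConcurrentFirer {
    private final AtomicLong sent = new AtomicLong();   // lock-free, thread-safe counter

    // Fires 'total' requests across 'threads' workers and waits for all to finish.
    // 'request' is a placeholder for the real protocol call (SIGTRAN, UCP, etc.).
    public long fire(int threads, int total, Runnable request) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(total);
        for (int i = 0; i < total; i++) {
            pool.submit(() -> {
                try {
                    request.run();
                    sent.incrementAndGet();
                } finally {
                    done.countDown();   // count down even on failure, or await() hangs forever
                }
            });
        }
        done.await();
        pool.shutdown();
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        long fired = new ConcurrentFirer().fire(4, 1000, () -> { /* request stub */ });
        System.out.println("fired " + fired + " requests");
    }
}
```

Subtle mistakes here (a plain long counter, a latch that is skipped on exceptions) produce exactly the misleading results the "who watches the watchmen" point warns about, which is why this expertise matters before committing to a custom tool.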
Guidelines for Preparing Your Own Performance Tool
Below are some recommended guidelines for preparing your own performance tool.
Clearly define the performance tool's scope. First, we need to decide the scope of the performance tool. The scope can depend on the following options.
Request firing ability—The tool needs to support a varying number of transactions per second, because some systems receive request traffic in a curve-like pattern while others receive it at a constant rate. If a request depends on the responses of previously fired requests, we may have to cache the response values of each request.
Available resources to run the tool—Depending on the resource limitation, we may have to tune this performance tool to work effectively. Memory and CPU usage need to be considered.
How the performance monitoring is to be done—Are we going to rely on the tool to do the performance monitoring by logging system usage details?
Choose simple and efficient technologies. It is important to choose a simple technology to make sure the tool can be developed and changed by anyone. If the firing tool has no complex requirements, we can even use simple scripting languages.
Make sure that the tool uses minimal system resources. The tool should not do any unnecessary calculations or unnecessary logging; it should do the bare minimum of work so that it can fire at the highest rate with the lowest system resource usage.
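The scope points above can be sketched together: a scheduler spaces requests to hold a constant target TPS, and a concurrent map caches each response so later requests can depend on earlier ones. Everything here is an assumption for illustration: the class name RateFirer, the id-to-string cache, and the send function standing in for the real protocol call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongFunction;

public class RateFirer {
    // Cache of request id -> response, so a later request can read values
    // from an earlier response (the dependency requirement noted above).
    private final Map<Long, String> responses = new ConcurrentHashMap<>();

    // Fires at a roughly constant target TPS for the given number of seconds.
    // 'send' is a hypothetical stand-in for the real protocol call.
    public void run(int tps, int seconds, LongFunction<String> send) throws InterruptedException {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        AtomicLong nextId = new AtomicLong();
        long periodMicros = 1_000_000L / tps;   // spacing between consecutive requests
        timer.scheduleAtFixedRate(() -> {
            long id = nextId.incrementAndGet();
            responses.put(id, send.apply(id));  // fire and cache the response
        }, 0, periodMicros, TimeUnit.MICROSECONDS);
        Thread.sleep(seconds * 1000L);          // let the run last the requested duration
        timer.shutdown();
        timer.awaitTermination(5, TimeUnit.SECONDS);
    }

    public Map<Long, String> responses() { return responses; }

    public static void main(String[] args) throws InterruptedException {
        RateFirer firer = new RateFirer();
        firer.run(100, 1, id -> "resp-" + id);  // ~100 requests in one second
        System.out.println("captured " + firer.responses().size() + " responses");
    }
}
```

A curve-like traffic pattern would simply vary the tps argument across successive run() calls; keeping the per-request work this small is also what lets the tool stay within the minimal-resource guideline.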
So, the bottom line is that depending on the nature of the project, you CAN write your own performance tool, but I only suggest this approach for high-end middleware systems that don’t have a suitable tool for performance testing.