Testing Testability

Summary:

Recently I overheard a conversation between a test analyst and a business analyst about how a function should be tested. The business analyst's response was, "If it is not breaking the application, it must be working fine!" Testing staff often come across scenarios where a part or function of the application under test is not "testable": the tests they carry out are not conclusive enough to say that the functionality is working as specified. In this week's article, Ipsita Chatterjee defines testability and looks at the benefits of incorporating it into a product. Also discussed are simple ways to monitor the incorporation of this non-functional requirement throughout the software development life cycle, along with a few industry myths about testability.

What is Testability?
Testability is a non-functional requirement important to the members of the testing team and to the users involved in user acceptance testing. It can be defined as the ease with which a piece of code or functionality can be tested, or as a provision added to the software so that test plans and test scripts can be executed systematically.

Why is Testability necessary?
A typical software development life cycle involves requirements gathering, analysis, design, coding, testing, implementation, and maintenance. A testable product allows the test scripts to be executed completely. Assuming good test coverage, most of the defects will be uncovered and fixed before the product is released, which ensures that customers will report a minimum number of defects. A lot of money is spent on supporting and maintaining a product after its development; testable products are easier and less costly to maintain, and the chances of achieving customer satisfaction with such products are much higher. Hence testability is an important contributor to the maintainability of any software product.

How is this attribute measured and monitored?
Whether software, a piece of code, or a functionality can be tested depends on what the tester or user can see and can control, properties known as observability and controllability.

Observability enables a tester or user to see both the external and the internal behavior of the software. A user may receive the correct expected output while the internal or background processes are not quite what the requirements specified; defects of this kind often surface later elsewhere in the system. Observability matters more for unit and integration testing than for simple black box testing.
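As a simple sketch of what an observability provision might look like (the function and field names below are hypothetical, not taken from the article), a routine can report its internal decisions through an optional hook so that a unit test can check the background processing, not just the visible output:

    # Hypothetical example: an optional audit hook makes internal decisions observable.
    def apply_discount(order_total, customer_tier, audit_log=None):
        """Compute a discounted total; optionally record internal decisions."""
        rate = {"gold": 0.10, "silver": 0.05}.get(customer_tier, 0.0)
        discount = round(order_total * rate, 2)
        if audit_log is not None:          # observability hook for testers
            audit_log.append({"tier": customer_tier, "rate": rate, "discount": discount})
        return order_total - discount

    # A unit test can now assert on what happened internally:
    steps = []
    assert apply_discount(200.0, "gold", audit_log=steps) == 180.0
    assert steps[0]["rate"] == 0.10   # the right rule was applied, not a coincidence

The visible output alone (180.0) could be right for the wrong reason; the hook lets the test confirm the internal process as well.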

Controllability is a measure of how easily a tester or user can create difficult scenarios in order to test the software under extreme circumstances. For example, the behavior of an application when the hard disk is full or when a table overflows cannot be reproduced very easily.
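As a hedged illustration of a controllability provision (all names below are hypothetical), the code that writes data can take its storage dependency as a parameter, so a tester can substitute a test double that simulates a full disk without actually filling one:

    # Hypothetical example: injecting a fake storage object to create a "disk full" scenario.
    class FullDiskStorage:
        """Test double that behaves like a device with no free space."""
        def write(self, path, data):
            raise OSError("No space left on device")

    def save_report(report_text, storage):
        """Application code; the storage dependency is controllable from a test."""
        try:
            storage.write("report.txt", report_text)
            return "saved"
        except OSError:
            return "storage-error"   # the behavior we want to verify

    # The extreme scenario is now trivial for the tester to create:
    assert save_report("quarterly totals", FullDiskStorage()) == "storage-error"

In production the same function would receive a real storage object; the design choice is simply that the dependency is passed in rather than hard-wired.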

Incorporating Testability into Software
There are so many methodologies of software development that it is difficult to list specific or stringent rules for creating testable software. Just as testing should occur from the very beginning of a project, project artifacts should be reviewed for testability from the beginning as well.

In a traditional waterfall model, the phases are software specifications, detailed design specifications, coding, testing, and implementation. Most practical usage of this model adds an "iterative" approach at each phase. Testability can be incorporated into the various stages as listed below. The list is in no way exhaustive; practicing some of these steps can lead to more innovative ways of addressing this non-functional requirement.

  1. Software Specifications Phase: During the review of this document, the testing team should be specifically quizzed on their understanding of the requirements and on how the requirements translate into functionalities. Apart from the clear and straightforward requirements of what the application should do or is expected to do, the behavior under abnormal scenarios should also be documented clearly. Walking through the use cases used to formulate the requirements can also be very helpful. Checklists should be prepared that address how the different scenarios can be created.
  2. Detailed Design Phase: In order to incorporate testability in this phase, inputs and expected outputs should be clearly stated. The checklists prepared during the previous phase should be supplied to the designer. The design should elucidate the system path clearly so that the testing team knows which programs are touched in which scenario. As mentioned before, a correct expected output is not enough to ensure that the background processes are giving the correct results. Provisions for adding extra code or extra interfaces to test the application under "difficult to generate" scenarios should be added in this step.
  3. Coding Phase: This is the most crucial phase. The testability approaches taken in the previous two phases should be incorporated here. To achieve this, additional design elements or algorithms can be added, and they should be subjected to thorough unit testing. In order to make certain programs visible to users and testers, test harnesses can be generated in this phase so that the testing team can start some testing at the program and component level. All the scenarios covering what the application should and should not do that were derived in the previous phases should be incorporated and unit tested in this phase.
  4. Testing Phase: The test plan and test scripts designed for this step should cover all the testability measures taken in the previous steps of the project. They should be thoroughly tested along with the other functional and non-functional requirements. Any functionality, code, or program that is not accessible via the user interface or other commands should be subjected to other plausible methods of testing. Testability can be addressed at this phase by using specific queries (for certain applications), generating stubs and drivers for integration testing, and using test harnesses for specific modules or components, as illustrated in the sketch after this list.
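To make the stub-and-driver idea concrete, here is a minimal, hypothetical sketch (none of these class or function names come from the article): a pricing service is replaced by a stub with canned answers, and a small driver exercises an order component directly, without going through the user interface.

    # Hypothetical example of a stub and a driver acting as a simple test harness.
    class PricingServiceStub:
        """Stub: stands in for the real (perhaps unfinished) pricing service."""
        def price_for(self, item_code):
            canned = {"A100": 25.0, "B200": 40.0}
            return canned[item_code]

    def order_total(item_codes, pricing_service):
        """Component under test; normally reachable only through the UI layer."""
        return sum(pricing_service.price_for(code) for code in item_codes)

    def run_driver():
        """Driver: an entry point the testing team can run at the component level."""
        stub = PricingServiceStub()
        assert order_total(["A100", "B200"], stub) == 65.0
        assert order_total([], stub) == 0.0
        print("component-level checks passed")

    if __name__ == "__main__":
        run_driver()

Because the component accepts its collaborator as a parameter, the testing team can exercise it long before the real pricing service or the user interface is available.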

From the above discussion, testability does not seem to be a very difficult property to incorporate into any software or component. And if software is testable, it is much easier to execute test plans and test scripts systematically, without resorting to many ad hoc measures during the testing phase. So what stops us from testing testability?

Testability Myths
Some myths about testability delay its introduction into the project artifacts. Testability is perceived to be expensive because the cost of adding it in the different phases is easy to identify, while the losses incurred due to its absence are not easy to determine; customer satisfaction and maintenance are not even evaluated until the product's implementation or the end of the project. Some of the commonly found myths are:

Myth: Testability is expensive.
Testability doesn't have to be expensive. A small investment throughout the project phases can give us a major improvement in fault detection.

Myth: Testability can be a plug-in.
Testability is a way of ensuring quality. Just as quality cannot be added to a product as a separate ingredient, neither can testability; it has to be built into the product gradually, over time.

Myth: Low budget applications cannot afford testability.
Low budget applications will normally have large volumes of sales and a large number of user licenses, which increases the cost of maintenance if the application is not maintainable. Thus a modest investment right at the start can save a lot of hassle and maintenance cost after the sale and implementation of the software.

Conclusion
The discussion above points out the importance of probably the most neglected non-functional requirement of modern day applications, along with the myths that are associated with it. The fact is that, in order to be competitive in the marketplace and get our money's worth, testability must be recognized and addressed from day one of any project.


User Comments

Reinhard Roeder

I agree with everything you describe about what testability is and why testability is necessary. However, what I am missing is how to ensure testability and which tool(s) are best used to establish realistic and meaningful testability. I have always carefully prepared a detailed testability analysis to double-check how to verify the function and performance of electronic devices in space applications, and in the business area of my (former) company I standardized the Testability Analysis, which includes an evaluation method applying SMART criteria. These criteria help decide whether a requirement is actually testable: a requirement is testable when it is Specific, Measurable, Achievable, Realistic, and Time-bound.

July 2, 2022 - 8:16am
