This article describes a real-life situation where the test organization participated as part of the development team and was included in all phases of the development lifecycle for a highly successful software project. The project was an upgrade of the software and hardware for a mission-critical communications system in the aerospace industry. It consisted of converting approximately 70,000 lines of Fortran on a VAX mainframe to C and C++ on a DEC Alpha. This article will discuss the project situation and the solution. It will also present the issues, difficulties, and successes of working as an integrated team to develop software.
As part of a major overhaul to a mission-critical message processing system, our development team was challenged with replacing the system's core software, which consisted of the user interface, message router, and external hardware commanding functions. This software was written in Fortran and hosted on an old, slow VAX 11/785 mainframe computer. As well as being slow, this old architecture limited the user interface. The chosen new architecture was to use the faster DEC Alpha, write the software in C and C++, and use a Tolarian graphical user interface. The conversion effort was not limited to rewriting the existing 70,000 lines of code; it also involved re-engineering the existing design and adding new features and functions. The system, including software and the host computers, was to be developed in one location, shipped to the customer's facility, installed/integrated, and tested with a minimum amount of downtime to ongoing operations.
Due to budgetary constraints, our customer strongly requested that we meet an aggressive, almost unrealistic deadline for this type of effort. In essence, the customer was asking us to reduce our planned development and integration effort by approximately six weeks. And, as most testers know, the development (coding) team was not going to budge on their code-complete date; more likely, that date would slip.
The project team consisted of the following:
Since we—the test and integration team—knew that the development team was not going to reduce their schedule, our challenge was to develop a new test and integration strategy that would allow us to meet the new schedule and not sacrifice quality.
To reduce lost time and minimize the test schedule, we needed to change established processes and would also need to be aggressively involved in all development activities. The customer agreed.
Changing Test's Involvement
The established methods that had been used for years at this company involved the test team only minimally in software development activities. Involvement mainly consisted of giving the test team the requirements document and letting them develop test cases from it. Test was invited to reviews but usually didn't know the system well enough to participate effectively. The test team was also usually isolated from the development team, rarely meeting with them until it was time to test the product.
This had to change. We proposed and implemented a much more aggressive approach that intimately and formally involved the test organization in every development activity. The test team was required to work closely with the developers to understand the inner workings of the system. They were to ask questions and challenge design and development concepts. During analysis and reviews, the test team was tasked with evaluating the system for testability and usability as well as for meeting required functionality.
During this process, both teams freely shared information about their efforts. Developers would ask, 'How do you plan to test this function?' Testers would ask, 'How are you going to handle this possible scenario?' By using this approach, we were not only able to thoroughly understand the system, but also to understand the issues that arose during development. Without test's involvement, the entire project team might have lost many hours arguing about why the system wasn't passing tests or meeting requirements. This involvement also allowed the test team to know the developers on a professional level. How good were they? Were Developer A's code and unit tests solid? Was Developer B not as strong? We learned whose code needed more testing and whose needed less, and we saved countless hours using this technique.
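The judgment described above amounts to informal risk-based test allocation: spend more test time where confidence in the code is lower. A minimal sketch of that idea follows; the module names and risk scores are invented for illustration, since on the real project this call was made informally from knowing each developer's work, not by a script.

```python
# Hypothetical sketch: splitting a fixed test budget in proportion to
# perceived risk. Higher risk score = shakier code = more test hours.

def allocate_test_hours(modules, total_hours):
    """Split total_hours across modules in proportion to their risk score."""
    total_risk = sum(risk for _, risk in modules)
    return {name: round(total_hours * risk / total_risk, 1)
            for name, risk in modules}

# (module, risk score 1-5): invented examples, not the project's real modules
modules = [("message_router", 2), ("user_interface", 5), ("hw_commands", 3)]
print(allocate_test_hours(modules, 100))
```

The point is not the arithmetic but the principle: treating all code as equally risky wastes hours that a team with first-hand knowledge of the developers can redirect.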
During the Software Development Lifecycle (SDLC) the test team was engaged in the following activities (see Figure 1). This kept them in tune with development and made efficient use of their time.
Changing the Build and Test Cycle
In addition to being closely involved with the development team, we also changed the existing software build and test cycle. We implemented a concept called 'engineering testing'. Engineering testing reduced the number of 'formal' activities that were performed by the Quality Assurance group. The formality of the old process added significant time and complexity. During any formal activity, adherence to a standard process was required: any issue encountered during a formal activity had to be documented, and its resolution reviewed, accepted, and re-tested (if it occurred during testing) before any other activity could continue.
Under the old process, development would complete the code, and QA would place the source under configuration control and build the executable. The test team would then test the code and find any defects. If defects were found, they were documented, reviewed, and fixed. Then the process would begin again, continuing until all defects were resolved.
'Engineering testing' replaced the formal build and test process with an informal one. It deferred the formal build process until we were ready (see Figure 2), giving us the flexibility to make needed changes between the scheduled 'code complete' date and the 'formal environment build' date (see Figure 3). On our project, a code-complete date was selected, and then we scheduled three weeks of engineering testing. At the completion of engineering testing, we built the formal test environment and began formal testing.
Engineering testing also allowed us to quickly resolve issues and defects. During engineering testing we would make several passes through the test procedures/scripts to find defects. When defects or issues were found, we documented them on a spreadsheet. Depending on the severity, we would either present the item to the development lead immediately or hold it and present the day's issues at the end of the day. During the discussion between the test and development leads, we would determine whether the item was a real defect, a new feature, or just a misunderstanding of how the system was supposed to work. Items that were not defects were deleted, and the others were tracked until fixed. While the developers were fixing the code, the test team would either test another part of the system or work on modifications to the test procedures, minimizing lost time between builds. When fixes were complete, the modules or application was rebuilt and the process started again. Using this method, we resolved over 100 issues in less than three weeks.
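The triage loop described above (log each item, classify it jointly with the development lead, drop the non-defects, and track the rest to closure) can be sketched in code. This is purely a hypothetical illustration; the project's actual log was a spreadsheet, and the class and field names below are invented:

```python
# Hypothetical sketch of the informal issue log used during engineering
# testing. The three triage outcomes come from the text: real defect,
# new feature, or a misunderstanding of how the system should work.

DEFECT, NEW_FEATURE, MISUNDERSTANDING = "defect", "new feature", "misunderstanding"

class IssueLog:
    def __init__(self):
        self.items = []

    def record(self, title, severity):
        """Log a finding from a test pass; triage happens later."""
        self.items.append({"title": title, "severity": severity,
                           "category": None, "fixed": False})

    def triage(self, title, category):
        """Joint test/development decision: keep real defects, drop the rest."""
        for item in self.items:
            if item["title"] == title:
                item["category"] = category
        # items judged not to be defects are deleted from the log
        self.items = [i for i in self.items
                      if i["category"] in (None, DEFECT)]

    def mark_fixed(self, title):
        for item in self.items:
            if item["title"] == title:
                item["fixed"] = True

    def open_defects(self):
        """What still blocks the formal build."""
        return [i for i in self.items
                if i["category"] == DEFECT and not i["fixed"]]
```

In the real workflow, high-severity items went to the development lead immediately and the rest were batched for an end-of-day review; a structure like this simply makes that bookkeeping explicit.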
As the formal test date approached, test and development gauged how ready the system was for delivery. If we had still been finding many major defects, we would have had to push out the date. Fortunately, only two defects remained open, and development was working hard to fix them, so we decided to enter formal testing on schedule. Formal testing consisted of one final pass through all of the test procedures with QA witnessing the execution. Any defect or issue found at this time was formally documented and presented to the customer for review. Since development was unable to fix the last remaining defects, they were again encountered and documented. However, it was decided that these defects would not delay shipment to the customer's facility; they were resolved in later releases.
Changing the Test and Integration Process
One of the most resisted concepts proposed to reduce the test schedule was a radical change to the established test and integration process. Because we were either reducing the amount of testing or replacing formal test activities with informal checkouts, the internal customer and their QA representative were wary of this approach. However, the test team was confident that this would not be a problem, since the hardware and software do not (in theory) change between the development and operational facilities. Even if there were problems, we felt they would most likely be configuration issues that could be quickly resolved.
Figure 4 is a summary of the activities that occurred during integration and test. Although the number of activities is similar, we significantly reduced the formality and the number of actual tests that were performed.
Issues: Although this process proved to be successful, it wasn't without its issues, nor was it simple. The test team had to convince upper management, QA, and the internal and final customers that deviating from the existing process would not degrade the expected quality of the system. We had to assure them that test was actively involved in the development activities and that thorough testing was going to be performed. Being actively involved created its own issues. Some members of the development team initially had little respect for test's involvement; however, as development progressed, our value was recognized and we were repeatedly asked for our opinion on many development issues. Another risk of being too closely integrated was the potential to overlook minor issues as deadlines approached. As part of the team, test did not want to be seen as an obstacle, but we knew that being part of the team meant delivering quality software. On this project, test was accused of overlooking issues when the software was shipped with two known defects, and one developer strongly voiced his opposition. The test team had a pivotal role in convincing management and the customer that these issues would not affect the performance of the system.
Successes/Benefits: Using these new processes and methods allowed us to meet the customer's aggressive timeline without compromising the overall quality of the product. Even two years after the installation, only two minor defects were discovered in the original delivered code. Aggressively participating in the development activities prevented countless defects, and when issues or defects were encountered, they were quickly resolved. Closely integrating the test and development teams also limited the usual friction between the two groups. As a result, test was now viewed as a system expert and part of the 'team', not just a group that holds up the process.
In summary, integrating test professionals into the development team prevents defects, gives them up-front knowledge of the system, and promotes the success of the entire project TEAM. It gives testers the ability to create test documentation concurrently with software development and allows for quick resolution of defects. Also, challenging the normal process can be an effective means of reducing test time without sacrificing product quality.