Including a testing/QA component on a software project necessarily prolongs the schedule, right? Not so, according to Ross Collard. In this, the first of a three-part series, Collard explains how speed and quality assurance don't have to contradict each other. Read his examples of how testing can actually help reduce the time to market.
The Speed Imperative
Failing to deliver on time is probably the single largest problem facing the software profession (closely followed by and often associated with last-minute unpleasant surprises, reductions in the features delivered, less complete testing and lower quality, and cost overruns). So it is critical to be able to deliver systems and products to the market quickly. There is considerable evidence that the game goes to the quick, with few second chances for latecomers.
The Wall Street Journal made this observation about the winner-take-all society: "Today, it's axiomatic in Silicon Valley that the Internet is a 'land grab' where the dominant early player walks off with most of the booty." There's even a book by James Gleick, entitled Faster: The Acceleration of Just About Everything.
In the words of Ian Diery, former head of marketing for Apple Computer, "Being first is more important than being best." (This works well as a strategy…until the competitors catch up with the company's innovations, and the customers aren't so willing to compromise quality anymore.)
Put another way, the claim is that good quality delivered late is useless. As Napoleon Bonaparte said to a messenger in 1803, "Go, sir, gallop, and don't forget that the world was created in six days. You can ask me for anything you like, except time."
Does Fast Delivery Have to Compromise Quality?
There isn't an iron-clad tradeoff between time to market and quality: these two factors do not have to be mutually exclusive. There are many situations where improved practices have decreased time to market, and have increased the quality of the delivered system at the same time.
As one example, IBM reported that, in one of their divisions (the Santa Teresa Lab), over a two-year period the software testing efforts decreased by 43 percent on average, and elapsed time to deliver products to market decreased by more than 15 percent, when software professionals and their managers adopted more effective techniques for defect prevention, quality assurance, and testing.
The quality of the IBM division's systems improved at the same time, with the number of bugs remaining in the delivered systems dropping by approximately 50 percent.
Conversely, in many situations, longer delivery times have led to poorer quality. As the system delivery date slips later and later, the temptation grows to "cut corners" on defect prevention measures, testing, and fixes.
As reported by Capers Jones, IBM found that the software products that have the fewest defects also have the shortest delivery cycles. In a study of 4,000 projects cited by Jones, poor quality was one of the most common causes of project overruns. The reason is the delay caused by reworking to remove defects.
In response to customer demands and competitive pressures, most organizations are continually attempting to reduce their product development cycles and system delivery times. Organizations including Hewlett-Packard and AT&T have reported that their system development and delivery cycles have steadily been reduced over the last decade, in part by making the test and QA processes more effective.
The (Apparent) Quality vs. Speed Contradiction
In a speed-focused environment, you are likely to hear the following arguments against testing:
Testing is the bottleneck in delivering systems. This statement is akin to saying that checking that each car runs is the bottleneck in the automobile manufacturing process. Testing requires tradeoffs of time, resources, and results, like any activity. But when the critical element is time, test teams can make decisions and take actions to avoid protracted testing. Even better, defect prevention practices such as walkthroughs and effective unit testing can deliver cleaner software and keep the time for testing and re-work to a reasonable amount.
We don't have time for those quality measures; we are in a rush. But the race does not always go to the swift. The landscape is littered with the carcasses of firms that delivered systems to market quickly, but with so many problems that they alienated their customers and lost anyway. The software field contains many forgotten names of vendors and products that once glittered with promise. The fact that they were first to market has long since become irrelevant.
This article is entitled "Speeding the Software Delivery Process," but it could be called "Speeding the Testing Process." The testing and quality assurance activities on a project have a rich potential for streamlining and improvement, and so are worth our attention.
So, How Do We Speed Delivery?
There is no silver bullet or quick two-minute answer to the question "How can we improve time to market without sacrificing quality?" There isn't one simple, magical thing we can do to speed delivery. Rather, we need to pay careful attention to several factors. Success comes from managing many seemingly minor items, each of which can affect time to market.
A common approach to speeding the testing is simply to exhort the testers to "work smarter and harder" and to cut the scheduled test time, but this approach usually undermines the effectiveness of the test. "Damn the torpedoes, full speed ahead" is a great rallying cry, until you hit a torpedo.
Can we use the test and QA processes to speed not just the testing but also the overall product delivery cycle? Yes, definitely. There are many ways to reduce the test cycle time and speed the overall delivery process, including:
1. managing testing like a "real" project
2. strengthening the test resources
3. improving system testability
4. getting off to a quick start
5. streamlining the testing process
6. anticipating and managing the risks
7. actively and aggressively managing the process
For the remainder of this article, I will elaborate on points 1 and 2 in the list above. Subsequent articles in this three-part series will discuss the remaining points, 3 through 7.
1. Manage Testing like a "Real" Project
This suggestion may seem obvious, but the lack of sound project management is a common cause of delays and mediocre (or worse) quality.
Have a workable test plan. Taking the time to develop a plan that is more than a sketch on a napkin is a radical idea in some organizations. But without an organized plan, you can fall into testing blindly and reactively.
- Educate yourself early and as thoroughly as possible on the system functionality, the users' success factors, the likely risks and vulnerabilities, and the test environment and testing tools.
- Understand the "pull/push" nature of test projects. If support activities, like developing test procedures or test tool training, are not done during the first third of a project, in the hurly-burly of pushing toward the project deadline they never will be done.
- Enhance your estimating and negotiating skills. Estimating realistically is difficult, but it can be done; it is an acquired skill to be vigorously pursued.
- Remember to identify and allow for contingencies.
- Incorporate mechanisms in the test plan to monitor progress against the plan, and update the plan periodically as conditions change.
Develop realistic estimates. Without estimates and schedules that can be relied on, we are already out of control. But most test team leaders and testers are not confident of their skills in this area.
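A realistic estimate usually starts from a simple, explicit model that others can challenge and refine. As a minimal sketch (the productivity rate, rework factor, and contingency percentage below are illustrative assumptions, not industry benchmarks), a first-cut effort estimate might look like this:

```python
# Illustrative sketch of a rule-of-thumb test-effort estimate.
# All rates and factors below are invented placeholders, not
# published figures; calibrate them against your own projects.

def estimate_test_effort(test_cases: int,
                         cases_per_day: float = 10.0,
                         rework_factor: float = 1.3,
                         contingency: float = 0.2) -> float:
    """Effort in person-days: base execution time, inflated for
    re-test/rework cycles, plus a contingency buffer."""
    base = test_cases / cases_per_day
    with_rework = base * rework_factor
    return with_rework * (1.0 + contingency)

print(round(estimate_test_effort(400), 1))  # 400 cases -> 62.4 person-days
```

The value of such a model is less the number it produces than the fact that every assumption (productivity, rework, contingency) is visible and negotiable, which supports the estimating and negotiating skills discussed above.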
Understand and control the factors that can lead to project slippage. For example, an apparently simple, last-minute change to a feature can play havoc with the testing schedule.
Manage the critical path. When any task on the critical path slips, the entire test project slips. This requires, of course, that we know what the critical path is (and it can change frequently, as events change).
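Finding the critical path is mechanical once tasks, durations, and dependencies are written down. A minimal sketch, using Python's standard graphlib and invented task names and durations:

```python
# Sketch: compute the critical path of a small test project modeled
# as a DAG. Task names and durations (in days) are invented examples.
from graphlib import TopologicalSorter

durations = {"plan": 2, "env_setup": 3, "write_tests": 5,
             "execute": 4, "report": 1}
# Map each task to the set of tasks that must finish before it starts.
preds = {"plan": set(), "env_setup": {"plan"}, "write_tests": {"plan"},
         "execute": {"env_setup", "write_tests"}, "report": {"execute"}}

finish = {}  # earliest finish time of each task
via = {}     # predecessor that determines each task's start time
for task in TopologicalSorter(preds).static_order():
    if preds[task]:
        latest = max(preds[task], key=lambda p: finish[p])
        via[task], start = latest, finish[latest]
    else:
        via[task], start = None, 0
    finish[task] = start + durations[task]

# Walk backward from the last task to finish to recover the path.
task, path = max(finish, key=finish.get), []
while task is not None:
    path.append(task)
    task = via[task]
print(" -> ".join(reversed(path)), f"= {max(finish.values())} days")
```

Here the chain plan, write_tests, execute, report drives the 12-day schedule; shortening env_setup would gain nothing, which is exactly the insight the critical path gives a test manager.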
Coordinate closely with other groups. Most vendor software and Web-based development projects today utilize so-called RAD (rapid application development) methods such as XP (extreme programming) and agile methods. To be effective, RAD requires a more fine-tuned coordination of testers with others, such as developers, Web site content managers, users, and marketers in a vendor organization, than is needed with traditional system development methodologies.
Manage the interdependencies among activities. More than mere coordination is needed on most projects; the interactions among the co-dependent parties must be proactively planned and managed. Development and testing projects usually have many mutual dependencies among their internal tasks, and more-difficult-to-manage dependencies with tasks external to the project. We may even have gridlock: person A may be dependent and waiting on person B, who is waiting on person C, who in turn awaits some deliverable from person A.
Managing the interdependencies means that we first know what they are, and have up-to-date information on the status of predecessor tasks. The test and QA milestones should be included in the overall system development project schedule, and coordinated with the other project activities (design, programming, etc.).
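The gridlock scenario (A waits on B, who waits on C, who waits on A) is simply a cycle in the dependency graph, and a topological sort will flag it automatically. A small sketch, with the three-party cycle as made-up data:

```python
# Sketch: detect circular waiting ("gridlock") among tasks or people.
# The dependency data mirrors the A/B/C example and is invented.
from graphlib import TopologicalSorter, CycleError

waits_on = {"A": {"B"}, "B": {"C"}, "C": {"A"}}  # A waits on B, etc.

try:
    order = list(TopologicalSorter(waits_on).static_order())
    print("workable order:", order)
except CycleError as err:
    # err.args[1] holds the nodes forming the cycle.
    print("gridlock detected:", err.args[1])
```

Keeping the dependency list in machine-checkable form like this means the gridlock is caught when the plan is drawn up, not when three people discover they are all waiting on each other mid-project.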
Balance the test resources to the workload demands. Plan ahead: maintain a forward-looking calendar of the major systems and new versions planned for the next eight to sixteen weeks, and their likely demands on the testing resources. Schedule and allocate test and QA resources for these development projects, especially to provide early preventive QA.
Use project management tools. Realistically, test planning, scheduling, monitoring, and updating test project plans cannot be done without a good project management tool, such as Microsoft Project. These tools facilitate more sophisticated modeling and exploration of alternative courses of action, by allowing rapid iterations of planning scenarios and by using "what if?" sensitivity analysis.
However, the majority (perhaps 85 percent) of test professionals do not use project management tools. They use informal devices like paper lists of tasks instead (the "back of the envelope"), and manage by the seats of their pants.
Manage expectations. Sometimes the test function is blamed for late delivery, because the testing occurs at the end of the delivery cycle; therefore, it is conspicuous as the reason why the system is not yet ready. People will say, "It takes too long to test!" Educate managers and clients by presenting status information on when a system was actually ready to test, how many defects have been found during the test, and what effort is required for an adequate test.
2. Strengthen the Test Resources
Add test resources. Many test efforts today are under-resourced, partly because of optimism, denial, and lack of understanding of what is needed to perform an adequate test. Many managers and professionals are surprised by the amount of testing indicated by time-proven test estimating formulae, and frankly do not believe them. However, we have considerable evidence that these rules of thumb are about right, from actual projects in leading software organizations. Many software vendors, for example, have a ratio of software engineers to testers of 3:1 or 2:1.
Resist the temptation to add people resources to a late project. Despite the suggestion above to add resources, Brooks's law states that adding people to a late project only makes it later. If the newly added resources join the test effort too late, or the newcomers are not experienced and already oriented to the project, Brooks's law will be obeyed.
In variations of the application of Brooks's law, avoid bringing in new testing tools or changing test processes beyond the first third of the testing project. The learning curve and dislocation caused by the changes are too risky in the last two-thirds of the project. (Fred Brooks is a professor at the University of North Carolina who led the software development team for IBM's breakthrough System/360 computer in the 1960s.)
Ensure the right person is allocated to each work activity (i.e., the best person available for that particular activity). Testing is a very broad field with many specialized types of expertise and context-specific knowledge. A particular task that takes one team member five days may take another team member only two hours.
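The five-days-versus-two-hours spread suggests a simple allocation heuristic: for each task, pick the team member with the lowest estimated duration. The member names and hour estimates below are invented, and a real scheduler would also have to balance each person's total workload:

```python
# Sketch: give each task to the team member with the lowest estimate.
# Names and hour figures are invented illustration data.
estimates = {  # task -> {member: estimated hours}
    "load_test_scripts": {"ana": 40, "raj": 2},
    "test_db_setup":     {"ana": 4,  "raj": 16},
}

for task, options in estimates.items():
    best = min(options, key=options.get)
    print(f"{task}: {best} ({options[best]} hours)")
```

Even this crude greedy pass makes the cost of misallocation visible: swapping the two assignments above would turn roughly six hours of work into fifty-six.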
Give the testers the facilities they need. It is not a good sign when testers are waiting for shared testers' equipment, which currently is tied up with other demands. Or when testers are struggling and wasting time with quirky, difficult, or obsolete tools. Or when testers do not have the support personnel they need, such as network administrators to troubleshoot the network, or database administrators to extract and download test databases.
Upgrade the skills and experience levels of the test team. In some organizations, juniors are used in testing to free the valuable people for "the important activities." While they may be bright and motivated, neophytes can require an exceedingly long time to get up the learning curve or, worse, they may perform an inadequate test and not even know it. At the very least, the test team should be seeded with a few highly competent, battle-scarred veterans to guide the juniors.
Expertise is needed in three key areas: the functionality or subject matter, he technological foundation for the system, and effective test techniques.
Increase the involvement of the clients and end users in the test process. These people bring functional subject matter expertise, a fresh perspective, and additional hands to the test effort. Involving them early in the test process also builds psychological ownership of the system and narrows the gap between user expectations and the reality of what the system can actually do.