I have heard testers lament about being managed by non-technical people who cannot tell the difference between a PC and a microwave oven (they both have windows, don't they?). Managers believe in management; we technical people believe in subtle, sophisticated, deep technology. The concept seemed simple enough. Then I was subjected to a harsh reality!
One bright morning I was put in charge of process improvement--a buffer between management and engineers--and I simply didn't have a clue as to what could really improve our results, except maybe the application of our deep technology. It was then that I discovered some interesting morals that I would like to share with you.
There Is Something as Bad as Not Doing Testing: Not Managing It
When should you start testing? What are the testing priorities? What should go into the test plan? Who should do the testing and how should they get organized? Who should set up a testing environment and when? How much should it be automated? How are you going to discipline the communication between independent testers and developers? How are you going to manage testing, bug fixing, and development simultaneously? How do you decide that you can promote your product to beta testing or to release status? How do you make sure that the bugs you found have really been fixed in the final version?
How do you know if your bug fixing capability is catching up with your bug finding rates? How do you know if you need more resources to get the product to an acceptable quality level within the shipping date? And if users are involved in testing, how do you make sure that the right resources will be available when you need them for system testing? Have you planned for keeping inter-department system testing and problem fixing well in tune?
Wow! There are so many questions, and perhaps these are not even all the ones that will come to mind. Well, let me tell you that none of the answers is technical in nature, and if you screw them up, it is as dire as having a non-tested product hit the marketplace, and you will still be working in the software shop that shipped it.
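At least one of those questions can be made concrete with a little arithmetic. Comparing cumulative bugs found against cumulative bugs fixed shows whether the open-bug backlog is shrinking as the ship date approaches. A minimal sketch, using hypothetical weekly counts (not data from any real project):

```python
# Sketch: is bug fixing keeping pace with bug finding?
# The weekly counts below are illustrative, invented numbers.
from itertools import accumulate

found_per_week = [12, 15, 18, 14, 9, 6]   # hypothetical find rates
fixed_per_week = [5, 10, 14, 16, 15, 12]  # hypothetical fix rates

# Open backlog each week = cumulative found - cumulative fixed.
open_bugs = [f - x for f, x in zip(accumulate(found_per_week),
                                   accumulate(fixed_per_week))]
print(open_bugs)  # prints [7, 12, 16, 14, 8, 2]
```

Here the backlog peaks and then declines once fixing outpaces finding; if it were still rising near the ship date, that would be the signal to add fixing resources or revisit the schedule.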
If You Don't Manage Quality, You Won't Improve It Just by Applying Some Fancy Quality Techniques
Peer reviews, white box testing, boundary analysis, and so on are fine. But you should never forget that you do them for two basic reasons:
- Making customers satisfied, and thus making an adequate amount of money for your company
- Avoiding making the same mistakes over and over again, so that the next time you can tell everybody, "I have improved." This keeps money from leaving your company as "non-quality" costs.
Well, the technology itself does not tell you how to ensure those two vital results; they come from the way you manage your process. Understanding your customers' needs and focusing quality control activities on what is relevant for, visible to, and frequently used by your customers is a largely managerial aspect of ensuring quality. Then, once you have sweated to get the right quality process in place, you simply cannot afford to lose the experience you have gained. Quality records analysis is a pivotal activity for learning from experience and finding a way towards improvement. Here again, technology and management should work together, not just to read and interpret data, but also to decide which actions should be taken.
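Even a simple tally of quality records can point to where a process change will pay off. A minimal sketch, assuming each closed defect has been tagged with a root-cause label (the records and labels below are hypothetical):

```python
# Sketch: tallying closed-defect records by root cause so the most
# frequent causes, the ones worth a process change, rise to the top.
# The records below are invented for illustration.
from collections import Counter

defect_records = [
    "requirements", "coding", "requirements", "interface",
    "coding", "requirements", "build", "requirements",
]

by_cause = Counter(defect_records)
for cause, count in by_cause.most_common():
    print(f"{cause}: {count}")  # most frequent cause first
```

Deciding what to do about the top cause, of course, is exactly the managerial half of the job the data alone cannot perform.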
People Are Not a Secondary Matter to Quality
No matter how much technology you give them--as old wisdom goes--the quality of the results will never be better than the quality of the people producing them. A fool with a tool is still a fool. Lack of motivation, bad organization, and scarce or nonexistent training are the plagues of software testing and quality control. Peopleware is still--and perhaps should remain--a black art, but, believe it or not, when it works it can produce breakthrough results.