In today's world, both agile and traditional approaches to software engineering can be employed for testing. Whether you do agile, traditional, or something in between, you need to ask yourself, "Why do I test?" Jon Hagar did, and he relates his answers here.
An email discussion group had a debate some time back on why one should test. Points made included the following: we test to find errors; we test to show that software works; managers often view testing as a non-value-added task, or a waste. What customer uses test products directly? None. And certainly, many managers and company owners would like to see testing activities done away with.
Because what you and I produce (tests) is of no direct use to the customer, testing looks like a non-value-added tax that doesn't produce any sellable product. Many opinions were voiced in the email discussion group. Some authors expressed deeply personal reasons for why they test. But I decided to think about the reasons I test, because I needed to understand my own raison d'être. As strange as it may seem, my schizophrenic nature enjoys software testing. I find that testing and quality are parts of a multiheaded beast.
Let's consider the heads of the beast. Their complexity and diversity keep the testing life interesting and always surprising. James Whittaker says testing is not art or craft; rather, it is primarily a discipline that can never be fully learned and can always be improved ("Mastering the Discipline of Testing," STQE magazine, Nov/Dec 2002). I agree with this view, and it is another of the heads of the beast. Testing certainly has elements of art and of craft. It also has engineering and science, both of which I like. It has management, mathematics, psychology, economics, logic, and much more. Again, I like these also. So all of this diversity makes my schizophrenic side(s) happy.
Like any multiheaded beast, this one has a dark side. I do battle with the poor-quality beast-head. I do not fight the developers, for we are a team battling to create a quality product. The team, along with the customers, is battling the invisible head known as complexity. That complexity often defeats our understanding and shows itself in the incarnation of bugs in software. The testers are but agents of delivery, trying to free the functionality of the software from the miscommunications of the beast, which result anytime you have complexity. The miscommunication is between all the players: development, quality, management, users, customers, sales, systems engineering, and others. Miscommunication and complexity result in faulty understanding and, thus, errors.
Many ideas, tools, techniques, visualizations, methods, and approaches are aimed at these multiple heads. At the heart of software, and of software testing, is language: the languages of computers, humans, tools, and notations (not to mention the "thought-language" we are sometimes expected to "read"). Some people believe that many human wars start because of language and the resulting communication and understanding problems. And so it is with software. We use one language to formulate the problem and another to formulate a solution, but because no one player (user, developer, tester, computer) understands all of the languages used, we create yet more languages that try to bridge the problem-space gap. We end up creating a modern tower of "babble." So, years back and even today, people have put forth solutions.
Some of the first solutions came from the formal, heavyweight-process people. They said, be process based with heavy documentation, and all will be well. But it was not. The industry had semantic and syntactic problems between the players in our systems. We would slay one head of the quality monster, only to have another replace it. Nobody was happy. The whole story of heavyweight, process-focused ideas is much longer than I care to deal with in this