
Conference Presentations

S-Curves and the Zero Bug Bounce: Plotting the Way to Better Testing

The use of objective test metrics is an important step toward improving your ability to manage any test effort effectively. With two test metrics, the S-Curve and the Zero Bug Bounce, you can easily track the progress of the test effort. Learn to graph the S-Curve, showing cumulative test cases planned, attempted, and completed over time. Keep track of the Bug Bounce, the number of open bugs at the end of a period (usually one to several days), and especially the Zero Bug Bounce, the first time development has resolved all the bugs raised by the testers and no active issues remain outstanding. Improve your ability to communicate test results and test needs to the project team, and make better decisions about when your application is ready to ship. A short sketch of computing both metrics follows the bulleted objectives below.

  • Derive a theoretical and actual S-Curve for test cases using historic and current data
  • Use the Zero Bug Bounce for tracking defect correction activities
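To make the two metrics concrete, here is a minimal sketch in Python, assuming made-up daily logs of test-case and open-bug counts; the data shapes and numbers are illustrative, not from the session.

    # Sketch: computing S-Curve points and the Zero Bug Bounce from simple logs.
    # The data layout below is an illustrative assumption.
    from datetime import date

    # Per-day counts of test cases planned, attempted, and completed.
    daily = {
        date(2004, 9, 1): (20, 12, 10),
        date(2004, 9, 2): (20, 18, 15),
        date(2004, 9, 3): (10, 14, 16),
    }

    # Open-bug count at the end of each reporting period (the "bug bounce").
    open_bugs = {date(2004, 9, 1): 7, date(2004, 9, 2): 3, date(2004, 9, 3): 0}

    def s_curve(daily):
        """Cumulative (planned, attempted, completed) per day: the S-Curve points."""
        planned = attempted = completed = 0
        curve = []
        for day in sorted(daily):
            p, a, c = daily[day]
            planned, attempted, completed = planned + p, attempted + a, completed + c
            curve.append((day, planned, attempted, completed))
        return curve

    def zero_bug_bounce(open_bugs):
        """First date on which the open-bug count drops to zero, or None."""
        for day in sorted(open_bugs):
            if open_bugs[day] == 0:
                return day
        return None

    for day, p, a, c in s_curve(daily):
        print(f"{day}: planned={p} attempted={a} completed={c}")
    print("Zero Bug Bounce:", zero_bug_bounce(open_bugs))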
Shaun Bradshaw, Questcon Technologies, A Division of Howard Systems Intl.
PairWise Testing: A Best Practice that Isn't

James Bach, Satisfice Inc.

By evaluating software based on its form, structure, content, and documentation, static analysis lets you test the code within a program without actually executing it. Static analysis helps stop defects from entering the code stream in the first place, rather than waiting for the costly and time-consuming manual intervention of testing to find them. With real-world examples, Djenana Campara describes the mechanics of static analysis: when it should be used, where it can be executed most beneficially within your testing process, and how it works in different development scenarios. Find out how you can begin using code analysis to improve code security and reliability. A toy example of such analysis follows the bulleted objectives below.

  • The mechanics of automated static analysis
  • Static analysis for security and reliability testing
  • Integrating static analysis into the testing process
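As a minimal illustration of examining code without running it, the sketch below walks a Python syntax tree and flags two risky constructs; the rules are invented for illustration and are not the specific analysis the session describes.

    # Sketch: a toy static analyzer that inspects source without executing it.
    # The two rules (flag eval() calls and bare "except:") are illustrative only.
    import ast

    SOURCE = """
    def load(cfg):
        try:
            return eval(cfg)   # risky: evaluates arbitrary input
        except:
            return None
    """

    class RiskyConstructs(ast.NodeVisitor):
        def __init__(self):
            self.findings = []

        def visit_Call(self, node):
            if isinstance(node.func, ast.Name) and node.func.id == "eval":
                self.findings.append((node.lineno, "call to eval()"))
            self.generic_visit(node)

        def visit_ExceptHandler(self, node):
            if node.type is None:   # "except:" with no exception type
                self.findings.append((node.lineno, "bare except clause"))
            self.generic_visit(node)

    checker = RiskyConstructs()
    checker.visit(ast.parse(SOURCE))
    for line, message in checker.findings:
        print(f"line {line}: {message}")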
Djenana Campara
Inside the Masters' Mind: Describing the Tester's Art

Exploratory testing is both a craft and a science. It requires intuition and critical thinking. Traditional scripted test cases usually require much less practice and thinking, which is perhaps why, in comparison, exploratory testing is often seen as "sloppy," "random," and "unstructured." How, then, do so many software projects routinely rely on it as an approach for finding some of their most severe bugs? If one reason is that it lets testers use their intuition and skill, then we should study not only how that intuition and skill are exercised but also how they can be cultivated and taught to others, as in a martial art. Indeed, that is what has been happening for many years, but only recently have there been major discoveries about how an exploratory tester works and a new effort by exploratory testing practitioners and enthusiasts to create a shared vocabulary.

Jon Bach, Quardev Laboratories
Your Development and Testing Processes Are Defective

Verification at the end of a software development cycle is a very good thing. However, if verification routinely finds important defects, then something is wrong with your process. A process that allows defects to build up, only to be found and corrected later, is a process filled with waste. Processes that create long lists of defects are . . . defective processes. A quality process builds quality into the software at every step of development, so that defect tracking systems become obsolete and verification becomes a formality. Impossible? Not at all. Lean companies have learned how wasteful defects and queues can be, and they attack both with a zero-tolerance policy that creates outstanding levels of quality, speed, and low cost, all at the same time. Join Mary Poppendieck to learn how your organization can become leaner.

Mary Poppendieck, Poppendieck LLC
"Risk" Is a Tester's Favorite Four-Letter Word

Good project managers speak the language of risk. Their understanding of risk guides important decisions. Testers can contribute to an organization's decision-making ability by speaking that same language. During this session you will learn how to evaluate risk in both quantitative and qualitative ways; a brief scoring example follows the bulleted objectives below. Identifying risk is important, but managing risk is vital. Julie will discuss how to deal with the misunderstandings some managers have about risk-based testing, including the beliefs that all testing is already "risk-based," that risk-based testing is nothing more than prioritizing tests, that it is a one-time-only activity, that it is a waste of time, and that it will delay the project.

  • Five different but complementary approaches to risk evaluation
  • Vital areas to consider when choosing your risk-based approach
  • Misconceptions of management regarding risk-based testing
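One common quantitative scheme, offered here purely as an illustration and not necessarily one of the five approaches the session covers, scores each test area by likelihood and impact and ranks by their product; the areas and scores are made up.

    # Sketch: quantitative risk evaluation. Rank test areas by exposure,
    # where exposure = likelihood of failure x impact of failure (1-5 scales).
    # The areas and scores are made-up examples.
    areas = [
        ("payment processing", 3, 5),   # (area, likelihood, impact)
        ("report formatting",  4, 2),
        ("user login",         2, 5),
        ("help pages",         2, 1),
    ]

    for name, likelihood, impact in sorted(areas, key=lambda a: a[1] * a[2], reverse=True):
        print(f"{name:20s} exposure = {likelihood * impact}")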
Julie Gardiner, QST Consultants Ltd.
Testing Inside the Box

These days, we hear a lot about unit testing, testing for programmers, test-first programming, and the like. Design techniques for such tests, and for improving system testing, are often called white box test designs. Join Rex Black as he explains the basics of white box testing and compares white box testing with other types of testing. Find out how the metaphor of "boxes" can both inform and confuse us. Rex discusses basis path testing, including the cyclomatic number as a measure of complexity and as a way to determine the number of tests necessary to cover all paths. He walks
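As a minimal sketch of the cyclomatic number mentioned above: for a control-flow graph with E edges, N nodes, and P connected components, V(G) = E - N + 2P, which also bounds the number of basis paths to test. The tiny graph below, for a function with a single if/else, is a made-up example.

    # Sketch: cyclomatic complexity V(G) = E - N + 2P of a control-flow graph.
    # Made-up graph for a function with one if/else branch.
    edges = [
        ("entry", "cond"),
        ("cond", "then"),
        ("cond", "else"),
        ("then", "exit"),
        ("else", "exit"),
    ]
    nodes = {n for edge in edges for n in edge}

    E, N, P = len(edges), len(nodes), 1   # one connected component
    print(f"V(G) = {E} - {N} + 2*{P} = {E - N + 2 * P}")   # 2 basis paths to cover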

Rex Black, Rex Black Consulting
It's Too Darn Big: Test Techniques for Gigantic Systems

Structuring test designs and prioritizing your test effort for large and complex software systems are daunting tasks, ones that have beaten many very good test engineers. Add concurrency issues and a distributed system architecture to the mix, and some would simply throw up their hands. At Microsoft, where Keith Stobie plies his trade, that is not an option. Keith and others have reengineered their testing, employing dependency analysis for test design, model property static checking, "all pairs" configuration testing, robust unit testing, and more. They use coverage to help select and prioritize tests and make effective use of random testing, including fuzz testing for security. Finally, models of their systems help them generate good stochastic tests and act as test oracles for automation. A brief sketch of "all pairs" selection follows the checklist item below.

  • Test checklists for large, complex, distributed systems
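To show the idea behind "all pairs" configuration testing, here is a minimal greedy sketch with made-up parameters; it is only an illustration that every pair of parameter values can be covered with far fewer runs than the full cross product, not the tooling the session describes.

    # Sketch: greedy "all pairs" configuration selection.
    # The parameters and values are made-up examples.
    from itertools import combinations, product

    params = {
        "os":      ["Windows", "Linux", "Mac"],
        "browser": ["IE", "Firefox"],
        "locale":  ["en", "de", "ja"],
    }
    names = list(params)

    def pairs_of(cfg):
        """All value pairs (across two parameters) that one configuration covers."""
        return {frozenset([(a, cfg[a]), (b, cfg[b])])
                for a, b in combinations(names, 2)}

    all_configs = [dict(zip(names, values)) for values in product(*params.values())]
    uncovered = set().union(*(pairs_of(cfg) for cfg in all_configs))

    chosen = []
    while uncovered:
        # Pick the configuration covering the most still-uncovered pairs.
        best = max(all_configs, key=lambda cfg: len(pairs_of(cfg) & uncovered))
        chosen.append(best)
        uncovered -= pairs_of(best)

    print(f"{len(chosen)} configurations instead of {len(all_configs)}:")
    for cfg in chosen:
        print(cfg)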
Keith Stobie, Microsoft Corporation
Using Personas to Improve Testing

Too often testers are thrown into the testing process without direct knowledge of the customers' behaviors and business processes. As a tester, you need to think and act like a customer to make sure the software does what the customer expects, in an easy-to-use way. By defining personas and using them to model the way real customers will use the software, you can bring the complete customer view into designing test cases. Get the basics of how to implement customer personas, their limitations, and ways to create tests using them. See examples of good bugs found using personas while learning to write bug reports based on them. A short persona-driven test sketch follows the bulleted objectives below.

  • What you need to know to develop customer personas
  • Use customer personas for designing test cases
  • The types of bugs found by using personas but missed by other techniques
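A minimal sketch of persona-driven test design, assuming pytest; the personas and the checkout() stand-in for the feature under test are invented for illustration.

    # Sketch: personas as pytest parameters (personas and checkout() are invented).
    import pytest

    PERSONAS = [
        {"name": "first_timer", "items": 1},
        {"name": "power_user",  "items": 40},
        {"name": "bulk_buyer",  "items": 500},
    ]

    def checkout(item_count):
        """Stand-in for the feature under test."""
        if item_count <= 0:
            raise ValueError("cart is empty")
        return {"items": item_count, "status": "confirmed"}

    @pytest.mark.parametrize("persona", PERSONAS, ids=lambda p: p["name"])
    def test_checkout_handles_each_persona(persona):
        # Model the order volume each persona is likely to push through checkout.
        order = checkout(persona["items"])
        assert order["status"] == "confirmed"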
Robyn Edgar, Microsoft
STARWEST 2004: Testing Dialogues - Technical Issues

Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Elisabeth Hendrickson for "Testing Dialogues - Technical Issues." Practice the power of group problem solving and develop novel approaches to your big problems. This double session takes on technical issues such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing Web services, and career development. You name it! You will share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real time. Discussions are structured in a framework so that participants will receive a summary of their work product after the conference.

Facilitated by Esther Derby and Elisabeth Hendrickson
The Four Schools of Software Testing

Testing experts often disagree. Why? Different testers have different understandings of the role and mission of software testing. This session presents four schools of software testing, each with a different understanding of the purpose and foundation of testing. One school sees testing based on mathematics. Another sees it as an activity that needs to be planned and managed. A third sees it as a basis for understanding and improving software process. And the fourth sees it as an intelligence service, providing actionable information. These all sound reasonable enough, but each has provided the foundation for a school of testing and different hierarchies of values. Learn more about the four schools of software testing and the effects they have on your life. You may find that you, your colleagues, and management are operating in different schools.

Bret Pettichord, ThoughtWorks
