Charters help you guide and focus exploratory testing. Well-formed charters help testers find defects that matter and provide vital information to stakeholders about the quality and state of the software under test. Rob Sabourin shares his experiences defining different exploratory testing charters for a diverse group of test projects. For example, reconnaissance charters focus on discovering application features, functions, and capabilities; failure mode charters explore what happens to applications when something goes wrong. In addition, you can base charters on what systems do for users, what users do with systems, or simply the requirements, design, or code. Rob reviews the key elements of a well-formed testing charter: its mission, purpose, focus, understanding, and scope. Learn how to evolve a test idea into an exploratory charter using examples from systems testing, Scrum story testing, and developer unit testing.
Manual testing is the best way to find the bugs most likely to bite users badly after a product ships. However, manual testing remains a very ad hoc, aimless process. At a number of companies across the globe, groups of test innovators gathered in think tank settings to create a better way to do manual testing—a way that is more prescriptive, repeatable, and capable of finding the highest quality bugs. The result is a new methodology for exploratory testing based on the concept of tours through the application under test. In short, tours represent a more purposeful way to plan and execute exploratory tests. James Whittaker describes the tourist metaphor for this novel approach and demonstrates tours taken by test teams from various companies including Microsoft and Google. He presents results from numerous projects where the tours were used in critical-path production environments.
What do you say when your manager asks, "How did it go today?" As a test manager, you might say, "I'll check to see how many test cases the team executed today." As a tester with a pile of test cases on your desk, you could say, "I ran 40 percent of these tests today," or "At the rate I'm going, I'll be finished with these test cases in 40 days." However, if you're using exploration as part of your testing approach, it might be terrifying to try to give a status report, especially if some project stakeholders think exploratory testing is irresponsible and reckless compared to test cases. So how can you retain the power and freedom of exploration and still give a report that earns your team credibility, respect, and perhaps more autonomy? Jon Bach offers ways for you to explain the critical and creative thinking that makes exploratory testing so powerful.
Whether we develop software-based systems to create invoices, solve difficult physics problems, diagnose heart disease, or launch rockets, we've learned that nothing stays the same for very long and software defects are inevitable. However, one thing has remained constant: the role and value of testing have been misunderstood by many in senior management. A Lockheed Martin Fellow since 2005, Tom Wissink describes steps undertaken at Lockheed Martin to change this culture of misunderstanding into a culture of appreciation, satisfaction, and excitement. Tom's experience has convinced him that this change is not just theoretical but both possible and rewarding. In a few organizations, both large and small, it has produced dramatic results, including greater tester satisfaction, increased company profits, and improved software quality, often delivered on time and within budget.
Session-based exploratory testing is an effective means to test when time is short and requirements are not clearly defined. Is it advisable to use session-based exploratory testing when the requirements are known and documented? How about when the test cases are already defined? What if half of the test team is unfamiliar with the software under test? The answers are yes, yes, and yes. Brenda Lee explains how her team modified the session-based exploratory testing approach to include requirements and test cases as part of its charter. In one instance, during a short seven-day test window, the team validated 41 of 45 requirements, executed more than 200 test cases using 17 charters, and identified 15 new, significant issues. The team was able to present a high-level test summary to the customer only two days after the conclusion of system test. What did the customer say?
Interested in exploratory testing and its use on rich Internet applications, the new interactive side of the Web? Erik Petersen searched the Web to find some interesting and diverse systems to test using exploratory testing techniques. Watch Erik as he goes on a testing exploration in real time with volunteers from the audience. He demonstrates and discusses the testing approaches he uses every day, from the purely exploratory to more structured approaches suitable for teams. You'll be amazed, astounded, and probably confounded by some of Erik's demonstrations. Along the way, you'll learn a lot about exploratory testing and have some fun as well. Your mission, should you choose to accept it, is to try out your testing skills on the snappiest rich Internet applications the Web has to offer.
Software created in regulated industries such as medical devices must be developed and tested according to agency-imposed process standards. Every requirement must be tested, and every risk must be mitigated. Could defects still lurk in software wrung out by such an in-depth process? Unfortunately, yes. In fact, software defects are a major cause of medical device recalls each year. However, by supplementing mandated requirements-based verification with session-based exploratory testing (SBT), the overall quality of mission-critical software can be significantly improved. Based on eight studies, David James describes how to fit targeted exploratory testing into a regulated process. Specifically, David has found that defect discovery is twenty times less expensive through SBT than through formal verification. Applying SBT early,
Mind maps were developed in the late 1960s by Tony Buzan as a way of helping students take notes using only key words and images. Mind maps are quick to record and, because of their visual approach, much easier to remember and review. Samuli Lahnamäki describes how mind mapping can be used as a logging tool for exploratory testing and what information can later be derived from the testing maps. A pair of testers, one performing exploratory testing while the other records their journey with a mind map, is an effective documentation style. One concern with exploratory testing has always been its lack of a testing trail; mind maps provide documentation that can be converted to a formal test script if required. Discover how mind maps can be an effective documentation tool in exploratory testing.
You know the story: Marketing wants more features, faster release cycles, and release dates that do not slip. Customers want new functions and software that does not break. Testers and developers want to release high-quality software with limited resources. Management wants good information to make ship/don't-ship decisions. What if, facing all of these wants, you could reduce testing time by up to 50 percent and release better code, as evidenced by fewer, lower-severity defects after release? George Bliss shows you how a switch from traditional script-based testing to session-based exploratory testing, along with agile development practices and more automation, achieved those results. With session-based exploratory testing, his team delivered real-time status updates to management and helped make the quality of software everyone's business.
For exploratory testing to be perceived as a valuable process by all stakeholders in the organization, we need to make sure the result of that testing, our documentation, is presented with the same professionalism and attention to detail that distinguishes an artistic masterpiece from a paint-by-number kit. David Gilbert discusses the practical steps testers can take to improve the perceived value of exploratory testing in their organizations. He explains how we can apply a consistent, professional, and structured methodology to our exploratory testing and employ processes that will consistently create the level of detailed output that is the hallmark of any investigative analysis. Finally, David tells us how to better communicate the value of exploratory tests and document both the process and results of exploration in a way that stakeholders will understand.
David Gilbert, Sirius Software Quality Associates, Inc.