Conference Presentations

STAREAST 2002

Writing Better Defect Reports

Why is it that some testers get a better response from developers than others? Part of the answer lies in their defect reports, and following a few simple guidelines can smooth the way to a much more productive environment. The objective shouldn't be to write the perfect defect report, but to write an effective one that conveys the proper message, gets the job done, and simplifies the process for everyone. It's important to use the report to ask and answer the right questions. Kelly Whitmill gives you a quick mental inspection checklist you can reference each time you write a defect report. You'll walk away with information that can make a significant difference the day you get back to work, on a topic that's often overlooked in the industry.

Kelly Whitmill, IBM

Finding Firmware Defects

Embedded systems software presents a different breed of challenges to the test professional than other types of applications do. Hardware interfaces, interrupts, timing considerations, resource constraints, and error handling often pose problems that aren't well suited to many traditional testing techniques. This presentation discusses some of these problems, along with the techniques and strategies that are most effective at finding software bugs in embedded systems code.

Sean Beatty, High Impact Services, Inc.

Robust Design Method for Software Testing

This session presents a robust design method based on the Taguchi Approach. A new and powerful way to improve reliability and productivity, this method has been applied in diverse areas such as network optimization, audio and video compression, error correction, engine control, safety systems, calibration, and operating system optimization. Learn the basics of the robust design method for software testing, and experience the principles through case studies like Unix system performance tuning.
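
A core building block of the robust design method is the orthogonal array, which exercises combinations of factors in far fewer runs than exhaustive testing would require. The Python sketch below is a minimal illustration, not drawn from the session's case studies: the standard L4(2^3) orthogonal array covers every pairwise combination of three hypothetical two-level test parameters in four runs instead of eight.

    # The L4(2^3) orthogonal array: each pair of columns contains every
    # combination of levels (1, 2) exactly once.
    L4 = [
        (1, 1, 1),
        (1, 2, 2),
        (2, 1, 2),
        (2, 2, 1),
    ]

    # Hypothetical test parameters; any three two-level factors would do.
    factors = {
        "os":      {1: "Linux",  2: "Solaris"},
        "memory":  {1: "256MB",  2: "1GB"},
        "network": {1: "10Mbps", 2: "100Mbps"},
    }

    for run, levels in enumerate(L4, start=1):
        settings = {name: factors[name][level]
                    for name, level in zip(factors, levels)}
        print(f"Run {run}: {settings}")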

Madhav Phadke, Phadke Associates

Automated Testing Framework for Embedded Systems

Is it possible to use an "open architecture" automation test tool to avoid the pitfalls of testing in the embedded, real-time world? It is now. In this session, Michael Jacobson presents an architecture that lets existing testing tools be connected as components in an automated testing framework for embedded systems, using network communications. He shows you how an existing testing tool can become a server with just a couple of lines of code. You'll even learn how each component can be changed and tested without requiring an update to the rest of the components, as long as the interface communication is maintained.
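
As a rough illustration of the tools-as-servers idea, here is a minimal Python sketch; it assumes a hypothetical command-line tool named embedded_test_tool and sketches the general approach rather than Jacobson's actual framework. It wraps the tool in an XML-RPC server so other framework components can invoke it over the network.

    import subprocess
    from xmlrpc.server import SimpleXMLRPCServer

    def run_test(test_name):
        # 'embedded_test_tool' is a placeholder for an existing tool's
        # command-line interface.
        result = subprocess.run(["embedded_test_tool", test_name],
                                capture_output=True, text=True)
        return {"rc": result.returncode, "output": result.stdout}

    # Any component that speaks XML-RPC can now drive this tool remotely;
    # the tool itself is unchanged.
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(run_test)
    server.serve_forever()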

Michael Jacobson, Northrop Grumman Corporation

What's That Supposed to Do? The Archeology of Legacy Systems

In testing utopia, all software products submitted for testing come with thorough, comprehensive documentation describing how every program function should work. On planet Earth, however, test engineers usually have to make do under less-than-ideal circumstances. It's not uncommon for test engineers to be asked to verify the functionality of a critical legacy system that has no documented requirements whatsoever. While there are many reasons this can happen, the result is the same: you assume the role of an archeologist, sifting through layers of clues to reconstruct the specifications. Patricia Ensworth gives you instructions and tools so you'll be ready to roll up your sleeves and dig.

Patricia Ensworth, Moody's Investors Service

Proactive User Acceptance Testing

User Acceptance Testing (UAT) tends to take a lot of effort, yet still often fails to find what it should. Rather than being an afterthought subset of system test, effective UAT needs to be systematically planned and designed independently of technical testing. In this session, Robin Goldsmith shows how going from reactive to proactive UAT can make users more confident, cooperative, and competent acceptance testers.

Robin Goldsmith, Go Pro Management, Inc.

The Context-Driven Approach to Software Testing

Several jokes about consultants revolve around the idea that they answer most questions by saying "It depends." The context-driven school of testing accepts the "It depends" reality but then asks, "Depends on what?" Rather than talking about best practices, this approach asks when and why a given practice would be beneficial; what risks and benefits are associated with it; what skills, documents, development processes, and other resources are required to enable the practice; and so on. Rather than dismissing an unpopular testing technique or test documentation method as useless, you should ask these questions to determine its possible uses. The appropriate context might be narrow, but you'll learn a lot more about the technique and its alternatives by becoming aware of the context variables rather than ignoring them.

Cem Kaner, Florida Institute of Technology

Using Metrics to Govern Outsourced Applications

Outsourcing arrangements are established on the basis of a contractual partnership, with both parties having a vested interest in the success of the relationship. The outsourcing provider and the customer can view success differently, however, which makes objective, quantifiable service-level metrics instrumental to the success of the contract. Learn how to identify and develop the service-level metrics required to support both business and technical deliverables.

Eric Buel, Eric Buel and Associates, Inc.

Basis Path Testing for Structural and Integration Testing

Basis path testing is a structural testing technique that identifies test cases based on the flows, or logical paths, that can be taken through the software. A basis path is a unique path through the software in which no iterations are allowed; basis paths are atomic-level paths, and all possible paths through the system are linear combinations of them. Basis path testing uses the cyclomatic complexity metric, which measures the complexity of a source code unit by examining its control flow structure. Basis path testing can also be applied to integration testing when software units or components are integrated. You'll see how the technique quantifies the integration effort involved as well as the design-level complexity.
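
To make the metric concrete, here is a minimal Python sketch using McCabe's formula V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components in the control flow graph; V(G) equals the number of basis paths, and therefore the minimum number of test cases for basis path coverage. The example graph is hypothetical.

    def cyclomatic_complexity(edges, num_components=1):
        # V(G) = E - N + 2P for a control flow graph given as edge pairs.
        nodes = {n for edge in edges for n in edge}
        return len(edges) - len(nodes) + 2 * num_components

    # Control flow graph for a unit with one if/else and one loop back-edge.
    cfg = [
        ("entry", "A"), ("A", "B"), ("A", "C"),   # if/else branch at A
        ("B", "D"), ("C", "D"),                   # branches rejoin at D
        ("D", "A"), ("D", "exit"),                # loop back-edge and exit
    ]
    print(cyclomatic_complexity(cfg))  # 7 - 6 + 2 = 3 basis paths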

Theresa Hunt, The Westfall Team

Calculating the Return on Investment of Testing

While revenues, cash flow, and earnings are vital statistics of a company's well-being, they're by-products of what the company actually offers as a product or service. If the offering doesn't produce ROI for the customer, it doesn't represent a viable business opportunity. In this session, take a look at testing from the perspective that it's a service provided to your company. Since testing impacts not only your company but also your company's customers, you, as a tester, must provide and prove ROI to succeed in a business environment. The ability to discuss, define, manage, and demonstrate the ROI of testing is an invaluable skill. This session gives you the information and tools you need to define and demonstrate models of testing ROI, then translate them into upper management's terms.
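
One simple way to model testing ROI, assumed here for illustration rather than taken from the presenters' models, is to compare the cost of testing against the cost it avoids, since a defect caught in test is cheaper to fix than one found in the field. A minimal Python sketch with hypothetical figures:

    def testing_roi(defects_found, field_fix_cost, internal_fix_cost, testing_cost):
        # Cost avoided: each defect caught in test would have cost more
        # to fix after release.
        cost_avoided = defects_found * (field_fix_cost - internal_fix_cost)
        return (cost_avoided - testing_cost) / testing_cost

    # Hypothetical figures: 150 defects caught, $4,000 per field fix,
    # $400 per internal fix, $250,000 spent on testing.
    roi = testing_roi(150, 4_000, 400, 250_000)
    print(f"Testing ROI: {roi:.0%}")  # (540,000 - 250,000) / 250,000 = 116%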

James Bampos, VeriTest Inc./Lionbridge Technologies, and Eric Patel
