Analysis
Conference Presentations
SM/ASM 2003: The Power of Retrospectives
This paper explains the differences between post-project reviews and retrospectives and presents three detailed retrospective case studies.
Esther Derby, Esther Derby Associates Inc

Bulletproof Your Review Program!
In this article, the author explains how to bulletproof your review program by avoiding the traps that typically kill technical review programs. She also details four common reasons why review programs fail and what you can do to implement one successfully for your team.
Esther Derby, Esther Derby Associates Inc

Introduce and Sustain a Worldwide Software Inspection Process
In this presentation you will discover the benefits that come with implementing inspections, the steps to roll out an effective inspection process, and how to anticipate the problems that can be
Marc Rene, GTECH Corporation

Traps That Can Kill a Review Program (And How to Avoid Them)
Technical reviews have been around for a long time, and they are generally recognized as a "good thing" for building quality software and reducing the cost of rework. Yet many software companies start doing reviews only to have the review program falter. So the question remains: how can you succeed with a review program? Management support and good training for review leaders are a good place to start, but it is the details of implementation that truly determine whether reviews will stick or fall by the wayside. Esther Derby offers insights based on her observations of both successful and failed review programs.
Esther Derby, Esther Derby Associates Inc

Compressing Test Execution Time to a 24-Hour Cycle
Software development projects face tighter schedules, more complex environments, and increasing time-to-market pressure. Thomas Poirier presents a composite case study that explores how frequently encountered situations can severely lengthen the Test Execution Cycle (TEC). Learn strategies and tactics to shorten the TEC to within 24 hours without sacrificing test coverage.
Thomas Poirier, Conduciv Inc.

Validation and Component-Based Development
Component-based development is the practice of constructing software applications from new or existing encapsulated, language-independent modules. In this presentation, David Wood details a case study on using opaque-box testing, combined with code coverage and pre-/post-conditions, to deliver validated software components. Learn about component-based development and how to apply it to your projects.
Rob Harris, Harris Corporation, and David Wood, Applied Object Engineering

STAREAST 2000: A Risk-Based Test Strategy
Testing of information systems should be based on the business risks those systems pose to the organization that uses them. In practice, test managers often take an intuitive approach to covering risks with tests. In this double-track presentation, discover how a "stepwise" definition of test strategy can be applied to any test level as well as to the overall strategy, providing better insight and a sound basis for negotiating the depth of testing.
Ingrid Ottevanger, IQUIP Informatica

Trimming the Test Suite: Using Coverage Analysis to Minimize Re-Testing
Coverage Analysis System (CAS) data is useful for determining whether enough tests have been written and for identifying C-code lines that have no test coverage. In this presentation, Jim Boone explores several methods that use CAS data to determine the best set of automated tests to execute for a corrected defect. Learn the strengths and weaknesses of each method and the best stage at which to use it.
Jim Boone, SAS Institute, Inc.

STAREAST 2000: How Testers Can Contribute to Reviews
Brian Lawrence begins his presentation with a brief overview of what a review is and how it works in software organizations. Even testers who do not understand source code can contribute considerable value in reviews. Learn how devising tests as a review-preparation technique can identify potential defects and serve as a basis for test planning and design.
Brian Lawrence, Coyote Valley Software

Interpreting Graphical Defect Trend Data
Evaluating graphical defect trend data can dramatically improve your ability to predict current project quality and schedule-milestone compliance, and it provides historical data for scheduling test and development work on later revisions. In this presentation (winner of the Best Presentation award at ASM '99), Jim Olsen explores some of the complexities of analyzing defect trend graphs. Learn how much time it takes to establish a trend, when to start collecting data, what types of data to track, and how to estimate the amplitude of defect oscillations at the end of the product cycle.
Jim Olsen, Novell, Inc.