Conference Presentations

Testing: The Big Picture

If all testers put all their many skills in a pot, surely everyone would come away with something new to try out. Every tester can learn something from other testers. But can a tester learn something from a ski instructor? There is much to gain by examining and sharing industry best practices, but often much more can be gained by looking at problem-solving techniques from beyond the boundaries of the Testing/QA department. In a series of analogies, Brian Bryson covers the critical success factors for organizations challenged with the development and deployment of quality software applications. He draws on strategies and lessons from within and beyond the QA industry to provide you with a new perspective on addressing the challenges of quality assurance.

Brian Bryson, IBM Rational Software
STAREAST 2006: Apprenticeships: A Forgotten Concept in Testing

The system of apprenticeship was first developed in the late Middle Ages. The uneducated and inexperienced were employed by a master craftsman in exchange for formal training in a particular craft. So why does apprenticeship seldom happen within software testing? Do we subconsciously believe that just about anyone can test software? Join Lloyd Roden and discover what apprenticeship training is and, even more importantly, what it is not. Learn how this practice can be easily adapted to suit software testing. Find out about the advantages and disadvantages of several apprenticeship models: Chief Tester, Hierarchical, Buddy, and Coterie. With personal experiences to share, Lloyd shows how projects will benefit immediately from the rebirth of the apprenticeship system in your test team.

  • Four apprenticeship models that can apply to software testers
  • Measures of the benefits and return on investment of apprenticeships
Lloyd Roden, Grove Consultants
STAREAST 2006: Testing Dialogues - Technical Issues

Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Johanna Rothman for "Testing Dialogues - Technical Issues." Practice the power of group problem solving and develop novel approaches to solving your big problem. This double-track session takes on technical issues, such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing Web services, and career development. You name it! Share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.

Facilitated by Esther Derby and Johanna Rothman
Testing Windows Registry Entries

Warning: Registry keys may be hazardous to your program's health! Registry key entries in Windows applications, visible or hidden, are often neglected by testers. A registry key entry is a program feature just like any other application function and as such needs to be validated. Michael Stahl describes why registry keys should be accorded special attention during testing and proposes a strategy for mitigating risks posed by incorrect registry key entries. He suggests a test strategy, as well as coding standards for input value and type validation, default values, regeneration, and naming rules. Michael demonstrates the use of correct and incorrect registry keys in common commercial applications.
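
To make this kind of check concrete, here is a minimal sketch in Python (using the standard winreg module, Windows only) of reading a registry value, validating its type and range, and falling back to a documented default when the entry is missing or malformed. The key path, value name, and bounds are hypothetical, not taken from the talk.

    import winreg

    KEY_PATH = r"Software\ExampleApp\Settings"   # hypothetical application key
    VALUE_NAME = "CacheSizeMB"                   # hypothetical value under test
    DEFAULT = 64                                 # documented default the application should regenerate

    def read_cache_size():
        """Return the configured cache size, or the default if the entry is absent or invalid."""
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
                value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
        except FileNotFoundError:
            return DEFAULT                       # missing key or value: fall back rather than crash
        if value_type != winreg.REG_DWORD:
            return DEFAULT                       # wrong type (e.g., a REG_SZ string) is treated as invalid
        if not 1 <= value <= 1024:
            return DEFAULT                       # out-of-range values are rejected
        return value

    if __name__ == "__main__":
        print(read_cache_size())

A test strategy along these lines would exercise each branch: the value present and valid, the value missing, the value of the wrong type, and the value out of range.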

Michael Stahl, Intel Corporation
Open Source Test Automation Frameworks

Open source software has come a long way in the past few years. However, for automated testing there still are not many ready-made solutions, so testers often must spend their time building a test automation framework rather than working on test cases. Allen Hutchison describes the elements of an automated test framework and demonstrates a framework that you can quickly assemble from several open source software tools. He then explains how to put the pieces together with a scripting language such as Perl. Once you build the framework, you can improve and reuse it in future test projects. At the end of the presentation, Google will release the described framework as a new open source project that you can begin using immediately.
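
This is not the framework Allen describes or the project Google released; purely as an illustration, and in Python rather than Perl, here is a minimal sketch of the glue layer such a framework provides: discover executable test scripts, run each one, record exit codes, and print a summary. The tests directory layout and test_*.py naming convention are assumptions.

    import pathlib
    import subprocess
    import sys

    def run_suite(test_dir="tests"):
        """Run every test script in test_dir and report a simple pass/fail summary."""
        results = {}
        for script in sorted(pathlib.Path(test_dir).glob("test_*.py")):
            proc = subprocess.run([sys.executable, str(script)],
                                  capture_output=True, text=True)
            results[script.name] = (proc.returncode == 0)   # exit code 0 means the script passed
            if proc.returncode != 0:
                print(f"FAIL {script.name}\n{proc.stderr}")
        print(f"{sum(results.values())}/{len(results)} test scripts passed")
        return all(results.values())

    if __name__ == "__main__":
        sys.exit(0 if run_suite() else 1)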

Allen Hutchison, Google
Test Driven Development - It's Not Just for Unit Testing

Test-driven development (TDD) is a new approach to software construction in which developers write automated unit tests before writing the code. These automated tests are always rerun after any code changes. Proponents assert that TDD delivers software that is easier to maintain and of higher quality than software built with traditional development approaches. Based on experiences gained from real-world projects employing TDD, Peter Zimmerer shares his view of TDD's advantages and disadvantages and how the TDD concept can be extended to all levels of testing. Learn how to use TDD practices that support preventive testing throughout development and result in new levels of cooperation between developers and testers. Take away practical approaches and hints for introducing and practicing test-driven development in your organization.
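
As a minimal illustration of this rhythm (not an example from Peter's talk), the sketch below shows unit tests written first for a hypothetical shopping-cart class, with just enough production code added to make them pass; under TDD the whole suite is rerun after every change.

    import unittest

    class Cart:
        """Production code written only after the tests below existed and failed."""
        def __init__(self):
            self._items = []

        def add(self, price):
            self._items.append(price)

        def total(self):
            return sum(self._items)

    class CartTest(unittest.TestCase):
        def test_total_of_empty_cart_is_zero(self):
            self.assertEqual(Cart().total(), 0)

        def test_total_sums_item_prices(self):
            cart = Cart()
            cart.add(5)
            cart.add(7)
            self.assertEqual(cart.total(), 12)

    if __name__ == "__main__":
        unittest.main()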

Peter Zimmerer, Siemens
The Venerable Triangle Redux

Jerry Weinberg's venerable triangle problem has been around since 1966 and was popularized in Glenford Myers' book The Art of Software Testing. To assess a tester's effectiveness, many software companies have used the triangle problem as an interview question. But past studies indicate that testers' effectiveness at solving the problem is relatively low. Recent studies by noted experts indicate that a significant number of testers in the industry lack formal training in software testing techniques. Over a three-year period, Microsoft conducted several experiments to accurately quantify the effectiveness of testers with different levels of experience and skill.
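
For readers who have not met the exercise: the triangle problem asks for test cases for a program that reads three side lengths and reports whether they form an equilateral, isosceles, or scalene triangle. A rough Python sketch of such a program, with a few of the boundary cases a strong answer covers, might look like this (the implementation is illustrative, not the one used in the Microsoft experiments):

    def classify_triangle(a, b, c):
        """Classify three side lengths, or report that they do not form a triangle."""
        if min(a, b, c) <= 0:
            return "not a triangle"              # zero or negative side
        if a + b <= c or a + c <= b or b + c <= a:
            return "not a triangle"              # violates the triangle inequality
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    # A few of the boundary cases good testers enumerate:
    assert classify_triangle(3, 3, 3) == "equilateral"
    assert classify_triangle(3, 3, 5) == "isosceles"
    assert classify_triangle(3, 4, 5) == "scalene"
    assert classify_triangle(1, 2, 3) == "not a triangle"   # degenerate: a + b == c
    assert classify_triangle(0, 4, 5) == "not a triangle"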

William Rollison, Microsoft Corporation
Risk: The Tester's Favorite Four-Letter Word

Identifying risk is important, but managing risk is vital. Good project managers speak the language of risk, and their understanding of risk guides important decisions. Testers can contribute to an organization's decision-making ability by speaking that same language. Learn from Julie Gardiner how to evaluate risk in both quantitative and qualitative ways. Julie will discuss how to deal with some of the misconceptions managers have about risk-based testing, including the beliefs that testing is always risk-based, that risk-based testing is nothing more than prioritizing tests, that it is a one-time-only activity, that it is a waste of time, and that it will delay the project.
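
One common way to picture the quantitative side is a simple risk-exposure ranking: score each product area for likelihood and impact of failure, multiply to get exposure, and spend test effort on the highest exposures first. The sketch below uses invented areas and 1-to-5 scales; it illustrates the general technique, not Julie's specific method.

    # Hypothetical product areas scored on 1-5 scales for likelihood and impact of failure.
    areas = {
        "payment processing": (3, 5),
        "report export": (4, 2),
        "user preferences": (2, 2),
        "login": (2, 5),
    }

    # Exposure = likelihood x impact; rank areas so test effort follows the biggest risks.
    ranked = sorted(areas.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    for area, (likelihood, impact) in ranked:
        print(f"{area:20s} exposure = {likelihood * impact}")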

Julie Gardiner, QST Consultants Ltd.
It's 2005, Why Does Software Still Stink?

We've now been writing software for an entire human generation. Yet software is arguably the least reliable product ever produced. People expect software to fail, and our industry has developed a well-deserved and widely accepted reputation for its inability to deliver quality products. James Whittaker explores the history of software development over the last generation to find out why. He uncovers several attempts to solve the problem and exposes their fatal flaws. James then looks forward to a world without software bugs and offers a roadmap for how to get there from here: practical techniques that can be implemented today. Join James on this journey through the past and into the future, and be sure to bring something to scrape the bugs off your windshield.

James Whittaker, Florida Institute of Technology
Choosing the Best of the Plan-Driven and Agile Development Methods

We seem to be under a curse in our profession. Although not cast by a witch or a wizard, the curse affects us just the same. It is the curse of "either/or," the curse that we must choose either "this" or "that" but we cannot choose parts of both. Nowhere is this more evident than in today's struggle between the adherents of the traditional "plan-driven" and newer "Agile" approaches to software development. What most overlook is that both groups want to achieve exactly the same goal: quality software that meets customer needs within the constraints of time, budget, staff, and technology. They differ only on the strategies to achieve this goal. For example, both groups agree that system requirements must be understood; their differences lie in questions of "how much of what to do and when to do it." Lee Copeland offers insights and suggestions on the methods and approaches that will be most valued on your project: control vs.

Lee Copeland, Software Quality Engineering
