
Conference Presentations

STAREAST 2010: Testing AJAX: What Does It Take?

Using AJAX technologies, Web 2.0 applications execute much of the application functionality directly in the browser. While creating a richer user experience, these technologies pose significant new challenges for testers. Joachim Herschmann describes the factors that are critical in testing Web 2.0 applications and what it takes to master these challenges. After presenting an overview of typical Web 2.0 application technologies, Joachim explains why object recognition, synchronization, and speed are the pillars of a truly robust and reliable AJAX test automation approach. He shows how to architect testability directly into AJAX applications, including examples of how to instrument applications to provide the data that testing tools require. Joachim shares his experiences from Micro Focus's Linz development lab and describes how its teams overcame the challenges of testing their modern AJAX applications.

Joachim Herschmann, Borland (a Micro Focus company)
Using Test Automation Frameworks

As you embark on implementing or improving automation within your testing process, you'll want to avoid the "Just Do It" attitude some have taken. Perhaps you've heard the term "test automation framework" and wondered what it means, what it does for testing, and if you need one. Andrew Pollner, who has developed automated testing frameworks for more than fifteen years, outlines how frameworks have grown up around test automation tools. Regardless of which automation tool you use, the concepts of a framework are similar. Andrew answers many of your questions: Why build a framework? What benefit does it provide? What does it cost to build a framework? What ROI can I expect when using a framework? Explore the different approaches to framework development and identify problems to watch out for to ensure the approach you take will provide years of productivity.
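One common framework style the talk's questions apply to is keyword-driven testing, where test steps are data and the framework dispatches each keyword to an action. The sketch below is a minimal, tool-agnostic illustration of that idea; all names in it are hypothetical, not from any specific framework.

```python
# Minimal keyword-driven framework sketch: keywords map to actions,
# so tests are written as data and survive tool changes better than raw scripts.
ACTIONS = {}

def keyword(name):
    """Register a function as the handler for a test-step keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@keyword("open")
def open_page(state, url):
    state["page"] = url            # stand-in for driving a browser

@keyword("check_page")
def check_page(state, expected):
    assert state["page"] == expected

def run(steps):
    """Execute a test expressed as (keyword, argument) pairs."""
    state = {}
    for name, arg in steps:
        ACTIONS[name](state, arg)  # dispatch keyword to its registered action
    return state

run([("open", "https://example.com"), ("check_page", "https://example.com")])
```

Because the test itself is just a list of pairs, non-programmers can author steps while the action implementations stay in one maintainable place.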

Andrew Pollner, ALP International Corp
Patterns of Testability

Testability requires interfaces for observing and controlling software, either built into the software itself or provided by the software ecosystem. Observability exposes the input and output data of components, as well as monitoring execution flow. Controllability provides the ability to change data and drive actions through the component interface. Without testability interfaces, defects are harder to find, reproduce, and fix. Manual testing can be improved by access to information these interfaces provide, while all automated testing requires them. Alan Myrvold shares software component diagrams that show patterns of testability. These patterns will help you architect and evaluate the observability and controllability of your system. Apply these testability patterns to describe and document your own testability interfaces.
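As a rough illustration of the observability/controllability pattern the abstract describes, the hypothetical component below exposes its outputs to tests and accepts a fault-injection control; the names are invented for this sketch, not taken from the talk.

```python
from dataclasses import dataclass, field

@dataclass
class OrderProcessor:
    """A component with a built-in testability interface (hypothetical)."""
    processed: list = field(default_factory=list)  # observability: output data is exposed
    fail_next: bool = False                        # controllability: tests can inject a fault

    def process(self, order_id: str) -> bool:
        if self.fail_next:                         # fault-injection hook for tests
            self.fail_next = False
            return False
        self.processed.append(order_id)            # execution flow is observable afterwards
        return True

proc = OrderProcessor()
proc.process("A-1")
proc.fail_next = True          # control: force the failure path on the next call
ok = proc.process("A-2")
assert proc.processed == ["A-1"] and ok is False
```

Without the `processed` list and the `fail_next` hook, a test could neither verify what the component did nor reliably reproduce its failure behavior.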

Alan Myrvold, Microsoft
The Myths of Rigor

We hear that more rigor means good testing and, conversely, that less rigor means bad testing. Some managers, who've never studied testing, done testing, or even "seen" testing up close, insist that testing be rigorously planned in advance and fully documented, perhaps with tidy metrics thrown in to make it look more scientific. However, sometimes measurement, documentation, and planning don't help. In those cases, rigor may require us not to do them. As part of winning court cases, James Bach has done some of the most rigorous testing any tester will do in a career. James shows that rigor is at least as dangerous as it is useful and that we must apply care and judgment. He describes the struggle in our craft, not just over how rigorous our processes should be, but what kind of rigor matters and when rigor should be applied.

James Bach, Satisfice, Inc.
Stop Guessing About How Customers Use Your Software

What features of your software do customers use the most? What parts of the software do they find frustrating or completely useless? Wouldn't you like to target these critical areas in your testing? Most organizations get feedback, much later than anyone would like, from customer complaints, product reviews, and online discussion forums. Microsoft employs proactive approaches to gather detailed customer usage data from both beta tests and released products, achieving greater understanding of the experience of its millions of users. Product teams analyze this data to guide improvement efforts, including test planning, throughout the product cycle. Alan Page shares the inner workings of Microsoft's methods for gathering customer data, including how to know what features are used, when they are used, where crashes are occurring, and when customers are feeling pain.

Alan Page, Microsoft
Agile Testing: Uncertainty, Risk, and How It All Works

Teams that succeed with agile methods reliably deliver releasable software at frequent intervals and at a sustainable pace. At the same time, they can readily adapt to the changing needs and requirements of the business. Unfortunately, not all teams are successful in their attempt to transition to agile and, instead, end up with a "frAgile" process. The difference between an agile and a frAgile process is usually in the degree to which the organization embraces the disciplined engineering practices that support agility. Teams that succeed are often the ones adopting specific practices: acceptance test-driven development, automated regression testing, continuous integration, and more. Why do these practices make such a big difference? Elisabeth Hendrickson details essential agile testing practices and explains how they mitigate common project risks related to uncertainty, ambiguity, assumptions, dependencies, and capacity.

Elisabeth Hendrickson, Quality Tree Software, Inc.
Software as a Service: What You Need to Know

Many familiar products, including email, instant messaging, search, and e-commerce sites, are actually implemented as services rather than PC-installed software. The shift to services now extends to everything from office productivity tools to utilities like storage, authentication, manageability, and application hosting. Engineers who want to build highly available services with a positive user experience face unique design, testing, and operational challenges. Ibrahim El Far and Venkat Narayanan discuss aspects of configurability, including the ability to turn off features quickly or redirect traffic, which minimizes the impact of defects on the user experience. They discuss the importance of fault testing and explain why testing a service must happen everywhere from the workstation to the live site. Learn best practices in operations, including automated deployment, monitoring services, and service repairs.
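The "turn off features quickly" idea is commonly implemented as a feature flag or kill switch. The sketch below is a minimal, self-contained illustration under assumed names (the `FLAGS` dict stands in for a central configuration store the service would actually consult); it is not the speakers' implementation.

```python
# Minimal feature-flag "kill switch" sketch: operators flip a flag in
# central config to disable a faulty feature without redeploying.
FLAGS = {"new_checkout": True}   # stand-in for a central config store

def is_enabled(feature: str) -> bool:
    return FLAGS.get(feature, False)

def checkout(cart):
    if is_enabled("new_checkout"):
        return f"new flow: {len(cart)} items"
    return f"legacy flow: {len(cart)} items"   # safe fallback path

print(checkout(["book"]))        # flag on: traffic uses the new flow
FLAGS["new_checkout"] = False    # operator turns the feature off
print(checkout(["book"]))        # traffic falls back to the legacy path
```

The key design property is that both code paths ship together, so disabling a defective feature is a configuration change rather than an emergency deployment.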

Ibrahim El Far, Microsoft Corporation
Successful Teams Are TDD Teams

Test-Driven Development (TDD) is the practice of writing a test before writing the code that implements the tested behavior, thus finding defects earlier. Rob Myers explains the two basic types of TDD: the original unit-level approach used mostly by developers, and the agile-inspired Acceptance Test-Driven Development (ATDD), which involves the entire team. Rob has experienced various difficulties in adopting TDD: developers who don't spend a few extra moments to look for and clean up a new bit of code duplication; inexperienced coaches who confuse developer-style TDD with team-level ATDD; and waffling over the use of TDD, which limits its effectiveness. The resistance, overt or subtle, to these practices that can help developers succeed is deeply rooted in our brains and our cultures.
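The red-green-refactor cycle the abstract refers to can be sketched in a few lines; the `slugify` function here is a hypothetical example, chosen only to show the test being written before the code it exercises.

```python
import unittest

# Step 1 (red): the test is written first, before slugify() exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): the simplest code that makes the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 3 (refactor): clean up duplication while the test stays green.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
unittest.TextTestRunner().run(suite)
```

The "clean up duplication" step Rob mentions happens only after the test passes, so the safety net is already in place before any restructuring.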

Rob Myers, Agile Institute
Successful Software Management: Seventeen Lessons Learned

Wouldn't it be nice to know what your staff is doing without looking like a micromanager? Have you wondered how to treat people fairly while still giving them what they need? Would you like to spend a week out of the office, but you're worried your staff won't be able to manage while you're gone? Johanna Rothman explores questions that face software managers every day. Gain new insights through the mistakes she made and the lessons she learned after she became a manager and then a consultant after years of hard-core technical work. Johanna describes seventeen technical management tips and tricks she has learned through trial and error, including the dangers of extended overtime, the value of one-on-one meetings, ways to build trust, and many others. Learn about a manager's job, how to create an effective work environment, and how you can help people do their best work.

Johanna Rothman, Rothman Consulting Group, Inc.
Using Agile to Increase Value in Lean Times

The proof is now in, and it shows that implementing agile is the best way to get critical, revenue-generating applications to market faster and at lower cost. How much money and how many jobs could your organization save? Richard Leavitt and Michael Mah document the financial returns agile project teams are experiencing compared to their traditional counterparts and provide you with a business-case toolkit for your senior executives considering agile practices. Rally Software Development commissioned research firm QSM Associates to benchmark twenty-nine agile development projects against their database of 7,500 software projects. The Agile Impact Report compares the performance of agile development projects against plan-driven and waterfall industry averages for time-to-market, productivity, and quality.

Richard Leavitt, Rally Software Development

