Software that performs well is useless if it ultimately fails to meet user needs and requirements. Requirements errors are the number one cause of software project failures, yet many organizations continue to create requirements specifications that are unclear, ambiguous, and incomplete. What's the problem? All too often, requirements quality gets lost in translation between business people who think in words and software architects and engineers who prefer visual models. Joe Marasco discusses practical approaches for testing requirements to verify that they are as complete, accurate, and precise as possible, a process that requires new, collaborative approaches to requirements definition, communication, and validation.
Many test managers feel that Development or Management or The Business does not understand or support the contributions of their test teams. You know what? They're probably right! However, once we accept that fact, we should ask: Why? Bob Galen believes the cause is our ineffectiveness at 360° communication: in other words, our failure to "sell" ourselves, our abilities, and our contributions. We believe that our work should speak for itself or that everyone should inherently understand our worth. Wrong! We need to work hard to create crucial conversations in which we communicate our impact on the product and the organization. Bob shares with you specific techniques for improving the communication skills of test managers and testers so that others in your organization will better understand your role and contributions.
Approximately three-fourths of today's successful system security breaches are perpetrated not through network or operating system security flaws but through customer-facing Web applications. How can you ensure that your organization is protected from holes that let hackers invade your systems? Only by thoroughly testing your Web applications for security defects and vulnerabilities. Michael Sutton describes the three basic security testing approaches available to testers: source code analysis, manual penetration testing, and automated penetration testing. Michael explains the key differences in these methods, the types of defects and vulnerabilities that each detects, and the advantages and disadvantages of each method. Learn how to get started in security testing and how to choose the best strategy for your organization.
- Basic security vulnerabilities in Web applications
Metrics can play a vital role in software development and testing. We use metrics to track progress, assess situations, predict events, and more. However, measuring often creates "people issues," which, when ignored, become obstacles to success or may even result in the death of a metrics program. People often feel threatened by the metrics gathered. Distortion factors may be added by the people performing and communicating the measurements. When being measured, people can react with creative, sophisticated, and unexpected behaviors. Thus, our well-intentioned efforts may have a counterproductive effect on individuals and the organization as a whole. John Fodeh addresses some of the typical people issues and shows how cognitive science and social psychology can play important roles in the proper use of metrics.
Many organizations want to automate their testing efforts, but they aren't sure how to begin. Successful test automation requires dedicated resources and automation tool expertise, two things that overworked test teams do not have. Nationwide Insurance's solution was to create a Test Automation Center of Excellence, a group of experts in automation solution design. Members of this team partner with various project test teams to determine what to automate, develop a cost-benefit analysis, and architect a solution. Their automation experts stay with the test team throughout the automation project, assisting, mentoring, and cheering. Join Jennifer Seale to learn what it takes to put together a Test Automation Center of Excellence and examine test automation from a project management point of view.
The use of modular design in programming has been a common technique in software development for years. However, the same principles that make modular designs useful for programming, increased reusability and reduced maintenance time, are equally applicable to test case development. Shaun Bradshaw describes the key differences between procedural and modular test case development and explains the benefits of the modular approach. He demonstrates how to analyze requirements, designs, and the application under test to generate modular and reusable test cases. Join Shaun as he constructs and executes test scenarios using skeleton scripts that invoke the modular tests. Learn how you can design and create a few self-contained scripts (building blocks) that then can be assembled to create many different test scenarios.
Shaun Bradshaw, Questcon Technologies, A Division of Howard Systems Intl.
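The building-block idea above can be sketched in a few lines of Python. This is an illustrative sketch only: the `App` class is a stand-in test double, and the step and scenario names are invented for the example, not taken from the session.

```python
# Modular test case design: small self-contained "building block"
# steps that a skeleton script assembles into many scenarios.
# App is a hypothetical test double standing in for a real driver.

class App:
    def __init__(self):
        self.page = "login"
        self.cart = []

    def login(self, user):
        assert self.page == "login"
        self.page = "home"

    def add_to_cart(self, item):
        assert self.page == "home"
        self.cart.append(item)

    def checkout(self):
        assert self.cart, "cart must not be empty"
        self.page = "receipt"

# --- reusable test modules (the building blocks) ---

def step_login(app):
    app.login("tester")
    assert app.page == "home"

def step_add_item(app, item):
    app.add_to_cart(item)
    assert item in app.cart

def step_checkout(app):
    app.checkout()
    assert app.page == "receipt"

# --- skeleton script: assemble the modules into scenarios ---

def run_scenario(steps):
    app = App()
    for step in steps:
        step(app)
    return app

buy_one = [step_login,
           lambda a: step_add_item(a, "book"),
           step_checkout]
buy_two = [step_login,
           lambda a: step_add_item(a, "book"),
           lambda a: step_add_item(a, "pen"),
           step_checkout]

print(run_scenario(buy_one).page)  # receipt
```

Each step verifies its own precondition and postcondition, so a new scenario is just a new ordering of existing blocks rather than a new recorded script.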
You've wanted this promotion to QA/Test manager for so long and now, finally, it's yours. But, you have a terrible sinking feeling ... "What have I gotten myself into?" "How will I do this?" You have read about Six Sigma and the developer-to-tester ratio, but what does this mean to you? Should you use black-box or white-box testing? Is there such a thing as gray-box testing? Your manager is mumbling about offshore outsourcing. Join Brett Masek as he explains what you need to know to become the best possible test manager. Brett discusses the seven key areas you need to understand to lead your test team: test process definition, test planning, defect management, choosing test case approaches, detailed test case design, efficient test automation, and effective reporting. Learn the basics for creating a test department and how to achieve continuous improvement.
You know about it. You've used it. Maybe you've even loved it. But now, after all these years, the IEEE 829 standard, the only international standard for test documentation, has been radically revised. As a leader on the IEEE committee responsible for this update, Claire Lohr has detailed insight into what the changes mean to you. You'll discover that all of the old documents, with one exception, are still included. But now, the 829 standard describes documentation for each level of testing, adds a three-step process for choosing test documents and their contents, adds additional documents, and follows the ISO 12207 life-cycle standard as its basis. In addition, the new standard can be tailored for agile methods if the stakeholders agree on the modifications.
- The one-size-fits-all IEEE 829 standard of the past is gone
- How to tailor the new documents to match your needs
Does your testing provide value to your organization? Are you asked questions like "How good is the testing anyway?" and "Is our testing any better this year?" How can you demonstrate the quality of the testing you perform, both to show when things are getting better and to show the effect of excessive deadline pressure? Defect Detection Percentage (DDP) is a simple measure that organizations have found very useful in answering these questions. It is easy to start: all you need is a record of defects found during testing and defects found afterwards (which you probably already have available). Join Dorothy Graham as she shows you what DDP is, how to calculate it, and how to use it to communicate the effectiveness of your testing. Dorothy addresses the most common stumbling blocks and answers the questions most frequently asked about this very useful metric.
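DDP is commonly defined as the percentage of all known defects that testing caught before release. A minimal calculation, with illustrative numbers (the function name and figures are examples, not from the session):

```python
def ddp(found_in_testing: int, found_afterwards: int) -> float:
    """Defect Detection Percentage: the share of all known defects
    that testing caught, rather than letting escape to later phases
    or to the field."""
    total = found_in_testing + found_afterwards
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_in_testing / total

# Example: 90 defects found during system testing,
# 10 more reported after release.
print(ddp(90, 10))  # 90.0
```

Note that DDP for a release can only be finalized once post-release defect reports have had time to arrive, which is one of the stumbling blocks the talk addresses.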
Automated GUI tests often fail to find important bugs because testers do not understand or model intricate user behaviors. Real users are not just monkeys banging on keyboards. As they use a system, they may make dozens of instantaneous decisions, all of which result in complex paths through the software code. To create successful automated test cases, testers must learn how to model users' real behaviors. This means test cases cannot be simple, recorded, one-size-fits-all scripts. Jamie Mitchell describes several user behavior patterns that can be adopted to create robust and successful automated tests. One pattern is the 4-step dance, which describes every user GUI interaction: (1) ensure you're at the right place in the screen hierarchy; (2) provide data to the application; (3) trigger the system; and (4) wait for the system to complete its actions.
Jamie Mitchell, Test & Automation Consulting LLC
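The 4-step dance described above can be expressed as a generic wrapper around a GUI driver. In this sketch the driver API (`current_screen`, `set_field`, `click`) and the `FakeDriver` double are hypothetical stand-ins for whatever automation tool is in use, not an API from the session.

```python
import time

def four_step(driver, expected_screen, data, trigger, done, timeout=5.0):
    """One GUI interaction as the 4-step dance."""
    # 1. Ensure we are at the right place in the screen hierarchy.
    assert driver.current_screen == expected_screen, (
        f"expected {expected_screen}, got {driver.current_screen}")
    # 2. Provide data to the application.
    for field, value in data.items():
        driver.set_field(field, value)
    # 3. Trigger the system.
    driver.click(trigger)
    # 4. Wait (poll) for the system to complete its action.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if done(driver):
            return True
        time.sleep(0.05)
    raise TimeoutError(f"{trigger!r} did not complete in {timeout}s")

# Minimal fake driver so the sketch is runnable end to end.
class FakeDriver:
    def __init__(self):
        self.current_screen = "login"
        self.fields = {}

    def set_field(self, name, value):
        self.fields[name] = value

    def click(self, button):
        if button == "submit" and self.fields.get("user"):
            self.current_screen = "home"

drv = FakeDriver()
ok = four_step(drv, "login", {"user": "alice", "pw": "s3cret"},
               "submit", lambda d: d.current_screen == "home")
print(ok)  # True
```

The explicit step 1 check and the polling in step 4 are what distinguish this from a naive recorded script, which assumes the application is always where the recording left it.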