Using The "ICED T" Model to Test Subjective Software Qualities Quality software: that is what we are seeking. While this is clearly a goal of any software tester or quality engineer, what exactly is the definition of quality software? Part of the answer is easy. There are many aspects of software that we can test and measure and to which we can assign a number. Some examples are how often the software crashes, how long it takes to complete a given task, or how much memory it uses. We can also look at how many of our tests pass and how many fail. While these quantifiable measures are important, they do not provide a complete picture of software quality. There are other, more qualitative aspects of the software that also need to be considered.
Andy Roth
December 11, 2000
What Do You Manage? You're a test manager. But do you manage only the testing? A frustrated test manager recently said, "With my SQA hat, I want to focus on finding defects and discovering risk in the product. With my support hat, I want to solve problems. With my tech pubs hat, I'm trying to get the documentation written. But last week, everyone needed my help at once. I'm only one person. How the heck do I do all that?" Well, maybe you shouldn't have to.
Design Thinking: 4 Steps to Better Software Design thinking points out several missed steps in software development. And, while some may believe ideation and iteration to be wasteful, they're easy to add to the development process at low cost and, in the end, result in substantially more valuable software. In this article, Jeff Patton describes the four basic steps of design thinking.
You Want It When?—Negotiating Test Schedules The biggest obstacle in the software industry is lack of time to do the job well. Negotiation can buy valuable time and help management avoid blunders. This paper is about estimating and negotiating test schedules.
Gregory M. Pope
December 5, 2000
Why Software Fails (And How Testers Can Exploit It) This paper summarizes conclusions from a three-year study of why released software fails. Our method was to obtain mature-beta or retail versions of real software applications and stress test them until they failed. From an analysis of the causal faults, we have synthesized four reasons why software fails. This paper presents these four classes of failures and discusses the challenges they present to developers and testers. The implications for software testers are emphasized.
Testing Java Virtual Machines In this paper, the authors describe their experience with automatically testing Java virtual machines and present two specific techniques for generating test cases.
Planning and Managing Complex Test Resource Logistics Subtle but catastrophic bugs, such as those that cause server crashes and database record-lock race conditions, often reveal themselves only during performance, stress, volume, data quality, and reliability testing. Such testing is most effectively performed in test environments (hardware, software, network, and release configurations) that mimic the field environment as nearly as possible, because test results in less complex settings often do not extrapolate, due to the non-linearity of software. In complex settings, such as Web and e-commerce server and database farms, managing these lab configurations can be quite challenging. This paper presents a basic Access database, designed using the Entity-Relationship technique, that allows the Test Manager to plan, configure, and maintain the test environment throughout the test project.
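To make the idea concrete, here is a minimal sketch of what an entity-relationship design for test resource logistics might look like. It is written in Python with SQLite rather than Access, and every table and column name below is an illustrative assumption, not the schema the paper actually presents.

import sqlite3

# Illustrative sketch only: environments own machines, machines carry installed
# releases, and bookings record which project holds an environment and when.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE environment (
    env_id   INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,             -- e.g. 'Web farm replica'
    purpose  TEXT                       -- performance, stress, reliability, ...
);
CREATE TABLE machine (
    machine_id INTEGER PRIMARY KEY,
    env_id     INTEGER REFERENCES environment(env_id),
    hostname   TEXT NOT NULL,
    role       TEXT                     -- web server, database server, load generator
);
CREATE TABLE installed_release (
    machine_id INTEGER REFERENCES machine(machine_id),
    product    TEXT NOT NULL,
    version    TEXT NOT NULL,
    PRIMARY KEY (machine_id, product)
);
CREATE TABLE booking (
    booking_id INTEGER PRIMARY KEY,
    env_id     INTEGER REFERENCES environment(env_id),
    project    TEXT NOT NULL,
    start_date TEXT NOT NULL,           -- ISO dates kept as text for simplicity
    end_date   TEXT NOT NULL
);
""")

# The kind of question such a database answers: which environments does which
# project hold on a given date?
query = """
SELECT e.name, b.project
FROM booking b JOIN environment e ON e.env_id = b.env_id
WHERE ? BETWEEN b.start_date AND b.end_date
"""
for name, project in conn.execute(query, ("2000-12-01",)):
    print(name, project)

Access would express the same relationships as related tables with foreign keys; the point is simply that planning, configuration, and maintenance questions reduce to joins over a handful of entities.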
Three Keys to Test Automation How can you get your test automation project off on the right foot? I've been asked this question many times. It has prompted me to review the test automation projects in which I've been involved and identify the factors most associated with success.
Process Enhancement Request Form and Procedures (template) The PERF (Process Enhancement Request Form) template was created to promote continuous process improvement. Anyone in the organization can submit a PERF to add, change, or remove anything having to do with processes.
Use Your Mainframe to Test As testers, we typically receive software from a development group at the end of the build cycle and then install it into a given test system. We then run a set of pre-written test cases that exercise the software in a simulated environment. These tests generally take one of three forms. 1) We examine, manually or programmatically, the UI screens that the software produces. 2) We test the objects in the program, and the methods of those objects, by running test code that interacts with the product code. 3) We perform a "system test," or black box test, that places the product in a simulated user environment; we then do the operations that an end user would and verify the results.
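As a rough sketch of the second form, the fragment below drives a product class directly from test code. The ShoppingCart class and its methods are hypothetical stand-ins invented for this illustration; the article does not describe a specific product.

import unittest

# Hypothetical product code, standing in for "the objects and methods" under test.
class ShoppingCart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())

# Test code that exercises those objects directly (form 2), bypassing both the UI
# (form 1) and the full simulated user environment of a system test (form 3).
class ShoppingCartTest(unittest.TestCase):
    def test_add_accumulates_quantity(self):
        cart = ShoppingCart()
        cart.add("sku-42", 2)
        cart.add("sku-42", 3)
        self.assertEqual(cart.total_items(), 5)

    def test_rejects_non_positive_quantity(self):
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add("sku-42", 0)

if __name__ == "__main__":
    unittest.main()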