A Day In the Life of a QA Tester

Member Submitted
Summary:

This paper discusses the day-to-day activities of a tester instilling quality into software. It begins with a foundation-level discussion of software testing and quality assurance. Then Yamini discusses how the two relate to each other. Afterward, she explains the daily tasks involved in software testing and Quality Assurance, some pain points and areas for improvement, and how an individual tester can add value to the development process no matter what type of software development life cycle governs the environment.

What is Software Testing?
Software testing is the systematic process by which an analyst uncovers defects in software. What are defects, you might ask? Defects are flaws in the code that cause a software application to break. While no software is completely defect-free, it is the aim of testers to reduce the number of defects found in software and to instill quality in the software application. Software testing includes the process of validating that the software has incorporated the user requirements present in the Software Requirements Specification document and meets users' needs. Software testers analyze software to see whether it conforms to user expectations. Software testing is one of the activities designed to adequately assure that the software has the quality the users require.

What is Quality Assurance?
Quality Assurance is the sum of all the activities designed to adequately assure that all the software processes in place have been carried out in an effective and efficient manner. It involves both doing testing right and doing the right testing. Software Quality Assurance checks that the software processes are correct and in compliance with the standards that operate within an organization. Quality Assurance involves more than software testing, and yet software testing is necessary to the Quality Assurance profession; without it, one cannot say that a Quality Assurance process is in place.

How do the two relate to each other?
As you can see, Software Testing is just one aspect of Software Quality Assurance. Software Testing is usually called Quality Control. QC is a component of a QA plan in an organization. In some organizations, the Software Testing role is split from the Software Quality Assurance Analyst role. In such an organization, Software Quality Assurance involves completion of technical checklists and document reviews to verify that both the software and documents are in compliance with standards.

Daily Tasks Involved in Software Testing and Quality Assurance
Once a Software Requirements Specification (SRS) document is produced, the tester "tests" the document to make sure that the requirements are complete, correct, consistent, and testable. In an agile environment, software requirements are tested as they are written. In a Waterfall SDLC, the SRS is written first and then the document is tested. As for completeness, correctness, consistency, and testability: a requirement is complete if it tells you where to test, how to test, and what to test. Correctness implies that you know something about the application being tested; you have to know a little about the software to judge whether the requirement is correct. A consistent requirement has no logical flaws, and a testable requirement is one for which you can write a test case.

Before or after the requirements review, a Master Test Plan (MTP) is written. It includes an Introduction, Background, Scope, Acronyms, Definitions, Features to be Tested, Features not to be Tested, Constraints, Assumptions, Risks and Contingencies, Schedules, Roles/Responsibilities, and References. The test team uses the Master Test Plan to determine how long testing will take, and the tester also uses the MTP as the plan for the testing effort.

The Introduction, Background, Scope, Acronyms, Definitions, and References are preliminary sections of the test plan. They outline the background of the project, the intended audience, and exactly what the project definition is: what it includes and what it does not include. The Acronyms section spells out the abbreviations used in the test plan, and the Definitions section defines any new terms. The References section lists the documents used to create the test plan, typically the SRS and the Software Design Documents.

The "Features to be Tested" section lists all the functional, performance, usability, and security requirements that will be tested in the software. The "Features not to be Tested" section lists any features that will be excluded from testing. The Constraints and Assumptions section identifies the factors that will impact the project and the software testing effort, as well as any underlying beliefs held about the project; constraints also represent the limitations of the software project.

In the Risks and Contingencies section, all the risks are presented along with contingency plans and mitigation strategies. No one likes to discuss risks, but they should be among the first issues discussed and planned for when formalizing the project plan. Risk analysis is one way to mitigate the risks inherent in software development: it is a systematic process that weighs and prioritizes the risks of each requirement (a small sketch of this follows below).

The Schedules section lays out the timeline for all the testing activities, and the Roles/Responsibilities section outlines who is responsible for which deliverable. The last section is the Appendix. These sections are not usually in the order presented here, but this listing covers the major pieces of a Master Test Plan.
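
To make that risk analysis concrete, here is a minimal sketch of weighing and prioritizing requirements. The requirement IDs and the 1-to-5 likelihood and impact ratings are invented for illustration; a real project would use whatever scale and requirement inventory the team agrees on.

    # Hypothetical risk analysis sketch: each requirement gets a likelihood and
    # impact rating (1 = low, 5 = high); risk exposure = likelihood * impact.
    risks = [
        {"requirement": "REQ-001 Login", "likelihood": 4, "impact": 5},
        {"requirement": "REQ-002 Export report", "likelihood": 2, "impact": 3},
        {"requirement": "REQ-003 Password reset", "likelihood": 3, "impact": 4},
    ]

    for risk in risks:
        risk["exposure"] = risk["likelihood"] * risk["impact"]

    # Test the highest-exposure requirements first.
    for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
        print(f'{risk["requirement"]}: exposure {risk["exposure"]}')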

After the review of the SRS is done, the tester begins writing either test cases or use cases from the SRS. Use cases document how the system interacts with various actors, while test cases are much more specific. Depending on the approach, a test case or a use case document is produced. Let's assume the tester writes test cases. In the test case document, the tester will generally record the description, test steps, expected results, actual results, the pass/fail status, and screen shots. Screen shots are added once testing has started, as evidence that a test case has passed. The test steps are also known as the test script. For the sake of simplicity, let's assume a manual testing approach for now; later, the discussion will turn to automated testing.
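
To make those fields concrete, here is a rough sketch of what a single test case record might look like. The field names and the login scenario are assumptions made for illustration, not a standard schema.

    # Illustrative test case record mirroring the fields described above.
    test_case = {
        "id": "TC-001",
        "description": "Verify a registered user can log in with valid credentials",
        "test_steps": [
            "Open the login page",
            "Enter a valid username and password",
            "Click the Login button",
        ],
        "expected_result": "The user is taken to the home page",
        "actual_result": None,    # filled in during execution
        "status": "Not Run",      # becomes Pass or Fail after execution
        "screenshot": None,       # path to evidence captured during execution
    }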

The test case document, along with the Master Test Plan, is then submitted for review as part of the formal review of the work product. At the review meeting, all the stakeholders are present, including the requirements analyst, the developers, the testers, and the project manager; if a separate person performs the SQA Analyst role, he/she is also invited to the review. Each page is discussed to see if any changes need to be made or any points need to be clarified. Once the issues are noted, the tester revises the Master Test Plan and the test cases based on the formal review. Issues are tracked to closure, and a decision is made either to hold another meeting to approve the work product or to handle the approval through email.

The tester must also produce a Traceability Matrix that ties the Software Requirements Specification to the Software Design Document and to the test case document. There should be at least a one-to-one or a one-to-many mapping of requirements to test cases. In some firms, the QA Analyst/Tester must also review the Software Design Document, making sure that every requirement is accounted for in the SDD. In this way, a tester can adequately assure that every test case is traceable to a requirement and every requirement is traceable to a software routine, option, or menu. Traceability is a key quality attribute of a software testing process.
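
One simple way to picture that traceability check is as a mapping from requirement IDs to test case IDs, with a scan for any requirement that has no test case at all. The IDs below are invented for illustration.

    # Hypothetical traceability check: every requirement should map to at least
    # one test case (one-to-one or one-to-many).
    requirements = ["REQ-001", "REQ-002", "REQ-003"]
    traceability = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
    }

    uncovered = [req for req in requirements if not traceability.get(req)]
    if uncovered:
        print("Requirements with no test case:", ", ".join(uncovered))
    else:
        print("Every requirement is covered by at least one test case.")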

After development of the code is completed and the build is delivered to the testers, the actual testing begins. Each test case is executed against the application, and pass/fail results are recorded. Screen shots are taken as well. Any defects are recorded using a defect tracking tool such as Rational ClearQuest, Mercury Quality Center, or another defect tracking tool. There should be a process in place for defect tracking and defect resolution. Typically, when the tester writes up a defect in a defect tracking tool, the points to include are the stage of testing, the subject, a description of the problem, the steps to repeat, and a screen shot if possible. The description should state what needs to be corrected and where the error occurs, rather than simply noting what was left out, since a bare omission gives no one enough information to judge whether it is actually a problem. I have seen SharePoint sites used for defect tracking; when such a site is used, it should capture the stage of testing in which the defect was found. Even when using Rational ClearQuest or Mercury Quality Center, the stage of testing should be noted, depending on the type of SDLC in place in the organization.
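
As an illustration of those points to include, here is a sketch of a single defect write-up. The field names and values are assumptions; tools such as ClearQuest or Quality Center have their own forms and required fields.

    # Illustrative defect record with the fields mentioned above.
    defect = {
        "id": "DEF-042",
        "stage_of_testing": "System Test",
        "subject": "Login fails with valid credentials",
        "description": (
            "Clicking Login with a valid username and password returns "
            "'Invalid credentials' instead of opening the home page."
        ),
        "steps_to_repeat": [
            "Open the login page",
            "Enter a valid username and password",
            "Click Login",
        ],
        "screenshot": "evidence/def-042-login-error.png",
        "severity": "High",
    }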

Depending on the severity of the defects, a new build is created to correct the defects found. The tester retests each defect to make sure it is fixed and then does a light regression around that area of functionality to make sure nothing else has broken. Once the tester notifies the team that internal testing is done, then, depending on the SDLC, the code is released into production or handed off to the test sites, if any, for field testing. If the test sites find defects, they are reported to the development team, and a new build is released, which must then go through internal testing before being released to the test sites again. Defects are recorded as before.

Automated Testing
In automated testing, instead of a Master Test Plan, or along with one, an Automation Test Plan is written that outlines exactly how the automation will proceed and the approach to be taken. Once that is written, the creation of the automated test scripts begins, corresponding to the requirements written in the SRS or RSD (Requirement Specification Document). After a build is delivered, the test scripts are run against the application to see what has broken. If an issue is a true defect, it is written up using the same tools as in manual testing. Depending on the number and severity of the defects, a new build is delivered and the automated test scripts are run against the application again. Once the tester finds no more defects, he/she can report that testing is completed, and the build is put into production.
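
As a minimal illustration, an automated test script can be as small as the sketch below, written here with Python's unittest module. The login function is a stand-in for the application under test and is an assumption for the example; in a real project the script would drive the application through its user interface or API.

    import unittest

    # Stand-in for the system under test; a real script would call the
    # application (for example through its UI automation tool or web API).
    def login(username, password):
        return username == "alice" and password == "s3cret"

    class LoginTests(unittest.TestCase):
        def test_valid_credentials_are_accepted(self):
            self.assertTrue(login("alice", "s3cret"))

        def test_invalid_credentials_are_rejected(self):
            self.assertFalse(login("alice", "wrong-password"))

    if __name__ == "__main__":
        unittest.main()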

Pain Points and Areas for Improvement

  1. Building trust in the SQA Tester–A development manager once told me that it takes a long time for a project manager to trust a tester. Fortunately, that particular situation worked in my favor, but it can be hard for the development team to trust that the tester will perform due diligence on the testing project; that is, that he/she will uncover everything there is to know about the project and the code that can help him or her test the application thoroughly. Often, this can be overcome with increased communication and by explaining to the project manager and developers the reasoning behind particular actions.
  2. Developers who do not believe in the value of Software Testing and Quality Assurance–Many times, testers find themselves constantly butting heads with developers. Usually, this comes from an antiquated belief that testers do not know what they are talking about or that anyone can test. However, testing is a skill, and it takes years to perfect the craft of testing software. Sometimes the only recourse a tester has is to take the issue up with a more senior tester who has the position and the backing to tell the developer to look at the tester's findings.
  3. Inadequate amount of time to test–Sometimes there just isn't enough time within the project schedule to adequately test the software. One way to mitigate this is to test the requirements as they are written and build the test cases while the requirements are being written. Another is to have people who can adequately scope out a project and its deliverable dates. Regular meetings with staff also help to track the status of key personnel in the software development project.
  4. Poorly written requirements–Sometimes the requirements are captured incorrectly and need to be revised as the software development project proceeds. This is costly to the organization as well as the test team in terms of both money and time, and it can lead to scope creep, in which the delivered code exceeds what has been written in the requirements. There is no real workaround for this issue; the best thing to do is nip scope creep in the bud when you see it happening. One way is to make a firm decision beforehand to gain user signoff on the SRS so that scope creep doesn't happen. Another is to have a change control board in place that approves SRS changes. The change control board then becomes responsible for cost overruns, which keeps the project manager from being blamed for scope creep.
  5. Not enough testing resources–This is a situation for upper management to handle. When there are not enough testing resources, the schedule suffers, and the few testers who are there to test the product risk burnout.
  6. Not enough training for testers–This can lead to testers not understanding the software under test. It affects both the time required to test and the money involved in the testing effort.
  7. Scheduling of future testing efforts–This problem arises when schedules for the next release are being planned while testing is still going on for the current release. It is difficult for testers to give an accurate estimate of how long the future testing effort will take.
  8. Time to discuss issues with developers–Sometimes there isn't enough time to discuss an issue with a developer, and when this happens, miscommunications occur regularly. It is easier to get a developer on the phone and show him or her the issue than to email the issue and get caught up in back-and-forth emailing.
  9. People challenges–In their book Surviving the Top Ten Challenges of Software Testing, Randall W. Rice and William E. Perry write that the following are some of the challenges testers face in their daily interactions:

The Top Ten People Challenges Facing Testers
Challenge #1: Having to Say No–having to say you just don’t have enough time to test another application while testing the current application.

Challenge #2: Fighting a Lose-Lose Situation–this could mean many things but often refers to the internal politics of the organization and whether there is support for testing within an organization.

Challenge #3: Hitting a Moving Target–this often refers to requirements that keep growing once a project has been started. This situation is commonly referred to as scope creep.

Challenge #4: Testing What's Thrown over the Wall–often refers to testing without a process in place to prevent developers from simply handing over code that hasn't been unit tested or put together in a clean build.

Challenge #6: Communicating with Customers and Users–often refers to making sure that the development team, including the testers, delivers the right product to the customers and users and that there is ongoing support to train them in the new software product.

Challenge #7: Explaining Testing to Managers–some managers simply don't understand testing, so it is the test analyst's job to have sound reasons for doing what he or she is doing.

Challenge #8: Testing Without Tools–The people-related challenge here is to acquire adequate support for purchasing tools and to demonstrate why the tools are needed. In other words, the challenge is to make a case for purchasing testing software.

Challenge #9: Building Relationships with Developers–The people-related challenge here is to cultivate relationships with developers, since most testers work closely with them.

Challenge #10: Getting Trained in Testing–In chapter 3, Rice and Perry write, "Without training, testers are ill-equipped to meet the rigors of testing, especially in technically difficult situations. The people-related challenge of the following is to secure adequate support for training."

Adding Value to the Development Process
Whether an organization is using Agile Development methods or the traditional Waterfall or Iterative Development approaches, a tester can add value to the organization. He/she increases the organization's confidence that the software is built right and that the right software is built. This increases the efficiency and efficacy of the organization's software development approach. Organizations that do not have test teams or individuals testing suffer the consequences of inadequately built, defect-ridden software and cost overruns that spill over into other areas of the organization. Simply put, the cost exceeds the benefits when you do not have a tester or test team in your organization. Most testers are detail-oriented and thorough about their work and can find flaws that are not readily apparent, thereby adding a quality that did not exist before. In addition, most testers have good communication skills, which can bridge the gap between the developers and the users. Some developers cannot communicate effectively with the user community and suffer in this regard because they cannot explain what they are trying to do. Testers serve a key role here as communicators, since they can conduct test site calls and organize user acceptance testing for the software development team.

In conclusion, testers face many daily challenges while testing software. Some of these are inherent in the organizational culture, but others derive from misconceptions about the role of testers and what they actually do. This paper has attempted to explain the different aspects of a tester's job and how testers fit into the organization. It has also attempted to explain the differences between a manual testing approach and an automated testing approach; while the two share similarities, there are real differences. Software testers are highly valued, and software testing is a very good profession within the IT industry.

