This fictional diary follows the experience of an agile QA through a single iteration, sharing some of the scenarios, questions, and successes typically encountered by an agile test team. Issues touched on in this first part include estimation, TDD, exploratory testing, team size, tester-developer ratios, relationships with other team members, and sign-offs.
Wednesday–Day 35–Iteration 9
Started the new iteration today, although we're still testing stories and raising defects from Iteration 8. However, in Iteration 9 planning I finally convinced Lindsay, the iteration manager, to factor defects into the estimates. I've been asking for approximately 15% of the dev pairs' time to be assigned to fixing the urgent and high-priority bugs found during the iteration. Now that we've had a few iterations, we've been able to put together some metrics for the average number of defects raised in an iteration. From those we extrapolated that 15% of dev time was a reasonable figure. We'll keep monitoring it over the forthcoming iterations and adjust as necessary.
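The arithmetic behind that figure is simple enough to sketch. The numbers below are illustrative stand-ins, not our actual project metrics:

```python
# Illustrative sketch of the defect-budget arithmetic; all figures are
# made-up stand-ins, not the project's real metrics.
defects_per_iteration = [12, 13, 11, 12]   # defects raised in recent iterations
avg_fix_effort_days = 0.5                  # average pair-days to fix one defect
pair_days_per_iteration = 40               # total dev pair-days in an iteration

avg_defects = sum(defects_per_iteration) / len(defects_per_iteration)  # 12.0
fix_days = avg_defects * avg_fix_effort_days                           # 6.0
budget_fraction = fix_days / pair_days_per_iteration                   # 0.15

print(f"Reserve {budget_fraction:.0%} of dev time for defect fixing")
```

Rerunning the same calculation each iteration, as we plan to, keeps the budget honest as the defect rate changes.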
Had an enjoyable afternoon doing some exploratory testing of the new discovery widget. There was lots of data to look at, so Josh, my fellow tester, joined me and suggested we start off with some blink testing. I'd never tried this technique before and was surprised when we found two defects straight away. If we had spent an hour thoroughly checking each line we might well have missed them.
This morning's 9.15 am 'stand up' was way too long. It really is getting unwieldy. We're supposed to be sticking to the short, sharp scrum format of three questions, but some people in the team feel the need to go into a soliloquy. With nearly 50 people in the team it takes too long at the best of times. One of the technical leads suggested we break off into an optional, separate scrum afterwards for those who wish to discuss technical issues with the project.
Spent part of the morning trying to get my local environment working after it failed to compile. I'm fairly sure I only updated the code on my machine from the latest working version, but I seem to be getting some memory-leak problems. The devs on the team are too busy to lend a hand at the moment as they start their new stories today, which means I'm going to be involved in lots of story huddles. Also, the Continuous Integration (CI) build has been red since the final code check-in last night. Right now the skull-and-crossbones flag is sitting on one dev pair's desk as they try to figure out why their check-in broke the build. Someone has also bought a singing parrot (which can be switched from singing mode to curse mode!) to leave on the offending devs' desk. It helps keep the humour factor high and it does serve a purpose: we certainly know who is working on fixing the broken build.
Mahmesh had a look at my build at the end of the day (the developers finish their core pairing hours at 5pm). Looks as though my machine may need upgrading at some point but for now he made some changes to a system environment variable and kicked the build off again. Feel a bit guilty leaving my machine on overnight but I'll check to see if it's gone through ok in the morning.
Saturday–Day 38–Iteration 9
Yes, it's a Saturday and I'm working overtime for the second time. The release at the end of this iteration will be demoed to the client, so we need all the stories tested, passed, and then signed off by Patrice, the product owner, as soon as possible. As we already have a bit of a test backlog from the previous iteration, it was 'suggested' that we come in at the weekend. So much for the agile belief in sustainable pace.
Still, my local build went through OK, which is handy as Bobby (my other test colleague) doesn't want to deploy the latest code to our QA environment just yet; he's in the middle of a meaty test. So in the meantime I can check the story Mahmesh finished yesterday afternoon on my local environment. Had a brief look at his unit tests and then double-checked the Selenium test implementation. I had written the draft of the test up front, so I was happy with the flow of the test, but I needed to make sure Mahmesh was using the correct data and values and hadn't made any significant changes to the test. Of course, if he had, I'm pretty confident he would have made me aware beforehand. Unfortunately he neglected to call me over for a quick walk-through of the completed test and story before checking in yesterday, but I forgive him as he did fix my build.
Was actually a very productive day in the end. Besides us three testers there were only a couple of devs in, so there were no story huddles, planning sessions, contention for environments, or any of the other distractions we normally have in the week. We also had the CI build pretty much to ourselves. It was green all day, so we were able to check in a few Selenium script changes. Bobby broke the build at one point, but we managed to figure out the problem fairly quickly: we needed to reset some test data at the end of an automated test. I do enjoy the 'buzz' of an agile project, but sometimes it's nice to just put our heads down and test without being disturbed.
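Bobby's break is a classic test-isolation lesson: an automated test that mutates shared data should restore it afterwards, so the next run (and the CI build) starts from a known state. A minimal sketch of the pattern in plain Python, with a dictionary standing in for the QA environment's data (our real scripts drive Selenium, but the try/finally shape is the same):

```python
# Plain-Python stand-in for the reset-the-test-data pattern; `shared_data`
# represents state on the QA environment that several tests depend on.
shared_data = {"items": []}

def run_widget_test():
    baseline = {k: list(v) for k, v in shared_data.items()}  # snapshot first
    try:
        # The test mutates shared state, as our Selenium script did...
        shared_data["items"].append("new-entry")
        assert "new-entry" in shared_data["items"]
    finally:
        # ...so it restores the baseline afterwards, even if the test fails.
        # Skipping this restore step is what turned the CI build red.
        shared_data.clear()
        shared_data.update(baseline)

run_widget_test()
print(shared_data)  # back to {'items': []}
```

Putting the restore in the `finally` (or a test framework's teardown) rather than at the end of the test body means a failing assertion can't leave dirty data behind for the next test.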
Wednesday–Day 41–Iteration 9
Lots of preparation sessions going on today for the next iteration. This is where the business analysts and the product owner (along with the project manager, iteration manager, and other interested parties) discuss which stories they would like to see the devs work on next. If a story is ready for play it will be considered for the upcoming iteration planning session. We try to send at least one tester to these sessions so we can get a heads-up on what may be coming next and then work on completing our test scenarios and draft automation tests against each story's acceptance criteria. On the Friday, during the iteration planning session, the devs decide which of the selected stories they can complete against their allocated budgets (pair days). We attend that one too, and it can be long and torturous as it involves a lot of 'discussion' about the estimate for each story and a brief outline of the implementation. We also give a short summary of how we expect the story to be tested. I guess it's good to have this debate with your peers up front, but I'm always glad when the iteration manager bangs the table three times before everyone puts their 'fingers in the wind' to estimate how many days it will take to complete the story.
As we came out of one prep session, Lindsay started having kittens about the index cards on the wall. We have hundreds of cards with all our stories, tasks, and defects split across our various workflows, and she's convinced that someone is moving them around. 'These cards should be in the dev stream, not the QA stream,' she said. 'They haven't finished developing these stories yet.' She wasn't too impressed when I mentioned the possibility that the cards may have fallen off the wall and the cleaner may have stuck them back up this morning. At that point a sea shanty could be heard from the middle of the project room; we looked up at the plasma screen hanging on the wall to see the build had gone red. Lindsay went off to the offending dev pair to find out what had broken. Saved by the parrot.
Monday–Day 44–Iteration 9
It's lunchtime and I've done zero testing so far today. After the morning stand-up (when are we going to adopt the conciseness of scrum? It's so long it's driving me nuts) I had to attend two story huddles, one of which was with some of the offshore developers, which meant it took twice as long. Helped Gupta, the BA, with some acceptance criteria for a story they want to play in the next iteration, then had to sort out some environmental issues on our QA machine with Derek, the build configuration guy.
We're having a team retrospective. Lindsay wants to know why we are raising so many defects. Forty-plus accusatory faces look my way.
'Actually, I believe we're raising far fewer than we would on a waterfall project, as we're testing closer to development completion, and often. That way the defects we find are less likely to be propagated. TDD also means we get far better code onto our QA boxes.'
'Which is why we have fewer testers than we would on a waterfall project,' she retorts. I make a mental note to save the tester-developer ratio argument for another day. 'You help write the tests up front, yet you're still raising lots of defects on your QA environments.'
Pre-development tests, post-development testing, I seem to be getting the rap both ways.
'Actually,' I respond, 'I would consider the automated unit and integration tests as checking rather than testing.' Before I can elaborate further there is uproar in the room, particularly from the agile evangelists and TDD aficionados. I've just opened a can of worms. Order is called over the multitude of discussions and we agree to discuss it in more detail in a follow-up meeting. At the end of the retrospective I feel lucky to get out alive, but I'm glad I've put a renewed focus on our test strategy, particularly on what I feel are the complementary practices of TDD and exploratory testing.
Tuesday–Day 45–last day of Iteration 9
Things are quite hectic. We have one eye on planning and preparing for the next iteration, but at the same time we are trying to test as many stories as possible for this one. Lindsay is particularly concerned about how the velocity will look, as none of the stories count as complete until they are through test and signed off.
Sign-offs are an overhead I could often do without. Gupta, the BA, and Patrice, our onsite stakeholder and product owner, pick up stories that have passed QA in order to give them a final stamp of approval. They're especially keen to get this done today as we have an external demo to our customers on Thursday. The problem for us is that at this early stage of the project many of the stories are quite technical or involve resetting a lot of test data and conditions, an onerous task that often falls to the test team. Worse still, they often want to use the QA environment during the sign-off session, which causes a fair few contention issues for us. Not an ideal situation, and one I need to raise with Lindsay and my offsite test manager asap.
Other than my contentious 'testing versus checking' viewpoint (meeting yet to be arranged), another outcome from the retrospective was the suggestion to break the development team up into more manageable streams. I discussed with Bobby and Josh how we would manage this. What should we do? Embed ourselves with each team? But if we do that, will we lose our shared knowledge? Will we lose sight of the bigger picture? Many of the other team members rely on our broad knowledge of the application. My other fear is that we lose even more of our objectivity as testers the more closely we become integrated with the devs. For me this is already a concern that we have to manage carefully.
In the meantime I look forward to the challenge of Iteration 10 with a sense of excited urgency and a degree of welcome apprehension. Here's hoping tomorrow's demo goes well...