We made a similar transition recently, but ours was actually driven by QA. Querying for lists with children of a certain state or title was a nightmare for us, and work ended up being lost. Instead, we updated the State workflow on Bugs and PBIs to include "Ready to Test" and "Test In Progress", along with some supporting Reasons.
The developer who finishes the final task on a top-level item changes the State to "Ready to Test", and querying for that is simple. We've found it vastly improves the visibility of dev-complete items.
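For reference, a flat-list work item query along these lines picks those items up. This is only a sketch in WIQL (TFS's work item query language); it assumes the custom "Ready to Test" state described above and the standard Scrum work item types:

```
SELECT [System.Id], [System.Title], [System.AssignedTo]
FROM WorkItems
WHERE [System.WorkItemType] IN ('Bug', 'Product Backlog Item')
  AND [System.State] = 'Ready to Test'
ORDER BY [System.ChangedDate] DESC
```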
How can we track testing in sprints when test tasks have been removed to change the definition of "Done"?
We use Team Foundation Server for development and testing. Our Director has told our Scrum Master to go to 1-week sprints AND to remove testing tasks from the product backlog to change the definition of "Done". We could use a suggestion on how to ensure testing tasks get assigned, and how to track their progress, when our only tools are TFS and Google Docs. Having been forced to use Google Docs to track testing in the past, I expect issues trying to share, for instance, a spreadsheet for this: tasks could be forgotten or missed, and updates may go uncommunicated.
6 Answers
It is challenging to use a spreadsheet to track progress and share it with the team so they can update their results. In the past we used an Excel spreadsheet, and here is how we maintained and tracked the results every Sprint.
The QA Lead created a spreadsheet listing all the test cases for that Sprint. Another column held the QA assigned to each test, and the next column recorded Pass or Fail (with the Defect # if it failed). We also had a column noting whether the test was automated. For a 2-week Sprint, we had two tabs, one for Week 1 and one for Week 2.
This spreadsheet could get large depending on the test matrix (the databases and operating systems the tests had to run against).
The team then made a copy of this spreadsheet and updated their test execution results in it. It was the lead's responsibility to copy those results back to the master spreadsheet at the end of each week.
To track testing, we extracted various statistics from the spreadsheet data using macros (the number of unique test cases in the Sprint, tests completed, tests passed, tests failed, and tests automated) and shared them with stakeholders to report on the team's progress.
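If macros are not an option (for example, once the sheet lives in Google Docs), the same counts can be pulled from a CSV export of the master spreadsheet with a short script. This is only a sketch: the column names below mirror the layout described above but are assumptions, not the actual spreadsheet.

```python
import csv
from collections import Counter

def sprint_stats(path):
    """Summarize test execution results from a CSV export of the master sheet.

    Assumed columns (based on the layout described above):
    Test Case, Assigned QA, Result (Pass/Fail), Defect #, Automated (Y/N)
    """
    results = Counter()
    test_cases = set()
    automated = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            test_cases.add(row["Test Case"])
            result = row["Result"].strip().lower()
            if result in ("pass", "fail"):
                results[result] += 1
            if row["Automated"].strip().lower() in ("y", "yes"):
                automated += 1
    return {
        "unique test cases": len(test_cases),
        "completed": results["pass"] + results["fail"],
        "passed": results["pass"],
        "failed": results["fail"],
        "automated": automated,
    }

if __name__ == "__main__":
    for name, value in sprint_stats("sprint_results.csv").items():
        print(f"{name}: {value}")
```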
Of course, it takes some time to maintain such a spreadsheet but we had success doing it this way. Hope this helps.
Thank you for your responses! Both are very helpful.
Daniel, we've discussed your solution and will give this a try on a project to see how it works for our teams (Development is in favor of this but wants to try an alternative first).
Praveena, good solution as well. I'm creating a matrix similar to your description, and we'll give it a try soon on a project that's about to start.
Thank you.
If your Definition of Done were changed to remove testing tasks, then you would be unemployed.
From your question, I'm not sure where your process ends and where your use of tools to implement that process begins.
I just joined this site, so I apologize for replying to this months after it was posted. Daniel and Praveena both have very good answers, and I have taken notes from them. To add on, one thing some of our QA Analysts use here to create a test log is http://testnote.io/#!/ . It has been useful for keeping track of not only the testing that has been done, but also the anomalies (edge cases) that were found and how to reproduce them for a final QA task, if someone else were to tackle it.