We use Team Foundation Server (TFS) for development and testing. Our Director has told our scrum master to move to 1-week sprints AND to remove testing tasks from the product backlog, changing the definition of "Done". We could use a suggestion on how to ensure testing tasks still get assigned, and their progress tracked, when our only tools are TFS and Google Docs. Having been forced to use Google Docs to track testing in the past, I expect problems with sharing -- for instance -- a spreadsheet for this purpose: tasks could be forgotten or missed, and updates may go uncommunicated.
How do you all handle errors in test cases found during test execution, in a formal System Test phase, in a heavily regulated environment (medical device software: ISO 13485, IEC 62304)?
My instinct was to capture errors found in test cases -- where they don't match the requirements or design assumptions, or have incorrect expected results or test steps -- as a defect (type 'test case error'), assigned to me or the test case author, to be fixed by correcting the test case after the end of that test cycle. This would mean we log the effort spent correcting duff test cases against the defect raised for them, and can prioritise that defect according to available resources in the next sprint.
However, my software dev manager disagrees: he thinks we should simply correct test cases on the fly, during execution, as we find errors in them, so the tests pass within that test cycle. He says any effort spent fixing test cases should be logged against the 'test execution' task for that feature. He fears an explosion of issues/defects in JIRA, and his argument is that if he found a software bug during unit testing, he would just correct it on the fly without raising a defect.
I'm uneasy about this approach for a few reasons:

1. I want some evidence of *why* a test case changed after the test execution activity started.
2. I want to be able to see how much time is being spent correcting test cases, to trigger some process improvement in the review stages if this turns out to be a significant number of hours.
3. If we end up executing a different set of tests than was planned at the start of the sprint (the same tests, but with different expected results because the tester changed them on the fly during testing), wouldn't auditors have something to say about this?
Has anybody got any thoughts or experience on this? The key point is that we are heavily regulated and need evidence of everything we do.