Hi all,
How do you handle errors found in test cases during test execution, in a formal System Test phase, in a heavily regulated environment (medical device software: ISO 13485, IEC 62304)?
My instinct was to capture errors found in test cases (where they don't match the requirements or design assumptions, have incorrect 'expected results', or have incorrect test steps) as a defect of type 'test case error', assigned to me or the test case author, to be resolved by correcting the test case after the end of that test cycle. That way, the effort spent correcting duff test cases is logged against the defect raised for them, and we can prioritise that defect according to available resources in the next sprint.
However, my software dev manager disagrees and thinks we should just correct test cases on the fly, during execution, as we find errors, so that the tests pass within that test cycle. He says any effort spent fixing test cases should be logged against our 'test execution' task for that feature. He fears an explosion of issues/defects in JIRA, and argues that if he found a software bug during unit testing, he would just correct it on the fly without raising a defect...
I'm uneasy about this approach for a few reasons:
1. I want some evidence of *why* a test case changed after the test execution activity started
2. I want to be able to see how much time is being spent correcting test cases - to trigger some process improvement in the review stages if this turns out to be a significant number of hours
3. If we end up executing a different set of tests from the one planned at the start of the sprint (same tests, but with different 'expected results' because the tester changed them 'on the fly' during execution), wouldn't auditors have something to say about that?
Does anybody have any thoughts or experience on this? The key point is that we are heavily regulated and need evidence of everything we do.
Thanks for your help/suggestions!