SIT-UAT harmonization is a concept whereby both teams collaborate from the beginning of the STLC. The SIT team shares its test cases with the UAT team, the UAT team reviews them, and any test cases duplicated between SIT and UAT are removed. As a result, each team executes a unique set of test cases, which yields cost savings.
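As a rough illustration of the deduplication step, here is a minimal sketch. The title-based matching and the `harmonize` helper are assumptions for illustration only; a real harmonization review would compare requirement coverage, steps, and expected results, not just titles.

```python
# Hypothetical sketch: drop UAT test cases that duplicate SIT ones,
# matching on a normalized title. Real harmonization would compare
# requirement coverage, test steps, and expected results as well.

def normalize(title):
    """Lowercase and collapse whitespace so trivial differences don't block a match."""
    return " ".join(title.lower().split())

def harmonize(sit_cases, uat_cases):
    """Return only the UAT cases not already covered by SIT."""
    sit_titles = {normalize(t) for t in sit_cases}
    return [t for t in uat_cases if normalize(t) not in sit_titles]

sit = ["Verify login with valid credentials", "Export report as PDF"]
uat = ["Verify Login with valid  credentials", "Approve purchase order"]
print(harmonize(sit, uat))  # -> ['Approve purchase order']
```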
I have a question about how you all handle errors found in test cases during execution of a formal System Test phase, in a heavily regulated environment (medical device software: ISO 13485, IEC 62304).
My instinct was to capture errors found in test cases (where they don't match the requirements or design assumptions, have incorrect 'expected results', or have incorrect test steps) as a defect (type 'test case error'), assigned to me or the test case author, to be fixed/resolved by correcting the test case after the end of that test cycle. This would mean we log the effort spent correcting duff test cases against the defect raised for them, and can prioritise that defect according to available resources in the next sprint.
However, my software dev manager disagrees and thinks we should just correct test cases on the fly, during execution, as we find errors in them, so the tests pass within that test cycle. He says any effort spent 'fixing' test cases should be logged against our 'test execution' task for that feature. He fears an explosion of issues/defects in JIRA, and his argument is that if he were to find a software bug during unit testing, he would just correct it on the fly, without raising a defect...
I'm uneasy about this approach for a few reasons:
1. I want some evidence of *why* a test case changed after the test execution activity started.
2. I want to be able to see how much time is being spent correcting test cases, to trigger some process improvement in the review stages if this turns out to be a significant number of hours.
3. If we end up executing a different set of tests than was planned at the start of the sprint (the same tests, but with different 'expected results' because the tester changed them on the fly during testing), wouldn't auditors have something to say about this?
Has anybody got any thoughts or experience on this? The key is that we are heavily regulated and need evidence of everything we do.
For one of the several applications I test, we're importing a significant amount of data from one database to another. My PM has asked me to research some "best practices" for verifying that the data imported correctly. Do you have any suggestions on where I could find such best practices to support my testing effort?
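Not a substitute for a proper best-practices reference, but two checks that commonly come up are comparing row counts and comparing per-row checksums between source and target. Below is a minimal sketch of both, using in-memory SQLite databases as stand-ins for the real source/target; the table name, columns, and `verify_import` helper are all assumptions for illustration.

```python
# Hypothetical sketch of two common data-migration checks:
#   1. row counts match between source and target
#   2. a per-row checksum (key + payload columns) matches row by row
# Two in-memory SQLite DBs stand in for the real source/target databases.
import sqlite3
import hashlib

def row_checksums(conn, table, columns):
    """Map each row's first column (assumed to be the key) to an MD5
    of all its column values joined with '|'."""
    cur = conn.execute(f"SELECT {', '.join(columns)} FROM {table}")
    sums = {}
    for row in cur:
        digest = hashlib.md5("|".join(str(v) for v in row).encode()).hexdigest()
        sums[row[0]] = digest
    return sums

def verify_import(src, dst, table, columns):
    """Return (counts_match, keys_with_mismatched_data, keys_missing_in_target)."""
    src_sums = row_checksums(src, table, columns)
    dst_sums = row_checksums(dst, table, columns)
    mismatched = [k for k in src_sums if k in dst_sums and src_sums[k] != dst_sums[k]]
    missing = [k for k in src_sums if k not in dst_sums]
    return len(src_sums) == len(dst_sums), mismatched, missing

# Demo: one row corrupted in the target, one row dropped entirely.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for c in (src, dst):
    c.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
src.executemany("INSERT INTO patients VALUES (?, ?)",
                [(1, "Ada"), (2, "Grace"), (3, "Edsger")])
dst.executemany("INSERT INTO patients VALUES (?, ?)",
                [(1, "Ada"), (2, "grace")])  # row 2 corrupted, row 3 missing
print(verify_import(src, dst, "patients", ["id", "name"]))
# -> (False, [2], [3])
```

Checksumming scales better than pulling both tables into memory for a field-by-field diff, and the mismatched/missing key lists give you concrete rows to investigate rather than a bare pass/fail.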