I'm working on refining our definitions for test results and have the following:
Pass - The test executes and all verification points pass (behavior matches requirements).
Fail - The test executes and one or more verification points fail (behavior does not meet requirements).
Blocked - The test case cannot be executed because of a defect in the product under test or an associated product (for example, the software cannot even be installed).
Skipped - The tester determined the test did not need to be run and did not execute it (for example, they knew it would fail because of an already-logged defect).
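To make the distinctions concrete, here is a minimal sketch of how these four results could be modeled in code. The enum name `TestResult` and the helper `was_executed` are hypothetical, just illustrating that Blocked and Skipped are fundamentally different from Pass/Fail because the test never ran:

```python
from enum import Enum

class TestResult(Enum):
    """Possible outcomes for a planned test case (names are illustrative)."""
    PASS = "pass"        # executed; all verification points passed
    FAIL = "fail"        # executed; one or more verification points failed
    BLOCKED = "blocked"  # could not execute due to a defect in the product or a dependency
    SKIPPED = "skipped"  # deliberately not executed (e.g., known failure already logged)

def was_executed(result: TestResult) -> bool:
    # Only PASS and FAIL imply the test actually ran to completion;
    # BLOCKED and SKIPPED record why it did not run.
    return result in (TestResult.PASS, TestResult.FAIL)

print(was_executed(TestResult.PASS))     # True
print(was_executed(TestResult.BLOCKED))  # False
```

Keeping the executed/not-executed split explicit like this is one way to see why a fifth, ambiguous state muddies reporting.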
There is a contingent of folks who want to use the following result:
Pass with Exceptions - The test case passed all of its verification points, but an anomaly was found in a related product or in an area unrelated to the requirements under test. For example, while testing load performance I notice an unrelated defect in the GUI.
I am opposed to this definition because its intent is unclear, and I have already seen it get redefined to allow failing test cases to "Pass with Exceptions."
Has anyone seen any published standards that provide guidance in this area?