Test Management Questions

For example, there could be multiple outputs for one input depending on the state of the application, or one input could have only one output.
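To put that in code, here is a minimal sketch using a made-up shopping-cart class, where the same query returns different results depending on the application's state:

```python
# Minimal sketch (hypothetical Cart class): the same call yields different outputs
# depending on the state the application is in, so each test case has to pin down
# the starting state as well as the input.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total_items(self):
        return len(self.items)

cart = Cart()
assert cart.total_items() == 0   # same query, empty-cart state
cart.add("book")
assert cart.total_items() == 1   # same query, one-item state
```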

I am interested in others' takes on visual testing alongside functional testing with tools like Applitools.
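For context, the kind of combination I have in mind looks roughly like this (a minimal sketch assuming the eyes-selenium Python package; the URL, element ID, and app/test names are hypothetical):

```python
# Minimal sketch: one functional assertion plus one Applitools visual checkpoint.
# Assumes the eyes-selenium package; the URL and element ID are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from applitools.selenium import Eyes

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"   # normally supplied via env/config

try:
    eyes.open(driver, "My Web App", "Login page - functional + visual")
    driver.get("https://example.com/login")                # hypothetical URL

    # Functional check: the login button exists and is enabled.
    login_button = driver.find_element(By.ID, "login")     # hypothetical element ID
    assert login_button.is_enabled()

    # Visual check: compares the whole window against the stored baseline,
    # catching layout/CSS regressions the assertion above would never see.
    eyes.check_window("Login page")

    eyes.close()                 # raises if visual differences were found
finally:
    eyes.abort_if_not_closed()   # clean up if the test bailed out before close()
    driver.quit()
```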

I am searching for articles about software installation/uninstallation testing: what must be tested, best practices, how to create test cases, and so on.
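To make the question concrete, this is the sort of automated smoke check I have in mind (a minimal Windows-oriented sketch; the installer name, silent flag, and paths are all made up):

```python
# Minimal install/uninstall smoke check. Installer name, '/S' silent flag,
# and install paths are hypothetical; adapt them to the product under test.
import subprocess
from pathlib import Path

INSTALL_DIR = Path(r"C:\Program Files\MyApp")     # hypothetical install location
INSTALLER = "setup.exe"                           # hypothetical installer
UNINSTALLER = INSTALL_DIR / "uninstall.exe"       # hypothetical uninstaller

def check_installed():
    # A successful install should leave the core artifacts in place.
    assert INSTALL_DIR.exists(), "install directory missing"
    assert (INSTALL_DIR / "MyApp.exe").exists(), "main executable missing"

def check_removed():
    # A clean uninstall should leave nothing behind.
    assert not INSTALL_DIR.exists(), "files left behind after uninstall"

subprocess.run([INSTALLER, "/S"], check=True)        # hypothetical silent-install flag
check_installed()
subprocess.run([str(UNINSTALLER), "/S"], check=True)
check_removed()
```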

Thanks for the help :)

By Kurien Koshy - November 10, 2014 | 1 Answer

What are the different kinds of standard metrics that need to be considered for a product at different stages, including during in-house testing and after release of the product to clients?
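To make the question concrete, here are a few metrics that are often tracked across those stages, computed from made-up counts (a minimal sketch, not a recommended set):

```python
# Illustrative only: all counts below are made up.
pre_release_defects = 120      # found during in-house testing
post_release_defects = 15      # reported by clients after release
size_kloc = 80                 # product size in thousands of lines of code
tests_executed, tests_passed = 450, 432

defect_density = (pre_release_defects + post_release_defects) / size_kloc
dre = pre_release_defects / (pre_release_defects + post_release_defects) * 100
pass_rate = tests_passed / tests_executed * 100

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"Defect removal efficiency: {dre:.1f}%")
print(f"Test pass rate: {pass_rate:.1f}%")
```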

Suggestions and input welcome

 

By Kim Shearer - October 31, 2014 | 5 Answers

Looking for information about testing online web gaming applications.

Hi all,

How do you all handle errors found in test cases during test execution, in a formal System Test phase, in a heavily regulated environment (Medical Device Software - ISO 13485, IEC 62304)?

My instinct was that I should capture errors found in test cases (where they don't match the requirements or design assumptions, have incorrect 'expected results', or have incorrect test steps) as a defect (type 'test case error'), assigned to me or the test case author, to be fixed/resolved by correcting the test case after the end of that test cycle. This would mean we log effort spent correcting faulty test cases against the defect raised for them, and can prioritise that defect according to available resources in the next sprint.
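In Jira terms, I picture raising something like this (a minimal sketch against the Jira REST API; the 'Test Case Error' issue type, project key, and labels are assumptions about our own configuration):

```python
# Minimal sketch: log a test case error as its own tracked issue via Jira's REST API.
# The 'Test Case Error' issue type, project key, and labels are hypothetical and
# would need to exist in your Jira configuration; credentials are placeholders.
import requests

JIRA_URL = "https://your-company.atlassian.net"
AUTH = ("tester@example.com", "API_TOKEN")

payload = {
    "fields": {
        "project": {"key": "MED"},                        # hypothetical project key
        "issuetype": {"name": "Test Case Error"},         # hypothetical issue type
        "summary": "TC-123: expected result does not match approved requirement",
        "description": "Found during System Test cycle 2; test case to be "
                       "corrected and re-reviewed after the cycle completes.",
        "labels": ["test-case-error", "system-test-cycle-2"],
    }
}

resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()
print("Created", resp.json()["key"])
```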

However, my software dev manager disagrees and thinks we should just correct test cases on the fly, during execution, as we find any errors with them, so the tests just pass within that test cycle. He says any effort 'fixing' test cases should be logged against our 'test execution' task for that feature. He fears an explosion of issues/defects in JIRA, and his argument is that if he were to find a software bug during unit test, he would just correct it on the fly, without raising a defect... 

I'm uneasy about this approach for a few reasons:
1. I want some evidence of *why* a test case changed after the test execution activity started
2. I want to be able to see how much time is being spent correcting test cases - to trigger some process improvement in the review stages if this turns out to be a significant number of hours
3. If we end up executing a different set of tests than was planned at the start of the sprint (same tests, but with different 'expected results' because the tester changed them 'on the fly' during testing), wouldn't auditors have something to say about this?

Has anybody got any thoughts or experience on this? The key is that we are heavily regulated and need evidence of everything we do.

Thanks for your help/suggestions.

We have an established testing framework and tools (QC).

I am trying to make a proposal for automation testing of an e-learning website.
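As a starting point for the proposal, I am picturing candidate checks along these lines (a minimal Selenium sketch; the URL, element IDs, and course name are hypothetical):

```python
# Minimal sketch of one candidate automated check for the proposal:
# log in and confirm an enrolled course appears. All locators/URLs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://elearning.example.com/login")      # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("student1")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # The enrolled course should be visible on the learner's dashboard.
    course = driver.find_element(By.LINK_TEXT, "Introduction to Testing")
    assert course.is_displayed()
finally:
    driver.quit()
```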

Do application software deliverables in the IT industry get delayed because of developers or because of testers?

Which charts or process guidelines are valuable for measuring this?

I'm new to testing and am going to choose testing as my domain. I would like to know more about testing tools. Which will have better growth and more demand in the market?
