Recent Q&A Activity

How do I analyze JMeter results?
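(A hedged sketch, not from the thread itself: assuming the results were saved in JMeter's default CSV-format JTL file, a small script can compute the usual per-label summary of sample count, average response time, and error rate. The columns `label`, `elapsed`, and `success` are standard JTL fields; the sample data below is invented for illustration.)

```python
import csv
import io

def summarize_jtl(jtl_text):
    """Summarize a JMeter CSV-format JTL: per-label sample count,
    average elapsed time (ms), and error rate."""
    stats = {}
    for row in csv.DictReader(io.StringIO(jtl_text)):
        s = stats.setdefault(row["label"], {"count": 0, "elapsed": 0, "errors": 0})
        s["count"] += 1
        s["elapsed"] += int(row["elapsed"])
        if row["success"].strip().lower() != "true":
            s["errors"] += 1
    return {
        label: {
            "samples": s["count"],
            "avg_ms": s["elapsed"] / s["count"],
            "error_rate": s["errors"] / s["count"],
        }
        for label, s in stats.items()
    }

# Invented sample data in the default JTL CSV layout (subset of columns).
sample = """timeStamp,elapsed,label,responseCode,success
1700000000000,120,Login,200,true
1700000001000,340,Login,500,false
1700000002000,80,Search,200,true
"""

print(summarize_jtl(sample))
```

For real test runs, read the JTL file from disk instead of a string; the same per-label rollup is what JMeter's own Aggregate Report listener shows.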

In terms of testing best practices, should the development team have access to the QA database? If not, how do developers troubleshoot and debug data-related issues?

Is there a way or method to quantify the quality of a software module or product? If so, how do I obtain it?
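(One common, partial answer, not from the thread itself: quality is often quantified with simple metrics such as defect density and defect removal efficiency. The sketch below assumes you track defect counts and code size in KLOC; the numbers are purely illustrative.)

```python
def defect_density(defects_found, kloc):
    """Defect density: defects per thousand lines of code (KLOC)."""
    if kloc <= 0:
        raise ValueError("kloc must be positive")
    return defects_found / kloc

def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE (%): share of total defects caught before release."""
    total = found_before_release + found_after_release
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_before_release / total

# Illustrative numbers only.
print(defect_density(12, 4.0))              # defects per KLOC
print(defect_removal_efficiency(45, 5))     # percent caught pre-release
```

Neither metric alone captures "quality"; they are most useful tracked as trends across releases alongside coverage and escaped-defect data.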

I have both VS2010 and VS Test Professional 2013 installed. When a work item hyperlink is sent to me, it opens in VS2010. Is there a setting so that all links open in VS 2013 Test Professional?



We are starting a new project for a new client who uses Linux. I have not tested on Linux before, so I'm not sure what the differences are between Linux and Windows 7.

Are there any differences? Does it look, act, and functionally work the same way as on the Windows 7 platform?



I want to write responsive design test cases. Please provide a template.

In waterfall, there is an industry standard for each test level. Is there the same for Agile?

Please give your valuable suggestions; this will help me a lot throughout my career.

Additional skills like:

1. Any technologies (automation tools, mobile technology, Java, anything else)

2. About testing

3. Any other

Beyond the skills above, please recommend any others from your perspective.

Please suggest. Thanks!

I am a QA engineer, and I want to find a way to gauge these quality checkpoints. If anyone has experience or guidance, I would really appreciate any help.

I'm working on refining our definitions for test results and have the following:

Pass - Test executes and all verification points pass (behavior matches requirements)

Fail - Test executes and one or more verification points fail (it does not meet requirements)

Blocked - Test case cannot be executed due to a defect in the current or an associated product (e.g., I can't even install the software)

Skipped - Tester determined that the test did not need to be run and did not execute it (e.g., they knew it would fail due to an already logged defect)
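(A sketch, not part of the original post: the four definitions above map naturally onto an enumeration, which makes the "did this test actually run?" distinction explicit in reporting code. The outcome names mirror the post; the `executed` helper is an added assumption.)

```python
from enum import Enum

class TestResult(Enum):
    PASS = "Pass"        # executed; all verification points pass
    FAIL = "Fail"        # executed; one or more verification points fail
    BLOCKED = "Blocked"  # cannot execute due to a defect in the product or an associated product
    SKIPPED = "Skipped"  # tester chose not to execute (e.g., known logged defect)

    @property
    def executed(self):
        # Only Pass and Fail represent tests that actually ran;
        # Blocked and Skipped should not count toward pass-rate denominators.
        return self in (TestResult.PASS, TestResult.FAIL)

print(TestResult.BLOCKED.executed)
```

Keeping Blocked and Skipped out of the executed set matters when you compute pass rates: a run with many blocked tests should look different from a run with many failures.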

It is a time-and-materials contract. I want to add financial incentives for time, but I'm worried this will cause people to rush execution to make the bonus deadline. Besides setting a realistic deadline, I'm trying to think of a way to add a component that speaks to the quality of the test run.

After 20 years of leading AD teams, I'll be transitioning into leading my company's shared software QA team. This is a very large company (Fortune 50) where IT budgets are determined by LOB. The AD teams own the budgets and allocate funds to the shared QA org. Our annual planning cycle starts in July and closes in November for the subsequent year. The problem is multifaceted:


StickyMinds is a TechWell community.
