Quality Management Questions

By madhu mittal - July 27, 2015 - 4 Answers

Hi, I have 4 years of experience in Oracle Apps, but there is a maternity gap of 6 years and I want to work again. I am now interested in testing. Please advise how to get started.

Is there a way or method to quantify the quality of a software module or product? If yes, how can it be obtained?
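One common starting point is to track simple ratio metrics over a module. The sketch below (a minimal illustration, with made-up figures, not real project data) computes two widely used ones: defect density (defects per thousand lines of code) and test pass rate.

```python
# Minimal sketch of two common quality metrics for a software module.
# The numbers used below are illustrative only.

def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

def pass_rate(passed: int, executed: int) -> float:
    """Fraction of executed test cases that passed."""
    return passed / executed

density = defect_density(defects_found=12, lines_of_code=8000)  # 1.5 defects/KLOC
rate = pass_rate(passed=95, executed=100)                       # 0.95
print(f"Defect density: {density} per KLOC, pass rate: {rate:.0%}")
```

Neither metric is meaningful in isolation; they are most useful tracked as trends across releases of the same module.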

With technology disruptions happening at a fast pace, testing gets embedded as part of development, and testing as a separate phase may not exist. So what is the future of testing professionals 10 years from now?

By QAE m - June 10, 2015 - 6 Answers

In terms of testing best practices, should the development team have access to the QA database? If not, how do developers troubleshoot and debug data-related issues?

Please give your valuable suggestions; this will help me a lot throughout my career.

 

Additional skills like:

1. Any technologies (automation tools / mobile technology / Java / any...)

2. About testing

3. Any other

Beyond the skills above, please recommend any others from your perspective.

Please suggest. Thanks!

After 20 years of leading AD (application development) teams, I'll be transitioning into leading my company's shared software QA team. This is a very large company (Fortune 50) where IT budgets are determined by line of business (LOB). The AD teams own the budgets and allocate funds to the shared QA org. Our annual planning cycle starts in July and closes in November for the subsequent year. The problem is multi-faceted:

  1. As part of annual planning, the AD teams are not allocating, or are under-allocating, dollars to support QA.
  2. QA leadership needs to beat the bushes for money to support IT projects (e.g., allocating funds to our resource roles).
  3. Constantly battling for money means the only resources supported are the ones specifically budgeted to test a new development project; we can't build a center of expertise, invest in tools, take training, improve processes, and ultimately improve quality, due to funding constraints.

Are there any best practices for defining a standard allocation model? For example, X% of the AD budget funds core QA resources (FTEs, key consultants, including resources for the CoE) and Y% funds project-related work. Internally we're calling this a "blended model".
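The arithmetic of such a blended model can be sketched in a few lines. The percentages and budget figure below are purely hypothetical assumptions for illustration; the actual X/Y split is exactly what the question is asking about.

```python
# Hypothetical "blended model" split: a fixed X% of each AD team's budget
# funds core QA capability (CoE, tools, training) and Y% funds project
# testing work. All figures here are illustrative assumptions.

def blended_allocation(ad_budget: float, core_pct: float, project_pct: float) -> dict:
    """Split an AD team's budget into core-QA and project-testing buckets."""
    return {
        "core_qa": ad_budget * core_pct,
        "project_testing": ad_budget * project_pct,
    }

# e.g., a $1M AD budget with a 3% core / 7% project split
alloc = blended_allocation(ad_budget=1_000_000, core_pct=0.03, project_pct=0.07)
print(alloc)
```

The advantage of a fixed core percentage is that CoE funding no longer depends on winning project-by-project negotiations each planning cycle.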

All thoughts are appreciated.

I am writing to enquire whether anyone has had exposure to testing the following applications and can offer any advice on particular techniques or tools that have proved useful. Any assistance you could offer would be most appreciated.

 - Microsoft Active Directory
 - Microsoft Active Directory Application Mode
 - Microsoft Forefront Identity Manager

The challenge
Development and configuration changes made to one or more of the applications detailed above can have far-reaching and unwanted implications, both in the applications themselves and in the upstream/downstream applications they interact with. Functional, front-end-driven testing alone will not provide confidence from a regression perspective.

The solution
A tool or technique is required that can inspect, export, and compare the configuration, code, and data changes triggered by a modification.
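The snapshot-and-compare technique described above can be sketched generically. This is only an illustration, assuming the configuration can be exported as flat attribute/value pairs (for Active Directory, such an export might come from an LDIF dump or the ActiveDirectory PowerShell cmdlets); the attribute names below are made up for the example.

```python
# Sketch of snapshot-and-compare regression checking: export configuration
# before and after a change, then diff the snapshots to surface *every*
# triggered change, not just the one you intended to make.
# Snapshots are modelled here as flat dicts of attribute -> value.

def diff_snapshots(before: dict, after: dict) -> dict:
    """Return attributes added, removed, and changed between two exports."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys() if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical AD policy attributes, before and after a modification
before = {"pwdHistoryLength": "24", "lockoutThreshold": "5"}
after = {"pwdHistoryLength": "24", "lockoutThreshold": "10", "minPwdAge": "1"}
print(diff_snapshots(before, after))
```

Anything reported by the diff that was not part of the intended change is a candidate regression to investigate, which gives the confidence that front-end testing alone cannot.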

 

I'm working on refining our definitions for test results and have the following:

Pass - Test executes and all verification points pass (behavior matches requirements).

Fail - Test executes and one or more verification points fail (behavior does not meet requirements).

Blocked - Test case cannot be executed due to a defect in the current or an associated product (something like: I can't even install the software).

Skipped - Tester determined that the test did not need to be run and did not execute it (for example, he or she knew it would fail due to an already-logged defect).

 

There is a contingent of folks who want to use the following result:

Pass with Exceptions -- Test case passed all verification points, but an anomaly was found in a related product or in an area not related to the requirements being tested. For example: I'm looking at load performance and I see an unrelated defect in the GUI.

 

I am opposed to this definition, as it seems unclear with regard to intent, and I have seen the definition get changed to allow failing test cases to "Pass with Exceptions."
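One way to keep the intent unambiguous is to derive the result strictly from the verification points and handle unrelated anomalies as separately logged defects. The sketch below (a hypothetical classifier, not any published standard) encodes the four definitions above and shows why a fifth status isn't needed.

```python
from enum import Enum

class TestResult(Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"
    SKIPPED = "skipped"

def classify(verification_points=None, blocked=False, skipped=False) -> TestResult:
    """Derive a test result strictly from execution state and verification points.

    Unrelated anomalies (e.g., a GUI glitch noticed during a load test) never
    alter the result; they should be logged as separate defects instead, which
    removes the need for an ambiguous "Pass with Exceptions" status.
    """
    if skipped:                       # tester chose not to run the test
        return TestResult.SKIPPED
    if blocked:                       # a defect prevented execution entirely
        return TestResult.BLOCKED
    # Executed: result depends only on the verification points
    return TestResult.PASS if all(verification_points) else TestResult.FAIL

print(classify([True, True]))   # TestResult.PASS
print(classify([True, False]))  # TestResult.FAIL
```

Under this scheme, the "Pass with Exceptions" scenario is simply a Pass plus a new defect report against the unrelated area.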

 

Has anyone seen any standards published that provide guidance in this area?

 

Thanks,

 

Mike

 

I am a QA engineer, and I want to find a way to gauge these quality checkpoints. If anyone has any experience or guidance, I would really appreciate the help.

Hello,

 

I am new to this site and have found it very useful. I worked as a software tester for 7 years and would like to return to the testing market after a 2-year gap from the industry, as my child is young.

I have worked with Agile methodologies and done some scripting in Ruby (basic knowledge) with Cucumber.

I would like to learn a few technologies and an automation tool before I start looking for jobs. Please advise me on which scripting language and automation tools I should focus on.

Since I have a gap of 2 years, I would like to gain skills before I start interviewing.

Please help!


StickyMinds is a TechWell community.
