

Which is the best open source automation tool for desktop application testing?

MIDHUN DAS asked on September 12, 2017 - 3:00am | Replies (0).

I am in search of an open source tool so that I can automate testing of a Windows Desktop application.


No answers yet

Katrina Brown asked:

Is there a principle that covers a UAT client tester requesting functional tests from the dev team for use in creating UAT tests?

I believe this to be a conflict of interest, or simply a matter of ethics. I'm having trouble locating a clear statement on this type of situation.

Chris N replied on September 8, 2017 - 8:36am.

I think there are other things that should be considered when answering this question.

1.  Is the software being built for in-house use or for an external market? I would never ask Microsoft for their test cases for a product, but when I have had a custom application developed, I have asked for the supplier's test cases. For our projects the customer asks for our test cases all the time. While the previous poster mentioned different perspectives for functional vs. UAT testing, I still do some business process tests during functional testing.

2.  The product under test may be unfamiliar to the business users, so depending on the level of detail, they may be using the test cases as training-type documents to help them understand how to construct their own tests. I have also done the reverse, asking the business for some of their test cases to better understand how they function as a business. Often business representatives sit with the IT team during testing, and testing is done concurrently or in parallel because of this lack of knowledge. It can be a learning experience for both sides on how to better test applications.

3.  One of the purposes of testing is to collect information on whether the system is ready for production/acceptance. The business partner often has a large voice in this decision, so sharing our tests and the results of our tests helps provide a level of confidence that the system can go live.

I don't advocate that the client just rerun the tests verbatim, though I don't think there is an ethical question if they do. The client is your partner in the development effort, and the more you ensure everyone is on the same page with regard to functionality and fitness for use, the better off you will be in the long run.

Stewart Lauder replied:

I'm not sure about a principle, but this request shouldn't be encouraged, as UAT and system testing prove different things: UAT covers business processes, and system testing covers functional changes. Therefore the UAT testers should be writing their own test scripts.

Justin Rohrman replied:

In my experience, it is fairly common for customers to request some sort of testing guidance from the dev or test team. I have shipped unit tests and UI automation as part of the delivery for some clients. It probably does bias their testing, but I don't think there are any ethical dilemmas here unless I'm misunderstanding the question.


End to End testing in Agile projects

Jo Rand asked on August 10, 2017 - 11:27pm | Replies (2).

I am interested in hearing your thoughts on how end-to-end testing across multiple applications could or should be managed across agile teams, where each team is responsible for an application/system that integrates with another application/system.

I'd be interested in knowing how other companies handle this at a strategic and project level. Thanks.

Praveen Chakravarthy D G replied:

In my view, this is a very critical topic for large agile programs. If there are clear stories on interfaces and integrations, the work can be kept in the scope of a sprint and tested there. A few more thoughts from practical implementations:

1. Add tasks in the beginning of the sprint keeping in view the interfaces that are available and can be tested.

2. I have actually added a parallel team that works on the items that slip or are constrained from being part of a sprint. This parallel team does the interface testing, non-functional testing, e2e scenarios, etc., and had stories that needed to be tested and validated before each program increment. Basically, the sprint teams work within the sprint, and these parallel teams work outside the sprint but within the program increment.

FYI: a program increment is simply a set of four sprints that together deliver a meaningful module at the end.

Hope this helps; if not, ignore. :-)


Justin Rohrman replied:

This depends on how the integration works. When I have done this in the past, one team was responsible for creating data in a specific format that other services could consume. The other team was responsible for testing that their product could consume data in that particular format. We also found that it can be useful to have a task on the work queue for a person to spend some time testing how different products integrate.
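The producer/consumer arrangement described above can be exercised as a lightweight contract check on both sides. Here is a minimal sketch in Python; the record format (an `id` and a `timestamp` field) and the `validate_record` helper are my own illustration, not anything from the original posts:

```python
import json

# Hypothetical shared contract: required fields and their expected types.
CONTRACT = {"id": int, "timestamp": str}

def validate_record(record: dict) -> list:
    """Return a list of contract violations (an empty list means the record conforms)."""
    errors = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return errors

# Producer-side check: the data we emit conforms to the contract.
produced = json.loads('{"id": 42, "timestamp": "2017-08-10T23:27:00Z"}')
assert validate_record(produced) == []

# Consumer-side check: a malformed record is flagged, not silently accepted.
malformed = {"id": "42"}  # id is a string, timestamp is missing
assert validate_record(malformed) == ["wrong type for id: str",
                                      "missing field: timestamp"]
```

The point of running the same check in both teams' suites is that a format change breaks a test on whichever side drifted, instead of surfacing later as an integration failure.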


Is it a standard practice to start all the Test Case Heading from the word 'Verify'?

Dilusha K asked on August 5, 2017 - 6:10am | Replies (3).

Example:
Verify show .................
Verify Create................
Verify Edit................

Stewart Lauder replied:

I personally always try to avoid using the word "verify" at any point when writing test cases, as it suggests we are "checking" and not "testing." I tend to make my heading a clear statement aligned to what's being tested.

Zephan Schroeder replied:

Starting test cases with "Verify ", "Check that the ", or a similar boilerplate prefix is somewhat common, but it is not a standard practice and is not recommended. I personally find such static prefixes counterproductive. They diminish the reader's ability to quickly scan a list or alphabetically recognize a group of tests, they consume valuable mental parsing effort (not to mention screen/paper real estate), and they add no value to the reader, IMHO.

I strongly recommend using a standardized test case naming convention focused on conveying summary information, so that a familiar tester can run the test without opening the details. I help drive consistency with the following naming convention:

Test Case Title Naming Convention:  
<Feature>: <Initial State> <Action[s]>[,] [Expect ]<Expected result>


  1. Homepage: Login as Admin user with a clean browser cache. Website authenticates and shows Admin user homepage (Admin menu + admin home content section)
  2. Homepage: Login as Normal user with a clean browser cache. Website authenticates and shows Normal user homepage
  3. Menus: Edit submenus each open successfully
  4. Menus: File submenus each open successfully
  5. Menus: Login as Admin user, menubar contains File, Edit, View, Links, and Administrator menus
  6. Menus: Login as Admin user, Admin submenus each open successfully
  7. Menus: Login as Admin user, Admin submenus each open successfully
  8. Menus: Login as Normal user, menubar contains File, Edit, View, Links menus (but no Admin menu)
  9. ...

Notice the titles above are sorted alphabetically, which conveniently also sorts them by feature and then by initial state.

QUIZ #1: Did you spot the duplicate test case? 
QUIZ #2: Can you quickly spot the gap in these high-level menu tests?
QUIZ #3: Is it easy to do parallel testing by assigning Admin user tests to one tester and Normal user tests to a different tester?

Test case titles can get long. You can use terminology, length limits, and other guidelines to produce consistent test titles that meet any additional restrictions. The point I want to emphasize is that test case titles get plenty long without adding filler words that add no actionable information.

OPEN QUESTION: Have you seen or used different test case title naming conventions or have other test case heading best practices? Comment here!

Sriharsha ng replied:

No. Although testers use "Verify", "Check", etc., it all depends on whether the test case title makes sense against the steps and expectations.

Amaya maheshwari asked:

What are the highly vulnerable areas in an ecommerce website that need to be tested properly?

How do you test the vulnerability of an ecommerce site? Is the applied security working properly or not?

And can the website be hacked by anyone or not?

Justin Rohrman replied:

This is highly dependent on the product you are working on, the technology stack used to build it, the team that built it, and how security has been handled so far. If you are asking because you want to start a security investigation, you will probably want to talk with your development team to organize that work.


As a general note: the OWASP top 10 list might be a decent place to start. You can find that here: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
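Some of that kind of guidance is mechanically checkable. As one concrete (and deliberately simple) starting point, here is a sketch that flags missing security-related HTTP response headers; the header list and the `missing_security_headers` helper are my own illustration, not an official OWASP tool:

```python
# Security-related response headers commonly recommended for web applications.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
]

def missing_security_headers(response_headers: dict) -> list:
    """Return the expected security headers absent from an HTTP response.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# Example: a response that sets only two of the expected headers.
headers = {
    "Content-Type": "text/html",
    "x-frame-options": "DENY",
    "X-Content-Type-Options": "nosniff",
}
assert missing_security_headers(headers) == [
    "Strict-Transport-Security",
    "Content-Security-Policy",
]
```

A check like this catches configuration drift, but it is no substitute for a real security review of authentication, payment, and session handling.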


Have you used qtpselenium.com to learn automation tools? please advise

John Alberta asked on July 13, 2017 - 1:04am | Replies (1).


Justin Rohrman replied:

I have never heard of that website, but that doesn't speak to how good (or not) the resources there are. Depending on what tool you want to learn, and this is especially true for the Selenium suite, there are probably free resources available on the web to help you get started. Selenium also has a massive open source community, which can come in handy for learning.

Jeff Checkner asked:

Quality processes in Continuous Integration/Deployment

In traditional waterfall and agile processes, we implement a test strategy to set expectations (and receive feedback) on the approach for a release (multiple sprints of work), and a test summary to capture the results of the execution (functional, security, and performance testing).

In a CI/CD model, when we have a two-week sprint and plan to deploy every two weeks, it seems a bit cumbersome to do a strategy and a summary every two weeks. Ideally the summary could be pulled from an automation tool and produced to a dashboard, so that should not be too bad.
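Pulling a summary out of an automation tool is usually straightforward, because most CI test runners can emit JUnit-style XML. A minimal sketch of condensing such a results file into dashboard numbers; the suite contents here are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit-style results, as emitted by many CI test runners.
RESULTS_XML = """
<testsuite name="regression" tests="3" failures="1">
  <testcase name="login_admin"/>
  <testcase name="login_normal"/>
  <testcase name="edit_menu">
    <failure message="element not found"/>
  </testcase>
</testsuite>
"""

def summarize(junit_xml: str) -> dict:
    """Condense a JUnit-style suite into the numbers a dashboard needs."""
    suite = ET.fromstring(junit_xml)
    cases = list(suite.iter("testcase"))
    failed = [case.get("name") for case in cases
              if case.find("failure") is not None]
    return {"suite": suite.get("name"), "total": len(cases),
            "failed": failed, "passed": len(cases) - len(failed)}

summary = summarize(RESULTS_XML)
assert summary == {"suite": "regression", "total": 3,
                   "failed": ["edit_menu"], "passed": 2}
```

A script like this running after each pipeline stage can replace most of the manual test summary document.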

Michael Tomara replied:

I would add that having a long-term strategy should not be incompatible with two-week release periods unless you are releasing something brand new each time. So this depends on the scope of each release, and the release scope defines the scope of testing as well: it is usually quite a challenging task to test absolutely everything within short release periods. Of course, the more coverage the better, but some priorities must be established.

Sure, automated testing tools seem to be the best way to deal with the lack of time here, given a solution that is compatible with CI and at the same time can perform regression testing properly. By the way, if you need to run some UI regression tests, I would suggest trying Screenster, a visual testing tool designed to help with UI testing in a CI environment.

Justin Rohrman replied:

I'm not sure what you mean by setting a strategy and summary, but it sounds like you mean writing something. Strategies (and other common test artifacts) don't have to be heavy things that drag you down. If they aren't helping you do a good job testing, or release more often, I would encourage you to abandon them.

Regarding CD, not to be dismissive of your point, but releasing every two weeks isn't close to continuous delivery. This is the standard compressed-waterfall interpretation of agile that many companies take. The more often you want to release software, the more crucial it becomes to have useful quality-perceiving activities like pairing, layering of automation, build pipeline automation, and monitoring. It also means your testers must be technical, for some definition of the word technical.


To start with though, I'd encourage you to reevaluate what strategy and summaries are, and why you need to create them every two weeks. 

StickyMinds is a TechWell community.
