The main goal of endgame testing is to test the system end to end from the user's perspective. This should ensure continuity between components developed by different teams, continuity in user experience, and successful integration of new features. Endgame testing will often identify gaps that are difficult to discover inside agile teams, including flows across the product.
Test plans are essential for communicating intent and requirements for testing efforts, but excessive documentation creates confusion—or just goes unread. Try the 5W2H method. The name comes from the seven questions you ask: why, what, where, when, who, how, and how much. That's all you need to provide valuable feedback and develop a sufficient plan of action.
The more you rely on feedback from your automated tests, the more you need to be able to rely on the quality and defect-detection power of these tests. Unfortunately, instead of being the stable and reliable guardians of application quality they should be, automated tests regularly are a source of deceit, frustration, and confusion. Here's how you can start trusting your automated tests again.
I am interested in hearing your thoughts on how end-to-end testing across multiple applications could or should be managed across agile teams, where each team is responsible for an application or system that integrates with another application or system.
I'd be interested in knowing how other companies handle this from a strategic and project level. Thanks
In my view, this is a critical topic for large agile programs. If there are clear stories covering interfaces and integrations, that work can be kept in the scope of a sprint and tested there. A few more thoughts from practical implementations:
1. Add tasks at the beginning of the sprint based on which interfaces are available and can be tested.
2. I have also added a parallel team that works on the items that slip or are constrained from being part of a sprint. This parallel team does the interface testing, non-functional testing, end-to-end scenarios, etc. It has its own stories that need to be tested and validated before each program increment. Basically, sprint teams work within the sprint, while these parallel teams work outside the sprint but within the program increment.
FYI: a program increment is simply a set of four sprints that together deliver a meaningful module at the end.
Hope this helps; if not, feel free to ignore. :-)
This depends on how the integration works. When I have done this in the past, one team was responsible for creating data in a specific format that other services could consume. The other team was responsible for testing that their product could consume data in that particular format. We also found it useful to have a task on the work queue for a person to spend some time testing how the different products integrate.
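One lightweight way to split that producer/consumer responsibility is a shared contract check that both teams run in their own test suites. Here is a minimal sketch in Python; the `ORDER_CONTRACT` fields and payloads are hypothetical, standing in for whatever format the two teams have agreed on:

```python
import json

# Hypothetical agreed-upon contract: field names and the types the
# consumer expects. Both teams keep a copy of this in their test suites.
ORDER_CONTRACT = {"order_id": str, "amount": float, "currency": str}

def conforms(payload: dict, contract: dict) -> list:
    """Return a list of contract violations; an empty list means the payload conforms."""
    problems = []
    for field, expected_type in contract.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(payload[field]).__name__}")
    return problems

# Producer side: assert that what we emit matches the contract.
produced = json.loads('{"order_id": "A-1", "amount": 19.99, "currency": "EUR"}')
assert conforms(produced, ORDER_CONTRACT) == []

# Consumer side: the same check guards against format drift.
drifted = {"order_id": "A-2", "amount": "19.99"}  # wrong type, missing field
assert conforms(drifted, ORDER_CONTRACT) == [
    "amount: expected float, got str",
    "missing field: currency",
]
```

Because both teams run the same check against the same contract, a format change breaks a build on one side before it breaks an integration in the field.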
If you have a spotty test automation strategy, you may get lots of regression issues every time you have a new release for your mobile app. A mobile device lab to run regular regression tests could be the key. Here's a plan to get a mobile automation lab up and running, as well as some practices that can help reduce the number of regression issues and improve your overall app test strategy.
Is it standard practice to start every test case heading with the word 'Verify'?
For example: Verify show .................
I personally always try to avoid the word "verify" when writing test cases, as it suggests we are "checking" rather than "testing." I tend to make my heading a clear statement aligned with what is being tested.
Starting test cases with "Verify ", "Check that the ", or a similar boilerplate prefix is somewhat common, but it is not a standard practice, nor is it recommended. I personally find such static prefixes counterproductive: they diminish a reader's ability to quickly scan a list or recognize a group of tests alphabetically, and they take up valuable mental parsing effort, not to mention screen/paper real estate. They add no value to the reader, IMHO.
I strongly recommend using a standardized test case naming convention focused on conveying summary information, so that a tester familiar with the product can run the test without opening the details. I help drive consistency with the following naming convention:
Test Case Title Naming Convention:
<Feature>: <Initial State> <Action[s]>[,] [Expect ]<Expected result>
- Homepage: Login as Admin user with a clean browser cache. Website authenticates and shows Admin user homepage (Admin menu + admin home content section)
- Homepage: Login as Normal user with a clean browser cache. Website authenticates and shows Normal user homepage
- Menus: Edit submenus each open successfully
- Menus: File submenus each open successfully
- Menus: Login as Admin user, menubar contains File, Edit, View, Links, and Administrator menus
- Menus: Login as Admin user, Admin submenus each open successfully
- Menus: Login as Admin user, Admin submenus each open successfully
- Menus: Login as Normal user, menubar contains File, Edit, View, Links menus (but no Admin menu)
Notice the titles above are sorted alphabetically, which conveniently groups them by feature and then by initial state.
QUIZ #1: Did you spot the duplicate test case?
QUIZ #2: Can you quickly spot the gap in these high-level menu tests?
QUIZ #3: Is it easy to do parallel testing by assigning Admin user tests to one tester and Normal user tests to a different tester?
Test case titles can get long. You can use terminology, length limits, and other guidelines to produce consistent test titles that meet any additional restrictions. The point I want to emphasize is that test case titles get plenty long without adding filler words that add no actionable information.
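If your titles live in code or an exported list, a small lint check can enforce the `<Feature>: ...` shape and demonstrate the sort-by-feature grouping. A minimal sketch in Python; the titles are taken from the examples above, and the pattern is illustrative, not a prescribed standard:

```python
import re

# A few titles following the convention above.
titles = [
    "Menus: Login as Admin user, Admin submenus each open successfully",
    "Homepage: Login as Admin user with a clean browser cache. "
    "Website authenticates and shows Admin user homepage",
    "Menus: Login as Normal user, menubar contains File, Edit, View, Links menus",
    "Menus: Edit submenus each open successfully",
]

# Every title must start with a feature name followed by ": " and content.
TITLE_PATTERN = re.compile(r"^[A-Z][A-Za-z ]*: \S")
assert all(TITLE_PATTERN.match(t) for t in titles)

# An alphabetical sort groups by feature, then by initial state/action.
grouped = sorted(titles)
assert grouped[0].startswith("Homepage:")
assert grouped[1] == "Menus: Edit submenus each open successfully"
```

A check like this keeps new titles scannable without anyone having to police the convention by hand.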
OPEN QUESTION: Have you seen or used different test case title naming conventions or have other test case heading best practices? Comment here!
If you are unsure about the things you should be doing to control technical debt in your existing performance test suites, here are a few questions that should be considered. Asking yourself these questions regularly will go a long way toward keeping your tests fit and sustainable and helping control a few common factors that lead to technical debt in performance tests.
I want to perform performance testing for a hybrid iOS and Android mobile app. Please suggest ways to do it, along with some open source and paid tools.
The list of tools is something you can easily find with a quick Google search. The question of "how to do it" cannot be answered without knowing a lot about your development process, your team's goals, and the problems you are trying to solve. I'd recommend starting by talking with your team.
How do I test the vulnerability of an e-commerce site? How can I tell whether the applied security is working properly, and whether the website can be hacked?
This is highly dependent on the product you are working on, the technology stack used to build it, the team that built it, and how security has been handled so far. If you are asking because you want to start a security investigation, you will probably want to talk with your development team to organize that work.
As a general note: the OWASP top 10 list might be a decent place to start. You can find that here: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
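One cheap first pass, before any deeper security investigation, is checking that responses carry commonly recommended security headers. This sketch works on a plain dict of response headers so it stays self-contained; the header list is illustrative, drawn from common hardening guidance, and is in no way a substitute for a real security review:

```python
# Commonly recommended response headers (illustrative, not exhaustive).
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(headers: dict) -> list:
    """Return the recommended headers absent from a response's headers."""
    present = {name.lower() for name in headers}  # header names are case-insensitive
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]

# Sample response headers, as you might capture them from your site.
sample = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
}
assert missing_security_headers(sample) == [
    "Content-Security-Policy",
    "X-Frame-Options",
]
```

In practice you would feed this the headers from a real HTTP response, and treat any finding as a conversation starter with the development team rather than a verdict.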
It's easy to see that style consistency is important when discussing the user interface. But there are other areas where being consistent is just as important, even though they are not as visible. Consistency is one of the quality attributes of a product—any product—even if it is not stated clearly in the requirements documents, and testers have a responsibility to check for it.