There is much talk in the software community about the upcoming trends in software testing—and about whether testers will even still be relevant or necessary in the years to come. I, for one, don’t think automation could replace the exploratory skills of a human tester, but that doesn’t mean our jobs won’t be changing at all.
There are lots of exciting possibilities and opportunities for tomorrow’s testers. Let’s see how this profession can be expected to evolve in the upcoming decade and beyond.
Skills and Technologies
Thanks to user behavior-analyzing tools such as Adobe Analytics and Google Analytics, requirements design can be tailored to specific customer groups. This has great potential when viewed through the lens of the Pareto principle: that 80 percent of your effects (sales, revenue, etc.) come from 20 percent of your causes (products, employees, etc.).
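To make the Pareto idea concrete, here is a tiny Python sketch that checks what share of total sales the top 20 percent of products accounts for. The sales figures are made up purely for illustration:

```python
# Hypothetical per-product sales figures (illustrative numbers, not real data).
sales = [500, 300, 60, 40, 30, 25, 20, 15, 5, 5]

total = sum(sales)
# Top 20 percent of products: the best-selling 2 out of 10.
top_20_percent = sorted(sales, reverse=True)[: len(sales) // 5]
share = sum(top_20_percent) / total
print(f"Top 20% of products account for {share:.0%} of sales")
```

With these numbers the top two products carry 80 percent of sales, which is exactly the kind of concentration analytics data lets you target your requirements (and your testing effort) at.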
Having this data lets you create very specific sets of requirements, specialized to customer profile groups based on their behavior and usage and weighted with expected results from these groups. The fulfillment of these requirements will have to be tested more precisely, so there will be less space for exploratory testing and more reliance on testers using their common knowledge and data analytical skills.
This is what persona testing is based on. This area has just started to develop in the last couple of years. In a few years, I predict we will be pairing persona testing with smart data, or the processed and cleaned big data, to get persona testing frameworks with perfectly optimized and measured A/B testing prototypes. There will be hardly any product, be it a webpage or wearable, without a marketing strategy aiming for the prime focus group, so a targeted approach to testing will hold essential business value.
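As a sketch of the measurement side of A/B testing, here is a minimal two-proportion z-test in plain Python. The conversion counts are hypothetical, and a real framework would also handle sample sizing, multiple variants, and so on:

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: variant A converts 100/1000 visitors, variant B 150/1000.
z = ab_z_score(100, 1000, 150, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a significant difference at the 5% level
```

A persona-testing framework built on smart data would feed segmented numbers like these into such checks automatically, but the statistic itself is this simple.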
I also predict that there will be a greater demand for testers to learn at least some basic programming. Scripting languages are becoming more and more effective and easier to learn; you don’t need to write fifty lines of assembly code to be able to script a test of a login function. Even manual testers are learning languages like Python and Perl, often alongside tools such as Selenium, because their rich online documentation makes scripting routine tasks even easier.
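As an illustration of how little code such scripting takes, here is a minimal Python sketch. The `login` function is a hypothetical stand-in for the system under test; with a real web app, a browser-automation tool such as Selenium would sit in its place:

```python
# Minimal sketch of scripting routine login checks in plain Python.
# VALID_USERS and login() are hypothetical stand-ins for the system under test.

VALID_USERS = {"alice": "s3cret"}  # assumed test fixture

def login(username: str, password: str) -> bool:
    """Stand-in for the application's login call."""
    return VALID_USERS.get(username) == password

def run_login_checks():
    cases = [
        ("alice", "s3cret", True),   # happy path
        ("alice", "wrong", False),   # bad password
        ("", "", False),             # empty credentials
        ("bob", "s3cret", False),    # unknown user
    ]
    for user, pwd, expected in cases:
        actual = login(user, pwd)
        assert actual == expected, f"login({user!r}) returned {actual}"
    print(f"{len(cases)} login checks passed")

run_login_checks()
```

A manual tester who can write a table of cases like this has already automated the boring part of a regression pass.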
Automation tools also provide fantastic compatibility with functional, object-oriented, keyword-driven, and other kinds of programming approaches. Their rise makes building precise, fine-tuned automated test frameworks simpler. Consider how useful automation is when performing health checks of production environments in order to keep the downtime of customer-facing services to a minimum. This is especially important with the rise of ever-available mobile apps and 24/7 services, such as online banking and health care providers.
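A minimal sketch of such a production health check, assuming hypothetical endpoint URLs. The probe is injected as a function so the same logic can run against real HTTP services or an offline stub:

```python
# Sketch of an automated health check over a set of service endpoints.
# Endpoint names are made up for illustration; the probe is injected so the
# checking logic works the same with urllib in production or a stub in tests.
from urllib.request import urlopen
from urllib.error import URLError

def check_endpoints(endpoints, probe):
    """Return a dict mapping each endpoint to 'UP' or 'DOWN' using the given probe."""
    status = {}
    for url in endpoints:
        try:
            status[url] = "UP" if probe(url) else "DOWN"
        except Exception:
            status[url] = "DOWN"
    return status

def http_probe(url, timeout=5):
    """Real probe: a 200 response within the timeout counts as healthy."""
    try:
        return urlopen(url, timeout=timeout).status == 200
    except URLError:
        return False

# In a scheduled job you would pass http_probe; a stub keeps this example offline.
stub = lambda url: url.endswith("/healthy")
print(check_endpoints(["https://api.example.com/healthy",
                       "https://db.example.com/status"], stub))
```

Run from a scheduler every few minutes and wired to an alert, a loop this small is often the difference between your monitoring and your customers discovering an outage first.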
Roles and Dynamics
Testers and developers are working more closely, with some testers even being embedded in the development or project team. While learning the fundamentals of testing is relatively easy—you often can start working after taking a good course or reading a detailed book—setting up and managing a test process requires years of experience and knowledge, and I don’t see that going away.
This knowledge will also be required in cross-functional teams working with continuous development. While all team members can test a given area pretty well, there’s a definite need for guidance. This also applies to crowd testing mobile devices or cloud setups. Testers may be integrated in other teams, and developers may be asked to perform some testing earlier in the development cycle—and that’s a good thing—but we will still need testers with defined expertise.
Tools and Devices
With so many connected options today, customers need to be able to use websites on all kinds of devices. Mobile testing is already so widespread that it is no longer a separate discipline. The variety of mobile devices will only increase over the years, so the scope of testing will be huge.
This is where handy crowdsourced testing services come in. This option lets you make sure your native or web apps work properly under all kinds of conditions all over the world. In the coming years, the testers responsible for managing crowd testing will have to drive the frameworks that test these use cases, so this would be a useful area to learn more about now.
Another option for parallel testing on many environments is the cloud. The biggest challenge right now is the configuration of the test systems, but companies facing this challenge will surely provide some compact solutions in the near future—and they will be more efficient and inexpensive. Already, there’s no comparison between the cost of cloud testing and the cost of testing real, physical devices used even five years ago. This cost will continue to get lower and lower, just like shipping costs with containers over the last decade.
In contrast, a new technology that seems to be developing relatively slowly is self-driving cars. It will probably take years for governments around the world to create appropriate laws for having these vehicles on the road, but meanwhile, companies will keep creating a variety of self-driving cars (just like Google is doing right now), and all of them will have to be tested for all important use cases.
Just as when there’s a person behind the wheel, there are many factors to consider, so a complicated risk-based test approach will be needed for testing activities. Although there are still gray areas in these use cases, some companies have started the groundwork for this type of analytical testing by using test cars with cameras and recording devices. As the number of companies producing self-driving cars rises, this testing need will grow as well, and due to its safety-critical nature, it won’t be available to freelancers: this scope will definitely be confidential and will require expert testers.
And there’s another burgeoning field that sci-fi writers have been dreaming about for a while: the everyday use of robots. This has already started to be a reality in Japan, and the technology is at different levels of development all over the world. The producers already perform a huge testing activity before putting these robots into use, but a new dimension will open when the robots can connect to social media and the internet of things. These connections will bring up dozens of integrity and security questions. I think the most effective way to test them will be through crowdsourcing, just like we do now with mobile testing.
There is a lot to learn to keep up with software testing in the next couple of decades. What will you start with?
This is the first time I’ve come across the concept of “crowdsourcing.” Thanks!
I currently work as an automation test engineer, and I’m confused about what an automation test engineer should do in the early phase of a big product’s development. The problem is that the testing process is slow and the product features are not stable. Most of the time, the automation test engineers don’t have enough work to do because the automation framework is already mature, and all they are waiting for is requirements from the test team. What should they do?
The delayed phase of activity you describe is very common among software testers. With good preparation, you can put this time to proper use. I call this period “the golden time” for software testers. It is perfect for:
Thanks for the interesting article. I have a few questions.
You say "there will be less space for exploratory testing and more reliance on testers using their common knowledge and data analytical skills."
As far as I know, using common knowledge is exactly what exploratory testing (ET) is.
Using testers’ “data analytical skills”: is that possible for new features? How, exactly? There are no analytics for features that haven’t been released yet. And what about the cases where it takes only one person harmed by a bug in your program for you to lose your credibility? You will not see that one person’s actions in your analytics.
“I also predict that there will be a greater demand for testers to learn at least some basic programming.” True. It is already here, under names like agile, CI, etc.
"developers may be asked to perform some testing earlier in the development cycle" is TDD.
Thanks for the well-founded questions!
Data analytical skills might be useful for determining the testing coverage and depth of functions where A/B testing and focus groups are defined. But when testing medical software or a nuclear plant controller, you have to carefully test every case against well-defined, precise requirements to avoid harming even a single user, just as you described.
While test-driven development is implemented at different levels, other types of tests may be required to achieve high coverage. And agile and continuous improvement are not applied everywhere with the same effectiveness; some companies still use the same manual methods described in the books from the ’70s and ’80s.
Still, I have to push back on exploratory testing. My concept is closer to Lee Copeland’s definition: simultaneously planning, executing, and assessing tests while learning the behavior of the system. Techniques such as reversing the order of test cases or changing the user flow might achieve better test coverage than using common knowledge alone.
A/B testing vs. software testing: the difference seems to be very wide.
1) The success of A/B testing is driven by customer behavior and online tools (e.g., Optimizely or another all-in-one framework can do the trick), whereas in software testing the customer or requestor owns the requirements, and success depends on the quality of the delivered solution.
2) The testing approach differs in focus: a data-driven approach (as you claimed) is the important one in A/B testing, whereas in software testing several techniques such as specification-based, structure-based, risk-based, and experience-based (and/or combinations of them) can be applied, choosing the right one(s) based on the project’s constraints and boundaries.
3) Testing context: A/B testing takes place only in the production environment, whereas software testing is a continuous process that starts in the DEV environment, goes through the QA environment, and ends in the UAT environment.
What is your perspective?
You've just given a perfect description and explanation of A/B testing, sir. In my article, I was pointing at the framework itself. It also has to be tested as software: whether it steers the selected customers correctly, works properly, etc.
Thanks for sharing this article.
As far as I know, AI is an emerging technology worldwide. So what will a test engineer's role look like in the next 10 years if AI can write code by itself, without needing to be trained on a bunch of datasets?
AIs are really developing and are already in use in many areas. Still, scheduling and managing them will remain a human job for a long time. The era of Skynet from the Terminator movies is not here yet, fortunately. But that time is definitely coming.
I don’t think software testing will ever be phased out; it will be done one way or another.
However, manual testers (not manual testing) will eventually be phased out. Organizations are keener on hiring testers who can also do automation.
In the near future, most companies will move to automation and hire testers with multiple skills, and I believe there will not be much difference between a tester and a developer. Some companies have already started calling their testers SDETs.
Humans in testing can only be replaced if we someday develop a very strong AI that can replicate human thinking, which I don’t think is possible in the near future; it will take some time.
This is absolutely true. Take my example: since writing this article, I’ve changed workplaces. After seven years of manual testing, I am learning automated testing and applying it daily, alongside my manual tasks. Times change in testing, and we have to keep up!