I recently tweeted about software testers needing to know about what's going on in the world of automation. It got a pretty warm reception, so I thought I should expand on my thoughts.
Whatever your role in testing is these days, your day-to-day job will probably be enhanced by using at least some of the following approaches. At a minimum I'd suggest knowing what these terms mean and an example of how they might be used in a software development shop.
Continuous Integration Services
One of the biggest changes over the past decade when it comes to automation in software development has been task automation. In the past, tasks like building a particular version of an application, creating documentation, or updating the status of bug reports were done manually. Some teams even had a dedicated individual, the "build person," responsible for initiating builds. Doing such tasks manually (or with heavy ties to specific people or machines) was time-consuming and created annoying bottlenecks, such as the build person taking a personal day and blocking new builds from being completed.
Luckily, continuous integration (CI) tools came to the rescue by allowing tasks to be standardized and automated. A CI service essentially schedules and executes tasks that any ordinary desktop computer could perform, running them on designated target machines. Going back to the build example, instead of Bob manually creating builds on his machine, a CI service can be set up to choose a target machine and execute a build there. Not only does Bob not need to be physically present at the build machine, but builds can occur at any time, either on a schedule or in response to another action.
For example, Alice the tester may want a build of an application based on the latest changes to see if a bug has been fixed, and she can initiate that build herself. This not only frees up resources from doing repetitive tasks, but also gives teams more control over individual and team workflows. You can also chain CI tasks together to streamline workflows further. Learning how a CI service works is a great introduction to automation without a lot of emphasis on programming.
One way to make use of CI is to run end-to-end test suites. These tests often need to run for several minutes or even hours. I’ve used CI to spin testing machines up and down and to launch tests on those machines. This is a big help compared to running such tests on your own machine, because it lets a test developer do other work while the tests run elsewhere. The CI server handles all aspects of these tasks.
Some popular examples of CI services are the open source tool Jenkins, the cloud-based Travis CI, and the proprietary tool Bamboo, but there are many others. A lower-tech option is a tool like cron or Windows Task Scheduler, which can automate tasks on a single machine.
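As a low-tech illustration of this kind of task automation, here is a minimal Python sketch that a cron entry or Task Scheduler job could invoke on a single machine. The build command, log path, and cron schedule shown are all hypothetical placeholders, not anything from a real project:

```python
"""Minimal single-machine task automation sketch.

A cron entry such as `0 2 * * * /usr/bin/python3 nightly_build.py`
could run this script every night at 2 a.m. (schedule is illustrative).
"""
import datetime
import subprocess


def run_task(command):
    """Run a command, returning (succeeded, combined output)."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def log_result(name, succeeded, logfile="build.log"):
    """Append a timestamped status line to a log file."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    status = "OK" if succeeded else "FAILED"
    with open(logfile, "a") as f:
        f.write(f"{stamp} {name}: {status}\n")


def nightly_build():
    """Example task: run a hypothetical build command and log the result."""
    ok, _output = run_task(["make", "build"])  # placeholder build step
    log_result("nightly-build", ok)
```

A full CI service adds scheduling, target-machine selection, and reporting on top of exactly this kind of "run a command, record the result" core.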
CI is indispensable for developing software beyond hobby programs, and it’s one place where a tester can really add value.
Modern Source Control
I should first point out that I love source control. When writing code (or blogs!), it's too helpful a tool not to use. For a tester who codes, it's a no-brainer. Even if a tester doesn't code, using source control in a modern way can be a big benefit when testing software.
What do I mean by modern? I mean using source control that 1) integrates with other tools, such as a CI server or bug tracker, and 2) allows for using good team workflow practices, like trunk-based development. Good source control allows individuals to analyze changes and dig deeper into what's happening in a software project.
A tester with access to a source control history and some basic training can ask questions like "Which files in the application have had the most developers work on them?" "Which files have the most changes?" "Which changeset contained the code that caused the bug?" and so on. This information can be helpful in looking for patterns and underlying causes of some issues.
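As a sketch of this kind of history mining, the Python snippet below counts how often each file appears in `git log --name-only` output, which answers the "Which files have the most changes?" question. The parsing assumes git's conventional output layout (header lines, an indented commit message, then bare file paths):

```python
import subprocess
from collections import Counter


def count_file_changes(log_text):
    """Count how often each file appears in `git log --name-only` output.

    Assumes the conventional layout: lines starting with 'commit ',
    'Author:', 'Date:', or 'Merge:' are headers, indented lines are the
    commit message, and remaining non-blank lines are changed file paths.
    """
    counts = Counter()
    for line in log_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if line.startswith(("commit ", "Author:", "Date:", "Merge:")):
            continue
        if line.startswith("    "):  # indented commit message
            continue
        counts[stripped] += 1
    return counts


def changes_from_repo(repo_path="."):
    """Run git in `repo_path` and return the per-file change counts."""
    out = subprocess.run(
        ["git", "log", "--name-only"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    return count_file_changes(out)
```

Calling `changes_from_repo(".").most_common(5)` then lists a repository's five most frequently changed files, a decent starting point when hunting for hotspots.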
Integrating source control with CI services can be even more powerful. Issues in bug trackers can have their statuses updated based on changes made by developers. Testers can request certain requirements be automatically checked on incoming code, such as passing automated tests or code styling requirements. Builds and deployments can be initiated by changes to code. There are many possibilities in this case when source control is used well, which is one of the underlying concepts behind continuous delivery.
For example, I’ve worked on an open source project that uses a cloud-based CI service to check every commit submitted by contributors. In this project, the CI runs all of the project's automated tests and checks all added code for styling and formatting. If a commit has failing tests or does not meet the set style guide, the submission fails and the contributor and project maintainer are notified so the commit can be fixed up. This gives each commit in the project history a uniform style and alerts committers to trivial errors in added or updated modules.
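A minimal sketch of such a commit gate might look like the following. The specific commands (`pytest`, `flake8`, the `src/` path) are assumptions standing in for whatever test and style tools a given project actually uses:

```python
import subprocess


def run_checks(checks):
    """Run each (name, command) pair; return the names of checks that failed.

    Each command is run as a subprocess; a nonzero exit code counts as
    a failure, which matches how most test and lint tools report errors.
    """
    failed = []
    for name, command in checks:
        result = subprocess.run(command, capture_output=True, text=True)
        if result.returncode != 0:
            failed.append(name)
    return failed


def gate_commit():
    """Example gate: the commands below are hypothetical placeholders."""
    failures = run_checks([
        ("unit tests", ["pytest", "-q"]),
        ("style", ["flake8", "src/"]),
    ])
    if failures:
        raise SystemExit("commit rejected; failing checks: " + ", ".join(failures))
```

A CI service wires a script like this to every incoming commit, so the contributor gets the pass/fail verdict automatically instead of a maintainer checking by hand.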
The current hotness in source control is Git, which is free and open source, with a healthy ecosystem around it. There are also several other options, such as Subversion, Mercurial, and Microsoft Team Foundation Server.
Telemetry and Monitoring
This is a topic I'm not as familiar with, but it is definitely of interest to testers. Monitoring is an approach where hooks placed into an app send information back to the software's creator about how the software is being used. This could include which back-end/server API functions are being called and in what order, which parts of a UI are being used and at what frequency, and so on.
The goal isn't to send specific user information back to the development team, but more general information about what parts of an application are being used and how. This provides insight into what end users are doing, how they actually use the app, and how certain features are received. Alan Page is a tester at Microsoft who has briefly discussed some of the cool things he's seen done with telemetry and monitoring.
Similar to mining source control history, monitoring can help you answer questions, from simple ones ("How many people logged in last week?") to more specific and insightful ones ("How did users change their habits when feature X was released?"). These are the kinds of questions that help testers execute better testing strategies and, overall, help teams make better choices for users.
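To make the "logins last week" style of question concrete, here is a minimal Python sketch over a hypothetical, simplified event shape (real telemetry pipelines involve collection, anonymization, and storage layers that are omitted here):

```python
from datetime import datetime, timedelta


def users_logged_in_since(events, cutoff):
    """Count distinct users with a 'login' event at or after `cutoff`.

    `events` is a list of dicts like
    {"user": "alice", "action": "login", "time": datetime(...)} --
    an invented event shape used purely for illustration.
    """
    return len({
        e["user"]
        for e in events
        if e["action"] == "login" and e["time"] >= cutoff
    })


def feature_usage(events, feature):
    """Count how many times a given feature action was recorded."""
    return sum(1 for e in events if e["action"] == feature)
```

Comparing `feature_usage` counts before and after a release date is the crude version of the "How did habits change when feature X shipped?" analysis.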
For more information, check out the AB Testing Podcast with Page and Brent Jensen. For how a mainstream product uses telemetry, take a look at how Mozilla uses telemetry with Firefox.
Using Selenium Well
Last but certainly not least, Selenium WebDriver is a tool pretty much any tester who works with web apps should be familiar with. At this point, WebDriver is a standard tool for automating browser actions, driving a browser much as a human user would interact with a web app. It has several language bindings, works with several mainstream browsers, and is a great example of an extensible API that developers can build on top of. In short, it's a good piece of work.
When used smartly, WebDriver allows testers and developers to automate user acceptance tests that can be placed in a continuous delivery workflow. I’ve written simple WebDriver-based tests that find issues like navigating to a login page URL and not finding the username and password fields (because of a bad deployment), or finding that a dialog doesn’t open when a control is clicked as expected (an obvious but serious bug). These are issues that can be found quickly but can’t quite be covered by unit tests.
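A sketch of that login-page smoke check might look like the following. The Selenium calls are the standard Python bindings, but the URL and field IDs are hypothetical, and the checking logic is kept in a pure helper so it can be exercised without a browser:

```python
# Field IDs the login page is expected to contain (hypothetical IDs).
REQUIRED_FIELD_IDS = frozenset({"username", "password"})


def missing_fields(present_ids, required=REQUIRED_FIELD_IDS):
    """Return the required field IDs not present on the page."""
    return set(required) - set(present_ids)


def check_login_page(url="https://example.test/login"):
    """Open the login page and report any missing input fields.

    Uses standard Selenium WebDriver calls; requires a local
    chromedriver and the `selenium` package to actually run.
    """
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # Collect the id attribute of every <input> element on the page.
        found = [
            el.get_attribute("id")
            for el in driver.find_elements(By.TAG_NAME, "input")
        ]
        return missing_fields(found)
    finally:
        driver.quit()
```

A nonempty result from `check_login_page` after a deployment is exactly the "login fields missing" failure described above, caught in seconds instead of by the first confused user.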
WebDriver can also be used to write automated tests that run locally to double-check that changes didn't break critical features in unanticipated ways. There are even uses for WebDriver that extend beyond functional testing.
For testers who are interested in learning to code, WebDriver can provide a good introduction to programming. Automating test scripts can be a comfortable way to get familiar with programming without diving deep into programming language waters. It provides enough of a structure to get started and still get some good testing work done.
With these concepts in mind, embrace test automation, whatever your role is in software development.
A good summary indeed. However, I'm disappointed that you didn't address the one question I was hoping you would: when is it appropriate to use automation and when not? Should we automate everything even if exploratory testing would be more appropriate in some cases? Should we just ignore the context?
Hello, I guess you shouldn't be disappointed that one particular issue is missing from this summary, since that issue has been discussed almost to death by other authors on their blogs and sites. However, I agree that the question you bring up really should be repeated often enough that word spreads among certain decision-making bodies, who often push the A-word down poor testing teams' throats. Maybe then those bodies would finally acknowledge what, I'd guess, the majority of testers and QA people approaching automation already know about its optimum use (being, or claiming to be, specialists in their field, they should). Anyway, the piece focuses on other aspects of automation than the question of when, or whether, to use it, and in my opinion discussing that simply was not the author's intent.
Yes, I agree that it wasn't the author's intent to talk about when we should automate and when we should not. That's what I'm taking issue with, that such an important question would be left out of a summary of automated checking.
TL;DR: the article is too oriented toward a developer mindset and is presented from a developer's point of view.
I am revisiting this article to add a new comment, since it was sent to me in a top-ten list for the passing year. I was curious whether other people had also commented on it, but primarily one thing has come to mind since my initial readings: the author assumes, and preaches from, the position of a developer/programmer familiar with certain tools of the software development trade. While it is perfectly all right for anyone to use those tools and technologies to facilitate and accelerate their work, be it development, testing, or anything else, they are only tools. They need a skilled (or fast-learning) user who, first of all, has the testing knowledge and skills to write and develop good tests and other testware (in other words, is a good tester) and knows what to automate at a later phase and why. Only then, provided such a tester is fairly proficient with technology (which very often requires the ability to write code), can they express those tests through code that forces the SUT to perform certain actions, as if the "automation" code were a user with OCD. The tools and technologies themselves will not produce or conjure any tests; they haven't reached that stage of independence and intelligence yet. To counterbalance the point of view presented in this article, it might be supplemented with an account from a manual tester who crossed the Rubicon to the land of test automation and was successful, sharing experiences and advice with those manual testers (are there any left?) who want, or are forced (for fear of losing their jobs), to take up test automation. Better still, this site might commission another article on the subject from someone just described.
Thanks for the replies to my article!
If I had to sum up my intention for this article, it would be something like "There are many modern tools and techniques that software testers should be aware of, because they can be very helpful in some common contexts." When I was starting out in software development, one thing I learned early was the value of being aware of concepts like source control, bug tracking, and the details of software methodologies like agile or waterfall. I didn't necessarily use these concepts on a daily basis, but merely knowing what they were helped me look for jobs, work well on teams, and decide how to approach some problems. Knowing what's out there usually helps open up your options.
I know automation is a hot topic in the software testing world right now (hence this article). Whether testers should get to know automation, programming, or related tooling is a matter of context, but honestly it's becoming a common context. There are many ways to get into automation, and plenty of great tools and classes of tools to help folks in software development.
This is an article I wrote about Windows desktop application testing with Python and open source libraries like PyWinAuto: http://qapage.com/Windows-destop-automation/
Hi Josh, you've published an in-depth article on what's needed for testing in software development. With continuous integration and mobile monitoring, we can build more effective software that helps users be more productive. Thank you for sharing all this useful information; it has given me a lot of insight into software development.