In my years of performing mobile testing and connecting with colleagues through meet-ups, blogs, articles, and conferences, a common theme has always emerged: Testers are nervous or skeptical about getting into mobile testing.
I have often heard testers say mobile testing entails a totally different thought process from other types of testing, and they believe they have to take extra courses and get certifications to be proficient in it. There are even testers who think they are not qualified to do mobile testing, although they have been testing other applications for more than a decade and have gained a lot of experience during those years.
I remember testers saying the same things years ago about testing responsive websites, where a website should render correctly across every screen size and form factor. In the end, testers with critical thinking prevailed and were able to test these websites and provide valuable information for the stakeholders to make decisions. The same thing is happening now in the era of mobile.
Technology is constantly changing, and it is understandable to feel out of place. But at the same time, we need to realize that, as testers, we can adapt to any domain or technology. Critical thinking and experience-based skills apply to almost any arena.
Mobile Testing: Not So Different After All
Yes, there are differences in mobile compared to other applications in terms of the way it is implemented, distributed, and consumed by end-users. But in terms of testing, it is not that different after all. The concepts, approaches, and strategies testers use when testing other applications can be applied to mobile as well.
Some of the commonalities in terms of areas to be tested are log files, rendering issues, performance, consistency, storage, memory issues, caching problems, and security vulnerabilities.
There are also overlaps in terms of testing strategies that can be used, including:
- Blink test: Looking for visual patterns by constantly switching between similar versions of pages or apps to notice minute differences in rendering or visual elements
- Installation testing: Installing, uninstalling, and reinstalling apps, including upgrading apps from much earlier versions
- Interrupt testing: Testing how the app behaves when an end-user is interrupted, by texting, calling, or switching apps on the test phone while the app is in use. The same could be done for web pages
- Testing with different configurations: Testing apps on different mobile devices and OS versions. This is similar to testing a website in different browsers and browser versions, or testing a desktop application across different versions of the client software
- Checking for consistency: Checking for app consistency between Android and iOS. Similar to looking for consistency in pages by testing between browsers, between browser and mobile, and between desktop applications
- Checking user reviews: Reading through user reviews to find out how end-users feel about your app and how they use it. This applies to desktop applications and websites as well
- Checking for rendering issues: Checking whether web pages display differently based on different mobile browsers and screen sizes
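The "testing with different configurations" strategy above can be sketched in plain Python as a device/OS matrix that the same set of checks is run against. This is a minimal illustration, not a real device-automation setup: the device names, OS versions, and the check function are all hypothetical placeholders (in practice, each check would drive a real device or emulator, for example through an automation framework).

```python
from itertools import product

# Hypothetical device/OS matrix -- the names are illustrative only,
# not a recommendation of which devices to cover.
DEVICES = ["Pixel 7", "Galaxy S23", "iPhone 14"]
OS_VERSIONS = ["13", "14"]

def configuration_matrix(devices, os_versions):
    """Return every device/OS pairing to run the same checks against."""
    return [
        {"device": device, "os": os}
        for device, os in product(devices, os_versions)
    ]

def run_suite(config, checks):
    """Run each named check against one configuration; collect failures."""
    failures = []
    for name, check in checks.items():
        if not check(config):
            failures.append((config["device"], config["os"], name))
    return failures

# Placeholder check mirroring the consistency idea: the same search
# flow should behave the same way on every configuration.
checks = {
    "search_stays_in_app": lambda cfg: True,  # stub; a real check would drive the app
}

all_failures = []
for cfg in configuration_matrix(DEVICES, OS_VERSIONS):
    all_failures.extend(run_suite(cfg, checks))

print(len(configuration_matrix(DEVICES, OS_VERSIONS)))  # 6 combinations
```

The point of writing the matrix out explicitly is that any behavior difference is reported with the exact device and OS it occurred on, which is the information stakeholders need when deciding which configurations to support.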
There are many examples of issues I have found following the above strategies in mobile and other applications. Whenever I test an application, I always check for consistency with its previous versions, with its competitors, between OS versions, between browsers, and much more.
Once I was testing a mobile booking application for hotel, flight, and car reservations. There were both an Android and an iOS version of this app. I noticed that when a customer searched for hotel rooms for more than four people, the Android app navigated them out to the mobile website; but on the iOS version of the same app, they could complete the search within the app itself without being redirected. The behavior was inconsistent between the Android and iOS versions of the app, and this made for a bad customer experience. Once I pointed this out, the team was immediately able to change the flow in the Android app so that both apps behaved consistently.
Another time, I was testing a revamped version of a screen capture desktop application. The new features were really cool, but when I checked for consistency between the new version and older versions and other competitor apps, there was a vast difference in the main functionalities. Imagine this situation: The user opens a word processing application and tries to save a file. Any user would immediately look at the top left corner of the screen to click on the File>Save option. This is a de facto standard for word processors. But say the option is all the way at the bottom right corner of the screen. How would that experience be? The same thing happened with the new version of the client’s screen capture tool; the option to capture a screenshot was at the bottom right corner of the screen instead of the top portion of the screen, which the users were used to seeing. Similarly, there were other major buttons and options on the GUI that were misplaced and scattered all over the application. All in all, the application was quite inconsistent with its previous versions and competitors, which would probably result in a poor user experience.
It is important to know these commonalities to understand that mobile is just one type of application, and the testing techniques and approaches picked up from testing other applications are not a waste. These are rich skills that are used across various domains. In my personal experience, I believe that these skills are learned through practice rather than attending a course or getting certified.