Jonathan Kohl is a software tester and consultant based in Calgary, Alberta, Canada. One of his focuses is mobile testing, about which he has written a book, Tap Into Mobile Testing. He also leads a training course on the subject. I recently had the opportunity to speak with him about the mobile testing arena. The following is part one of our interview.
Joey McAllister: How do you explain mobile testing to someone who hasn't worked on it yet? Is it like general software testing with a few twists, or is there a very different approach?
Jonathan Kohl: A lot of the principles will hold true, but there are some differences for sure, and it's really important to strategize and figure out how to use your time the best that you can. The "mobility" part of it seems to really throw people off. People with a QA or a testing background who have worked in development shops are used to sitting at a computer. We have our comfy, ergonomic chair at our desk, or our Swiss ball we're sitting on. We have it all set up and tweaked just the way we like it. So, I go to work. I go sit in the test lab, or I sit at my desk and I do my work, and the mobility side of things is something that people don't think of. You actually really do need to move around and do physical things with the devices.
And the devices have a lot of sensors in them. A lot of testers probably don't realize this, but just for simple things—like, even for a touch screen to work and display properly when you're moving it around—there are several sensors being used. It's quite specific, and there are a lot of combinations to explore from a platform perspective. You're not only looking at who manufactured the hardware or the operating system, but you're also looking at the version of the operating system and firmware. Another thing people may not realize, because we take it for granted on the web: To get data to your device, there are all these carriers using various types of technology, and all kinds of strange partnerships need to occur for us to access these things when we travel. There are a lot of factors from a platform-testing perspective that are really difficult to set up and test well, and we don't think about them much when we're testing web apps or other types of applications because we just don't have that variety.
Mobility is bringing some changes. One of the interesting things is that it's bringing computing to people who hadn't been using computers before. So, you have people in developing nations and in areas that may not be as economically blessed who can now afford a smartphone, and they can afford wireless broadband, and they're getting on the Internet and using it for the first time. They're in all kinds of different locations with different speeds and technologies and languages. So, there are a lot of factors.
I talk a lot about "combined activities." We use these things on the move. How is a restaurant trip affected by the applications in these devices? We look for the map of how to get to the restaurant when we're on the move, and we use applications while we're working through our restaurant experience. There are a lot of different factors that come into play there. What's the lighting like in the restaurant? What's the weather like when you're using the application? Are you using the application during a busy time of day, when networks might be getting jammed?
Some of the worst offenders right now are people who make airline applications. Airline applications tend to depend on an Internet connection. I need an airline application to work for me when I'm in an airport, and airports have the worst Internet connectivity. What are the environmental effects that people could encounter when they're using the app? I think I've managed to influence one airline app development group to go and test their software in an airport.
Joey McAllister: It's almost more like traditional product testing, like testing a toy or some physical product—not necessarily just the software but the whole package.
Jonathan Kohl: Yeah, I hadn't thought of it that way, but that's a great mashup. We've got the software side of it, and we've got the network connectivity and all of that we're used to with Internet-based apps, but there is the physical—the lights that blink and the sounds that come out and the movement that occurs. That all comes into play.
Joey McAllister: Software that traditionally was developed exclusively for desktop computers is now also being developed for tablets and phones, and sometimes it's the same software—at least the base of it. Do you think that all software going forward should be tested as mobile, using some of these approaches that you're talking about?
Jonathan Kohl: To some extent, yes. We're seeing devices collapse into each other. A lot of people are quite skeptical still of mobile testing, because it's been such a slow burn. It is moving in that direction, but there are still lines that are drawn.
I was just talking with some developers the other day who work with client-server and standalone executable apps, not web-based ones. They use web apps, but they don't really understand them, and they don't really care, because that's another world—that's not what they get paid to write. On the other side, I have really good friends who are completely enmeshed in the mobile world; it's hard for us to think of things outside of it. There's still a divide.
There's also a divide between the app development for the mass-market consumer and the enterprise. The enterprise is really lagging, and some places don't want mobile devices in the workplace. They don't want to support them. They don't want mobile applications. So, it's a bit of a slow transition into the enterprise space. A lot of people who are professional testers may stay in the world they're in and not be exposed directly to mobile technology.
However, you're absolutely right. These things are collapsing into each other. Windows Metro, the new user interface in the Windows 8 operating system, is very Windows Phone-like. If you use Ubuntu, its Unity interface is very mobile-like. OS X Lion on the Mac has gestures that come from the mobile space. A lot of the discovery and the really cool growth in technology is happening in the mobile space, and that's getting pushed down into these other areas.
Joey McAllister: How do some of the other elements of these devices, like a calendar or texting or even the device settings, factor in when you're testing mobile applications?
Jonathan Kohl: We have these small devices that tend to have a singular focus, and a lot of these things you're talking about are built in. So, when you're talking about a mobile phone, you've got a phone that is using radio technology to transmit and receive, then you've got a little computer, then you've got these apps that they bundle with it that are native. These apps, anything related to the phone (if it's a communication device), and anything related to the OS take precedence.
I took over testing for a mobile application several years ago, and the team had focused solely on functional testing (here's a requirement, here's what it's supposed to do, here's what we're saying it does; we've tested these things in the lab, and they seem to work), and they were failing. One of the first things I did was to look at the device and find all the options. Most of us just get a device set up the way we like it and then never go back in—who wants to mess up their own device? So, on a new test device, I will spend several hours going into every nook and cranny to find these built-in things and see how they affect the application. One of the fascinating ones was "reverse high contrast," an accessibility setting. Instead of a white screen with black text, you could flip it to a black screen with white text. When I tried that, the application I was testing didn't work at all.
After doing a lot of this kind of testing, I created a mnemonic called "I SLICED UP FUN," where I look at combining these activities with other things—inputs, store, location, interruptions and interactions, communication, ergonomics, data, usability, platforms, functional tests, user scenarios, and networking. I started going through user scenarios (this was an enterprise-level app) to find out how people are going to use it. Well, how do I use my device? I take notes on it, I put in calendar events, I get a call, I blow the call off, I text people—all kinds of things. I started using the built-in parts of the device in conjunction with our application: Use the application, go check email, send a text, look at a calendar event. While I'm using the application, it's slow to load, so I'm bored. I switch context and check my email, then I check back to see if the data file has loaded.
It was shocking to find out how these interruptions and interactions, especially with these built-in things, could cause our application to crash, freeze up, or display weird error messages. We found that with one particular device type and one particular application we were testing, if we saved new calendar events, recorded voice messages with the built-in recorder app, or saved notes with the built-in note-taking application prior to using our application, the first time it tried to connect to the Internet, it would blow up. Of course, we can't control these other applications, but we were able to put in better error messaging and do something so the application could recover.
That took me back years and years to PCs, where something else going on in the environment or on the device could cause problems. Certainly, incorporating these things that take precedence—these built-in apps, these settings—into the testing scenarios for your actual application is incredibly important.