In this TechWell interview, Steve Weisfeldt digs into the degree to which the introduction of mobile devices has changed the testing landscape. New testing challenges have arisen thanks to the many different network configurations phones and tablets have introduced.
Josiah: Alright, today I’m joined by Steve Weisfeldt who's a senior performance engineer at Neotys. He'll be speaking on the performance of mobile apps during this fall's STARWEST convention. Steve, thank you very much for joining us today.
Steve: Thank you.
Josiah: First, could you please tell us a bit about your background in testing software?
Steve: Interestingly enough, I sort of fell into testing, not by any design. My first exposure was an early job in my career. Fresh out of grad school, I was working for a defense contractor, and I worked on a team that didn't actually do testing, but we watched testers. That's the best way to say that. We sort of supervised, or verified, that the testers of a particular naval submarine software system were doing the right sort of testing. They call that "independent verification and validation." So that was one of my first exposures to testing. Fast forward a number of years, and I worked with and for a number of vendors in this space. In a myriad of roles, I spent a good chunk of my time actually delivering testing services, either as part of the service organizations of vendors, or working for service companies, or for a period of time having my own testing service company, where we would go out and deliver all ways, shapes, and forms of testing services, usually built around automation, so functional and performance testing.
Josiah: You've been involved with testing software since 1999. This industry, software technology, it's one of the most changed, one of the most adaptive industries out there. How much has the landscape changed since you first entered the business?
Steve: It's changed a lot, but at the same time, it's one of those… the more things change, the more they stay the same, sort of things. I guess I could think of this in maybe a couple different facets. If you think about the technology itself, that's obviously changed significantly. When I first started testing, web applications were really just starting to come into play, so there were a small number of possible technologies that could be used to develop applications in the web space. Now of course we've got everything from serialized content, streaming media, audio, video, all kinds of push technologies, and that continues to evolve quite a bit, which makes it very difficult for testers and for automated testing solutions. On the process side, I don't know if there's been any real evolution. Things have changed in some aspects, but some things also stay the same. For example, many organizations have become significantly more mature in testing, understanding the need to do testing and having test teams, whether it's functional, or performance, or the like. Some organizations still haven't grasped onto that, just as it was a number of years ago. But that's also sort of cycled back a little bit, to where we've seen some organizations go from maybe having nothing to being very mature, maybe fatter on the testing side, and then, for whatever reasons, financial, resource, whatever, kind of scale back on that. As more and more organizations start to move into sort of agile methodologies and whatnot, the defined, old-school waterfall approach has maybe fallen by the wayside at many organizations. So we have multiple people doing multiple roles. I guess that's it. It's changed quite a bit, but at the same time, I think things are actually very similar in many ways too.
Josiah: Now a lot of your talk at STARWEST is going to be focused on mobile testing. How many more steps, if any, have been added with the advent of mobile and tablet-based applications?
Steve: I don't know if anything has really been added; it changes the complexity. My discussion and my expertise is more on the automated side of testing and, as you know, also more on the performance side of testing. So, the introduction of mobile clients, whether they are phones or tablets, creates a number of challenges for automating any kind of performance testing. It creates a significantly greater need for performance testing because now all of a sudden applications are being accessed by a significantly larger number of people at the same time. So performance becomes a huge issue on the technical side. From the business side, it creates a bigger demand because applications are so mission-critical, I guess I should say applications that are mission-critical, revenue producing, productivity centric, are now exposed to many more folks. And so, you hear the story, if your application is down or not responding, it's competition driven, you lose revenue, blah, blah, blah, that goes on and on. That hasn't changed from the same problems that we had in web applications, but it becomes exacerbated or increases significantly on the mobile side. Getting back to the technical side of the piece, we've got a lot of other things that we need to consider. We've got a number of different devices now. It's not just a browser; it's different flavors of operating systems on different devices, different browsers, native applications. But even the network is now part of this piece, because as you go from mobile device to server, we've got different signal strengths, we've got network conditions which affect performance, and these things all impact the performance of our applications and all need to be included as part of our testing. So, I kind of circle back to the answer I gave you at the beginning: it doesn't really change the steps, per se, but it makes it much more complex.
Josiah: Like you said, it doesn't really change the steps, but you did mention there are a whole bunch of different platforms. It's not just web-based anymore; you're worrying about tablets, you're worrying about mobile phones. Do you find that a certain platform takes the most time to effectively test and make sure that it's working correctly, or are they all kind of equal?
Steve: I think they're pretty much all kind of equal, certainly as important, but I won't say that testing an application on an Android device is going to be significantly more challenging than testing an application whose client runs on a desktop browser. It's just different.
Josiah: Absolutely. Now, what are the most common challenges when trying to emulate realistic scenarios during the testing process?
Steve: I think on the mobile side of the house, there are a couple of challenges. One of which is, again, let's keep this in the conversation of performance automation, because you can't do performance testing without it. So with that in mind, the first challenge is emulating the kind of activity coming from a mobile device. In the early days of applications being accessed from mobile devices, it was really just going to a browser in your phone, just like on your desktop, except it would be a lot smaller in your little window. But now applications are different, right? If you go to pick your favorite website, Travelocity.com, on your phone's browser, it redirects you to a completely different website that's geared towards mobile. So being able to actually create and capture the functionality, the use cases, the scripts, whatever term you want to use, from the mobile device is a challenge right there. Many tools don't allow you to even do this; you have to kind of make it up from scratch. The other part of it is native applications. Of course on the mobile devices, we've got applications that aren't even browser specific, so how do I capture that activity? Again, many of the tools that are out there fall short for this, and that becomes a challenge. The second piece is test execution. You want to make sure that you are accounting for not just the things the users are doing, but the kinds of devices that they are coming from. So you need to be able to emulate the different sorts of devices, and that means different operating systems, or different user-agent fields to emulate the different devices, because your application might behave differently accordingly. But also, as I mentioned earlier, the connection speeds and the network ramifications.
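The user-agent emulation Steve mentions often comes down to varying the User-Agent header so the server returns its mobile-targeted content. A minimal sketch in Python; the device profile strings and the URL are illustrative assumptions, not tied to any tool discussed in the interview:

```python
import urllib.request

# Illustrative User-Agent strings for a few device profiles.
# These are assumptions for the sketch, not vendor-provided values.
DEVICE_PROFILES = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "android": "Mozilla/5.0 (Linux; Android 13; Pixel 7) Mobile",
    "iphone": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) Mobile",
}

def build_request(url: str, device: str) -> urllib.request.Request:
    """Build a request that presents itself as the given device profile.

    The server can then branch on the User-Agent and serve its mobile
    variant, which is what a load script needs to capture and replay.
    """
    return urllib.request.Request(
        url, headers={"User-Agent": DEVICE_PROFILES[device]}
    )

# One virtual user per profile would replay the same scenario with
# different headers, exercising each server-side variant.
req = build_request("https://example.com/", "android")
```

In a real load test, each virtual-user population would be assigned one of these profiles so the server-side device detection is exercised under load, not just functionally.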
As we talked about earlier, I guess I hadn't mentioned this yet, but if you think about accessing your favorite application over your mobile device, as you're walking down the street, or even moving in your house, or riding a train, you might go from three bars to four bars, and you might move from cell tower to cell tower, which introduces packet loss. These kinds of things are really challenging to emulate as part of your testing. I guess the final piece would be geographic representation. If you have an application that is accessed by mobile devices, you don't know where your users are anymore, because they could be traveling all over the continent, all over the world, and you want to be able to test for those geographical locations as well. If you take sort of the old-school load-testing approach of driving all of your test load from whatever load generation device you are using inside of your data center, all you're really doing is testing your application as if your end users are also in your data center, which is probably not very practical, and it would be a really crowded data center too, right? So being able to emulate traffic coming from locations which are closer to where your end users are is going to give you a much more accurate load test and much better results.
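The network effects Steve describes (weaker signal, tower handoffs, packet loss) can be roughed out numerically to see why they matter in a load test. A toy model with made-up parameters, not a description of any particular tool's network emulation:

```python
def expected_transfer_time(size_kb: float, bandwidth_kbps: float,
                           rtt_ms: float, loss_rate: float) -> float:
    """Rough expected transfer time in seconds for one payload.

    Toy model (an assumption for illustration): every lost packet must
    be resent, so the effective number of transmissions per packet is
    1 / (1 - loss_rate), and one round-trip of latency is paid up front.
    """
    transmissions = 1.0 / (1.0 - loss_rate)
    transfer_s = (size_kb * 8.0 / bandwidth_kbps) * transmissions
    return rtt_ms / 1000.0 + transfer_s

# A 500 KB page: solid connection vs. a lossy, high-latency cell handoff.
good = expected_transfer_time(500, 5000, 50, 0.0)    # 0.85 s
lossy = expected_transfer_time(500, 1000, 300, 0.05)  # ~4.5 s
```

Even this crude model shows the same page taking several times longer under degraded conditions, which is why a load test driven only from a fast data-center network tends to understate real mobile response times.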
Josiah: Absolutely. I don't want to give away all your talk, but what do you really hope to convey to your audience of testers in your STARWEST conversation? What do you hope to leave them with after the end of your presentation?
Steve: I sort of alluded to that already in the earlier piece, and what I want to do is kind of paint those problems that we have. What are the challenges that you need to consider when doing any kind of performance testing of your apps which are accessed by mobile devices? So some of the things that we talked about already, I'll drill into in more detail. And then, how do we account for them? What are the kinds of things that you should be looking for: a) in a solution, whether it's an open source or commercial solution, to help you automate that testing; b) what are some of the processes you need to think about; and c) a little bit about resources. What kinds of folks might you need to have involved in the testing to be able to augment into that mobile space? Do you need to make any changes? And then maybe the last piece is how do you fold performance testing into some of your modern-day methodologies, whether you are sort of an evolving waterfall shop, or, on the other side of the coin, if you are moving towards or are agile, doing any continuous integration, that's another key piece. That's not really mobile specific, per se, but I think we all kind of move forward. Organizations that are becoming more mature with certain technologies typically are also becoming more mature in other aspects of the development cycle. I want to kind of tie that all together a little bit.
Josiah: Thank you very much for your time and insight, Steve. It was really nice talking to you, and I look forward to hearing more from you in October in Anaheim.
Steve: Thank you very much.
Steve Weisfeldt is a senior performance engineer at Neotys, a provider of load testing software for web applications. In the load and performance testing world since 1999, Steve’s expertise lies in enabling organizations to optimize their ability to develop, test, and launch high-quality applications efficiently, on time, and within budget. Prior to Neotys, Steve was president of Engine 1 Consulting, a services firm specializing in all facets of test automation. In an earlier position he spent seven years at automated testing vendor Segue Software (acquired by Borland) where he delivered professional services and training.
Podcast Music: "Han Solo" (Captain Stu) / CC BY-NC-SA 3.0