Mobile Application Testing on a Shoestring
By: Matthew Heusser
The small-device market is exploding with users, devices, platforms, and browsers; they have money, and they expect applications to just work. Matt Heusser tells you how to thrive in the new mobile application marketplace - without breaking the bank.
Perhaps you want your company to be more than simply "online"; you want it to be available while people wait at the dentist's office. Maybe you want to equip your sales force to give customer demos on the iPad. Or possibly the CEO's nephew called him a "poser" over dinner because your company's software doesn't work on the iPhone. If none of these scenarios applies to you right now, one probably will eventually. For whatever reason, suddenly you'll have to develop a mobile strategy.
In this article, I'll lay out a few tips, tricks, and suggestions that can help to accelerate the pace of your mobile testing effort while setting reasonable expectations. To keep things simple, we'll focus on web-based mobile applications, but many of these techniques will also work on native iOS, Android, or BlackBerry applications.
In the worst case, you have major production issues right now. But then the blocking issue isn't testing, it's fixing—and you can use the time spent fixing to develop a strategy and get a lot of testing done.
What Does 'Support' Mean? Which Devices?
In a recent international keynote address at the Software Test Professionals Conference, Matt Johnston of uTest laid out the combinatorial problem of mobile application testing. Starting with testing each feature in each OS and browser, he added on handset makers and models, wireless carriers, and geolocation. Unless your company name is Google, Nokia, or Research in Motion, and unless the application is critical to your company's success, it's unlikely that you'll get the time to test all these combinations exhaustively.
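To see how quickly the numbers get out of hand, consider a quick back-of-the-envelope sketch. The dimension values below are made up purely for illustration, not a real support matrix:

```python
# Illustrative sketch: counting test combinations across the dimensions
# Johnston describes. The specific values are hypothetical examples.
from itertools import product

dimensions = {
    "os": ["iOS", "Android", "BlackBerry OS"],
    "browser": ["Safari", "Chrome", "stock browser"],
    "handset": ["iPhone 4", "Galaxy S", "Bold 9700", "Nexus One"],
    "carrier": ["AT&T", "Verizon", "Vodafone"],
    "location": ["US", "EU", "APAC"],
}

# Every combination of one value from each dimension
combinations = list(product(*dimensions.values()))
print(len(combinations))  # 3 * 3 * 4 * 3 * 3 = 324 combinations per feature
```

Even this toy matrix yields 324 environments to cover for every feature, before you add handset models or OS versions, which is why exhaustive coverage is out of reach for most teams.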
The trick is to pick just one—your most common testing issue—and come up with strategies to retest very quickly. Over time, you can expand the strategy to address other systems and combinations, especially if problems are reported from the field.
In the past, when asked to test new systems, I've used a variation of Jerry Weinberg's famous "orange juice test." That is, I reply, "Yes, we can do that, and it's going to cost you..." time, staff, and sometimes money for equipment and licenses.
To determine what systems to test first, you could take a hard look at your users. Sure, general browser statistics are available all over the Internet, and you could go with the most popular devices, but in most cases your particular users will have specific adoption patterns—different patterns than those of the general public. You could be better off pulling statistics from your own server logs, or getting to know the customer, than going with general-use statistics.
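As one way to mine those server logs, you can tally user-agent strings to see which devices your users actually carry. The sketch below is a minimal illustration; the keyword list and sample log lines are assumptions you would adapt to your own log format:

```python
# Sketch: tallying mobile user agents from web server access-log lines to
# see which devices *your* users favor. The keyword list and the sample
# log lines are illustrative assumptions; adjust them for your environment.
from collections import Counter

DEVICE_KEYWORDS = ["iPhone", "iPad", "Android", "BlackBerry", "Windows Phone"]

def tally_devices(log_lines):
    counts = Counter()
    for line in log_lines:
        for keyword in DEVICE_KEYWORDS:
            if keyword in line:
                counts[keyword] += 1
                break  # count each request once, first keyword wins
    return counts

# Fabricated example log lines, for demonstration only:
sample = [
    '1.2.3.4 - - "GET / HTTP/1.1" 200 "Mozilla/5.0 (iPhone; CPU iPhone OS 4_3)"',
    '5.6.7.8 - - "GET / HTTP/1.1" 200 "Mozilla/5.0 (Linux; U; Android 2.2)"',
]
print(tally_devices(sample).most_common())
```

A ranked tally like this gives you a defensible, data-driven answer to "which device do we test first?"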
In many cases, you can combine coverage. For example, you might test functionality heavily on the iPhone and then "skim test" the iPad and iPod touch. (You could apply a similar strategy to popular Android and BlackBerry devices, which bring their own massive expansion of combinations that most companies reduce to a small set.)
Beyond that first heavily supported device, a quick pass on a few very different devices can address the biggest remaining risks. The challenge is determining how much effort to put toward which devices, and when to stop.