
Applying Models in your Testing Process

By: Steven Rosaria, Harry Robinson

Abstract
Model-based testing allows large numbers of test cases to be generated from a description of the behavior of the system under test. Given the same description and test runner, many types of scenarios can be exercised and large areas of the application under test can be covered, thus leading to a more effective and more efficient testing process.

The Current State of Software Testing
The following techniques are common in conventional software testing:
• Handcrafted static test automation: the same fixed set of test cases is executed on the system under test.
• Random input generation: test runners that can simulate keystrokes and mouse clicks bombard the application under test with random strings and click on arbitrary spots on the screen. This category also includes test runners that call API functions in random order with random parameters. The test runners simply apply one input after the other; they don’t know what happens after an input is applied.
• Hands-on testing: an army of ad hoc testers executes test cases interactively against the system under test.

Let’s take a look at how these techniques might be applied to the Microsoft Windows® clock application. The clock comes with a menu that allows the user to toggle a second hand, date, and GMT time display. The display of the clock can be changed from analog to digital and vice versa, and the clock can be displayed without a title bar, where the user can double-click to go back to the full display.

There is an option to set the display font, which brings up a dialog box for this purpose; this option is available only when the clock is in the digital setting.

Finally there is a simple “about” box that can be closed.

Static test automation for the clock could take the form of a script that simply tries the same sequence of actions in exactly the same order each time. One such sequence might look like this:
• Start the clock
• Select Analog from the menu
• Select Digital from the menu
• Bring up the Font dialog box
• Select the third font in the list box
• Click OK in the Font dialog box
• Close the clock
Each script is entirely fixed and has to be maintained individually; there is absolutely no variation in the sequence of inputs.
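To make this drawback concrete, here is a minimal sketch of such a fixed script in Python, built on the pywinauto GUI-automation library. The executable name, window title, menu paths, and control names are assumptions for illustration; the point is that every action is hard-coded, so any change to the application breaks the script.

# A handcrafted static test script: the same actions, in the same order,
# every time it runs. All names below are assumptions, not verified against
# a real clock application.
from pywinauto.application import Application

def run_fixed_clock_script():
    app = Application().start("clock.exe")      # assumed executable name
    clock = app.window(title="Clock")           # assumed window title

    clock.menu_select("Settings->Analog")       # assumed menu path
    clock.menu_select("Settings->Digital")
    clock.menu_select("Settings->Set Font...")  # brings up the Font dialog box
    font_dlg = app.window(title="Font")
    font_dlg.ListBox.select(2)                  # select the third font in the list box
    font_dlg.OK.click()                         # click OK in the Font dialog box

    clock.close()                               # close the clock

if __name__ == "__main__":
    run_fixed_clock_script()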

One possible implementation of a random test runner simulates pressing the combination of ALT and each letter of the alphabet in turn, attempting to cover all menu bar options of an application under test. Many of those combinations may be undefined. Also, if a menu option brings up a new window that does not have a menu of its own, the test runner is essentially wasting time trying to find menu choices. In a similar fashion, the test automation may try to enter a text string in an input control that expects floating-point numbers. While these are valid tests in and of themselves, it would be nice to be able to control the inputs being applied. In other words, the test runners have no idea of what is physically possible in the software under test, or what to expect after executing an action; they just apply the input and keep going. Random test automation might, for example, produce the following action sequence:

• Start the clock
• Type the string “qwertyuiop” (even though there isn’t a text box anywhere to enter data)
• Press ALT-A, ALT-B, ALT-C, etc.; nothing happens, since the menu is activated only by ALT-S
• After the menu is activated, press F, which brings up the Font dialog box
• Press ALT-A, ALT-B, ALT-C, etc.; nothing happens, since these shortcut keys are not defined in the Font dialog box
• Click on random spots of the screen

If this goes on long enough, the test runner eventually gets the clock application back to its main window, and the whole process continues until it is stopped by uncovering a bug, by some form of timeout, or by a command from the tester.
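The behavior just described can be sketched in a few lines of Python. The snippet below is an illustration only, built on the pyautogui input-simulation library; the step count and the mix of actions are arbitrary assumptions. Note that the runner has no model of the application: it fires inputs and never checks what should happen next.

import random
import string
import pyautogui

def random_test_runner(steps=100, seed=None):
    # The entire run is driven by a single seed value
    rng = random.Random(seed)
    width, height = pyautogui.size()

    for _ in range(steps):
        action = rng.choice(["type", "alt_key", "click"])
        if action == "type":
            # Type a random string, whether or not a text box is present
            pyautogui.write("".join(rng.choices(string.ascii_lowercase, k=10)))
        elif action == "alt_key":
            # Press ALT plus a random letter; most combinations are undefined
            pyautogui.hotkey("alt", rng.choice(string.ascii_lowercase))
        else:
            # Click on an arbitrary spot on the screen
            pyautogui.click(rng.randint(0, width - 1), rng.randint(0, height - 1))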

Hands-on execution of test cases consists of going through all features of the clock application and verifying correctness by visually comparing the actual behavior with the expected behavior. For a simple application such as the clock, this approach actually gives reasonable coverage at a very low cost.

Test methods such as static test automation, random input generation, and hands-on testing have some major drawbacks:
• The system under test is constantly changing in functionality. This means that existing handcrafted static test automation must be adapted to suit the new behavior, which could be a costly process.
• Handcrafted static test automation implements a fixed number of test cases that can detect only specific categories of bugs, but the tests become less useful as these bugs are found and fixed. This phenomenon is referred to as the “pesticide paradox” [1]. A number of interesting bugs have been found in the clock application simply by varying the input sequence, as described in [2].
• Applying inputs at random makes it difficult to control input sequencing in an organized manner, which may lead to decreased test coverage. The entire sequence of random choices is indeed controlled by a seed value, but it is a process of trial and error to find a seed that ultimately results in a test sequence consisting entirely of actions that are valid for the system under test (a small sketch of this follows the list).
• Hands-on test execution does not scale well to complex systems.
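As a small illustration of the seed point above (reusing the hypothetical action names from the earlier random-runner sketch):

import random

# The same seed always reproduces the same "random" input sequence...
actions = ["type", "alt_key", "click"]
rng_a = random.Random(42)
rng_b = random.Random(42)
seq_a = [rng_a.choice(actions) for _ in range(5)]
seq_b = [rng_b.choice(actions) for _ in range(5)]
assert seq_a == seq_b  # a run can be replayed exactly

# ...but the tester cannot steer which sequence a given seed yields;
# finding a seed whose whole sequence happens to be valid for the
# application under test is pure trial and error.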
