Agile Regression Testing Using Record & Playback

By: Gerard Meszaros, Ralph Bohnet, Jennitta Andrea

Abstract. There are times when it is not practical to hand-script automated tests for an existing system before one starts to modify it. In these circumstances, the use of “record & playback” testing may be a viable alternative to hand-writing all the tests. This paper describes experiences using this approach and summarizes key learnings applicable to other projects.

1 Introduction
Scripting tests by hand, as required by JUnit or its siblings, is hard work and requires special skills. Writing functional (or acceptance) tests using JUnit is particularly hard because of all the data requirements. One possible alternative is the FIT framework [1], but this still requires someone to develop utility code to act as "glue" between the testing framework and the system under test (SUT). Each of these approaches requires that the system provide an interface through which the tests can be conducted.
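
To make the cost concrete, here is a minimal sketch of a hand-scripted JUnit functional test. Customer, Product, and Invoice are hypothetical stand-ins for a real system's API; the point is that the data setup dominates the test, while the verification is a single line.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A hypothetical hand-scripted functional test. Customer, Product, and
// Invoice are illustrative stand-ins for the system under test's API.
public class InvoiceTest {

    @Test
    public void invoiceTotalIncludesLineItemsAndTax() {
        // Most of the test is data setup, not verification.
        Customer customer = new Customer("ACME Corp", "AB"); // province determines the tax rate
        Product widget = new Product("WDG-1", 10.00);
        Invoice invoice = new Invoice(customer);
        invoice.addLineItem(widget, 3);

        // 3 x 10.00, plus an assumed 5% tax, gives 31.50.
        assertEquals(31.50, invoice.getTotal(), 0.001);
    }
}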

1.1 Catch-22 of XUnit-Based Testing
On several recent agile projects, we found ourselves needing to modify an existing system that had no automated tests. In one case, the manual retest effort was expected to involve several person-years of effort and many months of elapsed time. Hand-scripting JUnit (or equivalent) tests was considered too difficult because the systems were not designed for testability: business logic was embedded in the UI, there was no access to an application API, and there was no means of controlling the test setup (e.g. through "stubbing"). Refactoring the system for testability so that tests could be written was considered too risky without automated regression tests to verify that the refactoring had not introduced problems. And even if we had refactored, we were concerned that we could not hand-script all the necessary tests, complete with their expected outcomes, within the time and resource budget available to us.
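
A minimal sketch shows the shape of the testability problem we faced (a hypothetical Swing screen; all names are illustrative). Because the discount rule lives inside the event handler, there is no API for a test to call short of driving the whole UI:

import javax.swing.JButton;
import javax.swing.JTextField;
import javax.swing.JOptionPane;

// Business logic woven directly into a UI event handler: the discount
// rule can only be exercised by clicking the button on a live screen.
public class OrderScreen {
    private final JTextField quantityField = new JTextField();
    private final JButton submitButton = new JButton("Submit");

    public OrderScreen() {
        submitButton.addActionListener(event -> {
            // The business rule hides inside the UI layer.
            int quantity = Integer.parseInt(quantityField.getText());
            double discount = (quantity > 100) ? 0.10 : 0.0;
            double total = quantity * 9.99 * (1.0 - discount);
            JOptionPane.showMessageDialog(null, "Total: " + total);
        });
    }
}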

1.2 Looking for Alternatives to XUnit
This led us to investigate alternatives to XUnit-style testing. Most of this effort focused on "record & playback" (R&PB) test creation, an approach that involved recording functional tests on the system before we made the changes and regression-testing the refactored system by playing those tests back. The tests verified the overall functionality of the system and, in particular, much of the business logic it contained. Once the tests were recorded and verified (that is, successfully played back on the original system), we could start refactoring the system to improve its design. We felt this would allow us to quickly record a number of tests that we could play back at will. Since we had an existing version of the system to use as a "gold standard", we could leave the effort of defining the expected outcomes to the record & playback framework.
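
The essence of the "gold standard" idea can be sketched as a small harness: capture the trusted system's responses once, then diff the refactored system's responses against that recording. This is only an illustration of the concept, not the tooling we used, and SystemUnderTest is a hypothetical facade:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class GoldenMasterHarness {

    // A hypothetical facade over whatever interface the system exposes.
    public interface SystemUnderTest {
        String process(String input);
    }

    // Record mode: run each input through the trusted version and save the results.
    public static void record(SystemUnderTest trusted, List<String> inputs, Path file)
            throws IOException {
        Files.writeString(file, run(trusted, inputs));
    }

    // Playback mode: replay the same inputs against the refactored version
    // and compare with the recording made from the original system.
    public static boolean playback(SystemUnderTest refactored, List<String> inputs, Path file)
            throws IOException {
        return run(refactored, inputs).equals(Files.readString(file));
    }

    private static String run(SystemUnderTest sut, List<String> inputs) {
        StringBuilder out = new StringBuilder();
        for (String input : inputs) {
            out.append(input).append(" => ").append(sut.process(input)).append('\n');
        }
        return out.toString();
    }
}
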
This paper describes the options we considered, their advantages and disadvantages, and how we ended up regression testing the system. The work also led to an understanding of where R&PB can be used in a more general context than the specific projects we were dealing with.

2 Issues with R&PB Test Automation
R&PB-style testing predates XUnit-style testing by decades, and test automation folklore is rich with horror stories of failed attempts. This paper describes the critical success factors for making this style of testing work, the pitfalls to avoid, and best practices in R&PB test automation.

The "robot user" approach to test automation had received enough bad publicity from past attempts that we found it a hard "sell". We had to convince our sponsors that "this time it would be different" because we understood the limitations of the approach and had a way to avoid its pitfalls.

2.1 The “Fragile Test” Problem
Test automation using commercial R&PB or "robot user" tools has a bad reputation amongst early users of these tools. Tests automated using this approach often fail for seemingly trivial reasons. It is important to understand the limitations of this approach in order to avoid falling victim to its common pitfalls: Behavior Sensitivity, Interface Sensitivity, Data Sensitivity, and Context Sensitivity.
Behavior Sensitivity
If the behavior of the system is changed (e.g. the requirements are changed and the system is modified to meet the new requirements), any tests that exercise the modified functionality will most likely fail when replayed. This is a basic reality of testing regardless of the test automation approach used.
Interface Sensitivity
Commercial R&PB ("robot user") test tools typically interact with the system via the user interface. Even minor changes to the interface can cause tests to fail, even though a human user would say the test should still pass. This is partly what gave test automation tools a bad name in the past decade.
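
A small example makes this concrete. Using the Selenium WebDriver API as a modern stand-in for the commercial robot-user tools (the page structure and identifiers here are hypothetical), the same element can be located in a fragile or a robust way:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LocatorExamples {

    // Fragile: a recorded, position-based locator breaks when a panel is
    // added, a table row moves, or the layout shifts -- exactly the kind of
    // "trivial" UI change that fails a test a human would still pass.
    String fragile(WebDriver driver) {
        return driver.findElement(
                By.xpath("/html/body/div[2]/form/table/tbody/tr[3]/td[2]/input"))
            .getAttribute("value");
    }

    // More robust: a stable identifier survives cosmetic layout changes.
    String robust(WebDriver driver) {
        return driver.findElement(By.id("submit-order")).getAttribute("value");
    }
}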

Data Sensitivity
All tests assume some starting point; these are often called the “pre-conditions” or “before picture” of the test. Most commonly, this is defined in terms of data that is already in the system. If the data changes, the tests may fail unless great effort has been expended to make the tests insensitive to the data being used. More recent versions of the test automation tools provide mechanisms that can be used to make tests less sensitive. This has added a lot of complexity to these tools and, as a result, they often fail to live up to their promises. This has likely contributed to the bad reputation they have received.
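
One common defence is for each test to create its own preconditions rather than assume records already exist in a shared database. A minimal sketch, in which AccountRepository, InMemoryAccountRepository, and Account are illustrative names:

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class WithdrawalTest {
    private AccountRepository repository;
    private Account account;

    @Before
    public void createFreshTestData() {
        // An isolated, disposable fixture: the test no longer depends on
        // whatever data happens to be in a shared environment.
        repository = new InMemoryAccountRepository();
        account = repository.create("test-account", 100.00);
    }

    @Test
    public void withdrawalReducesBalance() {
        account.withdraw(40.00);
        assertEquals(60.00, account.getBalance(), 0.001);
    }
}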

Context Sensitivity
The behavior of the system may be affected by the state of things outside the system, such as the states of devices (e.g. printers, servers), other applications, or even the system clock (e.g. the time and/or date of the test).
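
The system clock is the classic case. One way to tame it, sketched here with java.time.Clock (DiscountService and its happy-hour rule are hypothetical), is to inject the clock so a test can pin "now" to a fixed instant:

import java.time.Clock;
import java.time.LocalTime;

public class DiscountService {
    private final Clock clock;

    // Production code passes Clock.systemDefaultZone(); a test passes
    // Clock.fixed(instant, zone) to make the behavior deterministic.
    public DiscountService(Clock clock) {
        this.clock = clock;
    }

    public double discountRate() {
        // A business rule that silently depends on "now".
        LocalTime now = LocalTime.now(clock);
        return now.isAfter(LocalTime.of(17, 0)) ? 0.15 : 0.0;
    }
}

With the clock injected, a test that would pass at 9 a.m. and fail at 6 p.m. becomes repeatable at any time of day.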
