
Defining Test Requirements for Automated Software Testing

By: Daniel J. Mosley

In an informal survey taken at www.csst-technologies.com, the most frequent response to the question, "As a software tester, what would facilitate your work the most?" was "Improved System Requirements Documentation" (28 responses, 33%). This result was followed closely by "Software Testing Methods/Process" (27 responses, 31%). Other results included "Software Testing Standards" (14 responses, 16%), "Improved Test Planning" (11 responses, 13%), and "More Time to Complete Testing" (6 responses, 7%). Clearly, the two improvements software test engineers most want are better software requirements and better testing methods.

Identifying and defining software requirements is, in general, a difficult job, and requirements management is seen as the key to success in software development (3). Over the years, the American National Standards Institute (ANSI) and the Institute of Electrical and Electronics Engineers (IEEE) have defined standards such as ANSI/IEEE 830-1984 Software Requirements Specifications, ANSI/IEEE 830-1998 Recommended Practice for Software Requirements Specifications, and IEEE Std 1362-1998 (incorporating IEEE Std 1362a-1998), IEEE Guide for Information Technology -- System Definition -- Concept of Operations (ConOps) Document. While the first two documents are technical standards for requirements specification by software engineers, the third is aimed at defining software requirements from the user's point of view.

A formal approach to specifying test requirements goes a long way toward satisfying the two major complaints cited in the survey above. It formalizes the translation of software requirements to test requirements and makes the process repeatable (Capability Maturity Model level 2). Even with the availability of the ANSI/IEEE standards, Gerrard (1) holds that the majority of requirements documents are "often badly organized, incomplete, inaccurate and inconsistent." He also believes that the majority of documented requirements are "untestable" as written because they are presented in a form that is difficult for "testers" to understand and test against their expectations. It is up to the test engineers themselves to translate those requirements into testable ones.

There are no standards documents to guide the specification of requirements for software testing (or the translation of existing software requirements specifications into test requirements). Refining existing software requirements into software testing requirements is a very difficult task. The information a test engineer must have in order to properly test a software component is highly detailed and very specific. Granted, there are different levels of and approaches to testing and test data specification (Black Box, Gray Box, and White Box views of the software), and the nature of the test requirements depends on the point of view of the test engineer. That point of view in turn depends on the level of depth and detail the software requirements specification contains. Thus, it is possible to specify test requirements from both Black Box and White Box perspectives.

In many instances, there is no software requirements specification document to be translated into test requirements. When this is true, the test engineer has two options: try to capture the test requirements on his or her own, or tell the project manager that the software cannot be tested because there are no documented requirements. Unfortunately, in the real world many software requirements specification documents are written only after the software has been constructed. When this is the case, test engineers must ferret out the test requirements themselves, prior to and during the testing activities.

Why are test requirements so important to the testing process? They are necessary because test engineers must predict the expected outcome of their tests, and because they must verify the results of each test. Neither activity can be done without pre-specified test requirements. As Myers (2) so aptly stated in his landmark book, The Art of Software Testing, "If there are no expectations, then there can be no surprises." What he meant is that there must be some set of "prior beliefs" about the behavior of the software if anything it does is to be conceived as incorrect. He also wrote, "If the expected result of a test case has not been predefined, chances are that a plausible, but erroneous, result will be interpreted as a correct result because of 'the eye seeing what it wants to see.'" Without test requirements, there is no way to predefine the software's behavior short of prestidigitation.
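Myers's point can be made concrete in executable form. The sketch below (Python; the compute_discount function and its discount rule are invented purely for illustration) fixes the expected result before the test runs, so verification becomes a comparison against a prior belief rather than an after-the-fact judgment about whether the output "looks right."

```python
import unittest

def compute_discount(order_total):
    """Hypothetical unit under test: 10% discount on orders over $100."""
    return order_total * 0.10 if order_total > 100 else 0.0

class DiscountTest(unittest.TestCase):
    def test_discount_over_threshold(self):
        # The expected result is predefined, not inferred from the output.
        expected = 15.0
        self.assertEqual(compute_discount(150.0), expected)

    def test_no_discount_at_threshold(self):
        # Boundary condition: exactly $100 earns no discount.
        self.assertEqual(compute_discount(100.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```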

As for the second issue, test results verification cannot be done unless there is a set of predefined application behaviors. This has been a major point of contention with software development managers. The development manager and the project leader always question how the test results will be verified; they want to see written proof of the test results. There are many ways to verify test results. For example, simply observing the behavior of the application under test (AUT) may be enough, but development managers want more substantial proof.

That is where test requirements play a crucial role. A test requirement is a testing "goal": a statement of what the test engineer wants to accomplish when implementing a specific testing activity. More than this, it is a goal defined against an AUT feature as documented in the software requirements specification. A test requirement is a step down from the software requirement, and it must be "measurable" in that it can be proved: the test engineers can qualitatively or quantitatively verify the test results against the test requirement's expected result. To achieve this, test requirements must be broken down into test conditions that contain much more detail than either the software requirements specification or the test requirement allows. The relationship from software requirement to test requirement can take three forms:

One-to-one - one test requirement per software requirement

One-to-many - one software requirement results in many test requirements

Many-to-one - more than one software requirement relates to one test requirement

By the same line of thinking, the relationship of test requirement to test condition can be one-to-one (one test condition per test requirement), one-to-many (one test requirement results in many test conditions), or many-to-one (more than one test requirement relates to one test condition). In both instances, many-to-many relationships are also possible, but they make testing so complex that the results are difficult to interpret, so this type of relationship should be avoided. When one occurs, consider using a decomposition approach to split the test requirement into one or more less complex requirements.
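These cardinalities amount to a traceability structure, and it can help to see one laid out explicitly. The sketch below (Python; all requirement IDs and mappings are invented for illustration) links software requirements to test requirements and test requirements to test conditions, and flags shared test requirements as candidates for the decomposition recommended above.

```python
from collections import defaultdict

# Hypothetical traceability data: software requirement -> test requirements.
sw_to_test = {
    "SR-1": ["TR-1"],          # one-to-one
    "SR-2": ["TR-2", "TR-3"],  # one-to-many
    "SR-3": ["TR-4"],          # many-to-one: SR-3 and SR-4 share TR-4
    "SR-4": ["TR-4"],
}

# Test requirement -> measurable test conditions.
tr_to_condition = {
    "TR-1": ["TC-1"],
    "TR-2": ["TC-2", "TC-3", "TC-4"],
    "TR-3": ["TC-5"],
    "TR-4": ["TC-6"],
}

def shared_test_requirements(sw_map):
    """Return test requirements traced to by more than one software
    requirement -- links worth reviewing before they grow into the
    many-to-many relationships the article warns against."""
    owners = defaultdict(list)
    for sw_req, test_reqs in sw_map.items():
        for tr in test_reqs:
            owners[tr].append(sw_req)
    return {tr: srs for tr, srs in owners.items() if len(srs) > 1}

print(shared_test_requirements(sw_to_test))  # {'TR-4': ['SR-3', 'SR-4']}
```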

Test requirements must also be associated with manual or automated test scripts. For example, a test requirement may produce 50 test conditions that represent functional variations of the baseline test data. Those data are stored as test data records in a text file. An automated (data-driven) test script navigates the AUT, reads in the data, inserts each record into the appropriate GUI screen, and saves the record to the AUT database. For test coverage metric purposes, it is important to "attach" the test script to the test requirement.
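A minimal sketch of that data-driven pattern follows (Python; the file name, field layout, and submit_record stub are assumptions, since the original describes a GUI-driving script in a commercial tool rather than any particular code). Each row of the file is one test condition, and naming the file after the test requirement's ID is one simple way to "attach" the script's data to the requirement for coverage tracking.

```python
import csv

def submit_record(record):
    """Stand-in for the script logic that would navigate the AUT's GUI,
    type the record's fields into the screen, and save it to the database."""
    print(f"submitting: {record}")

# Hypothetical file of test conditions for test requirement TR-017; each
# row is a functional variation of the baseline test data.
with open("TR-017_conditions.csv", newline="") as f:
    for record in csv.DictReader(f):
        submit_record(record)
```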

From Software Requirement to Test Requirement to Test Condition: An Automated Approach

In automated development tool suites, such as Rational Software Inc.'s Enterprise Suite, requirements management tools are used to specify a number of different types of software requirements. Automated test tool suites such as Rational Suite Test Studio 1.0/1.5 offer the test engineer the ability to work with software requirements entered via Rational RequisitePro and to translate them into test requirements for the design and construction of automated test scripts.

Rational's development/test tool suites are designed around an internal process, the Rational Unified Process (RUP) 1.0 (4). RUP describes several general software requirement types:

Functional Requirement - defines the input and output behavior of a system

Usability Requirement - defines aesthetics and consistency in the user interface, documentation, and training materials

Reliability Requirement - defines failure and recoverability

Performance Requirement - defines response time

Supportability Requirement - defines testability and maintainability

RUP also describes several construction-oriented requirement types that take physical constraints into account:

Design Requirement - places limitations on the software design

Implementation Requirement - places limitations on software construction

Interface Requirement - places limitations on software interaction with other systems, users, etc.

Physical Requirement - places limitations on implementation, such as hardware type

In addition, Rational RequisitePro offers the ability to specify custom requirement types designed around specific project needs. As far as test requirements go, RUP 1.0 asserts the following:

"Test cases are ways of stating how we will verify what the system actually does, and therefore they should be tracked and maintained as requirements. We introduce the notion of requirements type to separate these different expressions of requirements."

RequisitePro has two default requirement types that relate to testing: a "Test Requirement" type (prefixed with TR) and a "Test Case Requirement" type (prefixed with TCS). Both kinds of test requirements can be entered either through the RequisitePro interface or through the TestManager GUI. Requirements can also be gleaned from Microsoft Word documents or from plain text documents using a keyword search-and-include algorithm.
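The article does not describe how that search-and-include mechanism works internally, but a keyword-driven pass over a plain text document might look something like the sketch below (Python; the keyword list, file name, and sentence splitting are all assumptions, not RequisitePro's actual algorithm).

```python
import re

# Hypothetical requirement-signalling keywords; a real tool would let the
# user configure these before scanning the document.
KEYWORDS = re.compile(r"\b(shall|must|will|should)\b", re.IGNORECASE)

def extract_candidates(path):
    """Return sentences containing a requirement keyword -- candidates for
    inclusion as test requirements in the repository."""
    with open(path) as f:
        text = f.read()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if KEYWORDS.search(s)]

for candidate in extract_candidates("srs_draft.txt"):
    print(candidate)
```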
