Pros and cons of requirements-based software testing

By: Robin F. Goldsmith, JD

For something that is essential, fairly fundamental and seemingly straightforward, requirements-based software testing sure does generate a lot of discussion. Rather than representing opposite extremes of the same continuum, pro and con camps come at the topic from disparate perspectives. Advocates of requirements-based testing tend to be analytical, whereas opponents tend to couch their objections in more emotional terms. Each approach has its own strengths and issues. We'll start by examining those that affect advocates.

Appropriating the "requirements-based testing" term

Several prominent testing authorities have endeavored to equate the term "requirements-based testing" with their own favored technique. Cited methods include designing tests based on use cases, decision tables and decision trees, and logic diagramming techniques such as cause-and-effect graphing. Each of these gurus essentially claims that their method is the one and only true requirements-based testing.

Since the various gurus can't all have the one and only technique, such competing claims probably contribute to general doubt and confusion about requirements-based testing, and they may turn some people off to it, or at least to the advocated methods. Meanwhile, those who buy the premise that a particular promoted technique equals requirements-based testing undoubtedly miss the benefits of other requirements-based testing techniques.

The most important tests

Quite simply, requirements-based software tests demonstrate that the product/system/software (I'll subsequently use just "product" to refer to system and software too) meets the requirements and thereby provides value. Consequently, requirements-based tests are the most important tests, for only they confirm that the product is doing something useful.

Requirements-based tests are "black box" (or functional) tests: so long as the relevant inputs and/or conditions produce expected results that demonstrate the requirements are met, how the product achieves those results is of no concern. Engineers describe this as if the processing occurred inside an opaque "black box." Most of the tests that professional testers and users run are black box tests, and they use many different techniques to design them. No single technique by itself can be said to be the one and only method of requirements-based testing.
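
For instance, a black box test needs only inputs and expected outputs. Here is a minimal sketch in Python; the calculate_shipping function and its pricing rule are invented for illustration:

    # Black-box test: known inputs must produce expected outputs; how
    # the function computes them is deliberately of no concern.
    def calculate_shipping(weight_kg, express):
        # Hypothetical implementation, shown only so the example runs.
        base = 5.00 + 1.25 * weight_kg
        return round(base * (2.0 if express else 1.0), 2)

    def test_standard_shipping():
        assert calculate_shipping(4, express=False) == 10.00

    def test_express_shipping_doubles_cost():
        assert calculate_shipping(4, express=True) == 20.00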

Poorly defined requirements

Here's the rub. Requirements are almost always poorly defined, or at least not defined as well as is desirable. Requirements-based tests can be no better than the requirements they are testing. Most developers, testers, users and others recognize this, but the problem is usually worse than even the most critical of them realizes.

When requirements are poorly defined, developers are likely to develop the wrong things and/or develop things in the wrong way; testers generally are not able to tell any better than the developers what's right and wrong.

Some authorities assume that documenting requirements with use cases automatically solves this problem. The premise is that one test of each scenario path through the use case suffices to fully test all the ways the use case can be executed. This premise fails to take into account that entire use cases can be missed, that a use case can have numerous wrong or missing paths, and that each path can be invoked in multiple ways, each of which needs to be demonstrated.
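
To illustrate, consider a hypothetical "withdraw cash" use case. Even its two obvious paths each need several invocations with different data, including boundary values. This sketch uses pytest; the withdraw function is invented:

    import pytest

    # Hypothetical "withdraw cash" behavior, for illustration only.
    def withdraw(balance, amount):
        if amount > balance:
            return balance, "insufficient funds"
        return balance - amount, "dispensed"

    # One test per path is not enough: each path is exercised with
    # several input combinations, including boundaries.
    @pytest.mark.parametrize("balance,amount,expected", [
        (100, 20,  (80, "dispensed")),            # success path, typical amount
        (100, 100, (0, "dispensed")),             # success path, exact balance
        (100, 101, (100, "insufficient funds")),  # alternate path, boundary
        (0,   1,   (0, "insufficient funds")),    # alternate path, empty account
    ])
    def test_withdraw_paths(balance, amount, expected):
        assert withdraw(balance, amount) == expected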

The advocated decision tree/table and logic diagramming test design techniques represent systematic, disciplined ways to identify more thorough tests of the requirements. They work largely by improving the defined requirements, making explicit aspects that previously had been implicit at best.
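
As a simple sketch of the decision table idea, the invented loan-approval rule below is made explicit as a table of condition combinations, each of which becomes a test condition:

    # Decision table for a hypothetical loan-approval rule.
    # Each row: (good_credit, income_verified) -> expected decision.
    DECISION_TABLE = [
        (True,  True,  "approve"),
        (True,  False, "refer"),
        (False, True,  "refer"),
        (False, False, "decline"),
    ]

    def decide(good_credit, income_verified):
        # Illustrative implementation of the rule under test.
        if good_credit and income_verified:
            return "approve"
        if good_credit or income_verified:
            return "refer"
        return "decline"

    def test_every_decision_table_row():
        # Each explicit row of the table is demonstrated by a test.
        for good_credit, income_verified, expected in DECISION_TABLE:
            assert decide(good_credit, income_verified) == expected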

These particular test design techniques help, but they are usually less effective than presumed or recognized. There is a tendency to over-rely on a chosen technique, which can result in missing other important tests. Paradoxically, because these disciplined techniques reveal so many additional test conditions, applying them can become overwhelming. And even with the greater number of identified tests, the typical focus of these techniques frequently leaves significant requirements/test conditions overlooked.

Although the specifics are beyond the scope of this article, the good news is that the proactive testing methodology includes a number of more powerful test design techniques that identify considerably more (and often more important) requirements and test conditions that are ordinarily overlooked. Also, the methodology facilitates managing and prioritizing a far greater number of identified test conditions.

REAL requirements vs. design

Another big reason that requirements-based test design techniques often miss important requirements is that not all black box tests demonstrate that the requirements have been met. In fact, most black box tests are actually demonstrating that the product conforms to its design. Realize that designs often are referred to as "requirements."

REAL requirements are business deliverable "whats" that provide value when delivered by some product "how." On the other hand, what people ordinarily refer to as "requirements" are actually requirements of the high-level design, the product "how," which provides value if and only if the product satisfies the REAL business requirement "whats." This is especially true of use cases: despite being called the "user's requirements," they are usually usage requirements of the product expected to be created. For further explanation of these important distinctions, see my book and previous article on using REAL business requirements to avoid requirements creep.

Traditional testing, which I refer to as development or technical testing, is intended to demonstrate that the product indeed conforms to its design. It is necessary but not sufficient, because the design is merely presumed to be responsive to the requirements. Requirements-based testing also needs to demonstrate that the product is not only built as designed but actually satisfies the REAL business requirements that provide value.

Therefore, it's generally considered helpful to trace the requirements to the tests that demonstrate those requirements have been met. However, typical traceability matrices start with the product requirements. To be truly effective, one should start with the business requirement "whats," tracing them to the high-level design product requirement "hows," and in turn tracing the product requirements to the detailed design, implementation components, and tests of the implemented product.
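
In its simplest form, such a top-down traceability chain can be represented as a mapping from each business requirement "what" through the product requirement "hows" to the tests that demonstrate them. A minimal sketch follows; all identifiers are invented:

    # Top-down traceability: business requirement ("what") ->
    # product requirements ("hows") -> tests. All IDs are hypothetical.
    TRACEABILITY = {
        "BR-01 Reduce order-entry errors": {
            "PR-07 Validate postal codes on entry": ["test_postal_valid",
                                                     "test_postal_invalid"],
            "PR-08 Require confirmation of order totals": ["test_confirm_total"],
        },
    }

    def untested_product_requirements(matrix):
        # Report product requirements with no tests tracing to them.
        return [pr
                for hows in matrix.values()
                for pr, tests in hows.items()
                if not tests]

    assert untested_product_requirements(TRACEABILITY) == []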

Effective user acceptance testing is intended to demonstrate that the product as designed and created in fact satisfies the REAL business requirements, which therefore need to be traced directly to the user acceptance tests that demonstrate they've been met.

Commercially available automated tools can reduce some of the effort involved in creating and maintaining traceability matrices. A caution, though: business requirements are hierarchical and need to be driven down to detail, usually to a far greater extent than people are accustomed to. The resulting large numbers of individually identified, itemized business requirements, and the corresponding detailed product requirements and tests, may make maintaining full traceability impractical. Thus, at least initially, it may be advisable to keep the traceability matrix high-level.

Clarity and testability

To improve poorly defined requirements, a number of advocates consider requirements-based software testing to include reviews of the requirements' clarity and testability. A requirement lacks testability when one cannot create tests to demonstrate that the requirement has been met. The most common reason a requirement is not testable is that it is not sufficiently clear. Unclear/untestable requirements are likely to be implemented incorrectly; regardless, testers have no way to tell.
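
For example, "the system shall respond quickly" cannot be tested as stated, whereas a clarified, measurable restatement can be checked directly. A minimal sketch, assuming the requirement has been clarified to "search results return within 2 seconds" (the search function and threshold are invented):

    import time

    def search(query):
        # Hypothetical operation under test.
        return [item for item in ["alpha", "beta"] if query in item]

    def test_search_responds_within_two_seconds():
        # Testable form: "search results return within 2 seconds."
        # The vague form, "responds quickly," cannot be asserted.
        start = time.perf_counter()
        search("alp")
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0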

Conventional wisdom widely holds that unclear/untestable (product) requirements are the main cause of creep -- expensive changes to requirements that had supposedly been settled. In fact, much creep occurs because the product requirements, however clear and testable, don't meet the REAL business requirements. The main reason is that the REAL business requirements usually have not been defined adequately, largely because people think the product requirements are the requirements.

Moreover, a requirement can be clear and testable but wrong, and clarity/testability is irrelevant for overlooked requirements. Typical requirements reviews tend to use one or two techniques that are unlikely to detect wrong and overlooked requirements.

In contrast, the proactive testing methodology uses more than 21 techniques to review requirements, including many more powerful techniques that indeed can detect incorrect and overlooked requirements. Similarly, more than 15 powerful techniques are used to review designs. In addition, proactive testing has ways to more fully define use case test conditions and ways to make seemingly untestable requirements testable.

By being aware of the issues with requirements-based testing as traditionally advocated, we can apply powerful improved review and test design methods to truly enhance the effectiveness of these most important tests.

Critics of requirements-based testing

A key requirements-based testing issue is that some prominent voices within the testing community deride it, often loudly and with great emotion. They say that the rapid pace of constantly changing business and technology makes it essentially impossible to define requirements; therefore trying to test based on defined requirements is a waste of time. They say instead of spending time trying to define the requirements, just go code and run tests. Such rationale can be very appealing, especially in organizations that seem to have lots of requirements changes.

I'll caution, though, that this reasoning has numerous pitfalls. First, it creates a self-fulfilling prophecy that requirements cannot be defined adequately. Ironically, both developers and testers often welcome this opportunity to get busy, even though they both know from repeated painful experience that their biggest source of mistakes and rework is inadequately defined requirements, which this approach assures.

Second, foes of requirements-based testing often speak monolithically as though the only alternative to their favored just-go-code-and-test approach is interminable analysis paralysis in a mindless and inflexible exaggerated "waterfall" chasing the impossible task of getting every possible requirement defined perfectly. I'd contend that such blaming of the methodology is usually the excuse for inept development, not the cause of it. Capable developers don't follow any methodology so slavishly. Reasonableness, not perfection, is the real practical standard.
