Testing in the Software Development Life Cycle
The purpose of testing is to reduce risk.
The unknown factors within the development and design of new software can derail a project, and even minor risks can delay it. By using a cycle of testing and resolution you can identify the level of risk, make informed decisions and, ultimately, reduce uncertainty and eliminate errors.
Testing is the only tool in a development team's arsenal which reduces defects. Planning, design and QA can reduce the number of defects which enter a product, but they can't eliminate any that are already there. And any kind of coding will introduce more errors, since it involves changing something from a known good state to an unknown, unproved state.
Ideally, testing should take place throughout the development life cycle. More often than not (as in the waterfall model) it is simply tacked onto the back end. If the purpose of testing is to reduce risk, this means piling up risk throughout the project to resolve at the end, which is in itself a risky tactic.
It could be that this is a valid approach. Letting developers focus first on building software components and only later switch to rectifying issues allows them to compartmentalise their effort and concentrate on one type of task at a time.
But as the lag between development and resolution increases, so does the complexity of resolving the issues (see "Test Early, Test Often" in the next chapter). On any reasonably large software development project this lag is far too long. It is better to spread the different phases of testing throughout the life cycle, to catch errors as early as possible.
Traceability
Another function of testing is (bizarrely) to confirm what has been delivered.
Given a reasonably complex project with hundreds or perhaps thousands of stakeholder requirements, how do you know that you have implemented them all? How do you prove during testing or launch that a particular requirement has been satisfied? How do you track the progress of delivery on a particular requirement during development?
This is the problem of traceability.
How does a requirement map to an element of design (in the technical specification, for example), how does that map to an item of code which implements it, and how does that map to a test which proves it has been implemented correctly?
On a simple project it is enough to build a table which maps this out. On a large-scale project the sheer number of requirements overwhelms this kind of traceability. It is also possible that a single requirement may be fulfilled by multiple elements in the design, or that a single element in the design satisfies multiple requirements. This makes tracking by reference number difficult.
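As a rough illustration, a minimal sketch of such a mapping table is shown below in Python. All of the identifiers (REQ-001, DES-2.1, the file and test names) are invented for the example and do not come from any real project.

    # A hypothetical traceability table for a small project: each requirement
    # is mapped to the design elements, code and tests claimed to cover it.
    traceability = {
        "REQ-001": {"design": ["DES-2.1"],
                    "code":   ["src/login.py"],
                    "tests":  ["test_login_accepts_valid_password"]},
        "REQ-002": {"design": ["DES-2.1", "DES-3.4"],   # one requirement, two design elements
                    "code":   ["src/login.py", "src/audit.py"],
                    "tests":  ["test_failed_logins_are_audited"]},
    }

    # The same design element (DES-2.1) appears under both requirements, which is
    # why tracking by a single reference number per requirement quickly breaks down.
    def requirements_for_design(element):
        return [req for req, links in traceability.items()
                if element in links["design"]]

    print(requirements_for_design("DES-2.1"))   # -> ['REQ-001', 'REQ-002']

Maintaining even this small table by hand becomes impractical once the many-to-many relationships multiply, which is the point at which tool support starts to pay for itself.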
If you need a better solution, I would suggest an integrated system to track requirements for you. There are off-the-shelf tools available which use databases to track requirements. These are then linked to similar specification and testing tools to provide traceability. A system such as this can automatically produce reports which highlight undelivered or untested requirements. Such systems can also be integrated with SCM (Software Configuration Management) systems and can be very expensive, depending on the level of functionality.
See the section on "change management" for a discussion of these systems.