
Other Metrics for Testing and Development

Just as developers are responsible for errors in their code, so testers should be responsible for
errors in their defect reports. A great deal of time can be wasted by both development and test
teams in chasing down poorly specified defect reports, or defects that are reported in error.
A measure of testing effectiveness therefore becomes the number of defect reports rejected by
development. You should minimise this value, or track it as a proportion of defects logged, and
target each tester and the test team overall with a 0% goal: no errors.
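As a minimal sketch of how this could be tracked (the defect record structure and field names here are illustrative assumptions, not from any particular defect tracking tool):

```python
# Sketch: defect report rejection rate as a proportion of defects logged.
# The record structure and field names are illustrative assumptions.
def rejection_rate(defects):
    """Return the percentage of logged defect reports rejected by development."""
    if not defects:
        return 0.0
    rejected = sum(1 for d in defects if d["status"] == "rejected")
    return 100.0 * rejected / len(defects)

defects = [
    {"id": 101, "status": "fixed"},
    {"id": 102, "status": "rejected"},  # e.g. could not be reproduced
    {"id": 103, "status": "open"},
]
print(f"Rejection rate: {rejection_rate(defects):.1f}%")  # 33.3% here; the target is 0%
```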
You can also target other metrics, like response times to defect reports.
If developers take too long to respond to a defect report they can hold the whole process up. If
they take too long to fix a defect, the same can happen. Testers too can sit on defects, failing to
retest them and holding up the project.
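One possible sketch of a response-time metric, assuming each defect report carries timestamps for when it was logged and when development first responded (the field names and figures are assumptions for illustration):

```python
from datetime import datetime

# Sketch: average developer response time to defect reports, in hours.
# Timestamp field names ("logged", "responded") are illustrative assumptions.
def average_response_hours(defects):
    deltas = [
        (d["responded"] - d["logged"]).total_seconds() / 3600.0
        for d in defects
        if d.get("responded") is not None
    ]
    return sum(deltas) / len(deltas) if deltas else None

defects = [
    {"id": 201, "logged": datetime(2024, 3, 1, 9, 0), "responded": datetime(2024, 3, 1, 15, 0)},
    {"id": 202, "logged": datetime(2024, 3, 2, 10, 0), "responded": datetime(2024, 3, 3, 10, 0)},
]
print(f"Average response time: {average_response_hours(defects):.1f} hours")  # 15.0 hours
```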
But be careful of using these measures too prescriptively.
Defects are funny beasts. They are inconsistent and erratic. They defy comparisons. One "minor"
severity defect might look much like another, yet take ten times as long to diagnose and resolve.
The idiosyncrasies of various software products and programming languages might make one class
of defects more difficult to fix than another.
And while these differences will probably average out over time, do you really want to penalise a
developer or tester because they get lumped with all the difficult problems?
Food for thought...
You're measuring defect injection rate.
You're measuring defect detection rate.
If you do this for a long time you might get a feel for the 'average' defect injection rate, and the
'average' defect detection rate (per system, per team, per whatever). Then, when a new project
comes along, you can try and predict what is going to happen.
If the average defect injection rate is 0.5 defects per developer hour, and the new project has 800
hours of development, you could reasonably expect 400 defects in the project. If your average defect
detection rate is 0.2 defects per tester hour, it's probably going to take you 2000 tester hours to find
them all. Have you got that much time? What are you going to do?
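The arithmetic above can be written out as a rough sketch; the rates and hours are the illustrative figures from the example, not measured data:

```python
# Sketch of the prediction in the text: estimated defects and tester hours
# from historical 'average' injection and detection rates. The figures are
# the illustrative ones from the example above, not real project data.
injection_rate = 0.5       # defects introduced per developer hour (historical average)
detection_rate = 0.2       # defects found per tester hour (historical average)
dev_hours = 800            # planned development effort for the new project

expected_defects = injection_rate * dev_hours            # 0.5 * 800 = 400 defects
tester_hours_needed = expected_defects / detection_rate  # 400 / 0.2 = 2000 hours

print(f"Expected defects: {expected_defects:.0f}")
print(f"Tester hours needed: {tester_hours_needed:.0f}")
```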
But be careful: use these metrics as 'dashboard' numbers to highlight potential issues, but don't get
too hung up on them. They are indicative at best.
Things change, people change, software changes.
So will your metrics.