Expert Software Test Practices Interviews
By: Rex Black
CAI: Could you tell us a little about yourself, your background, and what you
are working on today?
REX BLACK: I got started in software and systems engineering back in 1983 doing
work on financial applications that were based on CP/M and MP/M micros. Shortly after
that, I moved over to Unix-based applications. I continued doing programming and
system administration work for a number of years before eventually falling into
software testing in 1987.
I initially worked on automated test scripts, test execution and test project
management for an independent software test lab. Subsequent to that, I found
employment as a software test manager for several different test companies that
operated within a variety of different technological environments. They were good
companies with good technology but bad business plans. The result was that by 1994 I
found myself going through my second layoff in a two-year period. At that point it
occurred to me that if I was going to be out of work because of bad business decisions,
I'd like them to be my own bad business decisions. Consequently, I started my own
consulting company, Rex Black Consulting Services. And I am happy to say that I have
been busy ever since.
Over the years, I've worked with a lot of different technologies, including end-user and
IT applications, financial and medical applications, embedded systems, consumer
electronics and hardware. My experience really runs the gamut.
Lately, our business seems to be breaking down into three main areas. The first area
is training - with a special emphasis on certification using the Foundation and Advanced
syllabi from the International Software Testing Qualifications Board. The second area is
consulting - classical consulting assessments where we come in and talk to people
about how they are doing their software testing and how they might be able to make
improvements. The third area is staff augmentation, either onsite for our clients or
offshore at our operation in New Delhi. We have approximately 30 testers in India
working for us right now, through our Pure Testing sister company.
Our business is completely international. Just yesterday I signed an agreement with a
marketing partner in Brazil and I hope to sign another agreement with a marketing
partner in South Africa later this week. Once we sign the South African contract we'll
be doing business on all six continents. I don't count Antarctica, because the penguins
have no budget. They can't hire me and we can't work there.
CAI: In your book Critical Testing Processes, you talk about having
encountered certain common testing themes - good and bad - over the course
of your career. Could you highlight some of these for us?
REX BLACK: A particularly good theme that I sometimes see is high project support
levels for software testing and quality. Whenever software testing is viewed as an ally
and supported by management, that's a good thing. Another good theme I sometimes
see is constancy of purpose at the project management level. Projects that are very
reactive and constantly changing direction tend to be very problematic for software test
groups. On the other hand, projects that maintain focus and direction are much easier
for testers and will have a very good effect on the software testing process.
Bad themes would include unrealistic or inconsistent testing expectations from
management or from other project team members. One way to very quickly assess the
expectation levels around software testing in your organization is to ask your project
managers what they think the defect detection percentage ought to be for an
independent software test group that is testing a system prior to its release. The very
best software testing organizations will achieve a 95% defect detection percentage, or
perhaps a little higher. However, you often run into situations where managers, when
asked what the defect detection percentage should be, respond with a blank look and
ask, "Shouldn't they find everything?" That's indicative of unrealistic software testing
expectations. When unrealistic expectations are present, regardless of how well a job
is done, the efforts will always be seen as a failure in somebody's eyes. That's bad.
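The defect detection percentage mentioned here is simple to compute. The sketch below is a minimal Python illustration, assuming the common definition: the share of all known defects that the test team caught before release, with "field" defects being those that escaped to customers (function name and figures are mine, for illustration only):

```python
def defect_detection_percentage(found_in_test, found_in_field):
    """Percentage of all known defects caught by the test team
    before release; defects found in the field are escapes."""
    total = found_in_test + found_in_field
    if total == 0:
        return 0.0
    return 100.0 * found_in_test / total

# Example: 190 bugs caught in test, 10 escaped to the field.
print(defect_detection_percentage(190, 10))  # 95.0
```

On these numbers the team hits the 95% mark that the very best testing organizations achieve; a manager expecting 100% is, by this measure, expecting something no real team delivers.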
A less significant bad theme that comes up repeatedly is when personal conflicts exist
between certain software testers and certain developers. I've had software testers tell
me that they take delight in targeting specific developers and finding lots of bugs in
their work just to make them feel stupid. Obviously, this is very corrosive to teamwork.
I've also seen developers with extremely defensive attitudes towards testers who tend,
as a result, to attack their software testers any time an issue is found. You can see
how these things feed off of each other.
CAI: In your book, you outline 12 critical testing processes. Why is process so
important in software testing? Could you highlight your answer with a few examples from this list of 12?
REX BLACK: The 12 software testing processes outlined in my book are critical in the
sense that they all impact the efficiency and effectiveness of either the testing effort
itself or the overall project.
For example, one of the 12 critical testing processes is bug reporting. Over the course
of a medium-sized project, there could easily be 100, 200, 1000 or even 5000 bug
reports filed. A process repeated that many times presents many opportunities for
efficiency improvements. In this case, two opportunities for improvement are injection
of fewer bugs and better communication between testers and developers.
Another critical testing process is the overall reporting of results. This entails the
communication to management of what was found, what tests were run (or not run),
what risks we've covered (or not covered), where exactly the system might be facing
the most serious problems, and what kinds of defects are expected to be found. The
results reporting process is how the test team delivers value to management. The goal,
in fact, of the results reporting process is to give management the necessary
information they need to make very important and difficult decisions regarding product
ship dates. In other words, the reporting of results from testing addresses the
quality dimension of the project.
If you look at the dimensions of a project (the features, the schedule, the budget and
the quality) you will see that it is fairly easy for a manager to ascertain where the
project stands in terms of features, schedule and budget - but not in terms of quality.
Whatís tricky, especially towards the end of a project, is ascertaining quality. However,
if the reporting of results from testing is done right, the test team can help the project
management team get the product out the door at just the right moment.
CAI: How would you say most IT organizations rate in terms of their software
testing practices?
REX BLACK: I'd have to say that most organizations rate pretty low, even the ones
that train their software testers. I often meet people with years of paid software
testing experience, people who could easily be considered software testing
professionals, who aren't familiar with the basics of test design, bug reporting, or how
testing adds value. They often aren't aware of testing concepts that have been written
down and published recently or in some cases for over 25 years.
CAI: Why do you think this is the case?
REX BLACK: The software testing field has not done a very good job of building on the
foundations that were put in place back in the 70s and 80s by people like Boris Beizer,
William Hetzel and Glenford Myers.
Moreover, software testing professionals, in general, don't do a very good job of
connecting software testing to business value. As a profession, we simply don't do a
good job of communicating the value of software testing to project managers and other
senior executives. Because nobody in management really understands how software
testing ultimately connects to value, software testing tends to be the first thing that
gets put on the chopping block when other priorities arise.
CAI: How does one connect software testing to business value? How does one
best communicate this?
REX BLACK: I think a smart way to connect testing with overall value is to look at the
alignment of defect detection, defect removal, and customer quality satisfaction.
To this end, several key questions need to be answered: 1) what defect detection
percentage should the organization have for unit testing; 2) should there be regression
testing, and how effective should it be; and 3) should there be code reviews, design
reviews and requirements reviews, and how effective should these be?
I had an interesting situation that came up when I was doing an assessment for a client
last year. Their defect detection percentage was good, in the 90-95% range for each
project, but their defect removal effectiveness was fairly low, down in the 80% range.
In other words, a sizeable percentage of the defects that were being detected during
testing were being deferred rather than removed. However, when I talked to
management about this, and pointed out the disconnect, they said, "We're not
over-deferring, we're over-testing. We're testing things that don't matter." In order to get
to the heart of this, I studied the results of the most recent customer satisfaction
surveys; specifically, the surveys that were focused on various quality characteristics.
In doing this, I noticed that the numbers were well below where they wanted them to
be. Consequently, my response to management was, "No, you really are over-deferring.
If you were over-testing, customer satisfaction with quality would not be so
low. The low customer satisfaction suggests that you are over-deferring defects."
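The defect removal effectiveness figure in this anecdote has an equally simple arithmetic behind it. A minimal Python sketch, assuming it is defined as the share of detected defects actually fixed (rather than deferred) before release; the function name and figures are mine, chosen to mirror the 80% in the story:

```python
def defect_removal_effectiveness(detected, removed):
    """Share of detected defects actually fixed before release;
    the remainder were deferred."""
    return 100.0 * removed / detected if detected else 0.0

detected, removed = 500, 400
print(defect_removal_effectiveness(detected, removed))  # 80.0
print(detected - removed, "defects detected but deferred")
```

The gap between a 90-95% detection rate and an 80% removal rate is the over-deferral the customer satisfaction surveys exposed.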
Another thing to keep in mind when connecting software testing to overall value is who
you need to be communicating with. I mentioned earlier that I started my own
consulting company after suffering a second layoff in two years due to downsizing. An
additional variable was actually at play. I was inadvertently ignoring the needs of the
field support people, who obviously had a very strong and legitimate need to
understand what was going on. It was important that their concerns be addressed. But
this didn't happen. Consequently, the field support people were not confident in the
testing process, primarily because they didn't believe that the testing would address
their needs. In the end, what that meant was that they were very much in support of
the layoff of the test team, including me. What I learned from this is that it is very
important to identify all the stakeholders on a project and to make sure that all of them
are being well served by the testing. That's a critical part of the value communication
process.
A final thing you want to keep in mind when trying to measure, and ultimately
communicate, the value of your testing efforts is the return on investment. You can do
this by looking at the cost of quality, which is a way of looking at the cost of defect
detection and repair prior to release (as opposed to post-release). You can measure
the efficiency of your test team by studying this differential. If it costs, for example,
$500 on average to find and remove a bug prior to release (as opposed to $15,000 to
find and remove a bug post-release) then every defect that the test team finds prior to
release will save your organization $14,500. Thatís a fairly easy calculation to make.
Management can then use this kind of data to make hard-headed economic decisions
about how much money to invest in testing.
CAI: For organizations that want to quantify the ROI being generated from
their own testing efforts, are there specific methods you might be able to
recommend?
REX BLACK: In my book Critical Testing Processes, I spend an entire chapter outlining
how to quantify testing ROI for any given organization. Any organization that wants to
measure the value of their software testing effort could follow that process and come
up with the return on their test investment. There are also a number of articles and
presentations that are out on the library page of our website,
www.rexblackconsulting.com, that talk about this issue of testing ROI and how much
money can be saved through testing.
In terms of overall industry numbers, a fellow named David Rico once did a study for
the Department of Defense in which he came up with 800% as an average return on
testing investment. That study can probably be found on the internet simply by
searching on "David Rico software process improvement." Capers Jones also has quite
a bit of data on industry best practices, as well as the risks associated with poor
practices, and he includes poor testing as one of the major software project risks in his
book Estimating Software Costs. The increased risk to a project that is created by poor
testing, particularly for large and complex projects, is really amazing. The risk of
project failure can easily double or quadruple.
CAI: Could you quantify for our readers, more specifically, the impact that
software testing improvement can have on an organization, in terms of ROI?
REX BLACK: Yes. You can look at the cost of quality model, which is imperfect in that
it doesnít capture all of the value delivered by testing, but which nevertheless captures
one of the primary value components - the detection of bugs that can be fixed. Using
the cost of quality approach, it is not unusual for me to see a return on testing
investment in the 700-800% range.
I once conducted an assessment for a bank. We found that the return on their testing
investment, in terms of money saved from the avoidance of field failures that might
otherwise have occurred, was 3,500%. Actually, I think it would have been higher but
once we hit 3,500% I went to the sponsor of the assessment project and explained that
we should probably stop counting since if we came up with a number that was any
higher nobody would ever believe it. I don't think most people are aware of how
expensive field failures can be. That, in turn, impacts their perception of the value that
is added by testing.
CAI: What methods and metrics do you think are important for measuring the
effectiveness of your testing efforts?
REX BLACK: First of all, you can look at your defect detection percentages for high
priority bugs versus all bugs. If your testing is properly focused, you should find a
higher percentage for high priority bugs than for all bugs. You want to be finding more
of the scary stuff than the trivial stuff.
You also want the test management and project management team, in general, to be
looking at this issue of testing ROI. How much money is being saved, and how can we
save even more money? This question is going to drive upstream improvements that
are going to make the test execution period shorter and cheaper. And thatís virtuous
from two perspectives. First of all, the cost of testing is generally at its highest during
the test execution process. Secondly, anything we can do to shorten the test execution
process will reduce the overall testing process and therefore, the length of the project
itself. Naturally, test execution tends to be on the critical path for release, so
shortening test execution also accelerates releases.
I think you should also examine the metric of confidence. You should be continually
trying to increase management confidence. That means figuring out how to report
results so that management better understands them.
Other things that I think are worth looking at include the overhead and risks associated
with the deployment of test releases, as well as the bug reporting process itself.
Regarding the latter, you should be looking at ways to reduce the percentage of bugs
that are rejected due to poor writing or insufficient information or improper
classification. In short, try to make the defect cycle of "find, fix and confirm" as tight
as possible.
In the end, there are quite a few metrics that can be combined in a sort of dashboard
view for gaining visibility into: 1) how your testing efforts are doing; and 2) where they
can be improved.
CAI: Do you have any advice for organizations that are trying to integrate
testing throughout their entire software development life cycle?
REX BLACK: You really have to involve senior testers in the prevention of bugs, up
front and right from the beginning. I think it's a very common mistake to either get
testers involved too late in the game, when there's very little that can be done other
than grinding out as many bugs as possible, or to put too few testers on the project.
Any attempt to integrate testing into the full development lifecycle has to be taken very
seriously. It cannot just be a token attempt.
CAI: Developing software test conditions is critical when reviewing use cases
or detailed functional requirements. What techniques do you recommend for
testers when receiving such artifacts, and what do you recommend they
should do if the artifacts are substandard?
REX BLACK: I see this as a two-step process.
In the first step, you must do some high-level initial work to determine: 1) what should
be tested; 2) in what sequence the testing should be conducted; and 3) how much
testing should be done. Typically, I use some form of quality risk analysis to make
these determinations. I am going to be looking for specific kinds of problems that
could occur and trying to assign risk levels to each so that appropriate sequencing and
resource allocation decisions can be made.
The second step involves the design of specific test cases to cover these various risks
(to the degree appropriate, based on the level of risk). I've actually written a book on
test design called Effective and Efficient Software Testing, which we are currently in the
process of publishing. In the book, I identify a number of software test design
techniques. These techniques have been around for quite a while and are well
established – techniques like equivalence partitioning, boundary value analysis, decision
tables, transition diagrams and domain analysis. They are techniques that can be used
to discover specific conditions for testing and for the mitigation of particular risk areas.
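To make two of these techniques concrete, here is a minimal sketch, assuming a hypothetical function under test with fee bands by parcel weight (the function, its bands, and the chosen test values are all mine, not from the book):

```python
def shipping_fee(weight_kg):
    """Hypothetical function under test: fee bands by parcel weight."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 9
    return 20

# Equivalence partitioning picks one representative value per band;
# boundary value analysis adds values at and just beyond each band edge.
for w in [0.5, 1, 1.01, 5, 10, 10.01, 50]:
    print(w, shipping_fee(w))
```

The partitions keep the test count small; the boundary values catch the classic off-by-one mistakes (for example, a developer writing `< 10` where `<= 10` was intended).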
One of the reasons that I like to use risk analysis as the first step in this approach is
that so often we find ourselves working on projects where the requirements and design
specifications are either poor or missing entirely. Even so, it's still quite possible to do
a risk analysis in these situations because the risk analysis process is as much about
connecting to and understanding project stakeholders as it is about using documents
(to essentially determine the same thing). Nevertheless, if you've got a document, the
risk analysis is going to be more complete and the test design more efficient. I
generally state, as a rule of thumb, that if you have to design tests using poor
requirements specifications you will be adding an approximate 20% inefficiency into
your software testing process.
CAI: What methods do you recommend for estimating software testing effort?
REX BLACK: I am not a big fan of tester-to-developer ratios as a method for test
estimation. I think they are unreliable. In certain very stable circumstances involving
small, incremental changes within maintenance releases I have seen this approach
work. In general, however, the approach I would recommend for the estimation of
software testing effort is the creation of a work breakdown structure developed by the
test team as a whole, using something like Microsoft Project or even just a white board
with dates on it. You can then measure your work breakdown structure against
historical metrics, preferably metrics derived from similar, past projects conducted
within the same organization along with some industry averages. At that point, you
will be positioned to start negotiating with project management regarding what can and
cannot be done.
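The work-breakdown-structure approach described above can be sketched very simply. The task list, effort figures, and the 30% historical ratio below are all hypothetical, purely to show the shape of the cross-check against historical metrics:

```python
# Hypothetical work-breakdown structure: task -> estimated person-days,
# built bottom-up by the test team.
wbs = {
    "quality risk analysis": 3,
    "test design": 10,
    "test environment setup": 4,
    "test execution (3 cycles)": 15,
    "results reporting": 3,
}
total = sum(wbs.values())
print(f"Bottom-up estimate: {total} person-days")

# Sanity-check against a historical ratio from similar past projects in
# the same organization, e.g. test effort observed at ~30% of project effort.
historical_project_days = 120
expected = 0.30 * historical_project_days
print(f"Historical benchmark: {expected:.0f} person-days")
```

If the bottom-up number and the historical benchmark are far apart, that gap is exactly what gets negotiated with project management.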