Other issues that must be considered when conducting a usability study include ethical
considerations. Since you are dealing with human subjects in what is essentially a scientific study,
you need to consider carefully how they are treated. The host must take pains to put them at ease,
both to help them remain objective and to eliminate any stress the artificial environment of a
usability study might create. You might not realise how traumatic using your software can be for
the average user!
Separating them from the observers is a good idea too, since no one performs well with a crowd
looking over their shoulder. This can be done with a one-way mirror or by putting the users in
another room, watched over a video monitor. You should also consider their legal rights and make
sure you have their permission to use any materials gathered during the study in further
presentations or reports. Finally, confidentiality is usually important in these situations and it is
common to ask individuals to sign a Non-Disclosure Agreement (NDA).
Performance Testing
One important aspect of modern software systems is their performance in multi-user or multi-tier
environments. To test the performance of the software you need to simulate its deployment
environment and the traffic it will receive when it is in use; this can be difficult.
The most obvious way of accurately simulating the deployment environment is to simply use the
live environment to test the system. This can be costly and potentially risky but it provides the best
possible confidence in the system. It may be impossible in situations where the deployment system
is constantly in use or is mission critical to other business applications.
Where it is possible, however, live system testing provides a level of confidence that other
approaches cannot match. Testing on the live system takes into account all of the idiosyncrasies of
such a system without the need to replicate them on a test system.
Also common is the use of capture-and-playback tools (automated testing). A capture tool is used
to record the actions of a typical user performing a typical task on the system. A playback tool is
then used to reproduce the actions of that user multiple times simultaneously. The multi-user
playback provides an accurate simulation of the stress the real-world system will be placed under.
Capture-and-playback tools must be used with caution, however. Simply repeating the exact same
series of actions on the system may not constitute a proper test. Significant amounts of
randomisation and variation should be introduced to correctly simulate real-world use.
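As an illustration, the sketch below replays a recorded sequence of requests for several simulated
users at once, with randomised think times and occasionally skipped steps. The URLs, timings and
user count are hypothetical placeholders; a real test would replay scripts captured from your own
system.

    import random
    import threading
    import time
    import urllib.request

    # Hypothetical recorded "script": the URLs a typical user visited, in order.
    RECORDED_ACTIONS = [
        "http://test-server.example/login",
        "http://test-server.example/search?q=widgets",
        "http://test-server.example/checkout",
    ]

    def play_back(user_id, results):
        """Replay the recorded actions for one simulated user, with variation."""
        # Randomly drop the odd step and vary the pauses so each simulated user
        # behaves slightly differently rather than repeating an identical sequence.
        actions = [a for a in RECORDED_ACTIONS if random.random() > 0.1]
        for url in actions:
            time.sleep(random.uniform(0.5, 3.0))    # randomised "think time"
            start = time.perf_counter()
            try:
                urllib.request.urlopen(url, timeout=10).read()
                results.append((user_id, url, time.perf_counter() - start))
            except OSError:
                results.append((user_id, url, None))    # record the failure

    def run_load_test(concurrent_users=50):
        """Play the script back for many users simultaneously and report timings."""
        results = []
        threads = [threading.Thread(target=play_back, args=(i, results))
                   for i in range(concurrent_users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        timings = [elapsed for _, _, elapsed in results if elapsed is not None]
        if timings:
            print(f"{len(timings)} requests, mean response "
                  f"{sum(timings) / len(timings):.2f}s")

    if __name__ == "__main__":
        run_load_test()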
You also need to understand the technical architecture of the system. If you don't stress the weak
points, the performance bottlenecks, then your tests will prove nothing. You need to design
targeted tests that find the performance issues.
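For example, a targeted test might time a suspected bottleneck in isolation rather than only the
end-to-end transaction. In the sketch below the two components are hypothetical stubs; in practice
each would call into the actual system under test (a database query, a network round trip, and so
on).

    import time

    def time_component(component, repetitions=100):
        """Average the time taken by one suspected bottleneck, called in isolation."""
        start = time.perf_counter()
        for _ in range(repetitions):
            component()
        return (time.perf_counter() - start) / repetitions

    # Hypothetical suspect components, stubbed here with sleeps.
    def run_search_query():
        time.sleep(0.010)

    def render_results_page():
        time.sleep(0.002)

    for name, component in [("search query", run_search_query),
                            ("render results page", render_results_page)]:
        print(f"{name}: {time_component(component) * 1000:.1f} ms average")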
Having a baseline is also important.
Without knowledge of the 'pre-change' performance of the software it is impossible to assess the
impact of any changes on performance. "The system can only handle 100 transactions an hour!"
comes the cry. But if it only needs to handle 50 transactions an hour, is this actually an issue?
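A baseline turns a raw number into a judgement. The comparison below uses hypothetical figures
to illustrate the two questions that matter: how much has performance changed against the
baseline, and does it still meet the actual requirement?

    # Hypothetical figures: throughput measured before the change (the baseline),
    # throughput measured after it, and what the business actually requires.
    baseline_tph = 120      # transactions per hour before the change
    measured_tph = 100      # transactions per hour after the change
    required_tph = 50       # transactions per hour actually needed

    drop = (baseline_tph - measured_tph) / baseline_tph
    print(f"Throughput is down {drop:.0%} against the baseline")

    if measured_tph < required_tph:
        print("Performance problem: the requirement is no longer met.")
    else:
        print("Still within requirements; the drop may be acceptable.")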
Performance testing is a tricky, technical business. The issues that cause performance bottlenecks
are often obscure and buried in the code of a database or network. Digging them out requires
concerted effort and targeted, disciplined analysis of the software.