Chapter 9 - Performance Testing Concepts

Implementing Performance Tests
Once you have chosen the pass and fail criteria, hardware and software requirements,
and workload model, you are ready to create test scripts and set up the tests. Some
issues to consider during this phase of the process are:
- The applications to test
It is not cost-effective to test all of your applications.
Spending time and resources testing an application that has little effect on overall
performance takes away from the time spent testing critical applications. You must
consider this balance when planning and designing tests.
In general, you should identify the 20% of the applications that generate 80% of the
workload on your system. For example, you might exclude an application that updates
a database only at the end of the year.
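The 80/20 selection above can be sketched in code. This is a hypothetical illustration, assuming you have per-application transaction counts (the sample numbers below are invented, not from the text): rank applications by workload and keep the smallest set that covers roughly 80% of the total.

```python
# Hypothetical sketch: pick the smallest set of applications that accounts
# for ~80% of the total workload, given per-application transaction counts.
def select_top_workload(app_counts, threshold=0.80):
    total = sum(app_counts.values())
    selected, covered = [], 0
    # Walk applications from busiest to least busy.
    for app, count in sorted(app_counts.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(app)
        covered += count
        if covered / total >= threshold:
            break
    return selected

# Illustrative counts only: the year-end update barely registers,
# so it drops out of the test plan.
apps = {"order_entry": 5200, "inventory_query": 3100,
        "billing": 900, "year_end_update": 12}
print(select_top_workload(apps))  # → ['order_entry', 'inventory_query']
```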
- The database on which to run the test

Decide whether you want to run the test on
the production database or a test database. Running tests on production systems in
current use may yield incorrect results because the effect of the regular user
workload is not included in the workload model.
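One common way to keep this decision explicit is to make the test database the default target and require a deliberate choice to point at production. A minimal sketch, with hypothetical host and database names:

```python
# Hypothetical sketch: keep test and production connection settings apart,
# defaulting to the test copy so a run never hits the live system by accident.
DATABASES = {
    "test": {"host": "db-test.example.com", "name": "app_test"},
    "production": {"host": "db-prod.example.com", "name": "app"},
}

def target_database(environment="test"):
    # Default to the test database; measurements taken against production
    # would be skewed by the regular user workload.
    return DATABASES[environment]

print(target_database()["host"])  # → db-test.example.com
```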
- The termination conditions

If one virtual tester fails, should the test stop or should it keep running?
If you are implementing a large number of virtual testers and a few fail, generally
the test can continue. However, if a virtual tester that performs a fundamental task
(such as setting up the database) fails, the test should stop.
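That termination policy can be expressed as a small decision function. This is a sketch under assumed names (the task names and the 5% tolerance are illustrative, not from the text): abort immediately when a fundamental task fails, otherwise tolerate scattered failures up to a limit.

```python
# Hypothetical sketch: decide whether virtual-tester failures should stop
# the whole test run.
FUNDAMENTAL_TASKS = {"setup_database"}   # tasks whose failure must stop the run

def should_abort(failed_testers, total_testers, max_failure_rate=0.05):
    for tester in failed_testers:
        if tester["task"] in FUNDAMENTAL_TASKS:
            return True   # a fundamental task failed: stop the test
    # Otherwise keep running unless too many ordinary testers have failed.
    return len(failed_testers) / total_testers > max_failure_rate

failures = [{"id": 17, "task": "browse_catalog"}]
print(should_abort(failures, total_testers=500))  # → False (1 of 500 is tolerable)
```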
- The stable workload

Should the test wait until all virtual testers are connected, or should the test begin running immediately?
If you are trying to measure the response time for virtual testers, you probably
should wait until all testers are connected before the actual testing begins.
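The "wait until everyone is connected" behavior is essentially a synchronization barrier. A minimal sketch using Python threads as stand-in virtual testers (the structure is illustrative; a real load tool would connect to the system under test where the comment indicates):

```python
import threading

# Hypothetical sketch: each virtual tester connects, then waits at a barrier
# so the measured phase only starts once every tester is online.
NUM_TESTERS = 5
all_connected = threading.Barrier(NUM_TESTERS)
results = []

def virtual_tester(tester_id):
    # ... connect to the system under test here ...
    all_connected.wait()        # block until every tester has connected
    results.append(tester_id)   # measured work begins for everyone at once

threads = [threading.Thread(target=virtual_tester, args=(i,))
           for i in range(NUM_TESTERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # → 5: every tester ran the measured phase
```

Starting the measured phase only after the barrier releases keeps early connectors from inflating the response-time figures while later testers are still logging in.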