<< How Different System Configurations Affect Performance | Comparing Results of Multiple Runs >>

Analyzing Performance Results
The following table summarizes a sample configuration test for 100 virtual testers:
TestManager generates considerable data about your tests, and at first the sheer volume of data might be overwhelming. However, if you planned your tests carefully, you should be reasonably certain which data is important to you.
First, check that your data is statistically valid. To do this, run a Performance report
and a Response vs. Time report on your data.
Note: At the end of a successful suite run, TestManager runs the Performance and Response vs. Time reports automatically.
The Performance report includes two columns: Mean and Standard Deviation. If the
mean is less than three times the standard deviation, your data might be too dispersed
for meaningful results.
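The same rule of thumb is easy to apply to exported timing data. A minimal sketch in Python (the function name and sample values are hypothetical; TestManager computes these columns for you):

```python
import statistics

def dispersion_check(response_times):
    """Return True when the sample passes the rule of thumb above:
    the mean should be at least three times the standard deviation."""
    mean = statistics.mean(response_times)
    std = statistics.stdev(response_times)  # sample standard deviation
    return mean >= 3 * std

# Tightly clustered response times (seconds) pass the check.
print(dispersion_check([1.2, 1.4, 1.1, 1.3, 1.6, 1.2]))  # True
```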
The Response vs. Time graph shows response time versus the elapsed time of the run. The data should reach steady-state behavior rather than getting progressively better or worse. If the response-time trend improves steadily, perhaps you included logon time in your results rather than measuring a stable workload, or the amount of data in your database is smaller than realistic, so that all accesses are satisfied from cache.
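One crude way to spot a non-steady run in exported timing data is to compare the average response time in the first and second halves of the run. This is a hypothetical sketch, not how TestManager's reports work internally:

```python
import statistics

def run_trend(response_times, tolerance=0.1):
    """Classify a run as 'improving', 'degrading', or 'steady' by comparing
    mean response time in the first half against the second half.
    A real analysis would discard warm-up samples and use a trend test."""
    half = len(response_times) // 2
    first = statistics.mean(response_times[:half])
    second = statistics.mean(response_times[half:])
    if second < first * (1 - tolerance):
        return "improving"   # e.g. logon time or cache warm-up skewed the start
    if second > first * (1 + tolerance):
        return "degrading"
    return "steady"

print(run_trend([2.0, 2.1, 1.9, 1.0, 1.1, 0.9]))  # improving
```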
Test Scripts
-   A test script to initialize the database.
-   A test script to log in virtual testers.
-   A test script for each virtual tester task:
    -   adding records
    -   deleting records
    -   querying the database
    -   running payroll reports

Suite
-   A fixed user group with one virtual tester. This virtual tester logs in, initializes the database, and sets an event indicating that the database is initialized.
-   A fixed user group with 100 virtual testers. Each virtual tester logs in and waits until the event is set. Each virtual tester then executes many iterations of the scenario.
-   One scenario that contains:
    -   a selector to randomly select a test script
    -   a test script for each virtual tester task

Reports
-   A test log to show whether all virtual testers in the suite successfully ran to completion.
-   A Command Status report to show whether the server returned expected responses, even under stress.
-   Performance reports for each suite run on each configuration.
-   A Compare Performance report comparing the output of each Performance report.
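The handshake between the one-tester initialization group and the 100-tester group can be illustrated with ordinary threads. This is a sketch in Python rather than TestManager's scripting language; the names are invented, and real suites express this coordination with suite events, not threads:

```python
import threading

db_ready = threading.Event()  # stands in for the "database is initialized" event

def init_tester():
    # Log in and initialize the database (omitted), then signal readiness.
    db_ready.set()

def virtual_tester(results):
    db_ready.wait()           # each tester blocks until initialization is done
    results.append("scenario complete")

results = []
testers = [threading.Thread(target=virtual_tester, args=(results,)) for _ in range(100)]
for t in testers:
    t.start()
threading.Thread(target=init_tester).start()
for t in testers:
    t.join()
print(len(results))  # 100
```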