

If any user-defined commands--such as TSS commands, Visual Basic method calls, or
Java commands--are executed, the Command Usage report includes a section for
those commands.
Summary Statistics
- Duration of Run -- Elapsed time from the beginning to the end of the run. The beginning of the run is the time of the first emulation activity among all virtual testers and test scripts, not just the ones you have filtered for this report. Similarly, the end of the run is the time of the last emulation activity among all virtual testers and test scripts. The elapsed time does not include the process time.
- Passed Commands, Failed Commands, Passed Responses, Failed Responses -- Identical to their counterparts in Cumulative Statistics on page 355.
- Total Throughput -- Four measurements of total throughput are provided: passed command throughput, failed command throughput, passed response throughput, and failed response throughput. The total throughput of passed commands is obtained by dividing the number of passed commands by the run's duration, with the appropriate conversion of seconds into minutes. Thus, it represents the total passed command throughput by all selected virtual testers at the applied workload, as opposed to the throughput of the average virtual tester. The total failed command throughput and the total passed and failed response throughputs are calculated analogously (see the calculation sketch after this list).
  These throughput measurements, as well as the test script throughput, depend upon the virtual tester and test script selections. The summary throughput measurements are most meaningful when you select all virtual testers and test scripts. For example, if you select only three virtual testers from a ten-virtual-tester run, the throughput does not represent the server throughput at a ten-virtual-tester workload, but rather the throughput of the three selected virtual testers as part of a ten-virtual-tester workload.
- Number of Virtual Testers -- Number of virtual testers in the suite run.
- Number of Timers -- Number of timers in the suite run.
- Number of Completed Scripts -- Test scripts are considered complete if all associated activities are completed before the run ends.
- Number of Uncompleted Scripts -- Number of test scripts that have not finished executing when a run is halted. Test scripts can be incomplete if you halt the run or set the suite to terminate after a certain number of virtual testers or test scripts.
- Average Number of Scripts Completed per Virtual Tester -- Calculated by dividing the number of completed test scripts by the number of virtual testers.
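The following sketch shows the arithmetic behind the throughput and per-tester figures described above. It is written in Python with hypothetical names (SummaryInputs, throughput_per_minute); it is not part of the product, and it assumes the run duration is available in seconds while throughput is reported per minute.

    # Illustrative sketch of the Summary Statistics calculations described above.
    # All names are hypothetical and do not correspond to an actual product API.
    from dataclasses import dataclass

    @dataclass
    class SummaryInputs:
        duration_seconds: float   # elapsed time from first to last emulation activity
        passed_commands: int
        failed_commands: int
        passed_responses: int
        failed_responses: int
        completed_scripts: int
        virtual_testers: int

    def throughput_per_minute(count: int, duration_seconds: float) -> float:
        # Total throughput: a count divided by the run duration, converted to minutes.
        return count / (duration_seconds / 60.0)

    def summarize(run: SummaryInputs) -> dict:
        return {
            "passed command throughput": throughput_per_minute(run.passed_commands, run.duration_seconds),
            "failed command throughput": throughput_per_minute(run.failed_commands, run.duration_seconds),
            "passed response throughput": throughput_per_minute(run.passed_responses, run.duration_seconds),
            "failed response throughput": throughput_per_minute(run.failed_responses, run.duration_seconds),
            # Average number of scripts completed per virtual tester
            "scripts completed per virtual tester": run.completed_scripts / run.virtual_testers,
        }

    # Example: a 5-minute (300-second) run with 10 virtual testers.
    print(summarize(SummaryInputs(300.0, 1500, 30, 1450, 25, 40, 10)))
    # Passed command throughput = 1500 / 5 = 300 commands per minute;
    # scripts completed per virtual tester = 40 / 10 = 4.0

If only a subset of virtual testers is selected, the same arithmetic applies to that subset's counts, which is why the result then describes only the selected testers' contribution to the workload rather than the server throughput for the full run.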