After you are satisfied that your sample is valid, start analyzing the results of your
tests. When you analyze results, use a multilevel approach. For example, if you
were driving from one city to another, you would use a map of the United States to
plan an overall route and a more detailed city map to get to your specific destination.
Similarly, when you analyze your results, start at a macro level and then move to
levels of greater detail.
The following sections summarize the different levels of detail that you can use to
analyze the results of your tests. For more information about performance testing
reports, see Reporting Performance Testing Results on page 331.
Comparing Results of Multiple Runs
The first level of analysis involves evaluating the graphical summaries of results for
individual suite runs and then comparing the results across multiple runs. For
example, examine the distribution of response times for individual virtual testers or
transactions during a single suite run. Then compare the mean response times across
multiple runs with different numbers of virtual testers.
This first-level analysis lets you know whether your performance goals are generally
met. It helps you identify trends in the data and can highlight where performance
problems occur. For example, performance might degrade significantly at 250 virtual
testers.
For this type of analysis, run the Performance and Compare Performance reports.
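
To make this first-level comparison concrete, here is a minimal Python sketch
(Python is not part of the product; the response-time values and tester counts are
hypothetical stand-ins for exported suite results) that compares mean response
times across runs with different numbers of virtual testers:

from statistics import mean

# Hypothetical response times (seconds), keyed by the number of
# virtual testers in each suite run.
runs = {
    100: [0.82, 0.91, 0.88, 1.02, 0.95],
    250: [1.60, 2.10, 1.95, 2.40, 2.05],
    500: [4.80, 5.20, 6.10, 5.75, 5.40],
}

for testers, times in sorted(runs.items()):
    print(f"{testers:>4} virtual testers: mean response {mean(times):.2f} s")

A sharp jump in the mean between two tester counts flags the load level at which
second-level analysis is worthwhile.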
Comparing Specific Requests and Responses
The second level of analysis involves examining summary statistical and actual data
values for specific virtual tester requests and system responses. Summary statistics
include standard deviations and percentile distributions for response times, which
indicate how system response times vary across individual virtual testers.
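
To illustrate, the following sketch computes these summary statistics for one run.
The sample response times are hypothetical, and statistics.quantiles requires
Python 3.8 or later:

from statistics import mean, stdev, quantiles

# Hypothetical response times (seconds) for a single suite run.
response_times = [0.82, 0.91, 0.88, 1.02, 0.95, 1.40, 0.79, 3.10, 0.85, 0.99]

# quantiles(..., n=100) returns 99 cut points; pct[k-1] approximates
# the kth percentile.
pct = quantiles(response_times, n=100)

print(f"mean    {mean(response_times):.2f} s")
print(f"std dev {stdev(response_times):.2f} s")
print(f"p50     {pct[49]:.2f} s")
print(f"p95     {pct[94]:.2f} s")

A large standard deviation, or a 95th percentile far above the mean, indicates that
a few virtual testers are experiencing much slower responses than the rest.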
For example, if you are testing an SQL database, you could trace specific SQL requests
and corresponding responses to analyze what is happening and the potential causes
of performance degradation.
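
As an illustration of this kind of tracing (not a feature of the product), the
sketch below times individual SQL requests against a local SQLite database, which
stands in for the system under test; the table and queries are hypothetical:

import sqlite3
import time

# An in-memory SQLite database stands in for the database under test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records (payload) VALUES (?)",
                 [(f"row {i}",) for i in range(10000)])

queries = [
    "SELECT COUNT(*) FROM records",
    "SELECT * FROM records WHERE payload LIKE '%999%'",  # unindexed scan
]

for sql in queries:
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    elapsed = time.perf_counter() - start
    print(f"{elapsed * 1000:8.2f} ms  {len(rows):5d} rows  {sql}")

Pairing each request with its elapsed time makes it easy to see which statements
account for the degradation.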
For second-level analysis, you could:

1 Identify a stable measurement interval by running the Response vs. Time report
and obtaining two time stamps. The first time stamp occurs when the virtual
testers exit from the startup tasks; this is the time stamp of the last virtual tester
who starts to do "real" work: adding records, deleting records, and so on. The
second time stamp occurs when the first virtual tester logs off the system. You
have now identified a stable measurement interval, as the sketch below illustrates.
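
The interval computation itself is simple, as the following sketch shows; the
per-tester event times are hypothetical:

# (tester id, time startup tasks finished, time logged off),
# in seconds from the start of the suite run.
testers = [
    ("vt01", 12.0, 580.0),
    ("vt02", 15.5, 610.0),
    ("vt03", 18.2, 595.0),
]

# The interval opens when the LAST tester starts "real" work and
# closes when the FIRST tester logs off.
interval_start = max(startup_done for _, startup_done, _ in testers)
interval_end = min(logoff for _, _, logoff in testers)

print(f"stable measurement interval: {interval_start:.1f} s to {interval_end:.1f} s")

Only measurements that fall inside this window reflect steady-state load, with all
virtual testers active and none yet logged off.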