Performance report

The performance report summarizes the validity of the run and the data that is most significant to the run. The report also shows the response trend of the 10 slowest pages in the test and graphs the response trend of each page for a specified interval.

Overall page

The Overall page provides the following information:

  • A progress indicator that shows the state of the run.
  • Pie charts that show the overall verification points that passed and failed for the test run, if verification points were set. For a schedule run, the charts display the overall requirements that passed and failed.

The pie charts report the following items:

  • Page VPs: Displays the verdict of the page title verification points if they were set. The verdict can be Passed, Failed, Inconclusive, or Error.
  • Page Element VPs: Displays the verdict of the response code or response size verification points if they were set. The verdict can be Passed, Failed, Inconclusive, or Error.
  • Page Status Codes: Displays the success (Passed) and failure (Failed) rate for the entire run.

    If a primary request includes verification points, the Page Status Code Successes value indicates that the verification point for the response code passed.

    If a primary request has no verification points, the Page Status Code Successes value indicates that the server received the primary request and returned a response with one of the following status codes:
    • A status code in the 200 or 300 category
    • A status code in the 400 or 500 category that is an expected response code
  • Page Element Status Codes: Displays the success (Passed) and failure (Failed) rate for the entire run.

    If a request includes verification points, the Page Element Successes value indicates that the response code verification point passed for that request.

    If a request has no verification points, the Page Element Successes value indicates that the server received the request and returned a response with one of the following status codes:
    • A status code in the 200 or 300 category
    • A status code in the 400 or 500 category that is an expected response code
  • Page Health: Displays the total health (Healthy or Unhealthy) of the pages, transactions, and loops for the test or schedule run.

The charts support the following interactions:
  • If you click any individual chart, you go to the corresponding report, where you can analyze the status in detail.
  • If you click any legend entry (for example, Passed), the chart is updated to hide that verdict and show only the remaining verdicts of the test or schedule run. For example, the Page Status Codes chart has the legend entries Passed and Failed; if you click Passed, the chart is updated to show only the failures during the test or schedule run.
  • Similarly, if you double-click any legend entry, the chart is updated to show only the selected verdict and to remove all other verdicts from the chart. In this way, you can focus on the one counter that you want to investigate in detail.

Summary page

The Summary page summarizes the most important data about the test run, so that you can analyze the final or intermediate results of a test at a glance.

The Summary page displays the following Run Summary information:

  • The name of the test.
  • The number of users that are active and the number of users that have completed testing. These numbers are updated during the run.
  • The elapsed time. This is the run duration, which is displayed in hours, minutes, and seconds.
  • The status of the run. This can be Initializing Computers, Adding Users, Running, Transferring data to test log, Stopped, or Complete.
  • The computer for which results are displayed, for example, Displaying results for computer: All Hosts. To see summary results for an individual computer, click the computer name in the Performance Test Runs view.

The Summary page displays the following Page Summary information:

  • The total number of page attempts and hits. A page attempt means that a primary request was sent; it does not include requests within the page. A hit means that the server received the primary request and returned any complete response.
  • The average response time for all pages. Response time is the sum of response times for all page elements (including the connect time and inter-request delays). Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times that contain requests that time out are always discarded. These rules are illustrated in the sketch after this list.
  • The standard deviation of the average response time for all pages.
  • The maximum response time for all pages.
  • The minimum response time for all pages.
  • A summary of the results for page verification points, if these verification points were set.
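The response time rules above can be made concrete with a small sketch. This is a hedged illustration with invented data structures and a hypothetical page_response_time function, not the product's internal implementation:

```python
# Sketch of the page response time rules described above (hypothetical API).

def page_response_time(elements, primary_index=0):
    """Return the page response time in ms, or None when the page is discarded.

    Each element is a dict with time_ms, status, recorded_status, timed_out.
    """
    for i, e in enumerate(elements):
        if e["timed_out"]:
            return None                 # pages with timed-out requests are always discarded
        if 400 <= e["status"] < 600:
            expected = e["status"] == e["recorded_status"]
            if i == primary_index or not expected:
                return None             # omit the page unless the error was recorded
                                        # and is not on the primary request
    # Response time is the sum of all element response times
    # (connect time and inter-request delays are part of each element's time).
    return sum(e["time_ms"] for e in elements)

# Primary request succeeded; one image returned the same 404 that was recorded:
page = [
    {"time_ms": 120, "status": 200, "recorded_status": 200, "timed_out": False},
    {"time_ms": 35,  "status": 404, "recorded_status": 404, "timed_out": False},
]
print(page_response_time(page))  # 155
```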

The Summary page displays the following Page Element Summary information:

  • The total number of page element attempts and hits. A page element attempt means that a request was sent. A hit means that the server received the request and returned any complete response.
  • The total number of page elements where no request was sent to the server because the client determined that the page elements were fresh in the local cache.
  • The average response time for all page elements. Response time is the time between the first request character sent and the last response character received. Response times for HTTP requests that time out or that return an unexpected status code (the recorded and played back codes do not match) in the range of 4XX (client errors) to 5XX (server errors) are discarded from the reported values.
  • The standard deviation of the average response time. The standard deviation tells you how tightly the data is grouped about the mean. For example, System A and System B both have an average response time of 12 ms. However, this does not mean that the response times are similar. System A might have response times of 11, 12, 13, and 12 ms. System B might have response times of 1, 20, 25, and 2 ms. Although the mean time is the same, the standard deviation of System B is greater and the response times are more varied. This comparison is computed in the sketch after this list.
  • The percentage of verification points that passed.
  • A summary of the results for page element verification points, if these verification points were set.
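For reference, here is the System A and System B comparison computed with Python's statistics module. This is an illustration only; the report does not necessarily use the population variant of the standard deviation shown here:

```python
# The System A / System B example above, computed with the standard library.
from statistics import mean, pstdev

system_a = [11, 12, 13, 12]   # response times in ms
system_b = [1, 20, 25, 2]

print(mean(system_a), round(pstdev(system_a), 2))  # 12  0.71  -> tightly grouped
print(mean(system_b), round(pstdev(system_b), 2))  # 12  10.65 -> widely spread
```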

If you have set transactions in your test, the Summary page displays the following Transaction information:

  • The minimum, maximum, and average response time for all transactions. Response time is the actual time spent within the transaction container.
  • The standard deviation of the average response time, which indicates how tightly the response times are grouped about the mean (see the System A and System B example in the Page Element Summary information).
  • The total number of transactions that were started and completed.

Page Performance page

The Page Performance page shows the average response time of the 10 slowest pages in the test as the test progresses. With this information, you can evaluate system response during and after the test.

The bar chart shows the average response time of the 10 slowest pages. Each bar represents a page that you visited during recording. As you run the test, the bar chart changes, because the 10 slowest pages are updated dynamically during the run. For example, the Logon page might be one of the 10 slowest pages at the start of the run, but then, as the test progresses, the Shopping Cart page might replace it as one of the 10 slowest. After the run, the page shows the 10 slowest pages for the entire run.

The table under the bar chart provides the following additional information:
  • The minimum response time for each page in the run. Response time is the time between the first request character sent and the last response character received. Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times that contain requests that time out are always discarded.
  • The average response time for each page in the run. This matches the information in the bar chart.
  • The standard deviation of the average response time, which indicates how tightly the response times are grouped about the mean (see the System A and System B example in the Page Element Summary information on the Summary page).
  • The maximum response time for each page in the run.
  • The number of attempts per second to access each page. An attempt means that a primary request was sent; it does not include requests within the page.
  • The total number of attempts to access the page.
To display the 10 slowest page element response times, right-click a page and click Display Page Element Responses.

Response vs. Time Summary page

The Response vs. Time Summary page shows the average response trend as graphed for a specified interval. It contains two line graphs with corresponding summary tables. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages.
  • The Page Response vs. Time graph shows the average response time for all pages during the run. Each point on the graph is an average of what has occurred during that interval. The table under the graph lists the total average response time for all pages in the run and the standard deviation of the average response time.
  • The Page Element Response vs. Time graph shows the average response time for all page elements during the run. Each point on the graph is an average of what has occurred during that interval. The table under the graph lists the total average response time for all page elements in the run and the standard deviation of the average response time. The table also lists the total number of page elements for which no request was sent to the server because the client determined that the page elements were fresh in the local cache. You set the Statistics sample interval value in the schedule, as a schedule property. The per-interval averaging is sketched after this list.
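A minimal sketch of that per-interval averaging follows, assuming a hypothetical 5-second statistics sample interval and invented (timestamp, response time) samples:

```python
# Bucket response times by statistics sample interval, then average per bucket.
from collections import defaultdict

INTERVAL_S = 5
samples = [(0.4, 110), (1.2, 130), (6.0, 180), (7.5, 160), (11.1, 90)]

buckets = defaultdict(list)
for ts, ms in samples:
    buckets[int(ts // INTERVAL_S)].append(ms)

for b in sorted(buckets):
    avg = sum(buckets[b]) / len(buckets[b])
    print(f"{b * INTERVAL_S}-{(b + 1) * INTERVAL_S}s: {avg:.0f} ms")
# 0-5s: 120 ms   5-10s: 170 ms   10-15s: 90 ms
```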

Response vs. Time Detail page

The Response vs. Time Detail page shows the response trend as graphed for the sample intervals. Each page is represented by a separate line.

The Average Page Response Time graph shows the average response of each page for each sample interval. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The table after the graph provides the following additional information:

  • The minimum page response time for the run. Response time is the time between the first character sent of the primary request and the last response character received. Response time counters omit page response times for pages that contain requests with status codes in the range of 4XX (client errors) to 5XX (server errors). The only exception is when the failure (for example, a 404) is recorded and returned, and the request is not the primary request for the page. Page response times that contain requests that time out are always discarded.
  • The average page response time for the run. This is similar to the graph, but the information in the table includes the entire run.
  • The maximum page response time for the run.
  • The standard deviation of the average response time, which indicates how tightly the response times are grouped about the mean (see the System A and System B example in the Page Element Summary information on the Summary page).
  • The rate of page attempts per interval for the most recent statistics sample interval. A page attempt means that the primary request was sent; it does not include requests within the page. You set the Statistics sample interval value in the schedule, as a schedule property.
  • The number of page attempts per interval.

Page Throughput page

The Page Throughput page provides an overview of how frequently pages are requested and received per sample interval.
  • The Page Hit Rate graph shows the page attempt rate and page hit rate per sample interval for all pages.

    A page attempt means that the primary request was sent; it does not include requests within the page.

    A hit means that the server received the primary request and returned any complete response.

    When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The summary table after the graph lists the total hit rates and counts for each page in the run.
  • The User Load graph shows active users and users that have completed testing, over the course of a run. As the run nears completion, the number of active users decreases and the number of completed users increases. The summary table after the graph lists the active and completed users for the most recent sample interval and for the entire run. You set the Statistics sample interval value in the schedule, as a schedule property.
    Note: To set the sample interval value, open the schedule, click the Statistics tab, and then view or modify Statistics sample interval.
If the numbers of page attempts and hits are not close, the server might be having trouble keeping up with the workload.

If you add virtual users during a run and watch these two graphs in tandem, you can monitor the ability of your system to keep up with the workload. If the page hit rate stabilizes even though the active user count continues to climb on a well-tuned system, the average response time naturally increases. Response time increases because the system is running at its maximum effective throughput and is effectively throttling the rate of page hits by slowing down how quickly it responds to requests.
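As a hedged illustration of the attempts-versus-hits comparison, the following sketch checks invented per-interval counters and flags intervals where the hit count falls noticeably behind the attempt count (the 95% threshold is an arbitrary choice for the example):

```python
# Hypothetical per-interval counters: when hits fall behind attempts,
# the server may not be keeping up with the offered load.
intervals = [
    {"attempts": 50,  "hits": 50},
    {"attempts": 80,  "hits": 79},
    {"attempts": 120, "hits": 96},   # hits lagging behind attempts
]

for i, c in enumerate(intervals):
    ratio = c["hits"] / c["attempts"]
    flag = "  <-- investigate" if ratio < 0.95 else ""
    print(f"interval {i}: {c['hits']}/{c['attempts']} = {ratio:.0%}{flag}")
```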

Server Throughput page

The Server Throughput page lists the rate and number of bytes that are transferred per interval and for the entire run. The page also lists the status of the virtual users for each interval and for the entire run.
  • The Byte Transfer Rates graph shows the rate of bytes sent and received per interval for all intervals in the run. When a schedule includes staged loads, colored time-range markers at the top of the graph delineate the stages. The summary table after the graph lists the total number of bytes sent and received for the entire run.
  • The User Load graph shows active users and users that have completed testing, per sample interval, over the course of a run. You set the Statistics sample interval value in the schedule, as a schedule property. As the run nears completion, the number of active users decreases and the number of completed users increases. The summary table after the graph lists the active and completed users for the entire run.
The bytes sent and bytes received throughput rate, which is computed from the client perspective, shows how much data Rational® Performance Tester is pushing through your server. Typically, you analyze this data with other metrics, such as the page throughput and resource monitoring data, to understand how network throughput demand affects server performance.

Server Health Summary page

The Server Health Summary page gives an overall indication of how well the server is responding to the load.
  • The Page Health chart shows the total number of page attempts, page hits, and status code successes for the run. The table under the bar chart lists the same information.

    A page attempt means that a primary request was sent; it does not include requests within the page.

    A hit means that the server received the primary request and returned any complete response.

    A success means that the response code verification point passed for that request. If a primary request has no verification points, the Success value indicates that the server received the primary request and returned a response with one of the following status codes (see the sketch after this list):
    • A status code in the 200 or 300 category
    • A status code in the 400 or 500 category that is an expected response code
  • The Page Element Health chart shows the total number of page element attempts, page element hits, status code successes, and page element redirections for the run. The table under the bar chart lists the same information and the total number of page elements where no request was sent to the server because the client determined that the page elements were fresh in the local cache.
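The success rules above can be sketched as follows. The is_success function and its parameters are invented for the illustration and are not a product API:

```python
# Sketch of the success rules: a response code verification point verdict takes
# precedence; otherwise the returned status code is compared against what was
# recorded during test creation.
def is_success(status, recorded_status, vp_passed=None):
    if vp_passed is not None:          # a response code VP was set on the request
        return vp_passed
    if 200 <= status < 400:            # any 2XX/3XX counts as a success
        return True
    # A 4XX/5XX counts only when the same (expected) code was recorded.
    return 400 <= status < 600 and status == recorded_status

print(is_success(200, 200))   # True
print(is_success(404, 404))   # True: the 404 was recorded, so it is expected
print(is_success(500, 200))   # False: unexpected server error
```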

Server Health Detail page

The Server Health Detail page provides specific details for the 10 pages with the lowest success rate.
  • The bar chart shows 10 pages with the lowest success rate.
  • The summary table under the chart lists, for all pages, the number of attempts, hits, and successes in the run and the attempts per second during the run.

    An attempt means that a primary request was sent; it does not include requests within the page.

    A hit means that the server received the primary request and returned any complete response.

    A success means that the response code verification point passed for that request. If a primary request has no verification points, the Success value indicates that the server received the primary request and returned a response with one of the following status codes:
    • A status code in the 200 or 300 category
    • A status code in the 400 or 500 category that is an expected response code

Caching Details page

The Caching Details page provides specific details on caching behavior during a test run.
  • The Caching Activity graph shows the total number of page element cache attempts, page element cache hits, and page element cache misses for the run. These values correspond to responses from the server, indicating whether the content has been modified. Additionally, the bar chart shows the total number of page elements in the cache that were skipped for the run. That value counts the cache hits for content that was still fresh in the local cache, where communication with the server was not necessary.
  • The Page Element Cache Hit Ratios graph shows the percentage of cache attempts that indicate server-confirmed success and client-confirmed success for the run. Server-confirmed cache hits occur when the server returns a 304 response code. Client-confirmed cache hits occur when the content is still fresh in the local cache and no communication with the server is required.
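As a hedged illustration, the following sketch computes the two ratios from invented counters; treating all cache checks (conditional requests plus skipped requests) as the denominator is an assumption made for the example:

```python
# Hypothetical cache counters from a run. Server-confirmed hits are 304
# Not Modified responses; client-confirmed hits never reach the server at all.
conditional_requests = 400   # cache attempts that went to the server
server_confirmed = 140       # 304 responses among them
client_confirmed = 250       # elements still fresh locally; request skipped

total_checks = conditional_requests + client_confirmed
print(f"server-confirmed hit ratio: {server_confirmed / total_checks:.0%}")  # 22%
print(f"client-confirmed hit ratio: {client_confirmed / total_checks:.0%}")  # 38%
```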

Resources page

The Resources page shows information about all the resource counters that were monitored during the schedule run. The information that the Resources page displays depends on how the Resource Monitoring sources were added, as follows:

  • If you did not add any Resource Monitoring source to the performance schedule, the Resources page displays a message that states that you must set up Resource Monitoring sources to view the resource details.
  • If you added Resource Monitoring sources to the performance schedule, the Resources page displays:
    • The Resource Monitoring sources that were monitored during the schedule run.
    • All resource counters for those Resource Monitoring sources.
    • The Unavailable sources section, which lists the Resource Monitoring sources that were unavailable or unreachable during the schedule run.
      Note: The Unavailable sources section is displayed only if any of the Resource Monitoring sources were unreachable or unavailable during the schedule run.
  • If you added Resource Monitoring sources to the performance schedule by using labels, the Resources page displays:
    • The following information in the Server sources matching the labels set in the schedule (*Source defined in team space) section:
      • The labels, and the Resource Monitoring sources associated with those labels, that were monitored during the schedule run.
      • The Resource Monitoring sources that were unavailable or unreachable during the schedule run.
      • An empty array ([]) when you used labels that were not tagged to any Resource Monitoring source in Rational® Test Automation Server.
      • An asterisk (*) after the name of any Resource Monitoring source that was added at the team space level in Rational® Test Automation Server.
    • All resource counters for the Resource Monitoring sources that were monitored during the schedule run.
  • If you ran the performance schedule by using the -overridermlabels option from the Rational® Performance Tester command line, the Resources page displays:
    • The following information in the Server sources matching the labels set with the command-line flag -overridermlabels (*Source defined in team space) section:
      • The labels that you used to add the Resource Monitoring sources to the schedule for the schedule run.
      • The Resource Monitoring sources associated with those labels that were monitored during the schedule run.
      • The Resource Monitoring sources that were unavailable or unreachable during the schedule run.
      • An empty array ([]) when you used labels that were not tagged to any Resource Monitoring source in Rational® Test Automation Server.
      • An asterisk (*) after the name of any Resource Monitoring source that was added at the team space level in Rational® Test Automation Server.
    • All resource counters for the Resource Monitoring sources that were monitored during the schedule run.

The Legend shows the Resource Monitoring type and its resource counters. When you have multiple Resource Monitoring sources, the resource counters are displayed in front of their respective Resource Monitoring source name. You can customize the resource counter information that is displayed in a graph by clicking any individual resource counter or type of source. Clicking or double-clicking an individual resource counter has the following results:
  • A single click on a resource counter hides its data on the graph. Click the resource counter again to redisplay the data.
  • A double-click on a resource counter removes the information about all other resource counters from the graph and displays only the information about the selected resource counter.
    Tip: You can click the Select All option to restore all the resource counter information on the graph.

When you click any source, the graph removes all the resource counters of the other sources and displays only the resource counters of the selected source.

For example, suppose that you have an Apache httpd server and a Windows Performance Host as Resource Monitoring sources. When the schedule completes, the Resources page displays the resource counter information for both sources. If you want to analyze the resource counters of only one source, click either the Apache httpd server or the Windows Performance Host. Based on your selection, the graph is updated to show only the resource counters of the selected source.

The Performance Summary table under the graph lists the most recent values of the resource counters that were monitored during the schedule run. The first two columns show the Type of the source and Name of the resource counter. This table also lists the minimum, maximum, and average values of the resource counters that were monitored during the schedule run.

Page Element Responses

The Page Element Responses page shows the 10 slowest page element responses for the selected page.

Page Response Time Contributions

The Page Response Time Contributions page shows how much time each page element contributes to the overall page response time, along with the client delay time and connection time.

Page Size

This page lists the size of each page of your application under test. The size of a page contributes to its response time. If part of a page or an entire page is cached, the requests that are served from the cache do not contribute to the total page size.

The size of a page is mostly determined by the size of its elements. Each bar in this report represents a page. To view the Page Element Sizes report, click a bar and select Page Element Sizes. All the elements on the page are then displayed with their sizes.
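As a hedged illustration of this rule, the following sketch totals invented element sizes while skipping cached responses:

```python
# Sketch of the page size rule above: responses served from the local cache
# do not add to the total page size. Element names and sizes are hypothetical.
elements = [
    {"name": "index.html", "bytes": 14_200, "from_cache": False},
    {"name": "logo.png",   "bytes": 48_000, "from_cache": True},   # not counted
    {"name": "site.css",   "bytes": 9_100,  "from_cache": False},
]

page_size = sum(e["bytes"] for e in elements if not e["from_cache"])
print(f"page size: {page_size} bytes")   # page size: 23300 bytes
```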

Errors

The Errors page lists the number of errors and the corresponding actions that occurred in the test or schedule. You can view the following graphs on the Errors page:

  • Error Conditions: This graph displays the number of errors for each error condition that was met.

  • Error Behaviors: This graph displays how each error condition was managed.

  • Error Conditions over Time: This graph displays the errors that occurred over time during the playback of the test or schedule.

Note: You must have defined how to manage errors in the Advanced tab of the Test Details, VU Schedule Details, or Compound Test Details pane to log errors when a specific condition occurs.

Page Health

Use this report to determine whether the pages of your application have errors. If a page contains any error, the report shows that the page is not 100% healthy. If any pages are not 100% healthy, the report displays another section that lists those pages and the errors that were reported.