unisys

Web Application

Stress Test

And Data Analysis

January 20, 2000

by Chunyen (“Christine”) Chang

Unisys Consulting Services

Unisys U.S. Government Group

8008 Westpark Drive

McLean, Virginia 22102-3197

© Unisys Corporation. All rights reserved. The Unisys Logo is a registered trademark and service mark of Unisys Corporation.

Microsoft and MS are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Other trademarks and trade names mentioned herein are the property of their respective owners.

The information contained in this document represents the current views of Unisys Corporation on the issues discussed as of the date of publication. This document is for informational purposes only, and Unisys does not guarantee the accuracy of any information presented. This document is provided “as-is” to you, without warranty or commitment of any kind, express, implied, statutory, or otherwise. In no event shall Unisys or its suppliers be liable for indirect, consequential, incidental, special, or punitive damages (including without limitation damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of this information. This document is not to be reproduced or distributed without prior written permission of Unisys.

Table of Contents

1. Abstract

2. Web Application Stress Testing

2.1 Web Stress Test Plan

2.1.1 Web Application Stress Tool

2.1.2 Hardware Platform Configurations

2.2 Data Presentation

2.2.1 Typical Trial Report

2.2.2 Sample of Performance Monitor Counter Results

2.3 Data Analysis

2.3.1 Maximum Requests per Second

2.3.2 Resource Bottleneck

2.3.3 Time To Last Byte (TTLB)

LIST OF FIGURES

Figure 2-1. Application Settings

Figure 2-2. Typical Trial Summary Report

Figure 2-3. Performance Counters Output

Figure 2-4. Platform 1, Number of Threads and RPS

Figure 2-5. Platform 2, Number of Threads and RPS

Figure 2-6. Platform 3, Number of Threads and RPS

Figure 2-7. Platform 1 Server Performance Bottleneck Tests

Figure 2-8. Platform 2 Server Performance Bottleneck Tests

Figure 2-9. Platform 3 Server Performance Bottleneck Tests

Figure 2-10. TTLB Query Statement

LIST OF TABLES

Table 2-1. Hardware Platform Test Configurations

Table 2-2. Average TTLB



1. Abstract

Unisys Consulting Services engaged with an enterprise customer to assess and analyze the scalability and performance of a web application that calls Microsoft SQL Server 7.0 stored procedures. The customer’s specific goals included the following:

·  Determine the hardware platform required to support the new web application.

·  Address possible causes of performance bottlenecks.

·  Determine how fast users can expect web pages to be returned to them.

We used Microsoft’s Web Application Stress (WAS) tool to meet these objectives.

This paper presents the process and results of our findings. We ported the web application to three different hardware platforms in a lab environment. WAS client machines were set up to provide the required load to each of the three servers. We collected performance counter data and other statistics, including requests per second (RPS) and time to last byte (TTLB). The performance monitor data provided the information we needed to ensure that the number of client machines was adequate for each test trial, and it also captured the system resource usage that we measured to evaluate system bottlenecks. The maximum RPS data indicates the number of concurrent users that a server platform can support, and the TTLB data shows how quickly results are returned to the client machine by each of the three servers.

The test environment is described in Section 2.1, Web Stress Test Plan. The test results are presented in Section 2.2, Data Presentation, and the data analysis is detailed in Section 2.3, Data Analysis.



2. Web Application Stress Testing

To stress test the Web application that calls the SQL stored procedures, we ported the application onto three server platforms in an isolated lab environment. We determined the maximum number of requests per second that the Web servers could handle, discussed server resource bottlenecks, and presented system response times. Section 2.1 describes how the tests were performed, Section 2.2 presents a typical test result, and Section 2.3 presents the conclusions of our Web application stress tests.

2.1  Web Stress Test Plan

The Web stress test plan included sections that addressed both software tests and hardware platform configurations.

2.1.1  Web Application Stress Tool

The Microsoft Web Application Stress (WAS) tool is designed to simulate multiple browsers requesting pages from a Web site. This tool can realistically simulate many requests with relatively few client machines. We used this tool to request Web pages that call the standard error stored procedures. Depending on the hardware capability of the client and server machines, one or several WAS clients were set up to test the limits of the servers. Microsoft provides a tutorial on the WAS tool that includes information on installation, scripting, performance counters, settings, and reporting. The tutorial also provides general guidelines for using WAS and describes common Web testing problems; it is located on the WAS tool site (http://webtool.rte.microsoft.com). The following paragraphs describe our setup for these tests.

2.1.1.1  Application Settings

Figure 2-1 shows the settings we used to perform the tests. All variables in the settings were kept constant except the Concurrent Connections settings, which include Stress Level and Stress Multiplier. The Stress Level (threads) times the Stress Multiplier (sockets per thread) represents the stress level for each system, as the sketch following Figure 2-1 illustrates. These settings were changed for every trial of the tests. All tests were run for 3 minutes, which was sufficient in this case; one may want to increase the Test Run Time if the application is complex, observing the number of hits on each page to make the adjustment. The number of hits on each page appears in the Result Summary (see Figure 2-2, Typical Trial Summary Report). Request delays and throttle bandwidth setups were not used because we wanted to simulate LAN connections with no slow links such as modem connections.

Figure 2-1. Application Settings
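The WAS tool generates this load internally; as a rough illustration of the thread-times-socket model (a minimal sketch, not the WAS tool itself), the following Python program spins up Stress Level threads, each driving Stress Multiplier persistent connections, for a 3-minute run. The host name and page paths are hypothetical placeholders.

    import http.client
    import threading
    import time

    STRESS_LEVEL = 10       # WAS Stress Level: number of threads
    STRESS_MULTIPLIER = 2   # WAS Stress Multiplier: sockets per thread
    RUN_SECONDS = 180       # 3-minute trial, as in our tests
    HOST, PORT = "testserver", 80                            # placeholder server under test
    PATHS = ["/app/page%02d.asp" % i for i in range(1, 91)]  # hypothetical page names

    hits = 0
    hits_lock = threading.Lock()

    def worker(deadline):
        # One stress thread driving STRESS_MULTIPLIER persistent connections.
        global hits
        conns = [http.client.HTTPConnection(HOST, PORT) for _ in range(STRESS_MULTIPLIER)]
        i = 0
        while time.time() < deadline:
            slot = i % STRESS_MULTIPLIER
            try:
                conns[slot].request("GET", PATHS[i % len(PATHS)])
                conns[slot].getresponse().read()
                with hits_lock:
                    hits += 1
            except (OSError, http.client.HTTPException):
                conns[slot].close()  # reopen a dropped connection and keep going
                conns[slot] = http.client.HTTPConnection(HOST, PORT)
            i += 1

    deadline = time.time() + RUN_SECONDS
    threads = [threading.Thread(target=worker, args=(deadline,)) for _ in range(STRESS_LEVEL)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("requests per second: %.1f" % (hits / RUN_SECONDS))

Raising the multiplier rather than the thread count adds load with fewer threads, which matters for the context-switching guideline discussed in Section 2.1.1.3.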

2.1.1.2  Test Scripts

We set up 90 Active Server Pages (ASPs) to be called by the WAS clients. These 90 pages were separated into 30 groups. The WAS tool allows grouping of Web pages to organize the order in which the script items are invoked, and it allows controlling the group distribution with fine granularity, although in this test we kept the probability of each ASP being called the same (at 3.23 percent).
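As a small illustration of that grouping scheme (a sketch with hypothetical page names, not our actual script), the following splits 90 pages into 30 groups of three and picks pages uniformly, which yields an equal call probability for every page:

    import random

    # 90 ASPs split into 30 groups of 3; the names are hypothetical placeholders.
    pages = ["page%02d.asp" % i for i in range(1, 91)]
    groups = [pages[i:i + 3] for i in range(0, len(pages), 3)]
    assert len(groups) == 30

    def next_request():
        # Because every group has the same size, choosing a group uniformly and
        # then a page within it gives every page the same probability.
        group = random.choice(groups)
        return random.choice(group)

    print(next_request())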

2.1.1.3  Performance Monitor Counters

We collected the performance monitor counter data listed below. Each of these counters is described in the subsequent paragraphs, and a sketch showing one way to sample such counters programmatically follows the descriptions.

·  System: Percentage of Total Processor Time

·  Memory: Available Bytes

·  Thread: Context Switches per Second (Total)

·  Web Service: Get Requests per Second

·  Web Service: Post Requests per Second

·  Active Server Page: Requests per Second

·  Active Server Page: Requests Queued

·  SQL Server, Cache Manager: Cache Hit Ratio

·  SQL Server, Cache Manager: Cache Used Counts per Second

System, Percentage of Total Processor Time: This measures the amount of time that the processor is busy. When a processor consistently runs at over 75 percent utilization, it has become a system bottleneck.

Memory, Available Bytes: This indicates the amount of available physical memory.

Thread, Context Switches per Second (Total): This is the rate of switches from one thread to another. Thread switches can occur either inside a single process or across processes. A thread switch may occur either when one thread asks another for information or when a higher priority thread that is ready to run preempts the running thread. The fewer the better; more than 2,000 per second is excessive. If you are getting a lot of context switching, you can lower the stress level (thread count) and increase the stress multiplier (sockets per thread).

Web Service, Get Requests per Second: The rate at which HyperText Transfer Protocol (HTTP) requests using the GET method are made. GET requests are generally used for basic file retrievals or image maps, though they can be used with forms.

Web Service, Post Requests per Second: The rate at which HTTP requests using the POST method are made. POST requests are generally used for forms or gateway requests.

Active Server Page, Requests per Second: The number of requests executed per second.

Active Server Page, Requests Queued: The number of requests waiting for service in the queue. If the number of Requests Queued fluctuates considerably during stress and processor utilization remains relatively low, this is an indication that the script is calling a server Component Object Model (COM) component that is receiving more calls than it can handle. In this case, the server COM component is the bottleneck.

SQL Server, Cache Manager: Cache Hit Ratio: This indicates the percentage of data pages that were found in the buffer cache without incurring a read from disk—the higher the better.

SQL Server, Cache Manager: Cache Used Counts per Second: The number of times a cache object has been used.
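For readers who want to capture these counters programmatically rather than through Performance Monitor, the sketch below shows one way to sample a few of them with the pywin32 PDH bindings. This is an illustration under stated assumptions, not how we collected our data (we used Performance Monitor), and counter path names vary by Windows version; the paths below follow the NT 4 naming used above.

    import time
    import win32pdh  # pywin32; assumes a Windows host

    # Counter paths are assumptions and vary by Windows version.
    COUNTER_PATHS = [
        r"\System\% Total Processor Time",
        r"\Memory\Available Bytes",
        r"\Active Server Pages\Requests/sec",
        r"\Active Server Pages\Requests Queued",
    ]

    query = win32pdh.OpenQuery()
    counters = [win32pdh.AddCounter(query, p) for p in COUNTER_PATHS]
    win32pdh.CollectQueryData(query)   # the first sample primes the rate counters

    for _ in range(180):               # sample once per second for a 3-minute trial
        time.sleep(1)
        win32pdh.CollectQueryData(query)
        for path, handle in zip(COUNTER_PATHS, counters):
            _, value = win32pdh.GetFormattedCounterValue(handle, win32pdh.PDH_FMT_DOUBLE)
            print("%s = %.1f" % (path, value))

    win32pdh.CloseQuery(query)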

2.1.1.4  User Accounts and Client Machines

We established 600 user accounts on the WAS client machines to simulate access to the servers. These accounts were necessary for successful login to the servers.

We installed the WAS clients on four machines, named Changc, WS1, SERVER205, and COE-ES5000-4. Changc was a Hewlett-Packard laptop with a 366-MHz processor and 195 MB of RAM. WS1 was a 200-MHz workstation with 64 MB of RAM. SERVER205 had two 200-MHz processors and 130 MB of RAM, and COE-ES5000-4 had four 500-MHz processors and 4 GB of RAM. To stress a powerful server, either one powerful client or several less powerful clients are set up to provide the load. We followed the general guideline of using the less powerful clients first and adding resources as needed, based on performance monitor results. The most frequent resource bottleneck that we observed on the client side was processor utilization. If we observed that a client experienced processor utilization of 75 percent or more, we added a client machine to aid in the test, as the sketch below expresses.
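As a toy expression of that rule (a sketch; the utilization figures are illustrative, not measured values), the following flags any client whose average processor utilization crosses the 75 percent threshold:

    CPU_THRESHOLD = 75.0  # percent, per the guideline above

    def saturated_clients(client_cpu):
        # client_cpu maps client name -> average % processor time for a trial.
        return [name for name, cpu in client_cpu.items() if cpu >= CPU_THRESHOLD]

    # Illustrative numbers only: here WS1 is saturated, so another client
    # machine would be added before the next trial.
    print(saturated_clients({"Changc": 55.0, "WS1": 82.0}))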

2.1.2  Hardware Platform Configurations

The test was performed in a lab environment on an isolated local area network (LAN). The SQL database and the ASP code were ported to three hardware platforms. Each platform ran the Microsoft Windows NT Server 4.0, Service Pack 4, operating system. Table 2-1 lists both the hardware resources and the services loaded on each system. We checked the system configurations, set the paging file size to 1.5 times the RAM size, and stopped all unnecessary server services.

2.2  Data Presentation

We performed 62 trial runs in this Web application test: 34 trials against Platform 1, 11 against Platform 2, and 17 against Platform 3. More trials were run on Platform 1 because it was the first server tested; establishing the initial data pattern required more data points, and once the pattern was established and understood, fewer data points were needed to achieve results. The paragraphs below present a typical trial report and the analysis results.

Table 2-1. Hardware Platform Test Configurations

              Platform 1     Platform 2     Platform 3
Processors    2 x 200 MHz    4 x 200 MHz    4 x 500 MHz
Memory        130 MB         1,179 MB       4,128 MB
File Cache    17,000         19,324         26,832

Services:

Platform 1: Alerter, Computer Browser, Event Log, Server, Workstation, License Logging Service, TCP/IP NetBIOS Helper, Messenger, MS SQL Server, MS SQL Server OLAP Service, Net Logon, NT LM Security Support Provider, Plug and Play, Protected Storage, Remote Procedure Call Locator, Remote Procedure Call Service, Spooler, Windows Internet Name Service

Platform 2: Alerter, Computer Browser, Platform 2 NIC Management Agents, Platform 2 Remote Monitor Service, Platform 2 Enhanced IMD Idle Screen, Event Log, Server, Workstation, TCP/IP NetBIOS Helper, Messenger, MS SQL Server, Net Logon, NT LM Security Support Provider, Plug and Play, Protected Storage, Remote Procedure Call Locator, Remote Procedure Call Service, Smart CD Server, Schedule, SNMP, Spooler, Platform 2 System Shutdown Service

Platform 3: Alerter, Computer Browser, CA-Unicenter, CA-Unicenter WorldView Agent, CA-Unicenter (NR-Server), CA-Unicenter (Remote), CA-Unicenter (Transport), Event Log, Server, Workstation, TCP/IP NetBIOS Helper, Messenger, MS SQL Server, Net Logon, Plug and Play, Protected Storage, Remote Procedure Call Service, Spooler, TNG DB Server


2.2.1  Typical Trial Report

A WAS trial report contains nine sections: Summary, Overview, Script Settings, Test Clients, Result Codes, Page Summary, Page Groups, Page Data, and Performance Counters. The Summary section presents a great deal of vital information regarding the trial. A typical trial Summary report is shown in Figure 2-2. To examine trial details, one can review the results from the other eight sections of the report.

To validate a trial, the Result Codes section of the report is reviewed first. In this example, the result codes indicate that the clients received all pages successfully.
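As a simple illustration of that validation step (the per-request CSV export, file name, and column name are assumptions; WAS presents result codes as a report section), the sketch below tallies result codes and accepts the trial only if every request returned HTTP 200:

    import csv
    from collections import Counter

    def validate_trial(log_path):
        # Tally the HTTP result code of every request in the exported log.
        codes = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                codes[row["code"]] += 1
        print("result codes:", dict(codes))
        # A valid trial returns every page, so all codes should be 200 OK.
        return codes.get("200", 0) == sum(codes.values())

    print("trial valid:", validate_trial("trial_result_codes.csv"))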