Performance testing measures the performance characteristics of an application. The main
objective of a performance test is to demonstrate that the system functions to specification, with
acceptable response times, while processing the required transaction volumes against a production-sized
database. It is defined as the technical investigation done to determine or validate the speed, scalability,
and/or stability characteristics of the product under test. Performance-related activities, such as
testing and tuning, are concerned with achieving response times, throughput, and resource-utilization
levels that meet the performance objectives for the application under test.
1.1 Objective
The objective of a performance test is to demonstrate that the system meets requirements for transaction
throughput and response times simultaneously.
The main deliverables from such a test, prior to execution, are automated test scripts and an
infrastructure to be used to execute automated tests for extended periods. This infrastructure is an asset,
and an expensive one, so it pays to make as much use of it as possible. Fortunately, this
infrastructure is a test bed that can be re-used for other tests with broader objectives. A
comprehensive test strategy would define a test infrastructure that enables all of these objectives to be met.
Performance testing is the process of determining the speed or effectiveness of a computer, network,
software program, or device. This process can involve quantitative tests done in a lab, such as measuring
the response time or the number of MIPS (millions of instructions per second) at which a system
functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated.
Performance testing is often done in conjunction with stress testing.
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It
involves testing beyond normal operational capacity, often to a breaking point, in order to observe the
results. It refers to tests that put a greater emphasis on robustness, availability, and error handling under
a heavy load, rather than on what would be considered correct behavior under normal circumstances. In
particular, the goals of such tests may be to ensure the software doesn't crash in conditions of insufficient
computational resources (such as memory or disk space), unusually high concurrency, or denial of
service attacks.
Spike testing is done by suddenly increasing (spiking) the number of users and observing the behavior of the
application, to see whether it goes down or is able to handle dramatic changes in load.
Endurance Testing (Soak Testing) is usually done to determine if the application can sustain the
continuous expected load. Generally this test is done to determine if there are any memory leaks in the
application.
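To make these load shapes concrete, the sketch below generates virtual-user counts over time for a steady load test, a spike test, and a soak test. This is a minimal, illustrative Python sketch; all user counts and durations are invented, not taken from any real test plan.

Example (illustrative Python sketch):

    # Illustrative load shapes: Vuser count as a function of elapsed minutes.
    def load_profile(minute):
        # steady ramp to 100 users, hold, then ramp down after minute 55
        return min(minute * 10, 100) if minute < 55 else max(0, 100 - (minute - 55) * 20)

    def spike_profile(minute):
        # sudden jump from a 20-user background to 200 users for 5 minutes
        return 200 if 30 <= minute < 35 else 20

    def soak_profile(minute):
        # hold the expected load flat for the whole (long) run
        return 80

    for m in (0, 10, 32, 60):
        print(m, load_profile(m), spike_profile(m), soak_profile(m))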
1.3 Risks Addressed via Performance Testing
This section covers speed, scalability, availability, and recoverability as they relate to performance testing.
Scalability Testing measures a software application's capability to scale up or scale
out in terms of any of its non-functional capabilities, be it the user load supported, the number of
transactions, or the data volume.
Availability Testing checks that the application is available 24/7; generally this is done in the
production environment using monitoring tools such as SiteScope, BAC, and Tivoli.
Recoverability Testing is disaster recovery testing, done mainly from a database perspective and also for load
balancing. Recoverability means that committed transactions have not read data written by aborted
transactions (whose effects do not exist in the resulting database states).
1.4 Understanding Volume
This section discusses user sessions, session duration, transactions, and user
abandonment as they relate to volume in performance testing.
User Sessions: for example, a login session is the period of activity between a user logging in and logging out of a (multi-
user) system.
Session Duration: the average amount of time that visitors spend on the site each time they visit its
pages.
Transaction: an interaction carried out between separate entities, objects, or functionalities, often
involving the exchange of items of value, such as information or services.
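As a concrete illustration of session duration, the following minimal Python sketch computes the average duration from login/logout timestamp pairs; the timestamps are invented sample data.

Example (illustrative Python sketch):

    from datetime import datetime

    sessions = [  # (login, logout) pairs for three hypothetical visits
        ("2024-01-05 09:00:00", "2024-01-05 09:07:30"),
        ("2024-01-05 09:02:00", "2024-01-05 09:12:00"),
        ("2024-01-05 09:10:00", "2024-01-05 09:13:45"),
    ]

    fmt = "%Y-%m-%d %H:%M:%S"
    durations = [
        (datetime.strptime(out, fmt) - datetime.strptime(login, fmt)).total_seconds()
        for login, out in sessions
    ]
    print("average session duration: %.0f seconds" % (sum(durations) / len(durations)))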
1.5 Test Tools
LoadRunner
Performance Center
WAPT
VSTS
OpenSTA
Rational Performance Tester
Various tools are available in the industry to find the causes of slow performance in the following
areas:
• Application
• Database
• Network
• Client-side processing
• Load balancer
• Application scripting
• Tool vendor information
While system evaluation is a continual process throughout the performance testing effort, the bulk of the
evaluation is most valuable if conducted very early in the effort. The evaluation can be thought of
as an evaluation of the project and system context. The intent is to collect information about the project
as a whole, the functions of the system, the expected user activities, the system architecture, and any
other details that are helpful in creating a performance testing strategy specific to the needs of the
particular project. Starting with this information, the performance goals and requirements can be
collected and/or determined more efficiently, then validated with project stakeholders. The information collected
via system evaluation is additionally invaluable when characterizing the workload and assessing project
and system risks. At the same time, we need techniques to effectively and efficiently determine and
document the system's functions, the expected user activities, and the system's logical
and physical architecture.
Performance testing objectives are fairly easy to capture. The easiest way to capture performance testing
objectives is simply to ask each member of the team what value you can add for him or her
while you are doing performance testing. That value may be providing resource-utilization data under
load, generating specific loads to assist with tuning an application server, or providing a report of the
number of objects requested by each web page. While collecting performance testing objectives early in
the project is a good habit to get into, so is periodically revisiting them and checking in with members of
the team to see if there are any new objectives they would like to see added. The major
performance testing objectives are as follows:
• Establish valuable performance testing objectives at any point in the development lifecycle
• Communicate those objectives and the value they add to both team members and executives
• Establish technical, performance related targets (sometimes called performance budgets) that
can be validated independently from end-user goals and requirements
• Communicate those targets and the value that testing against them provides to both team
members and executives
2.3 Quantify End-User Response Time Goals
Determining and quantifying application performance requirements and goals accurately is a critical
component of valuable performance testing. Successfully verbalizing the application's performance
requirements and goals is the first and most important step in this process. Remember that when all is
said and done, there is only one performance requirement that really matters: that application users are
not annoyed or frustrated by poor performance. The users of your application don't know or care what the
results of your performance tests are, how many seconds it takes something to display on the screen past
their threshold for "too long," or what your throughput is. The only thing application users know is that they
either notice that the application seems slow or they don't, and users will notice (or not) based on
anything from their mood to what they have become accustomed to. This How-To discusses methods for
converting these feelings into numbers, but never forget to validate your quantification by putting the
application in front of real users. The major objectives for end-user response time goals follow.
Baselining the application is where test execution actually begins. The intent is twofold. First, all scripts
need to be executed, validated, and debugged (if necessary). Second, various single- and multi-user tests
are executed and recorded to provide a basis of comparison for all future testing. Initial baselines are
typically taken as soon as the test environment is available; re-baselining occurs at each new release
(a minimal comparison sketch follows the list below). The following questions help drive baseline testing forward:
• Is it hardware or software?
• Can the problem source be identified easily?
• Can you bypass the problem?
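The comparison against the baseline can be as simple as the following Python sketch; the transaction names, timings, and the 20% tolerance are all invented for illustration.

Example (illustrative Python sketch):

    baseline = {"login": 1.2, "search": 0.8, "checkout": 2.5}  # seconds, from the baseline run
    current  = {"login": 1.3, "search": 1.4, "checkout": 2.6}  # seconds, from the new release

    TOLERANCE = 1.20  # flag anything more than 20% slower than baseline
    for txn, base in baseline.items():
        now = current[txn]
        status = "OK" if now <= base * TOLERANCE else "REGRESSION"
        print(f"{txn:10s} baseline={base:.2f}s current={now:.2f}s {status}")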
3.4 Server Configurations
This Section includes the Hardware Configuration of Application Server / Web Server / Database Server.
Example:
• Processor
• Virtual Memory
• Physical Memory
• LAN Card
3.5 Client Configurations
This section includes the hardware configuration of the client machines where Vusers will be emulated (Load
Generator/Controller).
3.6 Scalable Test Environments
Once we have decided which servers and clients will be needed on the test network, we next need to
decide how many physical machines the test lab requires. We can save money by creating multiple
servers on one physical machine, using virtualization software such as Microsoft Virtual PC/Virtual Server.
This is an especially scalable solution because it allows us to spend less money on hardware, and we
can add additional virtual servers by upgrading disk space and RAM, instead of buying complete
machines to emulate each new server added to the production network.
4. TEST EXECUTION
4.1 Test Design and Execution
Based on the test strategy, detailed test scenarios will be prepared. During the test design period the
following activities will be carried out:
• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script Recording/ Programming
• Script Customization (Delay, Checkpoints, Synchronization points)
• Data Generation
• Parameterization/ Data pooling
4.2 LoadRunner Executions
Once a scenario is designed that includes the business transaction scripts, virtual user load, load generators
(if any), ramp up/ramp down, test duration, and client/server resource measurements (objects and
counters), we can plan for test execution.
During Test Execution, monitor essential online graphs (Transaction Response Time, Hits per Second,
Throughput, Passed/Failed Transactions, Error messages (if any)).
The first step when the Analysis Tool time-series graphs are being prepared is to generate the values that
can be seen in the Graph Data sheet. This is done by dividing the graph time span into slots and taking
the mean of the raw values falling within each slot as the Graph Data value for that slot. The duration of the
slots in the Graph Data is referred to as the granularity of the graph. This can be set to a number of
seconds, minutes, or hours. Shorter periods provide more detail; longer ones give more of an overview (see the sketch below).
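The slot-and-mean mechanism just described can be expressed in a few lines of Python; the sample points and the 5-second granularity below are invented.

Example (illustrative Python sketch):

    from collections import defaultdict

    samples = [(1, 0.9), (3, 1.1), (7, 1.4), (12, 2.0), (14, 1.8)]  # (second, response time)
    granularity = 5  # slot width in seconds

    # group raw samples into fixed-width slots, then reduce each slot to its mean
    slots = defaultdict(list)
    for t, value in samples:
        slots[t // granularity * granularity].append(value)

    for start in sorted(slots):
        mean = sum(slots[start]) / len(slots[start])
        print(f"slot {start}-{start + granularity}s: mean={mean:.2f}")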
4.4 Data Presentations
4.4.1 Analysis Summary
4.4.1.1 Header Time Range
This date is, by default, in the European format of dd/mm/yy (30/8/2004 for
August 30, 2004).
4.4.1.2 Scenario Name:
The file path to the .lrs file
4.4.1.3 Results in Session:
The file path to the .lrr file
4.4.2 Statistics Summary
4.4.2.1 Maximum Running Vusers
This number is usually smaller than the number of VUsers specified in run-time
parameters because of ramp-up time and processing delays.
4.4.2.2 Total Throughput (bytes):
Dividing this by the amount of time during the test run yields the next number:
4.4.2.3 Average Throughput (bytes/second):
This could be shown as a straight horizontal line in the Throughput graph.
4.4.2.4 Total Hits:
Dividing this by the amount of time during the test run yields the next number:
4.4.2.5 Average Hits per Second:
This could be shown as a straight horizontal line in the Hits per Second graph.
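As a worked example of the two averages above, this tiny Python sketch divides invented totals by an invented test duration.

Example (illustrative Python sketch):

    total_throughput_bytes = 52_428_800  # Total Throughput (bytes)
    total_hits = 12_000                  # Total Hits
    duration_seconds = 600               # length of the test run

    print("Average Throughput: %.1f bytes/second" % (total_throughput_bytes / duration_seconds))
    print("Average Hits per Second: %.1f" % (total_hits / duration_seconds))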
4.4.3 Transaction Summary
4.4.3.1 Transactions
Total Passed is the total of the Pass column. The number of transactions
passed and failed is the total count of every action defined in the script,
multiplied by the number of Vusers, by the number of repetitions, and by the
number of iterations.
• Vusers
• Transactions
• Web Resources
• System Resources
Application users think, read, and type at different speeds, and it is the performance tester's job to figure
out how to model and script those varying speeds as part of the testing process. This How-To explains all
of the necessary theory about determining and scripting realistic user delays (a sampling sketch follows the list below).
• Realistic user delays are important to test results
• Determine realistic durations and distribution patterns for user delay times
• Incorporate realistic user delays into test designs and test scripts
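One common way to model varying delays is to sample think times from a skewed distribution instead of using a single fixed pause. The sketch below uses a lognormal distribution; the median and sigma values are invented, not prescribed.

Example (illustrative Python sketch):

    import random

    def think_time(median=5.0, sigma=0.5):
        # lognormal gives the long right tail typical of human delays
        return random.lognormvariate(mu=0, sigma=sigma) * median

    random.seed(42)
    print(["%.1fs" % think_time() for _ in range(5)])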
5.1.2 Model Representative User Groups
Modeling a user community has some special considerations in addition to those for modeling individual
users. This How-To demonstrates how to develop user community models that realistically represent
the usage of the application, by focusing on groups of users and how they interact from the perspective of
the application.
• Identify the hidden values that may cause errors in scripts and executions (see the sketch after this list).
• Incorporate the appropriate tool functions where required to handle hidden values.
• If required, create re-usable actions or classes in Virtual User Generator scripts.
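The essence of handling hidden values is capturing a dynamic value from one response and reusing it in the next request, as in this minimal Python sketch (the response text, boundaries, and URL are invented).

Example (illustrative Python sketch):

    import re

    response = '<input type="hidden" name="sessionId" value="A1B2C3D4">'

    # capture the dynamic value between left/right boundaries
    left, right = 'name="sessionId" value="', '"'
    match = re.search(re.escape(left) + "(.*?)" + re.escape(right), response)
    session_id = match.group(1) if match else None

    # reuse the captured value in the next request
    next_request = f"/checkout?sessionId={session_id}"
    print(next_request)  # /checkout?sessionId=A1B2C3D4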
This is "failover testing" across the load balancers in a real-time environment, with all expected load
generated through the load balancer. Load balancers and networks shouldn't actually be
causing performance problems or bottlenecks, but if they are, some configuration changes will usually
remedy the problem. Load balancers are conceptually quite simple: they take the incoming load of client
requests and distribute that load across multiple server resources. When configured correctly, a load
balancer rarely causes a performance problem. The only way to ensure that a load balancer is configured
properly is to test it under load before it is put into use in the production system. The bottom line is that if
the load balancer isn't speeding up the site or increasing the volume it can handle, it's not doing its job
properly and needs to be reconfigured.
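Conceptually, the distribution step is as simple as the round-robin sketch below; the server names are invented, and real load balancers add health checks, weighting, and session affinity on top of this.

Example (illustrative Python sketch):

    from itertools import cycle

    # rotate incoming requests across the pool of servers
    servers = cycle(["app-server-1", "app-server-2", "app-server-3"])

    for request_id in range(7):
        print(f"request {request_id} -> {next(servers)}")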
Figure: Physical architecture with load balancer
7.3 Core Concepts
Virtual User Generator (VuGen) – records Vuser scripts that emulate the steps of real
users using the application.
Parameterization – also known as application data; the data is resident in the application's
database. Examples: ID numbers and passwords.
Correlation – also known as user-generated data or dynamic values, created fresh during a run.
Examples: a new unique ID, an email address, or session IDs.
Controller – an administrative center for creating, maintaining, and executing
scenarios. The Controller assigns Vusers and load generators to scenarios, starts and stops load tests,
and performs other administrative tasks.
Load generators – (also known as hosts) machines used to run the Vusers that generate load on the
application under test.
LoadRunner Analysis – uses the load test results to create graphs and reports that are used to correlate system
information and identify bottlenecks and performance issues.
Monitors – observe and record client and server resource usage (system, web, application, and database counters) while a scenario runs.
7.4 Benchmarking Run / Execution
To validate that there is enough test hardware available in the test environment, benchmark the business
processes against the test hardware: take a business process and execute a small number of users
against the application.
7.5 Test Design
Based on the test strategy, detailed test scenarios will be prepared. During the test design period the
following activities will be carried out:
• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script Recording/ Programming
• Script Customization (Delay, Checkpoints, Synchronizations points)
• Data Generation
• Parameterization/ Data pooling
7.6 Running the Test
The test execution will follow the various types of test identified in the test plan. All the scenarios
identified will be executed. Virtual user loads are simulated based on the usage pattern, and load levels
are applied as stated in the performance test strategy.
• Test logs
• Test Result
7.7 Hardware Setup
The minimum requirement is a Pentium III machine, and every virtual user will utilize about 1 MB of memory.
The following performance test reports/graphs can be generated as part of performance testing.
Based on the performance report analysis, suggestions on improvement or tuning will be provided to the
design team.
7.9 Performance Counters
The following measurements are most commonly used when monitoring the Oracle server:
CPU used by this session – The amount of CPU time (in tens of milliseconds) used by a session between the time a user call started and ended. Some user calls can be completed within 10 milliseconds and, as a result, the start and end user-call times can be the same; in this case, 0 milliseconds are added to the statistic. A similar problem can exist in operating system reporting, especially on systems that suffer from many context switches.
Bytes received via SQL*Net from client – The total number of bytes received from the client over Net8.
Opens of replaced files – The total number of files that needed to be reopened because they were no longer in the process file cache.
User calls – Oracle allocates resources (Call State Objects) to keep track of relevant user call data structures every time you log in, parse, or execute. When determining activity, the ratio of user calls to RPI calls gives you an indication of how much internal work is generated as a result of the type of requests the user is sending to Oracle.
SQL*Net roundtrips to/from client – The total number of Net8 messages sent to, and received from, the client.
Bytes sent via SQL*Net to client – The total number of bytes sent to the client from the foreground process(es).
DB block changes – Closely related to consistent changes, this statistic counts the total number of changes that were made to all blocks in the SGA that were part of an update or delete operation. These are changes that generate redo log entries and hence will cause permanent changes to the database if the transaction is committed. This statistic is a rough indication of total database work and indicates (possibly on a per-transaction level) the rate at which buffers are being dirtied.
Total file opens – The total number of file opens being performed by the instance. Each process needs a number of files (control file, log file, database file) to work against the database.
7.10 Performance Metrics
The common metrics selected/used during performance testing are as follows (the sketch after this list computes the last three from sample numbers):
Response time – the time the system takes to respond to a request.
Turnaround time – the time between the submission of a batch job and the completion of its output.
Stretch factor – the ratio of the response time with concurrent users to that with a single user.
Capacity:
• Nominal capacity: maximum achievable throughput under ideal workload conditions, e.g.,
bandwidth in bits per second. The response time at maximum throughput is too high.
• Usable capacity: maximum throughput achievable without exceeding a pre-specified response-
time limit.
• Efficiency: the ratio of usable capacity to nominal capacity; or, the ratio of the performance of an n-
processor system to that of a one-processor system.
• Utilization: the fraction of time the resource is busy servicing requests. For memory, the average
fraction used.
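Here is the promised sketch computing stretch factor, efficiency, and utilization from the definitions above; every input is an invented sample number.

Example (illustrative Python sketch):

    single_user_rt = 0.8       # seconds, response time with one user
    concurrent_rt = 2.4        # seconds, response time at the tested load
    nominal_capacity = 1000.0  # requests/second under ideal conditions
    usable_capacity = 750.0    # requests/second within the response-time limit
    busy_time, total_time = 420.0, 600.0  # seconds busy / seconds observed

    print("Stretch factor:", concurrent_rt / single_user_rt)
    print("Efficiency:    ", usable_capacity / nominal_capacity)
    print("Utilization:   ", busy_time / total_time)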
8. DATA PRESENTATION
8.1 Data Presentation at Different Levels in the Organization
Data presentation differs at different levels of the organization, for example when we are participating in the
execution meeting. The execution/analysis data should have low granularity and exclude think times and
delay times. The data report should cover individual transaction counts and system resource utilization
across the web and application tiers, along with any external-service and shared-services data.
8.2 How to Organize Efficient Data Graphs
It is good practice to organize the data so that it is easy to understand, which means correlating the
graphs properly with the required graph information. Examples: throughput, hits per second, average
transaction response times, and any spikes in the environment.
8.3 Summarize Results Across Test Runs Efficiently
Provide a brief summary of every graph after each execution, so it is easy to compare
the results with previous executions. The best practice is to create reports in the form of a template and
then apply that template whenever required after every execution (a minimal sketch follows).
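A template can be as plain as a fixed set of columns printed the same way after every run, as in this Python sketch with invented run data.

Example (illustrative Python sketch):

    runs = {
        "run-1": {"avg_rt": 1.9, "hits_per_sec": 210, "errors": 3},
        "run-2": {"avg_rt": 1.6, "hits_per_sec": 240, "errors": 1},
    }

    # print the same columns after every execution so runs stay comparable
    print(f"{'run':8s}{'avg_rt(s)':>10s}{'hits/s':>8s}{'errors':>8s}")
    for name, stats in runs.items():
        print(f"{name:8s}{stats['avg_rt']:>10.2f}{stats['hits_per_sec']:>8d}{stats['errors']:>8d}")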
8.4 Use Degradation Curves in Reports
It is very important that our data graphs display the degradation curves properly and provide an
exact summary for the respective spikes. If there is degradation in the graphs, we
need to do root-cause analysis and help the developers see where exactly the problem occurs (see the sketch below).
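One simple way to point at the spike is to report the first load level at which the curve crosses the acceptable response-time line, as in this sketch (load levels, timings, and the 3-second limit are invented).

Example (illustrative Python sketch):

    curve = [(50, 1.1), (100, 1.3), (150, 1.9), (200, 4.8), (250, 9.5)]  # (users, seconds)
    limit = 3.0

    # first load level where response time exceeds the acceptable limit
    knee = next((users for users, rt in curve if rt > limit), None)
    print(f"response time first exceeds {limit}s at about {knee} users")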
8.5 Report Abandonment and Other Performance Problems
Performance problems can differ from execution to execution because there can be many issues across
the environment: web server and application server configuration files, the network, virtual IPs, load
balancers, external services, shared services, and firewalls.