
Software Testing

Confidential

Cognizant Technology Solutions

Table of Contents

1 INTRODUCTION TO SOFTWARE
1.1 Evolution of the Software Testing Discipline
1.2 The Testing Process and the Software Testing Life Cycle
1.3 Broad Categories of Testing
1.4 Widely Employed Types of Testing
1.5 The Testing Techniques
1.6 Chapter Summary
2 BLACK BOX AND WHITE BOX TESTING
2.1 Introduction
2.2 Black Box Testing
2.3 Testing Strategies/Techniques
2.4 Black Box Testing Methods
2.5 Black Box (Vs) White Box
2.6 White Box Testing
3 GUI TESTING
3.1 Section 1 - Windows Compliance Testing
3.2 Section 2 - Screen Validation Checklist
3.3 Specific Field Tests
3.4 Validation Testing - Standard Actions
4 REGRESSION TESTING
4.1 What is Regression Testing
4.2 Test Execution
4.3 Change Request
4.4 Bug Tracking
4.5 Traceability Matrix
5 PHASES OF TESTING
5.1 Introduction
5.2 Types and Phases of Testing
5.3 The "V" Model
6 INTEGRATION TESTING
6.1 Generalization of Module Testing Criteria
7 ACCEPTANCE TESTING
7.1 Introduction - Acceptance Testing
7.2 Factors Influencing Acceptance Testing
7.3 Conclusion
8 SYSTEM TESTING
8.1 Introduction to System Testing
8.2 Need for System Testing
8.3 System Testing Techniques
8.4 Functional Techniques
8.5 Conclusion
9 UNIT TESTING
9.1 Introduction to Unit Testing
9.2 Unit Testing - Flow
Results
Unit Testing - Black Box Approach
Unit Testing - White Box Approach
Unit Testing - Field Level Checks
Unit Testing - Field Level Validations
Unit Testing - User Interface Checks
9.3 Execution of Unit Tests
Unit Testing Flow
Disadvantage of Unit Testing
Method for Statement Coverage
Trace Coverage
9.4 Conclusion
10 TEST STRATEGY
10.1 Introduction
10.2 Key Elements of Test Management
10.3 Test Strategy Flow
10.4 General Testing Strategies
10.5 Need for Test Strategy
10.6 Developing a Test Strategy
10.7 Conclusion
11 TEST PLAN
11.1 What is a Test Plan?
Contents of a Test Plan
11.2 Contents (in Detail)
12 TEST DATA PREPARATION - INTRODUCTION
12.1 Criteria for Test Data Collection
12.2 Classification of Test Data Types
12.3 Organizing the Data
12.4 Data Load and Data Maintenance
12.5 Testing the Data
12.6 Conclusion
13 TEST LOGS - INTRODUCTION
13.1 Factors Defining the Test Log Generation
13.2 Collecting Status Data
14 TEST REPORT
14.1 Executive Summary
15 DEFECT MANAGEMENT
15.1 Defect
15.2 Defect Fundamentals
15.3 Defect Tracking
15.4 Defect Classification
15.5 Defect Reporting Guidelines
16 AUTOMATION
16.1 Why Automate the Testing Process?
16.2 Automation Life Cycle
16.3 Preparing the Test Environment
16.4 Automation Methods
17 GENERAL AUTOMATION TOOL COMPARISON
17.1 Functional Test Tool Matrix
17.2 Record and Playback
17.3 Web Testing
17.4 Database Tests
17.5 Data Functions
17.6 Object Mapping
17.7 Image Testing
17.8 Test/Error Recovery
17.9 Object Name Map
17.10 Object Identity Tool
17.11 Extensible Language
17.12 Environment Support
17.13 Integration
17.14 Cost
17.15 Ease of Use
17.16 Support
17.17 Object Tests
17.18 Matrix
17.19 Matrix Score
18 SAMPLE TEST AUTOMATION TOOL
18.1 Rational Suite of Tools
18.2 Rational Administrator
18.3 Rational Robot
18.4 Robot Login Window
18.5 Rational Robot Main Window - GUI Script
18.6 Record and Playback Options
18.7 Verification Points
18.8 About SQABasic Header Files
18.9 Adding Declarations to the Global Header File
18.10 Inserting a Comment into a GUI Script
18.11 About Data Pools
18.12 Debug Menu
18.13 Compiling the Script
18.14 Compilation Errors
19 RATIONAL TEST MANAGER
19.1 Test Manager - Results Screen
20 SUPPORTED ENVIRONMENTS
20.1 Operating System
20.2 Protocols
20.3 Web Browsers
20.4 Markup Languages
20.5 Development Environments
21 PERFORMANCE TESTING
21.1 What is Performance Testing?
21.2 Why Performance Testing?
21.3 Performance Testing Objectives
21.4 Pre-requisites for Performance Testing
21.5 Performance Requirements
22 PERFORMANCE TESTING PROCESS
22.1 Phase 1 - Requirements Study
22.2 Phase 2 - Test Plan
22.3 Phase 3 - Test Design
22.4 Phase 4 - Scripting
22.5 Phase 5 - Test Execution
22.6 Phase 6 - Test Analysis
22.7 Phase 7 - Preparation of Reports
22.8 Common Mistakes in Performance Testing
22.9 Benchmarking Lessons
23 TOOLS
23.1 LoadRunner 6.5
23.2 WebLoad 4.5
23.3 Architecture Benchmarking
23.4 General Tests
24 PERFORMANCE METRICS
24.1 Client Side Statistics
24.2 Server Side Statistics
24.3 Network Statistics
24.4 Conclusion
25 LOAD TESTING
25.1 Why is Load Testing Important?
25.2 When Should Load Testing Be Done?
26 LOAD TESTING PROCESS
26.1 System Analysis
26.2 User Scripts
26.3 Settings
26.4 Performance Monitoring
26.5 Analyzing Results
26.6 Conclusion
27 STRESS TESTING
27.1 Introduction to Stress Testing
27.2 Background to Automated Stress Testing
27.3 Automated Stress Testing Implementation
27.4 Programmable Interfaces
27.5 Graphical User Interfaces
27.6 Data Flow Diagram
27.7 Techniques Used to Isolate Defects
28 TEST CASE COVERAGE
28.1 Test Coverage
28.2 Test Coverage Measures
28.3 Procedure-Level Test Coverage
28.4 Line-Level Test Coverage
28.5 Condition Coverage and Other Measures
28.6 How Test Coverage Tools Work
28.7 Test Coverage Tools at a Glance
29 TEST CASE POINTS (TCP)
29.1 What is a Test Case Point (TCP)
29.2 Calculating the Test Case Points
29.3 Chapter Summary


1 Introduction to Software

1.1 Evolution of the Software Testing discipline


The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West Germany. It focused on the growing software crisis. Thus we see that the software crisis of quality, reliability, high costs and so on started way back when most of today's software testers were not even born.

The attitude towards Software Testing has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and cost-effective software. There has also been a growing interest in software safety, protection and security, and hence an increased acceptance of testing as a technical discipline and also as a career choice.

To answer "What is Testing?" we can go by the famous definition of Myers, which says, "Testing is the process of executing a program with the intent of finding errors."

1.2 The Testing process and the Software Testing Life Cycle
Every testing project has to follow the waterfall model of the testing process. The waterfall model is as given below:
1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting
According to the respective project, the scope of testing can be tailored, but the process mentioned above is common to any testing activity.
Software Testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till the implementation, testing needs to be done in every phase. The V-Model of the Software Testing Life Cycle, along with the Software Development Life Cycle given below, indicates the various phases or levels of testing.
[Figure: SDLC - STLC V-Model, showing the development phases (Requirement Study, High Level Design, Low Level Design) on one arm and the corresponding testing phases (Unit Testing, Integration Testing, System Testing, User Acceptance Testing, Production Verification Testing) on the other.]

1.3 Broad Categories of Testing


Based on the V-Model mentioned above, we see that there are two categories of testing activities that can be done on software, namely:
Static Testing
Dynamic Testing
The kind of verification we do on the software work products before the process of compilation and creation of an executable is more of requirement review, design review, code review, walkthrough and audits. This type of testing is called Static Testing. When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

1.4 Widely Employed Types of Testing


From the V-model, we see that there are various levels or phases of testing, namely, Unit testing, Integration testing, System testing, User Acceptance testing etc. Let us see a brief definition of the widely employed types of testing.
Unit Testing: The testing done to a unit or to the smallest piece of software, to verify whether it satisfies its functional specification or its intended design structure.
Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.
Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.
System Testing: Testing the software for the required specifications on the intended hardware.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.
Performance Testing: To evaluate the time taken or response time of the system to perform its required functions, in comparison with the specified performance requirements.
Stress Testing: To evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory, processor utilization) to ensure the system does not break unexpectedly.
Load Testing: Load Testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.
Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.
Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.

1.5 The Testing Techniques


To perform these types of testing, there are two widely used testing techniques. The above said testing types are performed based on the following testing techniques.
Black-Box testing technique: This technique is used for testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.
White-Box testing technique: This technique is used for testing based on analysis of internal logic (design, code, etc.). (But expected results still come from requirements.) Also known as structural testing.
These topics will be elaborated in the coming chapters.

1.6 Chapter Summary



This chapter covered the introduction and basics of software testing, mentioning:
Evolution of Software Testing
The Testing process and lifecycle
Broad categories of testing
Widely employed Types of Testing
The Testing Techniques


2 Black Box and White Box Testing


2.1 Introduction
Test Design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. There are two primary methods by which tests can be designed, and they are:
BLACK BOX
WHITE BOX

Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

2.2 Black Box Testing


Black Box Testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used.
Though centered around the knowledge of user requirements, black box tests do not necessarily involve the participation of users. Among the most important black box tests that do not involve users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of black box test that involve users, i.e. field and laboratory tests. In the following, the most important aspects of these black box tests will be described briefly.


2.2.1.1 Black box testing - without user involvement


The so-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements. There are different approaches to functionality testing. One is the testing of each program feature or function in sequence. The other is to test module by module, i.e. each function where it is called first.
The objective of volume tests is to find the limitations of the software by processing a huge amount of data. A volume test can uncover problems that are related to the efficiency of a system, e.g. incorrect buffer sizes, a consumption of too much memory space, or only show that an error message would be needed telling the user that the system cannot process the given amount of data.
During a stress test, the system has to process a huge amount of data or perform many function calls within a short period of time. A typical example could be to perform the same function from all workstations connected in a LAN within a short period of time (e.g. sending e-mails, or, in the LP area, to modify a term bank via different terminals simultaneously).
The aim of recovery testing is to make sure to which extent data can be recovered after a system breakdown. Does the system provide possibilities to recover all of the data or part of it? How much can be recovered and how? Is the recovered data still correct and consistent? Particularly for software that needs high reliability standards, recovery testing is very important.
The notion of benchmark tests involves the testing of program efficiency. The efficiency of a piece of software strongly depends on the hardware environment and therefore benchmark tests always consider the software/hardware combination. Whereas for most software engineers benchmark tests are concerned with the quantitative measurement of specific operations, some also consider user tests that compare the efficiency of different software systems as benchmark tests. In the context of this document, however, benchmark tests only denote operations that are independent of personal variables.

2.2.1.2 Black box testing - with user involvement


For tests involving users, methodological considerations are rare in SE literature. Rather, one may find practical test reports that distinguish roughly between field and laboratory tests. In the following, only a rough description of field and laboratory tests will be given.
E.g. Scenario Tests. The term "scenario" has entered software evaluation in the early 1990s. A scenario test is a test case which aims at a realistic user background for the evaluation of software as it was defined and performed. It is an instance of black box testing where the major objective is to assess the suitability of a software product for every-day routines. In short, it involves putting the system into its intended use by its envisaged type of user, performing a standardised task.
In field tests users are observed while using the software system at their normal working place. Apart from general usability-related aspects, field tests are particularly useful for assessing the interoperability of the software system, i.e. how the technical integration of the system works. Moreover, field tests are the only real means to elucidate problems of the organisational integration of the software system into existing procedures. Particularly in the LP environment this problem has frequently been underestimated. A typical example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts are properly organised and stored and, last but not least, individual translators are not too motivated to change their working habits.
Laboratory tests are mostly performed to assess the general usability of the system. Due to the high laboratory equipment costs, laboratory tests are mostly only performed at big software houses such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests.

2.3 Testing Strategies/Techniques


Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guess work by the tester as to the methods of the function.
Data outside of the specified input range should be tested to check the robustness of the program.
Boundary cases should be tested (top and bottom of the specified range) to make sure the highest and lowest allowable inputs produce proper output.
The number zero should be tested when numerical data is to be input.
Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems.
Crash testing should be performed to see what it takes to bring the system down.
Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in the software maintenance.
Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing.
Finite state machine models can be used as a guide to design functional tests.
According to Beizer, the following is a general order by which tests should be designed:
1. Clean tests against requirements.
2. Additional structural tests for branch coverage, as needed.
3. Additional tests for data-flow coverage as needed.
4. Domain tests not covered by the above.
5. Special techniques as appropriate: syntax, loop, state, etc.
6. Any dirty tests not covered by the above.
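The sketch below illustrates how the first few guidelines translate into concrete test inputs. The validQuantity routine and its 1 to 100 range are hypothetical, invented only for illustration; the point is the selection of boundary, zero, out-of-range and random in-range values.

```java
import java.util.Random;

public class BoundaryAndRandomInputDemo {

    // Hypothetical unit under test: accepts quantities from 1 to 100 inclusive.
    static boolean validQuantity(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }

    public static void main(String[] args) {
        // Boundary cases: top and bottom of the specified range, plus values just outside.
        check(validQuantity(1), "lower boundary (1) should be accepted");
        check(validQuantity(100), "upper boundary (100) should be accepted");
        check(!validQuantity(0), "just below the range (0) should be rejected");
        check(!validQuantity(101), "just above the range (101) should be rejected");

        // The number zero is tested explicitly whenever numerical data is input.
        check(!validQuantity(0), "zero should be rejected for this range");

        // Data well outside the specified input range checks robustness.
        check(!validQuantity(-500), "large negative value should be rejected");
        check(!validQuantity(Integer.MAX_VALUE), "extreme value should be rejected");

        // Randomly generated inputs restricted to the declared test range (1..100).
        Random random = new Random(42); // fixed seed so the run is repeatable
        for (int i = 0; i < 5; i++) {
            int input = 1 + random.nextInt(100);
            check(validQuantity(input), "random in-range value " + input + " should be accepted");
        }
        System.out.println("All black-box input checks passed.");
    }

    private static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }
}
```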


2.4 Black Box Testing Methods


2.4.1 Graph-Based Testing Methods
Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph.
Transaction flow testing (nodes represent steps in some transaction and links represent logical connections between steps that need to be validated)
Finite state modeling (nodes represent user observable states of the software and links represent transitions between states)
Data flow modeling (nodes are data objects and links are transformations from one data object to another)
Timing modeling (nodes are program objects and links are sequential connections between these objects; link weights are required execution times)

2.4.2 Equivalence Partitioning
Black-box technique that divides the input domain into classes of data from which test cases can be derived.
An ideal test case uncovers a class of errors that might require many arbitrary test cases to be executed before a general error is observed.
Equivalence class guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid equivalence class are defined.
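A small, hypothetical illustration of guideline 1: the acceptAge routine and its 18 to 60 range are invented for the example. The range yields one valid and two invalid equivalence classes, and a single representative value is drawn from each class.

```java
public class EquivalencePartitioningDemo {

    // Hypothetical unit under test: an applicant age is acceptable from 18 to 60 inclusive.
    static boolean acceptAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Guideline 1: the input specifies a range, so we get one valid and two invalid classes.
        int validRepresentative = 35;  // from the valid class [18..60]
        int invalidBelow = 10;         // from the invalid class (< 18)
        int invalidAbove = 75;         // from the invalid class (> 60)

        // One representative per class stands in for every other value in that class.
        System.out.println("valid class   (18..60): accept(35) = " + acceptAge(validRepresentative));
        System.out.println("invalid class (< 18)  : accept(10) = " + acceptAge(invalidBelow));
        System.out.println("invalid class (> 60)  : accept(75) = " + acceptAge(invalidAbove));
    }
}
```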

2.4.3 Boundary Value Analysis


Black-box technique that focuses on the boundaries of the input domain rather than its center.
BVA guidelines:
1. If an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b.
2. If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum values.
3. Apply guidelines 1 and 2 to output conditions; test cases should be designed to produce the minimum and maximum output reports.
4. If internal program data structures have boundaries (e.g. size limitations), be certain to test the boundaries.
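A minimal sketch of guideline 1, assuming a hypothetical input field that accepts values between a = 8 and b = 16 (both invented for illustration). Boundary value analysis selects the boundaries themselves plus the values immediately below and above them, and the expected result for each input comes from the specification.

```java
import java.util.Arrays;
import java.util.List;

public class BoundaryValueAnalysisDemo {

    // Hypothetical unit under test: accepts lengths from 8 to 16 inclusive.
    static boolean acceptLength(int length) {
        return length >= 8 && length <= 16;
    }

    public static void main(String[] args) {
        int a = 8;   // lower bound of the specified range
        int b = 16;  // upper bound of the specified range

        // Guideline 1: test a and b, plus the values just below and just above each.
        List<Integer> boundaryInputs = Arrays.asList(a - 1, a, a + 1, b - 1, b, b + 1);

        for (int input : boundaryInputs) {
            boolean expected = input >= a && input <= b; // expected result comes from the specification
            boolean actual = acceptLength(input);
            System.out.printf("input=%2d expected=%-5b actual=%-5b %s%n",
                    input, expected, actual, expected == actual ? "PASS" : "FAIL");
        }
    }
}
```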

2.4.4 Comparison Testing


Black-box testing for safety critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications.
Often equivalence class partitioning is used to develop a common set of test cases for each implementation.

2.4.5 Orthogonal Array Testing


Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage.
Focus is on categories of faulty logic likely to be present in the software component (without examining the code).
Priorities for assessing tests using an orthogonal array:
1. Detect and isolate all single mode faults
2. Detect all double mode faults
3. Multimode faults
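As an illustration of how few runs an orthogonal array needs, the sketch below uses the standard L4(2^3) array with three hypothetical two-level factors (the factor names are invented for the example). Four runs, instead of the eight needed for exhaustive combination, still cover every pair of factor levels at least once.

```java
public class OrthogonalArrayDemo {

    public static void main(String[] args) {
        // Hypothetical factors, each with two levels:
        String[] browser  = {"IE", "Netscape"};
        String[] protocol = {"HTTP", "HTTPS"};
        String[] locale   = {"EN", "DE"};

        // The standard L4(2^3) orthogonal array: every pair of factor levels
        // appears together in at least one of the four runs.
        int[][] l4 = {
                {0, 0, 0},
                {0, 1, 1},
                {1, 0, 1},
                {1, 1, 0}
        };

        for (int run = 0; run < l4.length; run++) {
            System.out.printf("Run %d: browser=%s, protocol=%s, locale=%s%n",
                    run + 1,
                    browser[l4[run][0]],
                    protocol[l4[run][1]],
                    locale[l4[run][2]]);
        }
    }
}
```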

2.4.6 Specialized Testing


Graphical user interfaces
Client/server architectures
Documentation and help facilities
Real-time systems:
1. Task testing (test each time dependent task independently)
2. Behavioral testing (simulate system response to external events)
3. Intertask testing (check communications errors among tasks)
4. System testing (check interaction of integrated system software and hardware)

2.4.7 Advantages of Black Box Testing


More effective on larger units of code than glass box testing.
Tester needs no knowledge of implementation, including specific programming languages.
Tester and programmer are independent of each other.
Tests are done from a user's point of view.
Will help to expose any ambiguities or inconsistencies in the specifications.
Test cases can be designed as soon as the specifications are complete.

2.4.8 Disadvantages of Black Box Testing

Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
Without clear and concise specifications, test cases are hard to design.
There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
May leave many program paths untested.
Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone).
Most testing related research has been directed toward glass box testing.

2.5 Black Box (Vs) White Box


An easy way to start up a debate in a software testing forum is to ask the difference between black box and white box testing. These terms are commonly used, yet everyone seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a black box with lights, switches, and dials on the outside. You must test it without opening it up, and you can't see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the "blackness" is blocking.
An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply probes internally and maybe even disassemble parts of it. By analogy, this is called white box testing.
To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:
1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)

Let's use this system to understand and clarify the characteristics of black box and white box testing.
People: Who does the testing?
Some people know how software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called "black box" testing. Developer testing is called "white box" testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested?
If we draw the box around the system as a whole, "black box" testing becomes another name for system testing. And testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria. Both are supported by extensive literature and commercial tools. Requirements-based testing could be called "black box" because it makes sure that all the customer requirements have been verified. Code-based testing is often called "white box" because it makes sure that all the code (the statements, paths, or decisions) is exercised.
Risks: Why are you testing?

Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as "white box". Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk, and could be termed "black box".
Activities: How do you test?

A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on external functional definition, it is often called "black box," while structural testing, based on the code internals, is called "white box." Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) with black box testing, and maps code inspection (static testing) with white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and tools that facilitate applying inputs and capturing outputs, most notably GUI capture replay tools, as black box tools. Testing is then categorized based on the types of tools used.
Evaluation: How do you know if you've found a bug?

There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program.
White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product both black and white box testing are required.
White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and is much more laborious in the determination of suitable input data and the determination of whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flowgraphs and determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. The consequences of test failure at this stage may be very expensive. A failure of a white box test may result in a change which requires all black box testing to be repeated and the re-determination of the white box paths.
To conclude, apart from the above described analytical methods of both glass and black box testing, there are further constructive means to guarantee high quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and last but not least the involvement of users in both software development and testing procedures.
Summary:

Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture replay automation (activities). White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).


2.6 WHITE BOX TESTING


Software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing since white boxes are considered opaque and do not really permit visibility into the code.
Synonyms for white box testing:
Glass Box testing
Structural testing
Clear Box testing
Open Box Testing

Types of White Box testing
A typical rollout of a product is shown in figure 1 below.

The purpose of white box testing
Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
Provide a complementary function to black box testing.
Perform complete coverage at the component level.
Improve quality by optimizing performance.

Practices:
This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:
1. The allocation of resources to perform class and method analysis and to document and review the same.
2. Developing a test harness made up of stubs, drivers and test object libraries (see the sketch after this list).
3. Development and use of standard procedures, naming conventions and libraries.
4. Establishment and maintenance of regression test suites and procedures.
5. Allocation of resources to design, document and manage a test history library.
6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities.
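A minimal sketch of item 2, a test harness built from a stub and a driver. The InvoiceCalculator, TaxRateService and FixedRateStub names are hypothetical, invented only to show the roles: the stub replaces a real collaborator so the unit can be exercised in isolation, and the driver calls the unit and checks the result.

```java
public class TestHarnessDemo {

    // Hypothetical collaborator of the unit under test.
    interface TaxRateService {
        double rateFor(String region);
    }

    // Hypothetical unit under test.
    static class InvoiceCalculator {
        private final TaxRateService taxRates;

        InvoiceCalculator(TaxRateService taxRates) {
            this.taxRates = taxRates;
        }

        double totalWithTax(double net, String region) {
            return net * (1.0 + taxRates.rateFor(region));
        }
    }

    // Stub: stands in for the real tax service so the unit can be tested in isolation.
    static class FixedRateStub implements TaxRateService {
        public double rateFor(String region) {
            return 0.10; // canned answer, independent of any real back end
        }
    }

    // Driver: exercises the unit under test and checks the result against the expectation.
    public static void main(String[] args) {
        InvoiceCalculator calculator = new InvoiceCalculator(new FixedRateStub());
        double total = calculator.totalWithTax(100.0, "ANY");
        if (Math.abs(total - 110.0) > 1e-9) {
            throw new AssertionError("expected 110.0 but got " + total);
        }
        System.out.println("Harness check passed: total = " + total);
    }
}
```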

1 Code Coverage Analysis


1.1 Basis Path Testing
A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. These are test cases that exercise the basis set and will execute every statement at least once.

1.1.1 Flow Graph Notation


A notation for representing control flow, similar to flow charts and UML activity diagrams.

1.1.2 Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests to ensure that each statement is executed at least once. An independent path is any path through a program that introduces at least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements.
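A small, hypothetical sketch of the measure (the routine below is invented for illustration). The method contains two decision points, an if and a while, so its cyclomatic complexity is 2 + 1 = 3, and three test cases are enough to exercise a basis set of independent paths.

```java
public class CyclomaticComplexityDemo {

    // Hypothetical routine used only to illustrate the measure.
    // Decision points: one "if" and one "while", so V(G) = decisions + 1 = 3.
    static int process(int x) {
        int result = 0;
        if (x < 0) {          // decision 1
            x = -x;
        }
        while (x > 0) {       // decision 2
            result += x % 10;
            x /= 10;
        }
        return result;
    }

    public static void main(String[] args) {
        // A basis set of three independent paths, one test case per path:
        // Path 1: if taken, while body entered          -> negative input, e.g. -25
        // Path 2: if skipped, while body entered        -> positive input, e.g. 25
        // Path 3: if skipped, while body never entered  -> zero
        System.out.println(process(-25)); // exercises path 1, prints 7
        System.out.println(process(25));  // exercises path 2, prints 7
        System.out.println(process(0));   // exercises path 3, prints 0
    }
}
```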

1.2 Control Structure testing


1.2.1 Conditions Testing
Condition testing aims to exercise all logical conditions in a program module. Conditions may take the following forms:
Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator.
Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
Boolean expression: a condition without relational expressions.

1.2.2 Data Flow Testing
Selects test paths according to the location of definitions and uses of variables.

1.2.3 Loop Testing
Loops are fundamental to many algorithms. Loops can be defined as simple, concatenated, nested, and unstructured. Examples:
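The original figure of loop structures is not reproduced here; as a rough substitute, the hypothetical routines below show a simple loop, a nested loop and concatenated loops, three of the structured categories named above.

```java
public class LoopTypesDemo {

    // Simple loop: a single loop, typically tested with zero, one, a typical
    // number, and the maximum number of passes.
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    // Nested loop: the inner loop runs in full for every pass of the outer loop;
    // loop testing usually starts at the innermost loop and works outward.
    static int cells(int rows, int cols) {
        int count = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                count++;
            }
        }
        return count;
    }

    // Concatenated loops: two independent loops in sequence; if they are truly
    // independent they can be tested as two simple loops.
    static int sumThenCountPositives(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        int positives = 0;
        for (int v : values) {
            if (v > 0) {
                positives++;
            }
        }
        return total + positives;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[] {1, 2, 3}));                  // 6
        System.out.println(cells(3, 4));                               // 12
        System.out.println(sumThenCountPositives(new int[] {1, -2, 3})); // 4
    }
}
```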

Note that unstructured loops are not to be tested; rather, they are redesigned.

2 Design by Contract (DbC)

DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
Conditions that the client must meet before a method is invoked.
Conditions that a method must meet after it executes.
Assertions that a method must satisfy at specific points of its execution.
Tools that check DbC contracts at runtime, such as JContract [http://www.parasoft.com/products/jtract/index.htm], are used to perform this function. (A small illustrative sketch of the idea follows the Profiling note below.)

3 Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that are consuming the majority of the CPU time so that problems may be tracked down to improve performance. These include the use of the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third party tools such as JaViz [http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.
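Returning to the Design by Contract idea above: a minimal sketch that approximates contracts with plain Java assert statements rather than any particular contract tool. The Account class and its rules are invented for illustration; a precondition, a postcondition and an invariant-style check are shown.

```java
public class ContractSketch {

    // Hypothetical class used only to illustrate contract-style checks.
    static class Account {
        private int balance; // invariant: balance is never negative

        Account(int openingBalance) {
            assert openingBalance >= 0 : "precondition: opening balance must not be negative";
            this.balance = openingBalance;
            assert invariantHolds() : "invariant violated after construction";
        }

        // Precondition: amount > 0 and amount <= balance.
        // Postcondition: balance decreases by exactly amount.
        int withdraw(int amount) {
            assert amount > 0 && amount <= balance : "precondition violated";
            int before = balance;
            balance -= amount;
            assert balance == before - amount : "postcondition violated";
            assert invariantHolds() : "invariant violated after withdraw";
            return balance;
        }

        private boolean invariantHolds() {
            return balance >= 0;
        }
    }

    public static void main(String[] args) {
        // Run with "java -ea ContractSketch" so that assertions are enabled.
        Account account = new Account(100);
        System.out.println("balance after withdraw: " + account.withdraw(40)); // 60
    }
}
```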

" Error &andling 43ception and error handling is chec#ed thoroughly are simulating partial and complete fail-over by operating on error causing test vectors. 6roper error recovery) notification and logging are chec#ed against references to validate program design. + Transactions Systems that employ transaction) local or distributed) may be validated to ensure that !C&8 A!tomicity) Consistency) &solation) 8urabilityB. 4ach of the individual parameters is tested individually against a reference data set. Transactions are chec#ed thoroughly for partialKcomplete commits and rollbac#s encompassing databases and other F! compliant transaction processors. (dvantages of #hite Bo. Testing ;orces test developer to reason carefully about implementation !ppro3imate the partitioning done by e3ecution e(uivalence <eveals errors in HhiddenH code ?eneficent side-effects isadvantages of #hite Bo. Testing 43pensive Cases omitted in the code could be missed out.


3 GUI Testing

What is GUI Testing? GUI is the abbreviation for Graphic User Interface. It is essential that any application be user-friendly. The end user should be comfortable while using all the components on screen, and the components should also perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved. The following is a set of guidelines to ensure effective GUI testing; it can even be used as a checklist while testing a product / application.

3.1 Section 1 - Windows Compliance Testing


3.1.1 Application
Start the application by double clicking on its ICON. The loading message should show the application name, version number, and a bigger pictorial representation of the icon.
No login is necessary.
The main window of the application should have the same caption as the caption of the icon in Program Manager.
Closing the application should result in an "Are you Sure?" message box.
Attempt to start the application twice. This should not be allowed - you should be returned to the main window.
Try to start the application twice as it is loading.
On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass, then some enquiry-in-progress message should be displayed.
All screens should have a Help button, i.e. the F1 key should work the same.
If the window has a Minimize button, click it. The window should return to an icon on the bottom of the screen. This icon should correspond to the original icon under Program Manager. Double click the icon to return the window to its original size.
The window caption for every application should have the name of the application and the window name - especially the error messages. These should be checked for spelling, English and clarity, especially on the top of the screen. Check that the title of the window makes sense.
If the screen has a Control menu, then use all un-grayed options.
Check all text on the window for spelling, tense and grammar.
Use TAB to move focus around the window. Use SHIFT+TAB to move focus backwards. Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus - indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.
The text in the Micro Help line should change - check for spelling, clarity and non-updateable fields.
If a field is disabled (grayed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every grayed control.


Never-updateable fields should be displayed with black text on a gray background with a black label.
All text should be left justified, followed by a colon tight to it.
In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status.
List boxes always have a white background with black text whether they are disabled or not. All others are gray.
In general, double-clicking is not essential. In general, everything can be done using both the mouse and the keyboard.
All tab buttons should have a distinct letter.

3.1.2 Text Boxes


Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar. If it doesn't, then the text in the box should be gray or non-updateable. Refer to the previous page.
Enter text into the box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital W's.
Enter invalid characters - letters in amount fields, and strange characters like +, -, * etc. in all fields.
SHIFT and arrow keys should select characters. Selection should also be possible with the mouse. Double click should select all text in the box.
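For web front-ends, several of these text-box checks can be automated. A minimal sketch using Selenium WebDriver (the page URL, the field id and the expected maximum width are all hypothetical):

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.Assert.*;

    public class TextBoxChecksTest {
        @Test
        public void rejectsLettersAndOverflowInAmountField() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://example.com/payment");                 // hypothetical page
                WebElement amount = driver.findElement(By.id("amount"));  // hypothetical field id
                amount.sendKeys("abc");                                   // letters in an amount field
                assertEquals("", amount.getAttribute("value"));           // expect the input to be rejected
                amount.sendKeys("123456789012345678901234567890");        // try to overflow the field
                assertTrue(amount.getAttribute("value").length() <= 15);  // assumed maximum width
            } finally {
                driver.quit();
            }
        }
    }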

3.1.3 Option (Radio) Buttons


Left and Right arrows should move the 'ON' selection. So should Up and Down. Select with the mouse by clicking.

3.1." Chec- Bo.es


Clicking with the mouse on the box, or on the text, should SET/UNSET the box. SPACE should do the same.

3.1.5 Command Buttons


If a command button leads to another screen, and if the user can enter or change details on the other screen, then the text on the button should be followed by three dots.
All buttons except for OK and Cancel should have a letter access to them. This is indicated by a letter underlined in the button text. Pressing ALT+letter should activate the button. Make sure there is no duplication.
Click each button once with the mouse - this should activate it. Tab to each button and press SPACE - this should activate it. Tab to each button and press RETURN - this should activate it. The above are VERY IMPORTANT and should be done for EVERY command button.
Tab to another type of control (not a command button). One button on the screen should be the default (indicated by a thick black border). Pressing Return in any non-command-button control should activate the default button.
If there is a Cancel button on the screen, then pressing <Esc> should activate it.
If pressing the command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers where Yes results in the completion of the action.

3.1.6 Drop Down List Boxes

Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type text in the box. Pressing a letter should bring you to the first item in the list starting with that letter. Pressing 'Ctrl - F4' should open / drop down the list box.

Spacing should be compatible with the existing windows spacing (Word etc.). Items should be in alphabetical order with the exception of blank/none, which is at the top or the bottom of the list box. A drop down with an item selected should display the list with the selected item on the top. Make sure only one space appears; there shouldn't be a blank line at the bottom.

3.1.7 Combo Boxes


Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

3.1.8 List Boxes


Should allow a single selection to be chosen, by clicking with the mouse or using the Up and Down arrow keys. Pressing a letter should take you to the first item in the list starting with that letter. If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box and then clicking the command button. Force the scroll bar to appear and make sure all the data can be seen in the box.

3.2 Section 2 - Screen Validation Checklist


3.2.1 Aesthetic Conditions:
1. Is the general screen background the correct color?
2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lowercase consistently?
19. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.


3.2.2 Validation Conditions:


1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and, if so, are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable.
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size.
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional, then check whether null values are allowed in this field.)

3.2.3 Navigation Conditions:


1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal? (i.e.) Is the user prevented from accessing other functions when this screen is active, and is this correct?
7. Can a number of instances of this screen be opened at the same time, and is this correct?

3.2." 'sa0ility Conditions?


1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate shortcut keys?
4. Do the shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated, and should they have?
6. Does the Tab order specified on the screen go in sequence from top left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?

10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs, does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
16. Do all the fields' edit boxes indicate the number of characters they will hold by their length? e.g. a 30 character field should be a lot longer.

3.2.5 Data Integrity Conditions:

1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters.
3. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields.
5. If numeric fields accept negative values, can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C, then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database, check that each value gets saved fully to the database. (i.e.) Beware of truncation (of strings) and rounding of numeric values.

3.2.6 Modes (Editable / Read-only) Conditions:


1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

3.2.7 General Conditions:


1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short.

6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - (i.e.) make sure they don't work on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and the same font and font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
20. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
26. Assure that the Tab key sequence which traverses the screens does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl + F6 opens the next tab within a tabbed window.
32. Shift + Ctrl + F6 opens the previous tab within a tabbed window.
33. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.

34. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
35. Tabbing will go onto the next editable field in the window.
36. Banner style, size and display exactly the same as existing windows.
37. If 8 or fewer options in a list box, display all options on open of the list box - there should be no need to scroll.
38. Errors on continue will cause the user to be returned to the tab and the focus should be on the field causing the error. (i.e. the tab is opened, highlighting the field with the error on it.)
39. Pressing continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
40. On open of a tab, focus will be on the first editable field.
41. All fonts to be the same.
42. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
43. Microhelp text for every enabled field and button.
44. Ensure all fields are disabled in read-only mode.
45. Progress messages on load of tabbed screens.
46. Return operates continue.
47. If retrieve on load of a tabbed window fails, the window should not open.

3.3 Specific Field Tests


3.3.1 Date Field Checks
1. Assure that leap years are validated correctly and do not cause errors/miscalculations.
2. Assure that month codes 00 and 13 are validated correctly and do not cause errors/miscalculations.
3. Assure that 00 and 13 are reported as errors.
4. Assure that day values 00 and 32 are validated correctly and do not cause errors/miscalculations.
5. Assure that Feb. 28, 29, 30 are validated correctly and do not cause errors/miscalculations.
6. Assure that Feb. 30 is reported as an error.
7. Assure that century change is validated correctly and does not cause errors/miscalculations.
8. Assure that out-of-cycle dates are validated correctly and do not cause errors/miscalculations.
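Several of these checks map directly onto java.time, which rejects impossible dates; a minimal JUnit sketch:

    import java.time.DateTimeException;
    import java.time.LocalDate;
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class DateFieldChecksTest {
        @Test
        public void acceptsLeapDayOnLeapYearsOnly() {
            assertEquals(29, LocalDate.of(2024, 2, 29).getDayOfMonth()); // 2024 is a leap year
            try {
                LocalDate.of(2023, 2, 29);                               // not a leap year
                fail("Feb 29 should be rejected in a non-leap year");
            } catch (DateTimeException expected) {
                // expected: the invalid date is reported as an error
            }
        }

        @Test
        public void rejectsMonth13AndDay32() {
            try { LocalDate.of(2023, 13, 1); fail("month 13 should be rejected"); }
            catch (DateTimeException expected) { }
            try { LocalDate.of(2023, 1, 32); fail("day 32 should be rejected"); }
            catch (DateTimeException expected) { }
        }
    }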

3.3.2 Numeric Fields


1. Assure that lowest and highest values are handled correctly.
2. Assure that invalid values are logged and reported.
3. Assure that valid values are handled by the correct procedure.
4. Assure that numeric fields with a blank in position 1 are processed or reported as an error.
5. Assure that fields with a blank in the last position are processed or reported as an error.
6. Assure that both + and - values are correctly processed.
7. Assure that division by zero does not occur.

8. Include the value zero in all calculations.
9. Include at least one in-range value.
10. Include maximum and minimum range values.
11. Include out-of-range values above the maximum and below the minimum.
12. Assure that upper and lower values in ranges are handled correctly.
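A boundary-value sketch for a numeric field with an assumed valid range of 1 to 100 (the validator itself is hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class NumericBoundaryTest {
        // Hypothetical validator: accepts values in the inclusive range 1..100.
        static boolean isValidQuantity(int value) {
            return value >= 1 && value <= 100;
        }

        @Test
        public void coversBoundariesAndOutOfRangeValues() {
            assertFalse(isValidQuantity(0));     // just below the minimum
            assertTrue(isValidQuantity(1));      // minimum
            assertTrue(isValidQuantity(50));     // in-range value
            assertTrue(isValidQuantity(100));    // maximum
            assertFalse(isValidQuantity(101));   // just above the maximum
            assertFalse(isValidQuantity(-5));    // negative value
        }
    }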

3.3.3 Alpha Field Checks


1. Use blank and non-blank data.
2. Include lowest and highest values.
3. Include invalid characters and symbols.
4. Include valid characters.
5. Include data items with the first position blank.
6. Include data items with the last position blank.

3." *alidation Testing / Standard (ctions


3.".1 E.a!ples of Standard (ctions / Su0stitute your specific co!!ands
'dd VieB Change *elete ContinCe - 8i.e. continCe saDing changes or additions9 'dd VieB Change *elete Cancel - 8i.e. aEandon changes or additions9 #ill each field - Valid data #ill each field - !nDalid data *ifferent ChecF .oG 7 (adio .oG comEinations %croll Lists 7 *rop *oBn List .oGes $elp #ill Lists and %croll TaE TaE %eHCence %hift TaE

3.".2 Shortcut -eys 1 &ot @eys


Note: The following keys are used in some windows applications, and are included as a guide.

Key combinations and their effects:

F1: Help. SHIFT+F1: Enter Help Mode. CTRL+F1: Help.
F4: CTRL+F4 closes the child window; ALT+F4 closes the document / application.
F6: CTRL+F6 moves to the next open document or child window (adding SHIFT reverses the order of movement).
F8: Toggles extend mode, if supported; SHIFT+F8 toggles Add mode, if supported.
F10: Toggles menu bar activation.
TAB: Moves to the next active/editable field; SHIFT+TAB moves to the previous active/editable field; ALT+TAB switches to the previously used application (holding down the ALT key displays all open applications).
ALT (on its own): Puts focus on the first menu command (e.g. "File").
F2, F3, F5, F7, F9, F11 and F12: no assignment (N/A) in this guide.


3.".3 Control Shortcut @eys


Key          Function
CTRL + Z     Undo
CTRL + X     Cut
CTRL + C     Copy
CTRL + V     Paste
CTRL + N     New
CTRL + O     Open
CTRL + P     Print
CTRL + S     Save
CTRL + B     Bold*
CTRL + I     Italic*
CTRL + U     Underline*

* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.


" $egression Testing ".1 #hat is regression Testing


<egression testing is the process of testing changes to computer programs to ma#e sure that the older programming still wor#s with the new changes. <egression testing is a normal part of the program development process. Test department coders develop code test scenarios and e3ercises that will test new units of code after they have been written. ?efore a new version of a software product is released) the old test cases are run against the new version to ma#e sure that all the old capabilities still wor#. The reason they might not wor# because changing or adding new code to a program can easily introduce errors into code that is not intended to be changed. that any bugs have been fi3ed and that no other previously wor#ing functions have failed as a result of the reparations and that newly added features have not created problems with previous versions of the software. !lso referred to as verification testing

The selective retesting of a software system that has been modified to ensure

<egression testing is initiated after a programmer has attempted to fi3 a


recogniCed problem or has added source code to a program that may have inadvertently introduced errors. &t is a (uality control measure to ensure that the newly modified code still complies with its specified re(uirements and that unmodified code has not been affected by the maintenance activity.


".2 Test E.ecution


Test execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.

".2.1 Create Test Cycles


During this stage you decide the subset of tests from your test database that you want to execute. Usually you do not run all the tests at once. At different stages of the quality assurance process, you need to execute different tests in order to address specific goals. A related group of tests is called a test cycle, and it can include both manual and automated tests.

Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can run the cycle each time a new build is ready, to determine the application's stability before beginning more rigorous testing.

Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth.

To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified. Following are examples of some general categories of test cycles to consider:

sanity cycle - checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
normal cycle - tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
advanced cycle - tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth) and also test advanced options in the application (depth).
regression cycle - tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.
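One way to realize such cycles in an automated suite is to tag tests and select them per cycle; a minimal sketch using JUnit 4 categories (the marker interfaces and test names are hypothetical):

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    public class CycleExample {
        public interface SanityCycle {}      // marker for the sanity cycle
        public interface RegressionCycle {}  // marker for the regression cycle

        public static class LoginTests {
            @Test @Category(SanityCycle.class)
            public void applicationStarts() { /* basic positive check */ }

            @Test @Category({SanityCycle.class, RegressionCycle.class})
            public void loginWithValidUser() { /* runs in both cycles */ }
        }

        // Runs only the tests tagged for the sanity cycle.
        @RunWith(Categories.class)
        @IncludeCategory(SanityCycle.class)
        @SuiteClasses(LoginTests.class)
        public static class SanitySuite {}
    }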

".2.2 $un Test Cycles 8(uto!ated A 2anual Tests9


"nce you have created cycles that cover your testing ob+ectives) you begin e3ecuting the tests in the cycle. Lou perform manual tests using the test steps. Testing Tools e3ecutes
Performance Testing Process & Methodology -0 Proprietary & Confidential -

automated tests for you. ! test cycle is complete only when all tests-automatic and manual-have been run. $ith .anual Test 43ecution you follow the instructions in the test steps of each test. Lou use the application) enter input) compare the application output with the e3pected output) and log the results. ;or each test step you assign either pass or fail status. 8uring !utomated Test 43ecution you create a batch of tests and launch the entire batch at once. Testing Tools runs the tests one at a time. &t then imports results) providing outcome summaries for each test.

".2.3 (naly5e Test $esults


After every test run, analyze and validate the test results: identify all the failed steps in the tests and determine whether a bug has been detected or whether the expected result needs to be updated.

".3 Change $e%uest


".3.1 Initiating a Change $e%uest
! user or developer wants to suggest a modification that would improve an e3isting application) notices a problem with an application) or wants to recommend an enhancement. !ny ma+or or minor re(uest is considered a problem with an application and will be entered as a change re(uest.

".3.2 Type of Change $e%uest


Bug - the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field).
Change - a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find).
Enhancement - new functionality or an item added to the application (for example, a new report, a new field, or a new button).

".3.3 )riority for the re%uest


Low - the application works, but this would make the function easier or more user friendly.
High - the application works, but this is necessary to perform a job.
Critical - the application does not work, job functions are impaired and there is no work-around. This also applies to any Section 508 infraction.
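If change requests are tracked programmatically, the two classifications above map naturally onto enumerations; a minimal sketch (class, field and rule are hypothetical):

    public class ChangeRequest {
        // Classification of the request, as described above.
        public enum Type { BUG, CHANGE, ENHANCEMENT }

        // Business impact of the request.
        public enum Priority { LOW, HIGH, CRITICAL }

        private final Type type;
        private final Priority priority;
        private final String description;

        public ChangeRequest(Type type, Priority priority, String description) {
            this.type = type;
            this.priority = priority;
            this.description = description;
        }

        public boolean blocksRelease() {
            // Example rule: critical problems must be resolved before go-live.
            return priority == Priority.CRITICAL;
        }
    }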

"." Bug Trac-ing


Locating and repairing software bugs is an essential part of software development.

Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process. Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.

Bug tracking involves two main stages: reporting and tracking.

".".1 $eport Bugs


"nce you e3ecute the manual and automated tests in a cycle) you report the bugs Aor defectsB that you detected. The bugs are stored in a database so that you can manage them and analyCe the status of your application. $hen you report a bug) you record all the information necessary to reproduce and fi3 it. Lou also ma#e sure that the X! and development personnel involved in fi3ing the bug are notified.

".".2 Trac- and (naly5e Bugs


The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed. First you report New bugs to the database and provide all necessary information to reproduce, fix, and follow up the bug. The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team. Software developers fix the Open bugs and assign them the status Fixed. QA personnel test a new build of the application. If a bug does not reoccur, it is Closed. If a bug is detected again, it is reopened.
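The lifecycle just described can be captured as a small state model; a minimal sketch (the status names follow the paragraph above, the transition rule is illustrative):

    import java.util.EnumSet;
    import java.util.Set;

    public enum BugStatus {
        NEW, OPEN, FIXED, CLOSED, REOPENED;

        // Allowed transitions, mirroring the report -> open -> fixed -> closed/reopened flow.
        public Set<BugStatus> nextStates() {
            switch (this) {
                case NEW:      return EnumSet.of(OPEN);
                case OPEN:     return EnumSet.of(FIXED);
                case FIXED:    return EnumSet.of(CLOSED, REOPENED);
                case REOPENED: return EnumSet.of(FIXED);
                default:       return EnumSet.noneOf(BugStatus.class);
            }
        }
    }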

Communication is an essential part of bug tracking; all members of the development and quality assurance team must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed. The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.

".+ Tracea0ility 2atri.


A traceability matrix is created by associating requirements with the products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. Below is a simple traceability matrix structure; there can be more things included in a traceability matrix than shown here. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.

Traceability ensures completeness: that all lower level requirements derive from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used in managing change and provides the basis for test planning.

SAMPLE TRACEABILITY MATRIX

A traceability matrix is a report from the requirements database or repository. The examples below show traceability between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S".
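As a hypothetical illustration of such a matrix (all identifiers other than S12, which the next sentence refers to, are invented):

    System requirement    Traces to user requirement
    S1                    U1
    S2                    U1
    S3                    U2
    S12                   (no parent user requirement)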

Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.


In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving the report(s). Determine their information needs and document the information that will be associated with the requirements when you set up your requirements database or repository.


5 Phases of Testing

5.1 Introduction

The primary objective of the testing effort is to determine conformance to the requirements specified in the contracted documents. The integration of this code with the internal code is an important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and they can be used in any stage that tests the system as a whole (system testing, acceptance testing, unit testing, installation, etc.).

5.2 Types and Phases of Testing


SDLC Document -> QA Document
Software Requirement Specification -> Requirement Checklist
Design Document -> Design Checklist
Functional Specification -> Functional Checklist
Design Document & Functional Specs -> Unit Test Case Documents
Design Document & Functional Specs -> Integration Test Case Documents
Design Document & Functional Specs -> System Test Case Documents
Unit / System / Integration Test Case Documents -> Regression Test Case Documents
Functional Specs, Performance Criteria -> Performance Test Case Documents
Software Requirement Specification, Unit / System / Integration / Regression / Performance Test Case Documents -> User Acceptance Test Case Documents


5.3 The 'V' Model

(Figure: the 'V' model. Each development phase on the left of the 'V' is paired with a test phase on the right: Requirements with Acceptance Testing, Specification with System Testing, Architecture with Integration Testing, and Detailed Design with Unit Testing, with Coding at the base of the 'V'.)


(Figure: the same 'V' annotated with work products. Requirement Study, the Software Requirement Specification, the Functional Specification Document, Architecture Design, the Detailed Design Document and Coding each feed the corresponding checklist and the Unit, Integration, System, Regression, Performance and User Acceptance test case documents and scenarios, as listed in the table above.)


(Figure: the 'V' with reviews and regression rounds. Requirements, Specification, Architecture, Detailed Design and Code are checked by a Requirements Review, Specification Review, Architecture Review, Design Review and Code Walkthrough respectively; Unit, Integration and System Testing are followed by Regression Rounds 1, 2 and 3 and by Performance Testing.)


6 Integration Testing

One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has traditionally been limited to ''black box'' techniques.

Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest level subsystems into the complete system. To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then, integration testing concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.

Combining module testing with bottom-up integration.


6.1 Generalization of Module Testing Criteria

Module testing criteria can often be generalized in several possible ways to support integration testing. As discussed in the previous subsection, the most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing. Applying it to each phase of a multi-phase integration strategy, for example, leads to an excessive amount of redundant testing. More useful generalizations adapt the module testing criterion to focus on interactions between modules rather than attempting to test all of the details of each module's implementation in an integration context. The statement coverage module testing criterion, in which each statement is required to be exercised during module testing, can be generalized to require each module call statement to be exercised during integration testing. Although the specifics of the generalization of structured testing are more detailed, the approach is the same. Since structured testing at the module level requires that all the decision logic in a module's control flow graph be tested independently, the appropriate generalization to the integration level requires that just the decision logic involved with calls to other modules be tested independently.

Module design complexity

Rather than testing all decision outcomes within a module independently, structured testing at the integration level focuses on the decision outcomes that are involved with module calls. The design reduction technique helps identify those decision outcomes, so that it is possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls. The sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves the cyclomatic complexity unchanged. However, it does simplify the graph so that the other rules can be applied. The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the ''top'' of the loop from the ''bottom''. For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one. Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.


Incremental integration

Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.

To extend statement coverage to support incremental integration, it is required that all module call statements from one component into a different component be exercised at each integration stage. To form a completely flexible "statement testing" criterion, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort required to cover each statement of the system at each integration phase.

Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls. Figure 7-7 illustrates the structured testing approach to incremental integration. Modules A and C have been previously integrated, as have modules B and D. It would take three tests to integrate this system in a single phase. However, since the design predicate decision to call module D from module B has been tested in a previous phase, only two additional tests are required to complete the integration testing. Modules B and D are removed from consideration because they do not contain cross-component calls, the component module design complexity of module A is 1, and the component module design complexity of module C is 2.


7 Acceptance Testing

7.1 Introduction - Acceptance Testing

In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of software testing are: Component, Interface, System, Acceptance, Release.

Acceptance testing checks the system against the "Requirements". It is similar to systems testing in that the whole system is checked, but the important difference is the change in focus: systems testing checks that the system that was specified has been delivered; acceptance testing checks that the system delivers what was requested. The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

The test procedures lead to formal 'acceptance' of new or changed systems. User Acceptance Testing is a critical phase of any 'systems' project and requires significant participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.

7.2 Factors Influencing Acceptance Testing


The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification to which the system should conform.

As in any system, though, problems will arise and it is important to have determined what will be the expected and required responses from the various parties concerned, including Users, Project Team, Vendors and possibly Consultants / Contractors. In order to agree what such responses should be, the End Users and the Project Team need to develop and agree a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business / commercial impact, of a problem with the system found during testing. Here is an example which has been used successfully; '1' is the most severe and '6' has the least impact:

"Show Stopper": i.e. it is impossible to continue with the testing because of the severity of this error / bug.
Critical Problem: testing can continue but we cannot go into production (live) with this problem.
Major Problem: testing can continue but this feature will cause severe disruption to business processes in live operation.
Medium Problem: testing can continue and the system is likely to go live with only minimal departure from agreed business processes.
Minor Problem: both testing and live operations may progress. This problem should be corrected, but little or no change to business processes is envisaged.
"Cosmetic" Problem: e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems in severity level 1 receive priority response and that all testing will cease until such level 1 problems are resolved.

Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem into its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, it is strongly advised that a range of examples are agreed in advance to ensure that there are no fundamental areas of disagreement, or, if there are, that these will be known in advance and your organisation is forewarned.

Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely fault free, it must be agreed between End User and vendor what the maximum number of acceptable 'outstandings' in any particular category is. Again, prior consideration of this is advisable.

N.B. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. In any event, any and all fixes from the software developers must be subjected to rigorous System Testing and, where appropriate, Regression Testing.

7.3 Conclusion
Hence the goal of acceptance testing should be to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.


8 SYSTEM TESTING

8.1 Introduction to SYSTEM TESTING

For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs and improved 'time to market' for new systems. Systems with software components and software-intensive systems are more and more complex every day; industry sectors such as telecom, automotive, railway, aeronautical and space are good examples. It is often agreed that testing is essential to manufacture reliable products. However, the validation process does not often receive the required attention. Moreover, the validation process is close to other activities such as conformance, acceptance and qualification testing.

The difference between function testing and system testing is that now the focus is on the whole application and its environment. Therefore the program has to be given completely. This does not mean that single functions of the whole program are now tested, because this would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again includes the question, ''Did we build the right product?'' and not just, ''Did we build the product right?'' However, system testing does not only deal with this more economical problem; it also contains some aspects that are oriented on the word ''system''. This means that those tests should be done in the environment for which the program was designed, like a multi-user network or whatever. Even security guidelines have to be included. Once again, it is beyond doubt that this test cannot be done completely; nevertheless, while this is one of the most incomplete test methods, it is one of the most important.

A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle. In this paper we examine the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks, using different strategies for test data selection. We will test that the functionality of your systems meets your specifications, integrating with whichever type of development methodology you are applying. We test for errors that users are likely to make as they interact with the application, as well as your application's ability to trap errors gracefully. These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or games.

System testing is more than just functional testing, however, and can, when appropriate, also encompass many other types of testing, such as:
security
load/stress
performance
browser compatibility
localisation

8.2 Need for System Testing


Effective software testing, as a part of software engineering, has been proven over the last three decades to deliver real business benefits including:
reduction of costs
increased productivity
reduced commercial risks
reduced rework and support overheads
more effort spent on developing new functionality and less on "bug fixing" as quality increases.

If it goes wrong, what is the potential impact on your commercial goals? Knowledge is power, so why take a leap of faith while your competition step forward with confidence?

These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity. Your test strategy must take into consideration the risks to your organisation, both commercial and technical. You will have a personal interest in its success, in which case it is only human for your objectivity to be compromised.

8.3 System Testing Techniques


The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional. Techniques can be used in any stage that tests the system as a whole (acceptance, installation, etc.). Techniques are not mutually exclusive.

Structural techniques:
stress testing - test larger-than-normal capacity in terms of transactions, data, users, speed, etc.
execution testing - test performance in terms of speed, precision, etc.
recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.
operations testing - test how the system fits in with existing operations and procedures in the user organization
compliance testing - test adherence to standards
security testing - test security requirements

Functional techniques:
requirements testing - the fundamental form of testing - makes sure the system does what it's required to do
regression testing - make sure unchanged functionality remains unchanged
error-handling testing - test required error-handling functions (usually user error)
manual-support testing - test that the system can be used properly - includes user documentation
intersystem handling testing - test that the system is compatible with other systems in the environment
control testing - test required control mechanisms
parallel testing - feed the same input into two versions of the system to make sure they produce the same output

Unit testing:
The goal is to evaluate some piece (file, program, module, component, etc.) in isolation. Techniques can be structural or functional. In practice, it's usually ad hoc and looks a lot like debugging, but more structured approaches exist.

7." =unctional techni%ues


input domain testing - pick test cases representative of the range of allowable input, including high, low, and average values
equivalence partitioning - partition the range of allowable input so that the program is expected to behave similarly for all inputs in a given partition, then pick a test case from each partition
boundary value - choose test cases with input values at the boundary (both inside and outside) of the allowable range
syntax checking - choose test cases that violate the format rules for input
special values - design test cases that use input values that represent special situations
output domain testing - pick test cases that will produce output at the extremes of the output domain

Structural techniques:
statement testing - ensure the set of test cases exercises every statement at least once
branch testing - each branch of an if/then statement is exercised
conditional testing - each truth statement is exercised both true and false
expression testing - every part of every expression is exercised
path testing - every path is exercised (impossible in practice)

Error-based techniques:
the basic idea is that if you know something about the nature of the defects in the code, you can estimate whether or not you have found all of them
fault seeding - put a certain number of known faults into the code, then test until they are all found
mutation testing - create mutants of the program by making single changes, then run test cases until all mutants have been killed
historical test data - an organization keeps records of the average numbers of defects in the products it produces, then tests a new product until the number of defects found approaches the expected number

8.5 Conclusion
Hence the system test phase should begin once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially with the top-down method.


9 Unit Testing

9.1 Introduction to Unit Testing

Unit testing. Isn't that some annoying requirement that we're going to ignore? Many developers get very nervous when you mention unit tests. Usually this is a vision of a grand table with every single method listed, along with the expected results and pass/fail date. It's important, but not relevant in most programming projects.

The unit test will motivate the code that you write. In a sense, it is a little design document that says, "What will this bit of code do?" Or, in the language of object oriented programming, "What will these clusters of objects do?"

The crucial issue in constructing a unit test is scope. If the scope is too narrow, then the tests will be trivial and the objects might pass the tests, but there will be no design of their interactions. Certainly, interactions of objects are the crux of any object oriented design. Likewise, if the scope is too broad, then there is a high chance that not every component of the new code will get tested. The programmer is then reduced to testing-by-poking-around, which is not an effective test strategy.

Need for Unit Test


How do you know that a method doesn't need a unit test? First, can it be tested by inspection? If the code is simple enough that the developer can just look at it and verify its correctness, then it is simple enough not to require a unit test. The developer should know when this is the case.
Unit tests will most likely be defined at the method level, so the art is to define the unit test on the methods that cannot be checked by inspection. Usually this is the case when the method involves a cluster of objects. Unit tests that isolate clusters of objects for testing are doubly useful, because they test for failures, and they also identify those segments of code that are related. People who revisit the code will use the unit tests to discover which objects are related, or which objects form a cluster. Hence: unit tests isolate clusters of objects for future developers.
Another good litmus test is to look at the code and see if it throws an error or catches an error. If error handling is performed in a method, then that method can break. Generally, any method that can break is a good candidate for having a unit test, because it may break at some time, and then the unit test will be there to help you fix it.
The danger of not implementing a unit test on every method is that the coverage may be incomplete. Just because we don't test every method explicitly doesn't mean that methods can get away with not being tested. The programmer should

know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that form their application.
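As a small sketch of the rule that any method that can break deserves a unit test, the fragment below assumes a hypothetical parse_age() helper that signals failure through its return value; the function and its contract are invented for illustration.

    #include <assert.h>
    #include <stdlib.h>
    #include <ctype.h>

    /* Hypothetical method under test: converts text to an age, returns -1 on bad input. */
    static int parse_age(const char *text) {
        if (text == NULL || *text == '\0')
            return -1;                            /* error path: missing input     */
        for (const char *p = text; *p; ++p)
            if (!isdigit((unsigned char)*p))
                return -1;                        /* error path: non-numeric input */
        long value = strtol(text, NULL, 10);
        return (value > 150) ? -1 : (int)value;   /* error path: implausible value */
    }

    int main(void) {
        assert(parse_age("42")  == 42);           /* normal path                   */
        assert(parse_age("")    == -1);           /* each error path gets a test   */
        assert(parse_age(NULL)  == -1);
        assert(parse_age("4x2") == -1);
        assert(parse_age("999") == -1);
        return 0;
    }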

Life Cycle Approach to Testing


Testing will occur throughout the project lifecycle, i.e. from Requirements until User Acceptance Testing. The main objectives of unit testing are as follows:
- To execute a program with the intent of finding an error;
- To uncover an as-yet undiscovered error; and
- To prepare a test case with a high probability of finding an as-yet undiscovered error.

Levels of Unit Testing


- UNIT (100% code coverage)
- INTEGRATION
- SYSTEM
- ACCEPTANCE
- MAINTENANCE AND REGRESSION

Concepts in Unit Testing:
- The most "micro" scale of testing;
- Used to test particular functions or code modules;
- Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code;
- Not always easily done unless the application has a well-designed architecture with tight code.

9.2 Unit Testing Flow

(Figure: unit test environment - a driver exercises the module under test, examining its interface, local data structures, boundary conditions, independent paths and error handling paths, producing the test cases.)


Types of Errors Detected


The following are the types of errors that may be caught:
- Errors in data structures
- Performance errors
- Logic errors
- Validity of alternate and exception flows
- Errors identified at analysis/design stages

Unit Testing - Black Box Approach
- Field Level Check
- Field Level Validation
- User Interface Check
- Functional Level Check

Unit Testing - White Box Approach
- STATEMENT COVERAGE
- DECISION COVERAGE
- CONDITION COVERAGE
- MULTIPLE CONDITION COVERAGE (nested conditions)
- CONDITION/DECISION COVERAGE
- PATH COVERAGE

Unit Testing - Field Level Checks
- Null / Not Null Checks
- Uniqueness Checks
- Length Checks
- Date Field Checks
- Numeric Checks
- Negative Checks

Unit Testing - Field Level Validations
- Test all validations for an input field
- Date Range Checks (From Date / To Date)
- Date check validation against the system date
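The sketch below shows how a few of the field-level checks listed above might be written as unit-test assertions; the name and date-range helpers, the 30-character limit and the YYYYMMDD encoding are all assumptions made for the example.

    #include <assert.h>
    #include <string.h>

    #define MAX_NAME_LEN 30

    /* Hypothetical field-level validations for a customer name and a date range. */
    static int name_is_valid(const char *name) {
        return name != NULL && name[0] != '\0' &&   /* null / not-null check */
               strlen(name) <= MAX_NAME_LEN;        /* length check          */
    }

    static int date_range_is_valid(int from_yyyymmdd, int to_yyyymmdd) {
        return from_yyyymmdd <= to_yyyymmdd;        /* From Date <= To Date  */
    }

    int main(void) {
        assert(name_is_valid("Smith") == 1);
        assert(name_is_valid("")      == 0);        /* empty field rejected  */
        assert(name_is_valid(NULL)    == 0);
        assert(name_is_valid("This name is far too long for the field") == 0);

        assert(date_range_is_valid(20240101, 20241231) == 1);
        assert(date_range_is_valid(20241231, 20240101) == 0);  /* reversed range */
        return 0;
    }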

Unit Testing - User Interface Checks
- Readability of the controls
- Tool tips validation
- Ease of use of the interface
- Tab-related checks across the screen
- User interface dialogs

- GUI compliance checks

Unit Testing - Functionality Checks
- Screen functionalities
- Field dependencies
- Auto generation
- Algorithms and computations
- Normal and abnormal terminations
- Specific business rules, if any

Unit Testing - Other Measures
- FUNCTION COVERAGE
- LOOP COVERAGE
- RACE COVERAGE

9.3 Execution of Unit Tests


Method for Statement Coverage:
- Design a test case for every statement to be executed.
- Select the unique set of test cases.
This measure reports whether each executable statement is encountered. Also known as: line coverage, segment coverage and basic block coverage. Basic block coverage is the same as statement coverage except that the unit of code measured is each sequence of non-branching statements.

Example of Unit Testing:


int invoice (int x, int y) {
    int d1, d2, s;
    if (x <= 30)
        d2 = 100;
    else
        d2 = 90;
    s = 5*x + 10*y;
    if (s < 200)
        d1 = 100;
    else if (s < 1000)
        d1 = 95;
    else
        d1 = 80;
    return (s*d1*d2/10000);
}
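A minimal set of statement-coverage tests for this function might look like the sketch below. It assumes the reconstruction given above (the x <= 30 branch and the s < 200 / s < 1000 thresholds), so the expected values should be re-checked against the original source.

    #include <assert.h>

    int invoice(int x, int y);   /* function under test, defined above */

    int main(void) {
        /* Case 1: x <= 30 and s < 200 -> executes d2=100 and d1=100.
           s = 5*10 + 10*10 = 150, result = 150*100*100/10000 = 150.   */
        assert(invoice(10, 10) == 150);

        /* Case 2: x > 30 and 200 <= s < 1000 -> executes d2=90 and d1=95.
           s = 5*40 + 10*20 = 400, result = 400*95*90/10000 = 342.     */
        assert(invoice(40, 20) == 342);

        /* Case 3: x > 30 and s >= 1000 -> executes d2=90 and d1=80.
           s = 5*100 + 10*60 = 1100, result = 1100*80*90/10000 = 792.  */
        assert(invoice(100, 60) == 792);

        /* Together the three cases execute every statement at least once. */
        return 0;
    }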

Unit Testing Flow:


Advantages of Statement Coverage
- Can be applied directly to object code and does not require processing source code.
- Performance profilers commonly implement this measure.

Disadvantages of Statement Coverage

- Insensitive to some control structures (number of iterations).
- Does not report whether loops reach their termination condition.
- Statement coverage is completely insensitive to the logical operators (|| and &&).

Method for Decision Coverage:
- Design a test case for the pass/failure of every decision point.
- Select the unique set of test cases.
This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers. Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path testing. "Basis path" testing selects paths that achieve decision coverage.
ADVANTAGE: simplicity without the problems of statement coverage.
DISADVANTAGE: this measure ignores branches within Boolean expressions which occur due to short-circuit operators.
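To illustrate how decision (branch) coverage relates to statement and condition coverage, the sketch below uses a hypothetical free_shipping() rule; the function and its thresholds are invented for the example.

    #include <assert.h>

    /* Hypothetical rule: free shipping for members or for orders of 100 or more. */
    static int free_shipping(int is_member, int order_total) {
        if (is_member || order_total >= 100)   /* one decision, two sub-conditions */
            return 1;
        return 0;
    }

    int main(void) {
        /* Two cases take the decision once true and once false, which gives
           both statement coverage and decision (branch) coverage here.       */
        assert(free_shipping(1, 10)  == 1);    /* decision true  */
        assert(free_shipping(0, 10)  == 0);    /* decision false */

        /* Neither case above makes "order_total >= 100" true, so condition
           coverage (each sub-expression both true and false) needs one more: */
        assert(free_shipping(0, 150) == 1);
        return 0;
    }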

Method for Condition Coverage:
- Test whether every condition (sub-expression) in a decision evaluates to both true and false.
- Select the unique set of test cases.
Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or where they occur, and measures the sub-expressions independently of each other.
Multiple Condition Coverage reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical-operator truth table for the condition.
DISADVANTAGE:
- Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions.
- The number of test cases required could vary substantially among conditions that have similar complexity.
Condition/Decision Coverage is a hybrid measure composed of the union of condition coverage and decision coverage. It has the advantage of simplicity without the shortcomings of its component measures.
Path Coverage reports whether each of the possible paths in each function has been followed. A path is a unique sequence of branches from the function entry to the exit. Also known as predicate coverage; predicate coverage views paths as possible combinations of logical conditions. Path coverage has the advantage of requiring very thorough testing.
FUNCTION COVERAGE:

- This measure reports whether you invoked each function or procedure.
- It is useful during preliminary testing to assure at least some coverage in all areas of the software.
- Broad, shallow testing finds gross deficiencies in a test suite quickly.
LOOP COVERAGE
This measure reports whether you executed each loop body zero times, exactly once, twice, and more than twice (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once and more than once. The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by other measures.
RACE COVERAGE
This measure reports whether multiple threads execute the same code at the same time. It helps detect failure to synchronize access to shared resources, and is useful for testing multi-threaded programs such as an operating system.
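A sketch of test cases chosen for loop coverage of a simple summing loop; the sum_first() helper is hypothetical and exists only to exercise the zero, one, two and more-than-two iteration cases.

    #include <assert.h>

    /* Hypothetical helper: sums the first n elements of an array. */
    static int sum_first(const int *values, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)     /* the loop whose coverage we measure */
            total += values[i];
        return total;
    }

    int main(void) {
        int data[] = {3, 4, 5};

        assert(sum_first(data, 0) == 0);    /* loop body executed zero times      */
        assert(sum_first(data, 1) == 3);    /* loop body executed exactly once    */
        assert(sum_first(data, 2) == 7);    /* loop body executed exactly twice   */
        assert(sum_first(data, 3) == 12);   /* loop body executed more than twice */
        return 0;
    }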

9.4 Conclusion

Testing, irrespective of the phase of testing, should encompass the following:
- The cost of failure associated with defective products getting shipped and used by the customer is enormous.
- To find out whether the integrated product works as per the customer requirements.
- To evaluate the product from an independent perspective.
- To identify as many defects as possible before the customer finds them.
- To reduce the risk of releasing the product.


10 Test Strategy
10.1 Introduction
This document provides a better insight into the test strategy and its methodology. It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. The testing strategy should define the objectives of all test stages and the techniques that apply. The testing strategy also forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process and its implications outside of the test discipline. Any test support tools introduced should be aligned with, and in support of, the test strategy. Test Approach and Test Architecture are other names for the test strategy. Test management is also concerned with both test resource and test environment management.

10.2 Key Elements of Test Management:


Test organization - the set-up and management of a suitable test organizational structure and explicit role definition. The project framework under which the testing activities will be carried out is reviewed, high-level test phase plans prepared and resource schedules considered. Test organization also involves the determination of configuration standards and the definition of the test environment.
Test planning - the requirements definition and design specifications facilitate the identification of major test items, and these may necessitate the test strategy to be updated. A detailed test plan and schedule is prepared with key test responsibilities being indicated.
Test specifications - required for all levels of testing and covering all categories of test. The required outcome of each test must be known before the test is attempted.
Unit, integration and system testing - configuration items are verified against the appropriate specifications and in accordance with the test plan. The test environment should also be under configuration control, and test data and results stored for future evaluation.
Test monitoring and assessment - ongoing monitoring and assessment of the integrity of the development and construction. The status of the configuration items should be reviewed against the phase plans and test progress reports prepared, providing some assurance of the verification and validation activities.
Product assurance - the decision to negotiate the acceptance testing program and the release and commissioning of the service product is subject to the "product assurance" role being satisfied with the outcome of the verification activities. Product assurance may oversee some of the test activity and may participate in process reviews.
A common criticism of construction programmes is that insufficient time is frequently allocated to the testing and commissioning of the building systems, together with the involvement and subsequent training of the Facilities Management team. Testing and commissioning is often considered by teams as a secondary activity and given a lower priority, particularly as pressure builds on the programme towards completion. Sufficient time must be dedicated to testing and commissioning, as ensuring the systems function correctly is fairly fundamental to the project's success or failure. Traditionally the

responsibility for testing and commissioning is buried deep within the supply chain as a sub-contract of a sub-contract. It is possible to gain greater control of this process and the associated risk through the use of specialists, such as a systems integrator, who can be appointed as part of the professional team. The time necessary for testing and commissioning will vary from project to project depending upon the complexity of the systems and services that have been installed. The Project Sponsor should ensure that the professional team and the contractor consider realistically how much time is needed.
Fitness for purpose checklist:
- Is there a documented testing strategy that defines the objectives of all test stages and the techniques that may apply, e.g. non-functional testing and the associated techniques such as performance, stress and security testing?
- Does the test plan prescribe the approach to be taken for intended test activities, identifying the items to be tested, the testing to be performed, test schedules, resource and facility requirements, reporting requirements, evaluation criteria, and risks requiring contingency measures?
- Are test processes and practices reviewed regularly to assure that the testing processes continue to meet specific business needs? For example, e-commerce testing may involve new user interfaces, and a business focus on usability may mean that the organization must review its testing strategies.

10.3 Test Strategy Flow:


Test cases and test procedures should manifest the test strategy.


Test Strategy - Selection


Selection of the test strategy is based on the following factors.
Product - here, a test strategy for an application that helps people and teams of people in making decisions.
Based on the key potential risks:
- Suggestion of wrong ideas.
- People will use the product incorrectly.
- Incorrect comparison of scenarios.
- Scenarios may be corrupted.
- Unable to handle complex decisions.
Determination of actual risk:
- Understand the underlying algorithm.
- Simulate the algorithm in parallel.
- Capability-test each major function.
- Generate a large number of decision scenarios.
- Create complex scenarios and compare them.
- Review documentation and help.
- Test for sensitivity to user error.

Test Strategy Execution:


Understand the decision algorithm and generate a parallel decision analyzer, using Perl or Excel, that will function as a reference for high-volume testing of the application.

Create a means to generate and apply large numbers of decision scenarios to the product. This will be done using the GUI test automation system or through the direct generation of Decide Right scenario files that would be loaded into the product during test. Review the documentation and the design of the user interface and functionality for its sensitivity to user error. Test with decision scenarios that are near the limit of complexity allowed by the product. Compare complex scenarios. Test the product for the risk of silent failures or corruptions in decision analysis.
Issues in execution of the test strategy:
- The difficulty of understanding and simulating the decision algorithm.
- The risk of coincidental failure of both the simulation and the product.
- The difficulty of automating decision tests.
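In outline, the parallel-oracle idea described above might look like the following sketch; the decide() product call, the reference_decide() stand-in and the scenario generator are all hypothetical, and merely take the place of the Perl or Excel analyzer mentioned in the text.

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-ins for the product's decision function and the independently
       written reference analyzer (the parallel oracle).                    */
    static int decide(int a, int b)           { return a + b > 1000; }
    static int reference_decide(int a, int b) { return a + b > 1000; }

    int main(void) {
        int mismatches = 0;

        /* High-volume testing: generate many pseudo-random decision scenarios
           and compare the product's output with the reference analyzer's.     */
        srand(12345);                     /* fixed seed so runs are repeatable  */
        for (int i = 0; i < 100000; i++) {
            int a = rand() % 1000;
            int b = rand() % 1000;
            if (decide(a, b) != reference_decide(a, b)) {
                printf("Mismatch for scenario (%d, %d)\n", a, b);
                mismatches++;
            }
        }
        printf("%d mismatches found\n", mismatches);
        return mismatches ? 1 : 0;
    }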

10.4 General Testing Strategies


- Top-down
- Bottom-up
- Thread testing
- Stress testing
- Back-to-back testing

10.5 Need for Test Strategy


The objective of testing is to reduce the risks inherent in computer systems. The strategy must address those risks and present a process that can reduce them. The system's risk concerns then establish the objectives for the test process. The two components of the testing strategy are the test factors and the test phases.

(Figure: relative proportions of defects originating in analysis, design and coding.)

Test Factor - the risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application system.
Test Phase - the phase of the systems development life cycle in which testing will occur.


Not all the test factors will be applicable to all software systems. The development team will need to select and rank the test factors for the specific software system being developed. The test phases will vary based on the testing methodology used. For example, the test phases in a traditional waterfall life cycle methodology will be much different from the phases in a Rapid Application Development methodology.

10.6 Developing a Test Strategy


The test strategy will need to be customized for any specific software system. The applicable test factors would be listed against the phases in which the testing must occur. Four steps must be followed to develop a customized test strategy:
- Select and rank the test factors.
- Identify the system development phases.
- Identify the business risks associated with the system under development.
- Place the risks in the matrix.
(Test strategy matrix: rows list the selected test factors and identified risks; columns are the test phases - Requirements, Design, Build, Dynamic Test, Integrate, Maintain.)

10.7 Conclusion:
The test strategy should be developed in accordance with the business risks associated with the software when the test team develops the test tactics. Thus the test team needs to acquire and study the test strategy, questioning the following:
- What is the relationship of importance among the test factors?
- Which of the high-level risks are the most significant?
- What damage can be done to the business if the software fails to perform correctly?
- What damage can be done to the business if the software is not completed on time?
- Who are the individuals most knowledgeable in understanding the impact of the identified business risks?

Hence the test strategy must address the risks and present a process that can reduce those risks. The system accordingly focuses on risks and thereby establishes the objectives for the test process.


11 TEST PLAN
11.1 What is a Test Plan?
A test plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. The main purpose of preparing a test plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement with the contents of the test plan, and this also helps in case of any dispute during the course of the project (especially between the developers and the testers).

Purpose of preparing a Test Plan


A test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the "why" and "how" of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it.

Contents of a Test Plan


1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
    a. Procedures
    b. Templates
    c. Standards / Guidelines
14. Annexure
15. Sign-Off

11.2 Contents (in detail)


Purpose - This section should contain the purpose of preparing the test plan.

Scope
This section should talk about the areas of the application which are to be tested by the QA team and specify those areas which are definitely out of scope (screens, database, mainframe processes etc.).
Test Approach - This would contain details on how the testing is to be performed and whether any specific strategy is to be followed (including configuration management).
Entry Criteria - This section explains the various steps to be performed before the start of a test, i.e. prerequisites. For example: timely environment set-up, starting the web server / app server, successful implementation of the latest build etc.
Resources - This section should list the people who will be involved in the project, their designations etc.
Tasks / Responsibilities - This section talks about the tasks to be performed and the responsibilities assigned to the various members of the project.
Exit Criteria - Contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh etc.
Schedules / Milestones - This section deals with the final delivery date and the various milestone dates to be met in the course of the project.
Hardware / Software Requirements - This section would contain the details of the PCs / servers required (with their configuration) to install the application or perform the testing; specific software that needs to be installed on the systems to get the application running or to connect to the database; connectivity-related issues etc.
Risks & Mitigation Plans - This section should list all the possible risks that can arise during the testing and the mitigation plans that the QA team plans to implement in case a risk actually turns into reality.
Tools to be used - This would list the testing tools or utilities (if any) that are to be used in the project, e.g. WinRunner, Test Director, PCOM, WinSQL.
Deliverables - This section contains the various deliverables that are due to the client at various points of time, i.e. daily, weekly, start of the project, end of the project etc. These could include test plans, test procedures, test matrices, status reports, test scripts etc. Templates for all of these could also be attached.

References - Procedures, templates (client-specific or otherwise), standards / guidelines (e.g. QView), and project-related documents (RSD, ADD, FSD etc.).
Annexure - This could contain embedded documents or links to documents which have been / will be used in the course of testing, e.g. templates used for reports, test cases etc. Referenced documents can also be attached here.
Sign-Off - This should contain the mutual agreement between the client and the QA team, with both leads / managers signing off their agreement on the test plan.


12 Test Data Preparation - Introduction

A system is programmed by its data. Functional testing can suffer if data is poor, and good data can help improve functional testing. Good test data can be structured to improve understanding and testability. Its contents, correctly chosen, can reduce maintenance effort and allow flexibility. Preparation of the data can help to focus the business where requirements are vague.
The first stage of any recognizer development project is data preparation. Test data should, however, be prepared which is representative of normal business transactions. Actual customer names or contact details should not be used for such tests. It is recommended that a full test environment be set up for use in the applicable circumstances. Each separate test should be given a unique reference number which will identify the business process being recorded, the simulated conditions used, the persons involved in the testing process and the date the test was carried out. This will enable the monitoring and testing reports to be co-ordinated with any feedback received.
Tests must be planned and thought out ahead of time; you have to decide such things as what exactly you are testing and testing for, the way the test is going to be run and applied, what steps are required, etc. Testing is the process of creating, implementing and evaluating tests. Effective quality control testing requires some basic goals and understanding:
- You must understand what you are testing; if you are testing a specific functionality, you must know how it is supposed to work, how the protocols behave, etc.
- You should have a definition of what success and failure are. In other words, is close enough good enough?
- You should have a good idea of a methodology for the test; the more formal a plan, the better. You should design test cases.
- You must understand the limits inherent in the tests themselves.
- You must have a consistent schedule for testing; performing a specific set of tests at appropriate points in the process is more important than running the tests at a specific time.
Roles of Data in Functional Testing
Testing consumes and produces large amounts of data. Data describes the initial conditions for a test, forms the input, and is the medium through which the tester influences the software. Data is manipulated, extrapolated, summarized and referenced by the functionality under test, which finally spews forth yet more data to be checked against expectations. Data is a crucial part of most functional testing.
This paper sets out to illustrate some of the ways that data can influence the test process, and will show that testing can be improved by a careful choice of input data. In doing this, the paper will concentrate most on data-heavy applications: those which use databases or are heavily influenced by the data they hold. The paper will focus on input data, rather than output data or the transitional states the data passes through during processing, as input data has the greatest influence on functional testing and is the simplest to manipulate. The paper will not consider areas where data is important to non-functional testing, such as operational profiles, massive datasets and environmental tuning.
A SYSTEM IS PROGRAMMED BY ITS DATA
Many modern systems allow tremendous flexibility in the way their basic functionality can be used.

Configuration data can dictate control flow, data manipulation, presentation and user interface. A system can be configured to fit several business models, work (almost) seamlessly with a variety of co-operative systems and provide tailored experiences to a host of different users. A business may look to an application's configurability to allow it to keep up with the market without being slowed by the development process; an individual may look for a personalized experience from commonly available software.
FUNCTIONAL TESTING SUFFERS IF DATA IS POOR
Tests with poor data may not describe the business model effectively, they may be hard to maintain, or they may require lengthy and difficult setup. They may obscure problems or avoid them altogether. Poor data tends to result in poor tests that take longer to execute.
GOOD DATA IS VITAL TO RELIABLE TEST RESULTS
An important goal of functional testing is to allow the test to be repeated with the same result, and varied to allow diagnosis. Without this, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results, whether they are good or bad. Good data allows diagnosis, effective reporting, and allows tests to be repeated with confidence.
GOOD DATA CAN HELP TESTING STAY ON SCHEDULE
An easily comprehensible and well-understood dataset is a tool to help communication. Good data can greatly assist in speedy diagnosis and rapid re-testing. Regression testing and automated test maintenance can be made speedier and easier by using good data, while an elegantly chosen dataset can often allow new tests without the overhead of new data.
A formal test plan is a document that provides and records important information about a test project, for example:
- project and quality assumptions
- project background information
- resources
- schedule and timeline
- entry and exit criteria
- test milestones
- tests to be performed

- use cases and/or test cases

12.1 Criteria for Test Data Collection

This section of the document specifies the description of the test data needed to test the recovery of each business process.
Identify Who is to Conduct the Tests
In order to ensure consistency of the testing process throughout the organization, one or more members of the Business Continuity Planning (BCP) Team should be nominated to co-ordinate the testing process within each business unit and across the organization. Each business process should be thoroughly tested, and the co-ordinator should ensure that each business unit observes the necessary rules associated with ensuring that the testing process is carried out within a realistic environment. This section of the BCP should contain the names of the BCP Team members nominated to co-ordinate the testing process. It should also list the duties of the appointed co-ordinators.

Identify Who is to Control and Monitor the Tests



In order to ensure consistency when measuring the results, the tests should be independently monitored. This task would normally be carried out by a nominated member of the Business Recovery Team or a member of the Business Continuity Planning Team. This section of the BCP will contain the names of the persons nominated to monitor the testing process throughout the organization. It will also contain a list of the duties to be undertaken by the monitoring staff.
Prepare Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the tests. This feedback will hopefully enable weaknesses within the business recovery process to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the testing process. The forms should be completed either during the tests (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a feedback questionnaire.
Prepare Budget for Testing Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. The "Preparing for a Possible Emergency" phase of the BCP process will involve the identification and implementation of strategies for back-up and recovery of data files or a part of a business process. It is inevitable that these back-up and recovery processes will involve additional costs. Critical parts of the business process, such as the IT systems, may require particularly expensive back-up strategies to be implemented. Where the costs are significant they should be approved separately, with a specific detailed budget for the establishment costs and the ongoing maintenance costs. This section of the BCP will contain a list of the testing phase activities and a cost for each. It should be noted whenever part of the costs is already incorporated within the organization's overall budgeting process.

Training Core Testing Team for each Business Unit


In order for the testing process to proceed smoothly, it is necessary for the core testing team to be trained in the emergency procedures. This is probably best handled in a workshop environment and should be presented by the persons responsible for developing the emergency procedures. This section of the BCP should contain a list of the core testing team for each of the business units who will be responsible for co-ordinating and undertaking the business recovery testing process. It is important that clear instructions are given to the core testing team regarding the simulated conditions which have to be observed.
Conducting the Tests
The tests must be carried out under authentic conditions and all participants must take the process seriously. It is important that all persons who are likely to be involved with recovering a particular business process in the event of an emergency should participate in the testing process. It should be mandatory for the management of a business unit to be present when that unit is involved with conducting the tests.
Test each part of the Business Recovery Process
In so far as it is practical, each critical part of the business recovery process should be fully tested. Every part of the procedures included as part of the recovery process is to be tested to ensure validity and relevance.

This section of the BCP is to contain a list of each business process with a test schedule and information on the simulated conditions being used. The testing co-ordination and monitoring will endeavour to ensure that the simulated environments are maintained throughout the testing process in a realistic manner.
Test Accuracy of Employee and Vendor Emergency Contact Numbers
During the testing process the accuracy of employee and vendor emergency contact information is to be re-confirmed. All contact numbers are to be validated for all involved employees. This is particularly important for management and key employees who are critical to the success of the recovery process. This activity will usually be handled by the HRM Department or Division. Where, in the event of an emergency occurring outside of normal business hours, a large number of persons are to be contacted, a hierarchical process could be used whereby one person contacts five others. This process must have safety features incorporated to ensure that if one person is not contactable for any reason then this is notified to a nominated controller. This will enable alternative contact routes to be used.
Assess Test Results
Prepare a full assessment of the test results for each business process. The following questions may be appropriate:
- Were the objectives of the business recovery process and the testing process met? If not, provide further comment.
- Were simulated conditions reasonably "authentic"? If not, provide further comment.
- Was the test data representative? If not, provide further comment.
- Did the tests proceed without any problems? If not, provide further comment.
- What were the main comments received in the feedback questionnaires?
Each test should be assessed as either fully satisfactory, adequate, or requiring further testing.
Training Staff in the Business Recovery Process
All staff should be trained in the business recovery process. This is particularly important when the procedures are significantly different from those pertaining to normal operations. This training may be integrated with the training phase or handled separately. The training should be carefully planned and delivered on a structured basis. The training should be assessed to verify that it has achieved its objectives and is relevant for the procedures involved. Training may be delivered either using in-house resources or external resources, depending upon available skills and related costs.
Managing the Training Process
For the BCP training phase to be successful it has to be both well managed and structured. It will be necessary to identify the objective and scope of the training, what specific training is required and who needs it, and a budget prepared for the additional costs associated with this phase.
Develop Objectives and Scope of Training
The objectives and scope of the BCP training activities are to be clearly stated within the plan. The BCP should contain a description of the objectives and scope of the training phase. This will enable the training to be consistent and organized in a manner where the results can be measured and the training fine-tuned as appropriate.

The objectives for the training could be as follows: "To train all staff in the particular procedures to be followed during the business recovery process."
The scope of the training could be along the following lines: "The training is to be carried out in a comprehensive and exhaustive manner so that staff become familiar with all aspects of the recovery process. The training will cover all aspects of the Business Recovery activities section of the BCP, including IT systems recovery."
Consideration should also be given to the development of a comprehensive corporate awareness programme for communicating the procedures for the business recovery process.
Training Needs Assessment
The plan must specify which person or group of persons requires which type of training. It is necessary for all new or revised processes to be explained carefully to the staff. For example, it may be necessary to carry out some process manually if the IT system is down for any length of time. These manual procedures must be fully understood by the persons who are required to carry them out. For larger organizations it may be practical to carry out the training in a classroom environment; however, for smaller organizations the training may be better handled in a workshop style. This section of the BCP will identify for each business process what type of training is required and which persons or groups of persons need to be trained.
Training Materials Development Schedule
Once the training needs have been identified it is necessary to specify and develop suitable training materials. This can be a time-consuming task, and unless priorities are given to critical training programmes it could delay the organization in reaching an adequate level of preparedness. This section of the BCP contains information on each of the training programmes with details of the training materials to be developed, an estimate of resources and an estimate of the completion date.
Prepare Training Schedule
Once it has been agreed who requires training and the training materials have been prepared, a detailed training schedule should be drawn up. This section of the BCP contains the overview of the training schedule and the groups of persons receiving the training.
Communication to Staff
Once the training is arranged to be delivered to the employees, it is necessary to advise them about the training programmes they are scheduled to attend. This section of the BCP contains a draft communication to be sent to each member of staff to advise them about their training schedule. The communication should provide for feedback from the staff member where the training dates given are inconvenient. A separate communication should be sent to the managers of the business units advising them of the proposed training schedule to be attended by their staff. Each member of staff will be given information on their role and responsibilities applicable in the event of an emergency.
Prepare Budget for Training Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. Depending upon the cross-charging system employed by the organization, the training costs will vary greatly. However, it has to be recognized that, however well justified, training incurs additional costs and these should be approved by the appropriate authority within the organization.

This section of the BCP will contain a list of the training phase activities and a cost for each. It should be noted whenever part of the costs is already incorporated within the organization's overall budgeting process.
Assessing the Training
The individual BCP training programmes and the overall BCP training process should be assessed to ensure their effectiveness and applicability. This information will be gathered from the trainers and also from the trainees through the completion of feedback questionnaires.
Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the training programmes. This feedback will enable weaknesses within the business recovery process, or the training, to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the training process. The forms should be completed either during the training (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a feedback questionnaire for the training phase.
Assess Feedback
The completed questionnaires from the trainees, plus the feedback from the trainers, should be assessed. Identified weaknesses should be notified to the BCP Team Leader and the process strengthened accordingly. The key issues raised by the trainees should be noted and consideration given to whether the findings are critical to the process or not. If a significant number of negative issues are raised, then consideration should be given to possible re-training once the training materials, or the process, have been improved. This section of the BCP will contain a format for assessing the training feedback.
Keeping the Plan Up-to-date
Changes to most organizations occur all the time. Products and services change, and so does their method of delivery. The increase in technology-based processes over the past ten years, and particularly within the last five, has significantly increased the level of dependency upon the availability of systems and information for the business to function effectively. These changes are likely to continue, and probably the only certainty is that the pace of change will continue to increase. It is necessary for the BCP to keep pace with these changes in order for it to be of use in the event of a disruptive emergency. This chapter deals with updating the plan and the managed process which should be applied to this updating activity.
Maintaining the BCP
It is necessary for the BCP updating process to be properly structured and controlled. Whenever changes are made to the BCP they are to be fully tested, and appropriate amendments should be made to the training materials. This will involve the use of formalized change control procedures under the control of the BCP Team Leader.
Change Controls for Updating the Plan
It is recommended that formal change controls are implemented to cover any changes required to the BCP. This is necessary due to the level of complexity contained within the BCP. A Change Request Form / Change Order form is to be prepared and approved in respect of each proposed change to the BCP.

This section of the BCP will contain a Change Request Form / Change Order to be used for all such changes to the BCP.
Responsibilities for Maintenance of Each Part of the Plan
Each part of the plan will be allocated to a member of the BCP Team or a senior manager within the organization, who will be charged with responsibility for updating and maintaining the plan. The BCP Team Leader will remain in overall control of the BCP, but business unit heads will need to keep their own sections of the BCP up to date at all times. Similarly, the HRM Department will be responsible for ensuring that all emergency contact numbers for staff are kept up to date. It is important that the relevant BCP co-ordinator and the Business Recovery Team are kept fully informed regarding any approved changes to the plan.
Test All Changes to Plan
The BCP Team will nominate one or more persons who will be responsible for co-ordinating all the testing processes and for ensuring that all changes to the plan are properly tested. Whenever changes are made or proposed to the BCP, the BCP Testing Co-ordinator will be notified. The BCP Testing Co-ordinator will then be responsible for notifying all affected units and for arranging any further testing activities. This section of the BCP contains a draft communication from the BCP Co-ordinator to affected business units and contains information about the changes which require testing or re-testing.
Advise Person Responsible for BCP Training
A member of the BCP Team will be given responsibility for co-ordinating all training activities (BCP Training Co-ordinator). The BCP Team Leader will notify the BCP Training Co-ordinator of all approved changes to the BCP in order that the training materials can be updated. An assessment should be made on whether the change necessitates any re-training activities.
Problems which can be caused by Poor Test Data
Most testers are familiar with the problems that can be caused by poor data. The following list details the most common problems familiar to the author. Most projects experience these problems at some stage; recognizing them early can allow their effects to be mitigated.
Unreliable test results
Running the same test twice produces inconsistent results. This can be a symptom of an uncontrolled environment, unrecognized database corruption, or of a failure to recognize all the data that is influential on the system.
Degradation of test data over time
Program faults can introduce inconsistency or corruption into a database. If not spotted at the time of generation, they can cause hard-to-diagnose failures that may be apparently unrelated to the original fault. Restoring the data to a clean set gets rid of the symptom, but the original fault is undiagnosed and can carry on into live operation and perhaps future releases. Furthermore, as the data is restored, evidence of the fault is lost.
Increased test maintenance cost
If each test has its own data, the cost of test maintenance is correspondingly increased.

If that data is itself hard to understand or manipulate, the cost increases further.
Reduced flexibility in test execution
If datasets are large or hard to set up, some tests may be excluded from a test run. If the datasets are poorly constructed, it may not be time-effective to construct further data to support investigatory tests.
Obscure results and bug reports
Without clearly comprehensible data, testers stand a greater chance of missing important diagnostic features of a failure, or indeed of missing the failure entirely. Most reports make reference to the input data and the actual and expected results. Poor data can make these reports hard to understand.
Larger proportion of problems can be traced to poor data
A proportion of all failures logged will be found, after further analysis, not to be faults at all. Data can play a significant role in these failures. Poor data will cause more of these problems.
Less time spent hunting bugs
The more time spent doing unproductive testing or ineffective test maintenance, the less time spent testing.
Confusion between developers, testers and business
Each of these groups has different data requirements. A failure to understand each other's data can lead to ongoing confusion.
Requirements problems can be hidden in inadequate data
It is important to consider the inputs and outputs of a process for requirements modelling. Inadequate data can lead to ambiguous or incomplete requirements.
Simpler to make test mistakes
Everybody makes mistakes. Confusing or over-large datasets can make data selection mistakes more common.
Unwieldy volumes of data
Small datasets can be manipulated more easily than large datasets. A few datasets are easier to manage than many datasets.
Business data not representatively tested
Test requirements, particularly in configuration data, often don't reflect the way the system will be used in practice. While this may arguably lead to broad testing for a variety of purposes, it can be hard for the business or the end users to feel confidence in the test effort if they feel distanced from it.
Inability to spot data corruption caused by bugs
A few well-known datasets can be more easily checked than a large number of complex datasets, and may lend themselves to automated testing / sanity checks. A readily understandable dataset can allow straightforward diagnosis; a complex dataset will positively hinder diagnosis.
Poor database/environment integrity
If a large number of testers, or tests, share the same dataset, they can influence and corrupt each other's results as they change the data in the system. This can not only cause false results, but can lead to database integrity problems and data corruption. This can make portions of the application untestable for many testers simultaneously.


12.2 Classification of Test Data Types

In the process of testing a system, many references are made to "the data" or "data problems". Although it is perhaps simpler to discuss data in these terms, it is useful to be able to classify the data according to the way it is used. The following broad categories allow data to be handled and discussed more easily.
Environmental data
Environmental data tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environment variables. The current date and time can be seen as environmental data.
Setup data
Setup data tells the system about the business rules. It might include a cross-reference between country and delivery cost or method, or methods of debt collection from different kinds of customers. Typically, setup data causes different functionality to apply to otherwise similar data. With an effective approach to setup data, a business can offer new intangible products without developing new functionality, as can be seen in the mobile phone industry, where new billing products are supported and indeed created by additions to the setup data.
Input data
Input data is the information input by day-to-day system functions. Accounts, products, orders, actions and documents can all be input data. For the purposes of testing, it is useful to split the categorization once more:
FIXED INPUT DATA is available before the start of the test, and can be seen as part of the test conditions.
CONSUMABLE INPUT DATA forms the test input.
It can also be helpful to qualify data after the system has started to use it:
Transitional data
Transitional data is data that exists only within the program, during processing of input data. Transitional data is not seen outside the system (arguably, test handles and instrumentation make it output data), but its state can be inferred from actions that the system has taken. Typically held in internal system variables, it is temporary and is lost at the end of processing.
Output data
Output data is all the data that a system outputs as a result of processing input data and events. It generally has a correspondence with the input data (cf. Jackson's Structured Programming methodology), and includes not only files, transmissions, reports and database updates, but can also include test measurements. A subset of the output data is generally compared with the expected results at the end of test execution. As such, it does not directly influence the quality of the tests.


12.3 Organizing the Data


A key part of any approach to data is the way the data is organized: the way it is chosen and described, influenced by the uses that are planned for it. A good approach increases data reliability, reduces data maintenance time and can help improve the test process. Good data assists testing, rather than hinders it.
Permutations
Most testers are familiar with the concept of permutation: generating tests so that all possible permutations of inputs are tested. Most are also familiar with the ways in which this generally vast set can be cut down. Pairwise, or combinatorial, testing addresses this problem by generating a set of tests that allows all possible pairs of combinations to be tested. Typically, for non-trivial sets, this produces a far smaller set of tests than the brute-force approach for all permutations. The same techniques can be applied to test data; the test data can contain all possible pairs of permutations in a far smaller set than that which contains all possible permutations. This allows a small, easy-to-handle dataset which also allows a wide range of tests.
This small and easy-to-manipulate dataset is capable of supporting many tests. It allows complete pairwise coverage, and so is comprehensive enough to allow a great many new, ad-hoc or diagnostic tests. Database changes will affect it, but the data maintenance required will be greatly lessened by the small size of the dataset and the amount of reuse it allows. Finally, this method of working with fixed input data can help greatly in testing the setup data.
This method is most appropriate when used, as above, on fixed input data. It is most effective when the following conditions are satisfied (fortunately, these criteria apply to many traditional database-based systems):
- Fixed input data consists of many rows.
- Fields are independent.
- You want to do many tests without loading, or you do not load fixed input data for each test.
To sum up, permutation helps because:
- Permutation is familiar from test planning.
- It achieves good test coverage without having to construct massive datasets.
- It allows investigative testing without having to set up more data.
- It reduces the impact of functional/database changes.
- It can be used to test other data, particularly setup data.
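As a rough illustration of the pairwise idea, the sketch below checks whether a small hand-picked dataset covers every pair of values across three hypothetical fields; the fields, their three values each and the nine-row dataset are invented for the example.

    #include <stdio.h>

    #define ROWS   9
    #define FIELDS 3
    #define LEVELS 3    /* each hypothetical field has three possible values */

    /* Hand-picked dataset: 9 rows instead of the 27 needed for all permutations.
       Field 0 = country, field 1 = customer type, field 2 = payment method
       (all hypothetical), encoded as 0..2.                                      */
    static const int dataset[ROWS][FIELDS] = {
        {0,0,0}, {0,1,1}, {0,2,2},
        {1,0,1}, {1,1,2}, {1,2,0},
        {2,0,2}, {2,1,0}, {2,2,1},
    };

    int main(void) {
        int missing = 0;

        /* For every pair of fields and every pair of values, check that at
           least one row in the dataset exhibits that combination.           */
        for (int f1 = 0; f1 < FIELDS; f1++)
            for (int f2 = f1 + 1; f2 < FIELDS; f2++)
                for (int v1 = 0; v1 < LEVELS; v1++)
                    for (int v2 = 0; v2 < LEVELS; v2++) {
                        int covered = 0;
                        for (int r = 0; r < ROWS; r++)
                            if (dataset[r][f1] == v1 && dataset[r][f2] == v2)
                                covered = 1;
                        if (!covered) {
                            printf("Pair not covered: field %d=%d, field %d=%d\n",
                                   f1, v1, f2, v2);
                            missing++;
                        }
                    }

        printf("%d uncovered pairs (0 means full pairwise coverage)\n", missing);
        return 0;
    }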

Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data. Partitions can be used independently; data use in one area will have no effect on the results of tests in another. Data can be safely and effectively partitioned by machine / database / application instance, although this partitioning can introduce configuration management problems in software version, machine setup, environmental data and data load/reload. A useful and basic way to start with partitions is to set up, not a single environment for each test or tester, but three environments shared by many users, allowing different kinds of data use. These three have the following characteristics:

Safe area
- Used for enquiry tests, usability tests etc.
- No test changes the data, so the area can be trusted.
- Many testers can use it simultaneously.
Change area
- Used for tests which update or change data.
- Data must be reset or reloaded after testing.
- Used by one test/tester at a time.
Scratch area
- Used for investigative update tests and those which have unusual requirements.
- Existing data cannot be trusted.
- Used at the tester's own risk!
Testing rarely has the luxury of completely separate environments for each test and each tester. Controlling data, and the access to data, in a system can be fraught. Many different stakeholders have different requirements of the data, but a common requirement is that of exclusive use. While the impact of this requirement should not be underestimated, a number of stakeholders may be able to work with the same environmental data and, to a lesser extent, setup data, and their work may not need to change the environmental or setup data. The test strategy can take advantage of this by disciplined use of text / value fields, allowing the use of "soft" partitions.
"Soft" partitions allow the data to be split up conceptually, rather than physically. Although testers are able to interfere with each other's tests, the team can be educated to avoid each other's work. If, for instance, tester 1's tests may only use customers with Russian nationality and tester 2's tests only those with French, the two sets of work can operate independently in the same dataset. A safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses. Typically, values in free-text fields are used for soft partitioning.
Data partitions help because:
- They allow controlled and reliable data, reducing data corruption and change problems.
- They can reduce the need for exclusive access to environments and machines.
Clarity
Permutation techniques may make data easier to grasp by making the datasets small and commonly used, but we can make our data clearer still by describing each row in its own free-text fields, allowing testers to make a simple comparison between the free text (which is generally displayed on output) and actions based on fields which tend not to be directly displayed. Use of free-text fields with some correspondence to the internals of the record allows output to be checked more easily.
Testers often talk about items of data, referring to them by anthropomorphic personification, that is to say, they give them names. This allows shorthand, but also acts as jargon, excluding those who are not in the know. Setting this data, early on in testing, to have some meaningful value can be very useful, allowing testers to sanity-check input and output data, and choose appropriate input data for investigative tests. Reports, data extracts and sanity checks can also make use of these; sorting or selecting on a free-text field that should have some correspondence with a functional field can help spot problems or eliminate unaffected data.

Data is often used to communicate and illustrate problems to coders and to the business. However, there is generally no mandate for outside groups to understand the format or requirements of test data. Giving some meaning to the data, so that it can be referred to directly, can help improve mutual understanding. Clarity helps because it:
- Improves communication within and outside the team
- Reduces test errors caused by using the wrong data
- Allows another way of doing sanity checks for corrupted or inconsistent data
- Helps when checking data after input
- Helps in selecting data for investigative tests

12.4 Data Load and Data Maintenance


An important consideration in preparing data for functional testing is the ways in which the data can be loaded into the system, and the possibility and ease of maintenance.

Loading the data
Data can be loaded into a test system in three general ways.
- Using the system you are trying to test: The data can be manually entered, or data entry can be automated by using a capture/replay tool. This method can be very slow for large datasets. It uses the system's own validation and insertion methods, and can both be hampered by faults in the system and help pinpoint them. If the system is working well, data integrity can be ensured by using this method, and internally assigned keys are likely to be effective and consistent. Data can be well described in test scripts, or constructed and held in flat files. It may, however, be input in an ad-hoc way, which is unlikely to gain the advantages of good data listed above.
- Using a data load tool: Data load tools directly manipulate the system's underlying data structures. As they do not use the system's own validation, they can be the only way to get broken data into the system in a consistent fashion. As they do not use the system to load the data, they can provide a convenient workaround to known faults in the system's data load routines. However, they may come up against problems when generating internal keys, and can have problems with data integrity and parent/child relationships. Data loaded can have a range of origins. In some cases, all new data is created for testing. This data may be complete and well specified, but can be hard to generate. A common compromise is to use old data from an existing system, selected for testing, filtered for relevance and duplicates, and migrated to the target data format. In some cases, particularly for minor system upgrades, the complete set of live data is loaded into the system, but stripped of personal details for privacy reasons. While this last method may seem complete, it has disadvantages in that the data may not fully support testing, and that the large volume of data may make test results hard to interpret.
- Not loaded at all: Some tests simply take whatever is in the system and try to test with it. This can be appropriate where a dataset is known and consistent, or has been set up by a prior round of testing. It can also be appropriate in environments where data cannot be reloaded, such as the live system. However, it can be symptomatic of an uncontrolled approach to data, and is not often desirable.

Environmental data tends to be manually loaded, either at installation or by manipulating

environmental or configuration scripts. Large volumes of setup data can often be generated from existing datasets and loaded using a data load tool, while small volumes of setup data often have an associated system maintenance function and can be input using the system. Fixed input data may be generated or migrated and is loaded using any and all of the methods above, while consumable input data is typically listed in test scripts or generated as an input to automation tools. When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first. Each is appropriate in different circumstances, and due consideration should be given to the consequences; the sketch below illustrates the three load modes.
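As a small illustration of the append / overwrite / delete-first load modes just described, the Python sketch below runs each mode against a throwaway in-memory SQLite table. The table name, columns and data are invented for the example; a real data load tool would work against the system's actual structures.

    import sqlite3

    def load(rows, mode="append"):
        # Fresh throwaway database with one pre-existing record.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO customer VALUES (1, 'existing record')")

        if mode == "delete_first":
            conn.execute("DELETE FROM customer")          # wipe, then load
        for key, name in rows:
            if mode == "overwrite":
                # replace any existing row that shares the same key
                conn.execute("INSERT OR REPLACE INTO customer VALUES (?, ?)", (key, name))
            else:
                conn.execute("INSERT INTO customer VALUES (?, ?)", (key, name))
        return conn.execute("SELECT * FROM customer ORDER BY id").fetchall()

    print(load([(2, "new record")], mode="append"))        # existing + new
    print(load([(1, "new record")], mode="overwrite"))     # existing row replaced
    print(load([(1, "new record")], mode="delete_first"))  # only the new data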

12.5 Testing the Data


A theme brought out at the start of this paper was "a system is programmed by its data". In order to test the system, one must also test the data it is configured with: the environmental and setup data. Environmental data is necessarily different between the test and live environments. Although testing can verify that the environmental variables are being read and used correctly, there is little point in testing their values on a system other than the target system. Environmental data is often checked manually on the live system during implementation and rollout, and the wide variety of possible methods will not be discussed further here. Setup data can change often throughout testing, as the business environment changes - particularly if there is a long period between requirements gathering and live rollout. Testing done on the setup data needs to cover two questions:
- Does the planned/current setup data induce the functionality that the business requires?
- Will changes made to the setup data have the desired effect?
Testing for these two questions only becomes possible when that data is controlled. Aspects of all the elements above come into play:
- The setup data should be organized to allow a good variety of scenarios to be considered
- The setup data needs to be able to be loaded and maintained easily and repeatably
- The business needs to become involved in the data so that their setup for live can be properly tested
When testing the setup data, it is important to have a well-known set of fixed input data and consumable input data. This allows the effects of changes made to the setup data to be assessed repeatably and allows results to be compared. The advantages of testing the setup data include:
- Overall testing will be improved if the quality of the setup data improves
- Problems due to faults in the live setup data will be reduced
- The business can re-configure the software for new business needs with increased confidence
- Data-related failures in the live system can be assessed in the light of good data testing


12.6 Conclusion
Data can be influential on the quality of testing. Well-planned data can allow flexibility and help reduce the cost of test maintenance. Common data problems can be avoided or reduced with preparation and automation. Effective testing of setup data is a necessary part of system testing, and good data can be used as a tool to enable and improve communication throughout the project. The following points summarize the actions that can influence the quality of the data and the effectiveness of its usage:
- Plan the data for maintenance and flexibility
- Know your data, and make its structure and content transparent
- Use the data to improve understanding throughout testing and the business
- Test setup data as you would test functionality


13 Test Logs / Introduction


A test problem is a condition that exists within the software system that needs to be addressed. Carefully and completely documenting a test problem is the first step in correcting the problem. The following four attributes should be developed for all test problems:
Statement of condition - Tells what is.
Criteria - Tells what should be.
These two attributes are the basis for a finding. If a comparison between the two gives little or no practical consequence, no finding exists.
Effect - Tells why the difference between what is and what should be is significant.
Cause - Tells the reasons for the deviation. Identification of the cause is necessary as a basis for corrective action.
A well-developed problem statement will include each of these attributes. When one or more of these attributes is missing, questions almost always arise, such as:
Criteria - Why is the current state inadequate?
Effect - How significant is it?
Cause - What could have caused the problem?

13.1 Factors defining the Test Log Generation


Document Deviation: Problem statements begin to emerge by a process of comparison. Essentially the user compares "what is" with "what should be". When a deviation is identified between what is found to actually exist and what the user thinks is correct or proper, the first essential step toward the development of a problem statement has occurred. It is difficult to visualize any type of problem that is not in some way characterized by this deviation. The "what is" can be called the statement of condition; the "what should be" shall be called the criteria. These concepts are the first two, and the most basic, attributes of a problem statement. Documenting the deviation means describing the conditions as they currently exist, and the criteria, which represent what the user desires. The actual deviation will be the difference or gap between "what is" and "what is desired". The statement of condition is uncovering and documenting the facts as they exist. What is a fact? The statement of condition will of course depend on the nature and extent of the evidence or support that is examined and noted. For those facts making up the statement of condition, the I/S professional will need to ensure that the information is accurate, well supported, and worded as clearly and precisely as possible. The statement of condition should document as many of the following attributes as are appropriate to the problem.
Activities involved - The specific business or administrative activities that are being performed during test log generation are as follows:
Procedures used to perform work - The specific step-by-step activities that are utilized in producing the output from the identified activities.
Outputs / Deliverables - The products that are produced from the activity.
Inputs - The triggers, events, or documents that cause this activity to be executed.

Users / Customers served - The organization, individuals, or class of users/customers serviced by this activity.
Deficiencies noted - The status of the results of executing this activity and any appropriate interpretation of those facts.
The criterion is the user's statement of what is desired. It can be stated in either negative or positive terms. For example, it could indicate the need to reduce complaints or delays, as well as the desired processing turnaround time.
A work paper is used to describe the problem and to document the statement of condition and the statement of criteria. For example, the following work paper provides the information for test log documentation:

Field / Instructions for entering data:
Name of Software Tested - Put the name of the software or subsystem tested.
Problem Description - Write a brief narrative description of the variance uncovered from expectations.
Statement of Condition - Put the results of the actual processing that occurred here.
Statement of Criteria - Put what the testers believe was the expected result from processing.
Effect of Deviation - If this can be estimated, the testers should indicate what they believe the impact or effect of the problem will be on computer processing.
Cause of Problem - The testers should indicate what they believe is the cause of the problem, if known. If the testers are unable to do this, the work paper will be given to the development team and they should indicate the cause of the problem.
Location of the Problem - The testers should document where the problem occurred as closely as possible.
Recommended Action - The testers should indicate any recommended action they believe would be helpful to the project team. If not approved, the alternate action should be listed or the reason for not following the recommended action should be documented.
The work paper itself then carries these headings: Name of the Software Tested, Problem Description, Statement of Condition, Statement of Criteria, Effect of Deviation, Cause of the Problem, Location of the Problem, Recommended Action. A minimal sketch of such a work paper record, kept electronically, is shown below.
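For teams that keep these work papers electronically, the record might look like the following Python sketch. The class and field names simply mirror the fields listed above and are not prescribed by any particular tool; the example values are invented.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProblemWorkPaper:
        """One test-log work paper, mirroring the fields described above."""
        software_tested: str                        # name of the software / subsystem tested
        problem_description: str                    # brief narrative of the variance from expectations
        statement_of_condition: str                 # what actually happened
        statement_of_criteria: str                  # what the testers expected
        effect_of_deviation: Optional[str] = None   # estimated impact, if known
        cause_of_problem: Optional[str] = None      # filled by testers or development
        location_of_problem: Optional[str] = None
        recommended_action: Optional[str] = None

    paper = ProblemWorkPaper(
        software_tested="Payroll subsystem",
        problem_description="Net pay differs from expected value",
        statement_of_condition="Net pay calculated as 1020.00",
        statement_of_criteria="Net pay expected to be 1000.00",
    )
    print(paper)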

13.2 Collecting Status Data


Four categories of data will be collected during testing. These are explained in the following paragraphs.

Test Results Data
This data will include:
Test factors - The factors incorporated in the plan, the validation of which becomes the test objective.
Business objectives - The validation that specific business objectives have been met.

Interface objectives - Validation that data/objects can be correctly passed among software components.
Functions / sub-functions - Identifiable software components normally associated with the requirements of the software.
Units - The smallest identifiable software components.
Platform - The hardware and software environment in which the software system will operate.
Test Transactions, Test Suites, and Test Events
These are the test products produced by the test team to perform testing.
Test transactions / events - The types of tests that will be conducted during the execution of tests, which will be based on the software requirements.
Inspections - A verification of process deliverables against deliverable specifications.
Reviews - Verification that the process deliverables / phases are meeting the user's true needs.

Defects
This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:
- Date the defect was uncovered
- Name of the defect
- Location of the defect
- Severity of the defect
- Type of defect
- How the defect was uncovered (test data / test script)
The test logs should add to this information in the form of where the defect originated, when it was corrected, and when it was entered for retest.

Storing Data Collected during Testing

It is recommended that a database be established in which to store the results collected during testing. It is also suggested that the database be made available online through client/server systems, so that those with a vested interest in the status of the project can readily access it for status updates. As described, the most common test report is a simple spreadsheet, which indicates the project component for which the status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time.

Developing Test Status Reports


Reporting software status involves the following activities:
- Establish a measurement team
- Inventory existing project measures
- Develop a consistent set of project metrics
- Define process requirements
- Develop and implement the process
- Monitor the process
The test process should produce a continuous series of reports that describe the status of testing. The test reports are for the use of testers, test managers, and the software development team. The frequency of the test reports should be based on the discretion of the team and the extensiveness of the test process.


Use of Function/Test Matrix:


This shows which tests must be performed in order to validate the functions, and it is also used to determine the status of testing. Many organizations use a spreadsheet package to maintain test results. The intersection of a function row and a test column can be coded with a number or symbol to indicate the following:
1 = Test is needed, but not performed
2 = Test currently being performed
3 = Minor defect noted
4 = Major defect noted
5 = Test complete and function is defect-free for the criteria included in this test
In the Function/Test Matrix itself, the functions (for example A through E) form the rows and the individual tests form the numbered columns. A small sketch of how such a matrix might be maintained programmatically is shown below.
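A spreadsheet is the usual home for this matrix, but the Python sketch below shows the same idea as a small data structure, using the status codes 1-5 described above. The function names and test numbers are invented for illustration.

    # Minimal sketch of a Function/Test Matrix using the status codes above.
    STATUS = {1: "needed, not performed", 2: "in progress", 3: "minor defect",
              4: "major defect", 5: "complete, defect free"}

    matrix = {
        "A - Create order": {1: 5, 2: 5, 3: 2},
        "B - Amend order":  {1: 5, 2: 4},
        "C - Cancel order": {1: 1},
    }

    def report(matrix):
        for function, tests in matrix.items():
            for test_no, code in sorted(tests.items()):
                print(f"{function:20s} test {test_no}: {STATUS[code]}")

    report(matrix)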

13.2.1 Methods of Test Reporting

Reporting Tools - Use of word processing, database, defect tracking, and graphic tools to prepare test reports. DataVision, for example, is a database reporting tool similar to Crystal Reports. Reports can be viewed and printed from the application, or output as HTML, LaTeX2e, XML, DocBook, or tab- or comma-separated text files. From the LaTeX2e and DocBook output files you can in turn produce PDF, text, HTML, PostScript, and more. Some query tools available for Linux-based databases include:
- QMySQL
- dbMetrix
- PgAccess
- Cognos Powerhouse (not yet available for Linux; Cognos is looking into what interest people have in the product to assess what their strategy should be with respect to the Linux market)
- GRG - GNU Report Generator: the GRG program reads record and field information from a dBase3+ file, a delimited ASCII text file or an SQL query to an RDBMS, and produces a report listing. The program was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff, PostScript, HTML or any other kind of ASCII-based output format can be produced just as easily.
Word Processing: One way of increasing the utility of computers and word processors for the teaching of writing may be to use software that will guide the processes of generating, organizing, composing and revising text. This allows each person to use the normal functions of the computer keyboard that are common to all word processors, email editors, order entry systems, and database management products. From the Report Manager, however, you can quickly scan through any number of these reports and see how each person's history compares. A one-page summary report may be printed with either the Report Manager

program or from the individual keyboard or keypad software at any time. Individual reports include all of the following information:
- Status report
- Word processing tests or keypad tests
- Basic skills tests or data entry tests
- Progress graph
- Game scores
- Test report for each test
TestDirector: facilitates a consistent and repeatable testing process.
- A central repository for all testing assets facilitates the adoption of a more consistent testing process, which can be repeated throughout the application life cycle.
- Provides analysis and decision support: graphs and reports help analyze application readiness at any point in the testing process. Requirements coverage, run schedules, test execution progress and defect statistics can be used for production planning.
- Provides anytime, anywhere access to test assets: using TestDirector's web interface, testers, developers, business analysts and the client can participate in and contribute to the testing process.
- Traceability throughout the testing process: test cases can be mapped to requirements, providing adequate visibility of the test coverage of requirements. TestDirector links requirements to test cases, and test cases to defects.
- Manages both manual and automated testing: TestDirector can manage both manual and automated tests (WinRunner). Scheduling of automated tests can be done effectively using TestDirector.

Test Report Standards - Defining the components that should be included in a test report.
Statistical Analysis - The ability to draw statistically valid conclusions from quantitative test results.

Testing Data Used for Metrics

Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:
- Total number of tests
- Number of tests executed to date
- Number of tests executed successfully to date
Data concerning software defects include:
- Total number of defects corrected in each activity

- Total number of defects entered in each activity
- Average duration between defect detection and defect correction
- Average effort to correct a defect
- Total number of defects remaining at delivery
Software performance data is usually generated during system testing, once the software has been integrated and functional testing is complete:
- Average CPU utilization
- Average memory utilization
- Measured I/O transaction rate
A small sketch of computing some of these measurements follows.
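As an illustration, the Python sketch below derives a few of the defect measurements listed above from a small, invented defect log; real projects would draw the same numbers from their defect-tracking database.

    from datetime import date

    # Hypothetical defect log: (activity, date detected, date corrected or None)
    defects = [
        ("system test", date(2004, 3, 1), date(2004, 3, 4)),
        ("system test", date(2004, 3, 2), None),            # still open
        ("integration", date(2004, 2, 20), date(2004, 2, 21)),
    ]

    corrected = [d for d in defects if d[2] is not None]
    remaining = [d for d in defects if d[2] is None]
    avg_days_to_fix = sum((fixed - found).days for _, found, fixed in corrected) / len(corrected)

    print("Total defects:          ", len(defects))
    print("Corrected:              ", len(corrected))
    print("Remaining at this point:", len(remaining))
    print("Average days to correct:", avg_days_to_fix)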
Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes the following:
- Individual Project Test Report
- Integration Test Report
- System Test Report
- Acceptance Test Report
These test reports are designed to document the results of testing as defined in the test plan. The test report can be a combination of electronic data and hard copy. For example, if the function matrix is maintained electronically, there is no reason to print it, as a paper report will summarize the data, draw appropriate conclusions and present recommendations.
Purpose of a Test Report: The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, to assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions. The second long-term purpose is to use the data to analyze the rework process for making changes to prevent defects from occurring in the future. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished in case of a Y2K-style lawsuit.
Individual Project Test Report: These reports focus on individual projects (software systems). When different testers test individual projects, they should each prepare a report on their results.
Integration Test Report: Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them. The format is the same as the Individual Project Test Report except that the conditions tested are interfaces.
1. Scope of Test - This section indicates which functions were and were not tested.
2. Test Results - This section indicates the results of testing, including any variance between what is and what should be.
3. What works / What does not work - This section defines the functions and the interfaces that work and do not work.
4. Recommendations - This section recommends actions that should be taken to

fix functions/interfaces that do not work, and to make additional improvements.
System Test Report: A system test plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when the tests should occur. The system test report should present the results of executing that test plan. If these details are maintained electronically, they need only be referenced, not included in the report.
Acceptance Test Report: There are two primary objectives of the acceptance test report. The first is to ensure that the system as implemented meets the real operating needs of the user/customer. If the defined requirements are those true needs, testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people's skills and attitudes, time pressures, changing business conditions, and so forth. The Acceptance Test Report should address these criteria for user acceptance respectively.

13.2.2 Conclusion

The test logs obtained from the execution of the tests, and finally the test reports, should be designed to accomplish the following objectives:
- Provide information to the customer on whether the system should be placed into production and, if so, the potential consequences and the appropriate actions to minimize those consequences.
- One long-term objective is for the project and the other is for the information technology function. The project can use the test report to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions.
- The data can also be used to analyze the development process to make changes to prevent defects from occurring in the future. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects in the future.


14 Test Report


A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document contains a summary of the entire project and has to be presented in such a way that any person who has not worked on the project can also get a good overview of the testing effort.

Contents of a Test Report


The contents of a test report are as follows:
- Executive Summary
- Overview (Application Overview, Testing Scope)
- Test Details (Test Approach, Types of testing conducted, Test Environment, Tools Used)
- Metrics
- Test Results
- Test Deliverables
- Recommendations
These sections are explained as follows:

14.1 Executive Summary


This section comprises general information regarding the project, the client, the application, the tools and the people involved, in such a way that it can be taken as a summary of the Test Report itself; all the topics mentioned here are elaborated in the various sections of the report.
1. Overview
This comprises two sections: Application Overview and Testing Scope.
Application Overview - This includes detailed information on the application under test, the end users, and a brief outline of the functionality as well.
Testing Scope - This clearly outlines the areas of the application that would / would not be tested by the QA team. This is done so that there are no misunderstandings between the customer and QA as regards what needs to be tested and what does not need to be tested. This section also contains information on Operating System / Browser combinations if compatibility testing is included in the testing effort.

2. Test Details
This section contains the Test Approach, Types of Testing conducted, Test Environment and Tools Used.
Test Approach - This discusses the strategy followed for executing the project. It could include information on how coordination was achieved between onsite and offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily / weekly deliverables were delivered to the client, etc.
Types of testing conducted - This section mentions any specific types of testing performed, i.e. Functional, Compatibility, Performance, Usability etc., along with related specifications.
Test Environment - This contains information on the hardware and software requirements for the project, i.e. server configuration, client machine configuration, specific software installations required, etc.
Tools used - This section includes information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools or any other tools which made the testing work easier.
3. Metrics
This section includes details on the total number of test cases executed in the course of the project, the number of defects found, etc. Calculations like defects found per test case or the number of test cases executed per day per person would also be entered in this section. This can be used in calculating the efficiency of the testing effort.
4. Test Results
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. In case many defects have been logged for the project, graphs can be generated accordingly and depicted in this section. The graphs can be for defects per build, defects based on severity, defects based on status (i.e. how many were fixed and how many rejected), etc.

5. Test Deliverables
This section includes links to the various documents prepared in the course of the testing project, i.e. Test Plan, Test Procedures, Test Logs, Release Report, etc.

6. Recommendations
This section includes any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team, so that they can be taken care of in the next release of the application.


15 Defect Management
15.1 Defect
A mismatch between the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do.

15.2 Defect Fundamentals


A defect is a product anomaly or flaw. Defects include such things as omissions and imperfections found during the testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production will be considered as defects. Deviations from expectation that are to be tracked and resolved are also termed defects. An evaluation of the defects discovered during testing provides the best indication of software quality. Quality is the indication of how well the system meets the requirements, so in this context defects are identified as any failure to meet the system requirements. Defect evaluation is based on methods that range from a simple defect count to rigorous statistical modeling. Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing process. The actual data about defect rates is then fitted to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modelling.


15.2.1 Defect Life Cycle

15.3 Defect Tracking


After a defect has been found, it must be reported to development so that it can be fixed. The initial state of a defect will be "New". The Project Lead of the development team will review the defect and set it to one of the following statuses:
Open - Accepts the bug and assigns it to a developer.
Invalid Bug - The reported bug is not a valid one as per the requirements/design.
As Designed - This is intended functionality as per the requirements/design.
Deferred - This will be an enhancement.
Duplicate - The bug has already been reported.

Document - Once a defect is set to any of the above statuses apart from Open, and the testing team does not agree with the development team, it is set to Document status. Once the development team has started working on the defect, the status is set to WIP (Work in Progress) or, if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting. After the development team has fixed the defect, the status is set to FIXED, which means the defect is ready to re-test. If, on re-testing, the defect still exists, the status is set to REOPENED, which will follow the same cycle as an open defect. If the fixed defect satisfies the requirements / passes the test case, it is set to Closed. A minimal sketch of this life cycle as a state machine is given below.
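A minimal way to keep this life cycle honest in a home-grown tracking tool is to encode the allowed transitions, as in the Python sketch below. The status names and transitions are an illustrative interpretation of the cycle described above, not any tool's fixed workflow.

    # Sketch of the defect life cycle as a simple state machine.
    ALLOWED = {
        "New":         {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
        "Invalid Bug": {"Document"},
        "As Designed": {"Document"},
        "Deferred":    {"Document"},
        "Duplicate":   {"Document"},
        "Open":        {"WIP"},
        "WIP":         {"Dev Waiting", "Fixed"},
        "Dev Waiting": {"WIP"},
        "Fixed":       {"Reopened", "Closed"},
        "Reopened":    {"WIP"},
    }

    def move(current, new):
        """Reject any status change the life cycle does not allow."""
        if new not in ALLOWED.get(current, set()):
            raise ValueError(f"Illegal transition {current} -> {new}")
        return new

    status = "New"
    for step in ("Open", "WIP", "Fixed", "Closed"):
        status = move(status, step)
        print("Defect is now:", status)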

15.4 Defect Classification


The severity of bugs will be classified as follows:
Critical - The problem prevents further processing and testing. The Development Team must be informed immediately and they need to take corrective action immediately.
High - The problem affects selected processing to a significant degree, making it inoperable, causing data loss, or could cause a user to make an incorrect decision or entry. The Development Team must be informed that day, and they need to take corrective action within 3 - 24 hours.
Medium - The problem affects selected processing, but has a work-around that allows continued processing and testing. No data loss is suffered. These may be cosmetic problems that hamper usability or divulge client-specific information. The Development Team must be informed within 24 hours, and they need to take corrective action within 24 - 48 hours.
Low - The problem is cosmetic and/or does not affect further processing and testing. The Development Team must be informed within 48 hours, and they need to take corrective action within 48 - 96 hours.
A small lookup restating these response windows appears below.
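Where the team wants these response windows applied automatically, a small lookup such as the Python sketch below can drive reminders. The hour values simply restate the classification above as interpreted here and should be adjusted to whatever the project actually agrees.

    # Hypothetical severity -> response-window lookup, restating the table above.
    SEVERITY_SLA_HOURS = {
        "Critical": {"inform": 0,  "fix_by": 0},    # immediate
        "High":     {"inform": 0,  "fix_by": 24},
        "Medium":   {"inform": 24, "fix_by": 48},
        "Low":      {"inform": 48, "fix_by": 96},
    }

    def response_window(severity):
        sla = SEVERITY_SLA_HOURS[severity]
        return f"{severity}: inform within {sla['inform']}h, correct within {sla['fix_by']}h"

    print(response_window("High"))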


15.5 Defect Reporting Guidelines


The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into 5 points:
1. Give a brief description of the problem.
2. List the steps that are needed to reproduce the bug or problem.
3. Supply all relevant information such as version, project and data used.
4. Supply a copy of all relevant reports and data, including copies of the expected results.
5. Summarize what you think the problem is.
When you are reporting a defect, the more information you supply, the easier it will be for the developers to determine the problem and fix it. Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need. For example, cosmetic errors may only require a brief description of the screen, how to get to it and what needs to be changed. However, an error in processing will require a more detailed description, such as:
1. The name of the process and how to get to it.
2. Documentation on what was expected (expected results).
3. The source of the expected results, if available. This includes spreadsheets, an earlier version of the software and any formulas used.
4. Documentation on what actually happened (perceived results).
5. An explanation of how the results differed.
6. Identification of the individual items that are wrong.
7. If specific data is involved, a copy of the data both before and after the process.
8. Copies of any output.
As a rule, the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, and c) the complexity of reproducing the bug.
Anatomy of a bug report

Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem. In most cases, the more correct information given, the better. The report should explain exactly how to reproduce the problem and exactly what the problem is. The basic items in a report are as follows:
Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they've found a bug, it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug. If you are developing more than one product, identify the product in question.
Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset, one before the process and one after. If the dataset from before the process is not included, developers will be forced to try and find the bug based on forensic evidence. With the data, developers can trace what is happening.
Steps: List the steps taken to recreate the bug. Include all proper menu names, don't abbreviate and don't assume anything. After you've finished writing down the steps, follow them - make sure you've included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see if there are any steps that can be removed. When you report the steps, they should be the clearest steps to recreating the bug.


Description: Explain what is wrong. Try to weed out any extraneous information, but detail what is wrong. Include a list of what was expected. Remember to report one problem at a time; don't combine bugs in one report.
Supporting documentation: If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted. Include what you expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used). This information should be stored in a centralized location so that developers and testers have access to it. The developers need it to reproduce the bug, identify it and fix it. Testers will need this information for later regression testing and verification.

15.5.1 Summary

A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem but also to fix it. It is not enough to say that something is wrong; the report must also say what the system should be doing. The report should be written in clear, concise steps, so that someone who has never seen the system can follow the steps and reproduce the problem. It should include information about the product, including the version number and what data was used. The more organized the information provided, the better the report will be.


16 Automation
What is Automation?
Automated testing is automating the manual testing process currently in use.

16.1 Why Automate the Testing Process?


Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and have the ability to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability. In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations. Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.

Using Testing Effectively


By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), they remain repetitious throughout the development lifecycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation allows the tester to reduce or eliminate the "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse or press the enter key. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.

Reducing Testing Costs


The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than

individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine having the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why manual methods for load/stress testing are an expensive and logistical nightmare.

Replicating Testing Across Different Platforms


Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.

Repeatability and Control


By using automated techniques, the tester has a very high degree of control over which types of tests are being performed, and how the tests will be executed. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications as well as the effect of various user actions. For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value. Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run.

Greater Application Coverage


The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries, such as healthcare and pharmaceuticals, organizations are required to comply with strict quality

regulations as well as being required to document their quality assurance efforts for all parts of their systems.

16.2 Automation Life Cycle

Identifying Tests Requiring Automation


Most, but not all, types of tests can be automated. Certain types of tests, like user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.
High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records, invoicing and other high-volume activities where software failures would occur frequently.
Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme

disruptions in critical operations. Mission-critical processes are prime candidates for automated testing. Examples include: financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.
Applications with a Long Life Span - If an application is planned to be in production for a long period of time, the greater the benefits are from automation.
What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often poses enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.

Test Planning and Management


A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results. A robust tool will allow users to integrate existing test results into an automated test plan. Finally, an automated test tool should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.

Testing Product Integration


Testing tools should provide tightly integrated modules that support test component reusability. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing. All products within the testing product environment should be based upon a common, easy-to-understand language. User training and experience gained in performing one testing task should be transferable to other testing tasks. Also, the architecture of the testing tool environment should be open to support interaction with other technologies such as defect or bug tracking packages.

Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.

Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve.

GUI and Client/Server Testing

A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.

Load and Performance Testing

The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance. It should also provide test results in an easy-to-understand reporting format.

16.3 Preparing the Test Environment


Once the test cases have been created, the test environment can be prepared. The test environment is defined as the complete set of steps necessary to execute the test as described in the test plan. The test environment includes the initial set-up and description of the environment, and the procedures needed for installation and restoration of the environment.
Description - Document the technical environment needed to execute the tests.
Test Schedule - Identify the times during which your testing facilities will be used for a given test. Make sure that other groups that might share these resources are informed of this schedule.
Operational Support - Identify any support needed from other parts of your organization.
Installation Procedures - Outline the procedures necessary to install the application software to be tested.
Restoration Procedures - Finally, outline the procedures needed to restore the test environment to its original state. By doing this, you are ready to re-execute tests or prepare for a different set of tests.
Inputs to the Test Environment Preparation Process:
- Technical environment descriptions
- Approved test plan
- Test execution schedules
- Resource allocation schedule
- Application software to be installed


Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, those evaluating test automation should consider these fundamental planning steps. The time invested in detailed planning significantly improves the benefits resulting from test automation.

Evaluating Business Requirements

Begin the automated testing process by defining exactly what tasks your application software should accomplish in terms of the actual business activities of the end user. The definition of these tasks, or business requirements, defines the high-level, functional requirements of the software system in question. These business requirements should be defined in such a way as to make it abundantly clear whether the software system correctly (or incorrectly) performs the necessary business functions. For example, a business requirement for a payroll application might be to calculate a salary, or to print a salary check.

Creating a Test Plan


For the greatest return on automated testing, a testing plan should be created at the same time the software application requirements are defined. This enables the testing team to define the tests, locate and configure test-related hardware and software products, and coordinate the human resources required to complete all testing. This plan is very much a "living document" that should evolve as the application functions become more clearly defined. A good testing plan should be reviewed and approved by the test team, the software development team, all user groups and the organization's management. The following items detail the input and output components of the test planning process.

Inputs to the Test Planning Process


Application Requirements - What is the application intended to do? These should be stated in terms of the business requirements of the end users.
Application Implementation Schedules - When is the scheduled release? When are updates or enhancements planned? Are there any specific events or actions that are dependent upon the application?
Acceptance Criteria for Implementation - What critical actions must the application accomplish before it can be deployed? This information forms the basis for making informed decisions on whether or not the application is ready to deploy.

Test Design and Development


After the test components have been defined, the standardized test cases can be created that will be used to test the application. The type and number of test cases needed will be dictated by the testing plan.

A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case will include the following key components:
Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
Test Case Prerequisites - Identify set-up or testing criteria that must be established before a test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders and dependencies that might exist between test cases.
Test Procedures - Identify the application steps necessary to complete the test case.
Input Values - This section of the test case identifies the values to be supplied to the application as input including, if necessary, the action to be completed.
Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources - Take note of the sources for extracting test data if it is not included in the test case.
A minimal sketch of such a test case structure appears after the input and output lists below.
Inputs to the Test Design and Construction Process:
- Test case documentation standards
- Test case naming standards
- Approved test plan
- Business process documentation
- Business process flow
- Test data sources
Outputs from the Test Design and Construction Process:
- Revised test plan
- Test procedures for each test case
- Test case(s) for each application function described in the test plan
- Procedures for test set-up, test execution and restoration
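The Python sketch below captures these test case components as a simple record. The field names follow the list above, and the example values are invented; a real project would hold these in its test management tool.

    from dataclasses import dataclass, field
    from typing import List, Dict

    @dataclass
    class TestCase:
        """A test case holding the key components listed above."""
        name: str                                             # unique test case name
        prerequisites: List[str] = field(default_factory=list)
        execution_order: int = 0                              # run order / dependency hint
        procedure: List[str] = field(default_factory=list)    # application steps
        input_values: Dict[str, str] = field(default_factory=dict)
        expected_results: Dict[str, str] = field(default_factory=dict)
        data_sources: List[str] = field(default_factory=list)

    tc = TestCase(
        name="TC_LOGIN_001",
        prerequisites=["User account exists"],
        procedure=["Open login screen", "Enter credentials", "Press Login"],
        input_values={"username": "tester1", "password": "secret"},
        expected_results={"landing_page_title": "Home"},
    )
    print(tc.name, "-", len(tc.procedure), "steps")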

Executing the Test


The test is now ready to be run. This step applies the test cases identified by the test plan, documents the results, and validates those results against expected performance. Specific performance measurements of the test execution phase include:
Application of Test Cases - The test cases previously created are applied to the target software application as described in the testing environment.
Documentation - Activities within the test execution are logged and analyzed as follows:
- Actual results achieved during test execution are compared to expected application behavior from the test cases
- Test case completion status (Pass/Fail)
- Actual results of the behavior of the technical test environment
- Deviations taken from the test plan or test process

Inputs to the Test Execution Process:
- Approved test plan
- Documented test cases
- Stabilized, repeatable test execution environment
- Standardized test logging procedures
Outputs from the Test Execution Process:
- Test execution log(s)
- Restored test environment
The test execution phase of your software test process will control how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. The problems experienced in test execution are usually attributed to not properly performing steps from earlier in the process. Additionally, there may be several test execution cycles necessary to complete all the types of testing required for your application. For example, a test execution may be required for the functional testing of an application, and a separate test execution cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles. The secret to a controlled test execution is comprehensive planning. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing.

Measuring the Results


This step evaluates the results of the test as compared to the acceptance criteria set down in the test plan. Specific elements to be measured and analyzed include:
Test Execution Log Review - The log review compiles a listing of the activities of all test cases, noting those that passed, failed or were not executed.
Determine Application Status - This step identifies the overall status of the application after testing, for example: ready for release, needs more testing, etc.
Test Execution Statistics - This summary identifies the total number of tests that were executed, the type of test, and the completion status.
Application Defects - This final and very important report identifies potential defects in the software, including application processes that need to be analyzed further.
A minimal sketch of an execution-log summary is given below.
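The Python sketch below shows the kind of log review and execution statistics described above, computed from an invented list of test case results; the crude readiness decision at the end is only illustrative.

    from collections import Counter

    # Hypothetical test execution log: (test case, completion status)
    execution_log = [
        ("TC_LOGIN_001", "Pass"),
        ("TC_LOGIN_002", "Fail"),
        ("TC_ORDER_001", "Pass"),
        ("TC_ORDER_002", "Not Executed"),
    ]

    stats = Counter(status for _, status in execution_log)
    total = len(execution_log)

    print("Test execution statistics")
    for status in ("Pass", "Fail", "Not Executed"):
        print(f"  {status:13s}: {stats.get(status, 0)} of {total}")

    # A very crude application-status decision based on the counts above.
    application_status = ("needs more testing"
                          if stats.get("Fail") or stats.get("Not Executed")
                          else "ready for release")
    print("Application status:", application_status)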

16.4 Automation Methods


Capture/Playback Approach
Capture/playback tools capture the sequence of manual operations, entered by the test engineer, in a test script. These sequences are played back during test execution. The benefit of this approach is that the captured session can be re-run at some later point in time to ensure that the system performs the required behavior. The shortcoming of capture/playback is that in many cases, if the system functionality changes, the capture/playback session will need to be completely re-recorded to capture the new sequence of user interactions. Tools like WinRunner provide a scripting language, and it is possible for engineers to edit and maintain such scripts. This sometimes reduces the effort over the completely manual approach; however, the overall saving is usually minimal.

Data Driven Approach

A data-driven test plays back the same user actions but with varying input values. This allows one script to test multiple sets of positive data, and it is applicable when large volumes of different data sets need to be fed to the application and checked for correctness. The benefit of this approach is that it takes less time, and is more accurate, than testing the same cases manually, and testing can be done with positive and negative data sets in the same run.
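A minimal sketch of the data-driven idea, assuming the test data lives in a CSV file and that submit_login stands in for whatever action the recorded script actually performs:

import csv

def submit_login(username, password):
    # Placeholder for the recorded user action; a real script would drive the UI here.
    return username == "valid_user" and password == "secret"

# login_data.csv is assumed to contain:  username,password,expected
#                                        valid_user,secret,True
#                                        valid_user,wrong,False
with open("login_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        actual = submit_login(row["username"], row["password"])
        expected = row["expected"] == "True"
        print(row["username"], "PASS" if actual == expected else "FAIL")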

Test Script Execution

In this phase we execute the scripts that have already been created. Scripts need to be reviewed, validated for results and accepted as functioning as expected before they are used live. Steps to be followed before execution of scripts:
1. Install the test tool on the machine.
2. Install the test environment / application to be tested on the machine.
3. Take care of the prerequisites for running the scripts, such as tool settings, playback options, and any necessary data table or datapool updates.
4. Select the script that needs to be executed and run it.
5. Wait until execution is done.
6. Analyse the results in Test Manager or in the logs.

Test script execution process:

[Flow: Test tool ready -> Test application ready -> Tool settings / playback options -> Script execution -> Result analysis -> Defect management]

16 General automation tool comparison


Anyone who has contemplated the implementation of an automated test tool has quickly realized the wide variety of options on the market, in terms of both the kinds of test tools being offered and the number of vendors. The best tool for any particular situation depends on the system engineering environment that applies and the testing methodology that will be used, which in turn dictate how automation will be invoked to support the process. This appendix evaluates major tool vendors on their test tool characteristics, test execution capability, tool integration capability, test reporting capability, performance testing and analysis, and vendor qualification. The tool vendors evaluated are Compuware, Empirix/RSW, Mercury, Rational, and Segue.

16.1 Functional Test Tool Matrix


The Tool Matrix is provided for quick and easy reference to the capabilities of the test tools. Each category in the matrix is given a rating of 1 - 5: 1 = excellent support for this functionality; 2 = good support, but lacking, or another tool provides more effective support; 3 = basic support only; 4 = only supported by use of an API call or a third-party add-in, not included in the general test tool / below average; 5 = no support. In general, a set of criteria can be built up using this matrix and an indicative score obtained to help in the evaluation process. Usually the lower the score the better, but this is subjective and is based on the experience of the author and of the test professionals whose opinions were used to create this document. A detailed description of each of the categories used in the matrix is given below.
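Because a lower total is better in this matrix, one way to build your own set of criteria is to weight each category by how much it matters to your project and sum the weighted ratings. The sketch below uses invented ratings and weights purely for illustration, not the actual matrix values.

# Hypothetical ratings (1 = excellent ... 5 = no support) for one tool.
ratings = {"record_playback": 1, "web_testing": 2, "database_tests": 2, "cost": 3}

# Weights reflect how important each category is to this project (higher = more important).
weights = {"record_playback": 3, "web_testing": 5, "database_tests": 2, "cost": 1}

def weighted_score(ratings, weights):
    """Lower is better, as in the tool matrix."""
    return sum(ratings[c] * weights.get(c, 1) for c in ratings)

print(weighted_score(ratings, weights))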

16.2 Record and Playback

This category details how easy it is to record and play back a test. Does the tool support low-level recording (mouse drags, exact screen locations)? Is there object recognition when recording and playing back, or does it appear to record correctly but then fail on playback even though nothing in the environment or the unique IDs has changed? How easy is it to read the recorded script? When automating, this is the first thing that most test professionals will do: record a simple script, look at the code, and then play it back. It is very similar to recording a macro in, say, Microsoft Access. Eventually record and playback becomes less and less a part of the automation process, as it is usually more robust to use the built-in functions to test objects, databases and so on directly. However, this should be done as a minimum in the evaluation process, because if the tool of choice cannot recognize the application's objects, then the automation process will be a very tedious experience.


16.3 Web Testing

Web-based functionality is now a part of everyday life in most applications. As such, the test tool should provide good web-based test functionality in addition to its client/server functions. In judging the rating for this category I looked at each tool's native support for HTML tables, frames, the DOM, various browser platforms, web site maps and links. Web testing can be riddled with problems if various considerations are not taken into account. Here are a few examples: Are there functions to tell me when the page has finished loading? Can I tell the test tool to wait until an image appears? Can I test whether links are valid or not? Can I test web-based object functions - is it enabled, does it contain data? Are there facilities that allow me to programmatically look for objects of a certain type on a web page, or to locate a specific object? Can I extract data from the web page itself, e.g. the title or a hidden form element? With client/server testing the target customer is usually well defined: you know what network operating system you will be using, the applications, and so on. On the web it is far different. A person may be connecting from the USA or from Africa, they may be disabled, they may use various browsers, and the screen resolution on their computers will differ. They will speak different languages, have fast or slow connections, and connect using Mac, Linux or Windows. So the cost to set up a test environment is usually greater than for a client/server test, where the environment is fairly well defined.
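Some of the checks listed above (has the page loaded, are the links valid) can also be scripted outside a commercial tool. A rough sketch using only the Python standard library, assuming the site is reachable over plain HTTP(S):

from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    html = urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    parser = LinkCollector()
    parser.feed(html)
    for link in parser.links:
        target = urljoin(page_url, link)
        try:
            result = urlopen(target, timeout=10).status   # HTTP status if reachable
        except Exception as exc:
            result = exc                                   # broken or unreachable link
        print(target, result)

check_links("https://example.com/")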

16." ata0ase Tests


.ost applications will provide the facility to preserve data outside of itself. This is usually achieved by holding the data in a 8atabase. !s such) chec#ing what is in the bac#end database usually verifies the proper validation of tests carried out on the front end of an application. ?ecause of the many databases available e.g. "racle) 8?7) SX>Server) Sybase) &nformi3) &ngres) etc all of them support a universal (uery language #nown as SX> and a protocol for communicating with these databases called "8?C AO8?C can be used on +ava environmentsB. & have loo#ed at all the tools support for SX>) "8?C and how they hold returned data e.g. is this in an array) a cursor) a variable) etc. Gow does the tool manipulate this returned data1 Can it call stored procedures and supply re(uired input variables1 $hat is the range of functions supplied for this testing1
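The back-end check itself usually reduces to a query whose result is compared with what the front end claims to have saved. A minimal sketch using an in-memory SQLite database as a stand-in for whatever ODBC/SQL source the application really uses:

import sqlite3

# Stand-in for the application's database; a real test would open an ODBC/SQL connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no TEXT, amount REAL)")
conn.execute("INSERT INTO orders VALUES ('A-100', 250.0)")

def verify_order(order_no, expected_amount):
    """Confirm that the value entered on the front end actually reached the database."""
    row = conn.execute(
        "SELECT amount FROM orders WHERE order_no = ?", (order_no,)
    ).fetchone()
    return row is not None and row[0] == expected_amount

print(verify_order("A-100", 250.0))   # True
print(verify_order("A-101", 99.0))    # False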

16.5 Data Functions

As mentioned above, applications usually provide a facility for storing data offline, so to test this we need to create data to input into the application. I have looked at each tool's facilities for creating and manipulating data. Does the tool allow you to specify the type of data you want? Can you automatically generate data? Can you interface with files, spreadsheets, etc. to create and extract data? Can you randomise the access to that data, and is that access truly random? This functionality is normally more important than database tests, as databases usually have their own interface for running queries; however, applications (manual input aside) do not usually provide facilities for bulk data input. The added benefit, as I have found, is that this functionality can also be used for production purposes, e.g. the aforementioned bulk data input sometimes carried out in data migration or application upgrades. These functions also become very important as you move from the record/playback phase, to data-driven testing, to framework testing. Data-driven tests replace hard-coded names, addresses, numbers and so on with variables supplied from an external source, usually a CSV (comma separated values) file, spreadsheet or database. Frameworks are usually the ultimate goal in deploying automation test tools. Frameworks provide an interface to all the applications under test by exposing a suitable list of functions, databases, etc. This allows an inexperienced tester or user to run tests by simply providing the test framework with known commands and variables. A test framework has parallels with software frameworks, where you develop an encapsulation layer of software around the applications, databases, etc. and expose functions, classes and methods that are used to call the underlying applications, return data and input data. However, doing this requires a lot of time, skilled resources and money.
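Bulk test data of the kind described here can be generated with a few lines of code. The sketch below writes a CSV of invented customer rows suitable for data-driven input; the field names are purely illustrative.

import csv
import random
import string

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length)).capitalize()

def generate_customers(path, rows=100):
    """Write a CSV of made-up customers usable as data-driven test input."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "name", "credit_limit"])
        for i in range(1, rows + 1):
            writer.writerow([i, random_name(), random.randint(500, 10000)])

generate_customers("customers.csv", rows=50)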

16.6 Object Mapping

If you are in a role that can help influence the design of a product, try to get the development and design teams to use standard rather than custom objects; then, hopefully, you will not need this functionality. In practice you may find that most of the application has been implemented using standard objects supported by your test tool vendor, but that a few objects are custom ones. Most custom objects behave like a similar standard control. Here are a few standard objects seen in everyday applications:
Pushbuttons
Checkboxes
Radio buttons
List views
Edit boxes
Combo boxes

If you have a custom object that behaves like one of these, are you able to map it - that is, tell the test tool that the custom control behaves like the standard control? Does it support all the standard control's methods? Can you add the custom control to its own class of control?

16.7 Image Testing

Let's hope this is not a major part of your testing effort, but occasionally you may have to test bitmaps and similar images. You may also need this when the application has painted controls, like those in the calculator application found on many Windows installations. At least one of the tools allows you to map painted controls to standard controls, but to do this you have to rely on the screen coordinates of the image. Does the tool provide OCR (optical character recognition)? Can it compare one image against another? How fast is the comparison, and how long does a failed comparison take? Does the tool allow you to mask certain areas of the screen when comparing? I have looked at these facilities in the base tool set.

16.8 Test/Error Recovery

This can be one of the most difficult areas to automate, but if it is automated it provides the foundation for a truly robust test suite. Suppose the application crashes while I am testing - what can I do? If a function does not receive the correct information, how can I handle this? If I get an error message, how do I deal with it? If I access a web site and get a warning, what do I do? If I cannot get a database connection, how do I skip those tests? The test tool should provide facilities to handle these questions. I looked at the tools' built-in wizards for standard test recovery (when you finish tests or when a script fails) and at recovery from errors caused by the application and the environment. How easy is it to build this into your code? The rating given depends on how many errors the tool can capture, the types of errors, and how it recovers from them.
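Whatever the tool, the recovery idea reduces to wrapping each test in a handler that logs the failure, restores a known state and moves on. A language-neutral sketch; reset_application and the two test functions are assumptions standing in for real recovery and test logic.

import traceback

def reset_application():
    # Assumption: restart or re-login to bring the application back to a known state.
    print("recovering: restoring application to its start state")

def run_with_recovery(tests):
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "Pass"
        except Exception:
            # Log the error, recover, and continue with the remaining tests.
            traceback.print_exc()
            reset_application()
            results[test.__name__] = "Fail"
    return results

def test_login():
    pass                                   # would drive the application here

def test_search():
    raise RuntimeError("simulated application crash")

print(run_with_recovery([test_login, test_search]))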

16.9 Object Name Map

As you test your application with the test tool of your choice, you will notice that it records actions against the objects it interacts with. These objects are identified either through coordinates on the screen or, preferably, via some unique object reference referred to as a tag, object ID, index, name, and so on. Firstly, the tool should provide services to uniquely identify each object it interacts with, by various means; the last and least desirable should be coordinates on the screen. Once you are well into automation and have built up tens and hundreds of scripts that reference these objects, you will want a mechanism that provides an easy update if the application being tested changes. All tools provide a search-and-replace facility, but the best implementations are those that provide a central repository to store these object identities. The premise is that it is better to change the reference in one place than to go through each of the scripts to replace it there. I found this to be true, but not as big a point as some have stated, because tools that do not support the central repository scheme can be programmed to reference window and object names in one place (say, via a variable), and that variable can be used throughout the script wherever the object appears. Does the Object Name Map allow you to alias the name, or change the name given by the tool to something more meaningful?
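The central-repository idea is simply a level of indirection: scripts refer to a logical name, and one table maps that name to whatever identifier the application currently exposes. A minimal sketch; the window and control names are invented.

# One place to maintain object identities; scripts only ever use the logical keys.
OBJECT_MAP = {
    "login.username": {"window": "Login", "id": "txtUser"},
    "login.password": {"window": "Login", "id": "txtPass"},
    "login.ok":       {"window": "Login", "id": "btnOK"},
}

def locate(logical_name):
    """Resolve a logical name to the identifiers the test tool needs."""
    return OBJECT_MAP[logical_name]

# If the developers rename btnOK, only the map changes, not the scripts.
print(locate("login.ok"))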

16.10 Object Identity Tool

Once you become more proficient with automation testing, one of the primary means of identifying objects will be via an ID tool: a sort of spy that looks at the internals of the object, giving you details like the object name, ID and similar. This allows you to reference that object within a function call. The tool should give you details of some of the object's properties, especially those associated with uniquely identifying the object or window. The tool will usually provide the tester with a point-and-ID service, where you can use the mouse to point at the object and see, in a separate window, all of that object's IDs and properties. Many of the tools also allow you to search all the open applications in one sweep and show the result in a tree that you can inspect when required.

16.11 Extensible Language

Here is a question you will hear time and time again in automation forums: "How do I get [insert test tool name here] to do such and such?" There will be one of four answers: I don't know; it can't do it; it can do it using function x, y or z; or it can't be done in the standard language, but you can do it like this.

What we are concerned with in this section is the last answer: if the standard test language does not support something, can I create a DLL or extend the language in some other way to do it? This is usually an advanced topic and is not encountered until the trained tester has been using the tool for at least 6 - 12 months. However, when it is encountered, the tool should support language extension. If extension is via DLLs, the tester must have knowledge of a traditional development language, e.g. C, C++ or VB. For instance, to extend a tool that can use DLLs created in VB, I would need Visual Basic: I would open, say, an ActiveX DLL project, create a class containing various methods (similar to functions), build the DLL file, register it on the machine, and then reference that DLL from the test tool, calling the methods according to their specification. This will sound a lot clearer as you progress with the tools, and this document will be updated to include advanced topics like extending the tools' capabilities. Some tools provide extension by allowing you to create user-defined functions, methods and classes, but these are normally a mixture of the already supported data types and functions rather than an extension of the tool beyond its released functionality. Because this is an advanced topic, I have not taken ease of use into account: those who have reached this level should already have exhausted the tool's current capabilities, want to use external functions such as the Win32 API, and have a good grasp of programming.
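In Python, the closest equivalent to extending a tool with a DLL is loading a shared library through ctypes. The sketch below calls one well-known Win32 API function, so it assumes a Windows machine and is only meant to show the calling pattern, not any particular test tool's extension mechanism.

import ctypes
import sys

if sys.platform == "win32":
    user32 = ctypes.windll.user32            # load the system DLL
    # GetSystemMetrics(0) / GetSystemMetrics(1) return the primary screen width/height.
    width = user32.GetSystemMetrics(0)
    height = user32.GetSystemMetrics(1)
    print(f"screen resolution: {width}x{height}")
else:
    print("sketch targets the Win32 API; skipped on this platform")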


16.12 Environment Support

How many environments does the tool support out of the box? Does it support the latest Java release? What about Oracle, PowerBuilder, WAP, and so on? Most tools can interface to unsupported environments if the developers in that environment provide classes, DLLs and the like that expose some of the application's details - but whether a developer will, or has time to, do this is another question. Ultimately, environment support is the most important part of automation. If the tool does not support your environment or application, then you are in trouble, and in most cases you will need to revert to testing the application manually (more shelfware).

16.13 Integration

How well does the tool integrate with other tools? This is becoming more and more important. Does the tool allow you to run it from various test management suites? Can you raise a bug directly from the tool and feed the information gathered from your test logs into it? Does it integrate with products like Word, Excel or requirements management tools? When managing large test projects - an automation team of more than five and testers totalling more than ten - the management aspect and the tool's integration move further up the importance ladder. An example: a major bank wants to redesign its workflow management system to allow faster processing of customer queries. The anticipated requirements for the new workflow software number in the thousands; to test these requirements, 40,000 test cases have been identified, 20,000 of which can be automated. How do I manage this? This is where a test management tool comes in really handy. And how do I manage the bugs raised as a result of automation testing? Integration becomes very important, rather than having separate systems that do not share data and that may require duplication of information. The vendors that score higher here are those that provide tools outside the testing arena, as they can build integration into their other products - and when it has come down to the wire on some projects, we have gone with the tool that integrated with the products we already had.

16.1"Cost
&n my opinion cost is the least significant in this matri3) why1 ?ecause all the tools are similar in price e3cept =isual Test that is at least , times cheaper than the rest but as you will see from the matri3 there is a reason. !lthough very functional it does not provide the range of facilities that the other tools do. 6rice typically ranges from d7)9-- - d,)--- Adepending on (uantity brought) pac#ages) etcB in the @S and around e7)9-- - e,)--- in the @E for the base tools included in this document. So you #now the tools will all cost a similar price it is usually a case of which one will do the +ob for me rather than which is the cheapest. =isual Test & believe will prove to be a bigger hit as it e3pands its functional range it was not that long ago where it did not support web based testing.

The prices are kept this high because they can be. All the tools are roughly the same price, and the volume of sales is low relative to, say, a fully fledged programming language IDE like JBuilder or Visual C++, which are far more function-rich and flexible than any of the test tools. On top of the above prices you usually pay an additional maintenance fee of between 10 and 20%. There are not many applications I know of that cost this much per licence - not even some very advanced operating systems. However, it is all a matter of supply: the bigger the supply, the lower the price, as the development costs can be spread more widely. I do not anticipate the prices moving upwards, as this seems to be what the market will tolerate. Visual Test also provides a free runtime licence.

16.15 Ease Of Use

This section is very subjective, but I have used testers (my guinea pigs) of various levels and got them, from scratch, to use each of the tools. More often than not they agreed on which was initially the easiest to use. Obviously this can change as the tester becomes more experienced and issues such as extensibility, script maintenance, integration and data-driven tests become relevant. However, this score is based on the productivity that can be gained in, say, the first three months, when those issues are not such a big concern. Ease of use includes out-of-the-box functions, debugging facilities, screen layout, help files and user manuals.

16.16 Support

In the UK this can be a problem, as most of the test tool vendors are based in the USA with satellite branches in the UK. From my own experience and that of the testers I know in the UK, we have found Mercury to be the best for support, then Compuware, then Rational, and last Segue. Having said that, you can find a lot of resources for Segue on the Internet, including a forum at www.betasoft.com that can provide most of the answers without ringing the support line. On their websites, Segue and Mercury provide much useful user- and vendor-contributed material. I have also included various other criteria, such as the availability of skilled resources, online resources, the validity of responses from the helpdesk, and the speed of those responses.

16.17 Object Tests

Now, presuming the tool of choice does work with the application you wish to test, what services does it provide for testing object properties? Can it validate several properties at once? Can it validate several objects at once? Can you set object properties to capture the application state? This should form the bulk of your verification as far as the automation process is concerned, so I have looked at the tools' facilities on client/server as well as web-based applications.

16.18 Matrix

What follows after the matrix is a tool-by-tool comparison under the appropriate headings (as listed above), so that the user can get a feel for the tools' functionality side by side. Each category in the matrix is given a rating of 1 - 5: 1 = excellent support for this functionality; 2 = good support, but lacking, or another tool provides more effective support; 3 = basic support only; 4 = only supported by use of an API call or a third-party add-in, not included in the general test tool / below average; 5 = no support.

[Matrix: ratings of 1 - 5 for WinRunner, QARun, SilkTest, Visual Test and Robot against each category: Record & Playback, Web Testing, Database Tests, Data Functions, Object Mapping, Image Testing, Test/Error Recovery, Object Name Map, Object Identity Tool, Extensible Language, Environment Support, Integration, Object Tests, Ease of Use, Support and Cost.]
16.19 Matrix score

WinRunner = 24, QARun = 25, SilkTest = 24, Visual Test = 33, Robot = 24



17 Sample Test Automation Tool


Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. When it comes to object-oriented development they are the acknowledged leaders, with many of the leading OO experts working for them. Some of their products are worldwide leaders, e.g. Rational Robot, Rational Rose, ClearCase and RequisitePro. Their Unified Process, which I have been involved with, is a very good development model that allows requirements to be mapped to use cases and test cases, with a whole set of tools to support the process.

17.1 Rational Suite of tools


Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle. A baseline version of RequisitePro is included with Rational TestManager; when you define a test requirement in RequisitePro, you can access it in TestManager.
Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With ClearQuest you can manage every type of change activity associated with software development, including enhancement requests, defect reports, and documentation modifications.
Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components of an application, including third-party libraries, ensuring that code is reliable.
Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and eliminate performance bottlenecks within an application.
Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised, preventing untested code from reaching the end user.
Rational Suite PerformanceStudio is a sophisticated tool for automating performance tests on client/server systems. A client/server system includes client applications accessing a database or application server, and browsers accessing a Web server. PerformanceStudio includes Rational Robot and Rational LoadTest: use Robot to record client/server conversations and store them in scripts, and use LoadTest to schedule and play back the scripts.


Rational Robot facilitates functional and performance testing by automating the recording and playback of test scripts. It allows you to write, organize, and run tests, and to capture and analyze the results.
Rational TestFactory automates testing by combining automatic test generation with source-code coverage analysis. It tests an entire application, including all GUI features and all lines of source code.
During playback, Rational LoadTest can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and Web servers.
Rational Test categorizes test information within a repository by project. You can use the Rational Administrator to create and manage projects.

The tools discussed here are Rational Administrator, Rational Robot and Rational Test Manager.

17.2 Rational Administrator


What is a Rational project? A Rational project is a logical collection of databases and data stores that associates the data you use when working with Rational Suite. A Rational project is associated with one Rational Test data store, one RequisitePro database, one ClearQuest database, and multiple Rose models and RequisitePro projects, and it optionally places them under configuration management. Rational Administrator is used to create and manage Rational repositories, users and groups, and to manage security privileges. How do you create a new project?


"pen the <ational administrator and go to =ile/N<ew )roHect. &n the above window opened enter the details li#e 6ro+ect name and location. Clic# <e.t. &n the corresponding window displayed) enter the 6assword if you want to protect the pro+ect with password) which is re(uired to connect to) configure or delete the pro+ect.

Clic# =inish. &n the configure pro+ect window displayed clic# the Create button. To manage the <e(uirements assets connect to <e(uisite 6ro) to manage test assets create associated test data store and for defect management connect to Clear (uest database.


"nce the Create button in the Configure pro+ect window is chosen) the below seen Create Test 8ata store window will be displayed. !ccept the default path and clic# 4@ button.


"nce the below window is displayed it is confirmed that the Test datastore is successfully created and clic# 4@ to close the window.

Clic# 4@ in the configure pro+ect window and now your first $ational proHect is ready to play withO.


Rational Administrator will display your "TestProject" details as below:

17.3 Rational Robot


Rational Robot is used to develop three kinds of scripts: GUI scripts for functional testing, and VU and VB scripts for performance testing. Robot can be used to:
Perform full functional testing - record and play back scripts that navigate through your application and test the state of objects through verification points.
Perform full performance testing - use Robot and TestManager together to record and play back scripts that help you determine whether a multi-client system is performing within user-defined standards under varying loads.

Create and edit scripts using the SQABasic, VB, and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development. Test applications developed with IDEs such as Visual Basic, Oracle Forms, PowerBuilder, HTML, and Java. Test objects even if they are not visible in the application's interface. Collect diagnostic information about an application during script playback: Robot is integrated with Rational Purify, Quantify, and PureCoverage, so you can play back scripts under a diagnostic tool and see the results in the log.

The "b+ect-"riented <ecording technology in <obot lets you generate scripts (uic#ly by simply running and using the application-under-test. <obot uses "b+ect-"riented <ecording to identify ob+ects by their internal ob+ect names) not by screen coordinates. &f ob+ects change locations or their te3t changes) <obot still finds them on playbac#. The "b+ect Testing technology in <obot lets you test any ob+ect in the application-under-test) including the ob+ectIs properties and data. Lou can test standard $indows ob+ects and &84specific ob+ects) whether they are visible in the interface or hidden.

17.4 Robot login window

"nce logged you will see the robot window. %o to ;ile-U ew-UScript


In the screen displayed, enter the name of the script - say "First Script" - by which the script will be referred to from now on, together with an optional description. The type of the script is GUI for functional testing and VU for performance testing.

17.5 Rational Robot main window - GUI script


The GUI Script window (top pane) displays GUI scripts that you are currently recording, editing, or debugging. It has two panes:
Asset pane (left) - lists the names of all verification points and low-level scripts for this script.
Script pane (right) - displays the script.

The "utput window bottom paneB has two tabsD ?uild W 8isplays compilation results for all scripts compiled in the last operation. >ine numbers are enclosed in parentheses to indicate lines in the script with warnings and errors. Console W 8isplays messages that you send with the SX!Console$rite command. !lso displays certain system messages from <obot.

To display the Output window, click View > Output. How do you record and play back a script? To record a script, go to Record -> Insert at Cursor, then perform the navigation in the application to be tested; once recording is done, stop the recording with Record -> Stop.

17.6 Record and Playback options


Go to Tools -> GUI Record Options and the window below will be displayed.


In this window we can set general options, such as the identification of lists and menus and the recording of think time, on the General tab.
Web Browser tab: specify the browser type, IE or Netscape.
Robot Window tab: how Robot should be displayed during recording, and hotkey details.
Object Recognition Order: the order in which objects are identified during recording; for example, select a preference in the Object Order Preference list. If you will be testing C++ applications, change the object order preference to C++ Recognition Order.

17.6.1 Playback options

Go to Tools -> Playback Options to set the options needed while running the script. These options help you handle unexpected windows during playback, define error recovery, set the time-out period, and manage the log and log data.

17.7 Verification points

A verification point is a point in a script that you create to confirm the state of an object across builds of the application under test. During recording, the verification point captures object information (based on the type of verification point) and stores it in a baseline data file. The information in this file becomes the baseline of the expected state of the object during subsequent builds. When you play back the script against a new build, Robot retrieves the information in the baseline file for each verification point and compares it to the state of the object in the new build. If the captured object does not match the baseline, Robot creates an actual data file; the information in this file shows the actual state of the object in the build. After playback, the results of each verification point appear in the log in TestManager. If a verification point fails (the baseline and actual data do not match), you can select the verification point in the log and click View > Verification Point to open the appropriate Comparator, which displays the baseline and actual files so that you can compare them.

A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window; the verification point script command, which always begins with Result =, appears in the Script (right) pane. Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. You can easily copy verification points to other scripts if you want to reuse them.
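The baseline/actual mechanism can be pictured as nothing more than a stored snapshot compared with a fresh capture. A simplified sketch of that idea only - the property names are invented and no Rational file format is implied:

import json

def capture_object_state():
    # Assumption: in a real run this would query the application under test.
    return {"caption": "Order Entry", "enabled": True, "item_count": 12}

def verify(baseline_path):
    """Compare the current object state with the stored baseline."""
    try:
        with open(baseline_path) as f:
            baseline = json.load(f)
    except FileNotFoundError:
        # First run: record the baseline, as the tool does during recording.
        with open(baseline_path, "w") as f:
            json.dump(capture_object_state(), f)
        return "baseline created"
    actual = capture_object_state()
    return "Pass" if actual == baseline else {"expected": baseline, "actual": actual}

print(verify("order_entry_vp.json"))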

17.7.1 List of Verification Points

The following list summarizes each Robot verification point.
Alphanumeric - captures and compares alphabetic or numeric values.
Clipboard - captures and compares alphanumeric data that has been copied to the Clipboard.
File Comparison - compares the contents of two files.
File Existence - checks for the existence of a specified file.
Menu - captures and compares the text, accelerator keys, and state of menus; captures up to five levels of sub-menus.
Module Existence - checks whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.
Object Data - captures and compares the data in objects.
Object Properties - captures and compares the properties of objects.
Region Image - captures and compares a region of the screen (as a bitmap).
Web Site Compare - captures a baseline of a Web site and compares it to the Web site at another point in time.
Web Site Scan - checks the content of a Web site with every revision and ensures that changes have not resulted in defects.
Window Existence - checks that the specified window is displayed before continuing with the playback.
Window Image - captures and compares the client area of a window as a bitmap (the menu, title bar, and border are not captured).


17.8 About SQABasic Header Files

SQABasic header files let you declare custom procedures, constants, and variables that you want to use with multiple scripts or SQABasic library source files. SQABasic files are stored in the SQABas32 folder of the project unless you specify another location. You can specify another location by clicking Tools > General Options, selecting the Preferences tab and, under SQABasic path, using the Browse button to find the location. Robot checks this location first; if the file is not there, it looks in the SQABas32 directory. You can use Robot to create and edit SQABasic header files. They can be accessed by all modules within the project. SQABasic header files have the extension .sbh.

17.9 Adding Declarations to the Global Header File

For your convenience, Robot provides a blank header file called Global.sbh, a project-wide header file stored in SQABas32 in the project. You can add declarations to this global header file and/or create your own. To open Global.sbh:
1. Click File > Open > SQABasic File.
2. Set the file type to Header Files (*.sbh).
3. Select global.sbh, and then click Open.

17.10 Inserting a Comment into a GUI Script

During recording or editing, you can insert lines of comment text into a GUI script. Comments are helpful for documenting and editing scripts; Robot ignores comments at compile time. To insert a comment into a script during recording or editing:
1. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Comment button on the GUI Insert toolbar.
3. Type the comment (60 characters maximum).
4. Click OK to continue recording or editing.

Robot inserts the comment into the script (in green by default), preceded by a single quotation mark. For example:

' This is a comment in the script

To change lines of text into comments, or to uncomment text:
1. Highlight the text.
2. Click Edit > Comment Line or Edit > Uncomment Line.

17.11 About Datapools

A datapool is a test dataset. It supplies data values to the variables in a script during script playback. Datapools let you automatically pump test data to virtual testers under high-volume conditions that potentially involve hundreds of virtual testers performing thousands of transactions. Typically, you use a datapool so that:
Each virtual tester that runs the script can send realistic data (which can include unique data) to the server.
A single virtual tester that performs the same transaction multiple times can send realistic data to the server in each transaction.

17.11.1 Using Datapools with GUI Scripts

If you are providing one or more values to the client application during GUI recording, you might want a datapool to supply those values during playback. For example, you might be filling out a data entry form and providing values such as order number, part name, and so forth. If you plan to repeat the transaction multiple times during playback, you might want to provide a different set of values each time. A GUI script can access a datapool when it is played back in Robot. Also, when a GUI script is played back in a TestManager suite, the GUI script can access the same datapool as other scripts. There are differences in the way GUI scripts and sessions are set up for datapool access: you must add datapool commands to GUI scripts manually while editing the script in Robot, whereas Robot adds datapool commands to VU scripts automatically; and there is no DATAPOOL_CONFIG statement in a GUI script - the SQADatapoolOpen command defines the access method to use for the datapool. Although there are differences in setting up datapool access in GUI scripts and sessions, you define a datapool for either type of script using TestManager in exactly the same way.
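Conceptually, a datapool is a shared cursor over rows of test data that several virtual testers draw from. A small thread-safe sketch of that idea only, with made-up column names; it does not model the SQADatapool commands themselves.

import threading
from itertools import cycle

class Datapool:
    """Serve data rows to concurrent virtual testers, one row per request."""
    def __init__(self, rows):
        self._rows = cycle(rows)          # wrap around when the data is exhausted
        self._lock = threading.Lock()
    def fetch(self):
        with self._lock:
            return next(self._rows)

pool = Datapool([
    {"order_no": "A-100", "part": "widget"},
    {"order_no": "A-101", "part": "sprocket"},
])

def virtual_tester(name):
    row = pool.fetch()                    # each tester gets its own row
    print(name, "submits", row)

threads = [threading.Thread(target=virtual_tester, args=(f"vu{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()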

17.12 Debug menu

The Debug menu has the following commands: Go, Go Until Cursor, Animate, Pause, Stop, Set or Clear Breakpoints, Clear All Breakpoints, Step Over, Step Into, and Step Out. Note: the Debug menu commands are for use with GUI scripts only.

17.13 Compiling the script

When you play back a GUI script or VU script, or when you debug a GUI script, Robot compiles the script if it has been modified since it last ran. You can also compile scripts and SQABasic library source files manually:
To compile the active script or library source file, click File > Compile.
To compile all scripts and library source files in the current project, click File > Compile All. Use this if, for example, you have made changes to global definitions that may affect all of your SQABasic files.
During compilation, the compilation results and error messages, with line numbers for all compiled scripts and library source files, can be viewed in the Build tab of the Output window.


17.14 Compilation errors

After the script has been created and compiled, and any errors have been fixed, it can be executed. The results then need to be analysed in the Test Manager.


19 Rational Test Manager


Test Manager is the open and extensible framework that unites all of the tools, assets, and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward. It is where the team defines the plan it will implement to meet those goals and, most importantly, it provides the entire team with one place to go to determine the state of the system at any time. In Test Manager you can plan, design, implement and execute tests, and evaluate the results. With Test Manager you can:
Create, manage, and run reports. The reporting tools help you track assets such as scripts, builds, and test documents, and track test coverage and progress.
Create and manage builds, log folders, and logs.
Create and manage datapools and data types.
When script execution is started, the following window is displayed; the folder in which the log is to be stored, and the log name, need to be given in this window.


19.1 Test Manager - Results screen

In the Results tab of the Test Manager you can see the stored results. From Test Manager you can see the start time of the script and the other details of the run.


20 Supported environments

20.1 Operating systems

Windows NT 4.0 with Service Pack 5
Windows 2000
Windows XP (Rational 2002)
Windows 98
Windows 95 with Service Pack 1

20.2 Protocols

Oracle
SQL Server
HTTP
Sybase
Tuxedo
SAP
PeopleSoft

20.3 Web browsers

IE 4.0 or later
Netscape Navigator (limited support)

20.4 Markup languages

HTML and DHTML pages on IE 4.0 or later.

20.5 Development environments

Visual Basic 4.0 or above
Visual C++
Java
Oracle Forms 4.5
Delphi
PowerBuilder 5.0 and above

The basic product supports Visual Basic, Visual C++ and basic web pages. To test other types of application, you have to download and run a free enabler program from Rational's website. For more details visit www.rational.com


21 Performance Testing

Performance testing measures the performance characteristics of an application. The main objective of performance testing is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes against a realistic production database. In other words, the objective is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.

21.1 What is Performance Testing?

Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput, and utilization of the web site while simulating attempts by virtual users to access the site simultaneously. One of the main objectives of performance testing is to maintain a web site with low latency, high throughput, and low utilization.
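Latency, throughput and the effect of concurrent virtual users can be illustrated with nothing more than threads and a timer. The sketch below fires a fixed number of simulated requests - the body of make_request is a stand-in for a real HTTP call - and reports average latency, an approximate 95th percentile and throughput.

import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def make_request(_):
    """Stand-in for one user transaction; replace with a real HTTP call."""
    start = time.perf_counter()
    time.sleep(0.05)                      # simulated server processing time
    return time.perf_counter() - start

def run_load(virtual_users=10, requests=100):
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        latencies = list(pool.map(make_request, range(requests)))
    elapsed = time.perf_counter() - wall_start
    ordered = sorted(latencies)
    print(f"average latency : {statistics.mean(latencies) * 1000:.1f} ms")
    print(f"95th percentile : {ordered[int(0.95 * len(ordered)) - 1] * 1000:.1f} ms")
    print(f"throughput      : {requests / elapsed:.1f} requests/sec")

run_load()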

21.2 Why Performance Testing?

Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum Web application performance is therefore a top priority for application developers and administrators. Performance analysis is also carried out for various other purposes:
During a design or redesign of a module or a part of the system, more than one alternative may present itself; in such cases the evaluation of a design alternative is the prime mover for an analysis.
Post-deployment realities create a need for tuning the existing system; a systematic approach like performance analysis is essential to extract maximum benefit from an existing system.
Identification of bottlenecks in a system is more of an effort at troubleshooting; it helps to focus efforts on improving overall system response.
As the user base grows, the cost of failure becomes increasingly unbearable; to increase confidence, and to provide advance warning of potential problems under load, analysis must be done to forecast performance under load.

Typically, to debug applications, developers execute them using different execution streams (i.e., completely exercising the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

21.3 Performance Testing Objectives

The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The test infrastructure is an asset, and an expensive one, so it pays to make as much use of it as possible. Fortunately, this infrastructure is a test bed which can be re-used for other tests with broader objectives; a comprehensive test strategy would define a test infrastructure that enables all these objectives to be met. The performance testing goals are:
End-to-end transaction response time measurements.
Measure application server component performance under various loads.
Measure database component performance under various loads.
Monitor system resources under various loads.
Measure the network delay between the server and clients.

21." )re/$e%uisites for )erfor!ance Testing


$e can identify five pre-re(uisites for a performance test. ot all of these need be in place prior to planning or preparing the test Aalthough this might be helpfulB) but rather) the list defines what is re(uired before a test can be e3ecuted. ;irst and foremost thing is The design specification or a separate performance re(uirements document should 8efines specific performance goals for each feature instrumented. ?ases performance goals on customer re(uirements. 8efines specific customer scenarios.

?
is

that

Buantitative> relevant> !easura0le> realistic> achieva0le re%uire!ents !s a foundation to all tests) performance re(uirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated re(uirements. The following attributes will help to have a meaningful performance comparison. Xuantitative - e3pressed in (uantifiable terms such that when response times are measured) a sensible comparison can be derived. <elevant - a response time must be relevant to a business process. .easurable - a response time should be defined such that it can be measured using a tool or stopwatch and at reasonable cost.

Performance Testing Process & Methodology 10: -

Proprietary & Confidential

<ealistic - response time re(uirements should be +ustifiable when compared with the durations of the activities within the business process the system supports. !chievable - response times should ta#e some account of the cost of achieving them.
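Expressed quantitatively, a requirement such as "95% of transactions complete within 3 seconds" becomes a simple check over measured response times. A sketch with invented sample data:

def meets_requirement(response_times, limit_seconds=3.0, percentile=95):
    """True if the given percentile of response times is within the limit."""
    ordered = sorted(response_times)
    index = max(0, int(len(ordered) * percentile / 100) - 1)
    return ordered[index] <= limit_seconds

measured = [1.2, 1.4, 1.1, 2.8, 1.9, 2.2, 1.7, 3.4, 1.5, 1.3]   # seconds, sample data
# With this simple method the 95th-percentile sample is 2.8 s, so the check passes.
print(meets_requirement(measured))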

Stable system
A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating system crashes.
Realistic test environment
The test environment should ideally be the production environment, or a close simulation, and should be dedicated to the performance test team for the duration of the test. Often this is not possible. However, for the results of the test to be realistic, the test environment should be comparable to the actual production environment. Even with an environment that is somewhat different from the production environment, it should still be possible to interpret the results using a model of the system to predict, with some confidence, the behaviour of the target environment. A test environment that bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but it is useless for a performance test.

21.5 Performance Requirements

Performance requirements normally comprise three components:
Response time requirements
Transaction volumes, detailed in 'load profiles'
Database volumes

Response time requirements
When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but it is unreasonable. Some functions are critical and require short response times; others are less critical and their response time requirements can be less stringent.
Load profiles
The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations in which the users' organization has different levels of activity, or involve a varying mix of activities that must be supported by the system.
Database volumes
Data volumes, defining the numbers of table rows that should be present in the database after a specified period of live running, complete the load profile. Typically, data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.


22 Performance Testing Process

[Process flow: Requirements Collection (deliverable: Requirement Collection) -> Test Plan Preparation (Test Plan) -> Test Design Preparation (Test Design) -> Scripting (Test Scripts) -> Test Execution (Pre-Test and Post-Test Procedures) -> Test Analysis (Preliminary Report). If the performance goal is not reached, execution and analysis are repeated; once it is reached, the final activity is Preparation of Reports (deliverable: Final Report).]


22.1 Phase 1 - Requirements Study

This activity is carried out during the business and technical requirements identification phase. The objective is to understand the performance test requirements, the hardware and software components, and the usage model. It is important to understand as accurately and as objectively as possible the nature of the load that must be generated. The following important performance test requirements need to be captured during this phase:
Response time
Transactions per second
Hits per second
Workload
Number of concurrent users
Volume of data
Data growth rate
Resource usage
Hardware and software configurations

Activity: Performance testing - stress test, load test, volume test, spike test, endurance test.
Work items: Understand the system and application model; server-side and client-side hardware and software requirements; browser emulation and automation tool selection; decide on the type and mode of testing; operational inputs - time of testing, client-side and server-side parameters.

22.1.1 Deliverables
Deliverable: Requirement Collection (sample: RequirementCollection.doc)


22.2 Phase 2 - Test Plan

The following configuration information will be identified as part of performance testing environment requirement identification.
Hardware platform: server machines, processors, memory, disk storage, load machine configuration, network configuration.
Software configuration: operating system, server software, client machine software, applications.

Activity: Test Plan preparation.
Work items: Hardware and software details; test data; the transaction traversal that is to be tested, with sleep times; periodic status updates to the client.

22.2.1 Deliverables
Deliverable: Test Plan (sample: TestPlan.doc)

22.3 Phase 3 - Test Design

Based on the test strategy, detailed test scenarios will be prepared. During the test design period the following activities will be carried out:
Scenario design
Detailed test execution plan
Dedicated test environment setup
Script recording / programming
Script customization (delays, checkpoints, synchronization points)
Data generation
Parameterization / data pooling

Activity: Test Design generation.
Work items: Hardware and software requirements, including the server components and the load generators to be used; setting up the monitoring servers; setting up the data; preparing all the necessary folders for saving the results once the test is over; pre-test and post-test procedures.

22.3.1 Deliverables
Deliverable: Test Design (sample: TestDesign.doc)

22." )hase " EScripting


(ctivity Scripting #or- ite!s ?rowse through the application and record the transactions with the tool 6arameteriCation) 4rror Chec#s and =alidations <un the script for single user for chec#ing the validity of scripts

22.".1

elivera0les
elivera0le Test Scripts
Sample Script.doc

Sa!ple


22.5 Phase 5 - Test Execution

The test execution will follow the various types of test identified in the test plan, and all the scenarios identified will be executed. Virtual user loads are simulated based on the usage pattern, and load levels are applied as stated in the performance test strategy. The following artifacts will be produced during the test execution period:
Test logs
Test results

Activity: Test Execution.
Work items: Starting the pre-test procedure scripts, which include start scripts for server monitoring; modification of automated scripts if necessary; test result analysis; report preparation for every cycle.

22.5.1 Deliverables
Deliverable: Test Execution (samples: Time Sheet.doc, Run Logs.doc)

22.6 Phase 6 - Test Analysis

Activity: Test Analysis.
Work items: Analysing the run results and preparing a preliminary report.

22.6.1 Deliverables
Deliverable: Test Analysis (sample: Preliminary Report.doc)

22.7 Phase 7 - Preparation of Reports

The test logs and results generated are analysed in terms of performance under various loads, transactions per second, database throughput, network throughput, think time, network delay, resource usage, transaction distribution and data handling. Manual and automated methods can be used to analyse the performance results. The following performance test reports and graphs can be generated as part of performance testing:
Transaction response time
Transactions per second
Transaction summary graph
Transaction performance summary graph
Transaction response graph - under load
Virtual user summary graph
Error statistics graph
Hits per second graph
Throughput graph
Downloads per second graph

Based on the performance report analysis, suggestions on improvement or tuning will be provided to the design team:
Performance improvements to application software, middleware, or database organization.
Changes to server system parameters.
Upgrades to client or server hardware, network capacity or routing.

Activity: Preparation of Reports.
Work items: Preparation of the final report.

22.7.1 Deliverables
Deliverable: Final Report (sample: Final Report.doc)


22.8 Common Mistakes in Performance Testing

No goals (goals determine the techniques, metrics and workload, and defining them is not trivial)
Biased goals ('to show that OUR system is better than THEIRS' - the analysts should not also be the jury)
Unsystematic approach
Analysis without understanding the problem
Incorrect performance metrics
Unrepresentative workload
Wrong evaluation technique
Overlooking important parameters
Ignoring significant factors
Inappropriate experimental design
Inappropriate level of detail
No analysis
Erroneous analysis
No sensitivity analysis
Ignoring errors in input
Improper treatment of outliers
Assuming no change in the future
Ignoring variability
Too complex an analysis
Improper presentation of results
Ignoring social aspects
Omitting assumptions and limitations

22.9 Benchmarking Lessons

Every build needs to be measured. We should run the automated performance test suite against every build and compare the results against previous results. We should also run the performance test suite under controlled conditions from build to build; this typically means measuring performance on "clean" test environments. Performance issues must be identified as soon as possible to prevent further degradation.

Performance goals need to be enforced. If we decide to make performance a goal and a measure of the quality criteria for release, the management team must decide to enforce those goals. Establish incremental performance goals throughout the product development cycle. All the members of the team should agree that a performance issue is not just a bug; it is a software architectural problem.

Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; the tool can also assess Internet Server API and Active Server Pages (ISAPI/ASP) applications.

Creating an automated test suite to measure performance is time-consuming and labor-intensive. It is therefore important to define concrete performance goals. Without defined performance goals or requirements, testers must guess, without a clear purpose, at how to instrument tests to best measure the various response times.

The performance tests should not be used to find functionality-type bugs. Design the performance test suite to measure response times, not to identify bugs in the product. Design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from completing successfully.

The performance tests should be modified consistently. Significant changes to the performance test suite skew or make obsolete all previous data, so keep the performance test suite fairly static throughout the product development cycle. If the design or requirements change and you must modify a test, perturb only one variable at a time for each build.

Strive to achieve the majority of the performance goals early in the product development cycle, because most performance issues require architectural change, and performance is known to degrade slightly during the stabilization phase of the development cycle. Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet its performance goals.

Reuse automated performance tests. Automated performance tests can often be reused in many other automated test suites; for example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions.

Tests are capturing secondary metrics when the instrumented tests have nothing to do with measuring clear, established performance goals. Although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data. Ensure that you know what you are measuring and why.


Testing for most applications will be automated. The tools used for testing will be those specified in the requirement specification. The tools used for performance testing are LoadRunner 6.5 and WebLoad 4.5.


23 Tools
23.1 LoadRunner 6.5
LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable and measurable load to exercise your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information you need to evaluate the performance of your client/server system.

23.2 WebLoad 4.5

WebLoad is a testing tool for testing the scalability, functionality and performance of Web-based applications, both Internet and Intranet. It can measure the performance of your application under any load conditions. Use WebLoad to test how well your web site will perform under real-world conditions by combining performance, load and functional tests or by running them individually. WebLoad supports HTTP 1.0 and 1.1, including cookies, proxies, SSL, TLS, client certificates, authentication, persistent connections and chunked transfer coding. WebLoad generates load by creating virtual clients that emulate network traffic. You create test scripts (called agendas) using JavaScript that instruct those virtual clients what to do. When WebLoad runs the test, it gathers results at a per-client, per-transaction and per-instance level from the computers that are generating the load. WebLoad can also gather information from the server's performance monitor. You can watch the results as they occur (WebLoad displays them in graphs and tables in real time), and you can save and export the results when the test is finished.


Performance Testing Tools - summary and comparison

The listing below covers several performance testing tools available on the market, compared on cost and the OS required.

Web Performance Trainer
URL: http://www.webperfcenter.com/loadtesting.html
Cost: priced per number of virtual users, with separate tiers for Windows NT, Windows 2000, Linux and Solaris
OS: Windows NT, Windows 2000, Linux, Solaris
Description: Load test tool emphasizing ease of use. Supports all browsers and web servers; simulates up to 200 users per playback machine at various connection speeds; records and allows viewing of the exact bytes flowing between browser and server. Modem simulation allows each virtual user to be bandwidth limited. Can automatically handle variations in session-specific items such as cookies, usernames, passwords, and any other parameter to simulate multiple virtual users. Notes: downloadable; will emulate 25 users and will expire in 2 weeks (may be extended).

Astra LoadTest
URL: http://www.astratryandbuy.com
Cost: priced per number of virtual users
OS: Windows NT, Windows 2000, SunOS, HP-UX, IBM AIX
Description: Mercury's load/stress testing tool; includes record/playback capabilities; an integrated spreadsheet parameterizes recorded input to exercise the application with a wide variety of data. "Scenario Builder" visually combines virtual users and host machines into tests representing real user traffic. "Content Check" checks for failures under heavy load; real-time monitors and analysis. Notes: downloadable evaluation version.

Benchmark Factory
URL: http://www.benchmarkfactory.com
OS: Windows NT, Windows 2000
Description: E-commerce load testing tool from Client/Server Solutions, Inc. Includes record/playback, web form processing, user sessions, scripting, cookies, SSL. Also includes pre-developed industry-standard benchmarks such as AS3AP, SetQuery, Wisconsin, WebStone and others, plus optimized database drivers for vendor-neutral comparisons (MS SQL Server, Oracle 7 and 8, Sybase System 11, ODBC, IBM's DB2 CLI, Informix). Notes: downloadable after submitting contact information.

RadView's WebLoad
URL: http://www.radview.com
OS: Win95/98, Windows NT, Windows 2000, Solaris, AIX
Description: Supports recording of SSL sessions, cookies, proxies, password authentication and dynamic HTML; multiple platforms. Notes: downloadable; the evaluation version does not support SSL.

MS Web Application Stress Test
URL: http://homer.rte.microsoft.com
Cost: Free
OS: Windows NT, Windows 2000
Description: Microsoft stress test tool created by Microsoft's Internal Tools Group (ITG) and subsequently made available for external use. Includes record/playback, script recording from the browser, SSL, and an adjustable delay between requests. Notes: one of the advanced tools in the listing.

Rational Suite Performance Studio / Rational SiteLoad
URL: http://www.rational.com/products
OS: Windows NT, Windows 2000, Unix
Description: Rational's client/server and web performance testing tool. "LoadSmart Scheduling" capabilities allow complex usage scenarios and randomized transaction sequences; handles dynamic web pages. Notes: request a CD only; not downloadable.

Forecast
URL: http://www.facilita.co.uk
OS: Unix
Description: Load testing tool from Facilita Software for web, client-server, network and database systems. Notes: not downloadable.

Zeus web benchmarking tool
URL: http://webperf.zeus.co.uk/intro.html
Cost: Free
OS: Unix
Description: Free web benchmarking/load testing tool available as source code; will compile on any UNIX platform. Notes: unsupported; broken download link.

e-Load
URL: http://www.rswsoftware.com/products/eload_index.shtml
OS: Win95/98, Windows NT
Description: Load test tool from RSW geared to testing web applications under load and testing the scalability of e-commerce applications. For use in conjunction with test scripts from their e-Tester functional test tool. Allows on-the-fly changes and has real-time reporting capabilities. Notes: downloadable; free CD request; evaluation copy.

http_load
URL: http://www.acme.com/software/http_load
Cost: Free
OS: Unix
Description: Free load test application to generate web server loads. Notes: free and easy.

QALoad
URL: http://www.compuware.com/products/auto/releases/QALoad.htm
OS: Win95/NT for the manager; Unix and Windows NT for the load test player
Description: Compuware's QALoad for load/stress testing of database, web and character-based systems; works with middleware such as SQL*Net, DB-Lib or CT-Lib, SQL Server, ODBC, Telnet and Web. Notes: free CD request.

SilkPerformer
URL: http://www.segue.com/html/s_solutions/s_performer/s_performer.htm
OS: Windows NT, Windows 2000
Description: Load and performance testing component of Segue's Silk web testing toolset. Notes: no download.

WebART
URL: http://www.oclc.org/webart
OS: Windows 98, Windows NT 4.0, Windows 2000, SunOS/Solaris, AIX, Linux
Description: Tool for load testing of up to 100-200 simulated users; also includes functional and regression testing capabilities, and a capture/playback and scripting language. Evaluation copy available. Notes: downloadable.

Final Exam WebLoad
URL: http://www.ca.com/products/platinum/appdev/fe_iltps.htm
OS: AIX, Windows NT, Windows 95, Sun Solaris
Description: Final Exam WebLoad integration and pre-deployment testing ensures the reliability, performance and scalability of Web applications. It generates and monitors load stress tests, which can be recorded during a Web session with any browser, and assesses Web application performance under user-defined variable system loads. Load scenarios can include unlimited numbers of virtual users on one or more load servers, as well as single users on multiple client workstations. Notes: downloadable; 15-day evaluation period.

Microsoft WCAT load test tool
URL: http://msdn.microsoft.com/workshop/server/toolbox/wcat.asp
Cost: Free
OS: Windows NT, Windows 2000
Description: Web load test tool from Microsoft for load testing of MS IIS on NT.

WebSpray
URL: http://www.redhillnetworks.com
Cost: $199 ($99 with discount)
OS: Windows 98, Windows NT 4.0, Windows 2000
Description: Load testing tool; includes link testing capabilities; can simulate up to 1,000 clients from a single IP address; also supports multiple IP addresses with or without aliases. Notes: not downloadable.

WebSizr / WebCorder
URL: http://www.technovations.com/home.htm
OS: Win95/98, Windows NT, Windows 2000
Description: Load testing and capture/playback tools from Technovations. The WebSizr load testing tool supports authentication, cookies and redirects. Notes: downloadable; time-limited evaluation period.
23.3 Architecture Benchmarking

Hardware Benchmarking - Hardware benchmarking is performed to size the application on the planned hardware platform. It differs significantly from a capacity planning exercise in that it is done after development and before deployment.
Software Benchmarking - Defining the right placement and composition of software instances can help in vertical scalability of the system without the addition of hardware resources. This is achieved through the software benchmark test.

23.4 General Tests

What follows is a list of tests adaptable to assess the performance of most systems. The methodologies below are generic, allowing one to use a wide range of tools to conduct the assessments.

Methodology definitions:
Result: provides information about what the test will accomplish.
Purpose: explains the value and focus of the test, along with some simple background information that might be helpful during testing.
Constraints: details any constraints and values that should not be exceeded during testing.
Time estimate: a rough estimate of the amount of time that the test may take to complete.

Type of workload: in order to properly achieve the goals of the test, each test requires a certain type of workload. This methodology specification provides information on the appropriate script of pages or transactions for the user.
Methodology: a list of suggested steps to take in order to assess the system under test.
What to look for: contains information on behaviors, issues and errors to pay attention to during and after the test.


24 Performance Metrics

The common metrics selected/used during performance testing are listed below.
- Response time.
- Turnaround time: the time between the submission of a batch job and the completion of its output.
- Stretch factor: the ratio of the response time under concurrent-user load to the response time with a single user.
- Throughput: rate (requests per unit of time). Examples: jobs per second, requests per second, millions of instructions per second (MIPS), millions of floating-point operations per second (MFLOPS), packets per second (PPS), bits per second (bps), transactions per second (TPS).
- Capacity. Nominal capacity: the maximum achievable throughput under ideal workload conditions, e.g. bandwidth in bits per second; the response time at maximum throughput is too high. Usable capacity: the maximum throughput achievable without exceeding a pre-specified response-time limit.
- Efficiency: the ratio of usable capacity to nominal capacity; alternatively, the ratio of the performance of an n-processor system to that of a one-processor system.
- Utilization: the fraction of time the resource is busy servicing requests; for memory, the average fraction used.

As tests are executed, metrics such as response times for transactions, HTTP requests per second and throughput should be collected. It is also important to monitor and collect statistics such as CPU utilization, memory, disk space and network usage on the individual web, application and database servers, and to make sure those numbers recede as the load decreases. Cognizant has built custom monitoring tools to collect these statistics; third-party monitoring tools are also used based on the requirement.
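A minimal sketch of how a few of these metrics relate to one another is shown below; the sample figures are invented purely for illustration.

def throughput(completed_requests: int, elapsed_seconds: float) -> float:
    """Throughput: rate of completed requests per unit of time."""
    return completed_requests / elapsed_seconds

def stretch_factor(single_user_response: float, concurrent_response: float) -> float:
    """Stretch factor: response time under concurrency relative to a single user."""
    return concurrent_response / single_user_response

def efficiency(usable_capacity: float, nominal_capacity: float) -> float:
    """Efficiency: ratio of usable capacity to nominal capacity."""
    return usable_capacity / nominal_capacity

def utilization(busy_seconds: float, observed_seconds: float) -> float:
    """Utilization: fraction of time the resource was busy servicing requests."""
    return busy_seconds / observed_seconds

if __name__ == "__main__":
    # Invented example figures: 12,000 transactions completed in a 600-second run.
    print(f"throughput     = {throughput(12_000, 600):.1f} TPS")
    # Single-user response 0.8 s vs 2.0 s at the target concurrency.
    print(f"stretch factor = {stretch_factor(0.8, 2.0):.2f}")
    # Usable capacity 180 req/s against a nominal 240 req/s.
    print(f"efficiency     = {efficiency(180, 240):.2f}")
    # Server busy 420 s out of a 600 s observation window.
    print(f"utilization    = {utilization(420, 600):.2f}")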

24.1 Client Side Statistics

- Running Vusers
- Hits per second
- Throughput
- HTTP status codes
- HTTP responses per second
- Pages downloaded per second
- Transaction response time
- Page component breakdown time
- Page download time
- Component size analysis
- Error statistics
- Errors per second
- Total successful/failed transactions

24.2 Server Side Statistics

- System resources: processor utilization, memory and disk space
- Web server resources: threads, cache hit ratio
- Application server resources: heap size, JDBC connection pool
- Database server resources: wait events, SQL queries
- Transaction profiling
- Code block analysis

24.3 Network Statistics

- Bandwidth utilization
- Network delay time
- Network segment delay time

24.4 Conclusion

Performance testing is an independent discipline and involves all the same phases as the mainstream testing lifecycle, i.e. strategy, plan, design, execution, analysis and reporting. Without the rigor described in this paper, executing performance testing does not yield anything more than finding a few more defects in the system. However, if executed systematically with appropriate planning, performance testing can unearth issues that cannot otherwise be found through mainstream testing. It is very typical of a project manager to be overtaken by time and resource pressures, leading to insufficient budget being allocated for performance testing, the consequences of which can be disastrous for the final system.

There is another side of the coin, and an important point to note: before the system is tested against its performance requirements, it should have been architected and designed to meet those performance goals. If not, it may be too late in the software development cycle to correct serious performance issues.

Web-enabled applications and infrastructures must be able to execute evolving business processes with speed and precision while sustaining high volumes of changing and unpredictable user audiences. Load testing gives the greatest line of defense against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment. The discipline helps businesses succeed in leveraging Web technologies to their best advantage, enabling new business opportunities, lowering transaction costs and strengthening profitability. Fortunately, robust and viable solutions exist to help fend off the disasters that result from poor performance. Automated load testing tools and services are available to meet the critical need of measuring and optimizing complex and dynamic application and infrastructure performance. Once these solutions are properly adopted and utilized, leveraging an ongoing, lifecycle-focused approach, businesses can take charge and leverage their information technology assets to competitive advantage. By continuously testing and monitoring the performance of critical software applications, businesses can confidently and proactively execute strategic corporate initiatives for the benefit of shareholders and customers alike.


25 Load Testing

Load testing is the creation of a simulated load on a real computer system by using virtual users who submit work as real users would, at real client workstations, and thus testing the system's ability to support such a workload. Testing of critical web applications during development and before deployment should include functional testing to confirm conformance to the specifications, performance testing to check whether the application offers an acceptable response time, and load testing to see what hardware or software configuration will be required to provide an acceptable response time and handle the load that will be created by the real users of the system.

25.1 Why is load testing important?

Load testing increases the uptime of critical web applications by helping you spot bottlenecks in the system under large user-stress scenarios before they happen in a production environment.

25.2 When should load testing be done?

Load testing should be done when the probable cost of the load test is likely to be less than the cost of a failed application deployment. Load testing is accomplished by stressing the real application under a simulated load provided by virtual users.


26 Load Testing Process

26.1 System Analysis

This is the first step once the project decides on load testing for its system. Evaluating the requirements and needs of a system prior to load testing will provide more realistic test conditions. For this, one should know all the key performance goals and objectives, such as the number of concurrent connections, hits per second, etc. Another important part of the analysis is choosing the appropriate strategy for testing the application: load testing, stress testing or capacity testing. Load testing is used to test the application against a requested number of users; the objective is to determine whether the site can sustain the requested number of users with acceptable response times. Stress testing is nothing but load testing over extended periods of time to validate an application's stability and reliability. Similarly, capacity testing is used to determine the maximum number of concurrent users an application can manage; for businesses, capacity testing is the benchmark that states the maximum load of concurrent users the site can sustain before the system fails.

Finally, the choice of test tool should also take into account how well it supports load testing, in terms of its multithreading capabilities and its ability to create the required number of virtual users with minimal resource consumption and a maximal virtual user count.

26.2 User Scripts

Once the analysis of the system is done, the next step is the creation of user scripts. A script recorder can be used to capture all the business processes into test scripts, more often referred to as virtual users or virtual user scripts. A virtual user is nothing but an emulated real user who drives the real application as a client. All the business processes should be recorded end to end, so that these transactions assist in breaking down all the actions and the time it takes to measure the performance of each business process.

26.3 Settings

Run-time settings define the way the scripts should be run in order to accurately emulate real users. Settings can configure the number of concurrent connections, the test run time, whether HTTP redirects are followed, etc. System response times also vary based on connection speed; hence, throttling bandwidth can emulate dial-up connections at varying modem speeds (28.8 Kbps or 56.6 Kbps) or a T1 line (1.54 Mbps), etc.
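As a rough illustration of why bandwidth throttling matters, the ideal transfer time of a page scales directly with the emulated line speed; the 100 KB page size below is an invented example, and latency and protocol overhead are ignored.

def ideal_transfer_seconds(page_bytes: int, line_speed_bps: float) -> float:
    """Best-case time to move a page over an emulated line (no latency, no overhead)."""
    return (page_bytes * 8) / line_speed_bps

if __name__ == "__main__":
    page = 100 * 1024  # hypothetical 100 KB page
    for label, bps in [("28.8 Kbps modem", 28_800),
                       ("56.6 Kbps modem", 56_600),
                       ("T1 (1.54 Mbps)", 1_540_000)]:
        print(f"{label:16s} -> {ideal_transfer_seconds(page, bps):6.1f} s")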


26.4 Performance Monitoring

Every component of the system needs monitoring: the clients, the network, the web server, the application server, the database, etc. This makes it possible to identify performance bottlenecks instantly during load testing. If the tools support real-time monitoring, testers are able to view the application's performance at any time during the test. Thus, running the load test scenario and monitoring the performance together accelerates the test process, thereby producing a more stable application.

26.5 Analyzing Results

The last but most important step in load testing is collecting and processing the data to resolve performance bottlenecks. The reports generated can range from the number of hits, the number of test clients and requests per second to socket errors, etc. Analyzing the results isolates bottlenecks and determines which changes are needed to improve system performance. After these changes are made, the load test scenarios must be re-run to verify the adjustments.

Load testing with WAST: Web Application Stress (WAST) is a tool to simulate a large number of users with a relatively small number of client machines. Performance data on a web application can be gathered by stressing the website and measuring the maximum requests per second that the web server can handle. The next step is to determine which resource prevents the requests per second from going higher, such as CPU, memory or back-end dependencies.

26.6 Conclusion

Load testing is the measure of an entire Web application's ability to sustain a number of simultaneous users and transactions while maintaining adequate response times. It is the only way to accurately test the end-to-end performance of a Web site prior to going live. Two common methods for implementing this load testing process are manual and automated testing. Manual testing would involve:
- Coordinating the operations of the users
- Measuring response times
- Repeating tests in a consistent way
- Comparing results

As load testing is iterative in nature, the performance problems must be identified so that the system can be tuned and retested to check for bottlenecks. For this reason, manual testing is not a very practical option. Today, automated load testing is the preferred choice for load testing a Web application. The testing tools typically use three major components to execute a test:
- A console, which organizes, drives and manages the load
- Virtual users, performing a business process on a client application
- Load servers, which are used to run the virtual users

With automated load testing tools, tests can easily be rerun any number of times and the results can be reported automatically. In this way, automated testing tools provide a more cost-effective and efficient solution than their manual counterparts. Plus, they minimize the risk of human error during testing.


27 Stress Testing

27.1 Introduction to Stress Testing

Testing is normally accomplished through reviews (of product requirements, software functional requirements, software designs, code, test plans, etc.), unit testing, system testing (also known as functional testing), expert user testing (like beta testing but in-house), smoke tests, etc. All these testing activities are important and each plays an essential role in the overall effort, but none of them specifically looks for problems like memory and resource management. Further, these testing activities do little to quantify the robustness of the application or determine what may happen under abnormal circumstances. We try to fill this gap in testing by using stress testing.

Stress testing can imply many different types of testing depending upon the audience. Even in the literature on software testing, stress testing is often confused with load testing and/or volume testing. For our purposes, we define stress testing as performing random operational sequences at larger than normal volumes, at faster than normal speeds and for longer than normal periods of time, as a method to accelerate the rate of finding defects and verify the robustness of our product.

Stress testing in its simplest form is any test that repeats a set of actions over and over with the purpose of "breaking the product". The system is put through its paces to find where it may fail. As a first step, you can take a common set of actions for your system and keep repeating them in an attempt to break the system; adding some randomization to these steps will help find more defects. How long can your application keep functioning while doing this operation repeatedly? To help you reproduce your failures, one of the most important things to remember is to log everything as you proceed: you need to know exactly what was happening when the system failed. Did the system lock up after 100 attempts or 100,000 attempts? [1]

Note that there are many other types of testing not mentioned above, for example risk-based testing, random testing, security testing, etc. We have found (and it seems others agree) that it is best to review what needs to be tested, pick the multiple testing types that will provide the best coverage for the product to be tested, and then master those testing types, rather than trying to implement every testing type.

Some of the defects that we have been able to catch with stress testing, and that have not been found in any other way, are memory leaks, deadlocks, software asserts and configuration conflicts. For more details about these types of defects or how we were able to detect them, refer to the section "Typical Defects Found by Stress Testing". Table 1 provides a summary of some of the strengths and weaknesses that we have found with stress testing.


Table 1: Stress Testing Strengths and Weaknesses

Strengths:
- Finds defects that no other type of test would find
- Using randomization increases coverage
- Tests the robustness of the application
- Helpful at finding memory leaks, deadlocks, software asserts and configuration conflicts

Weaknesses:
- Not a real-world situation
- Defects are not always reproducible
- One sequence of operations may catch a problem right away, while another sequence may never find the problem
- Does not test the correctness of the system's response to user input

27.2 Background to Automated Stress Testing

Stress testing can be done manually, which is often referred to as "monkey" testing. In this kind of stress testing, the tester uses the application "aimlessly", like a monkey: poking buttons, turning knobs, "banging" on the keyboard, etc., in order to find defects. One of the problems with monkey testing is reproducibility: where the tester uses no guide or script and no log is recorded, it is often impossible to repeat the steps executed before a problem occurred. Attempts have been made to use keyboard spyware, video recorders and the like to capture user interactions, with varying (often poor) levels of success.

Our applications are required to operate for long periods of time with no significant loss of performance or reliability. We have found that stress testing of a software application helps in assessing and increasing the robustness of our applications, and it has become a required activity before every software release. Performing stress testing manually is not feasible, and repeating the test for every software release is almost impossible, so this is a clear example of an area that benefits from automation: you get a return on your investment quickly, and it provides you with more than just a mirror of your manual test suite. Previously, we attempted to stress test our applications using manual techniques and found that they were lacking in several respects. Some of the weaknesses of manual stress testing we found were:
1. Manual techniques cannot provide the kind of intense simulation of maximum user interaction over time that is needed; humans cannot keep the rate of interaction high enough for long enough.
2. Manual testing does not provide the breadth of test coverage of the product features/commands that is needed; people tend to do the same things in the same way over and over, so some configuration transitions never get tested.
3. Manual testing generally does not allow for repeatability of command sequences, so reproducing failures is nearly impossible.
4. Manual testing does not automatically record discrete values with each command sequence for tracking memory utilization over time, which is critical for detecting memory leaks.

With automated stress testing, the stress test is performed under computer control. The stress test tool is implemented to determine the application's configuration, to execute all valid command sequences in a random order, and to perform data logging. Since the stress test is automated, it becomes easy to execute multiple stress tests simultaneously across more than one product at the same time. Depending on how the stress inputs are configured, stress testing can do both "positive" and "negative" testing. Positive testing is when only valid parameters are provided to the device under test, whereas negative testing provides both valid and invalid parameters to the device as a way of trying to break the system under abnormal circumstances; for example, if a valid input is a number of seconds, positive testing would use 0 to 59 and negative testing would also try -1, 60, and so on. Even though there are clear advantages to automated stress testing, it still has its disadvantages. For example, we have found that each time the product application changes, we most likely need to change the stress tool (or, more commonly, commands need to be added to or deleted from the input command set). Also, if the input command set changes, then the output command sequence also changes, given the pseudo-randomization. Table 2 provides a summary of some of these advantages and disadvantages that we have found with automated stress testing.

Table 2: Automated Stress Testing Advantages and Disadvantages

Advantages:
- The stress test is performed under computer control
- Capability to test all product application command sequences
- Multiple product applications can be supported by one stress tool
- Uses randomization to increase coverage; tests vary with new seed values
- Repeatability of commands and parameters helps reproduce problems or verify that existing problems have been resolved
- Informative log files facilitate the investigation of problems

Disadvantages:
- Requires capital equipment and the development of a stress test tool
- Requires maintenance of the tool as the product application changes
- Reproducible stress runs must use the same input command set
- Defects are not always reproducible, even with the same seed value
- Requires test application information to be kept and maintained
- May take a long time to execute

In summary, automated stress testing overcomes the major disadvantages of manual stress testing and finds defects that no other testing type can find. Automated stress testing exercises the various features of the system at a rate exceeding that at which actual end-users can be expected to operate, and for durations of time that exceed typical use. The automated stress test randomizes the order in which the product features are accessed; in this way, non-typical sequences of user interaction are tested against the system in an attempt to find latent defects not detectable with other techniques.

To take advantage of automated stress testing, our challenge then was to create an automated stress test tool that would:
1. Simulate user interaction for long periods of time (since it is computer controlled, we can exercise the product more than a user can).
2. Provide as much randomization of command sequences to the product as possible, to improve test coverage over the entire set of possible features/commands.
3. Continuously log the sequence of events, so that issues can be reliably reproduced after a system failure.
4. Record the memory in use over time, to allow memory management analysis (see the monitoring sketch after this list).
5. Stress the resource and memory management features of the system.
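For point 4, a minimal sketch of periodically recording the memory in use during a stress run might look like the following. It assumes the third-party psutil package is available and that the process under test is identified by a hypothetical PID; a real tool would tie this sampling to the command log.

import csv
import time

import psutil  # assumed available; third-party process/system utility library

def record_memory(pid: int, out_file: str = "memory_log.csv",
                  interval_s: float = 5.0, samples: int = 720) -> None:
    """Sample the resident memory of the process under test and append to a CSV log.

    A steadily growing trend across a long stress run is the usual signature of a leak.
    """
    proc = psutil.Process(pid)
    with open(out_file, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "rss_bytes", "cpu_percent"])
        for _ in range(samples):
            mem = proc.memory_info().rss           # resident set size in bytes
            cpu = proc.cpu_percent(interval=None)  # CPU use since the previous call
            writer.writerow([time.time(), mem, cpu])
            time.sleep(interval_s)

if __name__ == "__main__":
    record_memory(pid=1234)  # hypothetical PID of the application under stress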

27.3 Automated Stress Testing Implementation

Automated stress testing implementations differ depending on the interface to the product application; the types of interfaces available to the product drive the design of the automated stress test tool. The interfaces fall into two main categories:

1. Programmable interfaces: interfaces such as command prompts, RS-232, Ethernet, General Purpose Interface Bus (GPIB), Universal Serial Bus (USB), etc., that accept strings representing command functions without regard to context or the current state of the device.

2. Graphical User Interfaces (GUIs): interfaces that use the windowing model to give the user direct control over the device; individual windows and controls may or may not be visible and/or active depending on the state of the device.

27.4 Programmable Interfaces

These interfaces have allowed users to set up, control and retrieve data in a variety of application areas such as manufacturing, research and development, and service. To meet the needs of these customers, the products provide programmable interfaces which generally support a large number of commands (1000+) and are required to operate for long periods of time, for example on a manufacturing line where the product is used 24 hours a day, 7 days a week. Testing all possible combinations of commands on these products is practically impossible using manual testing methods.

Programmable interface stress testing is performed by randomly selecting from a list of individual commands and then sending these commands to the device under test (DUT) through the interface. If a command has parameters, the parameters are also enumerated by randomly generating a unique command parameter. By using a pseudo-random number generator, each unique seed value creates the same sequence of commands with the same parameters each time the stress test is executed. Each command is also written to a log file, which can then be used later to reproduce any defects that were uncovered.
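A minimal sketch of this seeded, logged command generation is shown below. The command names, parameter ranges and the send() stub are invented for illustration; a real tool would transmit over RS-232, GPIB, Ethernet, etc.

import random

# Hypothetical command set: name -> (min, max) of its single integer parameter.
COMMAND_SET = {
    "SET_TIMEOUT": (0, 59),
    "SET_CHANNEL": (1, 16),
    "START_MEASUREMENT": None,   # no parameter
    "READ_STATUS": None,
}

def send(command: str) -> None:
    """Stub standing in for the real interface driver (RS-232, GPIB, Ethernet, ...)."""
    pass

def stress_run(seed: int, iterations: int = 10_000, log_file: str = "stress.log") -> None:
    """Replayable stress run: the same seed always produces the same command sequence."""
    rng = random.Random(seed)
    with open(log_file, "w") as log:
        log.write(f"seed={seed}\n")
        for i in range(iterations):
            name = rng.choice(list(COMMAND_SET))
            bounds = COMMAND_SET[name]
            if bounds is None:
                command = name
            else:
                lo, hi = bounds
                # Occasionally step just outside the valid range (negative testing).
                value = rng.randint(lo - 1, hi + 1) if rng.random() < 0.1 else rng.randint(lo, hi)
                command = f"{name} {value}"
            log.write(f"{i}: {command}\n")   # log first, so a crash never loses the last command
            log.flush()
            send(command)

if __name__ == "__main__":
    stress_run(seed=42)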

For additional complexity, other variations of the automated stress test can be performed. For example, the stress test can vary the rate at which commands are sent to the interface, it can send the commands across multiple interfaces simultaneously (if the product supports it), or it can send multiple commands at the same time.

27.5 Graphical User Interfaces

In recent years, Graphical User Interfaces have become dominant, and it became clear that we needed a means of testing these user interfaces analogous to that used for programmable interfaces. However, since accessing the GUI is not as simple as sending streams of command-line input to the product application, a new approach was needed. It is necessary to store not only the object recognition method for each control, but also information about its parent window and other information such as its expected state, certain property values, etc. An example would be a 'Help' menu item: there may be multiple windows open with a 'Help' menu item, so it is not sufficient to simply store "click the 'Help' menu item"; you have to store "click the 'Help' menu item of the particular window". With this information it is possible to uniquely define all the possible product application operations (i.e. each control can be uniquely identified).

Additionally, the flow of each operation can be important. Many controls are not visible until several levels of modal windows have been opened and/or closed. For example, a typical confirm-file-overwrite dialog box for a 'File > Save As...' operation is not available until the following sequence has been executed:
1. Set the context to the main window.
2. Select 'File > Save As...'.
3. Select the target directory from the tree control.
4. Type a valid filename into the edit box.
5. Click the 'Save' button.
6. If the filename already exists, either confirm the file overwrite by clicking the 'OK' button in the confirmation dialog, or click the cancel button.
In this case, you need to group these six operations together as one "big" operation in order to correctly exercise this particular 'OK' button; one way of representing such a grouped operation is sketched after this list.
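One simple way to represent such a grouped operation for random selection is as plain data, where each step records the parent window, the control to act on, the action and an optional argument. The window and control names below are hypothetical and serve only to illustrate the idea; they are not taken from any particular GUI automation library.

# Each step is (parent window, control, action, argument); the whole list is picked
# and executed as one atomic "big" operation by the stress tool.
SAVE_AS_WITH_OVERWRITE = [
    ("MainWindow",       "Menu:File>Save As...", "click",  None),
    ("SaveAsDialog",     "DirectoryTree",        "select", "C:/results"),
    ("SaveAsDialog",     "FileNameEdit",         "type",   "run1.dat"),
    ("SaveAsDialog",     "SaveButton",           "click",  None),
    ("ConfirmOverwrite", "OKButton",             "click",  None),  # only shown if the file exists
]

# The tool's command table then mixes single controls and grouped operations,
# and the random selector treats each entry as one unit.
OPERATIONS = {
    "help_menu_main": [("MainWindow", "Menu:Help", "click", None)],
    "save_as_overwrite": SAVE_AS_WITH_OVERWRITE,
}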

27.6 Data Flow Diagram

A stress test tool can have many different interactions and be implemented in many different ways. Figure 1 shows a block diagram which illustrates some of the stress test tool interactions. The main interactions of the stress test tool are with an input file and the Device Under Test (DUT); the input file is used to provide the stress test tool with a list of all the commands and interactions needed to test the DUT.


[Figure 1: Stress Test Tool Interactions. Block diagram: the Input File and the System Resource Monitor feed the Stress Test Tool, which drives the DUT and writes the Log of Command Sequence and the Log of Test Results.]


Additionally, data logging (of commands and test results) and system resource monitoring are very beneficial in helping determine what the DUT was trying to do before it crashed and how well it was able to manage its system resources. The basic flow control of an automated stress test tool is to set up the DUT into a known state and then to loop continuously: selecting a new random interaction, trying to execute the interaction, and logging the results. This loop continues until a set number of interactions have occurred or the DUT crashes.

27.7 Techniques Used to Isolate Defects

Depending on the type of defect to be isolated, two different techniques are used:
1. System crashes (asserts and the like): do not try to run the full stress test from the beginning, unless it only takes a few minutes to produce the defect. Instead, back up and run the stress test from the last seed (for us this is normally just the last 100 commands). If the defect still occurs, continue to reduce the number of commands in the playback until the defect is isolated (see the sketch after this list).
2. Diminishing-resource issues (memory leaks and the like) are usually limited to a single subsystem. To isolate the subsystem, start removing subsystems from the database and re-run the stress test while monitoring the system resources. Continue this process until the subsystem causing the reduction in resources is identified. This technique is most effective after full integration of multiple subsystems (or modules) has been achieved.

Some defects are just hard to reproduce, even with the same sequence of commands. These defects should still be logged into the defect tracking system; as the defect re-occurs, continue to add additional data to the defect description. Eventually, over time, you will be able to detect a pattern, isolate the root cause and resolve the defect. Some defects just seem to be un-reproducible, especially those that reside around page faults, but overall we know that the robustness of our applications increases proportionally with the amount of time that the stress test runs uninterrupted.
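A minimal sketch of the first technique, replaying the recorded command sequence and shrinking the window until the failure is isolated, is given below; the replay callable stands in for the real tool's playback facility and is an assumption of this sketch.

from typing import Callable, List

def isolate_crash(logged_commands: List[str],
                  replay: Callable[[List[str]], bool],
                  tail: int = 100) -> List[str]:
    """Shrink the command window from a failed run until the crash is isolated.

    `replay` re-sends a command sequence to the device and returns True if the
    crash reproduces; it stands in for the real tool's playback mechanism.
    """
    window = logged_commands[-tail:]
    if not replay(window):
        return logged_commands        # the crash needs more history: fall back to the full run
    while len(window) > 1:
        half = window[len(window) // 2:]
        if replay(half):
            window = half             # the crash still reproduces with the shorter tail
        else:
            break                     # halving lost the trigger; keep the current window
    return window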


28 Test Case Coverage

28.1 Test Coverage

Test coverage is an important measure of quality for software systems. Test coverage analysis is the process of:
- finding areas of a program not exercised by a set of test cases,
- creating additional test cases to increase coverage, and
- determining a quantitative measure of code coverage, which is an indirect measure of quality.
An optional aspect of test coverage analysis is identifying redundant test cases that do not increase coverage.

A test coverage analyzer automates this process. Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage" while practitioners more often use "code coverage". Test coverage analysis can be used to assure the quality of the set of tests, not the quality of the actual product. Coverage analysis requires access to the test program's source code and often requires recompiling it with a special command.

Code coverage analysis is a structural testing technique (white box testing). Structural testing compares test program behavior against the apparent intention of the source code; this contrasts with functional testing (black-box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic; functional testing examines what the program accomplishes, without regard to how it works internally.

28.2 Test Coverage Measures

A large variety of coverage measures exist. Here is a description of some fundamental measures and their strengths and weaknesses.

28.3 Procedure-Level Test Coverage

Probably the most basic form of test coverage is to measure which procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose job is really to measure performance bottlenecks. If the execution time in some procedures is zero, you need to write new tests that hit those procedures. But this measure of test coverage is so coarse-grained that it is not very practical.

28.4 Line-Level Test Coverage

The basic measure of a dedicated test coverage tool is tracking which lines of code are executed and which are not. This result is often presented in a summary at the procedure, file or project level, giving the percentage of the code that was executed. A large project that achieved 90% code coverage might be considered a well-tested product. Typically the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: by studying the unexecuted code, you can see exactly what functionality has not been tested.

28.5 Condition Coverage and Other Measures

It is easy to find cases where line coverage does not really tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g., a statement in an if clause). If that code is shown as executed, you do not know whether you have tested the case when it is skipped; you need condition coverage to know. There are many other test coverage measures; however, most available code coverage tools do not provide much beyond basic line coverage. In theory you should have more, but in practice, if you achieve 95% line coverage and still have time and budget to commit to further testing improvements, it is an enviable commitment to quality. The sketch below illustrates the gap that condition coverage closes.
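As a small illustration of that gap: with the single test below, every line of apply_discount() is reported as executed, yet the case where the if clause is skipped (a non-member) is never exercised, which is exactly what a condition/branch coverage measure would flag. The function and test are invented for this example.

def apply_discount(price: float, is_member: bool) -> float:
    discount = 0.0
    if is_member:                 # the False branch is never exercised by the test below
        discount = price * 0.10
    return price - discount

def test_member_discount():
    # 100% line coverage: every statement above runs, including the body of the if.
    assert apply_discount(100.0, True) == 90.0

if __name__ == "__main__":
    test_member_discount()
    print("line coverage is complete, but the non-member path was never checked")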

28.6 How Test Coverage Tools Work

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation phase happens can vary greatly between different products. Adding probes to the program makes it bigger and slower; if the test suite is large and time-consuming, the performance factor may be significant.

28.6.1 Source-Level Instrumentation

Some products add probes at the source level. They analyze the source code as written and add additional code (such as calls to a code coverage runtime) that records where the program reached. Such a tool may not actually generate new source files with the additional code; some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need. One drawback of this technique is the need to modify the build process: a separate version, namely a code coverage version, needs to be maintained in addition to the other versions such as debug (unoptimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation depends on the programming language (the provider of the tool must explicitly choose which languages to support), but it can be somewhat independent of the operating environment (processor, OS, or virtual machine).

28.6.2 Executable Instrumentation

Probes can also be added to a completed executable file. The tool analyzes the existing executable and then creates a new, instrumented one. This type of instrumentation is independent of the programming language; however, it is dependent on the operating environment (the provider of the tool must explicitly choose which processors or virtual machines to support).

28.6.3 Runtime Instrumentation

Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified. The same executable file used for product release testing should be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation); instead, the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool adds a tiny bit of instrumentation to the executable. This new code wakes up and connects to a waiting coverage tool whenever the program executes; it does not affect the size or performance of the executable, and does nothing if the coverage tool is not waiting.
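As a toy illustration of the runtime approach, Python's built-in sys.settrace hook installs in-memory "probes" without touching the source files or the executable on disk; commercial tools work on compiled binaries, but the principle is the same.

import sys

executed_lines = set()

def probe(frame, event, arg):
    """Trace hook: record every (file, line) reached while the program runs."""
    if event == "line":
        executed_lines.add((frame.f_code.co_filename, frame.f_lineno))
    return probe

def program_under_test(n):
    if n > 0:
        return "positive"
    return "non-positive"   # reached only when n <= 0

if __name__ == "__main__":
    sys.settrace(probe)
    program_under_test(5)
    sys.settrace(None)
    print(f"{len(executed_lines)} distinct lines executed")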

Like executable instrumentation, runtime instrumentation is independent of the programming language but dependent on the operating environment.

28.7 Test Coverage Tools at a Glance

There are many tools available for measuring test coverage, for example:

Company: Bullseye; Product: BullseyeCoverage; OS: Win32, Unix; Languages: C/C++
Company: Compuware; Product: DevPartner; OS: Win32; Languages: C/C++, Java, VB
Company: Rational (IBM); Product: PurifyPlus; OS: Win32, Unix; Languages: C/C++, Java, VB
Company: Software Research; Product: TCAT; OS: Win32, Unix; Languages: C/C++, Java
Company: Testwell; Product: CTC++; OS: Win32, Unix; Languages: C/C++
Company: Paterson Technology; Product: LiveCoverage; OS: Win32; Languages: C/C++, VB

Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite. It helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percentage of coverage for its release criteria, based on the available testing resources and the importance of preventing post-release failures; clearly, safety-critical software should have a high goal. We must set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.


29 Test Case Points (TCP)

29.1 What is a Test Case Point (TCP)?

TCP is a measure for estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project. The TCP counts are nothing but a ranking of the requirements, and of the test cases to be written for those requirements, into simple, average and complex, quantified into a measure of complexity. In this courseware we give an overview of Test Case Points and do not elaborate on using TCP as an estimation technique.

29.2 Calculating the Test Case Points

Based on the Functional Requirement Document (FRD), the application is classified into various modules; for a web application, for example, we can have "Login and Authentication" as a module, and rank that particular module as Simple, Average or Complex based on the number and complexity of the requirements for that module. A Simple requirement is one which can be given a value on a scale of 1 to 3. An Average requirement is ranked between 4 and 7. A Complex requirement is ranked between 8 and 10.

Complexity of requirements:
Requirement classification: Simple (1-3), Average (4-7), Complex (> 7), Total

The test cases for a particular requirement are classified into Simple, Average and Complex based on the following four factors: the test case complexity for that requirement, OR the interfaces with other test cases, OR the number of verification points, OR the baseline test data. Refer to the test case classification table given below.



29.2.1.1 Test Case Classification

Complexity type | Complexity of test case | Interface with other test cases | Number of verification points | Baseline test data
Simple | <= 2 transactions | - | <= 2 | Not required
Average | 3-6 transactions | <= 3 | 3-8 | Required
Complex | > 6 transactions | > 3 | > 8 | Required

A sample guideline for the classification of test cases is given below:
- Any verification point containing a calculation is considered Complex.
- Any verification point which interfaces with or interacts with another application is classified as Complex.
- Any verification point consisting of report verification is considered Complex.
- A verification point comprising search functionality may be classified as Complex or Average depending on the complexity.

Depending on the respective project, the complexity needs to be identified in a similar manner. Based on the test case type, an adjustment factor is assigned for simple, average and complex test cases. This adjustment factor has been calculated after a thorough study and analysis done on many testing projects. The adjustment factor in the table mentioned below is pre-determined and must not be changed for every project.


Test case type and adjustment factor:
Simple: adjustment factor 2 (A)
Average: adjustment factor 4 (B)
Complex: adjustment factor 8 (C)

Number and result:
Number of Simple requirements in the project (R1): R1 x adjustment factor = A
Number of Average requirements in the project (R2): R2 x adjustment factor = B
Number of Complex requirements in the project (R3): R3 x adjustment factor = C
Total Test Case Points = A + B + C

From the break-up of the complexity of requirements done in the first step, we get the number of simple, average and complex test case types. By multiplying the number of requirements by the corresponding adjustment factor, we get the simple, average and complex test case points. Summing up the three results, we arrive at the count of Total Test Case Points.
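A minimal sketch of that calculation follows, using the adjustment factors of 2, 4 and 8 as read from the table above; the requirement counts in the example are invented.

ADJUSTMENT_FACTOR = {"simple": 2, "average": 4, "complex": 8}  # A, B, C weights from the table

def test_case_points(simple_count: int, average_count: int, complex_count: int) -> int:
    """Total Test Case Points = sum of (requirement count x adjustment factor) per class."""
    a = simple_count * ADJUSTMENT_FACTOR["simple"]
    b = average_count * ADJUSTMENT_FACTOR["average"]
    c = complex_count * ADJUSTMENT_FACTOR["complex"]
    return a + b + c

if __name__ == "__main__":
    # Invented example: 12 simple, 7 average and 3 complex requirements.
    print("Total TCP =", test_case_points(12, 7, 3))   # 12*2 + 7*4 + 3*8 = 76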

29.3 Chapter Summary

This chapter covered the basics of:
- What test coverage is
- Test coverage measures
- How test coverage tools work
- A list of test coverage tools
- What TCP is and how to calculate the Test Case Points for an application

