Performance Testing Process & Methodology - Proprietary & Confidential

Table of Contents

1 INTRODUCTION TO SOFTWARE
1.1 Evolution of the Software Testing Discipline
1.2 The Testing Process and the Software Testing Life Cycle
1.3 Broad Categories of Testing
1.4 Widely Employed Types of Testing
1.5 The Testing Techniques
1.6 Chapter Summary
2 BLACK BOX AND WHITE BOX TESTING
2.1 Introduction
2.2 Black Box Testing
2.3 Testing Strategies/Techniques
2.4 Black Box Testing Methods
2.5 Black Box (Vs) White Box
2.6 White Box Testing
3 GUI TESTING
3.1 Section 1 - Windows Compliance Testing
3.2 Section 2 - Screen Validation Checklist
3.3 Specific Field Tests
3.4 Validation Testing - Standard Actions
4 REGRESSION TESTING
4.1 What Is Regression Testing
4.2 Test Execution
4.3 Change Request
4.4 Bug Tracking
4.5 Traceability Matrix
5 PHASES OF TESTING
5.1 Introduction
5.2 Types and Phases of Testing
5.3 The "V" Model
6 INTEGRATION TESTING
6.1 Generalization of Module Testing Criteria
7 ACCEPTANCE TESTING
7.1 Introduction - Acceptance Testing
7.2 Factors Influencing Acceptance Testing
7.3 Conclusion
8 SYSTEM TESTING
8.1 Introduction to System Testing
8.2 Need for System Testing
8.3 System Testing Techniques
8.4 Functional Techniques
8.5 Conclusion
9 UNIT TESTING
9.1 Introduction to Unit Testing
9.2 Unit Testing - Flow
Results
Unit Testing - Black Box Approach
Unit Testing - White Box Approach
Unit Testing - Field Level Checks
Unit Testing - Field Level Validations
Unit Testing - User Interface Checks
9.3 Execution of Unit Tests
Unit Testing Flow
Disadvantage of Unit Testing
Method for Statement Coverage
Race Coverage
9.4 Conclusion
10 TEST STRATEGY
10.1 Introduction
10.2 Key Elements of Test Management
10.3 Test Strategy Flow
10.4 General Testing Strategies
10.5 Need for Test Strategy
10.6 Developing a Test Strategy
10.7 Conclusion
11 TEST PLAN
11.1 What Is a Test Plan?
Contents of a Test Plan
11.2 Contents (In Detail)
12 TEST DATA PREPARATION - INTRODUCTION
12.1 Criteria for Test Data Collection
12.2 Classification of Test Data Types
12.3 Organizing the Data
12.4 Data Load and Data Maintenance
12.5 Testing the Data
12.6 Conclusion
13 TEST LOGS - INTRODUCTION
13.1 Factors Defining the Test Log Generation
13.2 Collecting Status Data
14 TEST REPORT
14.1 Executive
15 DEFECT MANAGEMENT
15.1 Defect
15.2 Defect Fundamentals
15.3 Defect Tracking
15.4 Defect Classification
15.5 Defect Reporting Guidelines
16 AUTOMATION
16.1 Why Automate the Testing Process?
16.2 Automation Life Cycle
16.3 Preparing the Test Environment
16.4 Automation Methods
17 GENERAL AUTOMATION TOOL COMPARISON
17.1 Functional Test Tool Matrix
17.2 Record and Playback
17.3 Web Testing
17.4 Database Tests
17.5 Data Functions
17.6 Object Mapping
17.7 Image Testing
17.8 Test/Error Recovery
17.9 Object Name Map
17.10 Object Identity Tool
17.11 Extensible Language
17.12 Environment Support
17.13 Integration
17.14 Cost
17.15 Ease of Use
17.16 Support
17.17 Object Tests
17.18 Matrix
17.19 Matrix Score
18 SAMPLE TEST AUTOMATION TOOL
18.1 Rational Suite of Tools
18.2 Rational Administrator
18.3 Rational Robot
18.4 Robot Login Window
18.5 Rational Robot Main Window - GUI Script
18.6 Record and Playback Options
18.7 Verification Points
18.8 About SQABasic Header Files
18.9 Adding Declarations to the Global Header File
18.10 Inserting a Comment into a GUI Script
18.11 About Data Pools
18.12 Debug Menu
18.13 Compiling the Script
18.14 Compilation Errors
20 SUPPORTED ENVIRONMENTS
20.1 Operating System
20.2 Protocols
20.3 Web Browsers
20.4 Markup Languages
20.5 Development Environments
21 PERFORMANCE TESTING
21.1 What Is Performance Testing?
21.2 Why Performance Testing?
21.3 Performance Testing Objectives
21.4 Pre-requisites for Performance Testing
21.5 Performance Requirements
22 PERFORMANCE TESTING PROCESS
22.1 Phase 1 - Requirements Study
22.2 Phase 2 - Test Plan
22.3 Phase 3 - Test Design
22.4 Phase 4 - Scripting
22.5 Phase 5 - Test Execution
22.6 Phase 6 - Test Analysis
22.7 Phase 7 - Preparation of Reports
22.8 Common Mistakes in Performance Testing
22.9 Benchmarking Lessons
23 TOOLS
23.1 LoadRunner 6.5
23.2 WebLoad 4.5
23.3 Architecture Benchmarking
23.4 General Tests
24 PERFORMANCE METRICS
24.1 Client Side Statistics
24.2 Server Side Statistics
24.3 Network Statistics
24.4 Conclusion
25 LOAD TESTING
25.1 Why Is Load Testing Important?
25.2 When Should Load Testing Be Done?
26 LOAD TESTING PROCESS
26.1 System Analysis
26.2 User Scripts
26.3 Settings
26.4 Performance Monitoring
26.5 Analyzing Results
26.6 Conclusion
27 STRESS TESTING
27.1 Introduction to Stress Testing
27.2 Background to Automated Stress Testing
27.3 Automated Stress Testing Implementation
27.4 Programmable Interfaces
27.5 Graphical User Interfaces
27.6 Data Flow Diagram
27.7 Techniques Used to Isolate Defects
28 TEST CASE COVERAGE
28.1 Test Coverage
28.2 Test Coverage Measures
28.3 Procedure-Level Test Coverage
28.4 Line-Level Test Coverage
28.5 Condition Coverage and Other Measures
28.6 How Test Coverage Tools Work
28.7 Test Coverage Tools at a Glance
29 TEST CASE POINTS (TCP)
29.1 What Is a Test Case Point (TCP)
29.2 Calculating the Test Case Points
29.3 Chapter Summary
1.2 The Testing Process and the Software Testing Life Cycle
Every testing project has to follow the waterfall model of the testing process. The waterfall model is as given below:

1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting

According to the respective projects, the scope of testing can be tailored, but the process mentioned above is common to any testing activity. Software Testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the
software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till the implementation, there needs to be testing done in every phase. The V-Model of the Software Testing Life Cycle, along with the Software Development Life Cycle given below, indicates the various phases or levels of testing.
[Figure: SDLC & STLC - the V-Model. Development phases on the left arm: Requirement Study, High Level Design, Low Level Design, Unit Testing. Corresponding test phases on the right arm: Production Verification Testing, User Acceptance Testing, System Testing, Integration Testing.]
System Testing: Testing the software for the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether or not to accept the system.

Performance Testing: To evaluate the time taken or response time of the system to perform its required functions in comparison.

Stress Testing: To evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory, processor utilization) to ensure the system does not break unexpectedly.

Load Testing: Load testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.
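The load-testing definition above can be sketched in a few lines of code. This is a minimal illustration only, not a real load-test harness: the function `operation` is a hypothetical stand-in (a local sleep) for an actual request to the system under test, and the one-second acceptability threshold is an assumed requirement.

```python
import threading
import time

def operation():
    # Hypothetical stand-in for a real request to the system under test.
    time.sleep(0.01)

def user(results, i):
    # Each simulated user records the response time it observed.
    start = time.perf_counter()
    operation()
    results[i] = time.perf_counter() - start

N = 20  # number of concurrent users (assumed for illustration)
results = [None] * N
threads = [threading.Thread(target=user, args=(results, i)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

worst = max(results)
print(f"{N} concurrent users, worst response time: {worst:.3f}s")
assert worst < 1.0  # acceptable-response-time criterion (assumed)
```

A real load test would replace `operation` with calls against the deployed site and ramp `N` up to the target concurrency.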
This chapter covered the introduction and basics of software testing, touching on:

- Evolution of Software Testing
- The Testing Process and Life Cycle
- Broad Categories of Testing
- Widely Employed Types of Testing
- The Testing Techniques
Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.

White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether!
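The contrast between the two design styles can be made concrete with a small sketch. The function `absolute` below is hypothetical, invented for illustration: its spec says only "return the absolute value of x", but its implementation hides an internal fast path that a black-box tester cannot know about.

```python
def absolute(x):
    # Internal fast path for small integers - an implementation detail
    # that is invisible from the specification.
    if -2 <= x <= 2:
        return {-2: 2, -1: 1, 0: 0, 1: 1, 2: 2}[x]
    return -x if x < 0 else x

# Black-box (behavioral) tests: derived only from the specification.
for given, expected in [(-10, 10), (0, 0), (7, 7)]:
    assert absolute(given) == expected

# White-box (structural) tests: chosen after reading the code, so that
# BOTH internal branches (fast path and general path) are executed.
for given, expected in [(-1, 1), (2, 2), (-100, 100), (50, 50)]:
    assert absolute(given) == expected

print("all tests passed")
```

Note that the black-box suite happens to exercise both branches here, but only by luck; the white-box suite targets them deliberately.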
An example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts are properly organised and stored, and, last but not least, that individual translators are not too motivated to change their working habits.

Laboratory tests are mostly performed to assess the general usability of the system. Due to the high cost of laboratory equipment, laboratory tests are mostly performed only at big software houses such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests.
Disadvantages of black box testing:

- May leave many program paths untested
- Cannot be directed toward specific segments of code, which may be very complex (and therefore more error-prone)
- Most testing-related research has been directed toward glass box testing
Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a black box with lights, switches, and dials on the outside. You must test it without opening it up, and you can't see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the "blackness" is blocking.
An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply probes internally, and maybe even disassemble parts of it. By analogy, this is called white box testing.

To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:

1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)
Let's use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing?

Some people know how software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called "black box" testing. Developer testing is called "white box" testing. The distinction here is based on what the person knows or can understand.
Coverage: What gets tested?

If we draw the box around the system as a whole, "black box" testing becomes another name for system testing, and testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria. Both are supported by extensive literature and commercial tools. Requirements-based testing could be called "black box" because it makes sure that all the customer requirements have been verified. Code-based testing is often called "white box" because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing?
Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as "white box". Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk, and could be termed "black box".
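Boundary testing, mentioned above, deserves a quick sketch. Assume a hypothetical rule (invented for this example) that an input is valid when its length is between 1 and 80 inclusive; boundary tests then cluster at each edge and its neighbours, where off-by-one defects typically hide.

```python
def is_valid_length(n):
    # Hypothetical validation rule: valid when 1 <= n <= 80.
    return 1 <= n <= 80

# Boundary-value cases: each edge of the valid range plus its neighbours.
cases = {0: False, 1: True, 2: True, 79: True, 80: True, 81: False}
for value, expected in cases.items():
    assert is_valid_length(value) == expected, value

print("boundary cases pass")
```

Had the implementation mistakenly used `1 < n <= 80` or `1 <= n < 80`, exactly these edge cases (1 or 80) would catch it, while mid-range values would not.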
Activities: How do you test?
A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on external functional definition, it is often called "black box," while structural testing, based on the code internals, is called "white box." Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) to black box testing, and code inspection (static testing) to white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and tools that facilitate applying inputs and capturing outputs, most notably GUI capture/replay tools, as black box tools. Testing is then categorized based on the types of tools used.
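The behavioral/structural distinction can be sketched on a single hypothetical function. The triangle classifier below is a stand-in example, not from the original text: the first pair of tests is derived only from the stated requirement (black box), the second pair from reading the code's branches (white box).

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths (example function under test)."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Behavioral (black box) tests: derived only from the requirement,
# without looking at the code.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# Structural (white box) tests: derived by reading the code so that
# each guard in the first `if` is exercised at least once.
assert classify_triangle(0, 1, 1) == "invalid"    # a <= 0 branch
assert classify_triangle(1, 2, 3) == "invalid"    # a + b <= c branch
assert classify_triangle(2, 2, 3) == "isosceles"  # a == b branch
```

Note that both test sets run the same way; only the source of the test design differs.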
Evaluation: How do you know if you've found a bug?
There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these
techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program. White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required.
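An assertion used as instrumentation can surface a fault that the program's official outputs would mask. The account class and its invariant below are illustrative assumptions, not from the original text: the missing overdraft check is only caught because the invariant makes the bad internal state visible at the moment it occurs.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount  # bug: no overdraft check
        # White box instrumentation: the invariant exposes the hidden
        # fault at the point where the internal state goes bad.
        assert self.balance >= 0, "invariant violated: negative balance"
        return amount

acct = Account(100)
acct.withdraw(30)            # fine: balance is now 70
try:
    acct.withdraw(100)       # the invariant trips here
    failed = False
except AssertionError:
    failed = True
```

A pure black box test that only checked the returned amount would have passed both calls; the assertion is what turned the latent fault into a visible failure.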
White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in the determination of suitable input data and in deciding whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flowgraphs and determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. The consequences of test failure at this stage may be very expensive: a failure of a white box test may result in a change which requires all black box testing to be repeated and the white box paths to be re-determined. To conclude, apart from the above-described analytical methods of both glass box and black box testing, there are further constructive means to guarantee high quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and last but not least the involvement of users in both software development and testing procedures.
Summary:
Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risks); or behavioral testing or capture/replay automation (activities). White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).
Types of White Box Testing
A typical rollout of a product is shown in figure 1 below.
The purpose of white box testing:
- Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
- Provide a complementary function to black box testing.
- Perform complete coverage at the component level.
- Improve quality by optimizing performance.
Practices:
This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:
1. The allocation of resources to perform class and method analysis and to document and review the same.
2. Developing a test harness made up of stubs, drivers and test object libraries.
3. Development and use of standard procedures, naming conventions and libraries.
4. Establishment and maintenance of regression test suites and procedures.
5. Allocation of resources to design, document and manage a test history library.
6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities.
Selects test paths according to the locations of definitions and uses of variables.
Loop Testing
Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, and unstructured. Examples:
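The classic battery for a simple loop exercises it with zero iterations, exactly one iteration, a typical count, and the maximum count. A minimal sketch, in which the summing function and its iteration limit are illustrative assumptions:

```python
MAX_ITEMS = 100  # assumed upper bound on the loop

def total(values):
    """Sum at most MAX_ITEMS values (the loop under test)."""
    result = 0
    for i, v in enumerate(values):
        if i >= MAX_ITEMS:
            break
        result += v
    return result

assert total([]) == 0                 # zero iterations
assert total([7]) == 7                # exactly one iteration
assert total([1, 2, 3]) == 6          # typical number of iterations
assert total([1] * 150) == MAX_ITEMS  # maximum iterations; excess ignored
```

For nested loops the same battery is applied to the innermost loop first, holding the outer loops at their minimum counts.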
Note that unstructured loops are not tested as they stand; rather, they are redesigned into structured constructs.
Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
- Conditions that the client must meet before a method is invoked.
- Conditions that a method must meet after it executes.
- Assertions that a method must satisfy at specific points of its execution.
Tools that check DbC contracts at runtime, such as Jcontract [http://www.parasoft.com/products/jtract/index.htm], are used to perform this function.
Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that consume the majority of the CPU time so that problems may be tracked down and performance improved. Options include the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK; third-party tools such as JaViz may also be used.
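In the absence of a contract-checking tool, the DbC conditions listed above (preconditions and postconditions) can be sketched with plain assertions. The square-root routine and its contract are illustrative assumptions, not from the original text:

```python
import math

def sqrt_checked(x):
    # Precondition: the client must supply a non-negative number.
    assert x >= 0, "precondition violated: x must be >= 0"
    result = math.sqrt(x)
    # Postcondition: the method guarantees result squared is (nearly) x.
    assert abs(result * result - x) < 1e-9, "postcondition violated"
    return result

assert sqrt_checked(9.0) == 3.0
try:
    sqrt_checked(-1.0)   # client breaks the contract
    violated = False
except AssertionError:
    violated = True
```

A runtime contract checker automates exactly this kind of check from declarative annotations, instead of hand-written asserts.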
" Error &andling 43ception and error handling is chec#ed thoroughly are simulating partial and complete fail-over by operating on error causing test vectors. 6roper error recovery) notification and logging are chec#ed against references to validate program design. + Transactions Systems that employ transaction) local or distributed) may be validated to ensure that !C&8 A!tomicity) Consistency) &solation) 8urabilityB. 4ach of the individual parameters is tested individually against a reference data set. Transactions are chec#ed thoroughly for partialKcomplete commits and rollbac#s encompassing databases and other F! compliant transaction processors. (dvantages of #hite Bo. Testing ;orces test developer to reason carefully about implementation !ppro3imate the partitioning done by e3ecution e(uivalence <eveals errors in HhiddenH code ?eneficent side-effects isadvantages of #hite Bo. Testing 43pensive Cases omitted in the code could be missed out.
GUI Testing
What is GUI Testing? GUI is the abbreviation for Graphic User Interface. It is absolutely essential that any application be user-friendly. The end user should be comfortable while using all the components on screen, and the components should also perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI Testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved. The following is a set of guidelines to ensure effective GUI Testing, and can be used even as a checklist while testing a product / application.
Never-updateable fields should be displayed with black text on a gray background with a black label. All text should be left justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status. List boxes always have a white background with black text, whether they are disabled or not. All others are gray. In general, double-clicking should not be essential; everything should be possible using both the mouse and the keyboard. All tab buttons should have a distinct letter.
3.1.5 Drop-Down List Boxes
Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type text in the box. Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing 'Ctrl-F4' should open/drop down the list box.
Spacing should be compatible with the existing Windows spacing (Word, etc.). Items should be in alphabetical order, with the exception of blank/none, which belongs at the top or the bottom of the list box. A drop-down with an item selected should display the list with the selected item on top. Make sure only one space appears, and that there is no blank line at the bottom of the list.
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs, does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30-character field should be a lot longer than a shorter one.
3.2.5
1. Is the data saved when the window is closed by double-clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters.
3. Where the database requires a value (other than null), it should be defaulted into the field. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields.
5. If numeric fields accept negative values, can these be stored correctly on the database, and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C, what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database, check that each value gets saved fully, i.e. beware of truncation (of strings) and rounding of numeric values.
6. In drop-down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present, i.e. make sure they don't act on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and the same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
20. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
26. Assure that the Tab key sequence which traverses the screen does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl+F6 opens the next tab within a tabbed window.
32. Shift+Ctrl+F6 opens the previous tab within a tabbed window.
33. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.
34. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
35. Tabbing will go onto the next editable field in the window.
36. Banner style & size & display exactly the same as existing windows.
37. If 8 or fewer options in a list box, display all options on open of the list box - there should be no need to scroll.
38. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
40. On open of a tab, focus will be on the first editable field.
41. All fonts to be the same.
42. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button.
44. Ensure all fields are disabled in read-only mode.
45. Progress messages on load of tabbed screens.
46. Return operates continue.
47. If retrieve on load of a tabbed window fails, the window should not open.
- Include the value zero in all calculations.
- Include at least one in-range value.
- Include the maximum and minimum range values.
- Include out-of-range values above the maximum and below the minimum.
- Assure that upper and lower values in ranges are handled correctly.
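The checklist above maps directly onto a boundary-value test set. A minimal sketch, using a hypothetical field that must accept integers from 1 to 120 inclusive (the range is an illustrative assumption):

```python
MIN_AGE, MAX_AGE = 1, 120  # assumed valid range for the field

def is_valid_age(value):
    """Range check for the hypothetical field under test."""
    return MIN_AGE <= value <= MAX_AGE

assert not is_valid_age(0)            # zero / just below the minimum
assert is_valid_age(MIN_AGE)          # lower boundary itself
assert is_valid_age(35)               # at least one in-range value
assert is_valid_age(MAX_AGE)          # upper boundary itself
assert not is_valid_age(MAX_AGE + 1)  # out of range above the maximum
```

Most off-by-one defects cluster at exactly the values this set exercises.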
F4: Ctrl+F4 closes the child window; Alt+F4 closes the document / application.
F6: Ctrl+F6 moves to the next open document or child window (adding SHIFT reverses the order of movement).
F8: toggles extend mode, if supported; Shift+F8 toggles add mode, if supported.
Alt+Tab: switches to the previously used application (holding down the ALT key displays all open applications).
* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.
The selective retesting of a software system that has been modified to ensure that any defects have been fixed and that no previously working functions have failed as a result of the changes.
automated tests for you. A test cycle is complete only when all tests, automatic and manual, have been run. With Manual Test Execution you follow the instructions in the test steps of each test: you use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status. During Automated Test Execution you create a batch of tests and launch the entire batch at once. The testing tool runs the tests one at a time, then imports the results, providing outcome summaries for each test.
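The automated batch execution described above can be sketched as a small runner that executes each test in turn and records a pass/fail outcome. The two test functions are illustrative placeholders, not from the original text:

```python
def test_login():
    assert 1 + 1 == 2           # stands in for a real application check

def test_logout():
    assert "a".upper() == "A"   # stands in for a real application check

def run_batch(tests):
    """Execute each test, log its outcome, and return the summary."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "pass"
        except AssertionError:
            results[test.__name__] = "fail"
    return results

summary = run_batch([test_login, test_logout])
assert summary == {"test_login": "pass", "test_logout": "pass"}
```

Commercial testing tools add scheduling, result import, and reporting around this same core loop.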
Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process. Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.
Communication is an essential part of bug tracking; all members of the development and quality assurance team must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed. The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
product tested to meet the requirement. Below is a simple traceability matrix structure. There can be more things included in a traceability matrix than shown below. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
Traceability ensures completeness: that all lower level requirements derive from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used in managing change and provides the basis for test planning.
SAMPLE TRACEABILITY MATRIX
A traceability matrix is a report from the requirements database or repository. The examples below show traceability between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S."
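The completeness checks that a traceability matrix supports can be sketched programmatically: every user requirement must be allocated to at least one system requirement, and every system requirement must trace back to some user requirement. The identifiers below are illustrative:

```python
# Hypothetical matrix: user requirement -> allocated system requirements.
trace = {
    "U1": ["S10", "S11"],
    "U2": ["S11"],
    "U3": [],             # user requirement with no allocation
}
system_reqs = {"S10", "S11", "S12"}

unallocated = [u for u, targets in trace.items() if not targets]
traced = {s for targets in trace.values() for s in targets}
orphans = sorted(system_reqs - traced)  # no higher-level source

assert unallocated == ["U3"]
assert orphans == ["S12"]  # a system requirement with no source is suspect
```

A requirements repository runs equivalent queries to produce the matrix report.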
Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.
In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving the report(s). Determine their information needs and document the information that will be associated with the requirements when you set up your requirements database or repository.
5 Phases of Testing
5.1 Introduction
The primary objective of the testing effort is to determine conformance to the requirements specified in the contracted documents. The integration of this code with the internal code is another important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (system testing, acceptance testing, unit testing, installation, etc.).
[Figure: the V-model pairs each development phase with the test phase that verifies it:
Requirements - Acceptance Testing
Specification - System Testing
Architecture - Integration Testing
Detailed Design - Unit Testing
Coding (at the point of the V)]
[Table: deliverables by phase. The Requirement Study produces the Requirement Checklist and Software Requirement Specification; the Functional Specification Checklist accompanies the Functional Specification Document; these feed the Architecture Design, the Detailed Design Document, and Coding. The corresponding test deliverables include the Unit, Integration, System, and Regression Test Case documents, Performance Criteria with Performance Test Cases and Scenarios, and User Acceptance Test Case Documents/Scenarios.]
[Figure: the V-model with verification activities. Each left-leg phase has a static review (Requirements Review, Specification Review, Architecture Review, Design Review, Code Walkthrough) paired against Unit Testing, Integration Testing, System Testing, and the Regression Rounds on the right leg.]
6 Integration Testing
"ne of the most significant aspects of a software development pro+ect is the integration strategy. &ntegration may be performed all at once) top-down) bottom-up) critical piece first) or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. &n general) the larger the pro+ect) the more important the integration strategy. =ery small systems are often assembled and tested in one phase. ;or most real systems) this is impractical for two ma+or reasons. ;irst) the system would fail in so many places at once that the debugging and retesting effort would be impractical Second) satisfying any white bo3 testing criterion would be very difficult) because of the vast amount of detail separating the input data from the individual code modules. &n fact) most integration testing has been traditionally limited to JJblac# bo3II techni(ues. >arge systems may re(uire many integration phases) beginning with assembling modules into low-level subsystems) then assembling subsystems into larger subsystems) and finally assembling the highest level subsystems into the complete system. To be most effective) an integration testing techni(ue should fit well with the overall integration strategy. &n a multi-phase integration) testing at each phase helps detect errors early and #eep the system under control. 6erforming only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really +ust a variant of the high-ris# Hbig bangH approach. Gowever) performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The #ey is to leverage the overall integration structure to allow rigorous testing at each phase while minimiCing duplication of effort. &t is important to understand the relationship between module testing and integration testing. 
In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then, integration testing concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.
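Testing a module in isolation with a stub and a driver, as in the first view above, can be sketched as follows. The pricing module and the tax-service collaborator are illustrative assumptions, not from the original text:

```python
def get_tax_rate_stub(region):
    """Stub: stands in for a not-yet-integrated tax service."""
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

def price_with_tax(net, region, get_tax_rate):
    """Module under test; its collaborator is injected so a stub can replace it."""
    return round(net * (1 + get_tax_rate(region)), 2)

def driver():
    """Driver: exercises the module with the stub before any integration."""
    assert price_with_tax(100.0, "EU", get_tax_rate_stub) == 120.0
    assert price_with_tax(100.0, "XX", get_tax_rate_stub) == 100.0
    return "pass"

assert driver() == "pass"
```

At integration time the stub is replaced by the real tax service, and the tests shift to exercising the interaction between the two modules rather than each module's internals.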
that it is possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls. The sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves the cyclomatic complexity unchanged. However, it does simplify the graph so that the other rules can be applied. The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the "top" of the loop from the "bottom." For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one. Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.
Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.
To extend statement coverage to support incremental integration, it is required that all module call statements from one component into a different component be exercised at each integration stage. To form a completely flexible "statement testing" criterion, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort to cover each statement of the system at each integration phase. Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls. Figure 7-7 illustrates the structured testing approach to incremental integration. Modules A and C have been previously integrated, as have modules B and D. It would take three tests to integrate this system in a single phase. However, since the design predicate decision to call module D from module B has been tested in a previous phase, only two additional tests are required to complete the integration testing. Modules B and D are removed from consideration because they do not contain cross-component calls; the component module design complexity of module A is 1, and the component module design complexity of module C is 2.
7 Acceptance Testing
7.1 Introduction to Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria, and thus whether the customer should accept the system. The main types of software testing are: Component, Interface, System, Acceptance, and Release. Acceptance Testing checks the system against the "Requirements". It is similar to system testing in that the whole system is checked, but the important difference is the change in focus: System Testing checks that the system that was specified has been delivered; Acceptance Testing checks that the system delivers what was requested. The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs. These are the test procedures that lead to formal 'acceptance' of new or changed systems. User Acceptance Testing is a critical phase of any 'systems' project and requires significant participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'Acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.
Critical Problem: testing can continue, but we cannot go into production (live) with this problem.
Major Problem: testing can continue, but this feature will cause severe disruption to business processes in live operation.
Medium Problem: testing can continue, and the system is likely to go live with only minimal departure from agreed business processes.
Minor Problem: both testing and live operations may progress. This problem should be corrected, but little or no change to business processes is envisaged.
'Cosmetic' Problem: e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.
The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problem at severity level 1 receives a priority response and that all testing ceases until such level 1 problems are resolved.
Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem to its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples be agreed in advance, to ensure that there are no fundamental areas of disagreement, or, if there are, that these are known in advance and your organisation is forewarned.
Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely fault free, the maximum number of acceptable 'outstandings' in any particular category must be agreed between End User and vendor. Again, prior consideration of this is advisable. N.B. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions.
These conditions need to be analysed, as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. In any event, any and all fixes from the software developers must be subjected to rigorous System Testing and, where appropriate, Regression Testing.
7.3 Conclusion
Hence the goal of acceptance testing should be to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.
7 SYSTEM TESTING
7.1 Introduction to System Testing
For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs and improved 'time to market' for new systems. Systems with software components, and software-intensive systems, grow more complex every day; industry sectors such as telecoms, automotive, railway, aeronautics and space are good examples. It is generally agreed that testing is essential to manufacture reliable products, yet the validation process often does not receive the attention it requires. Moreover, the validation process is close to other activities such as conformance, acceptance and qualification testing.
The difference between function testing and system testing is that the focus is now on the whole application and its environment. The program therefore has to be provided in its entirety. This does not mean that single functions of the whole program are tested again, because that would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again raises the question ''Did we build the right product?'' and not just ''Did we build the product right?'' System testing does not only deal with this economic problem; it also covers aspects oriented towards the word ''system''. This means that those tests should be done in the environment for which the program was designed, such as a multi-user network. Even security guidelines have to be included. Once again, it is beyond doubt that this test cannot be done completely; nevertheless, while it is one of the most incomplete test methods, it is one of the most important. A
number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle. In this paper we examine the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks, using different strategies for test data selection.
We will test that the functionality of your systems meets your specifications, integrating with whichever type of development methodology you are applying. We test for errors that users are likely to make as they interact with the application, as well as your application's ability to trap errors gracefully. These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or games. System Testing is more than just functional testing, however, and can, when appropriate, also encompass many other types of testing, such as:
o security
o load/stress
o performance
o browser compatibility
o localisation
Reduce rework and support overheads. More effort spent on developing new functionality and less on 'bug fixing' as quality increases. If it goes wrong, what is the potential impact on your commercial goals? Knowledge is power, so why take a leap of faith while your competition steps forward with confidence?
These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity. Your test strategy must take into consideration the risks to your organisation, both commercial and technical. You will have a personal interest in the project's success, in which case it is only human for your objectivity to be compromised.
Techniques can be structural or functional. In practice, testing is often ad hoc and looks a lot like debugging, but more structured approaches exist.
7.8 Conclusion:
Hence the System Test phase should begin once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially when using the top-down method.
Unit Testing
Programmers know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code. The careful programmer knows that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that forms their application.
Concepts in Unit Testing:
- The most 'micro' scale of testing.
- Tests particular functions or code modules.
- Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code.
- Not always easily done unless the application has a well-designed architecture with tight code.
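As a minimal sketch of unit testing at this 'micro' scale, here is a single module function exercised with Python's built-in unittest framework. The function under test is a made-up example, not taken from any system described in this document.

```python
import unittest

def discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    # The tests rely on detailed knowledge of the function's design:
    # both its rounding rule and its error path are checked.
    def test_normal_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

Such tests would typically be run by the programmer with `python -m unittest` before the module is handed on to integration testing.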
Unit Testing – User Interface Checks:
- Readability of the controls
- Tool tips
- Validation
- Ease of use of interface
- Across-tab related checks
- User interface dialogs
Proprietary & Confidential -
- GUI compliance checks
Statement Coverage:
This measure reports whether each executable statement is encountered.
ADVANTAGE:
- Can be applied directly to object code and does not require processing source code.
- Performance profilers commonly implement this measure.
Method for Decision Coverage:
- Design a test case for the pass/failure of every decision point.
- Select a unique set of test cases.
DECISION COVERAGE:
- This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false.
- The entire Boolean expression is considered one true-or-false predicate, regardless of whether it contains logical-and or logical-or operators.
- Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers.
- Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path testing.
- "Basis path" testing selects paths that achieve decision coverage.
ADVANTAGE: Simplicity without the problems of statement coverage.
DISADVANTAGE:
- This measure ignores branches within Boolean expressions which occur due to short-circuit operators.
Method for Condition Coverage:
- Test whether every condition (sub-expression) in a decision evaluates to both true and false.
- Select a unique set of test cases.
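The difference between decision, condition and multiple condition coverage can be sketched with a hypothetical two-condition decision and the test cases each measure demands. The function and the cases below are illustrative assumptions, not taken from a real system.

```python
from itertools import product

# Hypothetical decision with two sub-conditions.
def access_allowed(is_admin, has_token):
    return bool(is_admin or has_token)   # one decision, two conditions

# Decision coverage: the predicate as a whole must be both True and False.
decision_cases = [(True, False),    # whole decision True
                  (False, False)]   # whole decision False

# Condition coverage: each sub-condition must be both True and False.
# These two cases achieve that, yet the decision itself is never False,
# showing that condition coverage does not imply decision coverage.
condition_cases = [(True, False),
                   (False, True)]

# Multiple condition coverage: every truth-table combination (2**2 here).
multiple_condition_cases = list(product([False, True], repeat=2))
```

Note how the short-circuit `or` means `has_token` is never even evaluated in the first decision case, the blind spot the DISADVANTAGE above describes.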
CONDITION COVERAGE:
- Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur.
- Condition coverage measures the sub-expressions independently of each other.
MULTIPLE CONDITION COVERAGE:
- Reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present.
- The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition.
DISADVANTAGE:
- Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions.
- The number of test cases required can vary substantially among conditions of similar complexity.
CONDITION/DECISION COVERAGE:
- A hybrid measure composed of the union of condition coverage and decision coverage.
- It has the advantage of simplicity without the shortcomings of its component measures.
PATH COVERAGE:
- This measure reports whether each of the possible paths in each function has been followed. A path is a unique sequence of branches from the function entry to the exit.
- Also known as predicate coverage. Predicate coverage views paths as possible combinations of logical conditions.
- Path coverage has the advantage of requiring very thorough testing.
FUNCTION COVERAGE:
- This measure reports whether you invoked each function or procedure.
- It is useful during preliminary testing to assure at least some coverage in all areas of the software.
- Broad, shallow testing finds gross deficiencies in a test suite quickly.
LOOP COVERAGE:
This measure reports whether you executed each loop body zero times, exactly once, twice, and more than twice (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once, and more than once. The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by other measures.
RACE COVERAGE:
This measure reports whether multiple threads execute the same code at the same time. It helps detect failure to synchronize access to resources, and is useful for testing multi-threaded programs such as an operating system.
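Loop coverage test-case design can be sketched against a hypothetical summing loop: the inputs are chosen so the loop body runs zero times, once, twice, and more than twice, per the measure described above.

```python
# Hypothetical function whose for-loop body runs len(values) times.
def total(values):
    s = 0
    for v in values:        # loop body under measurement
        s += v
    return s

# Loop coverage asks for the body to execute 0, 1, 2, and >2 times:
loop_cases = {
    "zero_iterations": [],
    "one_iteration":   [5],
    "two_iterations":  [1, 2],
    "many_iterations": [1, 2, 3, 4],
}
results = {name: total(vals) for name, vals in loop_cases.items()}
```

The zero-iteration case is the one most often forgotten, and the one most likely to expose initialisation faults.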
Conclusion
Testing, irrespective of the phase, should encompass the following:
- The cost of failure associated with defective products being shipped and used by customers is enormous.
- To find out whether the integrated product works as per the customer requirements.
- To evaluate the product with an independent perspective.
- To identify as many defects as possible before the customer finds them.
- To reduce the risk of releasing the product.
10 Test Strategy
10.1 Introduction
This document provides a better insight into the test strategy and its methodology. It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. The testing strategy should define the objectives of all test stages and the techniques that apply. The testing strategy also forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process and its implications outside of the test discipline. Any test support tools introduced should be aligned with, and in support of, the test strategy. 'Test Approach' and 'Test Architecture' are other common names for the test strategy. Test management is also concerned with both test resource and test environment management.
responsibility for testing and commissioning is buried deep within the supply chain as a sub-contract of a sub-contract. It is possible to gain greater control of this process and the associated risk through the use of specialists such as Systems Integration, who can be appointed as part of the professional team. The time necessary for testing and commissioning will vary from project to project, depending upon the complexity of the systems and services that have been installed. The Project Sponsor should ensure that the professional team and the contractor consider realistically how much time is needed.
Fitness for purpose checklist:
- Is there a documented testing strategy that defines the objectives of all test stages and the techniques that may apply, e.g. non-functional testing and the associated techniques such as performance, stress and security testing?
- Does the test plan prescribe the approach to be taken for intended test activities, identifying: the items to be tested, the testing to be performed, test schedules, resource and facility requirements, reporting requirements, evaluation criteria, and risks requiring contingency measures?
- Are test processes and practices reviewed regularly to ensure that they continue to meet specific business needs? For example, e-commerce testing may involve new user interfaces, and a business focus on usability may mean that the organization must review its testing strategies.
Create a means to generate and apply large numbers of decision scenarios to the product. This will be done using the GUI test automation system, or through the direct generation of DecideRight scenario files that would be loaded into the product during test. Review the documentation and the design of the user interface and functionality for sensitivity to user error. Test with decision scenarios that are near the limit of complexity allowed by the product. Compare complex scenarios. Test the product for the risk of silent failures or corruptions in decision analysis.
Issues in Execution of the Test Strategy:
- The difficulty of understanding and simulating the decision algorithm.
- The risk of coincidental failure of both the simulation and the product.
- The difficulty of automating decision tests.
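Bulk generation of decision scenarios, as proposed above, might be sketched as follows. The CSV layout, field names and complexity limits are illustrative assumptions for this sketch; the actual DecideRight scenario file format is not described in this document.

```python
import csv
import itertools
import random

random.seed(1)                       # repeatable scenario generation

MAX_OPTIONS, MAX_CRITERIA = 10, 10   # assumed complexity limits

def make_scenario(n_options, n_criteria):
    """Build one decision scenario as option/criterion weight rows."""
    rows = []
    for o, c in itertools.product(range(n_options), range(n_criteria)):
        rows.append({"option": f"opt{o}", "criterion": f"crit{c}",
                     "weight": round(random.random(), 3)})
    return rows

def write_scenario(path, rows):
    """Write a scenario file in the assumed CSV layout."""
    with open(path, "w", newline="") as f:
        w = csv.DictWriter(f, fieldnames=["option", "criterion", "weight"])
        w.writeheader()
        w.writerows(rows)

# Generate a scenario at the assumed limit of allowed complexity.
rows = make_scenario(MAX_OPTIONS, MAX_CRITERIA)
```

Seeding the generator keeps a failing scenario reproducible, which matters given the risk of coincidental failure of both the simulation and the product.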
Test Factor – the risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application system. Test Phase – the phase of the systems development life cycle in which testing will occur.
Not all test factors will be applicable to all software systems. The development team will need to select and rank the test factors for the specific software system being developed. The test phases will vary based on the testing methodology used; for example, the test phases in a traditional waterfall life cycle methodology will be much different from the phases in a Rapid Application Development methodology.
10.6 Conclusion:
The test strategy should be developed in accordance with the business risks associated with the software when the test team develops the test tactics. Thus the test team needs to acquire and study the test strategy, which should question the following:
- What is the relationship of importance among the test factors?
- Which of the high-level risks are the most significant?
- What damage can be done to the business if the software fails to perform correctly?
- What damage can be done to the business if the software is not completed on time?
- Who are the individuals most knowledgeable in understanding the impact of the identified business risks?
Hence the test strategy must address the risks and present a process that can reduce those risks. By focusing on the risks, the strategy establishes the objectives for the test process.
11 TEST PLAN
11.1 What is a Test Plan?
A Test Plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. The main purpose of preparing a test plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement with the contents of the test plan, which also helps in case of any dispute during the course of the project (especially between the developers and the testers).
Scope
This section should talk about the areas of the application which are to be tested by the QA team, and specify those areas which are definitely out of scope (screens, database, mainframe processes etc.).
Test Approach
This contains details on how the testing is to be performed and whether any specific strategy is to be followed (including configuration management).
Entry Criteria
This section explains the various steps to be performed before the start of a test, i.e. prerequisites. For example: timely environment set-up, starting the web server / app server, successful implementation of the latest build etc.
Resources
This section should list the people who will be involved in the project, their designations etc.
Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the various members of the project.
Exit Criteria
Contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh etc.
Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.
Hardware / Software Requirements
This section contains the details of the PCs / servers required (with their configuration) to install the application or perform the testing; specific software that needs to be installed on the systems to get the application running or to connect to the database; connectivity related issues etc.
Risks & Mitigation Plans
This section should list all the possible risks that can arise during the testing, and the mitigation plans that the QA team intends to implement in case a risk actually turns into reality.
Tools to be Used
This lists the testing tools or utilities (if any) that are to be used in the project, e.g. WinRunner, Test Director, PCOM, WinSQL.
Deliverables
This section contains the various deliverables that are due to the client at various points of time, i.e. daily, weekly, start of the project, end of the project etc. These could include test plans, test procedures, test matrices, status reports, test scripts etc. Templates for all of these could also be attached.
References
- Procedures
- Templates (client-specific or otherwise)
- Standards / Guidelines, e.g. QView
- Project related documents (RSD, ADD, FSD etc.)
Annexure
This could contain embedded documents, or links to documents, which have been or will be used in the course of testing, e.g. templates used for reports, test cases etc. Referenced documents can also be attached here.
Sign-Off
This should contain the mutual agreement between the client and the QA team, with both leads / managers signing off their agreement on the Test Plan.
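The plan sections above lend themselves to a simple completeness check during review and sign-off. A minimal sketch follows; the list-of-strings representation is an assumption for illustration, not a mandated format.

```python
# Test plan sections listed above, used as a review checklist.
TEST_PLAN_SECTIONS = [
    "Scope", "Test Approach", "Entry Criteria", "Resources",
    "Tasks / Responsibilities", "Exit Criteria", "Schedules / Milestones",
    "Hardware / Software Requirements", "Risks & Mitigation Plans",
    "Tools to be Used", "Deliverables", "References", "Annexure", "Sign-Off",
]

def missing_sections(drafted):
    """Return the plan sections still to be written before sign-off."""
    return [s for s in TEST_PLAN_SECTIONS if s not in set(drafted)]

outstanding = missing_sections(["Scope", "Resources", "Deliverables"])
```

Running such a check before circulating the plan catches omissions early, which supports the dispute-avoidance purpose of the sign-off described above.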
12 Test Data Preparation
A system is programmed by its data. Functional testing can suffer if data is poor, and good data can help improve functional testing. Good test data can be structured to improve understanding and testability. Its contents, correctly chosen, can reduce maintenance effort and allow flexibility. Preparation of the data can help to focus the business where requirements are vague.
The first stage of any recognizer development project is data preparation. Test data should, however, be prepared which is representative of normal business transactions. Actual customer names or contact details should not be used for such tests. It is recommended that a full test environment be set up for use in the applicable circumstances. Each separate test should be given a unique reference number which will identify the business process being recorded, the simulated conditions used, the persons involved in the testing process and the date the test was carried out. This will enable the monitoring and testing reports to be coordinated with any feedback received.
Tests must be planned and thought out ahead of time; you have to decide such things as what exactly you are testing and testing for, the way the test is going to be run and applied, what steps are required, etc. Testing is the process of creating, implementing and evaluating tests. Effective quality control testing requires some basic goals and understanding:
- You must understand what you are testing; if you're testing a specific functionality, you must know how it's supposed to work, how the protocols behave, etc.
- You should have a definition of what success and failure are. In other words, is close enough good enough?
- You should have a good idea of a methodology for the test (the more formal a plan the better); you should design test cases.
- You must understand the limits inherent in the tests themselves.
- You must have a consistent schedule for testing; performing a specific set of tests at appropriate points in the process is more important than running the tests at a specific time.
Roles of Data in Functional Testing
Testing consumes and produces large amounts of data. Data describes the initial conditions for a test, forms the input, and is the medium through which the tester influences the software. Data is manipulated, extrapolated, summarized and referenced by the functionality under test, which finally spews forth yet more data to be checked against expectations. Data is a crucial part of most functional testing.
This paper sets out to illustrate some of the ways that data can influence the test process, and will show that testing can be improved by a careful choice of input data. In doing this, it will concentrate most on data-heavy applications: those which use databases or are heavily influenced by the data they hold. It will focus on input data, rather than output data or the transitional states the data passes through during processing, as input data has the greatest influence on functional testing and is the simplest to manipulate. The paper will not consider areas where data is important to non-functional testing, such as operational profiles, massive datasets and environmental tuning.
A SYSTEM IS PROGRAMMED BY ITS DATA
Many modern systems allow tremendous flexibility in the way their basic functionality can be used.
Configuration data can dictate control flow, data manipulation, presentation and user interface. A system can be configured to fit several business models, work (almost) seamlessly with a variety of cooperative systems, and provide tailored experiences to a host of different users. A business may look to an application's configurability to allow it to keep up with the market without being slowed by the development process; an individual may look for a personalized experience from commonly available software.
FUNCTIONAL TESTING SUFFERS IF DATA IS POOR
Tests with poor data may not describe the business model effectively, they may be hard to maintain, or they may require lengthy and difficult setup. They may obscure problems or avoid them altogether. Poor data tends to result in poor tests that take longer to execute.
GOOD DATA IS VITAL TO RELIABLE TEST RESULTS
An important goal of functional testing is to allow the test to be repeated with the same result, and varied to allow diagnosis. Without this, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results, whether they are good or bad. Good data allows diagnosis and effective reporting, and allows tests to be repeated with confidence.
GOOD DATA CAN HELP TESTING STAY ON SCHEDULE
An easily comprehensible and well-understood dataset is a tool to help communication. Good data can greatly assist in speedy diagnosis and rapid re-testing. Regression testing and automated test maintenance can be made speedier and easier by using good data, while an elegantly chosen dataset can often allow new tests without the overhead of new data.
A formal test plan is a document that provides and records important information about a test project, for example:
- project and quality assumptions
- project background information
- resources
- schedule & timeline
- entry and exit criteria
- test milestones
- tests to be performed
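Several of the data-preparation points above — synthetic rather than real customer details, and a unique reference number tying each test to its monitoring and feedback — can be sketched as a small record generator. The field names and reference format are illustrative assumptions.

```python
import itertools

_ref = itertools.count(1)   # monotonically increasing reference numbers

def make_test_record(business_process, simulated_condition, tester, date):
    """Build one test-data record with a unique reference number and
    synthetic customer details (never real names or contacts)."""
    ref = f"TEST-{next(_ref):04d}"
    return {
        "reference": ref,                          # ties results to feedback
        "business_process": business_process,
        "simulated_condition": simulated_condition,
        "tester": tester,
        "date": date,
        "customer_name": f"Customer-{ref}",        # synthetic stand-in
        "contact": f"{ref.lower()}@example.test",  # reserved test domain
    }

rec = make_test_record("Order Entry", "peak load", "J. Tester", "2005-01-10")
```

Because every record carries its reference number, a failing test can be repeated with identical data, which is exactly the repeatability that makes results diagnosable and trustworthy.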
Data Collection
This section of the document specifies the description of the test data needed to test recovery of each business process.
Identify Who is to Conduct the Tests
In order to ensure consistency of the testing process throughout the organization, one or more members of the Business Continuity Planning (BCP) Team should be nominated to coordinate the testing process within each business unit and across the organization. Each business process should be thoroughly tested, and the coordinator should ensure that each business unit observes the necessary rules associated with ensuring that the testing process is carried out within a realistic environment. This section of the BCP should contain the names of the BCP Team members nominated to coordinate the testing process. It should also list the duties of the appointed coordinators.
In order to ensure consistency when measuring the results, the tests should be independently monitored. This task would normally be carried out by a nominated member of the Business Recovery Team or a member of the Business Continuity Planning Team. This section of the BCP will contain the names of the persons nominated to monitor the testing process throughout the organization. It will also contain a list of the duties to be undertaken by the monitoring staff.
Prepare Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the tests. This feedback will hopefully enable weaknesses within the business recovery process to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the testing process. The forms should be completed either during the tests (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded while the event is still fresh in the person's mind. This section of the BCP should contain a template for a feedback questionnaire.
Prepare Budget for Testing Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. The 'Preparing for a Possible Emergency' phase of the BCP process will involve the identification and implementation of strategies for back-up and recovery of data files or a part of a business process. It is inevitable that these back-up and recovery processes will involve additional costs. Critical parts of the business process, such as the IT systems, may require particularly expensive back-up strategies to be implemented. Where the costs are significant they should be approved separately, with a specific detailed budget for the establishment costs and the ongoing maintenance costs. This section of the BCP will contain a list of the testing phase activities and a cost for each.
It should be noted wherever part of the costs is already incorporated within the organization's overall budgeting process.
This section of the BCP is to contain a list of each business process, with a test schedule and information on the simulated conditions being used. The testing coordinators and monitors will endeavor to ensure that the simulated environments are maintained throughout the testing process in a realistic manner.
Test Accuracy of Employee and Vendor Emergency Contact Numbers
During the testing process the accuracy of employee and vendor emergency contact information is to be re-confirmed. All contact numbers are to be validated for all involved employees. This is particularly important for management and key employees who are critical to the success of the recovery process. This activity will usually be handled by the HRM Department or Division. Where, in the event of an emergency occurring outside of normal business hours, a large number of persons are to be contacted, a hierarchical process could be used whereby one person contacts five others. This process must have safety features incorporated to ensure that if one person is not contactable for any reason, this is notified to a nominated controller. This will enable alternative contact routes to be used.
Assess Test Results
Prepare a full assessment of the test results for each business process. The following questions may be appropriate:
- Were the objectives of the business recovery process and the testing process met? If not, provide further comment.
- Were simulated conditions reasonably "authentic"? If not, provide further comment.
- Was the test data representative? If not, provide further comment.
- Did the tests proceed without any problems? If not, provide further comment.
- What were the main comments received in the feedback questionnaires?
Each test should be assessed as either fully satisfactory, adequate, or requiring further testing.
Training Staff in the Business Recovery Process
All staff should be trained in the business recovery process.
This is particularly important when the procedures are significantly different from those pertaining to normal operations. This training may be integrated with the testing phase or handled separately. The training should be carefully planned and delivered on a structured basis. It should be assessed to verify that it has achieved its objectives and is relevant for the procedures involved. Training may be delivered either using in-house resources or external resources, depending upon available skills and related costs.
Managing the Training Process
For the BCP training phase to be successful it has to be both well managed and structured. It will be necessary to identify the objective and scope for the training, what specific training is required, and who needs it, and to prepare a budget for the additional costs associated with this phase.
Develop Objectives and Scope of Training
The objectives and scope of the BCP training activities are to be clearly stated within the plan. The BCP should contain a description of the objectives and scope of the training phase. This will enable the training to be consistent and organized in a manner where the results can be measured and the training fine-tuned as appropriate.
The objectives for the training could be as follows: "To train all staff in the particular procedures to be followed during the business recovery process". The scope of the training could be along the following lines: "The training is to be carried out in a comprehensive and exhaustive manner so that staff become familiar with all aspects of the recovery process. The training will cover all aspects of the Business Recovery activities section of the BCP, including IT systems recovery". Consideration should also be given to the development of a comprehensive corporate awareness programme for communicating the procedures for the business recovery process.
Training Needs Assessment
The plan must specify which person or group of persons requires which type of training. It is necessary for all new or revised processes to be explained carefully to the staff. For example, it may be necessary to carry out some processes manually if the IT system is down for any length of time. These manual procedures must be fully understood by the persons who are required to carry them out. For larger organizations it may be practical to carry out the training in a classroom environment; for smaller organizations the training may be better handled in a workshop style. This section of the BCP will identify, for each business process, what type of training is required and which persons or groups of persons need to be trained.
Training Materials Development Schedule
Once the training needs have been identified it is necessary to specify and develop suitable training materials. This can be a time-consuming task, and unless priority is given to critical training programmes, it could delay the organization in reaching an adequate level of preparedness. This section of the BCP contains information on each of the training programmes, with details of the training materials to be developed, an estimate of resources and an estimate of the completion date.
Prepare Training Schedule
Once it has been agreed who requires training and the training materials have been prepared, a detailed training schedule should be drawn up. This section of the BCP contains the overview of the training schedule and the groups of persons receiving the training.
Communication to Staff
Once the training is arranged to be delivered to the employees, it is necessary to advise them about the training programmes they are scheduled to attend. This section of the BCP contains a draft communication to be sent to each member of staff to advise them about their training schedule. The communication should provide for feedback from the staff member where the training dates given are inconvenient. A separate communication should be sent to the managers of the business units, advising them of the proposed training schedule to be attended by their staff. Each member of staff will be given information on their role and responsibilities applicable in the event of an emergency.
Prepare Budget for Training Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. Depending upon the cross-charging system employed by the organization, the training costs will vary greatly. However, it has to be recognized that, however well justified, training incurs additional costs, and these should be approved by the appropriate authority within the organization.
Performance Testing Process & Methodology +1 Proprietary & Confidential -
This section of the BCP will contain a list of the training phase activities and a cost for each. It should be noted whenever part of the costs is already incorporated within the organization's overall budgeting process.

Assessing the Training
The individual BCP training programmes and the overall BCP training process should be assessed to ensure their effectiveness and applicability. This information will be gathered from the trainers and also from the trainees through the completion of feedback questionnaires.

Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the training programmes. This feedback will enable weaknesses within the Business Recovery Process, or the training, to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the training process. The forms should be completed either during the training (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a Feedback Questionnaire for the training phase.

Assess Feedback
The completed questionnaires from the trainees, plus the feedback from the trainers, should be assessed. Identified weaknesses should be notified to the BCP Team Leader and the process strengthened accordingly. The key issues raised by the trainees should be noted and consideration given to whether the findings are critical to the process or not. If there are a significant number of negative issues raised, then consideration should be given to possible re-training once the training materials, or the process, have been improved. This section of the BCP will contain a format for assessing the training feedback.

Keeping the Plan Up-to-date
Changes to most organizations occur all the time.
Products and services change, and so does their method of delivery. The increase in technology-based processes over the past ten years, and particularly within the last five, has significantly increased the level of dependency upon the availability of systems and information for the business to function effectively. These changes are likely to continue, and probably the only certainty is that the pace of change will continue to increase. It is necessary for the BCP to keep pace with these changes in order for it to be of use in the event of a disruptive emergency. This chapter deals with updating the plan and the managed process which should be applied to this updating activity.

Maintaining the BCP
It is necessary for the BCP updating process to be properly structured and controlled. Whenever changes are made to the BCP, they are to be fully tested, and appropriate amendments should be made to the training materials. This will involve the use of formalized change control procedures under the control of the BCP Team Leader.

Change Controls for Updating the Plan
It is recommended that formal change controls are implemented to cover any changes required to the BCP. This is necessary due to the level of complexity contained within the BCP. A Change Request Form / Change Order form is to be prepared and approved in respect of each proposed change to the BCP.
This section of the BCP will contain a Change Request Form / Change Order to be used for all such changes to the BCP.

Responsibilities for Maintenance of Each Part of the Plan
Each part of the plan will be allocated to a member of the BCP Team or a senior manager within the organization, who will be charged with responsibility for updating and maintaining the plan. The BCP Team Leader will remain in overall control of the BCP, but business unit heads will need to keep their own sections of the BCP up to date at all times. Similarly, the HRM Department will be responsible for ensuring that all emergency contact numbers for staff are kept up to date. It is important that the relevant BCP coordinator and the Business Recovery Team are kept fully informed regarding any approved changes to the plan.

Test All Changes to Plan
The BCP Team will nominate one or more persons who will be responsible for coordinating all the testing processes and for ensuring that all changes to the plan are properly tested. Whenever changes are made or proposed to the BCP, the BCP Testing Co-ordinator will be notified. The BCP Testing Co-ordinator will then be responsible for notifying all affected units and for arranging any further testing activities. This section of the BCP contains a draft communication from the BCP Co-ordinator to affected business units, containing information about the changes which require testing or re-testing.

Advise Person Responsible for BCP Training
A member of the BCP Team will be given responsibility for co-ordinating all training activities (BCP Training Co-ordinator). The BCP Team Leader will notify the BCP Training Co-ordinator of all approved changes to the BCP in order that the training materials can be updated. An assessment should be made as to whether the change necessitates any re-training activities.

Problems which can be caused by Poor Test Data
Most testers are familiar with the problems that can be caused by poor data. The following list details the most common problems familiar to the author. Most projects experience these problems at some stage; recognizing them early can allow their effects to be mitigated.

Unreliable test results – Running the same test twice produces inconsistent results. This can be a symptom of an uncontrolled environment, of unrecognized database corruption, or of a failure to recognize all the data that is influential on the system.

Degradation of test data over time – Program faults can introduce inconsistency or corruption into a database. If not spotted at the time of generation, they can cause hard-to-diagnose failures that may be apparently unrelated to the original fault. Restoring the data to a clean set gets rid of the symptom, but the original fault is undiagnosed and can carry on into live operation and perhaps future releases. Furthermore, as the data is restored, evidence of the fault is lost.

Increased test maintenance cost – If each test has its own data, the cost of test maintenance is correspondingly increased.
If that data is itself hard to understand or manipulate, the cost increases further.

Reduced flexibility in test execution – If datasets are large or hard to set up, some tests may be excluded from a test run. If the datasets are poorly constructed, it may not be time-effective to construct further data to support investigatory tests.

Obscure results and bug reports – Without clearly comprehensible data, testers stand a greater chance of missing important diagnostic features of a failure, or indeed of missing the failure entirely. Most reports make reference to the input data and the actual and expected results. Poor data can make these reports hard to understand.

Larger proportion of problems can be traced to poor data – A proportion of all failures logged will be found, after further analysis, not to be faults at all. Data can play a significant role in these failures. Poor data will cause more of these problems.

Less time spent hunting bugs – The more time spent doing unproductive testing or ineffective test maintenance, the less time is spent testing.

Confusion between developers, testers and business – Each of these groups has different data requirements. A failure to understand each other's data can lead to ongoing confusion.

Requirements problems can be hidden in inadequate data – It is important to consider the inputs and outputs of a process for requirements modeling. Inadequate data can lead to ambiguous or incomplete requirements.

Simpler to make test mistakes – Everybody makes mistakes. Confusing or over-large datasets can make data selection mistakes more common.

Unwieldy volumes of data – Small datasets can be manipulated more easily than large datasets. A few datasets are easier to manage than many datasets.

Business data not representatively tested – Test requirements, particularly in configuration data, often don't reflect the way the system will be used in practice.
While this may arguably lead to broad testing for a variety of purposes, it can be hard for the business or the end users to feel confidence in the test effort if they feel distanced from it.

Inability to spot data corruption caused by bugs – A few well-known datasets can be more easily checked than a large number of complex datasets, and may lend themselves to automated testing and sanity checks. A readily understandable dataset can allow straightforward diagnosis; a complex dataset will positively hinder diagnosis.

Poor database/environment integrity – If a large number of testers, or tests, share the same dataset, they can influence and corrupt each other's results as they change the data in the system. This can not only cause false results, but can lead to database integrity problems and data corruption. This can make portions of the application untestable for many testers simultaneously.
Data Types
In the process of testing a system, many references are made to "the data" or "data problems". Although it is perhaps simpler to discuss data in these terms, it is useful to be able to classify the data according to the way it is used. The following broad categories allow data to be handled and discussed more easily.

Environmental data
Environmental data tells the system about its technical environment. It includes communications addresses, directory trees and paths, and environment variables. The current date and time can be seen as environmental data.

Setup data
Setup data tells the system about the business rules. It might include a cross-reference between country and delivery cost or method, or methods of debt collection for different kinds of customers. Typically, setup data causes different functionality to apply to otherwise similar data. With an effective approach to setup data, a business can offer new intangible products without developing new functionality, as can be seen in the mobile phone industry, where new billing products are supported, and indeed created, by additions to the setup data.

Input data
Input data is the information input by day-to-day system functions. Accounts, products, orders, actions and documents can all be input data. For the purposes of testing, it is useful to split the categorization once more:

FIXED INPUT DATA – Fixed input data is available before the start of the test, and can be seen as part of the test conditions.
CONSUMABLE INPUT DATA – Consumable input data forms the test input.

It can also be helpful to qualify data after the system has started to use it:

Transitional data
Transitional data is data that exists only within the program, during the processing of input data. Transitional data is not seen outside the system (arguably, test handles and instrumentation make it output data), but its state can be inferred from actions that the system has taken.
Typically held in internal system variables, it is temporary and is lost at the end of processing.

Output data
Output data is all the data that a system outputs as a result of processing input data and events. It generally has a correspondence with the input data (cf. Jackson's Structured Programming methodology), and includes not only files, transmissions, reports and database updates, but can also include test measurements. A subset of the output data is generally compared with the expected results at the end of test execution. As such, it does not directly influence the quality of the tests.
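These categories can be made concrete with a short sketch. The following Python fragment shows how a test team might tag each item of test data with its category; the catalogue entries and names are invented examples for an imaginary order-processing system, not taken from the text:

```python
from enum import Enum

class DataCategory(Enum):
    """Broad categories of test data, as described above."""
    ENVIRONMENTAL = "environmental"      # addresses, paths, date/time
    SETUP = "setup"                      # business rules, e.g. country -> delivery cost
    FIXED_INPUT = "fixed_input"          # present before the test starts
    CONSUMABLE_INPUT = "consumable"      # fed in during the test
    TRANSITIONAL = "transitional"        # exists only while the system processes
    OUTPUT = "output"                    # files, reports, database updates

def classify(name: str, catalogue: dict) -> DataCategory:
    """Look up an item of test data in a hand-maintained catalogue."""
    return catalogue[name]

# Hypothetical catalogue for an order-processing system under test.
catalogue = {
    "smtp_server_address": DataCategory.ENVIRONMENTAL,
    "country_delivery_costs": DataCategory.SETUP,
    "existing_customer_accounts": DataCategory.FIXED_INPUT,
    "orders_keyed_in_during_test": DataCategory.CONSUMABLE_INPUT,
    "daily_invoice_report": DataCategory.OUTPUT,
}

assert classify("country_delivery_costs", catalogue) is DataCategory.SETUP
```

Keeping such a catalogue alongside the test plan makes it easy to discuss, for each test, which data is a precondition and which is consumed or produced.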
Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data. Partitions can be used independently; data use in one area will have no effect on the results of tests in another. Data can be safely and effectively partitioned by machine / database / application instance, although this partitioning can introduce configuration management problems in software version, machine setup, environmental data and data load/reload. A useful and basic way to start with partitions is to set up not a single environment for each test or tester, but three environments shared by many users, so allowing different kinds of data use. These three have the following characteristics:
Safe area
- Used for enquiry tests, usability tests etc.
- No test changes the data, so the area can be trusted.
- Many testers can use it simultaneously.

Change area
- Used for tests which update/change data.
- Data must be reset or reloaded after testing.
- Used by one test/tester at a time.

Scratch area
- Used for investigative update tests and those which have unusual requirements.
- Existing data cannot be trusted.
- Used at the tester's own risk.

Testing rarely has the luxury of completely separate environments for each test and each tester. Controlling data, and the access to data, in a system can be fraught. Many different stakeholders have different requirements of the data, but a common requirement is that of exclusive use. While the impact of this requirement should not be underestimated, a number of stakeholders may be able to work with the same environmental data and, to a lesser extent, setup data, and their work may not need to change the environmental or setup data. The test strategy can take advantage of this by disciplined use of text / value fields, allowing the use of 'soft' partitions. 'Soft' partitions allow the data to be split up conceptually, rather than physically. Although testers are able to interfere with each other's tests, the team can be educated to avoid each other's work. If, for instance, tester 1's tests may only use customers with Russian nationality and tester 2's tests only those with French, the two sets of work can operate independently in the same dataset. A safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses. Typically, values in free-text fields are used for soft partitioning.
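The soft-partitioning idea can be sketched in a few lines of Python. The record layout is an invented illustration; the city names follow the London/Manchester/Bristol example above:

```python
# Each record carries a free-text city field used as a 'soft' partition key.
# The area-to-city mapping follows the example in the text; records are invented.
AREAS = {"safe": "London", "change": "Manchester", "scratch": "Bristol"}

customers = [
    {"id": 1, "city": "London",     "name": "Safe-area customer"},
    {"id": 2, "city": "Manchester", "name": "Change-area customer"},
    {"id": 3, "city": "Bristol",    "name": "Scratch-area customer"},
    {"id": 4, "city": "London",     "name": "Another safe-area customer"},
]

def partition(records, area):
    """Select only the records belonging to one soft partition."""
    city = AREAS[area]
    return [r for r in records if r["city"] == city]

safe = partition(customers, "safe")
assert [r["id"] for r in safe] == [1, 4]   # enquiry tests touch only these rows
```

No physical separation is involved: the discipline lies entirely in each tester selecting only records from their own partition.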
Data partitions help because they:
- Allow controlled and reliable data, reducing data corruption / change problems
- Can reduce the need for exclusive access to environments/machines

Clarity
Permutation techniques may make data easier to grasp by making the datasets small and commonly used, but we can make our data clearer still by describing each row in its own free-text fields, allowing testers to make a simple comparison between the free text (which is generally displayed on output) and actions based on fields which tend not to be directly displayed. Use of free-text fields with some correspondence to the internals of the record allows output to be checked more easily. Testers often talk about items of data, referring to them by anthropomorphic personification; that is to say, they give them names. This allows shorthand, but also acts as jargon, excluding those who are not in the know. Setting this data, early on in testing, to have some meaningful value can be very useful, allowing testers to sense-check input and output data and to choose appropriate input data for investigative tests. Reports, data extracts and sanity checks can also make use of these; sorting or selecting on a free-text field that should have some correspondence with a functional field can help spot problems or eliminate unaffected data.
Data is often used to communicate and illustrate problems to coders and to the business. However, there is generally no mandate for outside groups to understand the format or requirements of test data. Giving some meaning to the data, so that it can be referred to directly, can help improve mutual understanding. Clarity helps because it:
- Improves communication within and outside the team
- Reduces test errors caused by using the wrong data
- Allows another way of doing sanity checks for corrupted or inconsistent data
- Helps when checking data after input
- Helps in selecting data for investigative tests
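As a sketch of the sanity-check idea above, the following Python fragment flags rows whose free-text description no longer matches their functional fields. The field names and description format are invented for illustration:

```python
# Each row's free-text description encodes its functional fields, so a
# mismatch flags corrupted or inconsistent data. Rows are invented examples.
rows = [
    {"country": "FR", "rate": 5, "description": "FR customer, rate 5"},
    {"country": "RU", "rate": 9, "description": "RU customer, rate 9"},
    {"country": "FR", "rate": 7, "description": "FR customer, rate 5"},  # corrupt
]

def sanity_check(rows):
    """Return rows whose free text no longer matches their functional fields."""
    return [r for r in rows
            if r["description"] != f"{r['country']} customer, rate {r['rate']}"]

bad = sanity_check(rows)
assert len(bad) == 1 and bad[0]["rate"] == 7   # the corrupted row is caught
```

Because the description is generally visible on output, a tester reading a report can spot the inconsistency without querying the hidden functional fields at all.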
environmental or configuration scripts. Large volumes of setup data can often be generated from existing datasets and loaded using a data load tool, while small volumes of setup data often have an associated system maintenance function and can be input using the system. Fixed input data may be generated or migrated and is loaded using any and all of the methods above, while consumable input data is typically listed in test scripts or generated as an input to automation tools. When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first. Each is appropriate in different circumstances, and due consideration should be given to the consequences.
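The three load modes can be illustrated with a minimal Python sketch. The dictionaries stand in for keyed datasets; the mode names follow the text, but the append-keeps-existing-on-clash behaviour is an illustrative assumption:

```python
def load(existing: dict, incoming: dict, mode: str) -> dict:
    """Apply a data load in one of the three modes described above."""
    if mode == "append":          # keep existing rows; add new keys only
        merged = dict(incoming)
        merged.update(existing)
        return merged
    if mode == "overwrite":       # incoming rows replace clashing keys
        merged = dict(existing)
        merged.update(incoming)
        return merged
    if mode == "delete_first":    # wipe the target, then load
        return dict(incoming)
    raise ValueError(f"unknown load mode: {mode}")

existing = {"A": 1, "B": 2}
incoming = {"B": 99, "C": 3}
assert load(existing, incoming, "append") == {"A": 1, "B": 2, "C": 3}
assert load(existing, incoming, "overwrite") == {"A": 1, "B": 99, "C": 3}
assert load(existing, incoming, "delete_first") == {"B": 99, "C": 3}
```

The three assertions show exactly how the same incoming data yields different results in each mode, which is why the choice deserves the "due consideration" the text calls for.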
12.6 Conclusion
Data can be influential on the quality of testing. Well-planned data can allow flexibility and help reduce the cost of test maintenance. Common data problems can be avoided or reduced with preparation and automation. Effective testing of setup data is a necessary part of system testing, and good data can be used as a tool to enable and improve communication throughout the project. The following points summarize the actions that can influence the quality of the data and the effectiveness of its usage:
- Plan the data for maintenance and flexibility
- Know your data, and make its structure and content transparent
- Use the data to improve understanding throughout testing and the business
- Test setup data as you would test functionality
Users/Customers served – The organization, individuals, or class of users/customers serviced by this activity.
Deficiencies noted – The status of the results of executing this activity and any appropriate interpretation of those facts.
The criterion is the user's statement of what is desired. It can be stated in either negative or positive terms. For example, it could indicate the need to reduce complaints or delays, as well as a desired processing turnaround time. A work paper is used to describe the problem and to document the statement of condition and the statement of criteria. For example, the following work paper provides the information for Test Log Documentation:
Field Requirements (Field – Instructions for Entering Data):

Name of Software Tested: Put the name of the software or subsystem tested.
Problem Description: Write a brief narrative description of the variance uncovered from expectations.
Statement of Condition: Put the results of the actual processing that occurred here.
Statement of Criteria: Put what testers believe was the expected result of processing.
Effect of Deviation: If this can be estimated, testers should indicate what they believe the impact or effect of the problem will be on computer processing.
Cause of Problem: The testers should indicate what they believe is the cause of the problem, if known. If the testers are unable to do this, the work paper will be given to the development team, who should indicate the cause of the problem.
Location of the Problem: The testers should document where the problem occurred as closely as possible.
Recommended Action: The testers should indicate any recommended action they believe would be helpful to the project team. If the recommendation is not approved, the alternate action should be listed, or the reason for not following the recommended action should be documented.

The blank work paper then lists the fields to be completed: Name of the S/W tested, Problem Description, Statement of Condition, Statement of Criteria, Effect of Deviation, Cause of Problem, Location of the Problem, Recommended Action.
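The work paper fields above lend themselves to a simple record structure. The following Python sketch shows one way a team might capture a test log entry; the class and the example values are illustrative, not prescribed by the text:

```python
from dataclasses import dataclass

@dataclass
class TestLogEntry:
    """One work paper, mirroring the fields listed above."""
    software_tested: str
    problem_description: str
    statement_of_condition: str        # what actually happened
    statement_of_criteria: str         # what testers expected
    effect_of_deviation: str = "unknown"
    cause_of_problem: str = "unknown"  # may be filled in later by development
    location_of_problem: str = "unknown"
    recommended_action: str = ""

# Hypothetical entry for an invented billing subsystem.
entry = TestLogEntry(
    software_tested="Billing subsystem",
    problem_description="Invoice total off by one cent",
    statement_of_condition="Total printed as 10.01",
    statement_of_criteria="Total expected as 10.00",
)
assert entry.cause_of_problem == "unknown"   # to be completed by development
```

The defaults reflect the workflow described in the text: a tester can file the paper with the condition and criteria alone, leaving cause and location for later analysis.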
Test Results Data
This data will include:
Test factors – The factors incorporated in the plan, the validation of which becomes the test objective.
Business objective – The validation that specific business objectives have been met.
Interface objectives – Validation that data/objects can be correctly passed among software components.
Functions/Sub-functions – Identifiable software components normally associated with the requirements of the software.
Units – The smallest identifiable software components.
Platform – The hardware and software environment in which the software system will operate.
Test Transactions, Test Suites, and Test Events
These are the test products produced by the test team to perform testing.
Test transactions/events – The types of tests that will be conducted during test execution, based on the software requirements.
Inspections – A verification of process deliverables against deliverable specifications.
Reviews – Verification that the process deliverables/phases are meeting the user's true needs.
Defect
This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:
- Date the defect was uncovered
- Name of the defect
- Location of the defect
- Severity of the defect
- Type of defect
- How the defect was uncovered (test data / test script)
The test logs should add to this information the origin of the defect, when it was corrected, and when it was entered for retest.
Storing
It is recommended that a database be established in which to store the results collected during testing. It is also suggested that the database be made available online through client/server systems, so that those with a vested interest in the status of the project can readily access status updates. As described, the most common test report is a simple spreadsheet, which indicates the project component for which the status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time.
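A minimal sketch of such a results database, using SQLite in place of a full client/server system, might look like this; the table layout and sample rows are invented for illustration:

```python
import sqlite3

# One row per component/test pair, mirroring the spreadsheet layout described above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_status (
    component TEXT, test_name TEXT, result TEXT)""")

rows = [
    ("Login screen",  "Field validation",   "PASS"),
    ("Login screen",  "Boundary check",     "FAIL"),
    ("Report module", "Monthly summary",    "PASS"),
]
conn.executemany("INSERT INTO test_status VALUES (?, ?, ?)", rows)

# A status query any stakeholder could run against the shared database.
failed = conn.execute(
    "SELECT component, test_name FROM test_status WHERE result = 'FAIL'"
).fetchall()
assert failed == [("Login screen", "Boundary check")]
```

In a real project the database would live on a shared server rather than in memory, but the principle is the same: one queryable source of truth for project status.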
13.2.1
Reporting Tools – Use of word processing, database, defect tracking, and graphic tools to prepare test reports. Among database reporting tools, DataVision is a database reporting tool similar to Crystal Reports. Reports can be viewed and printed from the application, or output as HTML, LaTeX2e, XML, DocBook, or tab- or comma-separated text files. From the LaTeX2e and DocBook output files you can in turn produce PDF, text, HTML, PostScript, and more. Some query tools available for Linux-based databases include: QMySQL, dbMetrix, PgAccess and Cognos Powerhouse (not yet available for Linux; Cognos is looking into what interest people have in the product to assess what their strategy should be with respect to the Linux market).

GRG – GNU Report Generator
The GRG program reads record and field information from a dBase IV file, a delimited ASCII text file or an SQL query to an RDBMS, and produces a report listing. The program was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff, PostScript, HTML or any other kind of ASCII-based output format can be produced just as easily.

Word Processing – One way of increasing the utility of computers and word processors for the teaching of writing may be to use software that will guide the processes of generating, organizing, composing and revising text. This allows each person to use the normal functions of the computer keyboard that are common to all word processors, email editors, order entry systems, and database management products. From the Report Manager, however, you can quickly scan through any number of these reports and see how each person's history compares. A one-page summary report may be printed with either the Report Manager
program or from the individual keyboard or keypad software at any time. Individual reports include all of the following information: Status Report; Word Processing Tests or Keypad Tests; Basic Skills Tests or Data Entry Tests; Progress Graph; Game Scores; Test Report for each test.

TestDirector
- Facilitates a consistent and repeatable testing process. A central repository for all testing assets encourages the adoption of a more consistent testing process, which can be repeated throughout the application life cycle.
- Provides analysis and decision support. Graphs and reports help analyze application readiness at any point in the testing process. Requirements coverage, run schedules, test execution progress and defect statistics can be used for production planning.
- Provides anytime, anywhere access to test assets. Using TestDirector's web interface, testers, developers, business analysts and the client can participate in and contribute to the testing process.
- Traceability throughout the testing process. Test cases can be mapped to requirements, providing adequate visibility of the test coverage of requirements. TestDirector links requirements to test cases, and test cases to defects.
- Manages both manual and automated testing. TestDirector can manage both manual and automated tests (WinRunner). Scheduling of automated tests can be done effectively using TestDirector.
Test Report Standards – Defining the components that should be included in a test report.
Statistical Analysis – The ability to draw statistically valid conclusions from quantitative test results.

Testing Data used for Metrics
Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:
- Total number of tests
- Number of tests executed to date
- Number of tests executed successfully to date

Data concerning software defects includes:
- Total number of defects corrected in each activity
- Total number of defects entered in each activity
- Average duration between defect detection and defect correction
- Average effort to correct a defect
- Total number of defects remaining at delivery

Software performance data is usually generated during system testing, once the software has been integrated and functional testing is complete:
- Average CPU utilization
- Average memory utilization
- Measured I/O transaction rate
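Several of these measurements can be computed directly from defect records. The following Python sketch, using invented dates and an invented record layout, derives the number of defects remaining and the average duration between detection and correction:

```python
from datetime import date

# Hypothetical defect records: when each was detected and when corrected.
defects = [
    {"detected": date(2004, 1, 5), "corrected": date(2004, 1, 8)},
    {"detected": date(2004, 1, 6), "corrected": date(2004, 1, 7)},
    {"detected": date(2004, 1, 9), "corrected": None},  # still open
]

corrected = [d for d in defects if d["corrected"] is not None]
remaining = len(defects) - len(corrected)
avg_days = sum((d["corrected"] - d["detected"]).days
               for d in corrected) / len(corrected)

assert remaining == 1      # defects remaining (here, at the reporting date)
assert avg_days == 2.0     # (3 days + 1 day) / 2 corrected defects
```

The same record set, extended with an effort field, would yield the average effort to correct a defect in exactly the same way.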
Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes the following: Individual Project Test Report, Integration Test Report, System Test Report, Acceptance Test Report. These test reports are designed to document the results of testing as defined in the test plan. The test report can be a combination of electronic data and hard copy. For example, if the function matrix is maintained electronically, there is no reason to print it; the paper report will summarize the data, draw appropriate conclusions and present recommendations.

Purpose of a Test Report
The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions. The second long-term purpose is to use the data to analyze the rework process for making changes to prevent defects from occurring in the future. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished, in case of, for example, a Y2K lawsuit.

Individual Project Test Report
These reports focus on the individual projects (software systems). When different testers test individual projects, each should prepare a report on their results.

Integration Test Report
Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them.
The format is the same as that of the Individual Project test report, except that the conditions tested are interfaces.
1. Scope of Test – This section indicates which functions were and were not tested.
2. Test Results – This section indicates the results of testing, including any variance between what is and what should be.
3. What works / What does not work – This section defines the functions and the interfaces that work and do not work.
4. Recommendations – This section recommends actions that should be taken to
fix functions/interfaces that do not work, and make additional improvements.

System Test Reports
A system test plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when tests should occur. The system test report should present the results of executing the test plan. If these details are maintained electronically, they need only be referenced, not included in the report.

Acceptance Test Report
There are two primary objectives of the acceptance test report. The first is to ensure that the system as implemented meets the real operating needs of the user/customer. If the defined requirements are those true needs, testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people's skills and attitudes, time pressures, changing business conditions, and so forth. The Acceptance Test Report should address these criteria for user acceptance respectively.
13.2.2
Conclusion
The test logs obtained from executing the tests, and finally the test reports, should be designed to accomplish the following objectives: to provide information to the customer on whether the system should be placed into production and, if so, the potential consequences and appropriate actions to minimize those consequences. Of the long-term objectives, one serves the project and the other the information technology function. The project can use the test report to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions. The data can also be used to analyze the development process to make changes to prevent defects from occurring in the future. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects in future.
2. Test Details – This section would contain the test approach, types of testing conducted, test environment and tools used.
Test Approach – This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between onsite and offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily / weekly deliverables were delivered to the client, etc.
Types of testing conducted – This section would mention any specific types of testing performed (i.e. functional, compatibility, performance, usability, etc.) along with related specifications.
Test Environment – This would contain information on the hardware and software requirements for the project (i.e. server configuration, client machine configuration, specific software installations required, etc.).
Tools used – This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools, or any other tools which made the testing work easier.
3. Metrics – This section would include details on the total number of test cases executed in the course of the project, the number of defects found, etc. Calculations like defects found per test case, or the number of test cases executed per day per person, would also be entered in this section. This can be used in calculating the efficiency of the testing effort.
4. Test Results – This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. In case many defects have been logged for the project, graphs can be generated accordingly and depicted in this section. The graphs can be for defects per build, defects based on severity, defects based on status (i.e. how many were fixed and how many rejected), etc.
5. Test Deliverables
This section would include links to the various documents prepared in the course of the testing project (i.e. Test Plan, Test Procedures, Test Logs, Release Report etc.).
Performance Testing Process & Methodology – Proprietary & Confidential
6. Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team, so that they can be taken care of in the next release of the application.
15 Defect Management
15.1 Defect
A mismatch between the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do.
15.2.1
Document – Once a defect is set to any of the above statuses apart from Open, and the testing team does not agree with the development team, it is set to Document status. Once the development team has started working on the defect, the status is set to WIP (Work in Progress); or, if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting. After the development team has fixed the defect, the status is set to Fixed, which means the defect is ready to re-test. On re-testing the defect, if the defect still exists, the status is set to Reopened, which will follow the same cycle as an open defect. If the fixed defect satisfies the requirements/passes the test case, it is set to Closed.
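The status transitions described above can be sketched as a small state machine. This is an illustrative model only – the transition table is an assumption based on this text, not the behaviour of any particular defect-tracking tool:

```python
# Assumed transition table, derived from the status descriptions above.
ALLOWED_TRANSITIONS = {
    "Open":        {"WIP", "Dev Waiting", "Document"},
    "Document":    {"WIP", "Dev Waiting"},
    "Dev Waiting": {"WIP"},
    "WIP":         {"Fixed"},
    "Fixed":       {"Reopened", "Closed"},               # re-test decides the outcome
    "Reopened":    {"WIP", "Dev Waiting", "Document"},   # same cycle as an open defect
    "Closed":      set(),
}

def move(status: str, new_status: str) -> str:
    """Return the new status, or raise if the transition is not allowed."""
    if new_status not in ALLOWED_TRANSITIONS[status]:
        raise ValueError(f"cannot move defect from {status} to {new_status}")
    return new_status

# One full life cycle: opened, fixed, reopened on re-test, fixed again, closed.
status = "Open"
for step in ("WIP", "Fixed", "Reopened", "WIP", "Fixed", "Closed"):
    status = move(status, step)
print(status)  # Closed
```

Encoding the cycle as data rather than scattered `if` checks makes it easy for a team to adjust the workflow without touching the checking logic.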
Medium
Low
Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem. In most cases, the more correct information given, the better. The report should explain exactly how to reproduce the problem and exactly what the problem is. The basic items in a report are as follows:
Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they've found a bug, it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.
Product: If you are developing more than one product, identify the product in question.
Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset: one from before the process and one from after. If the dataset from before the process is not included, developers will be forced to try to find the bug from forensic evidence alone; with the data, they can trace what is happening.
Steps: List the steps taken to recreate the bug. Include all proper menu names; don't abbreviate and don't assume anything. After you've finished writing down the steps, follow them – make sure you've included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see if there are any steps that can be removed. When you report the steps, they should be the clearest possible steps to recreating the bug.
Description: Explain what is wrong. Try to weed out any extraneous information, but detail what is wrong. Include a list of what was expected. Remember to report one problem at a time; don't combine bugs in one report.
Supporting documentation: If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted. Include what you expected. If you have a report to compare against, include it and its source information (if it is a printout from a previous version, include the version number and the dataset used). This information should be stored in a centralized location so that developers and testers have access to it. The developers need it to reproduce the bug, identify it and fix it. Testers will need it for later regression testing and verification.
15.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem, but to fix it as well. It is not enough to say that something is wrong. The report must also say what the system should be doing. The report should be written in clear, concise steps, so that someone who has never seen the system can follow the steps and reproduce the problem. It should include information about the product, including the version number and what data was used. The more organized the information provided, the better the report will be.
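As a rough illustration of the summary above, a bug report can be modelled as a structured record. The field names follow the items listed in this section, while the class itself, its sample values and its helper method are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    # Fields mirror the report items described in the text above.
    version: str                                   # which build exhibits the bug
    product: str                                   # which product, if there are several
    description: str                               # what is wrong AND what was expected
    steps: list = field(default_factory=list)      # exact steps, no abbreviations
    datasets: list = field(default_factory=list)   # before/after data exhibiting the bug
    attachments: list = field(default_factory=list)

    def is_complete(self) -> bool:
        """A report is only workable if it names the version and lists the steps."""
        return bool(self.version and self.steps and self.description)

report = BugReport(
    version="2.4.1",
    product="Payroll",
    description="Net salary is wrong: expected 1,850.00, got 1,580.00.",
    steps=["Open Payroll > Monthly Run", "Select employee 1042", "Click Calculate"],
)
print(report.is_complete())  # True
```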
16 Automation
What is Automation – Automated testing is automating the manual testing process currently in use
individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application for which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine, with the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why manual methods for load/stress testing are an expensive and logistical nightmare.
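The scenario above – one machine standing in for many concurrent users – can be sketched with threads. The transaction body here is a placeholder; a real load-testing tool would drive the actual application protocol and record much richer timings:

```python
import threading
import time

def user_transaction(user_id: int, results: list) -> None:
    """Simulate one virtual user's transaction and record its elapsed time."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for the real request to the application
    results.append((user_id, time.perf_counter() - start))

results: list = []
# The 50 concurrent users from the example, driven from a single machine.
threads = [threading.Thread(target=user_transaction, args=(i, results))
           for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 50
```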
regulations, as well as being required to document their quality assurance efforts for all parts of their systems.
disruptions in critical operations. Mission-critical processes are prime candidates for automated testing. Examples include financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing – If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.
Applications with a Long Life Span – If an application is planned to be in production for a long period of time, the greater the benefits are from automation.
What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often poses enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.
Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet- or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.
Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve.
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.
The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance. It should also provide test results in an easy-to-understand reporting format.
Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, those evaluating test automation should consider these fundamental planning steps. The time invested in detailed planning significantly improves the benefits resulting from test automation.
Begin the automated testing process by defining exactly what tasks your application software should accomplish in terms of the actual business activities of the end-user. The definition of these tasks, or business requirements, defines the high-level, functional requirements of the software system in question. These business requirements should be defined in such a way as to make it abundantly clear whether the software system correctly (or incorrectly) performs the necessary business functions. For example, a business requirement for a payroll application might be to calculate a salary, or to print a salary check.
A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case will include the following key components:
Test Case Name(s) – Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
Test Case Prerequisites – Identify the set-up or testing criteria that must be established before a test can be successfully executed.
Test Case Execution Order – Specify any relationships, run orders and dependencies that might exist between test cases.
Test Procedures – Identify the application steps necessary to complete the test case.
Input Values – This section of the test case identifies the values to be supplied to the application as input, including, if necessary, the action to be completed.
Expected Results – Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources – Take note of the sources for extracting test data if it is not included in the test case.
Inputs to the Test Design and Construction Process: Test Case Documentation Standards; Test Case Naming Standards; Approved Test Plan; Business Process Documentation; Business Process Flow; Test Data Sources.
Outputs from the Test Design and Construction Process: Revised Test Plan; Test Procedures for each Test Case; Test Case(s) for each application function described in the test plan; Procedures for test set-up, test execution and restoration.
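The key components listed above map naturally onto a structured record. This is a sketch, not any tool's schema; the field names follow the text and the sample values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Components taken from the list above.
    name: str                                        # unique, so results can be traced
    prerequisites: list = field(default_factory=list)
    execution_order: int = 0                         # run order / dependencies
    procedure: list = field(default_factory=list)    # application steps to complete the test
    input_values: dict = field(default_factory=dict)
    expected_results: dict = field(default_factory=dict)
    data_sources: list = field(default_factory=list) # where test data comes from, if external

# Hypothetical payroll example, echoing the business requirement mentioned earlier.
tc = TestCase(
    name="TC_PAY_001_calculate_salary",
    procedure=["Open payroll form", "Enter employee id", "Click Calculate"],
    input_values={"employee_id": "1042"},
    expected_results={"net_salary_field": "1850.00"},
)
print(tc.name)
```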
Inputs to the Test Execution Process: Approved Test Plan; Documented Test Cases; Stabilized, repeatable test execution environment; Standardized Test Logging Procedures.
Outputs from the Test Execution Process: Test Execution Log(s); Restored test environment.
The test execution phase of your software test process will control how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. The problems experienced in test execution are usually attributable to not properly performing steps from earlier in the process. Additionally, there may be several test execution cycles necessary to complete all the types of testing required for your application. For example, one test execution cycle may be required for the functional testing of an application, and a separate cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles. The secret to a controlled test execution is comprehensive planning. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing.
Data Driven Approach
A data-driven approach is a test that plays back the same user actions but with varying input values. This allows one script to test multiple sets of positive data. It is applicable when large volumes of different data sets need to be fed to the application and tested for correctness. The benefit of this approach is that it consumes less time and is more accurate than testing manually. Testing can be done with both positive and negative data simultaneously.
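A minimal sketch of the data-driven idea: one scripted action replayed over several input rows. The discount calculation is an invented stand-in for the application under test; in practice the rows would come from an external CSV file or spreadsheet:

```python
import csv
import io

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the application function being exercised."""
    return round(price * (1 - percent / 100), 2)

# In a real project this would be open("testdata.csv"); StringIO keeps the
# sketch self-contained.
test_data = io.StringIO(
    "price,percent,expected\n"
    "100,10,90.0\n"
    "250,20,200.0\n"
    "80,0,80.0\n"
)

failures = []
for row in csv.DictReader(test_data):
    actual = apply_discount(float(row["price"]), float(row["percent"]))
    if actual != float(row["expected"]):
        failures.append(row)
print(len(failures))  # 0
```

The same loop covers negative data too: add rows whose expected result is an error, and assert that the application rejects them.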
(Process flow: test-ready application → result analysis → defect management)
16.2 Record and Playback
This category details how easy it is to record and play back a test. Does the tool support low-level recording (mouse drags, exact screen location)? Is there object recognition when recording and playing back, or does it appear to record fine but then fail on playback (without environment changes, unique IDs changing, etc.)? How easy is it to read the recorded script? When automating, this is the first thing that most test professionals will do: record a simple script, look at the code, and then play it back. This is very similar to recording a macro in, say, Microsoft Access. Eventually record and playback becomes less and less a part of the automation process, as it is usually more robust to use the built-in functions to directly test objects, databases, etc. However this should be done as a minimum in the evaluation process, because if the tool of choice cannot recognize the application's objects then the automation process will be a very tedious experience.
16.3 Web Testing
Web-based functionality on most applications is now a part of everyday life. As such, the test tool should provide good web-based test functionality in addition to its client/server functions. In judging the rating for this category I looked at the tool's native support for HTML tables, frames, the DOM, various browser platforms, web site maps and links. Web testing can be riddled with problems if various considerations are not taken into account. Here are a few examples: Are there functions to tell me when the page has finished loading? Can I tell the test tool to wait until an image appears? Can I test whether links are valid or not? Can I test web-based objects' functions, such as whether an object is enabled or contains data? Are there facilities that will allow me to programmatically look for objects of a certain type on a web page, or locate a specific object? Can I extract data from the web page itself, e.g. the title, or a hidden form element? With client/server testing the target customer is usually well defined: you know what network operating system you will be using, the applications, and so on. On the web it is far different. A person may be connecting from the USA or Africa; they may be disabled; they may use various browsers; and the screen resolution on their computers will differ. They will speak different languages, have fast connections and slow connections, and connect using Mac, Linux or Windows. So the cost of setting up a test environment is usually greater than for a client/server test, where the environment is fairly well defined.
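One of the web checks listed above – programmatically locating objects such as links on a page – can be sketched with a standard HTML parser. A real tool would go further and fetch each URL to confirm that the link is valid; the page content here is invented:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append(dict(attrs).get("href", ""))

page = '<html><body><a href="/home">Home</a><a href="">broken</a></body></html>'
collector = LinkCollector()
collector.feed(page)

# Flag links with no target at all - candidates for a "link is not valid" defect.
suspect = [href for href in collector.links if not href]
print(len(collector.links), len(suspect))  # 2 1
```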
all the tool's facilities for creating and manipulating data. Does the tool allow you to specify the type of data you want? Can you automatically generate data? Can you interface with files, spreadsheets, etc. to create and extract data? Can you randomize the access to that data? Is the data access truly random? This functionality is normally more important than database tests, as the databases will usually have their own interface for running queries. However, applications (except via manual input) do not usually provide facilities for bulk data input. The added benefit (as I have found) is that this functionality can be used for production purposes, e.g. for the aforementioned bulk data input sometimes carried out in data migration or application upgrades. These functions are also very important as you move from the record/playback phase, to data-driven tests, and then to framework testing. Data-driven tests are tests that replace hard-coded names, addresses, numbers, etc. with variables supplied from an external source, usually a CSV (comma-separated values) file, spreadsheet or database. Frameworks are usually the ultimate goal in deploying automation test tools. Frameworks provide an interface to all the applications under test by exposing a suitable list of functions, databases, etc. This allows an inexperienced tester/user to run tests by just running the test framework and providing it with known commands and variables. A test framework has parallels to software frameworks, where you develop an encapsulation layer of software (the framework) around the applications, databases, etc. and expose functions, classes, methods and so on that are used to call the underlying applications, return data, and input data. However, doing this requires a lot of time, skilled resources and money.
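The framework idea described above can be sketched as a keyword table: the framework exposes named actions, and a tester drives tests by supplying known commands and variables, without programming. The actions here are invented placeholders for real application calls:

```python
# Hypothetical framework actions; in a real framework these would drive the
# application under test rather than return strings.
def login(user):
    return f"logged in as {user}"

def open_order(number):
    return f"order {number} open"

def logout():
    return "logged out"

# The encapsulation layer: the only commands a tester needs to know.
KEYWORDS = {"login": login, "open_order": open_order, "logout": logout}

def run(test_table):
    """Execute a list of (keyword, args) rows against the framework."""
    return [KEYWORDS[keyword](*args) for keyword, args in test_table]

# A tester's test is just data: known commands plus variables.
log = run([
    ("login", ("alice",)),
    ("open_order", ("1042",)),
    ("logout", ()),
])
print(log[-1])  # logged out
```

Because tests are plain tables, they can live in a spreadsheet or CSV file, which is exactly where the data-driven approach above leaves off.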
If you have a custom object that behaves like one of these, are you able to map it (i.e. tell the test tool that the custom control behaves like the standard control)? Does it support all the standard control's methods? Can you add the custom control to its own class of control?
16.11 Extensible Language
Here is a question that you will hear time and time again in automation forums: "How do I get [insert test tool name here] to do such and such?" There will be one of four answers: I don't know; it can't do it; it can do it using function x, y or z; it can't in the standard language, but you can do it like this.
What we are concerned with in this section is the last answer, i.e. if the standard test language does not support it, can I create a DLL or extend the language in some way to do it? This is usually an advanced topic and is not encountered until the trained tester has been using the tool for at least 6 – 12 months. However, when it is encountered, the tool should support language extension. If it is via DLLs, then the tester must have knowledge of a traditional development language, e.g. C, C++ or VB. For instance, to extend a tool that can use DLLs created in VB, I would need Visual Basic: open, say, an ActiveX DLL project, create a class containing various methods (similar to functions), make a DLL file, register it on the machine, then reference that DLL from the test tool, calling the methods according to their specification. This will sound a lot clearer as you go on with the tools, and this document will be updated to include advanced topics like this in extending the tools' capabilities. Some tools provide extension by allowing you to create user-defined functions, methods, classes, etc., but these are normally a mixture of the already supported data types and functions rather than an extension of the tool beyond its released functionality. Because this is an advanced topic I have not taken ease of use into account: those who have got to this level should have already exhausted the current capabilities of the tools, will want to use external functions like the win32 API, and should have a good grasp of programming.
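The DLL-extension idea above, transposed to Python for illustration: loading a shared library and calling one of its functions with declared argument and return types, just as a test tool would reference a registered DLL. The library-name lookup is an assumption about the host platform (C runtime found via `find_library`, falling back to `msvcrt` on Windows):

```python
import ctypes
import ctypes.util

# Locate the C runtime library on this platform (assumption: one is present).
libc_name = ctypes.util.find_library("c") or "msvcrt"
libc = ctypes.CDLL(libc_name)

# Declare the external function's signature before calling it, as you would
# when referencing a DLL method "according to its specification".
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```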
16.12 Environment Support
How many environments does the tool support out of the box? Does it support the latest Java release? What about Oracle, PowerBuilder, WAP, etc.? Most tools can interface to unsupported environments if the developers in that environment provide classes, DLLs, etc. that expose some of the application's details, but whether a developer will or has time to do this is another question. Ultimately this is the most important part of automation: environment support. If the tool does not support your environment/application then you are in trouble, and in most cases you will need to revert to testing the application manually (more shelfware).
16.13 Integration
How well does the tool integrate with other tools? This is becoming more and more important. Does the tool allow you to run it from various test management suites? Can you raise a bug directly from the tool and feed the information gathered from your test logs into it? Does it integrate with products like Word, Excel or requirements management tools? When managing large test projects, with an automation team greater than five and testers totalling more than ten, the management aspect and the tool's integration move further up the importance ladder. An example: a major bank wants to redesign its workflow management system to allow faster processing of customer queries. The anticipated requirements for the new workflow software number in the thousands. To test these requirements, 40,000 test cases have been identified, and 20,000 of these can be automated. How do I manage this? This is where a test management tool comes in really handy. Also, how do I manage the bugs raised as a result of automation testing? Integration becomes very important, rather than having separate systems that don't share data and may require duplication of information. The companies that score higher here are those that provide tools outside the testing arena, as they can build integration into their other products; when it has come down to the wire on some projects, we have gone with the tool that integrated with the products we already had.
16.14 Cost
In my opinion cost is the least significant factor in this matrix. Why? Because all the tools are similar in price, except Visual Test, which is at least 5 times cheaper than the rest – but as you will see from the matrix, there is a reason. Although very functional, it does not provide the range of facilities that the other tools do. Prices typically range from $2,300 – $5,000 (depending on quantity bought, packages, etc.) in the US, and around £2,300 – £5,000 in the UK, for the base tools included in this document. So, knowing the tools will all cost a similar amount, it is usually a case of which one will do the job for you rather than which is the cheapest. I believe Visual Test will prove to be a bigger hit as it expands its functional range; it was not that long ago that it did not support web-based testing.
The prices are kept this high because they can be. All the tools are roughly the same price, and the volume of sales is low relative to, say, a fully-blown programming language IDE like JBuilder or Visual C++, which are a lot more function-rich and flexible than any of the test tools. On top of the above prices you usually pay an additional maintenance fee of between 10 and 20%. There are not many applications I know of that cost this much per licence – not even some very advanced operating systems. However, it is all a matter of supply: the bigger the volume, the lower the price, as the development costs can be spread further. I do not anticipate a move in prices upwards, as this seems to be the price the market will tolerate. Visual Test also provides a free runtime licence.
16.15 Ease Of Use
This section is very subjective, but I have used testers (my guinea pigs) of various levels and got them from scratch to use each of the tools. In more cases than not they have agreed on which was the easiest to use (initially). Obviously this can change as the tester becomes more experienced and issues such as extensibility, script maintenance, integration and data-driven tests become requirements. However, this score is based on the productivity that can be gained in, say, the first three months, when those issues are not such a big concern. Ease of use includes out-of-the-box functions, debugging facilities, on-screen layout, help files and user manuals.
16.16 Support
In the UK this can be a problem, as most of the test tool vendors are based in the USA with satellite branches in the UK. From my own experience, and that of the testers I know in the UK, we have found Mercury to be the best for support, then Compuware, then Rational, and last Segue. Having said that, you can find a lot of resources for Segue on the Internet, including a forum at www.betasoft.com that can provide most of the answers rather than ringing the support line. On their websites Segue and Mercury provide much useful user- and vendor-contributed material. I have also included various other criteria, such as the availability of skilled resources, online resources, validity of responses from the helpdesk, speed of responses, and the like.
16.17 Object Tests
Now, presuming the tool of choice does work with the application you wish to test, what services does it provide for testing object properties? Can it validate several properties at once? Can it validate several objects at once? Can you set object properties to capture the application state? This should form the bulk of your verification as far as the automation process is concerned, so I have looked at the tools' facilities on client/server as well as web-based applications.
16.18 Matrix
What will follow after the matrix is a tool-by-tool comparison under the appropriate headings (as listed above), so that the user can get a feel for the tools' functionality side by side. Each category in the matrix is given a rating of 1 – 5: 1 = excellent support for this functionality; 2 = good support, but lacking, or another tool provides more effective support; 3 = basic support only; 4 = only supported by use of an API call or third-party add-in, not included in the general test tool / below average; 5 = no support.
[Rating matrix: each tool scored 1 – 5 against the categories above – Record & Playback, Web Testing, Database Tests, Data Functions, Object Mapping, Image Testing, Object Identity, Extensible Language, Test/Error Recovery, Environment Support, Object Tests, Name Map, Integration, Ease of Use, Support and Cost.]
16.19 Matrix Score
WinRunner = 24, QARun = 25, SilkTest = 24, Visual Test = 33, Robot = 24 (lower totals are better, since 1 is the top rating).
Rational Robot – Facilitates functional and performance testing by automating the record and playback of test scripts. Allows you to write, organize, and run tests, and to capture and analyze the results. Rational TestFactory – Automates testing by combining automatic test generation with source-code coverage analysis. Tests an entire application, including all GUI features and all lines of source code. During playback, Rational LoadTest can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and web servers. Rational Test categorizes test information within a repository by project. You can use the Rational Administrator to create and manage projects.
The tools to be discussed here are Rational Administrator, Rational Robot and Rational TestManager.
Open the Rational Administrator and go to File → New Project. In the window that opens, enter details such as the project name and location. Click Next. In the next window, enter a password if you want to protect the project; the password will then be required to connect to, configure or delete the project.
Click Finish. In the Configure Project window displayed, click the Create button. To manage requirements assets, connect to RequisitePro; to manage test assets, create an associated test datastore; and for defect management, connect to a ClearQuest database.
Once the Create button in the Configure Project window is chosen, the Create Test Data Store window will be displayed. Accept the default path and click the OK button.
Once the confirmation window is displayed, the test datastore has been successfully created; click OK to close the window.
Click OK in the Configure Project window, and now your first Rational project is ready to play with!
Create and edit scripts using the SQABasic, VB, and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development. Test applications developed with IDEs such as Visual Basic, Oracle Forms, PowerBuilder, HTML, and Java. Test objects even if they are not visible in the application's interface. Collect diagnostic information about an application during script playback: Robot is integrated with Rational Purify, Quantify, and PureCoverage, so you can play back scripts under a diagnostic tool and see the results in the log.
The Object-Oriented Recording technology in Robot lets you generate scripts quickly by simply running and using the application-under-test. Robot uses Object-Oriented Recording to identify objects by their internal object names, not by screen coordinates. If objects change location or their text changes, Robot still finds them on playback. The Object Testing technology in Robot lets you test any object in the application-under-test, including the object's properties and data. You can test standard Windows objects and IDE-specific objects, whether they are visible in the interface or hidden.
Once logged in, you will see the Robot window. Go to File → New → Script.
In the screen displayed, enter the name of the script, say "First Script", by which the script will be referred to from now on, and a description (not mandatory). The type of the script is GUI for functional testing and VU for performance testing.
The GUI Script window (top pane) displays GUI scripts that you are currently recording, editing, or debugging. It has two panes: the Asset pane (left) lists the names of all verification points and low-level scripts for this script, and the Script pane (right) displays the script.
The Output window (bottom pane) has two tabs: Build displays compilation results for all scripts compiled in the last operation, with line numbers enclosed in parentheses to indicate lines in the script with warnings and errors; Console displays messages that you send with the SQAConsoleWrite command, as well as certain system messages from Robot.
To display the Output window, click View → Output. How do you record and play back a script? To record a script, go to Record → Insert at Cursor, then perform the navigation in the application to be tested; once done, stop the recording with Record → Stop.
In this window we can set general options in the General tab, such as identification of lists and menus, and recording think time. Web Browser tab: specify the browser type, IE or Netscape. Robot Window tab: how Robot should be displayed during recording, and hotkey details. Object Recognition Order tab: the order in which recognition is to happen; for example, select a preference in the Object Order Preference list.
If you will be testing C++ applications, change the object order preference to C++ Recognition Order.
17.6.1 Playback Options
Go to Tools → Playback Options to set the options needed while running the script.
This will help you to handle unexpected windows during playback and error recovery, specify the time-out period, and manage the log and log data.
A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window. The verification point script command, which always begins with Result =, appears in the Script (right) pane. Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. You can easily copy verification points to other scripts if you want to reuse them.
17.7.1
The following table summarizes each Robot verification point.
Alphanumeric – Captures and compares alphabetic or numeric values.
Clipboard – Captures and compares alphanumeric data that has been copied to the Clipboard.
File Comparison – Compares the contents of two files.
File Existence – Checks for the existence of a specified file.
Menu – Captures and compares the text, accelerator keys, and state of menus. Captures up to five levels of sub-menus.
Module Existence – Checks whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.
Object Data – Captures and compares the data in objects.
Object Properties – Captures and compares the properties of objects.
Region Image – Captures and compares a region of the screen (as a bitmap).
Web Site Compare – Captures a baseline of a web site and compares it to the web site at another point in time.
Web Site Scan – Checks the content of a web site with every revision and ensures that changes have not resulted in defects.
Window Existence – Checks that the specified window is displayed before continuing with the playback.
Window Image – Captures and compares the client area of a window as a bitmap (the menu, title bar, and border are not captured).
1. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Comment button on the GUI Insert toolbar.
3. Type the comment (60 characters maximum).
4. Click OK to continue recording or editing.
Robot inserts the comment into the script (in green by default), preceded by a single quotation mark. For example:

' This is a comment in the script

To change lines of text into comments or to uncomment text:
1. Highlight the text.
2.
17.11.1 Using datapools
If you are providing one or more values to the client application during GUI recording, you might want a datapool to supply those values during playback. For example, you might be filling out a data entry form and providing values such as order number, part name, and so forth. If you plan to repeat the transaction multiple times during playback, you might want to provide a different set of values each time. A GUI script can access a datapool when it is played back in Robot. Also, when a GUI script is played back in a TestManager suite, the GUI script can access the same datapool as other scripts. There are differences in the way GUI scripts and sessions are set up for datapool access: you must add datapool commands to GUI scripts manually while editing the script in Robot, whereas Robot adds datapool commands to VU scripts automatically. There is no DATAPOOL_CONFIG statement in a GUI script; the SQADatapoolOpen command defines the access method to use for the datapool. Although there are differences in setting up datapool access in GUI scripts and sessions, you define a datapool for either type of script using TestManager in exactly the same way.
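Robot's datapool commands themselves are SQABasic, but the underlying mechanism - each playback iteration fetches the next row of values and feeds it to the recorded transaction - can be sketched in Python. Everything here (the CSV datapool, `play_back`, `fill_order_form`) is illustrative, not a Rational API:

```python
import csv, io

# An illustrative datapool: one row of values per playback iteration.
# In Robot this would live in the project and be opened with SQADatapoolOpen;
# here it is just CSV text.
DATAPOOL = """order_number,part_name,quantity
1001,gear,5
1002,sprocket,2
1003,bearing,9
"""

def play_back(transaction, iterations):
    """Run `transaction` once per iteration, supplying that iteration's row."""
    rows = list(csv.DictReader(io.StringIO(DATAPOOL)))
    results = []
    for i in range(iterations):
        row = rows[i % len(rows)]          # wrap around if iterations > rows
        results.append(transaction(row))
    return results

def fill_order_form(values):
    # Stand-in for the recorded GUI actions that type into the form fields.
    return f"submitted order {values['order_number']} for {values['part_name']}"

log = play_back(fill_order_form, 3)
```

In Robot, the SQADatapoolOpen command mentioned above plays the role of loading the datapool, and the loop body stands in for the recorded GUI actions that consume each row.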
Debug menu

The Debug menu has the following commands:
Go
Go Until Cursor
Animate
Pause
Stop
Set or Clear Breakpoints
Clear All Breakpoints
Step Over
Step Into
Step Out

Note: The Debug menu commands are for use with GUI scripts only.
Compilation errors
After the script is created, compiled, and the errors fixed, it can be executed. The results then need to be analyzed in the Test Manager.

In the Results tab of the Test Manager, you can see the stored results. From Test Manager you can see the start time of the script and
Protocols

Oracle
SQL Server
HTTP
Sybase
Tuxedo
SAP
PeopleSoft
www.rational.com
21 Performance Testing
Performance testing is a measure of the performance characteristics of an application. The main objective of performance testing is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes in real time against a production-size database. The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.

Typically, to debug applications, developers would execute their applications using different execution streams (i.e., completely exercising the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features.
Quantitative, relevant, measurable, realistic, achievable requirements

As a foundation to all tests, performance requirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated requirements. The following attributes make for a meaningful performance comparison:

Quantitative - expressed in quantifiable terms, so that when response times are measured, a sensible comparison can be derived.
Relevant - a response time must be relevant to a business process.
Measurable - a response time should be defined such that it can be measured using a tool or stopwatch, and at reasonable cost.
Realistic - response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports.
Achievable - response times should take some account of the cost of achieving them.
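The "quantitative" and "measurable" attributes mean a requirement can be checked mechanically once response times are collected. A minimal sketch, assuming an illustrative requirement of "95% of responses within 3 seconds" (the threshold and percentile are examples, not figures from this document):

```python
def percentile(samples, pct):
    """Nearest-rank percentile: smallest value >= pct% of the samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_requirement(response_times_s, pct=95, limit_s=3.0):
    """Quantitative: 95% of responses within the limit.
    Measurable: works with times taken from any tool or stopwatch."""
    return percentile(response_times_s, pct) <= limit_s

# Illustrative measured response times, in seconds.
times = [1.2, 0.8, 2.9, 1.1, 4.5, 0.9, 1.3, 1.0, 2.2, 1.7]
ok = meets_requirement(times)   # one slow outlier pushes the 95th percentile over
```

Expressing the requirement this way makes the comparison sensible and repeatable: the same script can be rerun after every tuning cycle.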
Stable system

A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers may not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating system crashes.

Realistic test environment

The test environment should ideally be the production environment or a close simulation, and it should be dedicated to the performance test team for the duration of the test. Often this is not possible. However, for the results of the test to be realistic, the test environment should be comparable to the actual production environment. Even with an environment that is somewhat different from the production environment, it should still be possible to interpret the results obtained, using a model of the system, to predict with some confidence the behavior of the target environment. A test environment that bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but it is useless for a performance test.
Response time requirements

When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but it is unreasonable. Some functions are critical and require short response times, while others are less critical and their response time requirements can be less stringent.

Load profiles

The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations when the users' organization has different levels of activity, or involve a varying mix of activities which must be supported by the system.

Database volumes

Data volumes, defining the numbers of table rows which should be present in the database
after a specified period of live running, complete the load profile. Typically, data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.
[Process flow diagram: Test Plan -> Test Design -> Scripting (deliverable: Test Scripts) -> Test Execution -> Test Analysis (deliverable: Preliminary Report) -> "Is performance goal reached?" If NO, the activity cycle repeats; if YES, Preparation of Reports (deliverable: Final Report). Each activity produces an internal deliverable.]
22.2.1 Deliverables

Deliverable    Sample
Test Plan      TestPlan.doc
Design

Based on the test strategy, detailed test scenarios would be prepared. During the test design period the following activities will be carried out:
Scenario design
Detailed test execution plan
Dedicated test environment setup
Script recording/programming
Script customization (delays, checkpoints, synchronization points)
Data generation
Parameterization/data pooling
Work items

Hardware and software requirements, including the server components, the load generators used, etc.
Setting up the monitoring servers
Setting up the data
Preparing all the necessary folders for saving the results once the test is over
Pre-test and post-test procedures
22.3.1 Deliverables

Deliverable    Sample
Test Design    TestDesign.doc
22.4.1 Deliverables

Deliverable     Sample
Test Scripts    Sample Script.doc
22.5.1 Deliverables

Deliverable       Sample
Test Execution    Time Sheet.doc, Run Logs.doc
22.6.1 Deliverables

Deliverable      Sample
Test Analysis    Preliminary Report.doc
22.7.1 Deliverables

Deliverable     Sample
Final Report    Final Report.doc
Common mistakes in performance evaluation:
- No goals; no general-purpose model exists. Goals determine the techniques, metrics and workload, and setting them is not trivial.
- Biased goals ("to show that OUR system is better than THEIRS"); the analysts should be a jury, not an advocate.
- Unsystematic approach
- Analysis without understanding the problem
- Incorrect performance metrics
- Unrepresentative workload
- Wrong evaluation technique
- Overlooking important parameters
- Ignoring significant factors
- Inappropriate experimental design
- Inappropriate level of detail
- No analysis
- Erroneous analysis
- No sensitivity analysis
- Ignoring errors in input
- Improper treatment of outliers
- Assuming no change in the future
- Ignoring variability
- Too complex an analysis
- Improper presentation of results
- Ignoring social aspects
- Omitting assumptions and limitations
Establish incremental performance goals throughout the product development cycle. All the members of the team should agree that a performance issue is not just a bug; it is a software architectural problem.

Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; the tool can also assess Internet Server Application Programming Interface and application server provider (ISAPI/ASP) applications.

Creating an automated test suite to measure performance is time-consuming and labor-intensive, so it is important to define concrete performance goals. Without defined performance goals or requirements, testers must guess, without a clear purpose, at how to instrument tests to best measure the various response times.

The performance tests should not be used to find functionality-type bugs. Design the performance test suite to measure response times, not to identify bugs in the product. Design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from completing successfully.

The performance tests should be modified consistently. Significant changes to the performance test suite skew, or make obsolete, all previous data. Therefore, keep the performance test suite fairly static throughout the product development cycle. If the design or requirements change and you must modify a test, perturb only one variable at a time for each build.

Strive to achieve the majority of the performance goals early in the product development cycle, because:
Most performance issues require architectural change.
Performance is known to degrade slightly during the stabilization phase of the development cycle.

Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet its performance goals.

Reuse automated performance tests. Automated performance tests can often be reused in many other automated test suites. For example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions.

Tests are capturing secondary metrics when the instrumented tests have nothing to do with measuring clear and established performance goals. Although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data. Ensure that you know what you are measuring, and why.
Testing for most applications will be automated. The tools used for testing would be those specified in the requirement specification. The tools used for performance testing are LoadRunner 6.5 and WebLoad 4.5.
23 Tools
23.1 LoadRunner 6.5
LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide a consistent, repeatable and measurable load to exercise your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information that you need to evaluate the performance of your client/server system.
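LoadRunner's Virtual User engine is proprietary, but the basic idea - many concurrent clients repeatedly executing and timing the same transaction - can be imitated in miniature with threads. The transaction below is a placeholder, not a real client/server call:

```python
import threading, time

def transaction():
    # Stand-in for one client/server operation; a real virtual user would
    # drive the application's protocol (SQL, HTTP, ...) here.
    time.sleep(0.01)

def virtual_user(iterations, timings):
    """One simulated user: repeat the transaction, timing each pass."""
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)

def run_load(users=10, iterations=5):
    timings = []            # list.append is atomic in CPython, so sharing is safe
    threads = [threading.Thread(target=virtual_user, args=(iterations, timings))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

timings = run_load()        # 10 users x 5 iterations = 50 measured responses
```

A real tool's console would aggregate these raw timings into the response-time reports and graphs described above.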
Tool Name / URL / Cost / OS / Description

Astra LoadTest
http://www.astratryandbuy.com
Priced per number of virtual users; Windows NT, Windows 2000, Linux, Solaris
Load test tool emphasizing ease-of-use. Supports all browsers and web servers; simulates up to 200 users per playback machine at various connection speeds; records and allows viewing of the exact bytes flowing between browser and server. Modem simulation allows each virtual user to be bandwidth limited. Can automatically handle variations in session-specific items such as cookies, usernames, passwords, and any other parameter to simulate multiple virtual users. Notes: downloadable; will emulate 25 users and will expire in 2 weeks (may be extended).

Mercury's load/stress testing tool
Includes record/playback capabilities; an integrated spreadsheet parameterizes recorded input to exercise the application with a wide variety of data. "Scenario Builder" visually combines virtual users and host machines for tests representing real user traffic. "Content Check" checks for failures under heavy load; real-time monitors and analysis. Notes: downloadable evaluation version.

Benchmark Factory
http://www.benchmarkfactory.com
Windows NT, Windows 2000
E-commerce load testing tool from Client/Server Solutions, Inc. Includes record/playback, web form processing, user sessions, scripting, cookies, SSL. Also includes predeveloped industry-standard benchmarks such as AS3AP, SetQuery, Wisconsin, WebStone, and others. Includes optimized database drivers for vendor-neutral comparisons: MS SQL Server, Oracle 7 and 8, Sybase System 11, ODBC, IBM's DB2 CLI, Informix. Notes: downloadable after submitting information (a page suggesting contacting the closest dealer appeared).

WebLoad (RadView)
http://www.radview.com
Supports recording of SSL sessions, cookies, proxies, password authentication, dynamic HTML; multiple platforms. Notes: downloadable; evaluation version does not support SSL.

Homer (Microsoft)
http://homer.rte.microsoft.com
Free; Windows NT, Windows 2000
Microsoft stress test tool created by Microsoft's Internal Tools Group (ITG) and subsequently made available for external use. Includes record/playback, script recording from browser, SSL, adjustable delay between requests. Notes: one of the advanced tools in the listing.

Rational
http://www.rational.com/products
Rational's client/server and web performance testing tool. "LoadSmart Scheduling" capabilities allow complex usage scenarios and randomized transaction sequences; handles dynamic web pages. Notes: request a CD only; not downloadable.

Forecast (Facilita)
http://www.facilita.co.uk
Unix
Load testing tool from Facilita Software for web, client-server, network, and database systems. Notes: not downloadable.

Zeus webperf
http://webperf.zeus.co.uk/intro.html
Free; Unix
Free web benchmarking/load testing tool available as source code; will compile on any UNIX platform. Notes: unsupported; broken download link.

eLoad (RSW)
Win95/98/Windows NT
Load test tool from RSW geared to testing web applications under load and testing the scalability of e-commerce applications. For use in conjunction with test scripts from their e-Tester functional test tool. Allows on-the-fly changes and has real-time reporting capabilities. Notes: downloadable; free CD request; evaluation copy.

http_load
http://www.acme.com/software/http_load
Free; Unix
Free load test application to generate web server loads. Notes: free and easy.

QALoad (Compuware)
Load/stress testing of database, web, and character-based systems; works with such middleware as SQL*Net, DBLib or CTLib, SQL Server, ODBC, Telnet, and Web. Notes: free CD request.

SilkPerformer (Segue)
Load and performance testing component of Segue's Silk web testing toolset. Notes: no download.

WebART
http://www.oclc.org/webart
Tool for load testing of up to 1000-2000 simulated users; also includes functional and regression testing capabilities, and capture/playback and a scripting language. Evaluation copy available. Notes: downloadable.

Final Exam WebLoad
Integration and predeployment testing ensures the reliability, performance, and scalability of Web applications. It generates and monitors load stress tests - which can be recorded during a Web session with any browser - and assesses Web application performance under user-defined variable system loads. Load scenarios can include unlimited numbers of virtual users on one or more load servers, as well as single users on multiple client workstations. Notes: downloadable; 15-day evaluation period.

Microsoft web load test tool
Web load test tool from Microsoft for load testing of MS IIS on NT.

Load testing tool; includes link testing capabilities; can simulate up to 1,000 clients from a single IP address; also supports multiple IP addresses with or without aliases. Free. Notes: not downloadable.

WebSizr / WebCorder (Technovations)
http://www.technovations.com/home.htm
Load testing and capture/playback tools from Technovations. The WebSizr load testing tool supports authentication, cookies, and redirects. Notes: downloadable; 30-day evaluation period.
Type of workload: in order to properly achieve the goals of the test, each test requires a certain type of workload. This methodology specification provides information on the appropriate script of pages or transactions for the user.
Methodology: a list of suggested steps to take in order to assess the system under test.
What to look for: contains information on behaviors, issues and errors to pay attention to during and after the test.

Typical report statistics include:
Page component breakdown time
Page download time
Component size analysis
Error statistics
Errors per second
Total successful/failed transactions
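Each of these statistics is a simple aggregation over the raw results log. A hypothetical sketch (the record fields are assumed for illustration, not the format of any particular tool):

```python
# Each record is one attempted transaction from a test run (illustrative data).
results = [
    {"second": 0, "ok": True,  "bytes": 14_200},
    {"second": 0, "ok": False, "bytes": 0},
    {"second": 1, "ok": True,  "bytes": 9_800},
    {"second": 1, "ok": True,  "bytes": 15_100},
    {"second": 2, "ok": False, "bytes": 0},
]

# Total successful/failed transactions.
total_ok = sum(r["ok"] for r in results)
total_failed = len(results) - total_ok

# Errors per second: failures grouped by elapsed second of the run.
errors_per_second = {}
for r in results:
    if not r["ok"]:
        errors_per_second[r["second"]] = errors_per_second.get(r["second"], 0) + 1

# Component size analysis: average bytes per successful download.
avg_component_size = sum(r["bytes"] for r in results if r["ok"]) / total_ok
```

Real tools compute the same kinds of rollups continuously while the test runs, then chart them in the final report.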
24.4 Conclusion
Performance testing is an independent discipline and involves all the phases of the mainstream testing lifecycle, i.e. strategy, plan, design, execution, analysis and reporting. Without the rigor described in this paper, executing performance testing does not yield anything more than finding a few more defects in the system. However, if executed systematically with appropriate planning, performance testing can unearth issues that could not otherwise be found through mainstream testing. It is very typical of a project manager to be overtaken by time and resource pressures, leading to not enough budget being allocated for performance testing, the consequences of which could be disastrous to the final system.

There is an important point to be noted here. Before testing the system for performance requirements, the system should have been architected and designed to meet the required performance goals. If not, it may be too late in the software development cycle to correct serious performance issues.

Web-enabled applications and infrastructures must be able to execute evolving business processes with speed and precision while sustaining high volumes of changing and unpredictable user audiences. Load testing gives the greatest line of defense against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment. The discipline helps businesses succeed in leveraging Web technologies to their best advantage: enabling new business opportunities, lowering transaction costs and strengthening profitability. Fortunately, robust and viable solutions exist to help fend off disasters that result from poor performance. Automated load
testing tools and services are available to meet the critical need of measuring and optimizing complex and dynamic application and infrastructure performance. Once these solutions are properly adopted and utilized, leveraging an ongoing, lifecycle-focused approach, businesses can begin to take charge and leverage information technology assets to their competitive advantage. By continuously testing and monitoring the performance of critical software applications, businesses can confidently and proactively execute strategic corporate initiatives for the benefit of shareholders and customers alike.
25 Load Testing
Load testing is the creation of a simulated load on a real computer system, using virtual users who submit work as real users would at real client workstations, thus testing the system's ability to support such a workload. Testing of critical web applications during development and before deployment should include functional testing to confirm conformance to the specifications, performance testing to check whether the application offers an acceptable response time, and load testing to see what hardware or software configuration will be required to provide acceptable response times and handle the load that will be created by the real users of the system.

Finally, the choice of test tool should also take its load testing support into consideration: its multithreading capabilities, and whether it can create the required number of virtual users with minimal resource consumption and a maximal virtual user count.
25.3 Settings
Run-time settings should define the way the scripts are run in order to accurately emulate real users. Settings can configure the number of concurrent connections, the test run time, whether to follow HTTP redirects, etc. System response times can also vary based on connection speed; hence, throttling bandwidth can emulate dial-up connections at varying modem speeds (28.8 Kbps or 14.4 Kbps) or T1 (1.54 Mbps), etc.
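Bandwidth throttling works by pacing each chunk of a transfer so that the effective rate matches the modem being emulated. A minimal sketch of the idea (the function and chunk size are illustrative):

```python
import time

def throttled_send(payload: bytes, kbps: float, chunk=512):
    """Pace `payload` out in chunks so the average rate approximates `kbps`
    (kilobits per second), the way a tool emulates a 14.4 or 28.8 modem."""
    bytes_per_second = kbps * 1000 / 8
    sent = 0
    start = time.perf_counter()
    while sent < len(payload):
        sent += len(payload[sent:sent + chunk])
        # Sleep until the wall clock catches up with the target rate.
        target_elapsed = sent / bytes_per_second
        delay = target_elapsed - (time.perf_counter() - start)
        if delay > 0:
            time.sleep(delay)
    return time.perf_counter() - start

elapsed = throttled_send(b"x" * 3600, kbps=28.8)
```

At 28.8 kbps the emulated link allows 3,600 bytes per second, so the 3,600-byte payload above takes about one second; a real tool applies the same pacing to each virtual user's connection.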
25.5 Conclusion
Load testing is the measure of an entire Web application's ability to sustain a number of simultaneous users and transactions while maintaining adequate response times. It is the only way to accurately test the end-to-end performance of a Web site prior to going live. Two common methods for implementing this load testing process are manual and automated testing. Manual testing would involve
As load testing is iterative in nature, the performance problems must be identified so that the system can be tuned and retested to check for bottlenecks. For this reason, manual testing is not a very practical option. Today, automated load testing is the preferred choice for load testing a Web application. The testing tools typically use three major components to execute a test:
A console, which organizes, drives and manages the load
Virtual users, performing a business process on a client application
Load servers, which are used to run the virtual users

With automated load testing tools, tests can be easily rerun any number of times and the results can be reported automatically. In this way, automated testing tools provide a more cost-effective and efficient solution than their manual counterparts. Plus, they minimize the risk of human error during testing.
26 Stress Testing
26.1 Introduction to Stress Testing
This testing is accomplished through reviews (product requirements, software functional requirements, software designs, code, test plans, etc.), unit testing, system testing (also known as functional testing), expert user testing (like beta testing but in-house), smoke tests, etc. All these testing activities are important, and each plays an essential role in the overall effort, but none of them specifically looks for problems like memory and resource management. Further, these testing activities do little to quantify the robustness of the application or determine what may happen under abnormal circumstances. We try to fill this gap in testing by using stress testing.

Stress testing can imply many different types of testing depending upon the audience. Even in the literature on software testing, stress testing is often confused with load testing and/or volume testing. For our purposes, we define stress testing as performing random operational sequences at larger than normal volumes, at faster than normal speeds and for longer than normal periods of time, as a method to accelerate the rate of finding defects and verify the robustness of our product.

Stress testing in its simplest form is any test that repeats a set of actions over and over with the purpose of "breaking the product". The system is put through its paces to find where it may fail. As a first step, you can take a common set of actions for your system and keep repeating them in an attempt to break the system. Adding some randomization to these steps will help find more defects. How long can your application stay functioning doing this operation repeatedly? To help you reproduce your failures, one of the most important things to remember is to log everything as you proceed. You need to know exactly what was happening when the system failed. Did the system lock up after 100 attempts or 100,000 attempts? [1]

Note that there are many other types of testing not mentioned above, for example risk-based testing, random testing, security testing, etc. We have found that it is best to review what needs to be tested, pick the multiple testing types that will provide the best coverage for the product to be tested, and then master those testing types, rather than trying to implement every testing type. Some of the defects that we have been able to catch with stress testing that have not been found in any other way are memory leaks, deadlocks, software asserts, and configuration conflicts. For more details about these types of defects, or how we were able to detect them, refer to the section "Typical Defects Found by Stress Testing". Table 1 provides a summary of some of the strengths and weaknesses that we have found with stress testing.
Helpful at finding memory leaks, deadlocks, software asserts, and configuration conflicts
With automated stress testing, the stress test is performed under computer control. The stress test tool is implemented to determine the application's configuration, to execute all valid command sequences in a random order, and to perform data logging. Since the stress test is automated, it becomes easy to execute multiple stress tests simultaneously across more than one product at the same time. Depending on how the stress inputs are configured, stress can do both "positive" and "negative" testing. Positive testing is when only valid parameters are provided to the device under test, whereas negative testing provides both valid and invalid parameters to the device as a way of trying to break the system under abnormal circumstances. For example, if a valid input is in seconds, positive testing would test 0 to 59 and negative testing would try -1 to 60, etc. Even though there are clearly advantages to automated stress testing, it still has its disadvantages. For example, we have found that each time the product application changes, we most likely need to change the stress tool (or, more commonly, commands need to be added to or deleted from the input command set). Also, if the input command set changes, then the output command sequence also changes, given the pseudo-randomization. Table 2 provides a summary of some of these advantages and disadvantages that we have found with automated stress testing.
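The positive/negative example above (0-59 valid, -1 and 60 invalid) can be sketched as a tiny stress driver. The device-under-test stub and value ranges mirror the seconds example in the text, and the fixed random seed makes the logged sequence reproducible, which is the whole point of logging everything:

```python
import random

VALID_SECONDS = range(0, 60)          # positive domain from the example: 0..59

def device_under_test(seconds):
    """Stand-in DUT: accepts 0-59, rejects anything else."""
    if seconds not in VALID_SECONDS:
        raise ValueError(f"out of range: {seconds}")
    return f"set {seconds}s"

def stress(iterations, negative=True, seed=42):
    """Random operational sequence; logs every step so failures can be
    reproduced, and mixes in invalid inputs when `negative` is set."""
    rng = random.Random(seed)         # fixed seed -> reproducible sequence
    log = []
    for _ in range(iterations):
        value = rng.randint(-1, 60) if negative else rng.randint(0, 59)
        try:
            log.append(("ok", value, device_under_test(value)))
        except ValueError as err:
            log.append(("rejected", value, str(err)))
    return log

log = stress(1000)
rejected = [entry for entry in log if entry[0] == "rejected"]
```

Because the seed is fixed, rerunning `stress(1000)` replays the exact command sequence that preceded a failure, which is what makes a logged stress run reproducible.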
In summary, automated stress testing overcomes the major disadvantages of manual stress testing and finds defects that no other testing type can find. Automated stress testing exercises various features of the system, at a rate exceeding that at which actual end-users can be expected to operate, and for durations of time that exceed typical use. The automated stress test randomizes the order in which the product features are accessed. In this way, non-typical sequences of user interaction are tested with the system in an attempt to find latent defects not detectable with other techniques.
To take advantage of automated stress testing, our challenge then was to create an automated stress test tool that would:
1. Simulate user interaction for long periods of time (since it is computer controlled, we can exercise the product more than a user can).
2. Provide as much randomization of command sequences to the product as possible, to improve test coverage over the entire set of possible features/commands.
3. Continuously log the sequence of events, so that issues can be reliably reproduced after a system failure.
4. Record the memory in use over time, to allow memory management analysis.
5. Stress the resource and memory management features of the system.
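Requirement 4 - recording memory in use over time - is straightforward to sketch with Python's standard tracemalloc module; the deliberately leaky operations are contrived for illustration:

```python
import tracemalloc

def run_with_memory_log(operations):
    """Execute each operation, sampling allocated memory after each one so
    that growth over time (e.g. a leak) shows up in the log."""
    tracemalloc.start()
    samples = []
    for op in operations:
        op()
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples

leak = []                                  # deliberately growing structure
ops = [lambda: leak.append(bytearray(10_000)) for _ in range(5)]
samples = run_with_memory_log(ops)
growth = samples[-1] - samples[0]          # steady growth suggests a leak
```

Plotting such samples over a long uninterrupted stress run is how a leak that would never appear in a short functional test becomes visible.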
Programmable interfaces: interfaces like command prompts, RS-232, Ethernet, General Purpose Interface Bus (GPIB), Universal Serial Bus (USB), etc. that accept strings representing command functions without regard to context or the current state of the device.

Graphical user interfaces (GUIs): interfaces that use the Windows model to allow the user direct control over the device; individual windows and controls may or may not be visible and/or active depending on the state of the device.
For additional complexity, other variations of the automated stress test can be performed. For example, the stress test can vary the rate at which commands are sent to the interface, it can send the commands across multiple interfaces simultaneously (if the product supports it), or it can send multiple commands at the same time.
Diagram
A stress test tool can have many different interactions and be implemented in many different ways. Figure 1 shows a block diagram which can be used to illustrate some of the stress test tool interactions. The main interactions for the stress test tool include an input file and the Device Under Test (DUT). The input file is used here to provide the stress test tool with a list of all the commands and interactions needed to test the DUT.
[Figure 1: Input File -> Stress Test Tool -> Device Under Test (DUT)]
occurs, continue to add additional data to the defect description. Eventually, over time, you will be able to detect a pattern, isolate the root cause and resolve the defect. Some defects just seem to be un-reproducible, especially those that reside around page faults; but overall, we know that the robustness of our applications increases proportionally with the amount of time that the stress test runs uninterrupted.
A test coverage analyzer automates this process. Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage". Test coverage analysis can be used to assure the quality of the set of tests, not the quality of the actual product. Coverage analysis requires access to the test program's source code, and often requires recompiling it with a special command.

Code coverage analysis is a structural testing technique (white box testing). Structural testing compares test program behavior against the apparent intention of the source code. This contrasts with functional testing (black box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic. Functional testing examines what the program accomplishes, without regard to how it works internally.

Probably the most basic form of test coverage is to measure which procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose job is really to measure performance bottlenecks. If the execution time in some procedures is zero, you need to write new tests that hit those procedures. But this measure of test coverage is so coarse-grained that it is not very practical.
27.2 Line-Level Test Coverage
The basic measure of a dedicated test coverage tool is tracking which lines of code are executed, and which are not. This result is often presented in a summary at the procedure, file, or project level giving a percentage of the code that was executed. A large project that achieved 90% code coverage might be considered a well-tested product. Typically the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: by studying the unexecuted code, you can see exactly what functionality has not been tested.
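As a rough illustration of what a line-coverage tool records (real products do far more, such as mapping lines back to annotated source), here is a minimal sketch using Python's `sys.settrace`; the function names are invented for the example:

```python
import sys

def line_coverage(func, *args):
    """Execute func and return the set of source line numbers it ran."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        # 'line' events fire before each line of the traced function runs.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):
    if n < 0:
        kind = "negative"
    else:
        kind = "non-negative"
    return kind

lines = line_coverage(classify, 5)
# Only the else-branch ran; the line assigning "negative" is absent from
# the executed set, pointing directly at the test that still needs writing.
```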
27.3 Condition Coverage and Other Measures
It's easy to find cases where line coverage doesn't really tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g., a statement in an if clause). If that code is shown as executed, you don't know whether you have tested the case when it is skipped. You need condition coverage to know. There are many other test coverage measures. However, most available code coverage tools do not provide much beyond basic line coverage. In theory, you should have more. But in practice, if you achieve 95+% line coverage and still have time and budget to commit to further testing improvements, it is an enviable commitment to quality!
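The if-clause blind spot described above is easy to reproduce. In this hypothetical example, a single test executes every line, giving 100% line coverage, yet the path where the condition is false goes untested until a second test is added:

```python
def apply_discount(price, is_member):
    # The if-clause below has no else branch; line coverage cannot tell us
    # whether the "skip" path (is_member == False) was ever exercised.
    if is_member:
        price = price * 0.9
    return price

# One test executes every line, so line coverage reports 100%...
assert apply_discount(100, True) == 90.0

# ...yet the branch where the discount is *skipped* was never tested.
# Condition (branch) coverage would flag it; this second test closes the gap.
assert apply_discount(100, False) == 100
```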
27.4 How Test Coverage Tools Work

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation phase happens can vary greatly between different products.
Performance Testing Process & Methodology 176 Proprietary & Confidential
Adding probes to the program will make it bigger and slower. If the test suite is large and time-consuming, the performance factor may be significant.
27.4.1 Source-Level Instrumentation
Some products add probes at the source level. They analyze the source code as written, and add additional code (such as calls to a code coverage runtime) that will record where the program reached. Such a tool may not actually generate new source files with the additional code. Some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need. One drawback of this technique is the need to modify the build process: a separate code coverage version needs to be maintained in addition to other versions, such as debug (un-optimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation is dependent on programming language -- the provider of the tool must explicitly choose which languages to support. But it can be somewhat independent of operating environment (processor, OS, or virtual machine).
27.4.2 Executable Instrumentation
Probes can also be added to a completed executable file. The tool will analyze the existing executable, and then create a new, instrumented one. This type of instrumentation is independent of programming language. However, it is dependent on operating environment -- the provider of the tool must explicitly choose which processors or virtual machines to support.
27.4.3 Runtime Instrumentation
Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified. The same executable file used for product release testing should be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation). Instead, the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool will add a tiny bit of instrumentation to the executable. This new code will wake up and connect to a waiting coverage tool whenever the program executes. This added code does not affect the size or performance of the executable, and does nothing if the coverage tool is not waiting.
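Python's standard-library `trace` module behaves like a runtime instrumenter in the sense described above: the file on disk is never modified, the probes exist only for the in-memory execution, and the coverage tool launches the program itself. A minimal sketch:

```python
import trace

# The coverage machinery is configured first; the target code is untouched.
tracer = trace.Trace(count=True, trace=False)

def target():
    total = 0
    for i in range(3):
        total += i
    return total

# The coverage tool starts program execution directly, as the text
# describes for runtime instrumentation; the source file is not rewritten.
value = tracer.runfunc(target)

results = tracer.results()
# results.counts maps (filename, lineno) -> execution count for each
# traced line; nothing on disk was changed to collect it.
```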
Like Executable Instrumentation, Runtime Instrumentation is independent of programming language but dependent on operating environment.
Some commercial test coverage tools (vendor and product):

  Rational (IBM)        PurifyPlus
  Software Research     TCAT
  Testwell              CTC++
  Paterson Technology   LiveCoverage
Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite. It helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage for release criteria based on available testing resources and the importance of preventing post-release failures. Clearly, safety-critical software should have a high goal. We must set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.
Test Case Points (TCP) is a measure for estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project. TCP counting amounts to ranking the requirements, and the test cases to be written for those requirements, as simple, average or complex, and quantifying those rankings into a measure of complexity. In this courseware we shall give an overview of Test Case Points and not elaborate on using TCP as an estimation technique.
29.2 Calculating the Test Case Points
Based on the Functional Requirement Document (FRD), the application is classified into various modules; for a web application, say, we can have "Login and Authentication" as a module, and rank that particular module as Simple, Average or Complex based on the number and complexity of the requirements for that module. A Simple requirement is one which can be given a value on a scale of 1 to 3. An Average requirement is ranked between 4 and 7. A Complex requirement is ranked between 8 and 10.

Complexity of Requirements:

  Requirement Classification   Simple (1-3)   Average (4-7)   Complex (8-10)   Total
The test cases for a particular requirement are classified into Simple, Average and Complex based on the following four factors:
- Test case complexity for that requirement, OR
- Interface with other test cases, OR
- Number of verification points, OR
- Baseline test data
29.2.1.1 Test Case Classification

  Complexity Type   Complexity of Test Case
A sample guideline for classification of test cases is given below:
- Any verification point containing a calculation is considered 'Complex'.
- Any verification point which interfaces with or interacts with another application is classified as 'Complex'.
- Any verification point consisting of report verification is considered 'Complex'.
- A verification point comprising Search functionality may be classified as 'Complex' or 'Average' depending on the complexity.
Depending on the respective project, the complexity needs to be identified in a similar manner. Based on the test case type, an adjustment factor is assigned for simple, average and complex test cases. This adjustment factor has been calculated after a thorough study and analysis done on many testing projects. The Adjustment Factor in the table mentioned below is pre-determined and must not be changed for every project.
  Test Case Type   Number                                        Result
  Simple           No. of simple requirements in the project     Number x Adjustment factor A (R1)
  Average          No. of average requirements in the project    Number x Adjustment factor B (R2)
  Complex          No. of complex requirements in the project    Number x Adjustment factor C (R3)
  Total Test Case Points                                         R1 + R2 + R3
From the break-up of Complexity of Requirements done in the first step, we can get the number of simple, average and complex test case types. By multiplying the number of requirements by its corresponding adjustment factor, we get the simple, average and complex test case points. Summing up the three results, we arrive at the count of Total Test Case Points.