
KJSCE/IT/BE/SEMVII/STQA/2013-14

K.J.SOMAIYA COLLEGE OF ENGINEERING


VIDYAVIHAR, MUMBAI 400 077
Department of Information Technology
Subject: Software Testing And Quality Assurance
Term: ODD (2013) Class / SEM: VII B.E. (IT)
List of Experiments
1. Study of tools and techniques used in various phases of SDLC.
   Outcomes achieved: (a) an ability to apply knowledge of mathematics, science, and engineering; (i) a recognition of the need for, and an ability to engage in, life-long learning; (k) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.

2. Use of IEEE-829 format for developing a test plan for an educational institute application designed for an online admission system.
   Outcomes achieved: (c) an ability to design a system, component, or process to meet desired needs within realistic constraints; (l) an ability to adopt open source standards; (m) an understanding of best practices, standards and their applications.

3. Writing a unit test plan using a standard template for testing a client-server program using UDP as the transport protocol.
   Outcomes achieved: (c) an ability to design a system, component, or process to meet desired needs within realistic constraints; (l) an ability to adopt open source standards; (m) an understanding of best practices, standards and their applications.

4. White box testing using control flow: designing test cases using a CFG (Control Flow Graph).
   Outcomes achieved: (c) an ability to design a system, component, or process to meet desired needs within realistic constraints; (e) an ability to identify and formulate engineering problems.

5. White box testing using data flow: designing test cases using a DFG (Data Flow Graph).
   Outcomes achieved: (c) an ability to design a system, component, or process to meet desired needs within realistic constraints; (e) an ability to identify and formulate engineering problems.

6. Black box testing: study of HP QTP (QuickTest Professional) V10.0 for automation of functional and regression testing.
   Outcomes achieved: (k) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice; (m) an understanding of best practices, standards and their applications.

7. Automated performance testing: study of HP LoadRunner.
   Outcomes achieved: (k) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice; (m) an understanding of best practices, standards and their applications.

8. Write the acceptance criteria for the current software project (BE project) you are working on.
   Outcomes achieved: (a) an ability to apply knowledge of mathematics, science, and engineering; (e) an ability to identify and formulate engineering problems; (m) an understanding of best practices, standards and their applications.

9. Study of Software Quality Standard: ISO 9000:2000 Fundamentals and Requirements.
   Outcomes achieved: (a) an ability to apply knowledge of mathematics, science, and engineering; (i) a recognition of the need for, and an ability to engage in, life-long learning; (m) an understanding of best practices, standards and their applications.

10. Exploring WinRunner.
    Outcomes achieved: (k) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice; (l) an ability to adopt open source standards; (m) an understanding of best practices, standards and their applications.
Text Books:
1. Software Testing and Quality Assurance: Theory and Practice, Sagar Naik (University of Waterloo) and Piyu Tripathy, Wiley, 2008.
References:
1. Effective Methods for Software Testing, William Perry, Wiley.
Subject In-charge
Experiment / assignment / tutorial No._______
Title: Revision of testing tools and techniques used in the software development life cycle.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Revision of testing tools and techniques used in software development lifecycle.
__________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1.Understand basic tools and techniques commonly used in testing software in various
phases.
__________________________________________________________________________
Resources needed: Internet, Libre Office
__________________________________________________________________________
Theory
There are basically two types of software testing tools:
1. Manual tools: these support testing in the early phases of the Software Development Life Cycle (SDLC). Manual testing requires a tester to play the role of an end user and exercise most of the features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan.
2. Automated tools: these are used in the later phases of the SDLC. Test automation is the technique of testing software using a test program rather than people. A test program is written that executes the software and identifies its defects. Such test programs may be written from scratch, built on a general test automation framework, or purchased from a third-party vendor. Test automation can be used to automate time-consuming, repetitive tasks, as sketched below.
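As a minimal, hypothetical illustration of such a test program (the function max_of and the chosen values are invented here, not taken from any particular tool or project), a hand-written automated test in C might look like this:

#include <stdio.h>

/* Hypothetical unit under test: returns the larger of two integers. */
int max_of(int a, int b) {
    return (a > b) ? a : b;
}

static int failures = 0;

/* Tiny test driver: executes the unit with known inputs and reports
   any mismatch between the actual and the expected output. */
static void check(int actual, int expected, const char *name) {
    if (actual != expected) {
        printf("FAIL %s: expected %d, got %d\n", name, expected, actual);
        failures++;
    } else {
        printf("PASS %s\n", name);
    }
}

int main(void) {
    check(max_of(2, 3), 3, "max_of(2,3)");
    check(max_of(-1, -5), -1, "max_of(-1,-5)");
    check(max_of(7, 7), 7, "max_of(7,7)");
    return failures == 0 ? 0 : 1;   /* non-zero exit code signals defects */
}

Such a driver can be re-run automatically after every change, which is what makes regression testing cheap.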
SDLC has six phases:
1. Requirement Gathering phase
2. Design phase
3. Coding phase
4. Testing phase
5. Deployment phase
6. Maintenance phase
Following are the testing tools and techniques used in each phase of SDLC
1. Tools and Techniques used in Requirement Gathering phase:
1. Checklist: checklist is a list of probing questions prepared by the tester for
reviewing a predetermined function.
2. Confirmation/ Examination: This verifies the correctness of many aspects of the
system by contacting third parties such as users. This also involves examining a
document to verify that it exists.
3. Desk checking: This is a review performed by the originator of the requirements, design, or program as a check on the work performed by that individual.
4. Error Guessing: This is a mechanism where the experience and judgment of experts is used to guess in advance what the most probable errors will be, and testing is then directed at those errors to ensure that the system can handle those conditions.
5. Fact Finding: It is a mechanism where information needed to conduct a test or
to provide assurance is obtained through an investigative process.
6. Flow Chart: It is a graphical representation of the program flow in order to
evaluate the completeness of the requirements, design or program specification.
7. Inspection: This is the mechanism where deliverables produced in each phase of the system development life cycle are reviewed step by step.
8. Modeling: This is the mechanism of simulating the functioning of the application system and its environment to test whether the design specifications can achieve the system objectives. The actual system is then built based on the results of the simulation.
9. Peer review: This is the process where programmers review the programs written by another programmer. This happens before execution, so the review is of the source code as a document. Normally the following things are checked:
1. compliance to company standards
2. compliance to procedures
3. compliance to guidelines
4. use of good practices
5. efficiency
6. effectiveness
7. economy
10. Risk Matrix: This is the mechanism where risks in the application system are identified and the adequacy of the controls in each part of the software is tested. The objective is to reduce those risks to a level acceptable to the user.
11. Scoring: This is the process used to determine the degree of testing for high risk as well as low risk systems. It helps to decide the amount of testing required for a particular application: if the score is high, more testing is needed.
12. Walkthrough: This is a process where a programmer explains his or her program to the test team (without actual execution, working only from the document). The programmer may simulate the execution of the application system. The objective of a walkthrough is to give the test team a basis for identifying defects.
2. Tools and Techniques used in Design phase of SDLC:
1. Cause-Effect analysis: This is a graphical tool which shows the effect of every event taking place in the system. It helps the tester to categorize every event by the effect it has produced, and to reduce the number of test conditions required for multiple events which produce the same effect.
2. Checklist: A set of questions designed to review the design of the application
system.
3. Confirmation/ Examination: This is to examine the design document for the application system.
4. Correctness proof: This is a mechanism which involves developing a set of statements/hypotheses which define the correctness of processing. These hypotheses are then tested to determine whether the application system performs processing in accordance with these correctness statements.
5. Design based functional testing: This is a tool which maps the design-based functions to the requirements and identifies those functions for testing purposes.
6. Design reviews: This is a mechanism used during the software development process, in accordance with the software development methodology. The basic objective of a design review is to ensure compliance with the design methodology.
7. Desk checking: This is a mechanism where the designer of the software reviews the work done by other people in the team.
8. Error guessing: Here the experienced designer helps the testing team to
guess the probable errors so that test cases can be designed accordingly.
9. Executable specifications: These are system specifications which are written in a specific language and compiled into a testable program. The compiled specification has less detail and precision than the final version of the program, but it is sufficient to evaluate the proper functioning of the system.
10. Fact Finding: This is the process of investigating the facts about design
documents.
11. Flow chart: This is a graphical representation of the program flow. It helps to
evaluate the completeness of the high level design.
12. Inspection: This is a step by step review of the deliverables produced in the
design phase so as to identify the defects.
13. Modeling: This is the method of simulating the functioning of the
application system and its environment to confirm if the design specifications
will achieve system objectives.
14. Peer review: Here experienced and senior designers review the work done by
others. Basically this review is for checking the compliance to standards, procedures
and guidelines and the use of good practices used in design.
15. Risk Matrix: Here the high level design is checked to identify risks and the controls implemented to mitigate them. This helps in reaching a level of risk acceptable to the user.
16. Scoring: This mechanism is used to decide the amount of testing required to test
the high level design. This helps to identify areas where more testing is required.
17. Test data: These are system transactions which are specifically created for the
purpose of testing the design of application system.
18. Walkthrough: This is a process where the designer of the system explains the details of the design to the testing team so that they can create proper test cases. The objective of the walkthrough is to give the test team a basis for asking questions and identifying defects.
3. Tools and Techniques used in Coding phase of SDLC:
1. Boundary Value Analysis: It is a method for dividing the code of an application program into segments so that testing can occur within the boundaries of those segments. This is a concept from the top-down system design approach.
2. Cause-Effect Graphing: This is a graphical tool which shows the effect of every event taking place in the system. In the coding phase this helps the testing team to see the effect produced by every piece of code written by the developer and to categorize every event by the effect it has produced. This also helps in reducing the number of test conditions required for multiple events which produce the same effect.
3. Checklist: In the coding phase checklist is a list of probing
questions prepared by the testing team with respect to coding strategies so
that they can design good test cases.
4. Compiler based analysis: This tool utilizes the diagnostics produced by a compiler to identify program defects during compilation of the program. This helps the testing team to design their test plan accordingly.
5. Complexity based metric testing: This mechanism uses statistics and mathematics to develop relationships that can be used to identify the complexity of a computer program. It also helps in judging the completeness of testing when evaluating complex logic.
6. Control flow analysis: This is a graphical tool which is used to analyze the branch logic within the program to identify logic problems, so that the testing team can design appropriate test cases.
7. Confirmation/ Examination: This is a process to confirm that proper design
document exists and to examine that document as per the standard.
8. Coverage based metric testing: This is a tool which uses a mathematical relationship to show what percentage of the application system has been covered by the test process. The resulting metric is used for finding out the effectiveness of the test process.
9. Data flow analysis: This is a tool used in the coding phase to ensure that the data used in the program has been properly defined, and that the data which is being defined is appropriately used (a short illustrative sketch appears at the end of this list).
10. Desk checking: This is the review mechanism performed by the
originator of the system so as to check on the work performed by the individual.
11. Error guessing: This is a mechanism where judgment and experience of
some senior people is taken into account to guess the probable errors. This
helps the testing team to write the test cases which will handle these errors.
12. Fact Finding: It is a mechanism where information needed to conduct a test
or to provide assurance is obtained through an investigative process.
13. Flow Chart: It is a graphical representation of the program flow in order to
evaluate the completeness of the requirements, design or program specification.
14. Inspection: This is the mechanism where deliverables produced in each phase of the system development life cycle are reviewed step by step.
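As a small, hypothetical sketch of the kind of problem data flow analysis (item 9 above) is meant to catch, the C fragment below contains two classic data flow anomalies: a variable used before it is defined, and a variable defined but never used. The function and variable names are invented for illustration only.

/* Hypothetical fragment with two data flow anomalies. */
int sum_fees(const int fees[], int n) {
    int total;              /* declared but never given an initial value    */
    int unused = 100;       /* defined (assigned) but never used afterwards */
    for (int i = 0; i < n; i++)
        total += fees[i];   /* use of 'total' before any definition of it   */
    return total;
}

A data flow analysis tool, or a compiler with warnings enabled, would flag both anomalies before any test case is executed.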
4. Tools and Techniques used in Testing phase of SDLC:
1. Acceptance test criteria: This tool is used by the testing team to develop the system standards and functionality which must be achieved before the user will accept the system in the production environment. Acceptance testing is also known as Black Box Testing, Functional Testing, End User Testing, Confidence Testing, Validation Testing, or UAT (User Acceptance Testing).
2. Boundary value analysis
3. Checklist
4. Complexity based metric testing
5. Confirmation/ Examination
6. Correctness proof
7. Coverage based metric testing
8. Data dictionary
9. Design based functional testing
10. Disaster testing
11. Error guessing
12. Exhaustive testing: this is the mechanism where every possible path and condition is evaluated and tested. It is the only test method which guarantees proper functioning of the application program, but it is rarely practical for non-trivial programs.
13. Fact finding
14. Inspections
15. Instrumentation
5. Tools and Techniques used in Deployment/Installation phase of
SDLC:
1. Acceptance Test Criteria: This is a process used by the testing team to develop the system standards and functionality which must be achieved before the user will accept the system in the production environment. Acceptance testing is also known as Black Box Testing, Functional Testing, End User Testing, Confidence Testing, Validation Testing, or UAT (User Acceptance Testing).
2. Checklist: This is a list of questions prepared by the testing team to get the insight
into the deployment phase.
3. Confirmation/ Examination: To confirm and examine the documents
related to deployment phase.
4. Error Guessing: People expert in deploying the application are consulted, and from the discussion the probable errors in the deployment phase are listed. Accordingly, the testing team prepares the test plan to address those errors.
5. Fact Finding: It is a mechanism where information needed to conduct a test or
to provide assurance is obtained through an investigative process. (Interviews,
Surveys )
6. Inspection: This is the mechanism where deliverables produced in each phase of the system development life cycle are reviewed step by step.
7. Instrumentation: This makes use of a computer monitor or a counter to record how frequently a particular error occurs.
8. Parallel operation: This is a process where the old version and the new version of the software run in parallel at the same time, so that differences between the two versions can be found and testing can be planned accordingly.
9. Peer review: This is a process where peers are requested to review the various
aspects of deployment phase. Normally peer review process checks for
compliance to various standards, procedures, guidelines, best practices etc.
10. System logs: This is a mechanism where information collected during the operation of a computer system is used for analysis. This helps to determine how well the system has performed. The logs are produced by operating software such as DBMSs, operating systems and job accounting systems, and they are used for testing purposes. The installation logs created during the installation process are extremely useful for fixing problems occurring during the installation phase.
11. Utility programs: These are general purpose software packages which can be used in the testing of an application system. The most valuable ones are those which analyze data files.
6. Tools and Techniques used in Maintenance phase of SDLC:
1. Checklist: List of questions prepared to understand the maintenance
phase of the system.
2. Code comparison: This tool is used by the tester to identify the difference
between two versions of the same program. This can be used either for the object
code or source code.
3. Confirmation/ Examination: To confirm and examine the relevant document.
4. Desk checking: Owners of modules/process keep check on the work done by
individuals so as to ensure quality.
5. Disaster testing: This is to check the preparedness of the user for an unanticipated disaster. The testing team prepares a special Disaster Recovery Plan to address this issue.
6. Error guessing: The testing team prepares a list of probable errors by discussing with experts in the maintenance phase. From those errors the test plan for the maintenance phase is prepared so that the system will be able to handle them.
7. Fact finding: This is the process of investigation to find out facts about some
testing condition. This is done mainly by referring to the documents.
8. Inspections: This is a review process to check the deliverables produced by each
phase of SDLC.
9. Instrumentation: This is the use of computer monitors or counters to record the frequency of a particular error so that the test plan can be prepared accordingly.
10. Integrated test facility: Test data is given as input to the production version of an application, so the live application is exercised in parallel with test data as well as live production data. This helps in comparing the results obtained and in modifying the test cases if required.
11. Peer review: Peers are requested to review the maintenance process from the point of view of best practices.
12. SCARF (System Control Audit Review File): This is a mechanism where the software is operated over a period of time and the data/information gathered during the operation is analyzed. For example, all data entry errors are gathered over a period of time and analyzed to see whether the quality of input is improving over time or not.
13. Test data: Actual system transactions which are created for the purpose of testing the application.
14. Test data generator: These are software systems which can be used to automatically generate test data for testing purposes. The generators take the parameters of the data element values in order to generate large amounts of test transactions (a small illustrative sketch appears after this list).
15. Tracing: A representation of the path followed by computer programs as
they process data or the paths followed in the database to locate one or more
pieces of data used to produce a logical record for processing.
16. Utility programs: A general purpose software package which can be
used in the testing of an application system. The most valuable utilities are
those which analyze data files.
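As a hedged sketch of item 14 above, a very small test data generator in C might take the parameters of the data elements and emit many test transactions. The field names (roll number, fee amount), ranges and record format below are made up purely for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical generator: emits 'count' random test transactions whose
   field values are drawn from the supplied ranges. */
static void generate(int count, int roll_min, int roll_max,
                     int fee_min, int fee_max) {
    for (int i = 0; i < count; i++) {
        int roll = roll_min + rand() % (roll_max - roll_min + 1);
        int fee  = fee_min  + rand() % (fee_max  - fee_min  + 1);
        printf("TXN%04d,%d,%d\n", i, roll, fee);   /* one test transaction per line */
    }
}

int main(void) {
    srand(42);                              /* fixed seed gives reproducible test data */
    generate(1000, 1, 120, 5000, 90000);    /* 1000 transactions within the given ranges */
    return 0;
}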
Procedure / Approach /Algorithm / Activity Diagram:
Study various tools and techniques used in software testing throughout software
development life cycle.
1. Tools and Techniques used in Requirement Gathering phase.
2. Tools and Techniques used in Design phase.
3. Tools and Techniques used in Coding phase.
4. Tools and Techniques used in Testing phase.
5. Tools and Techniques used in Deployment phase.
6. Tools and Techniques used in Maintenance phase.
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
_________________________________________________________________________
Questions:
1.What is the difference between peer review, inspection and walkthrough?
2.What is the difference between code comparison and parallel operation?
_________________________________________________________________________
Outcomes:
1. An ability to apply knowledge of mathematics, science, and engineering. (a)
2. A recognition of the need for, and an ability to engage in, life-long learning. (i)
3. An ability to use the techniques, skills, and modern engineering tools necessary for engineering practice. (k)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
__
References:
Books/ Journals/ Websites:
1. Effective Methods for Software Testing, William Perry, Wiley.
2. www.softwaregeek.com
3. www.softwaretestinghelp.com
Experiment / assignment / tutorial No._______
Title: Use of IEEE-829 format for developing
test plan.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: To develop test plan for an application for educational institute for online
admission system.
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Develop a test plan for any software project as per the standard IEEE format.
_________________________________________________________________________
Resources needed: Internet, LibreOffice
Theory
A test plan is a document detailing a systematic approach to testing a system such as
a machine or software. The plan typically contains a detailed understanding of what
the eventual workflow will be.
A test plan documents the strategy that will be used to verify and ensure that a
product or system meets its design specifications and other requirements. A test
plan is usually prepared by or with significant input from Test Engineers.
Depending on the product and the responsibility of the organization to which the
test plan applies, a test plan may include one or more of the following:
Design Verification or Compliance test - to be performed during the development or
approval stages of the product, typically on a small sample of units.
Manufacturing or Production test - to be performed during preparation or assembly
of the product in an ongoing manner for purposes of performance verification and
quality control.
Acceptance or Commissioning test - to be performed at the time of delivery or
installation of the product.
Service and Repair test - to be performed as required over the service life of the
product.
Regression test - to be performed on an existing operational product, to verify that
existing functionality didn't get broken when other aspects of the environment are
changed (e.g., upgrading the platform on which an existing application runs).
A complex system may have a high level test plan to address the overall
requirements and supporting test plans to address the design details of subsystems
and components.
Test plan document formats can be as varied as the products and organizations to
which they apply. There are three major elements that should be described in the
test plan: Test Coverage, Test Methods, and Test Responsibilities. These are also
used in a formal test strategy.
Test coverage
Test coverage in the test plan states what requirements will be verified during what
stages of the product life. Test Coverage is derived from design specifications and
other requirements, such as safety standards or regulatory codes, where each
requirement or specification of the design ideally will have one or more
corresponding means of verification. Test coverage for different product life stages
may overlap, but will not necessarily be exactly the same for all stages. For
example, some requirements may be verified during Design Verification test, but
not repeated during Acceptance test. Test coverage also feeds back into the design
process, since the product may have to be designed to allow test access.
Test methods
Test methods in the test plan state how test coverage will be implemented. Test
methods may be determined by standards, regulatory agencies, or contractual
agreement, or may have to be created new. Test methods also specify test
equipment to be used in the performance of the tests and establish pass/fail criteria.
Test methods used to verify hardware design requirements can range from very
simple steps, such as visual inspection, to elaborate test procedures that are
documented separately.
Test responsibilities
Test responsibilities state which organizations will perform the test methods at each stage of the product life. This allows test organizations to plan, acquire or develop test equipment and other resources necessary to implement the test methods for which they are responsible. Test responsibilities also include what data will be collected and how that data will be stored and reported (often referred to as "deliverables"). One outcome of a successful test plan should be a record or report of the verification of all design specifications and requirements as agreed upon by all parties.
_________________________________________________________________________
Procedure / Approach /Algorithm / Activity Diagram:
IEEE 829 format template for test plan
1. Test Plan Identifier
Some type of unique company generated number to identify this test plan, its level
and the level of software that it is related to. Preferably the test plan level will be the
same as the related software level. The number may also identify whether the test plan
is a Master plan, a Level plan, an integration plan or whichever plan level it
represents. This is to assist in coordinating software and testware versions within
configuration management. Keep in mind that test plans are like other software documentation: they are dynamic in nature and must be kept up to date. Therefore, they will have revision numbers. You may want to include author and contact information, including the revision history, as part of either the identifier section or the introduction.
2. References
List all documents that support this test plan. Refer to the actual
version/release number of the document as stored in the configuration
management system. Do not duplicate the text from other documents as this
will reduce the viability of this document and increase the maintenance effort.
Documents that can be referenced include:
Project Plan
Requirements specifications
High Level design document
Detail design document
Development and Test process standards
Methodology guidelines and examples
Corporate standards and guidelines
3. Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master etc.).
This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that
contain information relevant to this project/process. If preferable, you can create
a references section to contain all reference documents.
Identify the scope of the plan in relation to the software project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and communication and coordination of key activities. As this is the executive summary, keep the information brief and to the point.
4. Test Items (Functions)
These are things you intend to test within the scope of this test plan.
Essentially, something you will test, a list of what is to be tested. This can be
developed from the software application inventories as well as other sources of
documentation and information. This can be controlled and defined by your local
Configuration Management(CM) process if you have one. This information
includes version numbers, configuration requirements where needed, (especially
if multiple versions of the product are supported). It may also include key
delivery schedule issues for critical elements. Remember, what you are testing is
what you intend to deliver to the Client. This section can be oriented to the level of
the test plan. For higher levels it may be by application or functional area, for
lower levels it may be by program, unit, module or build.
5. Software Risk Issues
Identify what software is to be tested and what the critical areas are, such as:
A. Delivery of a third party product.
B. New version of interfacing software.
C. Ability to use and understand a new package/tool, etc.
D. Extremely complex functions.
E. Modifications to components with a past history of failure.
F. Poorly documented modules or change requests.
There are also some inherent software risks, such as complexity, that need to be identified:
A. Safety
B. Multiple interfaces
C. Impacts on Client
D. Government regulations and rules
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help
identify potential areas within the software that are risky. If the unit testing
discovered a large number of defects or a tendency towards defects in a
particular area of the software, this is an indication of potential future
problems. It is the nature of defects to cluster and clump together. If it was
defect ridden earlier, it will most likely continue to be defect prone.
One good approach to define where the risks are is to have several
brainstorming sessions. Start with ideas, such as, what worries me about this
project/application.
6. Features to be tested
This is a listing of what is to be tested from the USERS viewpoint of what
the system does. This is not a technical description of the software, but a USERS
view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H,
M, L): High, Medium and Low. These types of levels are understandable to a
User. You should be prepared to discuss why a particular level was chosen. It should
be noted that Section 4 and Section 6 are very similar. The only true difference is
the point of view. Section 4 is a technical type description including version
numbers and other technical information and Section 6 is from the Users
viewpoint. Users do not understand technical software terminology; they understand
functions and processes as they relate to their jobs.
7. Features not to be tested
This is a listing of what is NOT to be tested from both the Users viewpoint of
what the system does and a configuration management/version control view. This
is not a technical description of the software, but a USERS view of the functions.
Identify WHY the feature is not to be tested; there can be any number of reasons:
- Not to be included in this release of the software.
- Low risk; has been used before and is considered stable.
- Will be released but not tested or documented as a functional part of the release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.
8. Approach (Strategy )
This is your overall test strategy for this test plan; it should be appropriate to the
level of the plan (master, acceptance, etc.) and should be in agreement with all
higher and lower levels of plans. Overall rules and processes should be identified.
Are any special tools to be used and what are they?
Will the tool require special training?
What metrics will be collected?
Which level is each metric to be collected at?
How is Configuration Management to be handled?
How many different configurations will be tested?
Hardware
Software
Combinations of HW, SW and other vendor packages
What levels of regression testing will be done and how much at each test level?
Will regression testing be based on severity of defects detected?
How will elements in the requirements and design that do not make sense
or are untestable be processed?
If this is a master test plan the overall project testing approach and
coverage requirements must also be identified.
Specify if there are special requirements for the testing.
Only the full component will be tested.
A specified segment of grouping of features/components must be tested
together.
Other information that may be useful in setting the approach are:
MTBF, Mean Time Between Failures - if this is a valid measurement for the test
involved and if the data is available.
SRE, Software Reliability Engineering - if this methodology is in use and if
the information is available.
How will meetings and other organizational processes be handled?
9. Item Pass/Fail Criteria:
What are the Completion criteria for this plan? This is a critical aspect of any test
plan and should be appropriate to the level of the plan.
At the Unit test level this could be items such as:
All test cases completed.
A specified percentage of cases completed with a percentage containing some
number of minor defects.
Code coverage tool indicates all code covered.
At the Master test plan level this could be items such as: All lower level plans
completed. Or a specified number of plans completed without errors. This could be
an individual test case level criterion or a unit level plan or it can be general
functional Requirements for higher level plans.
What is the number and severity of defects located? Is it possible to compare this to the total number of defects? This may be impossible, as some defects are never detected. A defect is something that may cause a failure, and it may be acceptable to leave it in the application. A failure is the result of a defect as seen by the user.
10. Suspension Criteria and Resumption Requirement:
Know when to pause in a series of tests.
If the number or type of defects reaches a point where the follow on testing has no
value, it makes no sense to continue the test; you are just wasting resources.
Specify what constitutes stoppage for a test or series of tests and what is the
acceptable level of defects that will allow the testing to proceed past the defects.
Testing after a truly fatal error will generate conditions that may be identified as
defects but are in fact ghost errors caused by the earlier defects that were ignored.
11. Test Deliverables:
What is to be delivered as part of this plan?
Test plan document.
Test cases.
Test design specifications.
Tools and their outputs.
Simulators.
Static and dynamic generators.
Error logs and execution logs.
Problem reports and corrective actions.
One thing that is not a test deliverable is the software itself; that is listed under Test Items and is delivered by development.
12. Remaining test tasks:
If this is a multi-phase process or if the application is to be released in increments
there may be parts of the application that this plan does not address. These areas
need to be identified to avoid any confusion should defects be reported back on
those future functions. This will also allow the users and testers to avoid incomplete
functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only
cover a portion of the total functions/features. This status needs to be
identified so that those other areas have plans developed for them and to
avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions
of those test tasks belonging to both the internal groups and the external groups.
13. Environmental Needs:
Are there any special requirements for this test plan, such as:
Special hardware such as simulators, static generators etc.
How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
How much testing will be done on each component of a multi-part feature?
Special power requirements.
Specific versions of other supporting software.
Restricted use of the system during testing.
14. Staffing and Training Needs :
Training on the application/system.
Training for any test tools to be used.
Section 4 and Section 15 also affect this section. What is to be tested and
who is responsible for the testing and training.
15. Responsibilities:
Who is in charge?
This issue includes all areas of the plan. Here are some examples:
Setting risks.
Selecting features to be tested and not tested.
Setting overall strategy for this level of plan.
Ensuring all required elements are in place for testing.
Providing for resolution of scheduling conflicts, especially, if testing is done
on the production system.
Who provides the required training?
Who makes the critical go/no go decisions for items not covered in the test
plans?
16. Schedule:
The schedule should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip, and testing is part of the overall project plan.
As we all know, the first area of a project plan to get cut when it comes to crunch time at the end of a project is the testing. It usually comes down to the decision, "Let's put something out even if it does not really work all that well." And, as we all know, this is usually the worst possible decision.
How slippage in the schedule is to be handled should also be addressed here. If the users know in advance that a slippage in the development will cause a slippage in the test and the overall delivery of the system, they may be a little more tolerant, if they know it is in their interest to get a better tested application. By spelling out the effects here you have a chance to discuss them in advance of their actual occurrence. You may even get the users to agree to a few defects in advance, if the schedule slips.
At this point, all relevant milestones should be identified with their relationship to
the development process identified. This will also help in identifying and tracking
potential slippage in the schedule caused by the test process.
It is always best to tie all test dates directly to their related development activity
dates. This prevents the test team from being perceived as the cause of a delay. For
example, if system testing is to begin after delivery of the final build, then system
testing begins the day after delivery. If the delivery is late, system testing starts
from the day of delivery, not on a specific date. This is called dependent or relative
dating.
17. Planning Risks and contingencies.
What are the overall risks to the project with an emphasis on the testing process?
Lack of personnel resources when testing is to begin.
Lack of availability of required hardware, software, data or tools.
Late delivery of the software, hardware or tools.
Delays in training on the application and/or tools.
Changes to the original requirements or designs.
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if
the requirements change after that date, the following actions will be
taken.
The test schedule and development schedule will move out an
appropriate number of days. This rarely occurs, as most projects tend
to have fixed delivery dates.
The number of tests performed will be reduced.
The number of acceptable defects will be increased.
These two items could lower the overall quality of the delivered
product.
Resources will be added to the test team.
The test team will work overtime.
This could affect team morale.
The scope of the plan may be changed.
There may be some optimization of resources. This should be
avoided, if possible for obvious reasons.
You could just QUIT. A rather extreme option to say the least.
Management is usually reluctant to accept scenarios such as
the one above even though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is
that testing is cut back or omitted completely, neither of which should be an
acceptable option.
18. Approval:
Who can approve the process as complete and allow the project to proceed to the
next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is. The audience for a unit test level plan is different from that of an integration, system or master level plan.
The levels and type of knowledge at the various levels will be different as
well. Programmers are very technical but may not have a clear
understanding of the overall business process driving the project. Users may
have varying levels of business acumen and very little technical skills. Always be
wary of users who claim high levels of technical skills and programmers that claim
to fully understand business process. These types of individuals can cause more
harm than good if they do not have the skills they believe they possess.
Results: (Program printout with output / Document printout as per the format)
Questions:
1. What is a test case? What are the objectives of testing?
2. Explain the difference between failure, error and fault.
_________________________________________________________________________
Outcomes:
1.An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to adopt open source standards (l)
3. An understanding of best practices, standards and their applications. (m)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_______________________________________________________________
References:
Books/ Journals/ Websites:
1.http://ieeexplore.ieee.org
2. http://en.wikipedia.org/wiki/Test_plan
3.http://gerrardconsulting.com/tkb/guidelines/ieee829/main.html
Experiment / assignment / tutorial No._______
Title: To develop unit test plan for client server program using
UDP as the transport protocol .
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: To develop unit test plan for client server program using UDP as the transport
protocol .
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Develop a unit test plan for any software project .
_________________________________________________________________________
Resources needed: Internet, Libre Office
Theory
Unit Testing
Unit testing is a method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine if they are fit for use. Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are created by programmers or occasionally by white box testers during the development process.
Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist in testing a module in isolation, as sketched below. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Their implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation.
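As a minimal, hypothetical sketch of testing a unit in isolation with a stub (the function names is_pass and read_marks_from_server are invented for illustration and do not come from the assigned program), the test build links against a stub in place of the real dependency:

#include <assert.h>

/* In production, this value would come from the real dependency (e.g. a
   network read); in the test build the stub below is linked instead, so
   the unit under test can be exercised in isolation. */
static int stubbed_marks = 0;

int read_marks_from_server(void) {      /* stub replacing the real call */
    return stubbed_marks;
}

/* Unit under test: converts marks (0-100) into a pass/fail flag. */
int is_pass(void) {
    return read_marks_from_server() >= 40;
}

int main(void) {
    stubbed_marks = 35;  assert(is_pass() == 0);   /* just below the pass mark */
    stubbed_marks = 40;  assert(is_pass() == 1);   /* exactly on the boundary  */
    stubbed_marks = 95;  assert(is_pass() == 1);   /* well above the boundary  */
    return 0;
}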
Unit Test Plan
This document describes the test plan, in other words, how the tests will be carried out. It will typically include the list of things to be tested, roles and responsibilities, prerequisites to begin testing, the test environment, assumptions, what to do after a test is successfully carried out, what to do if a test fails, a glossary, and so on.
Procedure / Approach /Algorithm / Activity Diagram:
For a given program use the following template to develop the unit test plan.
Unit Test Plan
Module ID: _________ Program ID: ___________
1. Module Overview
Briefly define the purpose of this module. This may require only a single phrase, e.g.: calculates overtime pay amount, calculates equipment depreciation, etc.
1.1 Inputs to Module
[Provide a brief description of the inputs to the module under test.]
1.2 Outputs from Module
[Provide a brief description of the outputs from the module under test.]
1.3 Logic Flow Diagram
[Provide logic flow diagram if additional clarity is required.]
2. Test Data
(Provide a listing of test cases to be exercised to verify processing logic.)
2.1 Positive Test Cases
[Representative data samples should provide a spectrum of valid field and processing
values including "Syntactic" permutations that relate to any data or record format issues.
Each test case should be numbered, indicate the nature of the test to be performed and the
expected proper outcome.]
2.2 Negative Test Cases
[The invalid data selection contains all of the negative test conditions associated with the module. These include numeric values outside thresholds, invalid characters, invalid or missing header/trailer records, and invalid data structures (missing required elements, unknown elements, etc.). An illustrative example appears after this template.]
3. Interface Modules
Identify the modules that interface with this module indicating the nature of the interface:
outputs data to, receives input data from, internal program interface, external program
interface, etc. Identify sequencing required for subsequent string tests or sub-component
integration tests.
4. Test Tools
[Identify any tools employed to conduct unit testing. Specify any stubs or utility programs developed or used to invoke tests. Identify the names and locations of these aids for future regression testing. If data is supplied from the unit test of a coupled module, specify the module relationship.]
5. Archive Plan
Specify how and where data is archived for use in subsequent unit tests. Define any
procedures required to obtain access to data or tools used in the testing effort. The
unit test plans are normally archived with the corresponding module specifications.
6. Updates
Define how updates to the plan will be identified. Updates may be required due to
enhancements, requirements changes, etc.
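As an illustration only, and assuming a simple UDP echo client-server program (the port number and behaviors below are hypothetical, not a specification of the assigned program), the Test Data section of such a plan might list cases like these:

2.1 Positive test cases (illustrative)
TC-01: Client sends "hello" to the server on port 9876; expected: the server echoes "hello" back to the client.
TC-02: Client sends a 1-byte datagram; expected: a 1-byte datagram is echoed back unchanged.
TC-03: Client sends 100 datagrams in sequence; expected: 100 echoes are received and every payload matches.

2.2 Negative test cases (illustrative)
TC-11: Client sends a datagram to a port on which no server is listening; expected: the client times out and reports an error instead of hanging.
TC-12: Client sends an empty (0-byte) datagram; expected: the server handles it without crashing.
TC-13: Server receives a datagram larger than its receive buffer; expected: the datagram is truncated or rejected as per the specification, with no crash.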
____________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
____________________________________________________________________
Questions:
1. List down the tools useful in unit testing and debugging the code.
________________________________________________________________________
Outcomes:
1.An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to adopt open source standards(l)
3. An understanding of best practices, standards and their applications. (m)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1.www.uml.org.cn/test/utp_template.doc
2.http://www.exforsys.com/tutorials/testing/unit-testing.html
3.www.softwaretestinghelp.com
Experiment / assignment / tutorial No._______
Title: White Box Testing using control flow.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: White Box Testing using control flow
______________________________________________________________________
Objective:
After completing this experiment you will be able to:
1.Design Control flow graph.
2.Understand path selection criteria.
3.Generate test input data.
_______________________________________________________________________
Resources needed: Libre Office
_________________________________________________________________________
Theory
Control Flow: Successive execution of program statements is viewed as flow of control.
Conditional statements alter the default sequential control flow in a program unit.
Control Flow Testing: The main idea in control flow testing is to appropriately select a few
paths in a program unit and observe whether or not the selected paths produce the expected
outcome. By executing a few paths in a program unit, the programmer tries to assess the
behavior of the entire program unit.
Control flow testing is a kind of structural testing, which is performed by
programmers to test the code written by them. Test cases for control flow testing are derived from the source code of a program unit (for example, a function or a method) rather than from the entire program.
Procedure / Approach /Algorithm / Activity Diagram:
Outline of control flow testing:
The overall idea of generating test input data for performing control flow testing is outlined below.
Inputs to the test generation process
- Source code of the program unit
- Set of Path selection criteria: statement, branch.
Generation of control flow graph: A CFG is a graphical representation of a program unit. The idea behind drawing a CFG is to be able to visualize all the paths in the program unit.
Selection of paths
- Paths are selected from CFG to satisfy path selection criteria.
Generation of test input data
- Two kinds of paths:
Executable path: there exists an input for which the path is executed; such a path is called a feasible (executable) path.
Infeasible path: if there is no input that executes the path, it is called an infeasible path.
- Solve the path conditions to produce test input for each path.
Feasibility test of the path
The idea behind checking the feasibility of a selected path is to meet the path selection criteria. If some chosen paths are found to be infeasible, then other paths are selected to meet the criteria.
Control Flow Graph: It is a graphical representation of the program unit. Three symbols are used to construct a CFG.
Rectangle: It represents a sequential computation. We label each computation and decision box with a unique integer.
Decision box: The two branches of a decision box are labeled with T and F to represent the true and false evaluations, respectively, of the condition within the box.
Merge point: We do not label a merge node, because one can easily identify the paths in a CFG even without explicitly considering the merge nodes.
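To make the procedure concrete, consider the small C function below. It is chosen purely for illustration and is not the routine assigned in the questions that follow; the node numbers in the comments refer to the CFG one would draw for it.

/* Returns the absolute difference of two integers. */
int abs_diff(int a, int b) {    /* node 1: entry                */
    int d;
    if (a > b)                  /* node 2: decision (a > b)     */
        d = a - b;              /* node 3: true branch          */
    else
        d = b - a;              /* node 4: false branch         */
    return d;                   /* node 5: merge point and exit */
}

The CFG has two entry-exit paths, 1-2-3-5 and 1-2-4-5, and selecting both satisfies the statement as well as the branch coverage criteria. Their path predicates are (a > b) and !(a > b); solving them gives, for example, the test inputs (a = 5, b = 2) with expected output 3 and (a = 2, b = 5) with expected output 3. Both paths are feasible, so no alternative paths need to be selected.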
_________________________________________________________________________
__
Results: (Program printout with output / Document printout as per the format)
_________________________________________________________________________
Questions:
You are given the binary search routine in C shown below. The input array V is assumed to be sorted in ascending order, n is the array size, and you want to find the index of an element X in the array. If X is not found in the array, the routine is supposed to return -1.
int binsearch(int X, int V[], int n) {
    int low, high, mid;
    low = 0;
    high = n - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (X < V[mid])
            high = mid - 1;
        else if (X > V[mid])
            low = mid + 1;
        else
            return mid;
    }
    return -1;
}
1. Draw a CFG for binsearch().
2. From the CFG, identify a set of entry-exit paths to satisfy the complete statement
coverage criterion.
3. Identify additional paths, if necessary, to satisfy the complete branch coverage criterion.
4. For each path identified above, derive their path predicate expressions.
5. Solve the path predicate expressions to generate test input and compute the
corresponding expected outcomes.
Outcomes:
1.An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to identify and formulate engineering problems. (e)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
References:
Books/ Journals/ Websites:
1. Book Software Testing and Quality Assurance by Kshirasagar Naik and Priyadarshi
Tripathy.
2. http://en.wikipedia.org/wiki/Control_flow_graph
3. http://suif.stanford.edu/~courses/cs243/joeq/adv_ex3.html
Experiment / assignment / tutorial No._______
Title: White Box Testing using data
flow.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: White Box Testing using data flow.
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1.Design Data flow graph.
2.Understand path selection criteria
3.Generate test input data.
_________________________________________________________________________
Resources needed: Libre Office
Theory
A program unit accepts inputs, performs computations, assigns new values to variables,
and returns results. One can visualize the flow of data values from one statement to another. A data value produced in one statement is expected to be used later.
Example: obtain a file pointer, and use it later.
If the later use is never verified, we do not know whether the earlier assignment is acceptable.
Motivations of data flow testing:
- Verify that the memory location for a variable is accessed in a desirable way.
- Verify the correctness of the data values defined (i.e. generated) by observing that all the uses of each value produce the desired results.
Data flow testing can be performed at two conceptual levels:
- Static data flow testing
- Dynamic data flow testing
Static data flow testing
Identifies potential defects, commonly known as data flow anomalies, by analyzing the source code without executing it.
Dynamic data flow testing
- Involves actual program execution.
- Bears similarity with control flow testing.
Identify paths to execute them.
Paths are identified based on data flow testing criteria.
_________________________________________________________________________
Procedure / Approach /Algorithm / Activity Diagram:
Data flow testing is outlined as follows:
- Draw a data flow graph from a program.
- Select one or more data flow testing criteria.
- Identify paths in the data flow graph satisfying the selection criteria.
- Derive path predicate expressions from the selected paths
- Solve the path predicate expressions to derive test inputs
Data Flow Graph:
It is drawn with the objective of identifying data definitions and their uses.
A data flow graph is a directed graph constructed as follows:
- A sequence of definitions and c-uses is associated with each node of the graph.
- A set of p-uses is associated with each edge of the graph.
- The entry node has a definition of each parameter and each nonlocal variable used in the program.
- The exit node has an undefinition of each local variable.
Occurrences of a data variable are classified as follows:
Definition: A variable gets a new value.
Undefinition or kill: This occurs if the value and the location become unbound.
Use: This occurs when the value is fetched from the memory location of the variable. There are two forms of uses of a variable: computation use (c-use) and predicate use (p-use).
Data flow terms:
Global c-use: A c-use of a variable x in node i is said to be a global c-use if x has been defined before in a node other than node i.
Definition clear path:
A path (i - n1 - ... - nm - j), m ≥ 0, is called a definition clear path (def-clear path) with respect to variable x
- from node i to node j, and
- from node i to edge (nm, j),
if x has been neither defined nor undefined in nodes n1, ..., nm.
Global definition:
A node i has a global definition of variable x if node i has a definition of x and there is a def-clear path w.r.t. x from node i to some
- node containing a global c-use of x, or
- edge containing a p-use of variable x.
Simple path:
A simple path is a path in which all nodes, except possibly the first and the last, are
distinct.
Loop-free paths:
A loop-free path is a path in which all nodes are distinct.
Complete path:
A complete path is a path from the entry node to the exit node
Du-path:
A path (n1 - n2 - ... - nj - nk) is a du-path w.r.t. variable x if node n1 has a global definition of x and either
- node nk has a global c-use of x and (n1 - n2 - ... - nj - nk) is a def-clear simple path w.r.t. x, or
- edge (nj, nk) has a p-use of x and (n1 - n2 - ... - nj - nk) is a def-clear, loop-free path w.r.t. x.
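To make the def/use terminology concrete, the small C fragment below (a hypothetical example, not taken from the experiment) is annotated with the places where the variable sum is defined, has a c-use, and has a p-use.

int scale(int a, int b) {      /* entry node: definitions of parameters a and b     */
    int sum;
    sum = a + b;               /* definition of sum; global c-uses of a and b       */
    if (sum > 100)             /* p-use of sum on both outgoing edges               */
        sum = 100;             /* redefinition of sum (kills the earlier value)     */
    return sum;                /* global c-use of sum                               */
}                              /* exit node: undefinition of the local variable sum */

For example, the path from the node containing sum = a + b to the true edge of the predicate is def-clear w.r.t. sum, so it is a du-path for that definition; the definition sum = 100 together with the later c-use in return sum forms another du-path.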
Data Flow Testing Criteria
All-defs:
For each variable x and each node i, such that x has a global definition in node i, select a
complete path which includes a def-clear path from node i to
node j having a global c-use of x, or
edge (j, k) having a p-use of x.
All-c-uses:
For each variable x and each node i, such that x has a global
definition in node i, select complete paths which include def-clear
paths from node i to all nodes j such that there is a global c-use of x in j.
All-p-uses:
For each variable x and each node i, such that x has a global
definition in node i, select complete paths which include def-clear paths
from node i to all edges (j, k) such that there is a p-use of x on (j, k).
All-p-uses/some-c-uses:
This criterion is identical to the all-p-uses criterion except when a variable x has no p-use.
If x has no p-use, then this criterion reduces to the some-c-uses criterion.
Some-c-uses: For each variable x and each node i, such that x has a
global definition in node i, select complete paths which include def-
clear paths from node i to some nodes j such that there is a global c-
use of x in j.
All-c-uses/some-p-uses:
This criterion is identical to the all-c-uses criterion except when a variable x has no c-use.
If x has no global c-use, then this criterion reduces to the some-p-uses criterion.
Some-p-uses: For each variable x and each node i, such that x has a global definition in node i, select complete paths which include def-clear paths from node i to some edges (j, k) such that there is a p-use of x on (j, k).
All-uses:
This criterion produces a set of paths due to the all-p-uses criterion and the all-c-uses
criterion.
All-du-paths:
For each variable x and for each node i, such that x has a global
definition in node i, select complete paths which include all du-
paths from node i
- to all nodes j such that there is a global c-use of x in j, and
- to all edges (j, k) such that there is a p-use of x on (j, k).
Feasible Paths and Test Selection Criteria:
Executable (feasible) path
- A complete path is executable if there exists an assignment of values
to input variables and global variables such that all the path predicates
evaluate to true.
Infeasible path
- A complete path is infeasible if no such assignment of values to input variables and global variables exists.
- For a criterion to be useful, it must select a set of executable (feasible) paths.
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
Questions:
1. Draw a data flow graph for the binsearch() function given below:
int binsearch (int X, int V[ ], int n) {
    int low, high, mid;
    low = 0;
    high = n - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (X < V[mid])
            high = mid - 1;
        else if (X > V[mid])
            low = mid + 1;
        else
            return mid;
    }
    return -1;
}
Q2. Assuming that the input array V[ ] has at least one element in it, find an infeasible path in the data flow graph for the binsearch() function.
Q3. By referring to the data flow graph obtained in Q1, find a set of complete paths satisfying the all-defs selection criterion with respect to variable mid.
Q4. By referring to the data flow graph obtained in Q1, find a set of complete paths satisfying the all-defs selection criterion with respect to variable high.
Q5. Solve the path predicate expressions to generate test input and compute the corresponding expected outcomes.
_________________________________________________________________________
Outcomes:
1.An ability to design a system, component, or process to meet desired needs within
realistic constraints. (c)
2. An ability to identify and formulate engineering problems. (e)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
______________________________________________________________________
References:
Books/ Journals/ Websites:
1. Book Software Testing and Quality Assurance by Kshirasagar Naik and
Priyadarshi Tripathy.
2. http://en.wikipedia.org/wiki/Data_flow_diagram
Experiment / assignment / tutorial No._______
Title: Black Box Testing: Study of QTP (QuickTest Professional)
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Black Box Testing: Study of QTP (QuickTest Professional), a tool for automated functional testing and regression testing.
____________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Understand the concept of black box testing.
2. Understand the various features of QTP and know the facilities provided for
automation.
_________________________________________________________________________
Resources needed: Libre Office, Internet
Theory:
HP Quick Test professional (QTP)
Software for Automated Functional Testing and Regression testing
Introduction: HP QuickTest Professional is automated testing software designed for
testing various software applications and environments. It performs functional and
regression testing through a user interface such as a native GUI or web interface. It works
by identifying the objects in the application user interface or a web page and performing
desired operations (such as mouse clicks or keyboard events); it can also capture object
properties like name or handler ID. HP QuickTest Professional uses the VBScript scripting language to specify the test procedure and to manipulate the objects and controls
of the application under test. To perform more sophisticated actions, users can edit the
underlying VBScript.
QTP has an Active Screen, which provides snapshots of the application under test as it appeared when testing was performed.
The Data Table object in QTP helps in parameterizing the test. In each new test, the Data Table contains one Global tab plus an additional tab for every action. The Data Table is a Microsoft Excel-like sheet which represents the data applicable to your test.
Although HP QuickTest Professional is usually used for "UI Based" Test Case
Automation, it also can automate some "Non-UI" based Test Cases such as file system
operations and database testing. Following are some of the important features of QTP 10.0
1. Exception handling:
HP Quick Test Professional manages exception handling using recovery scenarios; the goal is to continue running tests
if an unexpected failure occurs. For example, if an application crashes and a message
dialog appears, HP Quick Test Professional can be instructed to attempt to restart the
application and continue with the rest of the test cases from that point. Because HP
Quick Test Professional hooks into the memory space of the applications being tested,
some exceptions may cause HP Quick Test Professional to terminate and be
unrecoverable.
2. Data-driven testing: This is also called parameterization of objects, meaning that a constant value recorded in the test gets replaced by a parameter. Every test is recorded in QTP; the process of playing the same test again with different input values is called parameterization. It means the same action with multiple sets of data. This need is common when testing web-based applications.
HP Quick Test Professional supports data-driven testing. For example, data can be output
to a data table for reuse elsewhere. Data-driven testing is implemented as a Microsoft
Excel workbook that can be accessed from HP Quick Test Professional. HP Quick Test
Professional has two types of data tables: the Global data sheet and Action (local) data
sheets. The test steps can read data from these data tables in order to drive variable data
into the application under test, and verify the expected result.
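The idea can be sketched in plain C (a language-neutral illustration of table-driven testing only, not QTP's Data Table API; the function login_ok() and the table contents are hypothetical): the same test action is executed once per data row, and the actual result is compared with the expected result stored in that row.

#include <stdio.h>
#include <string.h>

/* Hypothetical function under test: validates a user name/password pair. */
static int login_ok(const char *user, const char *pass) {
    return strcmp(user, "admin") == 0 && strcmp(pass, "secret") == 0;
}

/* One row of the data table: the inputs plus the expected outcome. */
struct row { const char *user; const char *pass; int expected; };

int main(void) {
    struct row table[] = {            /* plays the role of the Data Table */
        {"admin", "secret", 1},
        {"admin", "wrong",  0},
        {"guest", "secret", 0},
    };
    int rows = sizeof table / sizeof table[0];
    for (int i = 0; i < rows; i++) {  /* same action, multiple sets of data */
        int actual = login_ok(table[i].user, table[i].pass);
        printf("row %d: %s\n", i + 1, actual == table[i].expected ? "pass" : "fail");
    }
    return 0;
}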
3. Automating custom and complex UI objects
HP Quick Test Professional may not recognize customized user interface objects and other
complex objects. Users can define these types of objects as virtual objects. HP Quick Test
Professional does not support virtual objects for analog recording or recording in low-level
mode.
4. Extensibility
HP Quick Test Professional can be extended with separate add-ins for a number of
development environments that are not supported out-of-the-box. HP QuickTest Professional add-ins include support for Web, .NET, Java, and Delphi. HP QuickTest
Professional and the HP QuickTest Professional add-ins are packaged together in HP
Functional Testing software.
5. Test results
At the end of a test, HP QuickTest Professional generates a test result. Using XML
schema, the test result indicates whether a test passed or failed, shows error messages, and
may provide supporting information that allows users to determine the underlying cause
of a failure. Release 10 lets users export HP QuickTest Professional test results into
HTML, Microsoft Word or PDF report formats. Reports can include images and screen
shots for use in reproducing errors.
6. User interface
HP QuickTest Professional provides two views of, and ways to modify, a test script:
Keyword View and Expert View. These views enable HP QuickTest Professional to act as
an Integrated Development Environment (IDE) for the test, and HP QuickTest Professional
includes many standard IDE features, such as breakpoints to pause a test at predetermined
places.
7. Keyword view
Keyword View lets users create and view the steps of a test in a modular, table format.
Each row in the table represents a step that can be modified. The Keyword View can also
contain any of the following columns: Item, Operation, Value, Assignment, Comment, and
Documentation. For every step in the Keyword View, HP QuickTest Professional displays
a corresponding line of script based on the row and column value. Users can add, delete, or modify steps at any point in the test. In the Keyword View, users can also view properties for items such as checkpoints, output values, and actions, use conditional and loop statements, and insert breakpoints to assist in debugging a test.
8. Expert view
In Expert View, HP QuickTest Professional lets users display and edit a test's source code using VBScript. Here the already recorded script of the test is displayed, and users can edit it if they need to. Designed for more advanced users, Expert View lets users edit all test actions except for the root Global action, and changes are synchronized with the Keyword View.
9. Languages
HP QuickTest Professional uses VBScript as its scripting language. VBScript supports
classes but not polymorphism and inheritance. Compared with Visual Basic for
Applications (VBA), VBScript lacks the ability to use some Visual Basic keywords,
does not come with an integrated debugger, lacks an event handler, and does not have a
forms editor. HP has added a debugger, but the functionality is more limited when
compared with testing tools that integrate a full-featured IDE, such as those provided with
VBA, Java, or VB.NET.
10. Synchronization:
Synchronization is an important mechanism to compensate for inconsistencies in the
performance of applications which respond to the inputs slowly during testing. The default
wait interval in QTP is 20 seconds. If the application responds slowly, this time interval is not enough and the test run fails unexpectedly. QTP then halts or waits until the object and its properties are available. We can add timeout statements such as a Wait statement or a conditional statement: the Wait function is used for a hard-coded timeout, while conditional statements are used for a synchronization point. The situations in which synchronization may be needed are as below:
1. To retrieve information from the data base.
2. Time taken for a window to pop up.
3. Time taken for the progress bar to reach 100%.
4. Time taken for the status message to appear.
A synchronization point can be inserted using a dialog box where we can specify the time in milliseconds after which QTP will continue to the next step. The default time is 10 seconds (10,000 milliseconds).
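The mechanism behind such a synchronization point can be sketched in C (a conceptual illustration only, not QTP syntax; wait_for() and progress_complete() are hypothetical names): a condition is polled repeatedly until it holds or the specified timeout expires.

#include <stdio.h>
#include <time.h>

/* Poll the condition roughly every 100 ms until it holds or timeout_ms expires.
   Returns 1 if the condition was met in time, 0 on timeout. */
static int wait_for(int (*condition)(void), long timeout_ms) {
    struct timespec interval = {0, 100 * 1000000L};   /* 100 ms */
    long waited = 0;
    while (waited < timeout_ms) {
        if (condition())
            return 1;
        nanosleep(&interval, NULL);
        waited += 100;
    }
    return condition();
}

/* Hypothetical condition, e.g. "the progress bar has reached 100%". */
static int progress_complete(void) { return 1; }

int main(void) {
    if (wait_for(progress_complete, 10000))            /* a 10-second sync point */
        printf("object ready, continue with the next step\n");
    else
        printf("synchronization timed out\n");
    return 0;
}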
11. Facilities for creating actions:
In QTP every test is recorded and replayed whenever required. This recorded test can be divided into logical sections. So when a new test is created, some of the actions from the earlier tests can be reused. This helps to design more modular and efficient tests. Users can insert new actions at record time or after recording. An action has its own script including all the steps recorded. An action can be reusable or non-reusable.
12. Object Repository: The objects associated with each test and each action are stored in a database which is called the object repository. There are two modes of object repository: 1. Default mode. 2. Shared mode.
13. Check Points: In QTP checkpoints allow us to compare the current behavior of the
application with its behavior in the earlier version. Standard checkpoints are used for
checking different properties of application objects. Bitmap checkpoints are used for
checking images. Text checkpoints are used for checking specific text and more. Database
checkpoints are used for checking contents of the database used in application.
14. User defined functions: When we have large segments of code which we need to use several times in one test or in several different tests, it is useful to create user-defined functions. This will make testing easier. These functions can be defined in individual tests. We can also create an external VBScript library file containing these functions.
Procedure / Approach /Algorithm / Activity Diagram:
Study the following concepts:
- The features and applications of QTP
- Data-driven testing
- The recording modes available in QTP
- Synchronization
________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
________________________________________________________________________
Questions:
1. What is the need for automating functional and regression testing?
2. QTP uses VBScript. What are the advantages and disadvantages of this?
3. How is testing performed in QTP?
4. What is synchronization? During testing, when is it necessary to use synchronization?
5. What are checkpoints? When are they needed?
6. What is data driven testing? (Parameterization)
7. What is action in QTP?
8. What is object repository in QTP?
_______________________________________________________________________
Outcomes:
1. An ability to use the techniques, skills, and modern engineering tools necessary for
engineering practice.(k)
2. An understanding of best practices, standards and their applications. (m)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1. Test Automation Tools, a book prescribed for the Diploma in Software Testing (official curriculum of SEED InfoTech, Pune).
2.http://askqtp.blogspot.com/
Experiment / assignment / tutorial No._______
Title: Automated Performance Testing: Study of HP LoadRunner
Batch: Roll No.: Experiment / assignment / tutorial No.
Title: Automated Performance Testing: Study of HP Load Runner
______________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Understand the concept of performance testing.
2. Understand the various features of HP Load Runner and know the facilities
provided for automation.
_________________________________________________________________________
Resources needed: Internet, Libre-office
_________________________________________________________________________
Theory
Performance Testing
Performance testing is, in general, testing performed to determine how a system performs in
terms of responsiveness and stability under a particular workload. It can also serve to
investigate, measure, validate or verify other quality attributes of the system, such as
scalability, reliability and resource usage.
Performance testing types
Load testing
Load testing is the simplest form of performance testing. A load test is usually conducted
to understand the behaviour of the system under a specific expected load. This load can be
the expected concurrent number of users on the application performing a specific number
of transactions within the set duration. This test will give out the response times of all the
important business critical transactions. If the database, application server, etc. are also
monitored, then this simple test can itself point towards any bottlenecks in the application
software.
Stress testing
Stress testing is normally used to understand the upper limits of capacity within the
system. This kind of test is done to determine the system's robustness in terms of extreme
load and helps application administrators to determine if the system will perform
sufficiently if the current load goes well above the expected maximum.
Endurance testing
Endurance testing is usually done to determine if the system can sustain the continuous
expected load. During endurance tests, memory utilization is monitored to detect potential
leaks. Also important, but often overlooked is performance degradation. That is, to ensure
that the throughput and/or response times after some long period of sustained activity are
as good or better than at the beginning of the test. It essentially involves applying a
significant load to a system for an extended, significant period of time. The goal is to
discover how the system behaves under sustained use.
Spike testing
Spike testing is done by suddenly increasing the number of or load generated by, users by
a very large amount and observing the behaviour of the system. The goal is to determine
whether performance will suffer, the system will fail, or it will be able to handle dramatic
changes in load.
Configuration testing
Rather than testing for performance from the perspective of load, tests are created to
determine the effects of configuration changes to the system's components on the system's
performance and behaviour. A common example would be experimenting with different
methods of load-balancing.
Isolation testing
Isolation testing is not unique to performance testing but involves repeating a test
execution that resulted in a system problem. Often used to isolate and confirm the fault
domain.
Setting performance goals
Performance testing can serve different purposes.
It can demonstrate that the system meets performance criteria.
It can compare two systems to find which performs better.
Or it can measure which parts of the system or workload cause the system to perform badly.
Many performance tests are undertaken without due consideration to the setting of realistic
performance goals. The first question from a business perspective should always be "why
are we performance testing?". These considerations are part of the business case of the
testing. Performance goals will differ depending on the system's technology and purpose
however they should always include some of the following:
Concurrency/throughput
If a system identifies end-users by some form of log-in procedure then a concurrency goal
is highly desirable. By definition this is the largest number of concurrent system users that
the system is expected to support at any given moment. The work-flow of a scripted
transaction may impact true concurrency especially if the iterative part contains the log-in
and log-out activity.
If the system has no concept of end-users, then the performance goal is likely to be based on a
maximum throughput or transaction rate. A common example would be casual browsing of
a web site such as Wikipedia.
Server response time
This refers to the time taken for one system node to respond to the request of another. A simple example would be an HTTP 'GET' request from a browser client to a web server. In
terms of response time this is what all load testing tools actually measure. It may be
relevant to set server response time goals between all nodes of the system.
Render response time
This is a difficult thing for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognizing a period of time where there is no activity 'on the wire'. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario, which is a feature not offered by many load testing tools.
Performance specifications
It is critical to detail performance specifications (requirements) and document them in any
performance test plan. Ideally, this is done during the requirements development phase of
any system development project, prior to any design effort.
However, performance testing is frequently not performed against a specification; i.e. no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the weakest link: there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system
represents this critical path, and some test tools include (or can have add-ons that provide)
instrumentation that runs on the server (agents) and report transaction times, database
access times, network overhead, and other server monitors, which can be analyzed together
with the raw performance statistics. Without such instrumentation one might have to have
someone crouched over Windows Task Manager at the server to see how much CPU load
the performance tests are generating (assuming a Windows system is under test).
Performance testing can be performed across the web, and even done in different parts of
the country, since it is known that the response times of the internet itself vary regionally. It
can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the
system from realistic points. For example, if 50% of a system's user base will be accessing
the system via a 56K modem connection and the other half over a T1, then the load
injectors (computers that simulate real users) should either inject load over the same mix of
connections (ideal) or simulate the network latency of such connections, following the same
user profile.
It is always helpful to have a statement of the likely peak numbers of users that might be
expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th percentile response time, then an injector configuration could be used to test whether the proposed system met that specification.
Pre-requisites for Performance Testing
A stable build of the system, which must resemble the production environment as closely as possible, is required.
The performance testing environment should not be combined with the User Acceptance Testing (UAT) or development environment. If UAT, integration, or other tests are run in the same environment, the results obtained from performance testing may not be reliable. As a best practice, it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.
Test conditions
In performance testing, it is often crucial (and often difficult to arrange) for the test
conditions to be similar to the expected actual use. This is, however, not entirely possible
in actual practice. The reason is that the workloads of production systems have a random
nature, and while the test workloads do their best to mimic what may happen in the
production environment, it is impossible to exactly replicate this workload variability, except in the simplest systems.
Loosely-coupled architectural implementations (e.g.: SOA) have created additional
complexities with performance testing. Enterprise services or assets (that share a common
infrastructure or platform) require coordinated performance testing (with all consumers
creating production-like transaction volumes and load on shared infrastructures or
platforms) to truly replicate production-like states. Due to the complexity and financial
and time requirements around this activity, some organizations now employ tools that can
monitor and create production-like conditions (also referred to as "noise") in their
performance testing environments (PTE) to understand capacity and resource
requirements and verify / validate quality attributes.
Timing
It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true in the case of functional testing, but even more so with performance testing, due to the end-to-end nature of its scope. It is always crucial for the performance test team to be involved as early as possible, because key performance prerequisites, e.g. performance test environment acquisition and preparation, are often lengthy and time-consuming activities.
Tools
In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response time.
Technology
Performance testing technology employs one or more PCs or Unix servers to act as
injectors each emulating the presence of numbers of users and each running an
automated sequence of interactions (recorded as a script, or as a series of scripts to
emulate different types of user interaction) with the host whose performance is being
tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics
from each of the injectors and collating performance data for reporting purposes. The
usual sequence is to ramp up the load starting with a small number of virtual users and
increasing the number over a period to some maximum. The test result shows how the
performance varies with the load, given as number of users vs. response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete, something that might be caused by inefficient database queries, pictures, etc.
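A toy version of such a measurement loop is sketched below in C (this is not LoadRunner; do_transaction() is a hypothetical stand-in for a real request). It simply times a simulated transaction at increasing request counts and prints the average response time, which is the kind of load-versus-response-time data these tools report.

#include <stdio.h>
#include <time.h>

/* Hypothetical transaction under test; a real injector would issue an HTTP request here. */
static void do_transaction(void) {
    volatile long x = 0;
    for (long i = 0; i < 100000; i++)
        x += i;                    /* simulated server-side work */
}

int main(void) {
    for (int load = 10; load <= 40; load += 10) {      /* ramp the load up */
        clock_t start = clock();
        for (int i = 0; i < load; i++)
            do_transaction();
        double total_ms = 1000.0 * (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("load %2d: average response time %.3f ms\n", load, total_ms / load);
    }
    return 0;
}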
Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded: does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?
HP Load Runner
HP LoadRunner software is an automated performance and load testing product from
Hewlett-Packard for examining system behavior and performance, while generating
actual load. HP acquired LoadRunner as part of its acquisition of Mercury Interactive in
November 2006. HP LoadRunner can emulate hundreds or thousands of concurrent
users to put the application through the rigors of real-life user loads, while
collecting information from key infrastructure components (Web servers, database
servers etc.) The results can then be analyzed in detail, to explore the reasons for
particular behavior. HP LoadRunner is sold as part of the HP IT Management Software
category by HP Software & Solutions division.
Consider the client-side application for an automated teller machine (ATM). Although
each client is connected to a server, hundreds of ATMs may be open to the public.
During peak times, such as 10 a.m. on Monday at the start of the work week, the load may be much higher than normal. In order to test such situations, it is not practical to
have a testbed of hundreds of ATMs. So, one can use an ATM simulator and a computer
system with HP LoadRunner to simulate a large number of users accessing the server
simultaneously. Once activities are defined, they are repeatable. After debugging a
problem in the application, managers can check whether the problem persists by
reproducing the same situation, with the same type of user interaction.
Over the last 20 years, companies have turned to software as a means of automating
work. Software applications have been used to drive huge efficiency and
productivity gains and to provide a new medium for collaboration and
information sharing in a global economy. Software applications have, in fact,
become the primary channel both for business critical information sharing and transaction
processing of all kinds. While software development technologies have changed and
matured tremendously in this time period, the complexity of modern applications has
exploded. Applications may utilize tens and hundreds of components to do work once
done with paper or by-hand. There is a direct correlation between
the degree of application complexity and the number of potential points of failure in a
business process. This makes it increasingly difficult to isolate the root cause of a problem.
Moreover, software applications aren't like cars. They don't have permanent parts that are replaced only when they wear out. Whether to deliver competitive advantage or to
respond to changes in business conditions, software applications change weekly,
monthly, and yearly. This stream of change introduces yet another set of risks that
companies have to manage. The incredible pace of change and the explosion of
software complexity
introduce tremendous risk into the software development process. Rigorous performance
testing is the most common strategy to both quantify and reduce this risk to a
business. Automated load testing with HP LoadRunner is an essential part of the
application deployment process.
HP LoadRunner consists of several different tools: Virtual User Generator (VuGen), Controller, load generators, Analysis, and Launcher.
1. Virtual User Generator
1.1 Parameterization
1.2 Correlation
2. Controller
3. Analysis
4. HP LoadRunner in the Cloud
Virtual User Generator
The Virtual User Generator (VuGen) is used to emulate the steps of real human users.
Using VuGen, you can also run scripts for debugging. VuGen lets you record and/or
script a test to be performed against an application under test, and play back and make
modifications to the script as needed, such as defining Parameterization (selecting data for
keyword-driven testing).
HP LoadRunner supports more than 51 protocols, including Web HTTP/HTTPS, Remote Terminal Emulator, Oracle, and Web Services. A protocol acts as a communication medium between a client and a server. For example, an AS400 or mainframe-based application can use a terminal emulator to talk to a server, and an on-line banking application can use HTTP/HTTPS with some Java and Web services.
LoadRunner can record scripts in both single and multi-protocol modes.
During recording, VuGen records a tester's actions by routing data through a proxy. The
type of proxy depends upon the protocol being used and affects the resulting script. For
some protocols, you can select various recording modes to further refine the resulting
script. HP LoadRunner testing uses three types of recording modes: GUI based, URL
based and HTML based.
Parameterization: HP LoadRunner allows you to replace recorded values in a script with parameters. This is called parameterization.
Parameterization is often used:
1. When the application needs unique data (such as user name)
2. Data dependency (such as passwords)
3. Data cache
Correlation
HP Load Runner uses Correlation to handle dynamic content. Dynamic content refers
to page components that are dynamically created during the execution of a business
process, and the value may differ from the value generated in a previous run. Examples
of dynamic content include the ticket number in an on-line reservation system, a
transaction ID in an on-line banking application and most importantly the unique
session ID that is created each time a user logs in. The dynamic content is a part of the
server response. HP Load Runner saves the changing values into parameters, which are
used during emulation.
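The essence of correlation can be sketched in C (a conceptual illustration only, not LoadRunner's actual scripting API; the response text and the sessionID name are made up): the dynamic value returned by the server is extracted into a parameter and then reused in the next request instead of the recorded value.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Hypothetical server response captured during playback. */
    const char *response = "HTTP/1.1 200 OK\r\nSet-Cookie: sessionID=A91X42; Path=/\r\n";
    char session[32] = "";

    /* "Correlate": save the changing value into a parameter. */
    const char *start = strstr(response, "sessionID=");
    if (start != NULL) {
        start += strlen("sessionID=");
        size_t len = strcspn(start, ";\r\n");
        if (len >= sizeof session)
            len = sizeof session - 1;
        memcpy(session, start, len);
        session[len] = '\0';
    }

    /* Reuse the extracted parameter in the next request. */
    printf("GET /account?sessionID=%s HTTP/1.1\n", session);
    return 0;
}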
Controller
Once a script is prepared in VuGen, it runs using the Load Runner Controller. The
Controller manages and maintains the scenarios that are run. During a scenario run, you
can monitor your network and server resources. The Controller assigns virtual users
and load generators to specific scenarios.
You can have multiple machines act as load generators. For example, to run a test of 1000 users, you can use three or more machines with an HP LoadRunner agent installed on them. These machines are known as load generators because the actual
load is generated from them. Each run is configured with a scenario that describes
which scripts will run, when they will run, how many virtual users will run, and which
load generators will be used for each script. The tester connects each script in the
scenario to the name of a machine that is going to act as a load generator and sets the
number of virtual users to be run from that load generator. HP Load Runner can control
multiple load generators and collect results, and it can control load generators located at
remote networks (through a firewall) if required.
HP Load Runner uses monitors during a load test to monitor the performance of individual
components under load. HP LoadRunner supports more than 60 monitors, including Oracle and other database server monitors, WebSphere and other web application server monitors, transaction and runtime monitors, system resource monitors, network delay monitors, firewall monitors, web server resource monitors, streaming media monitors and ERP/CRM server resource monitors. Once you create a scenario and run
Load Runner, you can view the results using the Analysis tool.
Analysis
The Analysis tool takes the result from the completed scenario and prepares graphs and
reports that are used to correlate system information and identify bottlenecks and
performance issues. We can also merge all the graphs that contain data that may affect
response time for a better understanding of the performance and to pinpoint performance
problems. We can then adjust the graph and prepare an HP Load Runner report. We can
save reports, including related graphs, in HTML or Microsoft Word format.
HP LoadRunner in the Cloud
In May 2010, HP announced that an on-demand version of the application performance testing software would be available via Amazon Elastic Compute Cloud. HP Load
Runner in the Cloud is first being offered as beta software in the U.S. and is available
with pay-as-you-go pricing. The software is intended for performance testing for
businesses of any size.
Need for automated performance testing:
Automated Performance Testing is a discipline that leverages products, people, and
processes to reduce the risks of application, upgrade, or patch deployment. At its core,
automated performance testing is about applying production workloads to pre-
deployment systems while simultaneously measuring system performance and end-
user experience. A well constructed performance test answers questions such as:
1. Does the application respond quickly enough for the intended users?
2. Will the application handle the expected user load and beyond?
3. Will the application handle the number of transactions required by the business?
4. Is the application stable under expected and unexpected user loads?
5. Are you sure that users will have a positive experience on go-live day?
By answering these questions, automated performance testing quantifies the impact of a
change in business terms. This in turn makes clear the risks of deployment. An effective
automated performance testing process helps you to make more informed release decisions,
and prevents system downtime and availability problems.
LoadRunner contains the following components:
The Virtual User Generator captures end-user business processes and creates an automated
performance testing script component, also known as a virtual user script.
The Controller component organizes, drives, manages, and monitors the load test.
The load generator components create the load by running virtual users.
The Analysis component helps us view, dissect, and compare the performance results.
The Launcher provides a single point of access for all of the LoadRunner
components.
Understanding LoadRunner Terminology
A scenario is a file that defines the events that occur during each testing session, based on
performance requirements. In the scenario, LoadRunner replaces human users with virtual
users or Vusers. Vusers emulate the actions of human users working with your application.
A scenario can contain tens,
hundreds, or even thousands of Vusers. The actions that a Vuser performs during the
scenario are described in a Vuser script. To measure the performance of the server, you
define transactions. A transaction represents end-user business processes that you are
interested in measuring.
Load testing process:
Load testing typically consists of five phases: planning, script creation, scenario definition, scenario execution, and results analysis.
Plan Load Test: Define your performance testing requirements, for example, number of concurrent users, typical business processes, and required response times.
Create Vuser Scripts: Capture the end-user activities into automated scripts.
Define a Scenario: Use the LoadRunner Controller to set up the load test environment.
Run a Scenario: Drive, manage, and monitor the load test from the LoadRunner Controller.
Analyze the Results: Use LoadRunner Analysis to create graphs and reports, and evaluate the performance.
Building Scripts: To create load, we first build automated scripts that emulate real user behaviour.
________________________________________________________________________
Procedure / Approach /Algorithm / Activity Diagram:
Study the following concepts:
- Performance testing
- The features and applications of HP LoadRunner
- Tools of HP LoadRunner
- Automated performance testing in HP LoadRunner
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
_________________________________________________________________________
Questions:
1. What is performance testing? What are the different types of tests involved in performance testing?
2. How does HP LoadRunner help in performance testing?
________________________________________________________________________
Outcomes:
1. An ability to use the techniques, skills, and modern engineering tools necessary for
engineering practice.(k)
2. An understanding of best practices, standards and their applications. (m)
_________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
________________________________________________________________________
References:
Books/ Journals/ Websites:
1 www.hp.com
2. www.wikipedia.org
3.www.tctcomuting.com
4. http://en.wikipedia.org/wiki/Software_performance_testing
Experiment / assignment / tutorial No._______
Title: Develop the Acceptance Criteria.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Write the acceptance criteria for current software project(BE Project) you are
working on.
_________________________________________________________________________
Objective:
After completing this experiment you will be able to:
1. Understand which quality attributes are most important/critical to your project.
2. Write acceptance criteria for any software project.
_________________________________________________________________________
Resources needed: Libre Office,Internet
_________________________________________________________________________
Theory
Acceptance testing
Acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria.
User Acceptance Testing (UAT)
It is conducted by the customer to ensure that the system satisfies the contractual acceptance criteria before being signed off as meeting user needs.
Business Acceptance Testing (BAT)
It is undertaken within the development organization of the supplier to ensure that the
system will eventually pass the user acceptance testing.
Three major objectives of acceptance testing:
Confirm that the system meets the agreed upon criteria
Identify and resolve discrepancies, if any
Determine the readiness of the system for cut-over to live operations

The acceptance criteria are defined on the basis of the following attributes:
Functional Correctness and Completeness
Accuracy
Data Integrity
Data Conversion
Backup and Recovery
Competitive Edge
Usability
Performance
Start-up Time
Stress
Reliability and Availability
Maintainability and Serviceability
Robustness
Timeliness
Confidentiality and Availability
Compliance
Installability and Upgradability
Scalability
Documentation
Procedure / Approach /Algorithm / Activity Diagram:
1. Selection of Acceptance Criteria
The customer needs to select a subset of the quality attributes and prioritize them to suit their specific situation. Ultimately, the acceptance criteria must be related to the business goals of the customer's organization.
For example, IBM used the quality attribute list CUPRIMDS for their products (Capability, Usability, Performance, Reliability, Installation, Maintenance, Documentation, and Service).
For web-based applications, the key attributes are typically reliability, usability, security, availability, scalability, maintainability, and time to market.
2. Define acceptance criteria for each identified attribute for your system.
For example, the aim of the recovery acceptance test criteria is to outline the extent to which data can be recovered after a system crash. The goal of the usability acceptance test criteria is to specify how the system will help the user in the day-to-day job.
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
_________________________________________________________________________
Questions:
1. Why are the selected acceptance criteria the most critical ones for your
system?
_______________________________________________________________________
Outcomes:
1. An ability to apply knowledge of mathematics, science, and engineering.(a)
2. An ability to identify and formulate engineering problems. (e)
3.An understanding of best practices, standards and their applications. (m)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1. Software Testing and Quality Assurance: Theory and Practice, Sagar Naik,
University of Waterloo, Piyu Tripathy, Wiley , 2008
2. Effective methods for Software Testing William Perry, Wiley.
3. http://en.wikipedia.org/wiki/Acceptance_testing
Experiment / assignment / tutorial No._______
Title: Study of software quality standard ISO 9001:2000 fundamentals and requirements.
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Study of software quality standard ISO 9001:2000 fundamentals and requirements.
Objective:
_________________________________________________________________________
After completing this experiment you will be able to:
1. Understand the software quality standard ISO 9001:2000 fundamentals and requirements.
2. Apply these standards to any organization.
_________________________________________________________________________
Resources needed: Libre Office, Internet
_________________________________________________________________________
Theory
There are eight quality management principles on which the quality management system standards of the ISO 9000:2000 and ISO 9000:2008 series are based. These principles can be used by senior management as a framework to guide their organizations towards improved performance. The principles are derived from the collective experience and knowledge of the international experts who participate in the ISO technical committee responsible for developing and maintaining the ISO 9000 standards.
The eight quality management principles are given below:
Principle 1: Customer focus
Principle 2: Leadership
Principle 3: Involvement of people
Principle 4: Process approach
Principle 5: System approach to management
Principle 6: Continual improvement
Principle 7: Factual approach to decision making
Principle 8: Mutually beneficial supplier relationships
1. Customer focus:
Organizations depend on their customers and therefore should understand current and
future customer needs, should meet customer requirements and strive to exceed customer
expectations.
Applying the principle of customer focus typically leads to:
Researching and understanding customer needs and expectations.
Ensuring that the objectives of the organization are linked to customer needs and
expectations.
Communicating customer needs and expectations throughout the organization.
Measuring customer satisfaction and acting on the results.
Systematically managing customer relationships.
Ensuring a balanced approach between satisfying customers and other interested
parties (such as owners, employees, suppliers, financiers, local communities and society as
a whole).
2. Leadership:
Leaders establish unity of purpose and direction of the organization. They should create
and maintain the internal environment in which people can become fully involved in
achieving the organization's objectives.
Key benefits:
People will understand and be motivated towards the organization's goals and
objectives.
Activities are evaluated, aligned and implemented in a unified way.
Miscommunication between levels of an organization will be minimized.
Applying the principle of leadership typically leads to:
Considering the needs of all interested parties including customers, owners,
employees, suppliers, financiers, local communities and society as a whole.
Establishing a clear vision of the organization's future.
Setting challenging goals and targets.
Creating and sustaining shared values, fairness and ethical role models at all levels
of the organization.
Establishing trust and eliminating fear.
Providing people with the required resources, training and freedom to act with
responsibility and accountability.
Inspiring, encouraging and recognizing people's contributions.
3. Involvement of people:
People at all levels are the essence of an organization and their full involvement enables
their abilities to be used for the organization's benefit.
Key benefits:
Motivated, committed and involved people within the organization.
Innovation and creativity in furthering the organization's objectives.
People being accountable for their own performance.
People eager to participate in and contribute to continual improvement.
Applying the principle of involvement of people typically leads to:
People understanding the importance of their contribution and role in the
organization.
People identifying constraints to their performance.
People accepting ownership of problems and their responsibility for solving them.
People evaluating their performance against their personal goals and objectives.
People actively seeking opportunities to enhance their competence, knowledge and
experience.
People freely sharing knowledge and experience.
People openly discussing problems and issues.
4. Process Approach:
A desired result is achieved more efficiently when activities and related resources
are managed as a process.
Key benefits:
Lower costs and shorter cycle times through effective use of resources.
Improved, consistent and predictable results.
Focused and prioritized improvement opportunities.
Applying the principle of process approach typically leads to:
Systematically defining the activities necessary to obtain a desired result.
Establishing clear responsibility and accountability for managing key activities.
Analyzing and measuring of the capability of key activities.
Identifying the interfaces of key activities within and between the functions of the
organization.
Focusing on the factors such as resources, methods, and materials that will improve
key activities of the organization.
Evaluating risks, consequences and impacts of activities on customers, suppliers
and other interested parties.
5. System approach to management:
Identifying, understanding and managing interrelated processes as a system
contributes to the organization's effectiveness and efficiency in achieving its objectives.
Key benefits:
Integration and alignment of the processes that will best achieve the desired results.
Ability to focus effort on the key processes.
Providing confidence to interested parties as to the consistency, effectiveness and
efficiency of the organization.
Applying the principle of system approach to management typically leads to:
Structuring a system to achieve the organization's objectives in the most effective
and efficient way.
Understanding the interdependencies between the processes of the system.
Structured approaches that harmonize and integrate processes.
Providing a better understanding of the roles and responsibilities necessary for
achieving common objectives and thereby reducing cross-functional barriers.
Understanding organizational capabilities and establishing resource constraints
prior to action.
Targeting and defining how specific activities within a system should operate.
Continually improving the system through measurement and evaluation.
6. Continual improvement:
Continual improvement of the organization's overall performance should be a
permanent objective of the organization.
Key benefits:
Performance advantage through improved organizational capabilities.
Alignment of improvement activities at all levels to an organization's strategic
intent.
Flexibility to react quickly to opportunities.
Applying the principle of continual improvement typically leads to:
Employing a consistent organization-wide approach to continual improvement of
the organization's performance.
Providing people with training in the methods and tools of continual improvement.
Making continual improvement of products, processes and systems an objective for
every individual in the organization.
Establishing goals to guide, and measures to track, continual improvement.
Recognizing and acknowledging improvements.
7. Factual approach to decision making: Effective decisions are based on the analysis of data and information.
Key benefits:
Informed decisions.
An increased ability to demonstrate the effectiveness of past decisions through
reference to factual records.
Increased ability to review, challenge and change opinions and decisions.
Applying the principle of factual approach to decision making typically leads to:
Ensuring that data and information are sufficiently accurate and reliable.
Making data accessible to those who need it.
Analyzing data and information using valid methods.
Making decisions and taking action based on factual analysis, balanced with
experience and intuition.
8. Mutually beneficial supplier relationships:
An organization and its suppliers are interdependent, and a mutually beneficial relationship enhances the ability of both to create value.
Key benefits:
Increased ability to create value for both parties.
Flexibility and speed of joint responses to changing market/customer expectations.
Optimization of costs and resources.
Applying the principles of mutually beneficial supplier relationships typically leads to:
Establishing relationships that balance short-term gains with long-term
considerations.
Pooling of expertise and resources with partners.
Identifying and selecting key suppliers.
Clear and open communication.
Sharing information and future plans.
Establishing joint development and improvement activities.
Inspiring, encouraging and recognizing improvements and achievements by
suppliers.
ISO 9001:2000 Requirements:
1. Systemic Requirements: The concept of a quality management system (QMS) is the core part of the ISO 9001:2000 document. A quality management system is defined in terms of quality policy and quality objectives.
An example of quality policy is to review all work products by at least two
skilled persons. Another quality policy is to execute all test cases for at least two test
cycles during system testing.
An example of quality objective is to fix all defects causing a system to crash
before release. Mechanisms are required in terms of processes to execute quality policies
and achieve quality objectives.
Documentation is an important part of a QMS; there is no QMS without documentation. A QMS must be properly documented using a quality manual. The quality manual describes the quality policies and quality objectives.
Documentation part can be summarized as follows:
1. Document organizational policies and goals.
2. Document all quality processes and their relationships.
3. Review and approve updated documents.
4. Monitor documents coming from suppliers.
5. Document a procedure to control records.
2. Management Requirements: The concept of quality cannot be dealt with by developers and test engineers alone. Rather, upper management must accept the fact that quality is an all-pervasive concept.
Upper management must make an effort to see that the entire organization is
aware of quality policies and quality goals.
Following are some important activities for upper management to perform in this
regard:
1. Generate an awareness of quality in order to meet a variety of requirements, such as
customer, regulatory, and statutory requirements.
2. Develop a mechanism for continual improvement of the QMS.
3. Focus on customers by identifying and meeting their requirements in order to
satisfy them.
4. Deal with the quality concept in a planned manner by ensuring that quality objectives
are set at the organizational level, support the quality policy, and are measurable.
5. Clearly define individual responsibilities and authorities concerning the
implementation of quality policies.
6. Communicate the effectiveness of the QMS to the staff so that the staff is in a better
position to conceive improvements to the existing QMS model.
7. Periodically review the QMS to ensure that it is an effective one and it adequately
meets organizational policy and objectives to satisfy the customers.
3. Resource Requirements: Resources are key to achieving organizational policies
and objectives. There are different kinds of resources, namely staff, equipment, tools,
finances, etc.
Typically, different resources are controlled by different divisions of the organization.
Important activities concerning resource management are as follows:
1. Identify and provide the resources required to support the organizational quality policy
in order to realize the quality objectives. Here the key factor is to identify the resources needed
to meet, and even exceed, customer expectations.
2. Allocate quality personnel resources to the projects. Here quality of personnel is
defined in terms of education, training, experience, and skills.
3. Put in place a mechanism to enhance the quality level of personnel.
4. Manage a work environment, including physical, social, psychological, and
environmental factors, that is conducive to efficiency and effectiveness in
people.
4. Realization Requirements: This part deals with processes that transform customer
requirements into products.
The key elements of the realization part are as follows:
1. Develop a plan to realize a product from its requirements. Important elements of
such a plan are identification of the processes needed to develop the product, sequencing of
the processes, and control of the processes.
Product quality objectives and methods to control quality during development are
identified during planning.
2. To realize a product for a customer, much interaction with the customer is
necessary to understand and capture requirements.
3. Review the customer's requirements before committing to the project.
Requirements that are not likely to be met should be rejected in this phase.
4. Once requirements are reviewed and accepted, product design and development
take place:
Product design and development start with planning: Identify the stages of design and
development, assign various responsibilities and authorities, manage interactions between
different groups, and update the plan as changes occur.
Specify and review the inputs for product design and development.
Create and approve the outputs of product design and development. Use the outputs to
control product quality.
Periodically review the outputs of design and development to ensure that progress is being
made.
Perform design and development verification on their outputs.
Perform design and development validation.
Manage the changes made to design and development: identify the changes, record the
changes, review the changes, verify the changes, validate the changes, and approve the
changes.
5. Remedial Requirements:
This part is concerned with measurement, analysis of measured data, and continual
improvement. Measurement of the performance indicators of processes allows one to
determine how well a process is performing. If it is observed that a process is performing
below the desired level, then corrective action can be taken to improve its
performance.
Performance measurement needs are explained below:
1. The success of an organization is largely determined by the satisfaction of its
customers. Thus, the standard requires organizations to develop methods and procedures for
measuring and tracking customer satisfaction levels on an ongoing basis. For example, the
number of calls to an organization's helpline can be considered a measure of customer
satisfaction: too many calls indicate lower customer satisfaction.
2. An organization needs to plan and perform internal audits on a regular basis to
track the status of the organizational QMS.
3. The standard requires that both processes, including QMS processes, and products
be monitored using a set of key performance indicators.
4. As a result of measuring product characteristics, it may be discovered that a
product does not meet its requirements. Organizations need to ensure that such products are
not released to the customers. The causes of discrepancies between the expected product
and the actual one need to be identified.
5. The standard requires that the data collected in measurement processes be analyzed
for making objective decisions. Data analysis is performed to determine the effectiveness of
the QMS.
6. Process improvement includes both corrective actions and preventive actions to
improve the quality of products.
______________________________________________________________________
Procedure / Approach /Algorithm / Activity Diagram:
Case Study: Apply this standard to MTNL (Mahanagar Telephone Nigam Limited).
Principles:
Customer focus:
MTNL's first aim should be customer focus. For developing any new product, MTNL must
understand the needs of the customer. Along with that, it must try to develop something that
is genuinely useful and worthwhile, as well as user friendly. Services should be provided to each
and every customer such that all their demands are fulfilled. In order to launch a new
service or product, customer demands must be taken into consideration. Long-distance
telephone lines must be made free of disturbances, and phone calls should be clear.
Leadership:
Leadership is essential for any organization. Strong leadership helps an organization
reach its destination easily and effectively. It helps in establishing a clear vision and a
set of goals for the organization. Proper leadership is necessary for satisfying customer
demands and bringing profit to the company.
Involvement of People:
People in the organization must understand their responsibility and role in the organization.
They must participate in activities to lead it to success. Participation of people is very
important, as it helps in understanding their needs better and serving them in a more
efficient manner.
Process Approach:
A process approach is essential. Desired results are achieved more efficiently when activities
and related resources are managed as a process. Hence, the organization must follow a process
approach; proper implementation of processes is a prime need of any organization.
System Approach to management:
Identify the processes, understand them, and manage the interrelated processes as a system
to improve effectiveness and development.
Continual Improvement:
The objective of MTNL should be continual improvement. Continual improvement helps in
making effective products and giving better and more efficient results.
Factual Approach to decision making:
In MTNL, factual data should be used to take any decision. Decision making is essential and
must be done by understanding and analyzing the data.
Mutually beneficial Supplier Relationships:
MTNL and its suppliers should maintain interdependent and mutually beneficial relationships,
which enhance the ability of both to create value.
Requirements:
Systemic Requirements:
Documentation is an important part of a QMS. It is important for MTNL to document
everything. The QMS must be well documented, and this should be done using quality manuals.
Management Requirements:
The concept of quality cannot be dealt with by developers and test engineers alone. Rather,
upper management must accept the fact that quality is an all-pervasive concept.
Resource Requirements:
Resources are key to achieving organizational policies and objectives. Resources include staff,
equipment, finance, etc. These resources are typically handled by different divisions.
Realization Requirements:
These requirements deal with converting customer requirements into actual products. They
include reviewing the product against the requirements, verifying outputs, performing
validation, and identifying, verifying, and validating changes.
Remedial Requirements:
This part is concerned with measurement, analysis of measured data, and continual
improvement. Measurement of the performance indicators of processes allows one to
determine how well a process is performing.
THE VARIOUS QUALITY FACTORS THAT CAN BE CONSIDERED ARE:
The telephone should start functioning after registration in less than the prescribed
time period.
The number of faults per 100 subscribers in a month should not exceed the number
of faults prescribed.
In the short term, 85% of the faults booked should be cleared by the next working
day.
The average duration of fault clearance should be as per the prescribed norms.
Grade of service is defined as the permissible limit of number of calls out of 100
calls that can fail during the busy hour.
The call completion rate is the percentage of calls successful in the first attempt in
a local network.
Of the meter readings and telephone bills issued in a billing cycle, the percentage of
bills disputed should not exceed the prescribed percentage.
The urgent trunk calls booked should mature in less than 1 hr 30 mins.
The prescribed percentage of calls made to operator-assisted special services
(such as 199, 197, and 180) should be answered by the operator within the prescribed
period.
95% of customer requests for shifting, closing of telephones, and additional
facilities should be complied with within the prescribed period.
The percentage of repeat faults implies that a fault should not recur in more than
the prescribed percentage of original faults.
_________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
Questions
1. Apply this standard to following organizations and submit the detailed reports.
Educational Institutes giving technical education.
Hospitals with facility for super specialization.
Research and Development facility of any organization.
Financial organization (Bank).
_________________________________________________________________________
Outcomes:
1.An ability to apply knowledge of mathematics, science, and engineering.(a)
2.A recognition of the need for, and an ability to engage in life-long learning(i)
3.An understanding of best practices, standards and their applications. (m)
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1.http://www.itgovernance.co.uk/
2.http://www.iso.org/iso/
3. http://www.qualitygurus.com/
Experiment / assignment / tutorial No._______
Title: Exploring WinRunner
Batch: Roll No.: Experiment / assignment / tutorial No.:
Title: Exploring WinRunner.
Objective:
________________________________________________________________________
After completing this experiment you will be able to:
1. Understand the features and applications of testing tool WinRunner.
2. Perform functional testing of software application with testing tool WinRunner.
_______________________________________________________________________
Resources needed: WinRunner
________________________________________________________________________
Theory
Introducing WinRunner
If you have ever tested software manually, you are aware of its drawbacks. Manual testing
is time-consuming and tedious, requiring a heavy investment in human resources. Worst of
all, time constraints often make it impossible to manually test every feature thoroughly
before the software is released. This leaves you wondering whether serious bugs have gone
undetected.
Automated testing with WinRunner addresses these problems by dramatically speeding up
the testing process. You can create test scripts that check all aspects of your application,
and then run these tests on each new build. As WinRunner runs tests, it simulates a human
user by moving the mouse cursor over the application, clicking Graphical User Interface
(GUI) objects, and entering keyboard input, but WinRunner does this faster than any
human user.
Features
WinRunner is:
a functional and regression testing tool
Windows platform dependent
only for Graphical User Interface (GUI) based applications
based on Object Oriented Technology (OOT) concepts
only for static content
a record/playback tool
Add Ins
WinRunner includes the following add-ins:
Web Test
Visual Basic
ActiveX
PowerBuilder
How does WinRunner identify GUI objects?
GUI applications are made up of GUI objects such as windows, buttons, lists and menus.
WinRunner's Rapid Test Script Wizard learns the descriptions of all GUI objects.
It saves the object descriptions in a GUI map file (.gui), which is the heart of WinRunner.
When we run tests, WinRunner uses this file to identify and locate objects.
Creating GUI Map file and Loading it
There are three ways of creating GUI Map file
o Rapid Test Script Wizard - systematically opens the windows in your
application and learns a description of every GUI object. It is used to learn the
entire application's user interface.
o Recording - adds windows and objects to the GUI map as they are
encountered by the user.
o GUI Map Editor - used to store all the information about the GUI elements
present in your application. The GUI Map Editor can be used to edit the
information in the map file easily.
You can load the GUI map file through the GUI Map Editor or programmatically with
GUI_load("filename.gui"), as sketched below.
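A minimal TSL sketch of loading a GUI map at the start of a test and unloading it at the end is given below. The map file path used here is hypothetical; WinRunner creates the actual .gui file when it learns your application.

# load the GUI map so WinRunner can resolve logical object names
if (GUI_load("c:\\qa\\maps\\sample_app.gui") != E_OK)
    report_msg("Failed to load the GUI map file");

# ... recorded or programmed test steps go here ...

# unload the map when the test is finished
GUI_unload("c:\\qa\\maps\\sample_app.gui");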
Recording Test
By recording, we can quickly create automated test scripts by clicking objects with the mouse
and entering keyboard input.
Recording generates statements in TSL, Mercury's interactive Test Script Language, which is
case sensitive.
Choosing Record Mode
Before you begin recording a test, you should select the appropriate record mode. There
are two record modes available
Context Sensitive mode - records the operations you perform in terms of GUI objects,
e.g. button_press("OK");
Analog mode - records the exact coordinates travelled by the mouse and the keyboard input,
e.g. mtype("<kLeft>+");
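As an illustration, a context-sensitive recording of a simple login sequence might produce TSL statements along the following lines. The window and object names ("Login", "Agent Name:", etc.) are hypothetical; the actual logical names come from your GUI map.

# activate the login window, waiting up to 10 seconds for it to appear
set_window("Login", 10);
# fill in the fields and submit (each statement names a GUI object, not screen coordinates)
edit_set("Agent Name:", "test_user");
edit_set("Password:", "mercury");
button_press("OK");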
Running the Test
WinRunner provides three modes for running test
Use Verify mode when running a test to check the behavior of your application and when
you want to save the test results.
Use Debug mode when you want to check that the test script runs smoothly, without
syntax errors. Debug mode does not produce test results.
Use Update mode when you want to create new expected results for a GUI checkpoint or
bitmap checkpoint.
Win Runner Testing Process
Create GUI Map File: By creating the GUI map file, WinRunner can identify the GUI
objects in the application that is going to be tested.
Create Test Scripts: This process involves recording, programming or both. During the
process of recording tests, insert checkpoints where the response of the application
needs to be tested.
Debug Test: Run the tests in Debug mode to make sure that they run smoothly.
Run Tests: Run tests in Verify mode to test the application.
View Results: This determines the success or failure of the tests.
Report Defects: If a particular test run fails due to a defect in the application under
test, defects can be reported directly through the Test Results window.
Sample Test Result
GUI Checkpoint
Checkpoints allow you to compare the current behavior of the application being tested to its
behavior in an earlier version. You can add four types of checkpoints to your test scripts:
Object Checkpoint - verifies information about GUI objects. For example, you can check
o whether a radio button is on or off
o whether a push button is enabled or disabled
Text Checkpoint - reads text in GUI objects and in bitmaps and enables you to verify their
contents. For example, you can read the text content of any button.
Bitmap Checkpoint - takes a snapshot of a window or an area of your application and
compares it to an image captured in an earlier version, e.g. to capture drawings and graphs.
Database Checkpoint - checks the contents and the number of rows and columns of
a result set, which is based on a query you create on your database. So you create a query
and examine the result set.
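In a test script, GUI and bitmap checkpoints appear as TSL statements similar to the sketch below. The object names, the checklist file "list1.ckl", the expected-results identifier "gui1", and the bitmap name "Img1" are placeholders of the kind WinRunner generates when you insert a checkpoint interactively.

set_window("Flight Reservation", 10);
# GUI checkpoint: compare the object's current properties with the expected results captured earlier
obj_check_gui("Order No:", "list1.ckl", "gui1", 5);
# bitmap checkpoint: compare the object's current image with the stored image
obj_check_bitmap("Graph Area", "Img1", 5);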
Database Checkpoint- Compare Expected and Actual Outcomes
Data Driven Test
When you want to test your application, you may want to check how it performs the same
operations with multiple sets of data, so we use data-driven tests. By replacing the
fixed values in your test with values stored in a data table (an external file), you can
generate multiple test scenarios using the same test (a TSL sketch is given after the
process list below).
There are two ways of creating a data-driven test:
using the Data Driver Wizard
modifying the test script manually
Data Driven Testing Process
Creating a test
Converting it into a data-driven test
Preparing a data table
Running the test
Analyzing the results
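A hedged sketch of a data-driven loop in TSL is given below, assuming a data table named default.xls with a column (parameter) called "OrderNo"; the table name, column name, and object names are placeholders.

table = "default.xls";    # data table used to drive the test
# open the table for reading and find out how many data rows it contains
if (ddt_open(table, DDT_MODE_READ) != E_OK)
    report_msg("Cannot open the data table");
ddt_get_row_count(table, row_count);
# repeat the same steps once for every row of data
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);
    set_window("Flight Reservation", 10);
    edit_set("Order No:", ddt_val(table, "OrderNo"));
    button_press("OK");
}
ddt_close(table);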
Application Performance with Data Driver Wizard
Synchronization
Synchronization is used to maintain consistency between the application and the test script. It
enables you to solve anticipated timing problems between the test and your application.
For example, if you create a test that opens a database application, you can add a
synchronization point that causes the test to wait until the database records are loaded on
the screen.
So you could synchronize:
while retrieving information from a database
for a window to pop up
for a progress bar to reach 100%
for a status bar message to appear
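A minimal sketch of a synchronization point in TSL, assuming the application enables an "Insert Done..." status object once the records have finished loading (the object name, the checked property, and the timeout are assumptions):

set_window("Flight Reservation", 10);
button_press("Insert Order");
# wait up to 20 seconds for the status object to become enabled before continuing
obj_wait_info("Insert Done...", "enabled", 1, 20);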
Batch Test
A batch test is a test script that contains call statements to other tests. It opens and executes
each test and saves the test results.
It suppresses the error messages that occur while running the test scripts.
A test becomes a batch test when you select the run in batch mode option,
e.g. GUI_load("a1.gui");
call a1();
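A slightly fuller, hedged sketch of a batch test follows; it loads a shared GUI map and then calls two other tests in sequence. The map file and test names are hypothetical, and tests called by name must be on WinRunner's search path.

# shared GUI map used by all the called tests
GUI_load("c:\\qa\\maps\\sample_app.gui");
# run the individual tests one after another; results are saved per test
call login_test();
call create_order_test();
GUI_unload_all();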
Dialog Boxes
You can create dialog boxes that pop up during interactive test execution.
They prompt the user to perform an action, such as typing in text or selecting an item from
a list.
Types of dialog boxes:
Input dialog boxes
List dialog boxes
Password dialog boxes
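For illustration, the sketch below uses the TSL function create_input_dialog to prompt the tester for a value at run time and then feeds that value into the application; the prompt text and object names are assumptions (list and password dialog boxes are created with the analogous create_list_dialog and create_password_dialog functions).

# pop up an input dialog box; the function returns the string typed by the tester
agent = create_input_dialog("Enter the agent name:");
set_window("Login", 10);
edit_set("Agent Name:", agent);
button_press("OK");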
Functions
Inbuilt functions:
Insert Function - for an object or a window
Function Generator - a visual tool that presents a quick and error-free way to program your
tests.
You can add TSL statements to your tests using the Function Generator in two ways:
o By pointing to a GUI object, or
o By choosing a function from a list.
User-defined functions and compiled modules
A compiled module is a script containing a library of user-defined functions.
When you load a compiled module in a script, its functions are automatically compiled and
remain in memory.
Compiled modules can improve the performance of your tests.
Since you debug the compiled module before using it, your tests will require less error
checking.
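A hedged sketch of a user-defined function as it might appear in a compiled module, together with a test that loads the module and calls the function; the module path, function name, and parameters are hypothetical.

# --- inside the compiled module (e.g. c:\qa\lib\login_utils) ---
public function do_login(in agent, in pwd)
{
    set_window("Login", 10);
    edit_set("Agent Name:", agent);
    edit_set("Password:", pwd);
    button_press("OK");
}

# --- in the calling test script ---
load("c:\\qa\\lib\\login_utils", 1, 1);   # load as a system module and close its window
do_login("test_user", "mercury");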
Regular expressions enable WinRunner to identify objects with varying names and titles.
You can use regular expressions in TSL statements or in object descriptions in the GUI
map,
e.g. 3[0-9]*, *.*
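As an illustration only (this assumes WinRunner's convention that a regular expression inside an object's label begins with "!", which should be verified against your version's documentation), a button whose caption contains a varying number could be pressed like this:

set_window("Flight Reservation", 10);
# the leading "!" marks the label as a regular expression: matches "Order 1", "Order 23", ...
button_press("!Order [0-9]+");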
Exception Handling
Using exception handling, you can instruct WinRunner to detect an unexpected event
when it occurs, and to act to recover the test run.
Types of Exceptions
Popup exceptions
TSL exceptions
Object exceptions
Break points
By setting a breakpoint you can stop a test run at a specific place in the test script. You can
set breakpoints of two kinds:
Break at location
Break in function
Appendix
GUI - Graphical User Interface
TSL - Test Script Language
_________________________________________________________________________
Results: (Program printout with output / Document printout as per the format)
_________________________________________________________________________
Questions:
1. Compare and contrast QTP and WinRunner.
2. Which recording modes are available in WinRunner?
Outcomes:
1.An ability to use the techniques, skills, and modern engineering tools necessary for
engineering practice.(k)
2. An ability to adopt open source standards(l)
3. An understanding of best practices, standards and their applications. (m)
________________________________________________________________________
Conclusion: (Conclusion to be based on the objectives and outcomes achieved)
Grade: AA / AB / BB / BC / CC / CD /DD
Signature of faculty in-charge with date
_________________________________________________________________________
References:
Books/ Journals/ Websites:
1.http://www.softwaretestinghelp.com/winrunner-automation-tool-preparation
2.http://www.kthmcollege.com/pdffiles/File-37.pdf