

SQT & Quality V4.0




Author: Croma Campus Testing CoE Team





Prerequisites of Software Testing:
Testing Process
Manual Testing
Automation Testing
Requirement Document
Functionality
Non-Functional Requirement
Test Environment
Module
Defect/Bug/Error
Version
Centralized Location

Software Requirement Specification (SRS):
Please refer to the Sample Requirement Document.pdf.

Software Testing: An activity to ensure the correctness, completeness
& quality of the software system with respect to requirements.
OR
Software Testing is a destructive process. The primary goal is to break the
software.
OR
Software Testing is the process of executing a program or application with
the intent of finding errors.

Why Testing: Testing is not limited to the detection of bugs in the
software; it also increases confidence in its proper functioning and assists
with the evaluation of functional and non-functional properties.

Testing is an important and critical part of the software development
process, on which the quality and reliability of the delivered product strictly
depend.






Why does Software have Bugs?
Miscommunication or no communication
Software complexity
Programming errors
Changing requirements
Time pressures
Egos
Poorly documented code

Defect - Priority and Severity:
Priority: Priority defines how important the bug is.
This field describes the importance and order in which a bug should be fixed. The
available priorities range from P1 (Most Important) to P5 (Least Important).
Priority is the order in which the defect should be fixed.

Severity: Severity defines how severe the bug's impact on the system is, i.e.,
how critical the defect is.
Following is the list of Severity Levels and the meaning of each level:
1. Blocker - Blocks development and/or testing work
2. Critical - Crashes, loss of data; directly impacts the required functionality
3. Major - Major loss of function; directly impacts the required functionality
4. Normal - Slight deviation from the required functionality




5. Minor - Minor loss of function, or a problem where an easy workaround is present
6. Suggestion - Proposal from the Testing Team



Testing Methodology:

Black Box Testing: An approach to testing where the application/software is
considered as a black box. Black Box Testing, also known as Behavioral
Testing, is testing where the tester only knows the inputs and what the expected
outcomes should be, not how the program arrives at those outputs. The
tester never examines the programming code and does not need any
further knowledge of the program other than its specifications.
Specific knowledge of the application's code/internal structure and
programming knowledge in general is not required.
Test cases are built around specifications and requirements, i.e., what the
application is supposed to do.
Testing, either functional or non-functional, without reference to the
internal structure of the component or system.






[Figure: Black-box testing model. Input test data (Ie: inputs causing
anomalous behaviour) is fed into the system, and the output test results
(Oe: outputs which reveal the presence of defects) are examined.]





White Box Testing:
White Box Testing (also called Clear Box Testing, Glass Box Testing,
Transparent Box Testing or Structural Testing) is a method of
testing software that tests the internal structures or workings of an
application.
An internal perspective of the system, as well as programming skills, are
required and used to design test cases.
It is usually done at the unit level.
White Box Testing is a verification technique software engineers can use
to examine whether their code works as expected.











Types of White Box Testing:
Unit Testing:
The most micro-scale of testing; used to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code.
OR
Unit testing is the process of testing each unit of code in a single component.
This form of testing is carried out by the developer as the component is being
developed. The developer is responsible for ensuring that each detail of the
implementation is logically correct.
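
To make this concrete, a minimal unit test might look like the sketch below.
The Calculator class and its add method are hypothetical stand-ins for a unit
under test, and JUnit 5 is assumed to be on the classpath.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical unit under test: a single, small piece of code.
class Calculator {
    static int add(int a, int b) {
        return a + b;
    }
}

// The developer verifies each detail of the implementation in isolation.
class CalculatorTest {
    @Test
    void addReturnsSumOfTwoNumbers() {
        assertEquals(5, Calculator.add(2, 3));
    }

    @Test
    void addHandlesNegativeNumbers() {
        assertEquals(-1, Calculator.add(2, -3));
    }
}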






Types of Black Box Testing:
1. Functional Testing
2. Performance Testing
3. Compatibility Testing
4. Usability Testing
5. Negative Testing
6. Ad-Hoc Testing
7. Exhaustive Testing
Functional Testing:
Checks whether the application/module is functioning according to the stated
requirement.


Performance Testing:
Performance testing is the process of determining the speed or effectiveness
of a computer, network, software program or device. This process can
involve quantitative tests done in a lab, such as measuring the response time or
the number of MIPS (millions of instructions per second) at which a system
functions, as well as qualitative attributes such as reliability, scalability and
interoperability.

Compatibility Testing:
Compatibility testing is a type of testing used to ensure compatibility of the
system/application/website with various other objects, such as other web
browsers, hardware platforms, users (in case of a very specific type of
requirement, such as a user who speaks and can read only a particular
language), operating systems, etc. This type of testing helps find out how well a
system performs in a particular environment that includes hardware, network,
operating system and other software.
Compatibility testing can be automated using automation tools or can be
performed manually, and is a part of non-functional software testing.

Usability Testing:
Checks whether the layout, text and messages displayed are user
friendly and meet the stated requirement. For example:
The cursor is properly positioned; cursor navigation works.
On-line lists are displayed in the proper sort sequence.
Project screen standards are adhered to (i.e., colors, common field
lengths, protected fields, error highlighting, cursor position, etc.)
Usability Testing is needed to check if the user interface is easy to use
and understand.


Negative Testing:
Any testing carried out by passing non-recommended values, with the aim of
breaking down the application, is called negative testing.
For example, if a developer designed an edit box to accept only numeric values
up to a length of 10 digits, and we enter alphabets and the alphabets are accepted
by the text box, then this is negative testing.
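
To illustrate the edit-box example, here is a small sketch in Java. The
NumericFieldValidator class is a hypothetical stand-in for the edit box's
validation logic, and JUnit 5 is assumed.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical validator standing in for the edit box described above:
// it should accept only numeric values up to 10 digits long.
class NumericFieldValidator {
    static boolean isValid(String input) {
        return input != null && input.matches("[0-9]{1,10}");
    }
}

class NumericFieldNegativeTest {
    @Test
    void rejectsAlphabets() {
        // Negative test: non-recommended input must NOT be accepted.
        assertFalse(NumericFieldValidator.isValid("abcdefghij"));
    }

    @Test
    void rejectsInputLongerThanTenDigits() {
        assertFalse(NumericFieldValidator.isValid("12345678901"));
    }

    @Test
    void acceptsTenDigitNumber() {
        // Companion positive check, shown for contrast.
        assertTrue(NumericFieldValidator.isValid("1234567890"));
    }
}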


Ad Hoc Testing:
Testing without a formal test plan or outside of a test plan. With some projects
this type of testing is carried out as an adjunct to formal testing. If carried out
by a skilled tester, it can often find problems that are not caught in regular
testing. Sometimes, if testing occurs very late in the development cycle, this will
be the only kind of testing that can be performed. Sometimes ad hoc testing is
referred to as exploratory testing.

Exhaustive Testing:




Exhaustive testing is testing where we execute a single test case for multiple
test data. Exhaustive testing means testing the functionality with all possible
valid and invalid data.
It is a test to verify the behavior of every aspect of an application, including all
permutations. We execute a program with all possible combinations of inputs
or values for program variables. Generally, we use automation testing when a
single test case is executed for multiple test data, as in the sketch below.
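
The "single test case, multiple test data" pattern is what data-driven or
parameterized testing tools automate. Below is a minimal sketch assuming
JUnit 5 with the junit-jupiter-params module; the add method is a
hypothetical unit under test.

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

// One test case executed against many rows of test data -- the pattern
// automation tools use to approach exhaustive testing.
class AdditionDataDrivenTest {

    // Hypothetical unit under test.
    static int add(int a, int b) { return a + b; }

    @ParameterizedTest
    @CsvSource({
        "0, 0, 0",
        "1, 2, 3",
        "-1, 1, 0",
        "2147483646, 1, 2147483647"  // upper boundary of the int range
    })
    void singleTestCaseManyDataSets(int a, int b, int expected) {
        assertEquals(expected, add(a, b));
    }
}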


Low Level Categorization of Functional Testing:
1. Build Verification Testing (BVT)
2. Smoke Testing
3. Sanity Testing
4. Component Testing
5. Integration Testing
6. System Testing
7. System Integration Testing
8. User Acceptance Testing (UAT)
9. Alpha Testing
10. Beta Testing
11. Re-Testing
12. Regression Testing






Build Verification Testing (BVT) / Build
Acceptance Test: BVT is a set of tests run on each new build of a
product to verify that the build is testable before it is released into the
hands of the Testing Team. The build acceptance test is generally a short set of
tests which exercise the main functionality of the application software. Any build
that fails the build verification test is rejected, and testing continues on the
previous build.

Smoke Testing:
Smoke testing is non-exhaustive software testing, ascertaining that the most
crucial functions of a program work, but not bothering with finer details. The
term comes to software testing from a similarly basic type of hardware testing,
in which the device passed the test if it didn't catch fire the first time it was
turned on.
Smoke Testing is done to check whether the application, or part of the application,
is ready for testing.

Sanity Testing:
A sanity test is a narrow regression test that focuses on one or a few areas of
functionality. Sanity testing is usually narrow and deep. A sanity test is used to
determine whether a small section of the application is still working after a minor
change.
Once a new build is obtained with minor revisions, instead of doing a thorough
regression, a sanity test is performed to ascertain that the build has indeed rectified
the issues and that no further issues have been introduced by the fixes.

Difference between Smoke Testing and Sanity Testing:
Smoke testing originated in the hardware testing practice of turning on a new
piece of hardware for the first time and considering it a success if it does not
catch fire and smoke. In the software industry, smoke testing is a shallow and wide
approach whereby all areas of the application are tested without going into too
much depth. A sanity test is a narrow regression test that focuses on one or a few
areas of functionality. Sanity testing is usually narrow and deep.
A smoke test is scripted - either using a written set of tests or an automated
test. A sanity test is usually unscripted.
Smoke testing is conducted to ensure that the most crucial functions
of a program work, without bothering with finer details. A sanity test is used
to determine whether a small section of the application is still working after a minor
change.


Component / Module Testing:
The testing of individual Software Components/Modules.
E.g.: If an application has 3 modules (ADD/EDIT/DELETE), then testing each
module individually is called Component Testing.

Integration Testing:
Testing of combined modules of an application to determine whether they are
functionally working correctly. The parts can be code modules, individual
applications, client and server applications on a network, etc. This type of
testing is especially relevant to client/server and distributed systems.

Types of Integration Testing:

1. Big Bang: In this approach, all or most of the developed modules are coupled
together to form a complete software system, or a major part of the system, which is then
used for integration testing. The Big Bang method is very effective for saving time in the
integration testing process. However, if the test cases and their results are not recorded
properly, the entire integration process will be more complicated and may prevent the
testing team from achieving the goal of integration testing.

2. Bottom-up Testing: This is an approach to integration testing where the lowest-level
components are tested first, then used to facilitate the testing of higher-level components. The
process is repeated until the component at the top of the hierarchy is tested.

All the bottom or low-level modules, procedures or functions are integrated and then tested.
After the integration testing of lower-level integrated modules, the next level of modules is
formed and can be used for integration testing. This approach is helpful only when all or most of
the modules of the same development level are ready. This method also helps to determine the
levels of software developed and makes it easier to report testing progress in the form of a
percentage.

3. Top-down Testing: This is an approach to integration testing where the top integrated
modules are tested first and the branches of the module are tested step by step until the end of
the related module.

4. Sandwich Testing: This is an approach that combines top-down testing with bottom-up
testing.

The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-
Down, it is easier to find a missing branch link.
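
To make the idea concrete, the sketch below integrates two hypothetical
modules (the ADD and DELETE modules mentioned under Component Testing) and
checks their combined behaviour; JUnit 5 is assumed.

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Two hypothetical modules that were unit tested separately.
class RecordStore {
    private final List<String> records = new ArrayList<>();
    void add(String record) { records.add(record); }        // ADD module
    void delete(String record) { records.remove(record); }  // DELETE module
    boolean contains(String record) { return records.contains(record); }
}

// Integration test: exercise the modules together and verify the
// combined behaviour, not each module in isolation.
class AddDeleteIntegrationTest {
    @Test
    void recordAddedThenDeletedIsGone() {
        RecordStore store = new RecordStore();
        store.add("customer-42");
        assertTrue(store.contains("customer-42"));
        store.delete("customer-42");
        assertFalse(store.contains("customer-42"));
    }
}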



System Testing:
System Testing tends to affirm the end-to-end quality of the entire system. It is
a process of performing a variety of tests on a system to explore functionality
or to identify problems. System Testing is a level of the software testing
process where a complete, integrated system/software is tested. The purpose of
this test is to evaluate the system's compliance with the specified requirements.
Non-functional quality attributes, such as reliability, security and compatibility,
are also checked in system testing.


Example - During the process of manufacturing a ballpoint pen, the cap, the
body, the tail, the ink cartridge and the ballpoint are produced separately and
unit tested separately. When two or more units are ready, they are assembled
and Integration Testing is performed. When the complete pen is integrated,
System Testing is performed.






System Integration Testing:
System Integration Testing verifies that a system integrates with any external or
third-party systems defined in the system requirements.


User Acceptance Testing (UAT):
Final testing based on specifications of the end-user or customer, or based on
use by end-users/customers over some limited period of time. UAT is a
process to obtain confirmation that a system meets mutually agreed-upon
requirements.

Alpha Testing:
Alpha Testing is done by users/customers or an independent Test Team at the
developers' site.
OR
Alpha Testing: Testing a new product in pre-release internally before testing it
with outside users.

Beta Testing
Testing conducted by the end user at the client's place.
OR
In this type of testing, the software is distributed as a beta version to the users,
and the users test the application at their sites. As the users explore the software,
any exception/defect that occurs is reported to the developers.








Re-Testing:
In re-testing we test only the particular functionality (which failed during
testing) to confirm that it is working properly after a change is made. We do
not re-test all functionality.

Regression Testing:
The intent of regression testing is to provide a general assurance that no
additional errors were introduced in the process of fixing other defects.
OR
Regression Testing means testing the entire application to ensure that the fixing
of a bug has not affected anything else in the application.
OR
Regression Testing ensures code modifications have not inadvertently
introduced bugs into the system or changed existing functionality. Goals for
regression testing should include plans from the original unit, and functional
and system tests phases to demonstrate that existing functionality behaves as
intended.




Determining when regression testing is sufficient can be difficult. Although it
is not desirable to test the entire system again, critical functionality
should be tested regardless of where the modification occurred. Regression
testing should be done frequently to ensure a baseline software quality is
maintained.

Note: Re-Testing and Regression Testing are associated with all types of
testing (Unit, BVT, Component, Integration, System, etc.), because whenever
any defect is found, both kinds of testing are performed.


Low Level Categorization of Performance Testing:
1. Load Testing
2. Stress Testing
3. Volume Testing
Load Testing:
Load testing is a part of a more general process known as performance testing.
OR
It tests how the system works under load. This type of testing is very important for
client-server systems, including Web applications (e-Communities, e-Auctions,
etc.), ERP, CRM and other business systems with numerous concurrent users.
Examples of load testing include:
Downloading a series of large files from the Internet.
Running multiple applications on a computer or server
simultaneously.
Assigning many jobs to a printer in a queue.
Subjecting a server to a large amount of e-mail traffic.
Writing and reading data to and from a hard disk continuously.
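
A very simple load test can be sketched with plain Java threads, as below.
The URL is a placeholder for a test-environment endpoint; real load testing
would normally use a dedicated tool.

import java.net.HttpURLConnection;
import java.net.URL;

// Fire N concurrent requests at a server and print each response code.
public class SimpleLoadTest {
    public static void main(String[] args) throws InterruptedException {
        final int concurrentUsers = 50;
        Thread[] users = new Thread[concurrentUsers];
        for (int i = 0; i < concurrentUsers; i++) {
            users[i] = new Thread(() -> {
                try {
                    // Placeholder endpoint -- point this at a test server only.
                    URL url = new URL("http://localhost:8080/health");
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setConnectTimeout(5000);
                    int code = conn.getResponseCode();  // blocks until the server responds
                    System.out.println(Thread.currentThread().getName() + " -> " + code);
                    conn.disconnect();
                } catch (Exception e) {
                    System.out.println("request failed: " + e.getMessage());
                }
            });
            users[i].start();
        }
        for (Thread user : users) {
            user.join();  // wait for all simulated users to finish
        }
    }
}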






Stress Testing:
Stress testing examines system behavior in unusual ("stress", i.e., beyond the
bounds of normal circumstances) situations. For example, system behavior under
heavy load, a system crash, or a lack of memory or hard disk space can be
considered stress situations. Fool-proof testing is another case, which is
useful for GUI systems, especially if they are aimed at a wide circle of users.

Volume Testing:
A volume test checks whether there are any problems when running the system
under test with realistic amounts of data, or even the maximum or more.
Volume testing helps to find problems with maximum amounts of data. System
performance or usability often degrades when large amounts of data must be
searched, ordered, etc.

Test Procedure:
The system is run with maximum amounts of data.
Tables, databases, files, disks, etc. are loaded with a maximum of data.
Important functions where data volume may lead to trouble are exercised.






Test Case Design Check List

1. Reviewed & Approved SRS.
2. Mockup Screen or Functional Specification
3. 90% Closure of Query Log.








Test Cases


Test Case: A Test Case is a description of what is to be tested, what data is to be
used and what actions are to be done to check the actual result against the expected result.

A Test Case is simply a test with formal steps and instructions.

Types of Test Case:
1. Functional Test Case
   Smoke Test Case
   Component Test Case
   Integration Test Case
   System Test Case
2. Usability Test Case
3. Negative Test Case (Non-recommended Test Data)
4. Performance Test Case







Mock Up Screens:

User Id:
Password:

Submit    Cancel

Forgot Password?
New User? Click here to Register.




Test Case for Login Window:
1. Check whether the First Name field accepts only alphabets.
2. Check whether the Last Name field accepts only alphabets.
3. Check that the First Name and Last Name fields cannot accept anything other
than alphabets (Negative Test Case).
4. Check that the login name field accepts only alphabets, numerics and special
characters, except dot.
5. Check whether the desired login name is created only when it is displayed that
the specific name is available.
6. Check whether the desired login name cannot be created with an existing login
name (Negative Test Case).
7. Check that the password accepts more than 8 characters.
8. Check that the password cannot accept less than 8 characters (Negative Test Case).
9. Check that the password and re-enter password fields match.
10. Check that a message is displayed when the password and re-enter password
mismatch (Negative Test Case).
11. Check whether you can view the security question.
12. Check that the answer field cannot be null.
13. Verify whether the secondary mail field is entered with a valid id.
14. Verify whether your location is present in the location field.
15. Verify whether the captcha entered matches the one displayed.
16. Check that every time you refresh, or select "I accept", a new captcha is displayed.
17. Check that the captcha is unique.
18. Check that the captcha is case sensitive.
19. Check that when "I accept" is selected, it moves to the next page.
20. Check that when any required field is not entered, or is entered with wrong values,
a message is displayed in red color font.
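
As an example of turning one of these test cases into code, the sketch below
automates test cases 7 and 8, reading "more than 8 characters" as "at least 8
characters". The PasswordRules class is a hypothetical stand-in for the
application's validation logic; JUnit 5 is assumed.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical password rule from test cases 7 and 8 above.
class PasswordRules {
    static boolean isAcceptable(String password) {
        return password != null && password.length() >= 8;
    }
}

class PasswordLengthTest {
    @Test
    void acceptsPasswordOfEightOrMoreCharacters() {  // test case 7
        assertTrue(PasswordRules.isAcceptable("abcd1234"));
        assertTrue(PasswordRules.isAcceptable("abcd12345"));
    }

    @Test
    void rejectsPasswordShorterThanEightCharacters() {  // test case 8 (negative)
        assertFalse(PasswordRules.isAcceptable("abc1234"));
    }
}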

Test Case for PEN:
1. The pen's ink should be dark enough that normal human eyes can read the
writing clearly.
2. The pen should write continuously.
3. The pen's ink color and the pen's body/cap color should be the same, so the
user can easily tell which color the pen writes in.
4. The pen should have a soft grip in the middle for usability.
5. The pen should keep writing continuously under the hottest and coolest
temperatures.
6. If the pen drops from several feet, it should still write continuously,
provided ink is available in the pen.
7. The pen should not be of the use-and-throw type.
8. The pen's refill should be replaceable.
9. Verify whether it looks like a pen or not.
10. Check which color the pen writes in.
11. Check whether the pen is attractive.
12. Check whether the pen is compact for writing.
13. Check the transparency of the pen's liquid (ink).
14. Verify whether the pen cap functions well.
15. Likewise, test whether all of its parts are working properly or not.
16. Verify its working condition on a piece of paper.
Negative cases:
1. Test whether it breaks if you apply external force to it.
2. Check whether the pen is ballpoint or ink.
3. Check whether the pen writes in different conditions.
4. Check that writing with the pen for too many hours does not strain the hand.
5. Check that the pen is not affected when it falls down.
6. Verify its working condition on a piece of wood, the floor, etc.














Test Case Design Guidelines:

Please refer to the Test Case Design.ppt document.


Test Case Design Techniques/Methods (Test
Data Design Techniques/Methods):
Commonly used test case design methods:
Equivalence Partitioning (EP)
Boundary Value Analysis (BVA)
Negative Testing

1. Equivalence Partitioning: Equivalence partitioning is a
software testing technique with the following goals:
To reduce the number of test cases to a necessary minimum.
To select the right test cases to cover all possible scenarios.

The equivalence partitions are usually derived from the specification of the
component. An input has certain ranges which are valid and other ranges
which are invalid.
This may be best explained by the example of a function which takes a
parameter "month". The valid range for the month is 1 to 12, representing
January to December. This valid range is called a partition. In this example
there are two further partitions of invalid ranges. The first invalid partition
would be <= 0 and the second invalid partition would be >= 13.
... -2 -1 0 | 1 .................................... 12 | 13 14 15 ...
 invalid partition 1 |          valid partition          | invalid partition 2





The testing theory related to equivalence partitioning says that only one test
case from each partition is needed to evaluate the behavior of the program for
the related partition. In other words, it is sufficient to select one test case out
of each partition to check the behavior of the program. Using more or
even all test cases of a partition will not find new faults in the program. The
values within one partition are considered to be "equivalent". Thus the
number of test cases can be reduced considerably.

An additional effect of applying this technique is that you also find the so-
called "dirty" test cases. An inexperienced tester may be tempted to use as
test cases the input data 1 to 12 for the month and forget to select some out
of the invalid partitions. This would lead, on the one hand, to a huge number
of unnecessary test cases and, on the other, to a lack of test cases for the
invalid ranges.

Equivalence Partitioning is no stand-alone method to determine test
cases. It has to be supplemented by Boundary Value Analysis (BVA).
Having determined the partitions of possible inputs, the method of
boundary value analysis has to be applied to select the most effective test
cases out of these partitions.

Equivalence Partitioning: A Black Box test design technique in which test cases are
designed to execute representatives from equivalence partitions. In principle, test cases are
designed to cover each partition at least once.

Equivalence partitioning is a software testing technique to minimize the number of
permutations and combinations of input data. In equivalence partitioning, data is
selected in such a way that it gives as many different outputs as possible with
a minimal set of data.

For an example of EP, consider a very simple function for awarding grades to
students. The program follows these guidelines to award grades:

Marks 00 - 39 ------------ Grade D

Marks 40 - 59 ------------ Grade C

Marks 60 - 70 ------------ Grade B

Marks 71 - 100 ------------ Grade A





Based on the equivalence partitioning techniques, partitions for this
program could be as follows

Marks between 0 to 39 - Valid Input

Marks between 40 to 59 - Valid Input

Marks between 60 to 70 - Valid Input

Marks between 71 to 100 - Valid Input

Marks less than 0 - Invalid Input

Marks more than 100 - Invalid Input

Non numeric input - Invalid Input

From the example above, it is clear that the infinite set of possible test cases
(any value between 0 and 100, and infinitely many values > 100, < 0 and
non-numeric) can be divided into seven distinct classes. Now even if you take
only one data value from each of these partitions, your coverage will be good.

The most important part of equivalence partitioning is identifying the equivalence
classes, which needs close examination of the possible input values. Also, you
cannot rely on any one technique to ensure your testing is complete; you still
need to apply other techniques to find defects. A sketch of this example in code
follows.
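
The sketch below restates the grading example in Java, with one representative
value chosen from each partition. The grade function is a plausible
implementation of the guideline above, not the actual program under test.

// A sketch of the grading function, with one representative test value
// chosen from each equivalence partition.
public class GradeEquivalencePartitioning {

    static char grade(int marks) {
        if (marks < 0 || marks > 100) {
            throw new IllegalArgumentException("marks out of range: " + marks);
        }
        if (marks <= 39) return 'D';
        if (marks <= 59) return 'C';
        if (marks <= 70) return 'B';
        return 'A';
    }

    public static void main(String[] args) {
        // One value per valid partition is enough to cover it.
        System.out.println(grade(20));   // partition 0-39   -> D
        System.out.println(grade(50));   // partition 40-59  -> C
        System.out.println(grade(65));   // partition 60-70  -> B
        System.out.println(grade(85));   // partition 71-100 -> A

        // One value per invalid partition: both should be rejected.
        for (int invalid : new int[] { -5, 150 }) {
            try {
                grade(invalid);
            } catch (IllegalArgumentException e) {
                System.out.println("rejected: " + invalid);
            }
        }
        // The non-numeric partition is handled by the type system here:
        // grade("abc") would not even compile in Java.
    }
}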


2. Boundary Value Analysis (BVA): Boundary Value Analysis is a Software
Test Case Design Technique used to determine test cases covering off-by-
one errors.

Testing experience has shown that the boundaries of input ranges to a
software component are likely to contain defects.

Example: A function that takes an integer between 1 and 12, representing a
month between January to December, might contain a check for this range:





void exampleFunction(int month) {
    if (month > 0 && month < 13) {
        // ... process the valid month ...
    }
}

A common programming error is to check an incorrect range, e.g. starting
the range at 0 by writing:

void exampleFunction(int month) {
    if (month >= 0 && month < 13) {
        // ... now 0 is wrongly accepted as a valid month ...
    }
}

For more complex range checks in a program, such a problem may not be as
easily spotted as in the simple example above.

Applying Boundary Value Analysis (BVA):
To set up boundary value analysis test cases, the tester first determines
which boundaries are at the interface of a software component. This is done
by applying the equivalence partitioning technique. For the above example,
the month parameter would have the following partitions:

... -2 -1 0 | 1 .................................... 12 | 13 14 15 ...
 invalid partition 1 |          valid partition          | invalid partition 2

To apply boundary value analysis, a test case at each side of the boundary
between two partitions is selected. In the above example this would be 0
and 1 for the lower boundary as well as 12 and 13 for the upper boundary.
Each of these pairs consists of a "clean" and a "negative" test case. A
"clean" test case should lead to a valid result. A "negative" test case should
lead to specified error handling such as the limiting of values, the usage of a
substitute value, or a warning.
Boundary value analysis can result in three test cases for each boundary; for example if n
is a boundary, test cases could include n-1, n, and n+1.
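
Applied to the month example, this gives the test values below. The
isValidMonth method restates the range check from the example; this is an
illustrative sketch, not a prescribed implementation.

// Boundary value analysis for the month check: around each boundary n,
// test n-1, n and n+1. The valid boundaries here are 1 and 12.
public class MonthBoundaryValueAnalysis {

    static boolean isValidMonth(int month) {
        return month > 0 && month < 13;  // the range check under test
    }

    public static void main(String[] args) {
        int[] testValues = { 0, 1, 2, 11, 12, 13 };  // n-1, n, n+1 at both boundaries
        for (int value : testValues) {
            System.out.println("month " + value + " valid? " + isValidMonth(value));
        }
        // Expected: 0 -> false, 1 -> true, 2 -> true,
        //           11 -> true, 12 -> true, 13 -> false.
        // The buggy variant (month >= 0) would wrongly report 0 as valid.
    }
}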

3. Negative Testing:

Non-recommended test data generates the negative test cases.

Examples:
Entering a future date in an employee birth date field.
Entering alphabets in a numeric field like salary.
Entering numbers in a name field.

Please refer to the Test Case Design Techniques.ppt document as well.



Test Scenario

A set of test cases that ensure that the business process flows are tested from end to
end. They may be independent tests or a series of tests that follow each other, each
dependent on the output of the previous one.
OR
A scenario is constructing the test cases in a certain flow, in order to perform a
functional / regression / acceptance / other test over your AUT.
OR
What are the situations we want to test?
Example 1: Search criteria of a customer in a bank.
    Search by Name.
    Search by Account Number.
    Search by Customer Id.
    Search by Mobile Number.
Example 2: Initiate intercom calls.
    Initiate call from Directory.
    Initiate call from Missed Call List.
    Initiate call from Dialed Call List.
    Initiate call from Received Call List.








Reviews (Peer Review / Walkthrough / Inspection)


Reviews: Review is a process by which a software item or product under
development, in part or in whole, and the associated documents (SRS, Test Case,
Test Plan, Defect Report, Test Closure Report, etc.) are examined by an individual
or a group of people.
We prefer to have a peer, rather than a customer, find a defect.
If application delivery is made without exhaustive code review, the result is
more testing effort and UAT effort.
Purpose of Review: To ensure that selected work products meet their specified
requirements.

Types of Review:
Informal Review: Peer Review (review done by a team member working on the
same module), Walkthrough (done by the Manager and Test Lead in the
presence of the Test Engineer; a disciplined approach to doing the review),
SME Review.
Formal Review: Inspections (done by external people who are not part
of the project or company, e.g. internal/external audits by the Quality
Management System (QMS)), Group Reviews.

Review Checklist: Croma Campus - Review Log V1.0










Process of Review:
PREPARE FOR REVIEW {Select Work Product for Review}

CONDUCT REVIEWS {Individual checking by all reviewers}

VERIFY DEFECT CLOSURE {Verification of Defect Closure}

ANALYZE REVIEW DATA {Analyzing Review Data}

Static Testing: Reviews, walkthroughs and inspections are considered
Static Testing.
Dynamic Testing: Executing programmed code with a given set of test
cases is referred to as Dynamic Testing.

Verification and Validation:
Software Testing is used in association with Verification and Validation.
Verification: Have we built the software right? (i.e., does it match the
specification).
Validation: Have we built the right software? (i.e., is this what the customer
wants).




Verification is the process of evaluating a system or component to
determine whether the products of a given development phase (Requirement, Design,
Coding and Testing) satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at
the end of the development process to determine whether it satisfies specified
requirements.
Example of Verification:
Input: High Level Requirement Document approved by the client.
Process: Analyzing the Requirement Document.
Output: Unapproved SRS.



Bug: A software bug is the common term used to describe an error, flaw, mistake,
failure, or fault in a computer program or system that produces an incorrect or
unexpected result, or causes it to behave in unintended ways.

Debugging: The process of finding and removing the causes of software failure
OR
Debugging is the process of locating and fixing or bypassing bugs (errors) in
computer program code.









Defect Life Cycle









Bug Status:
NEW - A new bug reported by the Testing/QA team.
ASSIGNED - The Test Lead verifies whether it is a valid defect and then assigns it
to the Development Team.
RESOLVED - The Development Team resolves the bug with status Fixed/Later/Invalid.
VERIFIED - The QA Team verifies whether the bug is resolved.
REOPENED - If the bug is not resolved and still exists, the QA team reopens the
same bug.
CLOSED - If the bug is resolved/fixed and verified by QA, the bug is closed.
Please refer: Croma Campus - Defect Logging Report
Please refer: Croma Campus - How to Write a Good Bug Report
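
The statuses above can also be sketched as a small state machine. The
transition table below illustrates the flow described in this section; it is not
the workflow of any particular defect tracking tool.

import java.util.EnumSet;
import java.util.Set;

// Bug statuses with the transitions described in the text.
public enum BugStatus {
    NEW, ASSIGNED, RESOLVED, VERIFIED, REOPENED, CLOSED;

    Set<BugStatus> allowedNext() {
        switch (this) {
            case NEW:      return EnumSet.of(ASSIGNED);
            case ASSIGNED: return EnumSet.of(RESOLVED);
            case RESOLVED: return EnumSet.of(VERIFIED, REOPENED);
            case VERIFIED: return EnumSet.of(CLOSED);
            case REOPENED: return EnumSet.of(ASSIGNED);
            default:       return EnumSet.noneOf(BugStatus.class);  // CLOSED is terminal
        }
    }

    public static void main(String[] args) {
        for (BugStatus status : values()) {
            System.out.println(status + " -> " + status.allowedNext());
        }
    }
}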


Test Plan

A test plan is one of the most significant documents for software testing projects.
The Test Plan contains information related to Scope, Environment, Schedule, Risk,
Resources, Execution, Reporting, Automation, Completion Criteria, etc.
The Test Plan is usually created by the Test Manager, Test Lead or senior testers
in the team.
Before preparing the Test Plan, information should be captured from the various
stakeholders of the project. The information captured from stakeholders is
reflected in the Test Plan.

Please refer to the Test Plan.pdf.









Software Development Life Cycle:
The Software Development Life Cycle (SDLC) is a conceptual model used to describe
the various stages involved in the development of software.

Stages of SDLC:
1. Requirement:
Requirement Gathering
Requirement Development (Develop SRS and Use Cases)
Requirement Analysis (Feasibility Study)
Requirement Validation (Revisit the client requirement)
Requirement Management (Track Change Requests)
2. Analysis and Design
Data Design (Create the Database as per the requirement.)
Architecture Design (HLD & LLD)
3. Coding and Unit Testing:
Code Generation
Code Review and Code Walkthrough
Unit Testing
Closure and Verification of Defects
Code baseline
4. Testing
Integration
System Testing
System Integration Testing
User Acceptance Testing
5. Maintenance
AMC
CR










Software Life Cycle Models

{Please refer to the Software Life Cycle Models.ppt document.}




Test Execution Check List

1. Approved and finalized Functional Requirement docs, SRS, Use cases etc.
2. Unit Test Cases/Results.
3. Final/Freeze code/Build deployed on QA server.
4. Project created on Bug Tracking Tool.
5. Project created on Task Tracking Tool.
6. Credentials for accessing required applications (URLs, User Id, Password &
Test Data etc.)



Test Life Cycle

A Sample Testing Life Cycle:
Although variations exist between organizations, there is a typical cycle for testing:
1. Requirements analysis: Testing should begin in the requirements phase of
the software development life cycle. During the design phase, testers work
with developers in determining what aspects of a design are testable and
with what parameters those tests work.
2. Test planning: Test strategy, test plan, and testbed creation. A lot of activities
will be carried out during testing, so a plan is needed.
3. Test development: Test procedures, test cases and test scripts to use in testing
the software.
4. Test execution: Testers execute the software based on the plans and
tests and report any errors found to the development team.




5. Test reporting: Once testing is completed, testers generate metrics and make
final reports on their test effort and whether or not the software tested is ready
for release.
6. Test result analysis, or Defect Analysis, is done by the development team,
usually along with the client, in order to decide which defects should be
fixed, rejected (i.e. the software is found to be working properly) or deferred
to be dealt with at a later time.
7. Retesting the resolved defects. Once a defect has been dealt with by the
development team, it is retested by the testing team.
8. Regression testing: It is common to have a small test program built of a
subset of tests, for each integration of new, modified or fixed software, in order
to ensure that the latest delivery has not ruined anything, and that the software
product as a whole is still working correctly.
9. Test Closure: Once the testing meets the exit criteria, activities such as
capturing the key outputs, lessons learned, results, logs and documents related
to the project are archived and used as a reference for future projects.




Software Testing Project Life Cycle










Root Cause Analysis: Root cause analysis (RCA) is a class of problem-solving
methods aimed at identifying the root causes of problems or incidents.
Its goal is to prevent the same type of defect from recurring in the future.









Traceability Matrix

Please refer - Croma Campus - Traceability Matrix.pdf
Please refer - Croma Campus - Requirements Traceability.xls


Impact on software development without Testing:
Lots of defects found at UAT.
We have to pay penalties for slippage of delivery.
High cost of defect fixing.
Dissatisfied customers.
Loss of business.
Low morale among developers.

Benefits of Software Testing:
Software Development Perspective
To discover defects.
To protect the end user from defect problems.
The number of defects detected tells us about the reliability of the
software.
To ensure that the product works as the user expects.
Business Perspective
To stay in business.
To avoid being sued by customers.
To detect defects early; this helps in reducing the cost of fixing
those defects later.
To increase customer satisfaction.






Dos and Don'ts of Software Testing

A good Test Engineer should always work towards breaking the product,
right from the first release till the final release of the application (killer
attitude). Following are some Dos and Don'ts for the software test engineer:
The Dos:
1. Ensure if the Testing activities are in sync with the Test Plan.
2. Identify technically not strong areas where you might need assistance
or trainings during testing. Plan and arrange for these technical
trainings to solve this issue.
3. Strictly follow the Test Strategies as identified in the Test Plan.
4. Try getting release notes from the development team, containing
the details of the release that was made to QA for testing.
These should normally contain the following details:
a. The version label of code under configuration management
b. Features part of this release
c. Features not part of this release
d. New functionalities added/Changes in existing functionalities
e. Known Problems
f. Fixed defects etc.
5. Stick to the input criteria (reviewed and approved unit test cases and test
plan) and exit criteria for all testing activities. For example, if the
input criterion for a QA release is sanity-tested code from the development
team, ask for the sanity test results.
6. Update the test results for the test cases as and when you run them.
7. Report the defects found during testing in the tool identified for
defect tracking.
8. Take the code from configuration management (as identified in the
plan) for build and installation.
9. Ensure the code is version controlled for each release.




10. Classify defects (they can be P1, P2, P3, P4, or Critical/High/
Medium/Low, or anything similar) in mutual agreement with the
development team, so as to aid developers in prioritizing defect fixes.
11. Do a sanity test as and when a release is made by the
development team.
The Don'ts:
1. Do not update the test cases while executing them for testing. Track the
changes and update them based on a written reference (SRS or functional
specification, etc.). Normally people tend to update the test case based
on the look and feel of the application.
2. Do not track defects in many places, i.e. having defects in Excel sheets
and in other defect tracking tools. This will increase the time needed to
track all the defects. Hence use one centralized repository for defect
tracking.
3. Do not get the code from the developer's sandbox for testing if it is
an official release from the development team.
4. Do not spend time testing features that are not part of this
release.
5. Do not focus your testing on the non-critical areas (from the
customer's perspective).
6. Even if the defect identified is of low priority, do not fail to
document it.
7. Do not leave room for assumptions while verifying the fixed defects.
Clarify and then close!
8. Do not hastily update the test cases without actually running them,
assuming that they worked in earlier releases. Sometimes these
preconceived notions turn into big trouble if that functionality
suddenly stops working and is later found by the customer.
9. Do not focus on negative paths which consume lots of time but will be
least used by the customer. Though these need to be tested at some point
of time, the idea really is to prioritize tests.










Quality

Quality: Quality means meeting customer needs.
OR
Doing it right the First Time.

OR
Quality is an attribute of a product/service.

Attributes of Quality:
Correctness: The extent to which a program satisfies its specifications and
fulfills the user's mission and goals.
Reliability: The extent to which a program can be expected to perform its
intended function with required precision.
Integrity: The extent to which access to software or data by unauthorized
persons can be controlled.
Usability: The effort required for learning, operating, preparing input, and
interpreting output of a program.

Quality Assurance (QA): An activity that establishes and evaluates
the processes that produce the products.

Preventing faults in the first place.

Project planning & monitoring
Purchase of Automated Tools
Standards development
Trainings





Quality Control (QC): Processes and methods used to compare product
quality to requirements and applicable standards, and the action taken when
nonconformance is detected.

Detecting and fixing faults.


Reviews & Testing
Design Review
Code Review
Unit/Integration/System Testing

QA vs QC:

Quality Control (QC)     Quality Assurance (QA)
Product Oriented         Process Oriented
Find Defect              Prevent Defect
Review                   Defining Process
Testing                  Quality Audit
                         Training

Quality Management System (QMS): The system responsible for
establishing and evaluating the processes used to improve the quality of the software.





Capability Maturity Model Integration
(CMMI):


CMMI provides guidance for improving your organization's processes and your ability to
integrate and manage the development and maintenance of products and services.

Levels of CMMI:
(CMMI Level One) Initial: No process is implemented (ad-hoc work).
(CMMI Level Two) Managed: Process is implemented at the project level
(human-oriented, not process-oriented).
(CMMI Level Three) Defined: Process is implemented at the organization level.
(CMMI Level Four) Quantitatively Managed: The process is controlled and
measured.
(CMMI Level Five) Optimizing: Continuous improvement.
