
Criteria for Completion of Testing
*A classical question raised in testing:
*How do we know that we have tested enough?
*There is no definitive answer
*There are, however, a few pragmatic responses and early
attempts at empirical guidance
*A few responses
*If testing is skipped, the burden of finding errors
shifts to the customer
*Software developers need rigorous criteria to
determine whether sufficient testing has been done

*Musa and Ackerman gave a response based on
statistical criteria
*"We can say with 95% confidence that the
probability of 1,000 CPU hours of failure-free
operation in a probabilistically defined
environment is at least 0.995"
*Model the failures as a function of execution time
*The logarithmic Poisson execution time model takes the
form
f(t) = (1/p) ln(λ0 p t + 1)
where
f(t) = cumulative number of failures expected to
occur once the software has been tested for
an amount of execution time t
λ0 = initial software failure intensity (failures
per unit time)
p = exponential reduction in failure intensity
as errors are uncovered

*The instantaneous failure intensity I(t) can be derived
from f(t):
I(t) = λ0 / (λ0 p t + 1)
*Used to predict the drop-off of errors as testing
progresses
*Actual (measured) failure values are plotted against the
predicted curve
*If they are very close to the curve, the product can be
delivered
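A minimal Python sketch of the model above. The values chosen for λ0 and p are illustrative assumptions (not from the notes), and the "observed" counts are hypothetical; the point is only to show how predicted f(t) and I(t) can be compared against logged failures.

```python
import math

# Illustrative parameters (assumed values, not from any real project)
lam0 = 10.0   # initial failure intensity, failures per CPU hour
p = 0.05      # exponential reduction in failure intensity per failure

def cumulative_failures(t):
    """f(t) = (1/p) * ln(lam0 * p * t + 1): expected failures after t CPU hours."""
    return (1.0 / p) * math.log(lam0 * p * t + 1.0)

def failure_intensity(t):
    """I(t) = lam0 / (lam0 * p * t + 1): expected failures per CPU hour at time t."""
    return lam0 / (lam0 * p * t + 1.0)

# Compare the predicted curve with (hypothetical) observed failure counts
observed = {10: 34, 50: 62, 100: 78}   # CPU hours -> failures actually logged
for t, seen in observed.items():
    print(f"t={t:>4} h  predicted={cumulative_failures(t):6.1f}  "
          f"observed={seen:4d}  intensity={failure_intensity(t):.3f}")
```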

Strategic Issues
*A successful testing strategy should address the
following issues
*Specify product requirements in a quantifiable
manner long before testing commences
*Quality characteristics such as portability,
maintainability, and usability should also be
specified
*They should be stated in a measurable manner so
that testing results are unambiguous
*State testing objectives explicitly
*Objectives of software testing should be
specified in measurable terms such as test
effectiveness, test coverage, mean time to
failure, frequency of error occurrence, etc.

*Understand the users of the software and develop a
profile for each user category
*Develop use cases for each user category; this
focuses and reduces the overall testing effort
*Develop a testing plan that emphasizes "rapid cycle
testing"
*Helps to control quality levels and the
corresponding test strategies
*Develop robust software that is designed to test
itself
*Should possess antibugging capability (able to
diagnose some errors by itself); see the
assertion sketch at the end of this list
*Use effective formal technical reviews as a filter
prior to testing
*Formal technical reviews, walkthroughs, and
inspections can be used to reduce the
complexity of the testing process
*Develop a continuous improvement approach for the
testing process
*Testing strategies should be measured using
various metrics
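As referenced in the antibugging item above, a small sketch of self-checking code: a hypothetical routine validates its own inputs and internal state and reports a descriptive error at the point it is detected, instead of failing silently. The function name and checks are assumptions for illustration.

```python
def average_response_time(samples):
    """Hypothetical routine that diagnoses bad input itself (antibugging)."""
    # Built-in checks: report the error where it is detected
    assert isinstance(samples, (list, tuple)), "samples must be a sequence"
    if not samples:
        raise ValueError("antibug: empty sample set, nothing to average")
    if any(s < 0 for s in samples):
        raise ValueError("antibug: negative response time encountered")
    result = sum(samples) / len(samples)
    assert result >= 0, "antibug: internal error, average became negative"
    return result

print(average_response_time([12.5, 10.1, 14.0]))   # normal path
```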
Unit testing
*It is a white-box-oriented testing technique
*Focuses verification effort on the smallest unit of
software
*Uses the component level design description as a guide
*The complexity of tests and of the errors they uncover is
limited by the constrained scope of unit testing
*Can be conducted in parallel for multiple units
Unit test considerations
*The interface is tested to ensure that information properly
flows into and out of the program unit under test
*Local data structures are examined to ensure that data
stored temporarily maintains its integrity during all
steps of the unit's execution
*Boundary conditions are tested to ensure that the
module operates properly at boundaries
*All independent paths through the control structure are
exercised to ensure that all statements have been
executed at least once
*All error-handling paths are exercised at least
once

*Interfaces need to be tested first; otherwise all the
remaining tests become doubtful
*Local data structures need to be tested for their
integrity and for their local impact on global data
*Selective testing of execution paths should be done
to uncover erroneous computations,
incorrect comparisons, and improper control flow
*Test cases should uncover errors such as
*Comparison of different data types
*Incorrect logical operators
*Incorrect comparison of variables
*Improper loop termination
*Failure to exit the loop when the iteration completes

*Error-handling paths need to be exercised to check that
*The error description is intelligible
*The error noted corresponds to the error
encountered
*The exception condition is processed correctly
*The error description provides enough information to
assist in locating the cause of the error
*Programs need to be exercised at their boundaries, since
boundaries are a common source of errors
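A short unittest sketch of boundary testing, assuming a hypothetical fixed-capacity buffer as the unit under test; the cases exercise the empty, exactly-full, and one-past-full boundaries described above.

```python
import unittest

class BoundedBuffer:
    """Hypothetical unit under test: holds at most `capacity` items."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")
        self.items.append(item)

class BoundaryTests(unittest.TestCase):
    def test_empty_buffer(self):
        self.assertEqual(BoundedBuffer(3).items, [])

    def test_exactly_full(self):
        buf = BoundedBuffer(3)
        for i in range(3):                # fill right up to the boundary
            buf.add(i)
        self.assertEqual(len(buf.items), 3)

    def test_one_past_full(self):
        buf = BoundedBuffer(3)
        for i in range(3):
            buf.add(i)
        with self.assertRaises(OverflowError):   # boundary + 1 must fail
            buf.add(99)

if __name__ == "__main__":
    unittest.main()
```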
Unit testing procedures
*Unit testing is normally conducted as an adjunct to the
coding step
*Use the design description of each component as a
guide and derive test cases
*Each test case should be coupled with a set of expected
results
*Drivers and stubs need to be developed
*Driver
*A main program that accepts test case data,
passes it to the component, and prints the
relevant results
*Stub
*A dummy subprogram that uses the subordinate
module's interface, does minimal data manipulation,
prints verification of entry, and returns
control to the module under test
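A minimal sketch of a driver and a stub in Python. The component compute_discount and the pricing subroutine it depends on are hypothetical; the stub stands in for that subroutine, and the driver feeds test data and prints results.

```python
# Stub: replaces the real pricing subroutine with minimal behaviour
def lookup_base_price_stub(item_id):
    print(f"stub: lookup_base_price called with {item_id}")   # verify entry
    return 100.0                                              # canned value

# Component under test (hypothetical); the subroutine is passed in so the
# stub can be substituted for the real implementation during unit testing
def compute_discount(item_id, rate, lookup_base_price):
    price = lookup_base_price(item_id)
    return price * (1.0 - rate)

# Driver: a small main program that accepts test data, passes it to the
# component, and prints the relevant results
if __name__ == "__main__":
    test_data = [("A1", 0.10), ("B2", 0.25)]
    for item_id, rate in test_data:
        result = compute_discount(item_id, rate, lookup_base_price_stub)
        print(f"driver: item={item_id} rate={rate} -> {result}")
```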

Integration testing
*The individually tested components are integrated into a
single cohesive system, which may expose
*Interface errors
*Adverse effects of some modules on others
*Problems with global data structures
*Two approaches to integrating the modules
*Big bang approach (non-incremental integration)
*Incremental approach
*Integration testing is a systematic technique for
constructing the program structure while conducting
tests to uncover errors associated with interfacing

Top down integration
*An incremental approach to constructing the program
structure
*Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module
*Modules subordinate to the main module are incorporated in a
*Depth-first manner (integrates the components on a
major control path of the structure)
*Breadth-first manner (integrates all the components
directly subordinate at each level, moving across)
*Depth-first integration (example structure chart):
*M1, M2, M5, M8, then M6
*Breadth-first integration (example):
*M2, M3, M4 first

*Five steps are involved in the top-down integration process
*The main control module is used as a test driver, and
stubs are substituted for all components
directly subordinate to it
*Depending on the integration approach followed (depth
or breadth first), subordinate stubs are replaced one
at a time with actual components
*Tests are conducted as each component is
integrated
*On completion of each set of tests, another stub is
replaced with the real component
*Regression testing is conducted to ensure that new
errors have not been introduced
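A sketch of the stub-replacement idea in Python, using hypothetical modules: the main control module is first exercised with a stubbed subordinate, and the stub is later swapped for the real component and the tests re-run.

```python
# Hypothetical subordinate component and its stub
def read_input_stub():
    return "fixed test record"            # canned data instead of real I/O

def read_input_real():
    return "record from the real parser"  # the actual component, added later

def main_control(read_input):
    """Main control module; it doubles as the test driver."""
    record = read_input()
    return f"processed: {record}"

# Step 1: integrate the main module with the stub and test it
print(main_control(read_input_stub))
# Later step: replace the stub with the actual component and re-test
print(main_control(read_input_real))
```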
Bottom up integration
*Begins construction and testing with atomic modules
(components at the lowest levels of the program structure)
*Components are integrated from the bottom up
*The need for stubs is eliminated, since the actual
processing elements are already available
*Steps in bottom-up integration
*Low-level components are combined into clusters
that perform a specific function
*A driver is developed to coordinate test case input
and output
*The cluster is tested
*Drivers are removed and clusters are combined, moving
upward in the program structure
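A small sketch of a bottom-up cluster driver, assuming two hypothetical low-level components (parse_line and to_celsius) combined into a cluster; the driver coordinates test inputs and expected outputs and is discarded once the cluster is integrated upward.

```python
# Low-level (atomic) components, assumed already unit tested
def parse_line(line):
    name, value = line.split(",")
    return name.strip(), float(value)

def to_celsius(fahrenheit):
    return (fahrenheit - 32.0) * 5.0 / 9.0

# Cluster: the two components combined to perform one specific function
def reading_in_celsius(line):
    name, fahrenheit = parse_line(line)
    return name, to_celsius(fahrenheit)

# Driver: coordinates test case input and checks the cluster's output
if __name__ == "__main__":
    cases = [("sensor1, 212", ("sensor1", 100.0)),
             ("sensor2, 32", ("sensor2", 0.0))]
    for line, expected in cases:
        assert reading_in_celsius(line) == expected, line
    print("cluster tests passed")
```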
Regression Testing
*Each time a new module is added, the software changes:
*New control logic is invoked
*New I/O may occur
*New data flow paths are established
*Regression testing is the re-execution of some subset of
tests that have already been conducted, to ensure that
changes have not propagated unintended
side effects
*Can be conducted
*Manually, by re-executing a subset of test cases
*Using automated capture/playback tools

*The regression test suite contains three different classes
of test cases
*A representative sample of tests that will
exercise all software functions
*Additional tests that focus on software
functions likely to be affected by the
change
*Tests that focus on the software components
that have been changed
*The number of regression tests grows as the integration
of modules progresses
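A sketch of how the three classes of regression tests might be organised, assuming a simple tagged registry (the test and component names are illustrative): after a change, the representative sample plus the tests covering the affected or changed components are selected for re-execution.

```python
# Each test is registered with the component(s) it touches (illustrative)
REGRESSION_SUITE = [
    {"name": "test_login_smoke",   "components": {"auth"},    "representative": True},
    {"name": "test_report_totals", "components": {"reports"}, "representative": True},
    {"name": "test_tax_rounding",  "components": {"billing"}, "representative": False},
    {"name": "test_invoice_flow",  "components": {"billing", "reports"}, "representative": False},
]

def select_regression_tests(changed_components):
    """Representative sample + tests touching the changed components."""
    selected = []
    for test in REGRESSION_SUITE:
        if test["representative"] or test["components"] & changed_components:
            selected.append(test["name"])
    return selected

print(select_regression_tests({"billing"}))
# ['test_login_smoke', 'test_report_totals', 'test_tax_rounding', 'test_invoice_flow']
```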

Smoke Testing
*A type of integration testing
*Used by the software team to assess the project on a
frequent basis
*Set of activities involved
*Components that have been translated into code are
integrated into a build, which includes data files,
libraries, reusable modules, etc. needed to implement
one or more product functions
*A series of tests is designed to check whether the
build works properly
*The build is integrated with other builds, and the
entire build is smoke tested
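A minimal smoke test sketch: a hypothetical ordered list of checks is run against the current build, and the build is rejected at the first failure. The check names are assumptions for illustration only.

```python
# Hypothetical smoke checks, ordered roughly by product function
def check_build_starts():
    return True          # e.g. the application launches

def check_database_reachable():
    return True          # e.g. a trivial query succeeds

def check_main_workflow():
    return True          # e.g. one end-to-end transaction completes

SMOKE_CHECKS = [check_build_starts, check_database_reachable, check_main_workflow]

def run_smoke_test():
    for check in SMOKE_CHECKS:
        if not check():
            print(f"SMOKE FAIL: {check.__name__} - reject today's build")
            return False
    print("smoke test passed - build accepted for further integration")
    return True

run_smoke_test()
```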

*Benefits of smoke testing
*Integration risk is minimized
*The quality of the end product is improved
*Error diagnosis and correction are simplified
*Progress is easier to assess
*Comments on integration testing
*Take care of critical modules (they should be tested as
early as possible); a critical module
*Addresses several software requirements
*Has a high level of control
*Is complex or error prone

*Integration test documentation
*Plan for integration
*Test plan
*Test procedures
*Testing is divided into a number of phases
*Different characteristics are tested in the various phases
*Interface integrity
*Functional validity
*Information content
*Performance
*A history of test results and problems encountered is
recorded
Validation Testing
*The process of checking whether the team has built the
right product
*Succeeds when the product functions in a manner that can
reasonably be expected by the user
*Achieved through a series of black-box testing techniques
*Validation test criteria
*The test plan lists the classes of tests to be conducted
*The test procedure defines the specific test cases that
will be used to demonstrate conformance to requirements

*After each validation test case, one of two possible
conditions exists
*The functional characteristics conform to the
specification and are accepted
*A deviation from the specification is uncovered and
listed
*Errors are rarely corrected at this stage; instead, the
developer negotiates with the client to establish a
method for resolving them
*Configuration review
*Helps to ensure that all elements of the software
configuration have been properly developed and
cataloged
*Alpha and beta testing
*It is impractical for the developer to foresee how the
customer will really use the product
*Inputs may be combined in strange or unexpected ways
*Output may be unclear from the user's point of view

*A series of acceptance tests needs to be conducted
*These range from informal reviews to a planned series
of tests
*Alpha testing
*Conducted at the developer's site by the customer
*The software is used in a natural setting under the
control of the developer
*Errors and deficiencies are recorded by the developer
*Beta testing
*Conducted at the customer's site
*The developer is generally not involved
*Entirely controlled by the customer
*Errors are recorded by the customer and reported to
the developer at regular intervals
System Testing
*It’s a series of different tests whose primary
purpose is to fully exercise the computer
based systems
*Each test has a different purpose
*Helps to ensure that all the components have been
integrated properly and working functionally right
*Recovery Testing
*System test which forces the software to fail in a
variety of ways and verifies that recovery is
properly performed

*Recovery may be automatic or may require human
intervention
*Reinitialization, checkpointing mechanisms, and data
recovery are evaluated
*The mean time to repair (MTTR) is evaluated to check
whether it is within acceptable limits
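A tiny sketch of checking MTTR against an acceptable limit, assuming a hypothetical log of repair durations (in minutes) gathered during recovery tests; the limit is an illustrative assumption.

```python
# Hypothetical repair durations (minutes) recorded during recovery tests
repair_times = [12, 7, 20, 9, 15]
MTTR_LIMIT = 15          # assumed acceptable limit, minutes

mttr = sum(repair_times) / len(repair_times)
print(f"MTTR = {mttr:.1f} min, "
      f"{'within' if mttr <= MTTR_LIMIT else 'exceeds'} the {MTTR_LIMIT} min limit")
```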
*Security Testing
*Attempts to verify that the protection mechanisms built
into the system actually protect it
*During testing, the tester plays the role of an attacker,
trying various methods to break in and access the data
*Stress Testing
*Attempts to verify the product's behaviour in abnormal
situations
*Demands resources in abnormal quantity,
frequency, or volume
*Special tests may be designed to
*Generate more interrupts than the average rate
*Increase the input rate by an order of magnitude
*Acquire the maximum amount of memory and other resources
*Sensitivity testing is a variation of stress testing
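A sketch of increasing the input by an order of magnitude, as in the stress-test bullets above; the sorting workload and the size limits are illustrative stand-ins for the real operation under stress.

```python
import random
import time

def stress_run(workload_size):
    """Exercise one hypothetical operation with progressively larger input."""
    data = [random.random() for _ in range(workload_size)]
    start = time.perf_counter()
    data.sort()                          # stand-in for the operation under stress
    return time.perf_counter() - start

size = 1_000
while size <= 1_000_000:                 # grow the input by an order of magnitude
    elapsed = stress_run(size)
    print(f"n={size:>9,}  elapsed={elapsed:.4f} s")
    size *= 10
```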

*Performance Testing
*Attempts to verify the product's performance measures
within its specified operating context
*Conducted throughout all steps of the testing
process
*Can be coupled with stress testing
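Performance testing can start from a simple timing harness. The sketch below times a hypothetical operation repeatedly and compares the average against an assumed performance requirement; both the operation and the 5 ms target are illustrative.

```python
import timeit

def operation_under_test():
    return sum(i * i for i in range(10_000))   # stand-in for the real operation

RUNS = 100
REQUIREMENT_MS = 5.0        # assumed performance requirement per call

total = timeit.timeit(operation_under_test, number=RUNS)
avg_ms = (total / RUNS) * 1000.0
print(f"average {avg_ms:.2f} ms per call "
      f"({'meets' if avg_ms <= REQUIREMENT_MS else 'misses'} the {REQUIREMENT_MS} ms target)")
```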

Debugging
*It’s a ordered process which occurs as a
consequence of testing
*Testing results in uncovers an error, debugging results in
the removal of such error
*It begins with the execution of a test case, results are
assessed, lack of correspondence between actual
and expected performance is encountered
*Two possible outcomes of debugging
*The cause will be found and corrected
*The cause will not found

*Debugging is a difficult process because
*The symptom and the cause may be geographically
remote from each other
*The symptom may disappear when another error is
corrected
*The symptom may actually be caused by non-errors
*The symptom may be caused by human error that is not
easily traced
*Debugging approaches
*Brute force
*Most commonly used but least efficient (memory dumps,
run-time traces, scattered print statements)
*Backtracking
*Applicable for small programs: start at the
place where the symptom is uncovered and
work backward through the code until the
cause is found

*Cause elimination
*The data related to the error occurrence are isolated
*A cause hypothesis is derived
*The isolated data are used to prove or disprove the
hypothesis (see the isolation sketch after this list)
*Before correcting a bug, ask
*Is the cause of the bug reproduced in another
part of the program?
*What "next bug" might be introduced by fixing this
error?
*What could we have done to prevent this bug in
the first place?
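A sketch of the cause-elimination idea referenced above: isolate the data responsible for the failure by repeatedly halving a failing input and keeping whichever half still reproduces the symptom. The routine and its planted defect are deliberately simple stand-ins.

```python
def process(records):
    """Stand-in routine with a planted defect: it fails on negative values."""
    total = 0
    for r in records:
        if r < 0:
            raise ValueError("unexpected negative record")
        total += r
    return total

def fails(records):
    try:
        process(records)
        return False
    except ValueError:
        return True

def isolate_failing_data(records):
    """Shrink the input toward the smallest slice that still fails."""
    while len(records) > 1:
        half = len(records) // 2
        first, second = records[:half], records[half:]
        if fails(first):
            records = first
        elif fails(second):
            records = second
        else:
            break            # failure needs data from both halves
    return records

suspect = isolate_failing_data([3, 8, 1, -4, 7, 2])
print("failure reproduced with:", suspect)   # hypothesis: negative input is the cause
```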
