
Software Testing

• The purpose of testing is to demolish the software that has
just been completed
• Testing cannot demonstrate the absence of defects, only their presence
• Testing uncovers errors only if you are willing to detect
them
• If newly developed software has no errors, then the
software was too trivial to be worth developing
• Software errors follow Pareto Principle – about 80% of the
errors occur in about 20% of the code
• Exhaustive testing is impossible
• A typical software development project earmarks about
25% of the total effort for testing
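A back-of-the-envelope sketch of why exhaustive testing is impossible: even a single function taking two 32-bit integer inputs has far more input combinations than could ever be executed. (The billion-tests-per-second rate below is an illustrative assumption.)

```python
# Number of input combinations for two 32-bit integer arguments:
cases = (2 ** 32) ** 2
print(cases)  # 18446744073709551616, i.e. about 1.8e19

# Even at a (very generous) billion tests per second, the time
# needed runs to centuries -- about 585 years:
years = cases / 1e9 / (3600 * 24 * 365)
print(round(years))
```

This is for just one pair of inputs; real programs have many inputs and internal states, which is why test-case *design* (choosing a small, revealing subset) matters.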
Testing phases
• Unit Test
– Tests the code of each unit developed by a programmer. Usually
done by the programmer who wrote the unit.
• Integration Test
– Tests the design of the system by testing the module level
interfaces. Usually done by the person in charge of the
corresponding subsystem
• Validation Test
– Tests the requirements of the system
– Usually done by an Independent Test Group (ITG)
• Acceptance Test
– Tests the entire system according to pre-specified criteria. Usually
done by the user
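A minimal sketch of the first phase, a unit test, using Python's standard `unittest` module; the `absolute` function is a hypothetical unit the programmer has just written.

```python
import unittest

def absolute(x):
    # Hypothetical unit under test, just completed by the programmer.
    return -x if x < 0 else x

class TestAbsolute(unittest.TestCase):
    # Unit test: the programmer checks the unit in isolation,
    # before it is integrated with the rest of the system.
    def test_negative(self):
        self.assertEqual(absolute(-3), 3)

    def test_zero_boundary(self):
        self.assertEqual(absolute(0), 0)

# Run the suite programmatically and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAbsolute)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The later phases (integration, validation, acceptance) reuse the same mechanics but widen the scope from one unit to module interfaces, requirements, and the whole system.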
Test case design
• Black box testing
– Tests the functional requirements of the unit. Test cases are designed
keeping in mind what this portion of the software is supposed to do,
aiming to uncover:
– Incorrect or missing functions
– Interface errors
– Errors in external data access
– Performance errors
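A sketch of black-box test-case design for a hypothetical triangle-classification function: the cases below are derived only from the functional specification (what the unit is supposed to do), not from its internal code.

```python
def classify_triangle(a, b, c):
    # Hypothetical unit: classify a triangle by its side lengths.
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box cases: one per class of behaviour named in the spec,
# plus a degenerate boundary (sides 1, 2, 3 collapse to a line).
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 3) == "not a triangle"
```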

• Glass box (white box) testing
– Tests the control structure of the unit
• Execute each independent path
• Exercise all logical decisions
• Execute all loops at their boundaries
• Exercise internal data structures
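In contrast to the black-box view, glass-box cases are chosen from the code's control structure. A sketch, with a hypothetical function containing two decisions and a loop:

```python
def count_positives(values, limit):
    # Control structure: one loop and two decisions -> these define
    # the independent paths a glass-box test suite must cover.
    count = 0
    for v in values:          # loop: exercise at zero, one, many iterations
        if v > 0:             # decision 1: exercise both outcomes
            count += 1
        if count == limit:    # decision 2: early exit
            break
    return count

# Glass-box cases chosen from the control structure, not the spec:
assert count_positives([], 10) == 0        # loop body never executes
assert count_positives([5], 10) == 1       # one iteration, v > 0 true
assert count_positives([-5], 10) == 0      # one iteration, v > 0 false
assert count_positives([1, 2, 3], 2) == 2  # early exit via break
```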
Integration testing
• Is a systematic technique for constructing the program
structure while at the same time conducting tests to
uncover errors associated with interfacing
• Top-down integration
– Modules are integrated by moving downward through the control
hierarchy
– Uses stubs to represent lower level modules
• Bottom-up integration
– Low level modules are integrated first into clusters and the clusters
are integrated by moving up the control structure
– Uses drivers to represent upper level modules
• Regression testing
– Re-runs previously passed tests after each change to ensure the
change has not introduced new errors
Stubs and Drivers

Stubs (stand in for lower-level modules not yet integrated):
• Stub A: displays a trace message
• Stub B: displays the passed parameter
• Stub C: returns a value from a table or an external file
• Stub D: does a table search for the input and returns an output

Drivers (stand in for upper-level modules not yet integrated):
• Driver A: invokes the subordinate module
• Driver B: sends a parameter
• Driver C: displays a parameter
• Driver D: a combination of Drivers B and C
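A minimal sketch of both roles in code. All names here are hypothetical: `fetch_rate_stub` plays a stub (traces its parameter and returns a canned table value, i.e. Stub B + Stub C behaviour) for top-down integration, while `driver` plays a driver (sends a parameter and checks the result) for bottom-up integration of the finished `convert` module.

```python
def fetch_rate_stub(currency):
    # Stub: replaces an unfinished lower-level module during
    # top-down integration. Displays the passed parameter and
    # returns a value from a small built-in table.
    print("fetch_rate_stub called with", currency)
    return {"EUR": 2.0}.get(currency, 1.0)

def convert(amount, currency, fetch_rate=fetch_rate_stub):
    # Upper-level module under test; its subordinate is stubbed out.
    return amount * fetch_rate(currency)

def driver():
    # Driver: replaces an unfinished upper-level module during
    # bottom-up integration -- it sends parameters to the completed
    # low-level module and reports what comes back.
    result = convert(100, "EUR")
    print("driver got", result)
    return result
```

When the real lower-level module is ready, it replaces `fetch_rate_stub`; when the real caller is ready, it replaces `driver`.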
Criteria for test completion
• How do we know we have tested enough?
– Use a statistical model to predict the number of remaining
defects after the software has been tested for t units of time
– Stop when the number of defects found in each of the last n
consecutive hours of testing falls below a pre-specified limit
– Stop when a pre-specified percentage of planted (seeded) errors
has been discovered
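The planted-errors criterion above also yields an estimate of how many real defects remain (the classic error-seeding idea). A sketch: if we recover a known fraction of the seeded errors, we assume the same fraction of the real errors has been found.

```python
def seeding_estimate(seeded_total, seeded_found, real_found):
    # Error-seeding estimate: if seeded_found of seeded_total planted
    # errors were recovered while finding real_found real defects,
    # the estimated total number of real defects is
    #   real_found * seeded_total / seeded_found.
    if seeded_found == 0:
        raise ValueError("no seeded errors found yet -- cannot estimate")
    return real_found * seeded_total / seeded_found

# Example: 20 errors planted, 16 recovered (80%), and 40 real
# defects found so far -> estimated 50 real defects in total,
# so roughly 10 are still undiscovered.
print(seeding_estimate(20, 16, 40))  # 50.0
```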
Validation Testing
• Answers the question
‘Are we developing the right product?’
rather than
‘Are we developing the product right?’
• Alpha testing
– Conducted at the developer’s site by a customer, in a controlled
environment, with the software used in a natural setting and the
developer “looking over the shoulder” of the user and recording errors
• Beta testing
– Conducted at one or more customer’s sites by the end-user of the
software. It is a ‘live’ test of the software in an environment not
controlled by the developer
Acceptance Testing
• It is a complete test of the entire system by the end-user
according to pre-determined criteria
– “The software will run continuously for 48 hours”
– “Average query time should not exceed 1.5 seconds, working on a
database of size not exceeding 1000 records”
– “Production planning will be done using real data from the last
four months”
• System testing
– Recovery testing
– Security testing
– Stress testing
– Performance testing
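A sketch of how a performance-oriented acceptance criterion like the query-time example above might be checked; the harness and query runner here are hypothetical.

```python
import time

def average_query_time(run_query, queries):
    # Measure mean wall-clock time per query, to compare against a
    # pre-specified acceptance criterion such as
    # "average query time should not exceed 1.5 seconds".
    start = time.perf_counter()
    for q in queries:
        run_query(q)
    return (time.perf_counter() - start) / len(queries)

# Hypothetical usage against a real system (names are assumptions):
#   assert average_query_time(db.run, sample_queries) <= 1.5
```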
