The software engineering process can be viewed as a spiral. Initially, systems engineering
establishes the role of the software and leads to software requirements analysis, where the
information domain, function, behaviour, performance, and validation criteria for the software are
identified. Moving inwards along the spiral, we come to design and finally coding.
A strategy for software testing may be to move outward along the spiral. Unit testing begins at
the vortex of the spiral and concentrates on each unit of the software as implemented in the
source code. Testing progresses outward along the spiral to integration testing, where the focus
is on design and the construction of the software architecture. Finally we perform system testing,
where software and other system elements are tested as a whole.
f(t) = (1/p) ln[l0 p t + 1]
where f(t) = the cumulative number of failures that are expected to occur once the software has
been tested for a certain amount of execution time t,
l0 = the initial software failure intensity (failures per unit of time) at the start of testing, and
p = the exponential reduction in failure intensity as errors are discovered and repairs are made.
The instantaneous failure intensity, l(t) can be derived by taking the derivative of f(t):
l(t) = l0 / (l0 p t + 1)        (a)
Using the relationship noted in equation (a), testers can predict the drop-off of errors as testing
progresses. The actual error intensity can be plotted against the predicted curve. If the actual
data gathered during testing and the Logarithmic Poisson execution-time model are reasonably
close to one another over a number of data points, the model can be used to predict the total
testing time required to achieve an acceptably low failure intensity.
Unit Testing
Unit testing concentrates verification on the smallest element of the program – the module.
Using the detailed design description, important control paths are tested to uncover errors within
the boundary of the module.
The tests that are performed as part of unit testing are shown in the figure below. The module
interface is tested to ensure that information properly flows into and out of the program unit
being tested. The local data structure is considered to ensure that data stored temporarily
maintains its integrity for all stages in an algorithm’s execution. Boundary conditions are tested
to ensure that the modules perform correctly at boundaries created to limit or restrict processing.
All independent paths through the control structure are exercised to ensure that all statements
have been executed at least once. Finally, all error-handling paths are examined.
[Figure: unit-test considerations – test cases applied to the module, exercising its interface,
local data structures, boundary conditions, independent paths, and error-handling paths.]
Unit testing is typically seen as an adjunct to the coding step. Once source code has been
produced, reviewed, and verified for correct syntax, unit test case design can start. A review of
design information offers assistance for determining test cases that should uncover errors. Each
test case should be coupled with a set of expected results. Because a module is not a stand-alone
program, driver and/or stub software must be produced for each unit test. In most situations a
driver is a “main program” that accepts test case data, passes it to the module being tested, and
prints the results. Stubs serve as the subordinate modules called by the module under test. Unit
testing is simplified when a module has high cohesion.
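A minimal sketch of this arrangement in Python is shown below. The module under test (`apply_discount`), its subordinate lookup, and the test values are all invented for illustration: the stub stands in for a subordinate module that does not yet exist, and the driver feeds test-case data in and collects the results.

```python
# Hypothetical module under test: it delegates the customer-rate lookup to a
# subordinate module that is not yet available, so a stub stands in for it.
def apply_discount(price, customer_id, lookup_rate):
    if price < 0:
        raise ValueError("price must be non-negative")   # error-handling path
    return round(price * (1 - lookup_rate(customer_id)), 2)

def stub_lookup_rate(customer_id):
    """Stub: canned answers in place of the real subordinate module."""
    return 0.10 if customer_id == "VIP" else 0.0

def driver():
    """Driver ('main program'): feeds test cases to the module, gathers results."""
    results = []
    results.append(apply_discount(100.0, "VIP", stub_lookup_rate))  # normal path
    results.append(apply_discount(0.0, "new", stub_lookup_rate))    # boundary
    try:
        apply_discount(-1.0, "new", stub_lookup_rate)               # error path
        results.append("no error")
    except ValueError:
        results.append("error raised")
    return results
```

The three cases exercise a normal path, a boundary condition, and an error-handling path – the unit-test considerations listed above.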
Integration Testing
Once all the individual units have been tested, they must be put together and tested to ensure
that no data is lost across an interface, that one module does not have an adverse impact on
another, and that each function is performed correctly. Integration testing is a systematic
technique for constructing the program structure while at the same time conducting tests to
uncover errors associated with interfacing.
Top-Down integration
[Figure: top-down integration – module hierarchy with M1 at the top; M2, M3, and M4
subordinate to it; M5, M6, and M7 at the next level; and M8 at the lowest level.]
1. The main control module is used as a test driver and stubs are substituted for all modules
directly subordinate to the main control module.
2. Depending on the integration technique chosen, subordinate stubs are replaced one at a
time with actual modules.
3. Tests are conducted as each module is integrated.
4. On the completion of each group of tests, another stub is replaced with the real module.
5. Regression testing may be performed to ensure that no new errors have been introduced.
Bottom-up Integration
Bottom-up integration testing begins construction and testing with the modules at the lowest
level of the program structure (atomic modules). Because modules are integrated from the
bottom up, the processing required for modules subordinate to a given level is always available,
and the need for stubs is eliminated.
1. Low-level modules are combined into clusters that perform a particular software
subfunction.
2. A driver is written to coordinate test cases input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
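The steps above can be illustrated with a small sketch; the modules and test cases are invented for the example. Two atomic modules are combined into a cluster, and a driver coordinates test-case input and output for the cluster.

```python
# Hypothetical sketch of a bottom-up cluster and its test driver.
def line_total(qty, unit_price):            # atomic module
    return qty * unit_price

def order_total(lines):                     # cluster built from atomic modules
    return sum(line_total(q, p) for q, p in lines)

def cluster_driver(cases):
    """Driver: runs each test case and pairs actual with expected output."""
    return [(order_total(lines), expected) for lines, expected in cases]

results = cluster_driver([
    ([(2, 3.0), (1, 4.0)], 10.0),           # ordinary order
    ([], 0.0),                              # boundary: empty order
])
```

Once the cluster passes, the driver would be removed and the cluster combined with others moving upward in the program structure.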
There has been much discussion about the relative advantages and disadvantages of top-down
and bottom-up integration testing. In general, a disadvantage of one approach is an advantage of
the other. The major disadvantage of the top-down approach is the need for stubs and the
difficulties associated with them. Problems associated with stubs may be offset by the advantage
of testing major control functions early. The major disadvantage of bottom-up integration is that
the program as an entity does not exist until the last module is added.
Validation Testing
At the culmination of integration testing, the software is completely assembled as a package,
interfacing errors have been identified and corrected, and a final series of software tests,
validation testing, begins. Validation can be defined in many ways, but a simple definition is
that validation succeeds when the software functions in a manner that can reasonably be
expected by the customer.
Software validation is achieved through a series of black box tests that show conformity with
requirements. A test plan provides the classes of tests to be performed and a test procedure sets
out particular test cases that are to be used to show conformity with requirements.
Configuration review
An important element of the validation process is a configuration review. The role of the review
is to ensure that all the components of the software configuration have been properly developed,
are catalogued and have the required detail to support the maintenance phase of the software
lifecycle.
System Testing
Ultimately, software is incorporated with other system elements and a series of system
integration and validation tests is performed. Steps taken during software design and testing can
greatly improve the probability of successful software integration in the larger system. System
testing is a series of different tests whose primary purpose is to fully exercise the computer-based
system. Although each test has a different purpose, all should verify that the system elements
have been properly integrated and perform allocated functions. Below we consider several
system tests for computer-based systems.
Recovery Testing
Many computer-based systems need to recover from faults and resume processing within a
particular time. In certain cases, a system needs to be fault-tolerant. In other cases, a system
failure must be corrected within a specified period of time or severe economic damage will
happen. Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
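A minimal sketch of a recovery test, with an invented "flaky" data source: the test induces a fixed number of faults, verifies that a retry wrapper recovers correctly, and measures the elapsed time so it can be compared against an assumed recovery budget.

```python
import time

# Hypothetical sketch: force failures, then verify recovery is performed correctly.
class FlakySource:
    def __init__(self, failures):
        self.failures = failures            # number of faults to induce
    def read(self):
        if self.failures > 0:
            self.failures -= 1
            raise IOError("induced fault")
        return "ok"

def read_with_recovery(source, retries=3):
    """Retry on failure; re-raise if recovery is not achieved within the budget."""
    for attempt in range(retries + 1):
        try:
            return source.read()
        except IOError:
            if attempt == retries:
                raise

start = time.monotonic()
result = read_with_recovery(FlakySource(failures=2))
elapsed = time.monotonic() - start          # compare against the allowed recovery time
```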
Security Testing
Any computer-based system that manages sensitive information or causes actions that can
improperly harm individuals is a target for improper or illegal penetration. Security testing
attempts to verify that the protection mechanisms built into a system will in fact protect it from
improper penetration. During security testing, the tester plays the role of an individual who
desires to penetrate the system. The tester may try to acquire passwords through external
clerical means, may attack the system with custom software, or may purposely induce errors in
the hope of finding the key to system entry. The role of the system designer is to make the cost
of penetration greater than the value of whatever could be gained by it.
Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume. A variation of stress testing is an approach called sensitivity testing: in
some situations, a very small range of data contained within the bounds of valid data for a
program may cause extreme, even erroneous, processing or profound performance degradation.
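Both ideas can be sketched together; the function and workload sizes below are invented for the example. The stress case pushes far more data through the routine than a nominal workload, while the sensitivity case sweeps a narrow band of parameter values at the edges of the valid range, where small changes are most likely to expose extreme behaviour.

```python
# Hypothetical routine under test: a sliding mean with a validity-checked window.
def moving_average(xs, window):
    if window <= 0 or window > len(xs):
        raise ValueError("bad window")      # outside the bounds of valid data
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

# Stress: abnormal input volume, far beyond the assumed nominal workload.
big = list(range(10_000))
stress_out = moving_average(big, 50)

# Sensitivity: probe a narrow band of window sizes at the edges of validity.
edge_lengths = [len(moving_average(big, w)) for w in (1, 2, len(big) - 1, len(big))]
```

In a real sensitivity test, each probe would also be timed and its output checked, so that a profound slowdown or erroneous result inside the valid range is caught.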