
A Software Testing Strategy

The software engineering process can be seen as a spiral. Initially, systems engineering states the role of the software and leads to software requirement analysis, where the information domain, function, behaviour, performance and validation criteria for the software are identified. Moving inwards along the spiral, we come to design and finally coding.

A strategy for software testing may be to move outward along the spiral. Unit testing happens at the vortex of the spiral and concentrates on each unit of the software as implemented in source code. Testing proceeds outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Finally we perform system testing, where software and other system elements are tested together.

Criteria for Completion of Testing

A fundamental question in software testing is: how do we know when testing is complete?


Software engineers need to have rigorous criteria for establishing when testing is complete.
Musa and Ackerman put forward an approach based on statistical response: using a certain model, we can predict how long a program will run before failing with a stated probability. Using statistical modelling and software reliability theory, models of software failures as a function of execution time can be produced. A version of the failure model, known as the logarithmic Poisson execution-time model, takes the form

f(t) = (1/p) ln(λ0 p t + 1)

where f(t) = the cumulative number of failures that are anticipated to happen once the software has been tested for a particular amount of execution time t,

λ0 = the initial failure intensity (failures per unit of execution time) at the start of testing,

p = the exponential reduction in failure intensity as errors are discovered and repairs made.

The instantaneous failure intensity, λ(t), can be derived by taking the derivative of f(t):

λ(t) = λ0 / (λ0 p t + 1)        (a)

Using the relationship noted in equation (a), testers can estimate the drop-off of errors as testing progresses. The actual error intensity can be plotted against the estimated curve. If the actual data gained during testing and the logarithmic Poisson execution-time model are reasonably close to one another over a number of data points, the model can be used to estimate the total testing time required to reach an acceptably low failure intensity.
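As an illustration, the model above can be evaluated directly. The sketch below is a minimal Python rendering of the two equations, with purely illustrative values assumed for the initial failure intensity λ0 and the reduction parameter p; it also solves equation (a) for the execution time needed to reach a target failure intensity.

```python
import math

def cumulative_failures(t, lam0, p):
    """Expected cumulative failures f(t) after execution time t
    (logarithmic Poisson execution-time model)."""
    return (1.0 / p) * math.log(lam0 * p * t + 1.0)

def failure_intensity(t, lam0, p):
    """Instantaneous failure intensity, the derivative of f(t)."""
    return lam0 / (lam0 * p * t + 1.0)

def time_to_reach_intensity(target, lam0, p):
    """Execution time at which intensity drops to `target`,
    solved from equation (a): lam0 / (lam0*p*t + 1) = target."""
    return (lam0 / target - 1.0) / (lam0 * p)

# Assumed example parameters: 10 failures per CPU-hour initially,
# reduction parameter p = 0.05.
lam0, p = 10.0, 0.05
t = time_to_reach_intensity(0.5, lam0, p)  # hours of testing until intensity <= 0.5
```

A tester would fit λ0 and p to the observed failure data before trusting such an estimate; the values above are placeholders.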
Unit Testing
Unit testing concentrates verification on the smallest element of the program – the module. Using the detailed design description, important control paths are tested to uncover errors within the bounds of the module.

Unit test considerations

The tests that are performed as part of unit testing are shown in the figure below. The module interface is tested to ensure that information properly flows into and out of the program unit being tested. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all stages of an algorithm's execution. Boundary conditions are tested to ensure that the module performs correctly at boundaries created to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements have been executed at least once. Finally, all error-handling paths are examined.

Figure: unit-test focus areas for a module – interface, local data structures, boundary conditions, independent paths and error-handling paths – exercised by a set of test cases.

Unit test procedures

Unit testing is typically seen as an adjunct to the coding step. Once source code has been produced, reviewed, and verified for correct syntax, unit test case design can start. A review of design information offers assistance in determining test cases that should uncover errors. Each test case should be linked with a set of anticipated results. As a module is not a stand-alone program, driver and/or stub software must be produced for each unit test. In most situations a driver is a “main program” that receives test case data, passes it to the module being tested and prints the results. Stubs act as the sub-modules called by the module under test. Unit testing is made easier if a module has high cohesion.
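The driver/stub arrangement can be sketched as follows. All names here (`reorder_quantity`, `fetch_stock_level`) are hypothetical, invented for illustration: the stub returns canned data in place of a subordinate module that does not yet exist, and the driver feeds test case data to the module and compares actual against anticipated results.

```python
def reorder_quantity(item, target_level, fetch_stock_level):
    """Module under test: how many units to order to reach target_level."""
    on_hand = fetch_stock_level(item)
    return max(target_level - on_hand, 0)

# Stub: stands in for the subordinate stock-level module, returning canned data.
def stub_fetch_stock_level(item):
    return {"widget": 7, "gadget": 0}.get(item, 0)

# Driver: a "main program" that feeds test cases to the module and prints results.
def driver():
    cases = [("widget", 10, 3), ("gadget", 5, 5), ("widget", 5, 0)]
    for item, target, expected in cases:
        actual = reorder_quantity(item, target, stub_fetch_stock_level)
        print(item, "PASS" if actual == expected else f"FAIL (got {actual})")

driver()
```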

Integration Testing
Once all the individual units have been tested, there is a need to test how they are put together, to ensure that no data is lost across interfaces, that one module does not have an adverse impact on another, and that functions are performed correctly. Integration testing is a systematic approach to constructing the program structure while at the same time conducting tests to identify errors associated with interfacing.
Top-Down integration

Top-down integration is an incremental approach to the production of program structure.


Modules are integrated by moving downwards through the control hierarchy, starting with the main control module. Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. Referring to the figure below, depth-first integration would integrate the modules on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific features. For instance, selecting the left-hand path, modules M1, M2 and M5 would be integrated first. Next, M8 or M6 would be integrated. Then the central and right-hand control paths are built. Breadth-first integration incorporates all modules directly subordinate at each level, moving across the structure horizontally. From the figure, modules M2, M3 and M4 would be integrated first. The next control level – M5, M6 and so on – follows.

Figure: example program structure – M1 at the top; M2, M3 and M4 directly subordinate to M1; M5, M6 and M7 at the next level; M8 at the lowest level.

The integration process is performed in a series of five stages:

1. The main control module is used as a test driver and stubs are substituted for all modules directly subordinate to the main control module.
2. Depending on the integration technique chosen (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On the completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be performed to ensure that no new errors have been introduced.
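The five stages can be sketched in miniature. The hierarchy and module behaviours below are hypothetical: M1 acts as the main control module (and test driver), stubs stand in for its subordinates, and one stub is then replaced with the real module while the tests are re-run – a simple form of regression testing.

```python
# Stubs: minimal canned behaviour for subordinate modules not yet integrated.
def stub_m2(data):
    return f"stub-m2({data})"

def stub_m3(data):
    return f"stub-m3({data})"

# Actual subordinate module, integrated later to replace its stub.
def real_m2(data):
    return data.upper()

def m1(data, m2=stub_m2, m3=stub_m3):
    """Main control module, used as the test driver (stage 1)."""
    return m3(m2(data))

# Stage 1-3: test M1 with all stubs in place.
assert m1("hello") == "stub-m3(stub-m2(hello))"

# Stages 2-5: replace one stub with the real module and re-run the tests.
assert m1("hello", m2=real_m2) == "stub-m3(HELLO)"
```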

Bottom-up Integration
Bottom-up integration testing begins testing with the modules at the lowest levels of the program structure (atomic modules). As modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level modules are combined into clusters that perform a particular software
subfunction.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
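These steps can be sketched as follows, with hypothetical atomic modules combined into a cluster and exercised by a driver; the module names and behaviours are invented for illustration.

```python
# Atomic modules at the lowest level of the structure (hypothetical).
def parse_record(line):
    """Split a comma-separated record into fields."""
    return line.strip().split(",")

def total_amount(fields):
    """Sum numeric fields."""
    return sum(float(f) for f in fields)

# Cluster: the low-level modules combined into a software subfunction.
def billing_cluster(line):
    return total_amount(parse_record(line))

# Driver: coordinates test case input and output for the cluster.
def cluster_driver(cases):
    return [billing_cluster(line) for line in cases]

results = cluster_driver(["1.5,2.5", "10,20,30"])
# results == [4.0, 60.0]
```

Once the cluster passes, the driver is discarded and the cluster is combined with others at the next level up.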

Comments on Integration Testing

There has been much discussion on the advantages and disadvantages of bottom-up and top-down integration testing. Typically, a disadvantage of one approach is an advantage of the other. The major disadvantage of the top-down approach is the need for stubs and the difficulties linked with them. Problems linked with stubs may be offset by the advantage of testing major control functions early. The major drawback of bottom-up integration is that no working program exists until the last module is added.

Validation Testing
As a culmination of testing, the software is completely assembled as a package, interfacing errors have been identified and corrected, and a final set of software tests – validation testing – is started. Validation can be defined in various ways, but a basic one is: validation succeeds when the software functions in a fashion that can reasonably be expected by the customer.

Validation test criteria

Software validation is achieved through a series of black box tests that show conformity with
requirements. A test plan provides the classes of tests to be performed and a test procedure sets
out particular test cases that are to be used to show conformity with requirements.

Configuration review

An important element of the validation process is a configuration review. The role of the review
is to ensure that all the components of the software configuration have been properly developed,
are catalogued and have the required detail to support the maintenance phase of the software
lifecycle.

Alpha and Beta testing


It is virtually impossible for a developer to determine how the customer will actually use the program. When custom software is produced for a customer, a set of acceptance tests is performed to allow the user to check all requirements. Conducted by the end user instead of the developer, an acceptance test can range from an informal test drive to a rigorous set of tests. Most developers use alpha and beta testing to identify errors that only users seem to be able to find. Alpha testing is performed at the developer's site, with the developer looking over the customer's shoulder as they use the system and recording errors. Beta testing is conducted at one or more customer sites with the developer not present. The customer reports any problems they encounter to allow the developer to modify the system.

System Testing
Ultimately, software is incorporated with other system components, and a set of system integration and validation tests is performed. Steps performed during software design and testing can greatly improve the probability of successful software integration in the larger system. System testing is a series of different tests whose main aim is to fully exercise the computer-based system. Although each test has a different role, all work to verify that all system elements have been properly integrated and perform allocated functions. Below we consider various system tests for computer-based systems.

Recovery Testing

Many computer-based systems need to recover from faults and resume processing within a particular time. In certain cases, a system needs to be fault-tolerant. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur. Recovery testing is a system test that forces the software to fail in various ways and verifies that recovery is performed correctly.
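A recovery test of this kind can be sketched by injecting faults into a component and verifying that processing resumes. The class and function names below are hypothetical, and the retry loop is just one simple recovery mechanism assumed for the example.

```python
class FlakyService:
    """Test double that fails a set number of times before succeeding,
    simulating a forced fault for recovery testing."""
    def __init__(self, failures_before_success):
        self.remaining_failures = failures_before_success

    def call(self):
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("injected fault")
        return "ok"

def call_with_recovery(service, max_retries=3):
    """Recovery mechanism under test: retry on failure, up to a limit."""
    for _ in range(max_retries + 1):
        try:
            return service.call()
        except ConnectionError:
            continue
    raise RuntimeError("recovery failed")

# Two injected faults, then success: the system recovers and resumes.
assert call_with_recovery(FlakyService(2)) == "ok"
```

A real recovery test would also check timing – that recovery completes within the specified period.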

Security Testing

Any computer-based system that manages sensitive information, or that triggers actions that can improperly harm individuals, is a target for improper or illegal penetration. Security testing tries to verify that the protection mechanisms built into a system will protect it from improper penetration. During security testing, the tester plays the role of the individual who wants to enter the system. The tester may try to obtain passwords through external clerical means; may attack the system with customized software; or may purposely cause errors and hope to find the key to system entry. The role of the designer is to make entry to the system more costly than the value of what can be gained.

Stress Testing

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. A variation of stress testing is an approach called sensitivity testing: in some situations a very small range of data, contained within the bounds of valid data for a program, may cause extreme and even erroneous processing, or profound performance degradation.
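A stress test along these lines can be sketched as follows. The function under test, the worst-case input ordering and the response-time budget are all assumptions made for illustration; the test drives the system with abnormal volume and checks both correctness and the time budget.

```python
import time

def process_batch(records):
    """Hypothetical system function placed under stress."""
    return sorted(records)

def stress_test(volume, budget_seconds):
    """Drive the system with abnormal volume; pass only if the output is
    correct and the (assumed) response-time budget is met."""
    data = list(range(volume, 0, -1))          # reverse-ordered input
    start = time.perf_counter()
    result = process_batch(data)
    elapsed = time.perf_counter() - start
    correct = result == list(range(1, volume + 1))
    return correct and elapsed <= budget_seconds

# Moderate load passes comfortably; raising `volume` probes the limits.
print(stress_test(10_000, budget_seconds=5.0))
```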
