
Software Development

Software Testing

Testing Definitions
There are many types of test going under various
names. The following is a general list to give a
feel for the different types of testing required
during typical high-quality software development.

Acceptance Test

The test performed by the users of a new or changed system in order to approve the system and go live.
Usually carried out at the customer's premises, or using the customer's data with the customer present.
This is usually the final phase of development and results in the handover of the software product to the customer.

Active / Passive Testing

An active test introduces specific test data, and the results of processing that data are observed for correctness.
Passive testing works on real data, and the outputs are observed for correctness.

Alpha and Beta Testing

Alpha testing is the first test phase, carried out by the developers in the lab.
Beta testing is carried out when alpha testing is complete and the developers can find no errors (ideally). Beta testing is carried out by selected real users who report back to the developers. This gives the users early exposure to the software, and the developers get usability and error feedback.

Automated Testing

This is when software is used to test software. It makes tests repeatable and faster, as the user is not generally required to enter lots of data manually.
If the code is changed in a single module, all the tests would be re-run on the whole application to ensure that the changes have not affected any other modules. Retesting everything in this way is often referred to as regression testing.

White and Black Box Testing

White box testing requires knowledge of the internal operation of the software. This enables the tests to check all paths through a function, for example.
Black box testing assumes no knowledge of the internal operation; based on the specification, it inputs test data at a system level and observes the correctness of the output.

Dirty or Negative Testing

This method requires data to be input such that it maximises the chances of failure, so it is used to test the application's error-recovery effectiveness.
Examples range from user input validation through to memory allocation failure.

Functional Testing

Does the application do what it has been specified to do?
Check all menu and toolbar options.
Check all keyboard input.
Likely to take the form of a task list which a tester goes through to check all options and navigation paths. This is not a usability test, but initial feedback on usability could be collected at the same time.

Recovery Testing

How well does the application recover from hardware and software failure?
Reading a data file which you know is always available will still need error-recovery code in case the disk you are reading from fails.
How well does the system recover from the dirty test phase?
What happens if the device your application is trying to communicate with does not respond?

Test Case Testing

This consists of a set of test data, test programs (test harnesses) and a set of expected results.
Typically a set of test cases is prepared (this is a test scenario). A file of input test data and expected results, for example, is read by the test harness (test driver program). A function (a unit of code, so this is often referred to as a unit test) is called with the test data and the results are compared with the expected results. The success or failure of the test is reported and the next test is carried out. This method of testing tends to be fully automated where possible.

Test Suite

This is a collection of test scenarios, which are themselves collections of test cases.
In OOP, for example, a test suite may consist of a test scenario for each class. Each test scenario may consist of a series of tests for each of the member functions.

Usability Testing

A series of tests carried out by users or subject experts to determine ease of use.
Various techniques are used, such as:

Task driven
Observation
Keystroke and mouse recording
Interview
Think aloud
Questionnaires

Test Driven Development

This is not a test method as such, but it is test oriented. Based on the system requirements, a series of tests is specified which, when they pass, mean that a piece of working code has been produced; i.e. the test is specified before the software is written, and the software is written to pass the test.
Often used in Extreme Programming (XP), but it can be used in its own right as a development method.

Compatibility Testing

Ensures that the software is compatible with the hardware, operating systems, and other software packages that it will be working with, such as:

Works on required software platforms such as Windows and Linux.
Works on required hardware platforms such as 16-, 32- and 64-bit systems.
Works with various software releases if required, such as Windows 95, XP and Vista.

Performance Testing

Ensures that the software runs fast enough to meet speed requirements, e.g. executing algorithms in the time specified.
Ensures that the software does not consume more memory than anticipated.

Function Stubs

When developing an application as part of a team, you may need to call functions (or objects) that do not yet exist (because someone is still writing them).
Write a function which does nothing other than enable the caller to call it and return a value, or a series of values, depending on the function spec. The caller can then test the calling function's response to a variety of return values from the stubs.

Test Harness Example

Performs 16- and 32-bit compatibility tests.
Performs Test Case Testing on a function.
Uses Automated Testing.
Uses Active Testing.
Data and results are taken from a Test Scenario.
The function under test is a C function to convert a string to an integer (like atoi but with extra features).
Any errors found would need the source header updating with the modification date, author, and an overview of the problem and fix.
