
Introduction to

Testing techniques

Felix Reste
Tiberiu Chis

Copyright Autoliv Inc., All Rights Reserved


Terminology according to IEEE 610/1990

• Test = an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.

• Quality = the degree to which a system, component, or process meets specified requirements.

• Software quality (quality assurance) = a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.



Terminology according to IEEE 610/1990 and
quoting Ilene Burnstein
 Error*
1) The difference between a computed, observed, or measured value or
condition and the true, specified, or theoretically correct value or condition.
(error)
2) An incorrect step, process, or data definition. (fault)
3) An incorrect result. (failure)
4) A human action that produces an incorrect result. (mistake)
 Defect (Fault)
 A defect (fault) is introduced into the software as the result of an error.
 Failure
 A failure is the inability of a software system or component to perform its
required functions within specified performance requirements.

*Note: while all four definitions are commonly used, a common distinction assigns each definition to the term shown in parentheses.



What is SW testing?
• Can be stated as the process which verifies and validates that an application/program/product:
  • meets the requirements
  • works as expected
  • satisfies the needs of the stakeholders

OR

• A process of executing a program with the goal of finding defects

OR

• A process that assures SW quality

Why is software testing necessary? (I/II)

• Software defects may cost:
  • money
  • time
  • human lives
• The cost of fixing a defect increases with the time it spends in the system (see Boehm's law below).


Why is software testing necessary? (II/II)

• The agent of software bugs: human mistakes
• The aim of software testing: to generate information

Testing will not fix defects; it generates information that helps developers fix the defects.


Famous SW Defects (I)
 4 June 1996, Ariane 501;
 500 million dollar loss;
 Reused software, no integration tests done;



Caused by…

• The failure of Ariane 501 was caused by a complete loss of guidance due to the inertial reference system:
  • conversion of a float variable into an integer value.
Famous SW Defects (II)

• Mars Climate Orbiter
  • Lost communication on September 23rd, 1999.
  • Lockheed Martin, contrary to the Software Interface Specification (SIS), used United States customary units (pound-seconds, lbf·s), while NASA, following the SIS, used metric units (newton-seconds, N·s).


Types of software quality assurance (I/II)

• Constructive – activities which prevent defects
• Analytical – activities which find defects


Types of software quality assurance (II/II)

SW-QA
• Constructive (Developer): prevent, fix
• Analytical (Tester): find


Software testing and software quality

• Software quality consists of*:
  • Functional quality attributes: Functionality
  • Non-functional quality attributes: Reliability, Usability, Efficiency, Maintainability, Portability

*Note: according to ISO/IEC 9126.


Functional quality attributes

Functionality comprises:
• Suitability – the SW does what it has to do and not more.
• Accuracy (correctness) – the SW meets the requirements.
• Compliance (completeness) – the SW meets all the requirements.
• Interoperability – the SW has to work with other SW.
• Security – no access to unauthorized data.


Non-functional quality attributes

• Reliability – how long does it run until a fault occurs?
• Usability – how easy is it to learn to use the SW?
• Efficiency – how much does it do with how many resources?
• Maintainability – how easy is it to make modifications in the code?
• Portability – how easy is it to transfer it to a new environment?


Seven principles of testing (I)

1. Testing shows the presence of defects.
  • Testing can prove the presence of defects, but not their absence.
  • The absence of found defects does not prove the correctness of the software.
2. Exhaustive testing is impossible.
  • Exhaustive testing is an approach in which the test suite contains all combinations of input data and preconditions.
  • For instance, if you have a set of 4 signals, each having 16 possible values (from 0x0 to 0xF), then to test all the valid combinations you would need 65536 (16^4) tests.
Seven principles of testing (II)

3. Early testing.
  • The earlier a defect is discovered, the less costly it is to correct.
  • The least effort and cost are needed when causes of failures are corrected in the concept phase.
  • The time for test preparation and review must be taken into account too.
4. Defect clustering.
  • When a defect is found, more may be found nearby.
  • As soon as a defect is found, a good practice is to focus on the module or functionality where it was found and on those it exchanges information with.


Seven principles of testing (III)

5. Pesticide paradox.
  • "Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual." (Boris Beizer)
  • Repeating tests under the same conditions is ineffective.
  • The input of each test case should be unique and check a certain feature of the test object.
6. Testing is context dependent.
  • Each object subjected to a test is tested differently.
  • Testing is done differently in different contexts (e.g. test lab vs. production environment).


Seven principles of testing (IV)

7. Absence of errors fallacy.
  • If the software does not meet the customer's needs and expectations, it is not useful.
  • Successful testing finds the most serious failures.
  • This alone does not prove the quality of the software.
  • Finding no defects does not mean the software is free of defects.


Independence levels in testing

From lowest to highest independence level:
1. Developer test
2. Team of developers
3. Test teams
4. Outsourcing


How much testing is enough?

• Testing could, in principle, never end because:
  • we cannot test everything
  • all systems have project risks
  • our resources are finite
• But:
  • Testing = risk assessment/management
  • Goal: manage risk (not perfect software)


What and how much shall we test?
• Prioritize the tests
  • We have to run important tests first
• Completion / exit criteria
  • Define the conditions that make testing finished at the beginning
  • Example:
    • All planned tests have been finished
    • All requirements have been covered
    • All functional requirements have been reviewed
    • All functional requirements have been tested
    • All known and critical defects have been corrected
    • 80% of all branches have been executed


Test strategies used in Autoliv

• Full test – all the planned requirements are tested;
• Delta – only the newly added functionality is tested; the rest of the requirements were tested previously;
• Sanity or smoke tests – test the base functionality of the ECU. The scope is to see that no major issues are present after a SW release;
  • e.g.: flashing is possible, communication works, no major error is present;
• Regression test – selective retesting of the system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. The test focuses only on the modified functionality: if a functionality was modified, only that functionality is retested;
Verification and Validation (I)
(According to ISO 9000)
• During verification we want to see that the design outputs meet the design inputs.
  • This is usually done while the design is still in CAD (computer-aided design) or on paper, before tooling and expensive prototypes are made.
• Validation, on the other hand, is when we want to see if the early sample parts actually work and meet the specified design inputs and outputs.
Verification and Validation (II)

• Verification – we evaluate whether everything produced in a development phase satisfies the conditions imposed at the start of that phase.

• Validation – we evaluate, during or at the end of the development process, whether the product satisfies the specified requirements.
Verification and Validation (III)

• Verification – have we done things right?
• Validation – have we done the right thing?
• Not a general rule, but they also answer the following questions:
  • "Did/does it?" – verification
  • "Will it?" – validation


How we do it here, V Cycle diagram



V Cycle diagram - simplified

• Acceptance Testing verifies the Requirement Analysis
• System Testing verifies the Functional Specification
• Integration Testing verifies the Technical Specification
• Component Testing verifies the Component Specification
• Coding sits at the bottom of the V


Testing across V-cycle

• Static tests: code inspections, MISRA, structural analysis
• Dynamic tests:
  • Structural tests (white box): module tests
  • Functional tests (black box): component tests, integration tests, system tests
• Regression tests are done by developers & testers.


Categories

• Dynamic
  • White box: statement coverage, branch/decision coverage, condition coverage, path coverage
  • Black box: equivalence partitioning, boundary value analysis, state transition, decision tables
• Static
  • Reviews
  • Control flow analysis
  • Data flow analysis
  • Metrics


Static Techniques (I)

• Reviews
  • "A process or meeting during which a software product* is examined by project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval." (IEEE 1028)
  • The main advantage of this technique is that, at low cost, errors are found in early stages of the project, which may save a lot of time and money in the later stages.

*Note: Here, software product refers to "any technical document or partial document, produced as a deliverable of a software development activity".



Static Techniques (II)

• Control Flow Analysis
  • This static code analysis technique represents the program as a control flow graph with the aim of determining inconsistencies.
  • In this visual form, dead code and/or unreachable branches may be easily spotted.
  • It is usually done using a tool.
  • The nodes represent statements and the edges represent control flow transfer.



Static Techniques (III)
Example: Control Flow Analysis – Code to Control Flow Graph

[Figure: a control flow graph whose numbered nodes (2–16) are the statements of a sample program and whose edges are the control flow transfers; it has N = 12 nodes and E = 15 edges.]
Static Techniques (IV)

• Data Flow Analysis
  • This static code analysis technique represents each variable as a state sequence.
  • Suppose that n is our variable and it has the following states during the execution of the program:
    • u – no value is assigned to n (undefined), e.g.: int n
    • d – a value is assigned to n (defined), e.g.: n = 1
    • r – a reference is taken and the value of n does not change (referenced), e.g.: if (n == 0) or x = y + n
  • A state sequence may look like: u d r r d r u



Static Techniques (V)

• From such a state sequence we may identify the following inconsistencies:
  • ur – an undefined value gets referenced
    • e.g.: int n; x = n + 1;
  • du – a defined value becomes undefined before being read
    • e.g.: n = 1; return;
  • dd – a defined value gets defined again before being read
    • e.g.: n = 0; n = 1;



Static Techniques (VII)
• Metrics
  • Metrics have relevance only for the measurable aspects of a program.
  • Examples: number of lines of code, cyclomatic number, etc.
  • The advantage of using metrics is that they reflect the complexity of a program part and help in management decisions regarding the estimation of time needed for testing.
  • The cyclomatic complexity M is calculated as follows:
    M = E – N + 2P, where
    E = number of edges in the graph
    N = number of nodes in the graph
    P = number of program parts
    For the example on slide Static Techniques (III) we have:
    E = 15, N = 12, P = 1 (usually P = 1, because we calculate M for each program part) => M = 15 – 12 + 2 × 1 = 5
  • Modules with M >= 10 need to be reworked.



Test types
• Static tests
  The software is not executed but analyzed offline. This category includes code inspections, QAC checks, cross-reference checks…
• Dynamic tests
  These require the execution of the software or parts of it (using stubs). They can be executed in the target system, an emulator, or a simulator.
  • Module test
    A module is the smallest compilable unit of source code. Often it is too small to allow functional tests (black-box tests).
  • Component test
    This is the black-box test of modules or groups of modules which represent a certain functionality.
  • Integration test
    The software is completed step by step and tested by tests covering the collaboration of modules or classes. The integration depends on the kind of system.
  • System test
    This is a black-box test of the complete software in the target system. The environmental conditions have to be realistic (complete original hardware in the destination environment).
Dynamic Techniques

• White-box testing
  • We know the internal structure (knowledge of the source code details)
• Black-box testing
  • We don't know the internal structure



Testing

• White box:
  • Statement testing
  • Branch / decision testing
  • Path testing
  • Data flow testing
• Black box:
  • Equivalence partitioning
  • Boundary value analysis
  • Interface testing
  • Memory testing
  • State transition testing
  • Real time testing


White Box Testing
(Structural Testing)



Statement Testing
• Also known as line coverage or segment coverage
• Every line of the code (individual statement) needs to be checked and executed
• It is measured via statement coverage, which is expressed as a percentage



Statement testing – example (I)

read x
read y
if x > y then
    print "x is greater than y"
    if x > 5 then
        print "x is greater than 5"
    end if
else
    print "x is less or equal than y"
end if

(7 statements in total: the two reads, the two decisions, and the three prints.)


Statement testing – example (II)

• To achieve 100% statement coverage we need to do the following steps:
  • Test 1: we choose x = 20 and y = 10 and we expect the output to report x > y and x > 5
• We have executed 6 out of 7 statements, obtaining 86% statement coverage
Statement testing – example (III)

• We have 1 statement left which was not executed
  • Test 2: we choose x = 10 and y = 20 and we expect the output to report x <= y
• We have now executed the 7th statement, obtaining 100% statement coverage



Branch / Decision testing

• A branch is the outcome of a decision
• A decision is an IF statement, a loop control statement (do-while or repeat-until), or a case statement, where there are two or more outcomes from the statement
• If in statement testing we were focusing on the nodes of the control graph, here we are interested in the edges
• It is measured via branch/decision coverage, which is expressed as a percentage



Branch / Decision testing – example (I)

• We will use the same example as in statement testing
• We can identify 4 branches:
  • branch 1: x > y is true
  • branch 2: x > 5 is true
  • branch 3: x > 5 is false
  • branch 4: x > y is false


Branch / Decision testing – example (II)

• To achieve 100% branch coverage we need to do the following steps:
  • Test 1: we choose x = 20 and y = 10 and we expect the output to report x > y and x > 5
    • We have exercised branches 1 and 2, obtaining 50% coverage so far
  • Test 2: we choose x = 10 and y = 20 and we expect the output to report x <= y
    • Now we have covered branch 4, obtaining 75% coverage
  • Test 3: we choose x = 4 and y = 1 and we expect the output to report x > y and x <= 5
    • Now we have covered branch 3, obtaining 100% coverage
Statement vs Branch / Decision testing

• As we may have observed so far, using the same piece of code we need 2 tests to obtain 100% statement coverage and 3 tests to obtain 100% branch/decision coverage
• An additional example (figure omitted): here, just 1 test obtains 100% statement coverage, but it does not cover all branches.


Condition testing

• In condition testing we have to test every atomic condition
• What does this mean?
  • e.g.: if (n >= 10 && m < 5)
    n >= 10 && m < 5 is the condition
    n >= 10 is one atomic condition
    m < 5 is another atomic condition
• We may define an atomic condition as an expression that contains only relational operators (<, <=, ==, !=, >=, >) and no logical operators.



Path testing

• Structural (white-box) testing technique used for designing test cases intended to examine all possible paths of execution at least once
• The starting point for path testing is a program flow graph
• In the example graph (edges labelled a–g) the possible paths are:
  • a-b-c-d-e
  • a-b-f-e
  • a-g
  3 possible paths!


Path testing – example (I)

• Adding an extra edge (h) to the previous example changes the entire situation
• We still have the previously identified 3 paths, but the h-edge forms a loop
• We do not know how many times this loop will be executed, so the number of paths is 3 + x, where x is the number of executed loop iterations


Path testing – example (II)

The slide shows the same fragment four times, each time highlighting one of the four possible execution paths through the two decisions:

if (a < 5)
    b = 30;    /* then */
else
    b = 15;    /* else */
if (a == 1 || x == 5 || y > 3)
    b = a * c; /* then */
/* else: do nothing */



Black Box Testing
(Functional Testing)



Equivalence partitioning

• Equivalence partitioning (EP) is a specification-based or black-box technique
• It can be applied at any level of testing and is often a good technique to use first
• The idea behind this technique is to divide (i.e. to partition) a set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence 'equivalence partitioning'



Equivalence partitioning

• In the equivalence-partitioning technique we need to test only one condition from each partition
• The partitions may not overlap and no gaps are allowed



Equivalence partitioning - example

if (x >= 50 && x <= 100)

For this example there are 3 equivalence classes:
• Class 1: x <= 49
• Class 2: x = 50 to x = 100
• Class 3: x >= 101
It would be sufficient to select the values x = 40, x = 55 and x = 120 for test.

An equivalence class of invalid values contains values not expected by the program, e.g. negative numbers or alphanumeric characters.
Boundary-Value Analysis (I)

• Boundary value analysis (BVA) is based on testing at the boundaries between partitions;
• Here we have both valid boundaries (in the valid partitions) and invalid boundaries (in the invalid partitions).

Input: integer DAY, where 1 <= DAY <= 31
• Valid inputs: 1, 31
• Invalid inputs: 0, 32


Boundary-Value Analysis (II)
• Special attention has to be paid to the boundaries of ranges;
• A test has to be performed INSIDE, ON and OUTSIDE of a boundary;

if (x >= 50 && x <= 100)
Value range: 50 <= x <= 100
• Lower boundary – 1*: 49; lower boundary: 50; lower boundary + 1*: 51
• Higher boundary – 1*: 99; higher boundary: 100; higher boundary + 1*: 101

• Boundaries defined by the data type have to be taken into account and have to be checked:
  • (signed) char x => test case x = 128 has to be included
  • unsigned char x => test case x = 255 has to be included

*Note: "1" was chosen because it is the smallest step defined for the value.



Interface testing
• Interface testing is performed to evaluate whether systems or components pass data and control correctly to one another
• It verifies that all the interactions between these modules work properly and that errors are handled properly
• Verify that communication between the systems is done correctly
• Verify that all supported hardware/software has been tested
• Verify that all linked documents can be supported/opened on all platforms
• Verify the security requirements or encryption while communication happens between systems


State transition testing (I)
• Most methods deal with system behavior only in terms of input and output data; in this case different states are NOT taken into account
• State transition testing uses a model of the states the component may occupy, the transitions between those states, the events which cause those transitions, and the actions which may result from those transitions
• State transition diagrams are widely used within the embedded software industry and technical automation
State transition testing (II) - Design

• A test case must include the following specification:
  • Starting state
  • Input of the component
  • Expected output of the component
  • Expected final state
• Transitions within the test case must include the following specification:
  • Starting state of the component
  • Expected next state
  • Event which caused the transition to the next state
  • Expected action caused by the transition



State transition testing (III) – example I/II

• For an easier view, a transition tree is constructed which helps to determine the test cases;
• The initial state is called the root, while the end states are the leaves of the tree;
• Every path from the root to a leaf represents a test case.
• Exit criteria:
  • Every state was visited at least once
  • Every transition was covered at least once

[Transition tree: Start → (card is inserted) → Enter PIN → (PIN is entered) → 1st try; from each try, "PIN is OK" leads to Access account and "PIN is NOK" leads to the next try (2nd, then 3rd); after the 3rd NOK the card is withheld.]


State transition testing (IV) – example II/II

State 1 | State 2   | State 3 | State 4        | State 5        | State 6        | End state
Start   | Enter PIN | 1st try | Access account |                |                | Access account
Start   | Enter PIN | 1st try | 2nd try        | Access account |                | Access account
Start   | Enter PIN | 1st try | 2nd try        | 3rd try        | Access account | Access account
Start   | Enter PIN | 1st try | 2nd try        | 3rd try        | Withhold card  | Withhold card

• The number of leaves does NOT give the minimum number of required test cases, but covering every root-to-leaf path guarantees that all nodes and transitions were transited at least once;
• Only test cases can be derived from the tree, NOT functionality.
Other testing techniques (I)

• Experience-based techniques
  • Can be used when, for some reason, systematic testing is not feasible:
    • The specification is unsuitable for creating test cases
    • There is not enough time to execute a well-structured test
• Error guessing
  • Tests are created based on the knowledge, intuition and experience of the tester
  • For finding defects which may be difficult to find with systematic tests
  • Testing already known or expected SW errors


Other testing techniques (II)

• Performance test – how fast does the system perform a specific task?
• Load test – how does the system behave under load?
• Stress test – what happens when we exceed the load?
• Volume test – how does the system react when it has to process high amounts of data?


Choosing a technique

• Several aspects for choosing a technique:
  • Type of the system
  • The goal of testing
  • Regulations
  • User requirements
  • Level of risk
  • Type of risk
  • Available documentation
  • The knowledge of the tester
  • The development cycle
  • Use case models


Software Testing Literature
• ISTQB
• Standard for Software Test Documentation (IEEE 829)
• Standard for Software Verification and Validation (IEEE 1012)
• Standard for Software Unit Testing (IEEE 1008)
• Standard Glossary of Software Engineering Terminology (IEEE 610)
• MISRA-C:2012 (www.misra-c.com)
• "Software Testing: Testing Across the Entire Software Development Life Cycle", Gerald D. Everett, Raymond McLeod Jr.
• "Software Testing and Analysis: Process, Principles and Techniques", Mauro Pezzè, Michal Young


