
Test Metrics

Reference from: GLT Testing Services


What are Test Metrics?
Parameters to objectively measure the software testing process on various aspects:
– Test Effort
– Test Schedule
– Test Status
– Defects
– Test Efficiency
– Test Effectiveness
Why do we need Test Metrics?
• To quantitatively analyze the current level of maturity in testing and set goals/objectives for the future
• To provide a means for managing, tracking and controlling the status of testing
• To provide a basis for estimation (today’s data is tomorrow’s historic data)
• To objectively measure the effectiveness and efficiency of testing
• To identify areas for process improvement
• To give insight into the quality of the product
Index
Base Metrics
• Project Management Metrics
• Test Progress Metrics
• Defect Metrics
Derived Metrics
• Test Efficiency Metrics
• Test Effectiveness Metrics
• Group Standard Testing Metrics
Project Management Metrics
• Effort Variance
Indicates deviation of actual effort from planned effort for the project.

Where collected: All Phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks
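The slide defines effort variance only verbally; a common formulation is the percentage deviation of actual from planned effort. A minimal sketch (the function name and the percentage form are assumptions, not from the deck):

```python
def effort_variance(planned_days: float, actual_days: float) -> float:
    """Deviation of actual effort from planned effort, as a percentage.

    Positive means more effort was spent than planned. The percentage
    form is an assumed convention; the slide states only 'deviation'.
    """
    return (actual_days - planned_days) / planned_days * 100
```

For example, 77 actual person days against 80 planned gives -3.75%, i.e. slightly under plan.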

[Chart: Test Effort Variance (Planned vs Actual): planned vs actual effort in person days for release versions 1–5]
Project Management Metrics
• Schedule Variance
Indicates deviation of actual schedule dates (start and end dates) from planned schedule dates for the project.

Where collected: All Phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks
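Schedule variance compares dates rather than effort, so it is naturally expressed in calendar days of slippage. A hedged sketch (the day-count convention is an assumption):

```python
from datetime import date

def schedule_variance_days(planned: date, actual: date) -> int:
    """Slippage of an actual date against its planned date, in calendar
    days. Positive means the milestone happened later than planned."""
    return (actual - planned).days
```

Applied to both the start and end dates of each release, this yields the series plotted in the chart.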

[Chart: Schedule Variance: planned vs actual start and end dates for release versions 1–3, Apr–Sep 2007]
Test Progress Metrics
• Test Case Preparation Status
Indicates status of test case preparation against the number of test cases planned to be written.

Where collected: Test Case Preparation Phase
Data Source: Manual Tracking, QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Case Preparation Status: Test Cases Completed: 50, In Progress: 40, Not Completed: 10]
Test Progress Metrics
• Test Execution Status
Indicates status of test execution against the number of test cases planned to be executed.

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks
Test Progress Metrics
• Planned vs Actual Execution
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks

[Chart: Test Status S-Curve: cumulative planned vs actual test case execution by week, weeks 1–3]
Defect Metrics
• Defects Distribution by Severity/Status
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks
Defect Metrics
• Defects Distribution by Root cause
Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Process Improvement

[Chart: Defects Distribution by Root Cause: number of defects per root cause (Requirements miss/unclear, Environment, Integration Test, Code/Unit Test, Script miss/Record, Control Data Setup)]
Test Efficiency Metrics
• Test Case Writing Productivity
Total Test Cases created / Total person days of effort involved in creating test cases

Where collected: Test Case Preparation Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks; analyse test efficiency; Process Improvement

• Test Case Execution Productivity


Total Test Cases executed / Total person days of effort involved in executing test cases

Where collected: Test Execution Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks; analyse test efficiency; Process Improvement
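Both productivity metrics share the same shape: a count of test cases divided by person days of effort. One helper covers writing and execution (the name is illustrative):

```python
def productivity(test_cases: int, person_days: float) -> float:
    """Test cases per person day; works for both writing productivity
    (cases created) and execution productivity (cases executed)."""
    if person_days <= 0:
        raise ValueError("person_days must be positive")
    return test_cases / person_days
```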
Test Efficiency Metrics
• % (Review + Rework) Effort
(A / B) X 100
where A = (Review + Rework) effort for writing test cases
      B = Total effort for writing test cases (creation + review + rework)

Where collected: Test Case Preparation Phase
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Track progress, manage and replan tasks; analyse test efficiency; Process Improvement
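As a ratio of two effort figures this translates directly; note that the denominator already includes the review and rework effort. A sketch (names assumed):

```python
def review_rework_pct(review_rework_days: float, total_days: float) -> float:
    """(A / B) x 100: share of test-case writing effort spent on review
    and rework, where total_days covers creation + review + rework."""
    return review_rework_days / total_days * 100
```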

Test Efficiency Metrics
• Rejected Defects
% rejected defects = (Number of defects rejected / Total number of defects logged) X 100

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test efficiency, Process Improvement
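A sketch of the rejection percentage, using the counts a defect tracker such as QC would report (the function name is an assumption):

```python
def rejected_defect_pct(rejected: int, total_logged: int) -> float:
    """% rejected defects = (defects rejected / defects logged) x 100."""
    return rejected / total_logged * 100
```

For instance, 5 rejected defects out of 159 logged gives roughly 3.1%.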

[Charts: Defects Detected vs Rejected: 154 valid vs 5 rejected, a rejection ratio of about 3%; Root Cause of Rejected Defects: Code/Unit Test, Data, Use Case Update, Working As Designed]
Test Effectiveness Metrics
• Defect Removal Efficiency (DRE)
% DRE (for a testing phase) =
(# valid defects detected in the current testing phase /
 (# valid defects detected in the current phase + # valid defects detected in the next testing phase)) X 100

% DRE (for all testing phases) =
(# valid defects detected pre-production /
 (# valid defects detected pre-production + # defects detected post-production)) X 100

Where collected: Post Test Execution Phase
Data Source: QC, defects data for next testing phase
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness, Process Improvement
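Both DRE variants have the form found / (found + escaped); only the scope of "escaped" changes (next phase vs post-production). A single sketch covers both:

```python
def dre_pct(found: int, escaped: int) -> float:
    """Defect Removal Efficiency as a percentage.

    For a single phase, `escaped` is the count of valid defects found
    in the next testing phase; for all phases combined, it is the
    count found post-production.
    """
    return found / (found + escaped) * 100
```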

Test Effectiveness Metrics
• Requirements Coverage
Indicates the distribution of requirements covered by test cases, along with their status.

Where collected: Test Case Preparation and Test Execution Phases
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness, Process Improvement
Test Effectiveness Metrics
• Test Coverage
# valid defects not mapped to test cases vs # valid defects mapped to test cases

Where collected: Test Execution Phase
Data Source: QC
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness, Process Improvement
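The coverage figure can be computed as the mapped share of all valid defects. A sketch (the percentage form is an assumption):

```python
def mapped_defect_pct(mapped: int, unmapped: int) -> float:
    """Share of valid defects that trace back to an existing test case;
    a low value suggests gaps in test coverage."""
    return mapped / (mapped + unmapped) * 100
```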

[Chart: Test Coverage: Valid Defects Mapped to Test Cases: 89%, Not Mapped: 11%]
Test Effectiveness Metrics
• Cost of Quality
Total testing effort (person days) / # valid defects found

Where collected: Test Execution Phase
Data Source: QC, NIKU
Main User: Test Lead, Test Manager
Reason for Use: Analyse test effectiveness and efficiency, Process Improvement
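As defined here, cost of quality is simply effort per valid defect found, so a lower value means cheaper defect discovery. A sketch (the guard for zero defects is an added assumption):

```python
def cost_per_valid_defect(testing_effort_days: float, valid_defects: int) -> float:
    """Total testing effort (person days) per valid defect found."""
    if valid_defects == 0:
        raise ValueError("no valid defects found; metric undefined")
    return testing_effort_days / valid_defects
```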

Group Standard Testing Metrics
• Testing Effort
Actual Testing Effort / Total Project Effort (in person hours)

Where collected: All testing phases
Data Source: NIKU
Main User: Test Lead, Test Manager
Reason for Use: Measures the proportion of effort spent on testing against the whole project

Group Standard Testing Metrics
• Test Effectiveness Indicator
A / (A + B)
where A = number of defects found in all test stages
      B = number of latent defects found during the first month after implementation

Where collected: All testing phases
Data Source: QC, manual submission by Testing CoE or Regional IT Quality Leads
Main User: Test Lead, Test Manager
Reason for Use: Measures the proportion of defects discovered in all formal testing stages (i.e. SIT, UAT, NFR or performance test, OAT, etc.) against those discovered in the first month of production operations.
