
CODE QUALITY

Introduction to Metrics for Implementation and Maintenance

Software is intangible
Since the end user can't examine the product except through the UI and external behaviour, why should we care how well the code was designed and implemented?

If the software works, who cares about the quality of the code?

Does the Quality Model give any pointer towards the answer?


ISO SQuaRE Quality-in-Use model


Quality model for Quality-in-Use: the users' view of quality, i.e. the degree to which a product used by specific users meets their needs to achieve specific goals.

Quality in Use characteristics: Effectiveness, Efficiency, Satisfaction, Safety, Context coverage.


View of quality depends on context


Whose view do we care about? The view of users - but users in the broadest sense: each has particular needs corresponding to their context of use.

(Figure: the effect of the software product, experienced across various contexts of use, gives rise to quality-in-use attributes and properties, which are captured by quality-in-use measures.)

The quality in use of a system characterizes the impact that the system has on stakeholders.

When is Code Quality important?

It is important when there is change which requires us to maintain the code. This also includes working with code during development - in modern iterative and incremental processes, the code is effectively in a constant state of maintenance.

We now need to ask: what code characteristics contribute to good maintainability?


ISO System/Software Product Quality Model


Product Quality characteristics: Functional Suitability, Performance Efficiency, Compatibility, Usability, Reliability, Security, Maintainability, Portability.


Maintainability Sub-characteristics
Modularity: To what extent is the program composed of discrete components such that a change to one component has minimal impact on others?

Reusability: How well can an asset be used in more than one system, or in building other assets?

Analysability: How easy is it to find deficiencies or the cause of failure if the software does not function correctly? How easy is it to work out what needs to be modified when we want to change or add functionality?

Modifiability: How easy is it to make modifications without introducing defects or degrading performance?

Testability: How easy is it to establish test criteria and perform tests to determine whether those criteria have been met?

Common metrics for code quality


Ref: http://www.sqa.net/softwarequalitymetrics.html
Omitting test-based metrics:

Size: measured in lines of code or function points.

Cyclomatic complexity: a measure of the structural complexity of code.

Cohesion: measures how well the source code in a given module is related and works together to provide a single function.

Coupling: measures the level of dependency among software components.

They can all be quantified. They are all related to one or more software quality characteristics. Many relate closely to a programmer's view of quality.

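As a rough, hypothetical illustration of how cyclomatic complexity is often counted (this example is not from the slides): one common rule of thumb is 1 for the straight-line path plus 1 for each decision point (if, loop, case, short-circuit boolean operator) in a method.

// Hypothetical example; class and method names are illustrative.
// Cyclomatic complexity of countValidOrders = 1 + 1 (for) + 1 (if) + 1 (&&) = 4.
class ComplexityExample {
    static int countValidOrders(int[] quantities, int limit) {
        int valid = 0;
        for (int q : quantities) {        // decision point: loop
            if (q > 0 && q <= limit) {    // decision points: if and &&
                valid++;
            }
        }
        return valid;
    }
}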

Metrics Suites: the CK Object-Oriented Metrics

Measures for OO software

o Cyclomatic complexity
o Cohesion
o Coupling


How should we measure these for object-oriented code? Are there any other measures, specific to OO code, that we could add to the list?


OO Complexity
What is the goal? How does complexity impact quality? How should we measure the structural complexity of an OO program? Beyond cyclomatic complexity, is there any other relevant indicator of complexity we could use?

For example, a relevant OO heuristic is:


Minimize the number of messages in the protocol of a class


OO Cohesion
How to measure the level of cohesion for an OO program? For example, relevant OO heuristics are:
Keep related data and behaviour in one place; keep non-related information out.
A class should capture one and only one key abstraction.
Most of the methods defined on a class should be using most of the data members most of the time.
A class should have only one reason to change.
Do not create God classes.

OO Coupling
How to measure coupling within an OO program? For example, relevant OO heuristics are:
Minimize the number of classes with which the class collaborates.
Minimize the number of messages sent between a class and its collaborators.
Minimize fan-out in a class.

Any other useful OO measures beyond complexity, cohesion and coupling?



The CK Metrics Suite



A huge number of OO metrics have been proposed, and more are still being proposed. The current consensus is that it is more important to conduct empirical studies to validate or refute existing metrics than to propose new ones.

One very influential suite is the CK (Chidamber and Kemerer) metrics suite. The suite has been the subject of many empirical studies.

The goal in creating the suite was to provide a universal tool for assessing OO software in terms of attributes such as good abstraction, coupling, cohesion and complexity. The suite comprises six metrics.

1 Weighted Methods Per Class (WMC)    (high number: bad; small number: good)

WMC is the sum of the complexities of the methods of a class. The method of determining complexity is left open for choice; for example, cyclomatic complexity could be used. Sometimes, for ease of calculation, all complexity weights are fixed at 1, which reduces WMC to simply counting the number of methods. Suppose class C has methods M1, ..., Mn defined in the class, and let c1, ..., cn be the complexities of those methods. Then:

WMC = c1 + c2 + ... + cn

WMC can indicate the effort required to implement and test a class. Empirical studies show that WMC is a good indicator of potentially defective classes.
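A minimal sketch (the class below is hypothetical, not from the slides): with all weights fixed at 1, WMC is simply the method count; with cyclomatic complexity as the weight, it is the sum of the per-method complexities.

// Hypothetical class. With unit weights, WMC(Account) = 3 (three methods).
// With cyclomatic complexity as the weight: deposit = 2, withdraw = 3, getBalance = 1,
// so WMC(Account) = 2 + 3 + 1 = 6.
class Account {
    private long balanceCents;

    void deposit(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
        balanceCents += cents;
    }

    void withdraw(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
        if (cents > balanceCents) throw new IllegalStateException("insufficient funds");
        balanceCents -= cents;
    }

    long getBalance() {
        return balanceCents;
    }
}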

2 Depth of the Inheritance Tree (DIT)


This is the depth of the class in the tree, defined as the length from the node (the class) to the root of the inheritance tree. This measures how many ancestor classes can potentially affect the class.

(Example: a table listing classes Class1 ... Class4 with their DIT values.)

The deeper a class such as Class3 sits in the inheritance tree, the more difficult it is to understand and predict its behaviour, and the harder it is to test.

This metric has become less useful in practice as engineers are more aware of the dangers and avoid creating deep, fragile inheritance hierarchies.
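A minimal sketch of DIT on a hypothetical hierarchy (names are illustrative; conventions differ on whether the root itself counts as depth 0 or 1):

// Counting edges up to java.lang.Object as the root:
// DIT(Vehicle) = 1, DIT(Car) = 2, DIT(ElectricCar) = 3.
// A change to Vehicle or Car can potentially affect ElectricCar.
class Vehicle { }
class Car extends Vehicle { }
class ElectricCar extends Car { }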

3 Number of Children (NOC)


This is the count of the immediate successors of a class in the inheritance hierarchy

(Example: a table listing classes Class1 ... Class4 with their NOC values.)

The number of children indicates the degree of potential influence on the design and where to focus testing effort. A high NOC also suggests a greater likelihood of improper abstraction.

A danger, for example, is that as the number of children increases, the quality of the abstraction represented by the parent is compromised.
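A minimal sketch with a hypothetical hierarchy (names are illustrative): only immediate subclasses count towards NOC.

// NOC(Shape) = 3: Circle, Square and Triangle extend Shape directly.
// GradientCircle is not counted towards NOC(Shape) - it is not an immediate child
// (it counts towards NOC(Circle) instead).
abstract class Shape { }
class Circle extends Shape { }
class Square extends Shape { }
class Triangle extends Shape { }
class GradientCircle extends Circle { }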

4 Coupling between Object Classes (CBO)    (small number: good)

CBO is the number of classes to which a given class is coupled. Here, one class is coupled to another if it invokes methods of the other.

(Example: a table listing classes ClassA ... ClassD with their CBO values.)

As CBO increases, the reusability of the class decreases. Modification and subsequent testing are also more difficult. As always, we should try to minimize unnecessary coupling in our software.
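A minimal sketch (hypothetical classes, not from the slides) using the definition above, where a class is coupled to another if it invokes that class's methods:

// OrderService invokes methods of InventoryClient and of EmailClient,
// so CBO(OrderService) = 2.
class InventoryClient {
    boolean reserve(String sku) { return true; }
}

class EmailClient {
    void sendConfirmation(String address) { /* ... */ }
}

class OrderService {
    private final InventoryClient inventory = new InventoryClient();
    private final EmailClient email = new EmailClient();

    void placeOrder(String sku, String address) {
        if (inventory.reserve(sku)) {        // coupling to InventoryClient
            email.sendConfirmation(address); // coupling to EmailClient
        }
    }
}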

5 Response for a Class (RFC)    (small number: good)

The response set of a class is the set of methods that can potentially be executed in response to a message received by a member of that class
RFC = |RS|, where RS is the response set for the class:

RS = {M} ∪ (∪ over all i of {Ri})

where {Ri} = the set of methods called by method Mi, and {M} = the set of all methods in the class.
(Example: ClassA has two methods; one calls myB.MB1() and myB.MB2(), the other calls myB.MB2(). The response set consists of ClassA's own two methods plus MB1 and MB2, so RFC(ClassA) = 4.)
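A sketch of that example in code (the method names a1 and a2, and class ClassB, are assumed; the slide only shows the call bodies):

// RS(ClassA) = {a1, a2} ∪ {MB1, MB2} ∪ {MB2} = {a1, a2, MB1, MB2}, so RFC(ClassA) = 4.
class ClassB {
    void MB1() { }
    void MB2() { }
}

class ClassA {
    private final ClassB myB = new ClassB();

    void a1() { myB.MB1(); myB.MB2(); } // calls MB1 and MB2
    void a2() { myB.MB2(); }            // calls MB2
}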


5 Response for a Class (RFC) (cont.)

A large RFC is often correlated with more defects. Classes with a high RFC tend to be more complex and more difficult to test. A worst-case value for possible responses is useful for allocating test time. Studies in C++ indicated that an increase in RFC was correlated with increased defect density and decreased quality. With automated tools, we can count beyond the first level and count the full response set.


6 Lack of Cohesion in Methods (LCOM)


LCOM attempts to indicate how closely (or otherwise) local methods are related to instance variables. Since we want more cohesion, a smaller LCOM value is better.

LCOM is a count of the number of method pairs whose similarity is 0, minus the count of method pairs whose similarity is non-zero. The larger the number of dissimilar methods, the less cohesive the class. A high value could indicate a God class with many different objectives.

Consider a class C1 with n methods M1, M2, ..., Mn. Let {Ij} = the set of instance variables used by method Mj. There are n such sets {I1} ... {In}.

Let P = {(Ii, Ij) | Ii ∩ Ij = ∅}   (pairs of methods that access no instance variables in common)
and Q = {(Ii, Ij) | Ii ∩ Ij ≠ ∅}   (pairs of methods that access some instance variables in common).
If all n sets {I1} ... {In} are empty, let P = ∅.

LCOM = |P| - |Q|   if |P| > |Q|
     = 0           otherwise


6 LCOM (toy example)


Method m1() uses instance variables a1, a2, a3.
Method m2() uses instance variables a1 and a5.
Method m3() uses instance variable a4.


Any LCOM value greater than 0 indicates a lack of cohesion

P = 2, Q = 1, so LCOM = |P| - |Q| = 1.

Note that this metric assumes that methods access instance variables directly. There are several modifications to LCOM to accommodate other styles of access.
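The same toy example written as a hypothetical Java class, assuming the methods read or write the variables directly:

// Shared-variable pairs:    (m1, m2) share a1      -> |Q| = 1
// No-shared-variable pairs: (m1, m3) and (m2, m3)  -> |P| = 2
// LCOM = |P| - |Q| = 1 > 0, indicating a lack of cohesion.
class Toy {
    private int a1, a2, a3, a4, a5;

    void m1() { a1 = a2 + a3; } // uses a1, a2, a3
    void m2() { a5 = a1; }      // uses a1, a5
    void m3() { a4++; }         // uses a4
}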

LCOM (example 2)

import java.util.LinkedList;
import java.util.List;

// PoppedWhenEmpty is an exception type assumed to be defined elsewhere in the course code.
public class Stack {
    private int topOfStack = 0;
    List<Integer> elements = new LinkedList<Integer>();

    public int size() {
        return topOfStack;
    }

    public void push(int element) {
        topOfStack++;
        elements.add(element);
    }

    public int pop() throws PoppedWhenEmpty {
        if (topOfStack == 0) {
            throw new PoppedWhenEmpty();
        }
        int element = elements.get(--topOfStack);
        elements.remove(topOfStack);
        return element;
    }
}

LCOM = 0: all three methods (size, push, pop) use the instance variable topOfStack, so every pair of methods shares at least one instance variable. Hence |P| = 0 and |Q| = 3, and LCOM is taken as 0 because |P| is not greater than |Q|.


Summary of CK Metrics in practice


Darcy and Kemerer, 2005

The metrics work successfully across a wide range of domains, programming languages and countries.
There is a consistent relationship to quality factors such as defect rates and maintainability.
A set consisting of size (or WMC), coupling (CBO or RFC), and cohesion (LCOM) is useful in prediction models.
Inheritance (DIT or NOC) is so sparsely used that a relationship to the outcome of a project is not clear.

