Software is intangible
Since the end user can't examine the product except through the UI and external behaviour, why should we care how well the code was designed and implemented?
If the software works, who cares about the quality of the code?
Does the Quality Model give any pointer towards the answer?
Quality in Use
Effectiveness
Efficiency
Satisfaction
Safety
Context coverage
The quality in use of a system characterizes the impact that the system has on stakeholders.
Quality-in-use measures
CSIS0403 Implementation, Testing and Maintenance of Software Systems 2012-2013
Code quality matters when change requires us to maintain the code. This also includes working with code during development: in modern iterative and incremental processes, the code is effectively in a constant state of maintenance.
Product Quality
Usability
Compatibility
Reliability
Security
Portability
Maintainability Sub-characteristics
Modularity: To what extent is the program composed of discrete components such that a change to one component has minimal impact on others?
Reusability: How well can an asset be used in more than one system, or in building other assets?
Analysability: How easy is it to find deficiencies or the cause of failure if the software does not function correctly? How easy is it to work out what needs to be modified when we want to change or add functionality?
Modifiability: How easy is it to make modifications without introducing defects or degrading performance?
Testability: How easy is it to establish test criteria and perform tests to determine whether those criteria have been met?
Size: measured in lines of code or function points.
Cyclomatic complexity: a measure of the structural complexity of code.
Cohesion: measures how well the source code in a given module is related and works together to provide a single function.
Coupling: measures the level of dependency among software components.
They can all be quantified. They are all related to one or more software quality characteristics. Many relate closely to a programmer's view of quality.
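To make cyclomatic complexity concrete, here is a small hypothetical Java method (invented for illustration, not from the course material); its complexity is the number of decision points plus one:

```java
public class CyclomaticExample {
    // Three decision points: the loop, the "v > 0" test, and the "&&".
    // Cyclomatic complexity = decision points + 1 = 4.
    public static int countPositiveEvens(int[] values) {
        int count = 0;
        for (int v : values) {          // decision point 1
            if (v > 0 && v % 2 == 0) {  // decision points 2 and 3
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 2 and 6 are the only positive even values here.
        System.out.println(countPositiveEvens(new int[] {1, 2, -4, 6}));
    }
}
```

Counting decision points this way is one common convention; tools differ slightly in what they count (e.g. whether each boolean operator adds one).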
OO Complexity
What is the goal? How does complexity impact quality? How should we measure the structural complexity of an OO program? Beyond cyclomatic complexity, is there any other relevant indicator of complexity we could use?
OO Cohesion
How to measure the level of cohesion for an OO program? For example, relevant OO heuristics are:
Keep related data and behaviour in one place; keep non-related information out.
A class should capture one and only one key abstraction.
Most of the methods defined on a class should be using most of the data members most of the time.
A class should have only one reason to change.
Do not create God classes.
OO Coupling
How to measure coupling within an OO program? For example, relevant OO heuristics are:
Minimize the number of classes with which the class collaborates.
Minimize the number of messages sent between a class and its collaborators.
Minimize fan-out in a class.
One very influential suite is the CK (Chidamber and Kemerer) metrics suite. The suite has been the subject of many empirical studies.
The goal in creating the suite was to provide a universal tool for assessing OO software in terms of attributes such as good abstraction, coupling, cohesion and complexity. The suite comprises six metrics.
Weighted Methods Per Class (WMC): a high number is bad; a small number is good.
WMC is the sum of the complexities of the methods of a class. The method of determining complexity is left open for choice; for example, cyclomatic complexity could be used. Sometimes, for ease of calculation, all complexity weights are fixed at 1, which reduces WMC to simply counting the number of methods. Suppose class C has methods M1, ..., Mn defined in the class, and let c1, ..., cn be the complexities of the methods. Then:
WMC = Σ (i = 1 to n) ci
WMC can indicate the effort required to implement and test a class. Empirical studies show that WMC is a good indicator of potentially defective classes.
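With all weights fixed at 1, WMC reduces to a method count. A minimal sketch using reflection; the Account class and its methods are hypothetical, invented for illustration:

```java
import java.lang.reflect.Method;

public class WmcExample {
    // A toy class with three methods: with unit weights, WMC = 3.
    public static class Account {
        public void deposit(double amount) {}
        public void withdraw(double amount) {}
        public double balance() { return 0.0; }
    }

    public static void main(String[] args) {
        // getDeclaredMethods() returns only methods declared in Account itself,
        // excluding inherited ones and constructors.
        Method[] methods = Account.class.getDeclaredMethods();
        System.out.println("WMC = " + methods.length);
    }
}
```

A real metrics tool would instead weight each method, e.g. by its cyclomatic complexity, before summing.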
DIT (Depth of Inheritance Tree)
The deeper a class lies in the inheritance tree, the more difficult it is to understand and predict its behaviour; it is also harder to test.
This metric has become less useful in practice as engineers are more aware of the dangers and avoid creating deep, fragile inheritance hierarchies.
NOC (Number of Children)
The number of children indicates the degree of potential influence on design, and is a focus for testing. A large number of children also means a greater likelihood of improper abstraction.
A danger, for example, is that as the number of children increases, the quality of the abstraction represented by the parent is compromised.
CBO (Coupling Between Object classes)
CBO is the number of classes to which a given class is coupled. Here, one class is coupled to another if it invokes methods of the other.
As CBO increases, the reusability of the class decreases. Modification and subsequent testing is also more difficult. As always, we should try to minimize unnecessary coupling in our software.
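A toy illustration of CBO counting (all class names invented for this sketch): ReportService invokes methods of two other classes, so its CBO is 2:

```java
public class CboExample {
    public static class Database {
        public String load() { return "rows"; }
    }

    public static class Formatter {
        public String format(String s) { return "[" + s + "]"; }
    }

    // ReportService invokes methods of Database and Formatter,
    // so it is coupled to two classes: CBO = 2.
    public static class ReportService {
        private final Database db = new Database();
        private final Formatter fmt = new Formatter();

        public String report() {
            return fmt.format(db.load());
        }
    }

    public static void main(String[] args) {
        System.out.println(new CboExample.ReportService().report());
    }
}
```

Reducing CBO here would mean, for example, passing preformatted data into ReportService rather than having it collaborate with both classes directly.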
RFC (Response For a Class)
The response set of a class is the set of methods that can potentially be executed in response to a message received by a member of that class.
RFC = |RS|, where RS is the response set for the class: RS = {M} ∪ ⋃i {Ri}, where {M} is the set of all methods in the class and {Ri} is the set of methods called by method i.
Example: ClassA defines two methods; one has body { myB.MB1(); myB.MB2(); } and the other { myB.MB2(); }. The response set is ClassA's own two methods plus MB1 and MB2, giving RFC = 4.
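The same response-set count can be sketched directly; the method names below mirror the hypothetical ClassA example, and a set is used so that MB2, which is called from both methods, is counted only once:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class RfcExample {
    // Build the response set for ClassA: its own methods plus
    // every method they call, with duplicates removed by the set.
    public static int rfc() {
        Set<String> responseSet = new LinkedHashSet<>();
        responseSet.add("ClassA.m1");   // own method, calls MB1 and MB2
        responseSet.add("ClassA.m2");   // own method, calls MB2
        responseSet.add("ClassB.MB1");  // called by m1
        responseSet.add("ClassB.MB2");  // called by m1 and m2, counted once
        return responseSet.size();
    }

    public static void main(String[] args) {
        System.out.println("RFC = " + rfc());
    }
}
```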
A large RFC is often correlated with more defects. Classes with a high RFC tend to be more complex and more difficult to test. A worst-case value for possible responses is useful for allocating test time. Studies in C++ indicated that an increase in RFC was correlated with increased defect density and decreased quality. With automated tools, we can count beyond the first level and count the full response set.
LCOM (Lack of Cohesion in Methods) is a count of the number of method pairs whose similarity is 0, minus the count of method pairs whose similarity is non-zero. The larger the number of dissimilar methods, the less cohesive the class. A high value could indicate a God class with many different objectives.
Consider a class C1 with n methods M1, M2, ..., Mn. Let {Ij} be the set of instance variables used by method Mj; there are n such sets {I1} ... {In}.
Let P = {(Ii, Ij) | Ii ∩ Ij = ∅} (pairs with no accessed variables in common) and Q = {(Ii, Ij) | Ii ∩ Ij ≠ ∅} (pairs with some accessed variables in common). If all n sets {I1} ... {In} are empty, let P = ∅.
LCOM = |P| - |Q| if |P| > |Q|; otherwise LCOM = 0.
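The definition above can be turned into a short sketch that counts disjoint and overlapping pairs of instance-variable sets; the three sets below are hypothetical, chosen only to exercise both cases:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;

public class LcomExample {
    // LCOM = |P| - |Q| if positive, else 0, where P counts method pairs
    // with disjoint instance-variable sets and Q counts overlapping pairs.
    public static int lcom(List<Set<String>> uses) {
        int p = 0, q = 0;
        for (int i = 0; i < uses.size(); i++) {
            for (int j = i + 1; j < uses.size(); j++) {
                if (Collections.disjoint(uses.get(i), uses.get(j))) p++;
                else q++;
            }
        }
        return Math.max(p - q, 0);
    }

    public static void main(String[] args) {
        // M1 uses {a, b}, M2 uses {b, c}, M3 uses {d}:
        // (M1,M2) overlap, (M1,M3) and (M2,M3) are disjoint,
        // so P = 2, Q = 1, LCOM = 1.
        List<Set<String>> uses = List.of(
            Set.of("a", "b"),
            Set.of("b", "c"),
            Set.of("d"));
        System.out.println("LCOM = " + lcom(uses));
    }
}
```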
[Figure: a class whose methods access instance variables a1 ... a5; counting the method pairs gives P = 2 and Q = 1, so LCOM = |P| - |Q| = 1.]
Note that this metric assumes that methods access instance variables directly. There are several modifications to LCOM to accommodate other styles of access.
LCOM Example 2
public class Stack {
    private int topOfStack = 0;
    List<Integer> elements = new LinkedList<Integer>();

    public int size() {
        return topOfStack;
    }

    public void push(int element) {
        topOfStack++;
        elements.add(element);
    }

    public int pop() throws PoppedWhenEmpty {
        if (topOfStack == 0) {
            throw new PoppedWhenEmpty();
        }
        int element = elements.get(--topOfStack);
        elements.remove(topOfStack);
        return element;
    }
}
LCOM = 0 (all three methods use topOfStack, so every method pair overlaps: P = 0, Q = 3)
The metrics work successfully across a wide range of domains, programming languages and countries.
They show a consistent relationship to quality factors such as defect rates and maintainability.
A set consisting of size (or WMC), coupling (CBO or RFC), and cohesion (LCOM) is useful in prediction models.
Inheritance (DIT or NOC) is so sparsely used that a relationship to the outcome of a project is not clear.