
CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

UNIT I
DEVELOPMENT LIFE CYCLE PROCESS
1.1 Overview of the software development life cycle
There are various software development approaches that are defined, designed, and employed during the development of software. These approaches are also referred to as software development process models (e.g. the waterfall model, incremental model, V-model, iterative model, etc.). Each process model follows a particular life cycle in order to ensure success in the process of software development.
Software life cycle models describe the phases of the software cycle and the order in which those phases are executed. Each phase produces deliverables required by the next phase in the life cycle: requirements are translated into a design; code is produced according to the design (the development phase); and after coding, testing verifies the deliverable of the implementation phase against the requirements.
Every software development life cycle model includes the following six phases:
1. Requirement gathering and analysis
2. Design
3. Implementation or coding
4. Testing
5. Deployment
6. Maintenance
1) Requirement gathering and analysis: Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine requirements such as: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during the requirements gathering phase. After requirement gathering, the requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied.
Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.
2) Design: In this phase the system and software design is prepared from the requirement
specifications which were studied in the first phase. System Design helps in specifying
hardware and system requirements and also helps in defining overall system architecture.
The system design specifications serve as input for the next phase of the model.
3) Implementation / Coding: On receiving the system design documents, the work is divided into modules/units and actual coding is started. Since the code is produced in this phase, it is the main focus for the developer. This is the longest phase of the software development life cycle.
4) Testing: After the code is developed, it is tested against the requirements to make sure that the product actually solves the needs addressed and gathered during the requirements phase. During this phase unit testing, integration testing, system testing and acceptance testing are done.
5) Deployment: After successful testing the product is delivered / deployed to the customer
for their use.
6) Maintenance: Once the customers start using the developed system, actual problems come up and need to be solved from time to time. This process of caring for the developed product is known as maintenance.
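As an illustration of the hand-off between phases, the six phases above can be sketched as a simple pipeline in which each phase consumes the deliverable of the previous one. The phase names follow the list above; the deliverables are hypothetical examples, not fixed terminology:

```python
# Illustrative sketch: the six SDLC phases as an ordered pipeline,
# where each phase consumes the deliverable of the previous phase.
PHASES = [
    ("Requirement gathering and analysis", "Requirement Specification document"),
    ("Design", "System design specifications"),
    ("Implementation / Coding", "Source code"),
    ("Testing", "Tested, verified build"),
    ("Deployment", "Product delivered to the customer"),
    ("Maintenance", "Fixes and updates over time"),
]

def run_life_cycle():
    deliverable = "Business requirements"      # initial input to the cycle
    for phase, output in PHASES:
        print(f"{phase}: input = {deliverable!r}")
        deliverable = output                   # handed to the next phase
    return deliverable

final = run_life_cycle()
```

The point of the sketch is only the ordering: skipping a phase would leave the next phase without its required input.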
1.2 Introduction to Process
A process is:
an executing program, along with all the things that the program can affect or be affected by
the dynamic execution context ("active spirit") of a program, as opposed to the program itself, which is static
the unit of execution and scheduling
Only one thing happens at a time within a process.
Some systems allow only one process (mostly personal computers). They are called uniprogramming systems (not uniprocessing; that means only one processor). This makes some parts of the OS easier to write, but many other things become hard to do, e.g. compiling a program in the background while you edit another file, or answering your phone and taking messages while you're busy hacking. It is very difficult to do anything network-related under uniprogramming.
Most systems allow more than one process. They are called multiprogramming systems.
What's in a process? A process contains all the state of a program in execution:
the code for the running program
the data for the running program
the execution stack showing the state of all calls in progress
the program counter, indicating the next instruction
the set of CPU registers with current values
the set of OS resources held by the program (references to open files, network
connections)
Process State
Each process has an execution state that indicates what it is currently doing:
ready – waiting for the CPU
running – executing instructions on the CPU
waiting – waiting for an event, e.g., I/O completion
OS has to keep track of all the processes. Each process is represented in the OS
by a data structure called process control block (PCB):
queue pointers
process state
process number
program counter (PC)
stack pointer (SP)
general purpose register contents
floating point register contents
memory state
I/O state
scheduling information
accounting information
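The PCB fields listed above can be sketched as a data structure. This is a hypothetical illustration of the idea, not the layout used by any real operating system:

```python
# A hypothetical sketch of a process control block (PCB) holding the
# fields listed above; field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PCB:
    process_number: int
    process_state: str = "ready"       # ready / running / waiting
    program_counter: int = 0           # next instruction address
    stack_pointer: int = 0
    registers: dict = field(default_factory=dict)   # general purpose + FP contents
    open_files: list = field(default_factory=list)  # part of the I/O state
    cpu_time_used: int = 0             # accounting information

# The OS keeps one PCB per process and updates it on every state change.
pcb = PCB(process_number=7)
pcb.process_state = "running"
```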
How can several processes share one CPU? The OS must make sure that processes don't interfere with each other. This means:
Making sure each gets a chance to run (fair scheduling).
Making sure they don't modify each other's state (protection).
Dispatcher: inner-most portion of the OS that runs processes:
Run process for a while
Save state
Load state of another process
Run it ...
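A toy sketch of the dispatcher loop above, assuming round-robin scheduling and representing a process's saved state by nothing more than its program counter:

```python
# Toy round-robin dispatcher illustrating the save-state / load-state loop.
# "Running" a process here just advances its program counter by one.
from collections import deque

def dispatch(processes, slices):
    ready = deque(processes)           # ready queue of (name, pc) pairs
    trace = []
    for _ in range(slices):
        name, pc = ready.popleft()     # load state of the next process
        pc += 1                        # run the process for a while
        trace.append((name, pc))
        ready.append((name, pc))       # save state; back of the queue
    return trace

trace = dispatch([("A", 0), ("B", 0)], slices=4)
# the two processes take alternating turns on the (single) CPU
```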
1.3 Personal Software Process (PSP)
The Personal Software Process (PSP) is a structured software development process that is intended to help software engineers understand and improve their performance by using a "disciplined, data-driven procedure". The PSP was created by Watts Humphrey to apply the underlying principles of the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) to the software development practices of a single developer. It claims to give software engineers the process skills necessary to work on a Team Software Process (TSP) team.
The PSP aims to provide software engineers with disciplined methods for improving
personal software development processes. The PSP helps software engineers to:
Improve their estimating and planning skills.
Make commitments they can keep.
Manage the quality of their projects.
Reduce the number of defects in their work.
The goal of the PSP is to help developers produce zero-defect, quality products on schedule.
Low-defect and zero defect products have become the reality for some developers and TSP
teams, such as the Motorola division in Florida that achieved zero defects in over 18 projects
through implementing PSP techniques.
PSP structure
PSP training follows an evolutionary improvement approach: an engineer learning to integrate the PSP into his or her process begins at the first level, PSP0, and progresses in process maturity to the final level, PSP2.1. Each level has detailed scripts, checklists and templates to guide the engineer through the required steps and help the engineer improve his own personal software process. Humphrey encourages proficient engineers to customise these scripts and templates as they gain an understanding of their own strengths and weaknesses.
Process
The input to the PSP is the requirements: a requirements document is completed and delivered to the engineer.
PSP0, PSP0.1 (Introduces process discipline and measurement)
PSP0 has 3 phases: planning, development (design, coding, test) and a post mortem. A baseline of the current process is established by measuring: time spent on programming, faults injected/removed, and the size of a program. In the post mortem, the engineer ensures all data for the project has been properly recorded and analysed. PSP0.1 advances the process by adding a coding standard, a size measurement and the development of a personal process improvement plan (PIP). In the PIP, the engineer records ideas for improving his own process.
PSP1, PSP1.1 (Introduces estimating and planning)
Based upon the baseline data collected in PSP0 and PSP0.1, the engineer estimates how large
a new program will be and prepares a test report (PSP1). Accumulated data from previous
projects is used to estimate the total time. Each new project will record the actual time spent.
This information is used for task and schedule planning and estimation (PSP1.1).
PSP2, PSP2.1 (Introduces quality management and design)
PSP2 adds two new phases: design review and code review. Defect prevention and removal are the focus at PSP2. Engineers learn to evaluate and improve their process by measuring how long tasks take and the number of defects they inject and remove in each phase of development. Engineers construct and use checklists for design and code reviews. PSP2.1 introduces design specification and analysis techniques.
(PSP3 is a legacy level that has been superseded by TSP.)
One of the core aspects of the PSP is using historical data to analyze and improve process
performance. PSP data collection is supported by four main elements:
Scripts
Measures
Standards
Forms
The PSP scripts provide expert-level guidance for following the process steps, and they provide a framework for applying the PSP measures. The PSP has four core measures:
Size – the size measure for a product part, such as lines of code (LOC).
Effort – the time required to complete a task, usually recorded in minutes.
Quality – the number of defects in the product.
Schedule – a measure of project progression, tracked against planned and actual completion dates.
Applying standards to the process can ensure the data is precise and consistent. Data is logged in forms, normally using a PSP software tool. The SEI has developed a PSP tool and there are also open source options available, such as Process Dashboard.
The key data collected in the PSP tool are time, defect, and size data – the time spent in each phase; when and where defects were injected, found, and fixed; and the size of the product parts. Software developers use many other measures derived from these three basic measures to understand and improve their performance. Derived measures include:
estimation accuracy (size/time)
prediction intervals (size/time)
time in phase distribution
defect injection distribution
defect removal distribution
productivity
reuse percentage
cost performance index
planned value
earned value
predicted earned value
defect density
defect density by phase
defect removal rate by phase
defect removal leverage
review rates
process yield
phase yield
failure cost of quality (COQ)
appraisal COQ
appraisal/failure COQ ratio
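As a sketch of how derived measures come from the basic ones, the following computes two of them (defect density and process yield) using commonly stated PSP-style formulas; the sample numbers are invented for illustration:

```python
# Sketch: deriving PSP measures from the three basic measures
# (time, defects, size). Sample values are invented.
size_loc = 500                       # size of the product part, in LOC
defects_found = 10                   # total defects injected into the part
defects_removed_in_review = 8        # removed before compile/test

# defect density: defects per thousand lines of code (KLOC)
defect_density = defects_found / (size_loc / 1000)

# process yield: percentage of defects removed before compile/test
process_yield = 100 * defects_removed_in_review / defects_found

print(defect_density)   # 20.0 defects/KLOC
print(process_yield)    # 80.0 percent
```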
Planning and tracking
Logging time, defect, and size data is an essential part of planning and tracking PSP projects,
as historical data is used to improve estimating accuracy.
The PSP uses the PROxy-Based Estimation (PROBE) method to improve a developer's estimating skills for more accurate project planning. For project tracking, the PSP uses the earned value method.
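The earned value idea used for tracking can be sketched as follows: each task is worth its planned share of the total planned time, and a task earns its value only when it is complete. The task list and hours are invented for illustration:

```python
# Minimal earned-value sketch for project tracking.
# Each task earns its planned share of total planned time when done.
tasks = [  # (name, planned_hours, done?)
    ("plan",   2, True),
    ("design", 6, True),
    ("code",   8, False),
    ("test",   4, False),
]

total_planned = sum(hours for _, hours, _ in tasks)
earned_value = 100 * sum(hours for _, hours, done in tasks if done) / total_planned

print(earned_value)  # 40.0 – percent of planned value earned so far
```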

The PSP also uses statistical techniques, such as correlation, linear regression, and standard
deviation, to translate data into useful information for improving estimating, planning and
quality. These statistical formulas are calculated by the PSP tool.
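A minimal sketch of the PROBE idea, under the simplifying assumption that it reduces to fitting a linear regression of actual size on estimated proxy size from historical projects; the data points are invented, and a real PSP tool would compute this from logged data:

```python
# Sketch: fit actual size vs. estimated proxy size with ordinary
# least squares, then project an estimate for a new program.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    beta0 = my - beta1 * mx            # intercept from the means
    return beta0, beta1

proxy_estimates = [100, 200, 300, 400]   # estimated proxy size (LOC), historical
actual_sizes    = [120, 230, 310, 450]   # actual measured size (LOC)

b0, b1 = linear_fit(proxy_estimates, actual_sizes)
predicted = b0 + b1 * 250                # projection for a new 250-LOC estimate
```

Historical scatter around the fitted line is what the prediction intervals in the derived-measures list quantify.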
Using the PSP
The PSP is intended to help a developer improve their personal process; therefore PSP
developers are expected to continue adapting the process to ensure it meets their personal
needs.
1.4 Team Software Process (TSP)
In combination with the Personal Software Process (PSP), the Team Software
Process (TSP) provides a defined operational process framework that is designed to help
teams of managers and engineers organize projects and produce software products that range
in size from small projects of several thousand lines of code (KLOC) to very large projects
greater than half a million lines of code. The TSP is intended to improve the levels of quality
and productivity of a team's software development project, in order to help them better meet
the cost and schedule commitments of developing a software system.
The initial version of the TSP was developed and piloted by Watts Humphrey in the late 1990s, and the Technical Report for the TSP, sponsored by the U.S. Department of Defense, was published in November 2000. The book by Watts Humphrey, Introduction to the Team Software Process, presents a view of the TSP intended for use in academic settings, focusing on the process of building a software production team, establishing team goals, distributing team roles, and other teamwork-related activities.
How TSP Works
Before engineers can participate in the TSP, it is required that they have already learned
about the PSP, so that the TSP can work effectively. Training is also required for other team
members, the team lead, and management.
The TSP software development cycle begins with a planning process called the launch, led
by a coach who has been specially trained, and is either certified or provisional. The launch
is designed to begin the team building process, and during this time teams and managers
establish goals, define team roles, assess risks, estimate effort, allocate tasks, and produce a
team plan. During an execution phase, developers track planned and actual effort, schedule,
and defects, meeting regularly (usually weekly) to report status and revise plans. A
development cycle ends with a Post Mortem to assess performance, revise planning
parameters, and capture lessons learned for process improvement.
The coach role focuses on supporting the team and the individuals on the team as the process
expert while being independent of direct project management responsibility. The team leader
role is different from the coach role in that team leaders are responsible to management for products and project outcomes, while the coach is responsible for developing individual and team performance.
1.5 Unified Process
The Unified Software Development Process or Unified Process is a popular iterative and
incremental software development process framework. The best-known and extensively
documented refinement of the Unified Process is the Rational Unified Process (RUP). Other
examples are OpenUP and Agile Unified Process.
Overview
The Unified Process is not simply a process, but rather an extensible framework which
should be customized for specific organizations or projects. The Rational Unified Process is,
similarly, a customizable framework. As a result it is often impossible to say whether a
refinement of the process was derived from UP or from RUP, and so the names tend to be
used interchangeably.
The name Unified Process, as opposed to Rational Unified Process, is generally used to describe the generic process, including those elements which are common to most refinements. The Unified Process name is also used to avoid potential issues of trademark infringement, since Rational Unified Process and RUP are trademarks of IBM. The first book to describe the process was titled The Unified Software Development Process (ISBN 0-201-57169-2) and published in 1999 by Ivar Jacobson, Grady Booch and James Rumbaugh. Since then various authors unaffiliated with Rational Software have published books and articles using the name Unified Process, whereas authors affiliated with Rational Software have favored the name Rational Unified Process.

Unified Process Characteristics

Iterative and Incremental
[Figure: diagram illustrating how the relative emphasis of the different disciplines changes over the course of the project.]
The Unified Process is an iterative and incremental development process. The Elaboration, Construction and Transition phases are divided into a series of timeboxed iterations. (The Inception phase may also be divided into iterations for a large project.) Each iteration results in an increment, which is a release of the system that contains added or improved functionality compared with the previous release. Although most iterations will include work in most of the process disciplines (e.g. Requirements, Design, Implementation, Testing), the relative effort and emphasis will change over the course of the project.
Use Case Driven
In the Unified Process, use cases are used to capture the functional requirements and to
define the contents of the iterations. Each iteration takes a set of use cases or scenarios from
requirements all the way through implementation, test and deployment.
Architecture Centric
The Unified Process insists that architecture sit at the heart of the project team's efforts to
shape the system. Since no single model is sufficient to cover all aspects of a system, the
Unified Process supports multiple architectural models and views.
One of the most important deliverables of the process is the executable architecture baseline
which is created during the Elaboration phase. This partial implementation of the system
serves to validate the architecture and act as a foundation for remaining development.

Risk Focused
The Unified Process requires the project team to focus on addressing the most critical risks
early in the project life cycle. The deliverables of each iteration, especially in the Elaboration
phase, must be selected in order to ensure that the greatest risks are addressed first.
Project Lifecycle
The Unified Process divides the project into four phases:
Inception
Elaboration
Construction
Transition
Inception Phase
Inception is the smallest phase in the project, and ideally it should be quite short. If the
Inception Phase is long then it may be an indication of excessive up-front specification,
which is contrary to the spirit of the Unified Process.
The following are typical goals for the Inception phase.
Establish a justification or business case for the project
Establish the project scope and boundary conditions
Outline the use cases and key requirements that will drive the design tradeoffs
Outline one or more candidate architectures
Identify risks
Prepare a preliminary project schedule and cost estimate
The Lifecycle Objective Milestone marks the end of the Inception phase: develop an approximate vision of the system, make the business case, define the scope, and produce a rough estimate for cost and schedule.
Elaboration Phase
During the Elaboration phase the project team is expected to capture a healthy majority of
the system requirements. However, the primary goals of Elaboration are to address known
risk factors and to establish and validate the system architecture. Common processes
undertaken in this phase include the creation of use case diagrams, conceptual diagrams
(class diagrams with only basic notation) and package diagrams (architectural diagrams).
The architecture is validated primarily through the implementation of an Executable
Architecture Baseline. This is a partial implementation of the system which includes the
core, most architecturally significant, components. It is built in a series of small, time boxed
iterations. By the end of the Elaboration phase the system architecture must have stabilized
and the executable architecture baseline must demonstrate that the architecture will support
the key system functionality and exhibit the right behavior in terms of performance,
scalability and cost.
The final Elaboration phase deliverable is a plan (including cost and schedule estimates) for
the Construction phase. At this point the plan should be accurate and credible, since it should
be based on the Elaboration phase experience and since significant risk factors should have
been addressed during the Elaboration phase.
Construction Phase
Construction is the largest phase in the project. In this phase the remainder of the system is
built on the foundation laid in Elaboration. System features are implemented in a series of
short, timeboxed iterations. Each iteration results in an executable release of the software. It
is customary to write full text use cases during the construction phase and each one becomes
the start of a new iteration. Common UML (Unified Modelling Language) diagrams used
during this phase include Activity, Sequence, Collaboration, State (Transition) and
Interaction.
Transition Phase
The final project phase is Transition. In this phase the system is deployed to the target users.
Feedback received from an initial release (or initial releases) may result in further
refinements to be incorporated over the course of several Transition phase iterations. The
Transition phase also includes system conversions and user training.
Refinements and Variations
Refinements of the Unified Process vary from each other in how they categorize the
project disciplines or workflows.
The Rational Unified Process defines nine disciplines: Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment. The Enterprise Unified Process extends RUP through the addition of eight "enterprise" disciplines. Agile refinements of UP such as OpenUP/Basic and the Agile Unified Process simplify RUP by reducing the number of disciplines.
Refinements also vary in the emphasis placed on different project artifacts. Agile
refinements streamline RUP by simplifying workflows and reducing the number of expected
artifacts.
Refinements also vary in their specification of what happens after the Transition phase. In
the Rational Unified Process the Transition phase is typically followed by a new Inception
phase. In the Enterprise Unified Process the Transition phase is followed by a Production
phase.

The number of Unified Process refinements and variations is countless. Organizations utilizing the Unified Process invariably incorporate their own modifications and extensions.
The following is a list of some of the better known refinements and variations.
Agile Unified Process (AUP), a lightweight variation developed by Scott W. Ambler
Basic Unified Process (BUP), a lightweight variation developed by IBM and a precursor
to OpenUP
Enterprise Unified Process (EUP), an extension of the Rational Unified Process
Essential Unified Process (EssUP), a lightweight variation developed by Ivar Jacobson
Open Unified Process (OpenUP), the Eclipse Process Framework software development
process
Rational Unified Process (RUP), the IBM / Rational Software development process
Oracle Unified Method (OUM), the Oracle development and implementation process
Rational Unified Process-System Engineering (RUP-SE), a version of RUP tailored
by Rational Software for System Engineering
1.6 Agile Processes
In the software development life cycle, there are two main considerations: one is the emphasis on process, and the other is the quality of the software and of the process itself. The agile software process is an iterative and incremental style of development in which requirements are changeable according to customer needs. It supports adaptive planning, iterative development and time boxing, and is a conceptual framework that promotes interaction throughout the development cycle. There are several SDLC models, like spiral, waterfall and RAD, which have their own advantages. The SDLC is a framework that describes the activities performed at each stage of a software development life cycle: planning, analysis, design, coding, testing and maintenance, which need to be performed according to the demands of the customer. The choice of a specific model depends on the application. Here we study agile processes and their methodologies. The agile process is itself a software development process: an iterative approach in which customer satisfaction is the highest priority, as the customer is directly involved in evaluating the software.
The agile process follows the software development life cycle – requirements gathering, analysis, design, coding and testing – delivers partially implemented software, and then waits for customer feedback. In the whole process, customer satisfaction is the highest priority, with faster development time.
Characteristics of agile projects
The agile process requires less planning and divides the tasks into small increments. It is meant for short-term projects developed through team work following the software development life cycle, which includes the following phases: 1. Requirements gathering, 2. Analysis, 3. Design, 4. Coding, 5. Testing, 6. Maintenance. The involvement of the software team management with customers reduces the risks associated with the software. The agile process is an iterative process in which changes can be made according to customer satisfaction, and new features can be added easily through multiple iterations.
1. Iterative
The main objective of agile software processes is customer satisfaction, so the focus is on a single requirement with multiple iterations.
2. Modularity
Agile process decomposes the complete system into manageable pieces called modules.
Modularity plays a major role in software development processes.
3. Time Boxing
As the agile process is iterative in nature, it requires time limits on each module and its respective cycle.
4. Parsimony
In agile processes, parsimony is required to mitigate risks and achieve the goals with a minimal number of modules.
5. Incremental
As the agile process is iterative in nature, it requires the system to be developed in increments; each increment is independent of the others, and at the end all increments are integrated into the complete system.
6. Adaptive
Due to the iterative nature of the agile process, new risks may occur. The adaptive characteristic of the agile process allows the process to be adapted to attack new risks and allows changes in real-time requirements.
7. Convergent
In the agile process, the risks associated with each increment are converged (progressively reduced) by using the iterative and incremental approach.
8. Collaborative
As the agile process is modular in nature, it needs good communication among the software development team. Different modules need to be integrated at the end of the software development process.
9. People Oriented
In agile processes, customer satisfaction is the first priority over technology and process. A good software development team increases the performance and productivity

of the software.
ADVANTAGES
1) Adaptive to the changing environment: In the agile software development method, software is developed over several iterations. Each iteration is characterized by analysis, design, implementation and testing. After each iteration the mini project is delivered to the customer for use and feedback. Changes that upgrade the software are welcome from the customer at any stage of development, and those changes are implemented.
2) Ensures customer satisfaction: This methodology requires active customer involvement throughout the development. The deliverables developed after each iteration are given to the user for use, and improvement is done based on the customer feedback only. So the final product is of high quality and ensures customer satisfaction, as the entire software is developed from the requirements taken from the customer.
3) Least documentation: The documentation in agile methodology is short and to the point, though it depends on the agile team. Generally, teams don't document the internal design of the software. The main things that should be in the documentation are the product features list, the duration of each iteration, and dates. This brief documentation saves development time and delivers the project in the least possible time.
4) Reduces risks of development: As the incremented mini software is delivered to the customers after every short development cycle and feedback is taken from the customers, it warns developers about upcoming problems which may occur at later stages of development. It also helps to discover errors quickly so they can be fixed immediately.
DISADVANTAGES
1) Customer interaction is the key factor of developing successful software: Agile methodology is based on customer involvement, because the entire project is developed according to the requirements given by the customers. So if the customer representative is not clear about the product features, the development process will go off track.
2) Lack of documentation: Though least documentation saves development time, it is also a big disadvantage for the developer. The internal design changes again and again depending on user requirements after every iteration, so it is not possible to maintain detailed documentation of design and implementation because of project deadlines. Because of the limited available information,

it is very difficult for the new developers who join the development team at the later stage
to understand the actual method followed to develop the software.
3) Time consuming and wasteful because of constant change of requirements: If the customers are not satisfied by the partial software developed in a certain iteration and they change their requirements, then that increment is of no use, and the time, effort and resources required to develop it are wasted.
4) More helpful for management than developers: The agile methodology helps management to take decisions about the software development, set goals for developers and fix deadlines for them. But it is very difficult for the baseline developers to cope with the ever-changing environment and to change the design and code every time based on just-in-time requirements.

COMPARISON OF AGILE PROCESS WITH OTHER SDLC MODELS


TABLE I. PROCESS MODELS

Features                      | Agile Process                                                        | Spiral Model                                              | RAD Model
Definition                    | The ability to create and respond to changing requirements of software | A software development model which focuses on managing risks | A high-speed adaptation of the linear sequential model, in which component-based construction is used
Adaptability                  | y                                                                    | y                                                         | n
Testing Phase                 | Unit, Integration, System testing                                    | Unit, Integration and System testing                      | Unit testing
Quality Factors               | y                                                                    | y                                                         | n
Risk Analysis                 | n                                                                    | y                                                         | n
Off-the-shelf Tools           | n                                                                    | n                                                         | y
Failure normally due to       | Code                                                                 | Code                                                      | Architecture and design
Knowledge Required            | Product and domain                                                   | Product and domain                                        | Domain
Entry & exit Criteria         | n                                                                    | n                                                         | –
Mock up                       | –                                                                    | –                                                         | –
Extendability                 | –                                                                    | –                                                         | –
Project management involvement| –                                                                    | –                                                         | –
Higher Reliability            | –                                                                    | –                                                         | –
Time Boxing                   | –                                                                    | –                                                         | –

1.7 Choosing the right process


A software process consists of four fundamental activities:
1. Software specification, where engineers and/or customers define what the product should do and how it should operate.
2. Software development, which is designing and actual coding.
3. Software validation, which is generally testing. It is important to check that the system is designed and implemented correctly.
4. Software evolution, which is modifying the system according to the new needs of the customer(s).
Different types of software need different development processes.
A software process model is a simplified description of a software process that presents one view of that process. Again, the choice of view depends on the system being developed; sometimes it is useful to apply a workflow model, sometimes, for example, a role/action model.
Most software process models are based on one of three general models or paradigms of software development.
1. The waterfall approach. The development process and all activities are divided into phases such as requirements specification, software design, implementation, testing, etc. Development proceeds phase by phase.

2. Iterative development. An initial system is rapidly developed from very abstract specifications; it can then be reimplemented according to new, probably more detailed, specifications.
3. Component-based software engineering (CBSE). The development process assumes that some parts of the system already exist, so it focuses on integrating those parts together rather than developing everything from scratch.
Four principal dimensions of system dependability are: Availability, Reliability, Safety and Security.
Each of these may be decomposed further; for example, security includes integrity (ensuring that data is not damaged) and confidentiality, while reliability includes correctness, precision and timeliness. All of them are interrelated.
Three complementary approaches that are used to improve the reliability of a system are:
1. Fault avoidance. Development techniques used to minimise the possibility of mistakes
before they result in system faults.
2. Fault detection and removal. Identifying and solving system problems before the system is
used.
3. Fault tolerance. Techniques used to ensure that system errors do not result in system failure.
The process of risk analysis consists of four steps:
1. Risk identification. Potential risks that might arise are identified. These are dependent on
the environment in which the system is to be used. In safety-critical systems, the principal
risks are hazards that can lead to an accident. Experienced engineers, working with domain
experts and professional safety advisors, should identify system risks. Group working
techniques such as brainstorming may be used to identify risks.
2. Risk analysis and classification. The risks are considered separately. Those that are
potentially serious and not implausible are selected for further analysis. Risks can be
categorised in three ways:
a. Intolerable. The system must be designed in such a way so that either the risk cannot
arise or, if it does arise, it will not result in an accident. Intolerable risks are those that
threaten human life or the financial stability of a business and which have a significant
probability of occurrence.
b. As low as reasonably practical (ALARP). The system must be designed so that the
probability of an accident arising because of the hazard is minimised, subject to other
considerations such as cost and delivery. ALARP risks are those which have less serious
consequences or which have a low probability of occurrence.


c. Acceptable. While the system designers should take all possible steps to reduce the
probability of an acceptable hazard arising, these should not increase costs, delivery time or
other non-functional system attributes.
3. Risk decomposition. Each risk is analysed individually to discover potential root causes of that risk. Different techniques for risk decomposition exist. A widely used one is fault-tree analysis, where the analyst places the hazard at the top and, beneath it, the different states that can lead to that hazard, linked with "or" and "and" symbols. Risks that require a combination of root causes are usually less probable than risks that can result from a single root cause.
4. Risk reduction assessment. Proposals are made for ways in which the identified risks may be reduced or eliminated. Three possible risk reduction strategies are:
a. Risk avoidance. Designing the system in such a way that risk or hazard cannot arise.
b. Risk detection and removal. Designing the system in such a way that risks are detected
and neutralised before they result in an accident.
c. Damage limitation. Designing the system in such a way that the consequences of an
accident are minimised.
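The three-way categorisation in step 2 can be sketched in Python. The probability threshold and severity labels below are illustrative assumptions for the sketch, not part of any standard:

```python
def classify_risk(probability, severity):
    """Classify a risk as 'intolerable', 'ALARP' or 'acceptable'.

    probability -- estimated likelihood of the hazard arising (0.0-1.0)
    severity    -- 'catastrophic', 'serious' or 'minor' (assumed labels)
    """
    # Intolerable: threatens life or the business AND has a significant
    # probability of occurrence (the 0.01 threshold is an assumption).
    if severity == "catastrophic" and probability >= 0.01:
        return "intolerable"
    # ALARP: less serious consequences, or a low-probability serious hazard;
    # reduce the probability as far as cost and delivery allow.
    if severity in ("catastrophic", "serious") or probability >= 0.01:
        return "ALARP"
    # Acceptable: reduce further only if it does not increase cost,
    # delivery time or other non-functional attributes.
    return "acceptable"

print(classify_risk(0.02, "catastrophic"))   # intolerable
print(classify_risk(0.005, "serious"))       # ALARP
print(classify_risk(0.001, "minor"))         # acceptable
```

In practice the thresholds would come from the domain's safety standard rather than fixed constants.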
In the 1980s and 1990s, as computer control became widespread, the safety engineering community developed standards for safety-critical systems specification and development. The process of safety specification and assurance is part of an overall safety life cycle that is defined in an international standard for safety management, IEC 61508 (IEC, 1998).
Security and safety requirements have something in common; however, there are some
differences between these types of requirements.
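Returning to step 3 of risk analysis, a toy fault-tree evaluator makes the and/or gate arithmetic concrete, and shows why risks requiring a combination of root causes are less probable. The nested-tuple representation and the independence of root causes are assumptions for illustration:

```python
def p_hazard(node):
    """Probability of the event at 'node' in a toy fault tree.

    A leaf is a float (probability of a root cause); an internal node is a
    ('and'|'or', [children]) tuple, mirroring the and/or gate symbols.
    Root causes are assumed independent.
    """
    if isinstance(node, float):
        return node
    gate, children = node
    probs = [p_hazard(c) for c in children]
    if gate == "and":                  # all causes must occur together
        result = 1.0
        for p in probs:
            result *= p
        return result
    if gate == "or":                   # any single cause suffices
        none_occur = 1.0
        for p in probs:
            none_occur *= (1.0 - p)
        return 1.0 - none_occur
    raise ValueError(f"unknown gate: {gate!r}")

# Hazard reachable from one root cause alone (0.01), or from a combination
# of two rarer causes; the 'and' branch contributes only 0.1 * 0.05 = 0.005.
tree = ("or", [0.01, ("and", [0.1, 0.05])])
print(round(p_hazard(tree), 5))   # 0.01495
```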

UNIT II
REQUIREMENT MANAGEMENT

2.1 Functional requirements and Quality attributes


Quality attributes, such as response time, accuracy, security and reliability, are properties that affect the system as a whole. Most approaches deal with quality attributes separately from the functional requirements of a system. This means that integration is difficult to achieve and usually is accomplished only at the later stages of the software development process. Furthermore, current approaches fail to deal with the crosscutting nature of some of those attributes, i.e. it is difficult to represent clearly how these attributes can affect several requirements simultaneously. Since this integration is not supported from requirements through to implementation, some software engineering principles, such as abstraction, localization, modularisation, uniformity and reusability, can be compromised.
What we propose is a model to identify and specify quality attributes that crosscut requirements, including their systematic integration into the functional description at an early stage of the software development process, i.e. at requirements.
A model for early quality attributes
The process model we propose is UML compliant and is composed of three main activities: identification, specification and integration of requirements. The first activity consists of identifying all the requirements of a system and selecting from those the quality attributes relevant to the application domain and stakeholders. The second activity is divided into two main parts: (1) specifying functional requirements using a use case based approach; (2) describing quality attributes using special templates and identifying those that cut across (i.e. crosscut) functional requirements. The third activity proposes a set of models to represent the integration of crosscutting quality attributes and functional requirements.
Figure 1 depicts this model.


To identify the crosscutting nature of some of the quality attributes we need to take into account the information contained in the rows Where and Requirements. If a quality attribute cuts across (i.e. is required by) several requirements and models, then it is crosscutting.
The integration is accomplished by weaving the quality attributes with the functional requirements in three different ways:
(1) Overlap: the quality attribute adds new behaviour to the functional requirements it traverses. In this case, the quality attribute may be required before those requirements, or it may be required after them.
(2) Override: the quality attribute superposes the functional requirements it traverses. In this case, its behaviour substitutes the functional requirements' behaviour.
(3) Wrap: the quality attribute encapsulates the requirements it traverses. In this case, the behaviour of the requirements is wrapped by the behaviour of the quality attribute.
We weave quality attributes with functional requirements by using both standard diagrammatic representations (e.g. use case diagrams, interaction diagrams) and new diagrams.
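The three weaving styles above can be sketched as higher-order functions in Python. The function names and the security example are illustrative assumptions, not part of the proposed model:

```python
def overlap(quality, before=True):
    """Overlap: the quality attribute adds behaviour before or after the
    functional requirement it traverses."""
    def weave(requirement):
        def woven(*args):
            if before:
                quality(*args)
            result = requirement(*args)
            if not before:
                quality(*args)
            return result
        return woven
    return weave

def override(quality):
    """Override: the quality attribute's behaviour substitutes the
    functional requirement's behaviour entirely."""
    def weave(requirement):
        return quality
    return weave

def wrap(quality):
    """Wrap: the quality attribute encapsulates the requirement and decides
    how (and whether) to invoke it."""
    def weave(requirement):
        def woven(*args):
            return quality(requirement, *args)
        return woven
    return weave

# Illustrative use: a security check woven before 'register vehicle'.
def check_credentials(owner):
    print(f"security: verifying {owner}")

@overlap(check_credentials, before=True)
def register_vehicle(owner):
    print(f"registering vehicle for {owner}")

register_vehicle("owner-1")   # security check runs first, then registration
```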
Identify requirements

Requirements of a system can be classified into functional and non-functional (i.e. quality attributes). Functional requirements are statements of services the system should provide, how the system should react to particular inputs and how the system should behave in particular situations. Different types of methods are used to specify functional requirements. Use case driven approaches describe the ways in which a user uses a system, which is why the use case diagram is often used for capturing functional requirements. Quality attributes define global properties of a system. Usually these are dealt with only in the later stages of a software development process, such as design and implementation.
Identify actors and use cases.
For the road pricing system, the actors we identified are:
- Vehicle owner: is responsible for registering a vehicle;
- Vehicle driver: comprehends the vehicle, the driver and the gizmo installed in it;
- Bank: represents the entity that holds the vehicle owner's account;
- System clock: represents the internal clock of the system that monthly triggers the calculation of debits.
The following are the use cases required by the actors listed above:
- Register vehicle: is responsible for registering a vehicle and its owner, and communicates with the bank to guarantee a good account;
- Pass single toll: is responsible for dealing with tolls where vehicles pay a fixed amount. It reads the vehicle gizmo and checks whether it is a good one. If the gizmo is ok, the light is turned green and the amount to be paid is calculated and displayed. If the gizmo is not ok, the light is turned yellow and a photo is taken.
- Enter motorway: checks the gizmo, turns on the light and registers an entrance. If the gizmo is invalid, a photo is taken and registered in the system.
- Exit motorway: checks the gizmo and, if the vehicle has an entrance, turns on the light accordingly, calculates the amount to be paid (as a function of the distance travelled), displays it and records the passage. If the gizmo is not ok, or if the vehicle did not enter in a green lane, the light is turned yellow and a photo is taken.
- Pay bill: sums up all passages for each vehicle, issues a debit to be sent to the bank and a copy to the vehicle owner.
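The main and alternative flows of "Pass single toll" can be sketched as a small function. The light colours follow the description above; the fixed amount is a hypothetical value:

```python
FIXED_TOLL = 1.50   # hypothetical fixed amount for a single-toll passage

def pass_single_toll(gizmo_ok):
    """Return (light, amount_displayed, photo_taken) for one passage."""
    if gizmo_ok:
        # Main flow: green light, fixed amount calculated and displayed.
        return ("green", FIXED_TOLL, False)
    # Alternative flow: yellow light and a photo of the vehicle is taken.
    return ("yellow", 0.0, True)

print(pass_single_toll(True))    # ('green', 1.5, False)
print(pass_single_toll(False))   # ('yellow', 0.0, True)
```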
Identify quality attributes.
Quality attributes can be assumptions, constraints or goals of stakeholders. By analysing the initial set of requirements, the potential quality attributes are identified. For example, if the owner of a vehicle has to indicate, during registration, his/her bank details so that transfers can be performed automatically, then security is an issue that the system needs to address. Another fundamental quality attribute is response time, which is an issue when a vehicle passes a toll gate, or when a customer activates his/her own gizmo in an ATM: the toll gate

components have to react in time so that the driver can see the light and the amount being
displayed. Other concerns are identified in a similar fashion: Multiuser System,
Compatibility, Legal Issues, Correctness and Availability.
Specify functional requirements and quality attributes
The functional requirements are specified using UML models, such as use case, sequence and class diagrams. The quality attributes are described in templates of the form presented in Figure 2.
Build the use case diagram.
The set of all use cases can be represented in a use case diagram, where we can see the existing relationships between use cases and those between use cases and actors. Figure 3 shows the use case diagram of the road traffic system.

Integrate functional requirements with crosscutting quality attributes

Integration composes the quality attributes with the functional requirements to obtain the whole system. We use UML diagrams to show the integration. The two examples given above (for response time and security) fall into two of the categories already described: overlap and wrap. We could extend the UML diagrams to represent some quality attributes. For example, the sequence diagram shown in Figure 4 can be extended to show how response time affects a scenario.


2.2 Elicitation techniques


A major goal of requirements elicitation is to avoid confusion between stakeholders and analysts. This will often involve putting significant effort into requirements elicitation. Unfortunately, Requirements Engineering is an immature discipline, perhaps not entirely unfairly characterized as a battlefield occupied by competing commercial methods, firing competing claims at each other, and leaving the consumers weary and confused.
The goal here is to analyze and compare the different methods of the requirements elicitation process, which is useful for comparing the characteristics and performance of the different elicitation methods. All the requirement elicitation techniques are handy for extracting requirements, and different organizations can use different elicitation techniques according to their organizational culture and needs.
Requirements elicitation is a process of intensive interaction between stakeholders and analysts, so understanding that interaction makes it easier to improve the quality of the extracted requirements. It is important to distinguish elicitation methods according to the four methods of communication:
1. Conversational
2. Observational
3. Analytic
4. Synthetic
Each category presents a specific interaction model between analysts and stakeholders.
Understanding the method category helps engineers understand the different elicitation methods and guides them in selecting an appropriate method for requirements elicitation.
Four Methods of Communication
i. Conversational Methods


The conversational method provides a means of verbal communication between stakeholders and analysts. As conversation is a natural way of communicating and an effective means of expressing needs and ideas, conversational methods are used massively to understand the problems and to elicit generic product requirements. Conversational methods are also known as verbal methods, and include Interviews, Questionnaires, and Brainstorming.
a. Interviews: A typical conversational method is the interview, the most commonly used method in requirements elicitation. An interview is generally conducted by an experienced analyst who has some generic knowledge about the application domain as well. In an interview, the analyst discusses the desired product with different stakeholders and develops an understanding of their requirements. Generally, interviews are divided into two groups:
1. Closed interview: the analyst prepares some predefined questions and tries to get answers to these questions from the stakeholder.
2. Open-ended interview: no predefined questions are prepared, and information is gathered from the stakeholders in open discussions.
b. Questionnaire: Questionnaires are a low-cost method of gathering requirements. Questionnaires reach a large number of people, not only in less time but also at a lower cost. The general factors which affect the usage of the questionnaire are:
1. The resources available to gather the requirements;
2. The type of requirements to be gathered, which depends on the level of the respondents' knowledge and background;
3. The anonymity provided to the respondent.
c. Brainstorming: Brainstorming is another conversational method. It has some similarities with workshops and focus groups: in brainstorming, stakeholders gather together for a short time period, but in this short period they develop a large and broad list of ideas. In such a meeting, an out-of-the-box thinking approach is encouraged. Brainstorming involves both idea generation and idea reduction.
Conversation is one of the most prevalent yet invisible forms of social interaction. People are usually happy to describe their work and the difficulties they face. Verbally expressed demands, needs and constraints are often called non-tacit requirements. Conversational methods are very commonly used in requirements development. However, they are labor intensive: setting up meetings and producing and analyzing transcripts from records of a live interaction take time.
ii. Observational Methods:


The observational method provides a means to develop a better understanding of the application domain. Observational methods work by observing human activities in the environment where the system is expected to be deployed. In addition to stateable requirements, some requirements are apparent to stakeholders, but stakeholders find them very hard to verbalize.
The observational methods come into play where verbal communication is helpless for collecting tacit requirements. Observing how people carry out their routine work therefore forms a means of acquiring information that is hard to verbalize. The observational methods appear to be well suited when stakeholders find it difficult to state their needs and when analysts are looking for a better understanding of the context in which the desired product is expected to be used. Observational methods include Social analysis, Observation, Ethnographic study, and Protocol analysis.
Social analysis, Observation, Ethnographic study: An observer spends some time in a society or culture making detailed observations of all its practices. This practice gives an initial understanding of the system, the workflow and the organizational culture.
Protocol analysis: In protocol analysis a stakeholder is observed while engaged in some task, concurrently speaking out loud and explaining his thoughts. With protocol analysis it is easy to identify interaction problems in existing systems, and it gives a better and closer understanding of the work context and workflow.
For observational methods, the observer must be accepted by the people being studied, and the people being studied should carry on with their normal activities as if the observer were not there.
In both conversational and observational methods, requirements elicitation is done by studying individuals, but a variety of documentation may also prove handy for extracting the requirements of the desired product. The documentation may include problem analyses, organizational charts, standards, user manuals of existing systems, survey reports of competitive systems in the market, and so on. By studying these documents, engineers capture information about the application domain, the workflow and the product features, and map it to the requirements specification.
iii. Analytic Methods:
Conversational and observational methods are used to extract requirements directly from people's behavior and their verbalized thoughts. But there is still a lot of knowledge that is not directly expressed; experts' knowledge and information about regulations and legacy products are examples of such sources. All the stated sources provide engineers

rich information in relation to the product. Analytic methods provide ways to explore the existing documentation or knowledge and acquire requirements from a series of deductions. They include requirement reuse, documentation studies, laddering, and the repertory grid.
Requirement reuse: In this technique, glossaries and specifications of legacy systems or systems within the same product family are used to identify requirements of the desired system.
It has been observed that many requirements in a new system are more or less the same as in a legacy system's requirements. So it is not a bad idea to reuse the details of requirements of an earlier system in a new system.
Documentation studies: In this technique, different available documents (e.g. organizational policies, standards, legislation, market information, specifications of legacy systems) are read and studied to find content that may prove relevant and useful for the requirements elicitation tasks.
Laddering: This technique can be divided into three parts: creation, reviewing and modification. The laddering method is a form of structured interview that is widely used in knowledge elicitation activities to elicit stakeholders' goals, aims and values. Analysts use the laddering method to create, review and modify the hierarchical contents of experts' knowledge in the form of a tree diagram. It was first introduced by clinical psychologists in 1960 to understand people's core values and beliefs. Its success in the field of psychology led other researchers in industry to adapt it to their fields. Specifically, software developers have adapted laddering techniques to gather complex tacit user requirements.
Repertory grid: The stakeholder is asked for attributes applicable to a set of entities and values for the cells in an entity-attribute matrix.
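As a toy illustration of such an entity-attribute matrix, the following sketch rates entities of the road-pricing domain against elicited attributes; the entities, attributes and ratings are invented for the example:

```python
entities = ["toll gate", "gizmo", "ATM"]
attributes = ["response time critical", "handles money", "user facing"]

# grid[attribute][entity] = stakeholder's rating on a 1-5 scale (invented data)
grid = {
    "response time critical": {"toll gate": 5, "gizmo": 4, "ATM": 3},
    "handles money":          {"toll gate": 4, "gizmo": 2, "ATM": 5},
    "user facing":            {"toll gate": 3, "gizmo": 5, "ATM": 4},
}

# One simple reading of the grid: the attribute each entity scores highest on.
for entity in entities:
    top = max(attributes, key=lambda a: grid[a][entity])
    print(f"{entity}: {top}")
```

Analysts then compare rows and columns of such a grid to surface distinctions the stakeholder makes between entities.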
In general, the analytic methods are not vital to requirements elicitation, since requirements are captured indirectly from other sources rather than from end users and customers. However, they complement the other methods and improve the efficiency and effectiveness of requirements elicitation, especially when information from legacy or related products is reusable.
iv. Synthetic Methods:
So far, we have discussed conversational, observational and analytic methods. It is apparent that no single method is sufficient to develop all the requirements of a system. All these methods are good and very handy in certain contexts and circumstances. It is often a good idea to combine different elicitation methods for developing requirements. The combination helps the engineer uncover the basic aspects and gain a

generic knowledge of the application domain. Instead of combining different individual methods, the synthetic method forms a coherent whole by systematically combining conversation, observation, and analysis into a single method. Analysts and stakeholder representatives communicate and coordinate in different ways to reach a common understanding of the desired product. Synthetic methods are also known as collaborative methods, as they are a collaboration of multiple requirement elicitation methods. Requirement elicitation techniques of the synthetic type include scenarios, passive storyboards, prototyping, interactive storyboards, JAD/RAD sessions, and contextual inquiry.
Scenarios, passive storyboards: This is an interaction session in which a sequence of actions and events is described for executing some generic task which the system is intended to accomplish. With the help of this technique, clear requirements related to procedure and data flow can be achieved, and an initial set of requirements can be prepared at a lower cost.
Prototyping, interactive storyboards: In this technique, a concrete but partial version of the system expected to be delivered at the end of the project is discussed with stakeholders. The purpose of showing this system to stakeholders is to elicit and validate functional requirements.
JAD/RAD session: This stands for Joint Application Development/Rapid Application Development and emphasizes user involvement through group sessions with an unbiased facilitator. JAD is conducted in the same manner as brainstorming, except that the stakeholders and the users are also allowed to participate and discuss the design of the proposed system. The discussion with the stakeholders and the users continues until the final requirements are gathered.
Contextual inquiry: This technique is a combination of open-ended interview, workplace observation, and prototyping. It is used for interactive systems design where user interface design is critical.
All four types of requirement elicitation methods are commonly used, but the selection of an elicitation method depends entirely on the needs and structure of the organization. No matter what the development project is, requirements development nearly always takes place in the context of a human activity system, and problem owners are people. It is essential for requirements engineers to study how people perceive, understand, and express the problem domain, how they interact with the desired product, and how the physical and cultural environments affect their actions.


The conversational methods provide a direct contact channel between engineers and stakeholders, and the requirements gathered are mainly non-tacit. The observational methods provide an indirect channel, observing a user's interaction with his work setting and context; the requirements gathered fall into tacit knowledge. The analytic methods form a complementary indirect contact channel to extract requirements proactively. The synthetic methods focus more on a collective effort to clarify the features of desired products, and the communication channel is therefore a mix of direct and indirect contact. Each type of technique has trade-offs. In reality, of course, the boundary between the different types of method is blurred.
Advantages and Disadvantages of Requirement Elicitation Methods
Having discussed the four groups of requirement elicitation methods, we now focus on the advantages and disadvantages of each group, in order to understand the methods and use them effectively in real cases: Conversational, Observational, Analytic and Synthetic, one by one.
1) As conversation is a natural and effective way of communication, the conversational methods are used massively. Conversational methods include techniques such as interviews, questionnaires and brainstorming.
Advantages of Conversational Methods: Conversational techniques are really helpful for collecting rich information about the requirements. Along with the requirements, conversational methods uncover the opinions, feelings and goals of different individuals. With the help of follow-up questions to what the person has told you, it is easy to dig into the details.
Disadvantages of Conversational Methods: Along with the many advantages, there are certain disadvantages of conversational methods, as this skill is very hard to master. Conversational methods for requirement elicitation depend a lot on the behavior and attitude of the conductor [4], who is supposed to be neutral. A conversational method yields a large collection of information, and extracting meaningful information from it can be difficult. In conversational methods the context of the conversation plays a very important role as well.
2) Observational methods are helpful in understanding the application domain by observing human activities. Observational methods are inefficient when the project has a very tight schedule at the requirements stage. Methods like ethnography and protocol analysis fall under this category [22]. The observational methods involve: Social analysis, Observation, Ethnographic study and Protocol analysis.

Advantages of Observational Methods: The observational methods are a good choice for uncovering basic aspects of routine work. Moreover, they provide vital information for designing the solution. Observational methods are very handy when the development team lacks experience with the product domain.
Disadvantages of Observational Methods: Along with the advantages of observational methods there are certain disadvantages as well. The biggest disadvantage is that observational methods need a lot of time, so these techniques are not a good choice when the schedule is tight. Just like conversational techniques, observational techniques are also hard to master [10]. Moreover, observational techniques require sensitivity and responsiveness to the physical environment.
3) Conversational and observational methods are used to extract requirements directly from people's behavior and their verbalized thoughts. But there is still a lot of knowledge that is not directly expressed. For extracting this kind of knowledge and information, analytic methods are used. Analytic methods include Requirement Reuse, Documentation Studies, Laddering and Repertory Grids.
Advantages of Analytic Methods: Analytic methods have numerous advantages, as people are not the only source of information about requirements: experts' knowledge and opinion play an important role in requirement maturity. Moreover, reuse of already available information saves time and cost. Analytic methods also have a hierarchical flow of information.
Disadvantages of Analytic Methods: Along with the advantages, analytic methods have certain disadvantages as well. The biggest disadvantage is that an analytic method requires some empirical data, documentation or experts' opinions; without these it is difficult to elicit proper requirements. Similarly, analytic methods can narrow the vision of the product. As analytic methods deal with earlier knowledge, the possibility of replicating earlier errors is a serious and constant threat. Analytic methods are never a good choice when you are going to develop an altogether new system. [12]
2.3 Quality Attribute Workshops (QAW)
The Quality Attribute Workshop (QAW) is a facilitated method that engages system stakeholders early in the life cycle to discover the driving quality attributes of a software-intensive system. The QAW was developed to complement the Architecture Tradeoff Analysis


Method (ATAM) and provides a way to identify important quality attributes and clarify system requirements before the software architecture has been created.
This is the third edition of a technical report describing the QAW. We have narrowed the scope of a QAW to the creation of prioritized and refined scenarios. This report describes the newly revised QAW and describes potential uses of the refined scenarios generated during it.
The QAW is a facilitated method that engages system stakeholders early in the system development life cycle to discover the driving quality attributes of a software-intensive system. The QAW is system-centric and stakeholder focused; it is used before the software architecture has been created. The QAW provides an opportunity to gather stakeholders together to provide input about their needs and expectations with respect to key quality attributes that are of particular concern to them.

Both the system and software architectures are key to realizing quality attribute requirements in the implementation. Although an architecture cannot guarantee that an implementation will meet its quality attribute goals, the wrong architecture will surely spell disaster. As an example, consider security. It is difficult, maybe even impossible, to add effective security to a system as an afterthought. Components as well as communication mechanisms and paths must be designed or selected early in the life cycle to satisfy security requirements. The critical quality attributes must be well understood and articulated early in the development of a system, so the architect can design an architecture that will satisfy them. The QAW is one way to discover, document, and prioritize a system's quality attributes early in its life cycle.
It is important to point out that we do not aim at an absolute measure of quality; rather, our purpose is to identify scenarios from the point of view of a diverse group of stakeholders (e.g., architects, developers, users, sponsors). These scenarios can then be used by the system engineers to analyze the system's architecture and identify concerns (e.g., inadequate

performance, successful denial-of-service attacks) and possible mitigation strategies (e.g., prototyping, modeling, simulation).
QAW Method
The QAW is a facilitated, early intervention method used to generate, prioritize, and refine quality attribute scenarios before the software architecture is completed. The QAW is focused on system-level concerns and specifically the role that software will play in the system. The QAW is dependent on the participation of system stakeholders: individuals on whom the system has significant impact, such as end users, installers, administrators (of database management systems [DBMS], networks, help desks, etc.), trainers, architects, acquirers, system and software engineers, and others. The group of stakeholders present during any one QAW should number at least 5 and no more than 30 for a single workshop. In preparation for the workshop, stakeholders receive a participant's handbook providing example quality attribute taxonomies, questions, and scenarios. If time allows, the handbook should be customized to the domain of the system and contain the quality attributes, questions, and scenarios that are appropriate to the domain and the level of architectural detail available.
The contribution of each stakeholder is essential during a QAW; all participants are expected to be fully engaged and present throughout the workshop. Participants are encouraged to comment and ask questions at any time during the workshop. However, it is important to recognize that facilitators may occasionally have to cut discussions short in the interest of time or when it is clear that the discussion is not focused on the required QAW outcomes. The QAW is an intense and demanding activity. It is very important that all participants stay focused, are on time, and limit side discussions throughout the day.
The QAW involves the following steps:
1. QAW Presentation and Introductions
2. Business/Mission Presentation
3. Architectural Plan Presentation
4. Identification of Architectural Drivers
5. Scenario Brainstorming
6. Scenario Consolidation
7. Scenario Prioritization
8. Scenario Refinement
The following sections describe each step of the QAW in detail.
Step 1: QAW Presentation and Introductions

In this step, QAW facilitators describe the motivation for the QAW and explain each step of the method. We recommend using a standard slide presentation that can be customized depending on the needs of the sponsor.
Next, the facilitators introduce themselves and the stakeholders do likewise, briefly stating their background, their role in the organization, and their relationship to the system being built.
Step 2: Business/Mission Presentation
After Step 1, a representative of the stakeholder community presents the business and/or mission drivers for the system. The term "business and/or mission drivers" is used carefully here. Some organizations are clearly motivated by business concerns such as profitability, while others, such as governmental organizations, are motivated by mission concerns and find profitability meaningless. The stakeholder representing the business and/or mission concerns (typically a manager or management representative) spends about one hour presenting
the system's business/mission context
high-level functional requirements, constraints, and quality attribute requirements
During the presentation, the facilitators listen carefully and capture any relevant information that may shed light on the quality attribute drivers. The quality attributes that will be refined in later steps will be derived largely from the business/mission needs presented in this step.
Step 3: Architectural Plan Presentation
While a detailed system architecture might not exist, it is possible that high-level system descriptions, context drawings, or other artifacts have been created that describe some of the system's technical details. At this point in the workshop, a technical stakeholder will present the system architectural plans as they stand with respect to these early documents. Information in this presentation may include
plans and strategies for how key business/mission requirements will be satisfied
key technical requirements and constraints, such as mandated operating systems, hardware, middleware, and standards, that will drive architectural decisions
presentation of existing context diagrams, high-level system diagrams, and other written descriptions
Step 4: Identification of Architectural Drivers
During Steps 2 and 3, the facilitators capture information regarding architectural drivers that are key to realizing quality attribute goals in the system. These drivers often include high-level requirements, business/mission concerns, goals and objectives, and various quality attributes. Before undertaking this step, the facilitators should excuse the group for a 15-minute break, during which they will caucus to compare and consolidate the notes taken during Steps 2 and 3.
When the stakeholders reconvene, the facilitators will share their list of key architectural drivers and ask the stakeholders for clarifications, additions, deletions, and corrections. The idea is to reach a consensus on a distilled list of architectural drivers that include high-level requirements, business drivers, constraints, and quality attributes. The final list of architectural drivers will help focus the stakeholders during scenario brainstorming to ensure that these concerns are represented by the scenarios collected.
Step 5: Scenario Brainstorming
After the architectural drivers have been identified, the facilitators initiate the brainstorming process in which stakeholders generate scenarios. The facilitators review the parts of a good scenario (stimulus, environment, and response) and ensure that each scenario is well formed during the workshop.
Each stakeholder expresses a scenario representing his or her concerns with respect to the system in round-robin fashion. During a nominal QAW, at least two round-robin passes are made so that each stakeholder can contribute at least two scenarios. The facilitators ensure that at least one representative scenario exists for each architectural driver listed in Step 4. Scenario generation is a key step in the QAW method and must be carried out with care. We suggest the following guidance to help QAW facilitators during this step:
1. Facilitators should help stakeholders create well-formed scenarios. It is tempting for stakeholders to recite requirements such as "The system shall produce reports for users." While this is an important requirement, facilitators need to ensure that the quality attribute aspects of this requirement are explored further. For example, the following scenario sheds more light on the performance aspect of this requirement: "A remote user requests a database report via the Web during peak usage and receives the report within five seconds." Note that the initial requirement hasn't been lost, but the scenario further explores the performance aspect of this requirement. Facilitators should note that quality attribute names by themselves are not enough. Rather than say "the system shall be modifiable," the scenario should describe what it means to be modifiable by providing a specific example of a modification to the system vis-à-vis a scenario.
2. The vocabulary used to describe quality attributes varies widely. Heated debates often revolve around which quality attribute a particular system property belongs to. It doesn't matter what we call a particular quality attribute, as long as there's a scenario that describes what it means.
3. Facilitators need to remember that there are three general types of scenarios and to ensure that each type is covered during the QAW:
a. use case scenarios - involving anticipated uses of the system
b. growth scenarios - involving anticipated changes to the system
c. exploratory scenarios - involving unanticipated stresses to the system that can include uses and/or changes
4. Facilitators should refer to the list of architectural drivers generated in Step 4 from time to time during scenario brainstorming to ensure that representative scenarios exist for each one.
Step 6: Scenario Consolidation
After the scenario brainstorming, similar scenarios are consolidated when reasonable. To do that, facilitators ask stakeholders to identify those scenarios that are very similar in content. Scenarios that are similar are merged, as long as the people who proposed them agree and feel that their scenarios will not be diluted in the process. Consolidation is an important step because it helps to prevent a dilution of votes during the prioritization of scenarios (Step 7). Such a dilution occurs when stakeholders split their votes between two very similar scenarios. As a result, neither scenario rises to importance, and therefore neither is refined (Step 8). However, if the two scenarios are similar enough to be merged into one, the votes might be concentrated, and the merged scenario may then rise to the appropriate level of importance and be refined further. Facilitators should make every attempt to reach a majority consensus with the stakeholders before merging scenarios. Though stakeholders may be tempted to merge scenarios with abandon, they should not do so. In actuality, very few scenarios are merged.
Step 7: Scenario Prioritization
Prioritization of the scenarios is accomplished by allocating each stakeholder a number of votes equal to 30% of the total number of scenarios generated after consolidation. The actual number of votes allocated to stakeholders is rounded to an even number at the discretion of the facilitators. For example, if 30 scenarios were generated, each stakeholder gets 30 × 0.3 = 9 votes, rounded up to 10. Voting is done in round-robin fashion, in two passes. Stakeholders can allocate any number of their votes to any scenario or combination of scenarios. The votes are counted, and the scenarios are prioritized accordingly.
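The allocation rule above can be sketched in a few lines. This is a minimal sketch; the rounding policy (up to the next even number) is one reading of the guidance, since the method leaves the exact rounding to the facilitators:

```python
import math

def votes_per_stakeholder(num_scenarios: int) -> int:
    """30% of the consolidated scenario count, rounded up to the
    next even number (one interpretation of the QAW guidance)."""
    votes = math.ceil(num_scenarios * 3 / 10)  # 30 scenarios -> 9
    if votes % 2 == 1:                         # round odd counts up to even
        votes += 1                             # 9 -> 10
    return votes

print(votes_per_stakeholder(30))  # 10
```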
Step 8: Scenario Refinement

After the prioritization, depending on the amount of time remaining, the top four or five scenarios are refined in more detail. Facilitators further elaborate each one, documenting the following:
Further clarify the scenario by clearly describing the following six things:
1. stimulus - the condition that affects the system
2. response - the activity that results from the stimulus
3. source of stimulus - the entity that generated the stimulus
4. environment - the condition under which the stimulus occurred
5. artifact stimulated - the artifact that was stimulated
6. response measure - the measure by which the system's response will be evaluated
Describe the business/mission goals that are affected by the scenario.
Describe the relevant quality attributes associated with the scenario.
Allow the stakeholders to pose questions and raise any issues regarding the scenario. Such questions should concentrate on the quality attribute aspects of the scenario and any concerns that the stakeholders might have in achieving the response called for in the scenario. See the example template for scenario refinement in Appendix A. This step continues until time runs out or the highest priority scenarios have been refined. Typically, time runs out first.
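The six parts of a refined scenario map naturally onto a simple record. A minimal Python sketch (the class, field names, and example values are illustrative, not part of the QAW method), using the report scenario from Step 5:

```python
from dataclasses import dataclass

@dataclass
class RefinedScenario:
    """The six parts of a refined quality attribute scenario."""
    stimulus: str          # condition that affects the system
    response: str          # activity that results from the stimulus
    source: str            # entity that generated the stimulus
    environment: str       # condition under which the stimulus occurred
    artifact: str          # artifact that was stimulated
    response_measure: str  # how the system's response will be evaluated

# The report-generation scenario broken into its six parts:
report = RefinedScenario(
    stimulus="user requests a database report",
    response="system generates and returns the report",
    source="remote user via the Web",
    environment="peak usage",
    artifact="report subsystem",
    response_measure="report delivered within five seconds",
)
print(report.response_measure)
```

Recording refined scenarios in a structured form like this makes it easier to carry them forward into architecture documentation and test cases.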
QAW Benefits
The QAW provides a forum for a wide variety of stakeholders to gather in one room at one time very early in the development process. It is often the first time such a meeting takes place and generally leads to the identification of conflicting assumptions about system requirements. In addition to clarifying quality attribute requirements, the QAW provides increased stakeholder communication, an informed basis for architectural decisions, improved architectural documentation, and support for analysis and testing throughout the life of the system.
The results of a QAW include
a list of architectural drivers
the raw scenarios
the prioritized list of raw scenarios
the refined scenarios
This information can be used to
update the organizations architectural vision
refine system and software requirements
guide the development of prototypes
exercise simulations
understand and clarify the system's architectural drivers
influence the order in which the architecture is developed
describe the operation of a system
In short, the architect can use this information to design the architecture. In addition, after the architecture is created, the scenarios can be used as part of a software architecture evaluation. If the Architecture Tradeoff Analysis Method (ATAM) is selected as the software architecture evaluation method, the scenarios generated during the QAW can be incorporated as seed scenarios in that evaluation.
The QAW lends itself well to the capture of many architecturally relevant materials. Software architectural documentation is a collection of view packets plus any documentation that applies to more than one view [Clements 02b]. Each view packet contains a primary presentation, a catalog of the view's elements (including element behavior), a context diagram, a variability guide, architecture background (rationale, analysis results, and assumptions about the environment), and other information including mapping to requirements.
Several pieces of this information will be gleaned directly from the QAW. For example, scenario generation can lead to the creation of use case diagrams, context diagrams, or their equivalent. Refined scenarios can be documented as sequence diagrams or collaboration diagrams. Stakeholders' concerns and any other rationale information that is captured should be recorded individually in a form that can be included in the appropriate view packet or overview documentation. Details that explain how to transition these artifacts into architectural documentation are the subject of ongoing research. In addition to the more immediate benefits cited above, the scenarios continue to provide benefits during later phases of development. They provide input for analysis throughout the life of the system and can be used to drive test case development during implementation testing.
2.4 Analysis, prioritization, and trade-off: Architecture Centric Development Method (ACDM)
Just as blueprints in the building construction industry guide the construction of a building, the software architecture serves as a blueprint that addresses the technical concerns and programmatic issues of a project. An architectural focus will:
help refine the functional requirements, quality attribute requirements, and constraints
help refine the functional requirements, quality attribute requirements, and constraints
help set and maintain expectations in stakeholders
define the team structure
aid in creating more accurate project estimates
establish the team vocabulary
help identify technical risk early
guide the creation of a more realistic and accurate production schedule and assist in
project tracking and oversight
provide an early vision of the solution/system
A number of methods have been created by the Software Engineering Institute to help practitioners create better architectures. These include the Quality Attribute Workshop (QAW), the Architecture Tradeoff Analysis Method (ATAM), and Attribute Driven Design (ADD). These methods have provided great value to practitioners trying to build better architectures. However, they have two main problems. First, they are intervention oriented. These methods were not designed with a particular development philosophy (lifecycle or process) in mind. As such, they do not fit neatly into existing development models or processes without significant tailoring.
Little guidance exists that describes how to tailor these methods to fit into an organization's development model. To maximize their effectiveness, these methods should be used together, and this requires significant tailoring. In order to tailor these methods, someone in an organization has to know a great deal about each of them in order to tease them apart and reassemble them into a cohesive, usable development method/process. This is a risky and difficult proposition in many organizations. The second problem with these methods is that in their originally authored form they tend to be heavy-weight and expensive for smaller teams, smaller projects, short deadlines, and iterative deliveries. These two hurdles have prevented many organizations in industry from embracing these methods and, more importantly, adopting the entire body of work.
Organizations are constantly bombarded with emerging methods, tools, and techniques, and they must:
figure out whether they are useful
learn how to use them
work out how to make them fit together
estimate the costs of adoption
show a return on investment
After 20 years of process model promises, this is a tough sell in most organizations. Just as technological components can have mismatch, so can processes, methods, and tools when we try to bring them together in an organization. Software development teams need specific guidance about how to create software architecture in the context of a product development lifecycle. ACDM brings together some of the best practices into a lifecycle development model. The key goals of ACDM are to help software development teams:
Get the information from stakeholders needed to define the architecture as early as possible.
Create, refine, and update the architecture in an iterative way throughout the lifecycle, whether the lifecycle is waterfall or iterative.
Validate that the architecture will meet expectations once implemented.
Define meaningful roles for team members to guide their efforts.
Create better estimates and schedules based on the architectural blueprint.
Provide insight into project performance.
Establish a lightweight, scalable, tailorable, repeatable process framework.
The ACDM is geared toward organizations and teams building software-intensive systems and puts the software architecture front-and-center during all phases of the project. The method prescribes creating a notional architecture as soon as the most preliminary requirements work has been completed. The architecture is developed early and iteratively refined as a central focus of the project. The architecture is refined until the development team is confident that a system can be implemented that will meet the needs of the stakeholder community. In ACDM, the architecture is the locus for defining all subsequent processes, planning, activities, and artifacts. A precondition for beginning ACDM is defining roles for all of the team members. The method describes several roles and their responsibilities. The ACDM essentially follows seven prescribed stages, briefly described below.

While ACDM emerged from small teams and projects (4 to 6 team members, 1 to 2 year
projects), it is designed to scale up to meet the needs of larger teams and projects as well. In
larger projects, the ACDM is used by a core architecture team to create and refine the overall
system architecture. The output from this ACDM cycle is an initial partitioning of the system
(or system of systems) into sub-elements (or subsystems) and their interactions. Detailed
architecting of the various elements is deferred to smaller teams, each using ACDM to
architect their part of the system (which may be another system). Later integration of the
entire system is undertaken in production stages 6 and 7. The ACDM has evolved over a five-year period (since 1999) on small projects and is now being further refined for use on larger projects in industry.
ACDM Preconditions
A precondition to beginning step 1 of ACDM is to establish the team roles for project.
The recommended roles and responsibilities for ACDM are listed in the table below:

The ACDM also assumes that the functional requirements and constraints exist but does not discuss in detail how to get, document, and organize them. This may seem somewhat naive, but it is intentional, since requirement gathering, documenting, and organization vary widely even in our small studio projects. While ACDM does not address the gathering of initial requirements and constraints, it will help refine and clarify them as the architecture is designed and matures. The relative completeness of the functional requirements varies from project to project, and requirements may have to be discovered and refined as a consequence of building the system. Some clients provide a documented list of functional requirements; others just bring ideas to the team. The initial gathering of functional requirements is assumed to have occurred prior to beginning step 1 of ACDM. The requirements engineer will coordinate the gathering and documenting of functional requirements. The term "constraints" as applied in this context can be confusing. A constraint is an imposed design decision, or a design decision that the architect is not at liberty to make or change. Example constraints include being forced to use a particular operating system, use a particular commercial off-the-shelf product, adhere to a particular standard, or build a system using a prescribed implementation framework.
2.5 Requirements documentation and specification
A software requirements specification (SRS), a requirements specification for a software system, is a description of the behavior of a system to be developed and may include a set of use cases that describe interactions the users will have with the software. In addition, it also contains non-functional requirements. Non-functional requirements impose constraints on the design or implementation (such as performance engineering requirements, quality standards, or design constraints).
A software requirements specification establishes the basis for agreement between customers and contractors or suppliers (in market-driven projects, these roles may be played by the marketing and development divisions) on what the software product is to do as well as what it is not expected to do. It permits a rigorous assessment of requirements before design can begin and reduces later redesign. It should also provide a realistic basis for estimating product costs, risks, and schedules.
The software requirements specification document lists the necessary and sufficient requirements for project development. To derive the requirements, we need a clear and thorough understanding of the product to be developed. This understanding is achieved and refined through detailed and continuous communication with the project team and the customer until the completion of the software.
2.6 Change management
Globalization and the constant innovation of technology result in a constantly evolving business environment. Phenomena such as social media and mobile adaptability have revolutionized business, and the effect of this is an ever-increasing need for change, and therefore change management. The growth in technology also has a secondary effect of increasing the availability, and therefore the accountability, of knowledge. Easily accessible information has resulted in unprecedented scrutiny from stockholders and the media, and pressure on management.
With the business environment experiencing so much change, organizations must then learn
to become comfortable with change as well. Therefore, the ability to manage and adapt to
organizational change is an essential ability required in the workplace today. Yet, major and
rapid organizational change is profoundly difficult because the structure, culture, and
routines of organizations often reflect a persistent and difficult-to-remove "imprint" of past
periods, which are resistant to radical change even as the current environment of the
organization changes rapidly.[10]
Due to the growth of technology, modern organizational change is largely motivated by
exterior innovations rather than internal moves. When these developments occur, the
organizations that adapt quickest create a competitive advantage for themselves, while the
companies that refuse to change get left behind. This can result in drastic profit and/or
market share losses.
Organizational change directly affects all departments from the entry level employee to
senior management. The entire company must learn how to handle changes to the
organization.

Choosing what changes to implement


When determining which of the latest techniques or innovations to adopt, there are four
major factors to be considered:
1. Levels, goals, and strategies
2. Measurement system
3. Sequence of steps
4. Implementation and organizational change
Managing the change process
Regardless of the many types of organizational change, the critical aspect is a company's ability to win the buy-in of its employees on the change. Effectively managing organizational change is a four-step process:
1. Recognizing the changes in the broader business environment
2. Developing the necessary adjustments for the company's needs
3. Training employees on the appropriate changes
4. Winning the support of the employees with the persuasiveness of the appropriate adjustments
As a multi-disciplinary practice that has evolved as a result of scholarly research,
organizational change management should begin with a systematic diagnosis of the current
situation in order to determine both the need for change and the capability to change. The
objectives, content, and process of change should all be specified as part of a Change
Management plan.
Change management processes should include creative marketing to enable communication between changing audiences, as well as deep social understanding of leadership styles and group dynamics. As a visible track on transformation projects, organizational change management aligns groups' expectations, communicates, integrates teams, and manages people training. It makes use of performance metrics, such as financial results, operational efficiency, leadership commitment, communication effectiveness, and the perceived need for change, to design appropriate strategies, in order to avoid change failures or resolve troubled change projects.
Successful change management is more likely to occur if the following are included:
1. Benefits management and realization to define measurable stakeholder aims, create a
business case for their achievement (which should be continuously updated), and
monitor assumptions, risks, dependencies, costs, return on investment, dis-benefits
and cultural issues affecting the progress of the associated work

2. Effective communication that informs various stakeholders of the reasons for the
change (why?), the benefits of successful implementation (what is in it for us, and
you) as well as the details of the change (when? where? who is involved? how much
will it cost? etc.)
3. An effective education, training, and/or skills upgrading scheme for the organization
4. Countering resistance from employees and aligning them to the overall strategic direction of the organization
5. Personal counseling (if required) to alleviate any change-related fears
6. Monitoring of the implementation and fine-tuning as required
Examples
Mission changes
Strategic changes
Operational changes (including Structural changes)
Technological changes
Changing the attitudes and behaviors of personnel
Personality Wide Changes
2.7 Traceability of requirements
Traceability is the ability to verify the history, location, or application of an item by means of documented recorded identification.
Other common definitions include the capability (and implementation) of keeping track of a given set or type of information to a given degree, or the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.
Measurement
The term "measurement traceability" refers to an unbroken chain of comparisons relating an instrument's measurements to a known standard. Calibration to a traceable standard can be used to determine an instrument's bias, precision, and accuracy. It may also be used to show a chain of custody - from the current interpretation of evidence to the actual evidence in a legal context, or the history of handling of any information.
In many countries, national standards for weights and measures are maintained by a National Measurement Institute (NMI), which provides the highest level of standards for the calibration/measurement traceability infrastructure in that country. Examples of government agencies include the National Physical Laboratory (NPL) in the UK, the National Institute of Standards and Technology (NIST) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the Istituto Nazionale di Ricerca Metrologica (INRiM) in Italy. As defined by NIST, "Traceability of measurement requires the establishment of an unbroken chain of comparisons to stated references each with a stated uncertainty."
Logistics
In logistics, traceability refers to the capability for tracing goods along the distribution chain on a batch number or serial number basis. Traceability is an important aspect, for example, in the automotive industry, where it makes recalls possible, or in the food industry, where it contributes to food safety.
The international standards organization EPCglobal under GS1 has ratified the EPCglobal
Network standards (especially the EPC Information Services EPCIS standard) which codify
the syntax and semantics for supply chain events and the secure method for selectively
sharing supply chain events with trading partners. These standards for traceability have been
used in successful deployments in many industries and there are now a wide range of
products that are certified as being compatible with these standards.
Materials
In materials, traceability refers to the capability to associate a finished part with destructive
test results performed on material from the same ingot with the same heat treatment, or to
associate a finished part with results of a test performed on a sample from the same melt
identified by the unique lot number of the material. Destructive tests typically include
chemical composition and mechanical strength tests. A heat number is usually marked on the
part or raw material which identifies the ingot it came from, and a lot number may identify
the group of parts that experienced the same heat treatment (i.e., were in the same oven at the
same time). Material traceability is important to the aerospace, nuclear, and process industry
because they frequently make use of high strength materials that look identical to
commercial low strength versions. In these industries, a part made of the wrong material is
called "counterfeit," even if the substitution was accidental.
Supply chain
In the supply chain, traceability is more of an ethical or environmental issue.
Environmentally friendly retailers may choose to make information regarding their supply
chain freely available to customers, illustrating the fact that the products they sell are
manufactured in factories with safe working conditions, by workers that earn a fair wage,
using methods that do not damage the environment.
Software development
In software development, the term traceability (or Requirements Traceability) refers to the
ability to link product requirements back to stakeholders' rationales and forward to
corresponding design artifacts, code, and test cases. Traceability supports numerous software
engineering activities such as change impact analysis, compliance verification or traceback
of code, regression test selection, and requirements validation. It is usually accomplished in
the form of a matrix created for the verification and validation of the project. Unfortunately
the practice of constructing and maintaining a requirements trace matrix (RTM) can be very
arduous and over time the traces tend to erode into an inaccurate state unless date/time
stamped. Alternate automated approaches for generating traces using information
retrieval methods have been developed.
In transaction processing software, traceability implies use of a unique piece of data (e.g.,
order date/time or a serialized sequence number) which can be traced through the entire
software flow of all relevant application programs. Messages and files at any point in the
system can then be audited for correctness and completeness, using the traceability key to
find the particular transaction. This is also sometimes referred to as the transaction footprint.
Food processing
In food processing (meat processing, fresh produce processing), the term traceability refers
to the recording, by means of barcodes, RFID tags, or other tracking media, of all
movement of product and steps within the production process. One of the key reasons this is
such a critical point is in instances where an issue of contamination arises, and a recall is
required. Where traceability has been closely adhered to, it is possible to identify, by precise
date/time & exact location which goods must be recalled, and which are safe, potentially
saving millions of dollars in the recall process. Traceability within the food processing
industry is also utilised to identify key high production & quality areas of a business, versus
those of low return, and where points in the production process may be improved.
In food processing software, traceability systems imply the use of a unique piece of data
(e.g., order date/time or a serialized sequence number, generally through the use of
a barcode / RFID) which can be traced through the entire production flow, linking all sections
of the business, including suppliers & future sales through the supply chain. Messages and
files at any point in the system can then be audited for correctness and completeness, using
the traceability software to find the particular transaction and/or product within the supply
chain.
Forest products
Within the context of supporting legal and sustainable forest supply chains, traceability has
emerged in the last decade as a new tool to verify claims and assure buyers about the source
of their materials. Mostly led out of Europe, and targeting countries where illegal logging has
been a key problem (FLEGT countries), timber tracking is now part of daily business for
many enterprises and jurisdictions. Full traceability offers advantages for multiple partners
along the supply chain beyond certification systems, including:
o Mechanism to comply with local and international policies and regulations.
o Reducing the risk of illegal or non-compliant material entering the supply chains.
o Providing coordination between authorities and relevant bodies.
o Allowing automatic reconciliation of batches and volumes available.
o Offering a method of stock control and monitoring.
o Triggering real-time alerts of non-compliance.
o Reducing the likelihood of recording errors.
o Improving effectiveness and efficiency.
o Increasing transparency.
o Promoting company integrity.

UNIT III
ESTIMATION, PLANNING AND TRACKING
3.1 Identifying and Prioritizing Risks
Risk management is the formal process by which risk factors are systematically identified,
assessed, and responded to. It concentrates on identifying and controlling areas or events
that have the potential of causing unwanted change. (Note that opportunities, also known as
positive risk, should also be managed and exploited. This document focuses on mitigating
negative risk, rather than maximizing positive risk.)
Definitions, Acronyms, and Abbreviations
Risk: A potential undesirable and unplanned event or circumstance, anticipated in advance,
which could prevent the project from meeting one or more of its objectives.
Issue: An event or circumstance that has occurred with project impact that needs to be
managed and resolved, with escalation if appropriate.
Task / Action Item: Work packages from the Work Breakdown Structure (WBS) or work
resulting from project meetings or conversations.
Risk Management Approach
The project team will implement a continuous risk management process which entails two
major processes: risk assessment and risk mitigation.
Risk assessment includes activities to identify, analyze, and prioritize risks. Risk mitigation
includes developing risk contingency and mitigation strategies, as well as monitoring the
impact of the issue, action items, strategies and residual risks.

[Process flow: Risk Identification → Risk Analysis and Prioritization → Risk Response
Planning → Risk Monitoring and Control, with Communication at the centre]
Risk Tolerance
The company has a very low threshold for risks to:
o The client experience
o The experience of users who directly support the client
o Non-public information (NPI)
o Potential for fraud or loss related to insufficient control or security
Risk Management Tasks
Risk Management activities are documented in the Risk Management workbook. The
workbook is used to identify, prioritize, analyze, and plan a risk response.
Risk Identification: The process of determining which risks may affect the project and
documenting their characteristics.
Risk Assessment: The Risk Assessment and Mitigation tab in the Risk
Management workbook has a set of questions that need to be answered to help
determine the risk level of the project. Each question has a potential rating of High,
Medium, or Low in terms of potential impact.
Risk Register: This is located on the project's SharePoint site where project-specific
risks can be entered. All risks identified through any means should be
entered individually in the Risk Register on SharePoint. Like all company
documentation, discretion should be used in documenting risk: all statements
should be fact-based and conclusions should be reviewed by management (and, if
appropriate, Legal). Risks should be stated in a standard format, to help the team
stay focused on risks versus root causes and results: Cause → Risk → Effect.
o Cause: specific situation that introduces risk
o Risk: uncertain event that can impact the project
o Effect: potential consequences of the risk occurring
Example: A shortage of skilled Business Analysts (cause) could result in many missed
requirements (risk), leading to rework or customer dissatisfaction (effect).
Risk Analysis: The process of analyzing and prioritizing risk. The analyzing and prioritizing
of risks is done in the Risk Management Workbook on the Risk Assessment-Mitigation tab
and in the Risk Register. Risks are prioritized as High, Medium or Low. The prioritization
of risks determines other steps that may need to happen.
Risk Response Planning: The process of developing options and actions to enhance
opportunities and to reduce threat to project objectives. Mitigating actions are documented
on the Risk Assessment and Mitigation tab in the Risk Management workbook and in the
Risk Register. If a risk is prioritized as High, then mitigating actions must be documented
(area is unshaded). If a risk is prioritized as Medium, then mitigating actions are
recommended, but not required. If a risk is prioritized as Low, then mitigating actions are not
required.
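The risk register conventions above (the Cause → Risk → Effect statement format and the rule that only High-priority risks must carry documented mitigating actions) can be sketched in code. This is an illustrative model only; the class and field names are invented, not part of the Risk Management workbook:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    cause: str          # specific situation that introduces risk
    risk: str           # uncertain event that can impact the project
    effect: str         # potential consequence if the risk occurs
    priority: str       # "High", "Medium", or "Low"
    mitigations: list = field(default_factory=list)

    def statement(self) -> str:
        """Render the standard Cause -> Risk -> Effect wording."""
        return (f"{self.cause} (cause) could result in {self.risk} (risk), "
                f"leading to {self.effect} (effect).")

    def mitigation_required(self) -> bool:
        """Only High-priority risks require documented mitigating actions."""
        return self.priority == "High"

entry = RiskEntry(
    cause="A shortage of skilled Business Analysts",
    risk="many missed requirements",
    effect="rework or customer dissatisfaction",
    priority="High",
)
print(entry.statement())
print("Mitigation required:", entry.mitigation_required())
```

For a Medium- or Low-priority entry, `mitigation_required()` returns False, matching the rule that mitigating actions are recommended or optional at those levels.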
3.2 Risk Mitigation Plans
Mitigating Actions to Consider
o Risk Avoidance - Actions taken to eliminate the source of risk (e.g. change
vendor, lower requirements, change project team member, etc.)
o Risk Mitigation - Actions taken to mitigate the severity and consequences of a
risk (e.g. greater training, delayed deployment, etc.)
o Risk Transfer - The transfer of risk from one group to another (e.g. purchasing
insurance, etc.)
o Risk Monitoring - The monitoring and periodic reevaluation of a risk for
changes to key risk parameters
o Risk Acceptance - Acknowledging the risk but not taking preventive measures
Risk-related change to Scope/Time/Cost
The risk response planning process may result in a decision to avoid a risk by
changing the project, or to mitigate a risk by taking action to lessen the probability
and/or impact in the event the risk occurs. Whenever risk response planning results in
a potential change to the project, that change must first be requested, analyzed and
approved in accordance with the project's Change Management Plan and related
processes.
Risk Monitoring and Control: The process of implementing risk response plans, tracking
identified risks, monitoring residual risks, identifying new risk, and evaluating risk process
effectiveness throughout the project.
Monitoring Risks: Project teams should review project risks on a regular basis to
determine if there are any new project risks and whether any actions are needed
(if a risk turns into an issue).
Escalation: If the overall project risk is:
o Low: Project risks are low and therefore no additional review needs to occur.
o Medium: Project risks should be reviewed on a monthly basis by the Business
Owner, Technical Owner and core project team.
o High: Project risks should be reviewed on a monthly basis by the Project
Sponsor and Project Steering Committee.
3.3 Estimation Techniques
Estimation of software projects can be done by different techniques. The important
techniques are:
1. Estimation by Expert Judgement
2. Estimation by Analogy
3. Estimation by Available Resources
4. Estimation by Software Price
5. Estimation by Parametric Modeling
3.4 Use Case Points
Use Case Points are used as an analysis-phase technique for estimating software
development. Assuming the Business Analyst (BA) composes system use cases to
describe functional requirements, the BA can use this technique for estimating the
follow-on implementation effort. This article reviews the process of estimating the follow-on
development effort for use cases. Use Case Points are derived from another software
development estimating technique called Function Points. However, Function Points are
used by systems analysts as a development-phase technique that requires technical detail for
estimating.
Use Case Points
o Conducted by business analysts during the analysis phase of a project
o Can be classified as a class IV or III estimate (4)
o Is based on information contained in a business requirement document:
o Functional requirements as modeled via system use cases, actors, and scenarios
o Nonfunctional requirements
o Assumptions on developmental risks
Calculating Use Case Points
To use this estimating technique, the business analyst uses four tables and four formulas.
Tables
Two tables represent functional requirements as described by system use cases. Note
that both of these tables are entitled "unadjusted" since the totals from these tables need
to be adjusted by the work entailed by nonfunctional requirements and the risk due to
the characteristics of the development team.
o Unadjusted Actor Points
o Unadjusted Scenario Points
o Nonfunctional Requirements Adjustment Table
o Developmental Risk Adjustment Table


Formulas
o Nonfunctional Adjustment Factor
o Developmental Risk Adjustment Factor
o Total Adjustment Factor
o Total Unadjusted Use Case Points
o Total Adjusted Use Case Points
o Development Work Estimate, excluding project management, infrastructure and other
supporting work, and testing (there has been some effort to include testing using a
similar method with historical average testing hours per use case point (5))
Step 1 - Functional Requirement Tables
The first table (Table 1) considers the system use case actors. The purpose of the
Unadjusted Actor Point Table is to classify and score different types of system use case
actors. System use case actor roles can be portrayed by internal systems, external
systems, or people/time/sensors depending on the application. Respectively, these role types
can be classified on the Karner weighted scale: simple = 1, average = 2, complex = 3. For
example:

Actor Scale   Weight   Actor Role Type                Number of Actors   Unadjusted Points (weight x number)
Simple        1        Internal System Application    3                  3
Average       2        External System Application    3                  6
Complex       3        Person, Time, Sensor           6                  18
Total Unadjusted Actor Points                                            27

Table 1. Unadjusted Actor Points


The next table (Table 2) considers the system use cases. The purpose of the
Unadjusted Use Case Point Table is to classify and score use cases on the basis of
the number of dialogue exchanges. A dialogue exchange is defined as an actor request and a
system response. Note that even though the system may have different responses to the same
actor request, for example an error message, it is still considered a single dialogue exchange.
These exchanges can be classified on a Karner weighted scale of simple (1-3 exchanges),
average (4-7 exchanges), and complex (over 7 exchanges).
For example:
Use Case Scale   Weight   Dialogue Exchanges          Number of Use Cases   Unadjusted Points (weight x number of use cases)
Simple           5        1-3 dialogue exchanges      3                     15
Average          10       4-7 dialogue exchanges      10                    100
Complex          15       Over 7 dialogue exchanges   6                     90
Total Unadjusted Scenario Points                                            205

Table 2. Unadjusted Use Case Points
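The Karner classification rules behind Tables 1 and 2 can be expressed as small helper functions. This is a sketch assuming only the weights stated above; the function names are my own:

```python
def actor_weight(role_type: str) -> int:
    # Karner weights for system use case actors (Table 1):
    # internal system = simple (1), external system = average (2),
    # person/time/sensor = complex (3).
    weights = {"internal system": 1, "external system": 2, "person/time/sensor": 3}
    return weights[role_type]

def use_case_weight(dialogue_exchanges: int) -> int:
    # Karner weights for use cases (Table 2): 1-3 exchanges are simple (5),
    # 4-7 average (10), over 7 complex (15). An actor request and the system
    # response together count as one exchange.
    if dialogue_exchanges <= 3:
        return 5
    elif dialogue_exchanges <= 7:
        return 10
    return 15

# Reproducing the table totals: 1*3 + 2*3 + 3*6 = 27 actor points,
# 5*3 + 10*10 + 15*6 = 205 scenario points.
actor_points = 1*3 + 2*3 + 3*6
scenario_points = 5*3 + 10*10 + 15*6
```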
Step 2 - Nonfunctional Requirement Table
The next table (Table 3) considers the nonfunctional requirements in order to adjust the
system use case and actor points for application complexity. The purpose of the
Nonfunctional Requirement Adjustment Factor Table is to classify and score nonfunctional
requirements or general characteristics of the application. The total nonfunctional score
along with the total developmental risk score (next step) is used as a factor to adjust the total
unadjusted actor and use case points. The nonfunctional requirements are weighted (0-5, 5
being most concern) on importance and on a complexity scale (0-5, 0 being least complex, 3
being average, and 5 being most complex). Note that in the example, I have stated
nonfunctional requirements, associated weights and complexity per my own
experience; this means that the constant in the nonfunctional requirement
adjustment factor formula needs to be determined for the average application.
Nonfunctional Requirement or General Characteristic   Weight (0-5, 5 most concern)   Complexity (0-5; 0 least, 3 average, 5 most)   Score (weight x complexity)
Distributed System          2     5     10
Performance                 1     5     5
Usability                   0.5   2     1
Backup and Recovery         1     2     2
Compatibility               1     2     2
Scalability                 1     3     3
Transition Requirements     0.5   3     1.5
Portability                 2     3     6
Maintainability             1     3     3
Reliability                 1     5     5
Disaster Recovery           1     3     3
Availability                1     5     5
Security                    1     4     4
Training/Operability        1     3     3
Total Nonfunctional Score               53.5

Table 3. Nonfunctional Requirement Adjustment Factor (may be augmented with
columns on weight and complexity rationale)
Step 3 - Developmental Risk Table
The next table (Table 4) considers the development risk to adjust the system use case and
actor points for risk associated with the team. The purpose of the Developmental Risk
Adjustment Factor Table is to classify and score perceived risk with the team in developing
the application. As stated previously, the total developmental risk score along with the total
nonfunctional score is used as a factor to adjust the total unadjusted actor and use case
points. As with the nonfunctional requirements, the developmental risk is weighted (0-5, 5
being most concern) on importance and rated on a scale (0-5, 0 being low, 3 being average,
and 5 being high).
However, in the case of developmental risk, some risks are opposite in nature and should
have a negative weight. For example, part-time members * (see Table 4) rather than full-time
members have a negative impact. Therefore, a high rating for part-time members should
have a negative score. Note that in the example, I have stated development risks,
associated weights and ratings per my own experience; this means that the constant
in the development risk adjustment factor formula needs to be determined for the
average application.
Team Risk                  Weight (0-5, 5 most concern)   Rating (0-5; 0 low, 3 average, 5 high)   Score (weight x rating)
Skill Level                1.5   3   4.5
Experience Level           1     3   3
OO Experience              1     2   2
Leadership                 1     3   3
Motivation                 1     3   3
Requirement Stability      2     2   4
Part-time Team Members *   -1    5   -5
Tool Availability          1     3   3
Total Development Risk Score         17.5

Table 4. Developmental Risk Adjustment Factor (may be augmented with columns on
weight and rating rationale)
Step 4 - The Formulas
The business analyst uses the first two formulas to calculate the nonfunctional
adjustment and the developmental risk factors. But before we delve into the formulas,
I want to encourage the reader to understand them. I recently read an article on
blindly following templates (6); I feel the same way about formulas. Don't just blindly use
them. Understand how each formula works. As mentioned before, if you change the tables
(modify entries or weights), ensure you adjust the formula constants so that the
formulas still yield factors of 1 for an average system.
Nonfunctional Adjustment Factor = a constant + (0.01 * Total Nonfunctional Score)
o Formula derived from the function point technique
o The higher the total nonfunctional score, the higher the nonfunctional adjustment factor
o Formula needs to yield a factor of 1 (no effect) for an average system
o "a constant": in the example in this article, given the nonfunctional requirements,
weights, and an average complexity rating of 3 for all categories, the Total
Nonfunctional Score is 45, which yields (0.01 * 45 = .45)
o Therefore, a constant of .55 is used to provide a factor of 1 for an average
application
o With the constant set to .55, the nonfunctional adjustment factor ranges from .55
to 1.3 (i.e., Total Nonfunctional Score 0 to 75 respectively)
o For the example in this article the nonfunctional adjustment factor is .55 +
(0.01 * 53.5) or 1.085
Developmental Risk Adjustment Factor = a constant + (-0.03 * Total Developmental
Risk Score)
o No equivalent formula in the function point technique
o The higher the total development risk score, the lower the developmental risk adjustment
factor (i.e., the multiplier is negative)
o A multiplier of 0.03 is used rather than the multiplier used in the nonfunctional
adjustment factor; the higher multiplier lowers the developmental risk adjustment
factor more with a higher developmental risk score
o Formula needs to yield a factor of 1 (no effect) for an average system
o "a constant": in the example in this article, given the developmental risks,
weights, and an average rating of 3 for all categories, the Total
Development Risk Score is 22.5, which yields (-0.03 * 22.5 = -.675)
o Therefore, a constant of 1.675 is used to provide a factor of 1 for an average
application

o With the constant set to 1.675, the developmental risk adjustment factor ranges
from 1.825 to .55 (i.e., Total Development Risk Score -5 to 37.5 respectively)

With the above adjustment factors calculated, the business analyst proceeds with the final
formulas for the use case point estimate. The business analyst first determines the total
adjustment factor by multiplying the nonfunctional and developmental risk factors. Given the
nonfunctional requirements and developmental risks I have stated, the total adjustment
factor ranges from a minimum of .3025 (.55 * .55) to a maximum of 2.3725 (1.3 * 1.825).
The BA then adds up the total unadjusted scenario and actor points, and finally multiplies
the total adjustment factor by the total unadjusted use case and actor points to produce the
total adjusted use case points (see formulas and examples below).
Total Adjustment Factor = Nonfunctional * Developmental Risk
= 1.085 * 1.15
= 1.24775
Total Unadjusted Use Case Pts = unadjusted Scenario Pts + unadjusted Actor Pts
= 205 + 27
= 232
Total Adjusted Use Case Pts = Total Adjusted Factor * Total unadjusted Use Case Pts
= 1.24775 * 232
= 289 (rounded)
The Conversion Formula
With the total adjusted use case points, the business analyst can now determine the work
effort estimate in units of time for developing the use cases (note the caveat at the beginning
of this article). The key element in this conversion is the development hours per use case
point. Unfortunately, historical average development hours per use case point are probably
not available for your firm. However, averages for function points are available, ranging
from 15 to 45 hours per function point depending on the industry (7). These averages may be
used in lieu of established use case point averages. When in doubt, practitioners recommend
using 20 hours (8).
Work Effort Estimate = Total Adjusted Use Case Points * Average Hours per Use
Case Point
= 289 * 20
= 5780 hours, or 723 days or 145 weeks or 36 months (rounded)
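Putting Steps 1 through 4 together, the worked example's arithmetic can be reproduced end to end. This is a sketch using the constants .55 and 1.675 derived in the text for this particular set of requirements and risks; the variable names are my own:

```python
# Totals from Tables 1-4 of the worked example.
total_nonfunctional_score = 53.5
total_development_risk_score = 17.5

# Adjustment factors (constants calibrated so an average system yields 1).
nonfunctional_factor = 0.55 + 0.01 * total_nonfunctional_score   # 1.085
risk_factor = 1.675 - 0.03 * total_development_risk_score        # 1.15
total_adjustment = nonfunctional_factor * risk_factor            # 1.24775

# Unadjusted points: scenario points + actor points.
total_unadjusted = 205 + 27                                      # 232
adjusted_points = round(total_adjustment * total_unadjusted)     # 289

# Conversion to work effort, using the practitioner default of 20 hours
# per use case point when no organisational history exists.
hours_per_point = 20
work_effort_hours = adjusted_points * hours_per_point            # 5780
```

Note that this estimate, like the one in the text, excludes project management, infrastructure and other supporting work, and testing.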
Summary
Figure 1 depicts a high-level view of the Use Case Points estimating process. As with any
estimating technique, the purpose of Use Case Points is to set expectations. The 2005 Agilis
Solutions and FPT Software results have provided some evidence of its accuracy. Business
Analysts can use this estimating technique during the analysis phase of a project to generate
class II or III estimates. Note the caveat that project management and testing are excluded,
ensure the correct constants are used in the adjustment factor formulas, and bear in mind
the limited availability of historical averages on hours per use case point.

Figure 1. Use Case Points Estimating Process


3.5 Function Points
Function Points
o Conducted by systems analysts during the development phase
o Can be classified as a class II or I estimate, which should be more detailed (4)
o Is based on information contained in a technical specification:
o Data Functions, using internal and external logical files
o Transaction Functions, using external inputs and external outputs, plus external inquiries
FPA Uses and Benefits in Project Planning
Project Scoping
A recommended approach for developing function point counts is to first functionally
decompose the software into its elementary functional components (base functional
components). This decomposition may be illustrated graphically on a functional hierarchy.
The hierarchy provides a pictorial table of contents or map of the functionality of the
application to be delivered. This approach has the advantage of being able to easily convey
the scope of the application to the user, not only by illustrating the number of functions
delivered by each functional area, but also a comparative size of each functional area
measured in function points.
Assessing Replacement Impact
If the software to be developed is planned to replace existing production applications, it is
useful to assess whether the business is going to be delivered more, less or the same
functionality. The replacement system's functionality can be mapped against the
functionality in the existing system. A quantitative assessment of the difference can be measured in function
points. Note, this comparison can only be done if the existing applications have already been
sized in function points.
Assessing Replacement Cost
Multiplying the size of the application to be replaced by an estimate of the dollar cost per
function point to develop enables project sponsors to develop quick estimates of
replacement costs. Industry-derived costs are available and provide a ballpark figure for the
likely cost. Industry figures are a particularly useful reference if the re-development is for a
new software or hardware platform not previously experienced by the organisation. Ideally,
organisations should establish their own cost per function point metrics for their own
particular environment, based on project history.
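As a sketch of this calculation, the following multiplies a measured size by a cost rate. Both figures below are hypothetical placeholders, not from the text or from industry data; real estimates should draw on the organisation's own project history or published industry rates:

```python
# Hypothetical inputs: the size of the application to be replaced (already
# counted in function points) and an assumed dollar cost per function point.
application_size_fp = 1200
cost_per_function_point = 750.0

# Ballpark replacement cost = size x cost rate.
replacement_cost = application_size_fp * cost_per_function_point
print(f"Ballpark replacement cost: ${replacement_cost:,.0f}")
```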
If you are considering implementing a customised off-the-shelf package solution, then this
provides a quick comparison of the estimated package implementation costs with an
in-house build. Package costs typically need to include the cost of re-engineering the
business to adapt the current business processes to those delivered by the package. These
costs are usually not a consideration for in-house developed software.
Negotiating Scope
Initial project estimates often exceed the sponsor's planned delivery date and budgeted cost.
A reduction in the scope of the functionality to be delivered is often needed so that the
project is delivered within predetermined time or budget constraints. The functional
hierarchy provides the sketch-pad for scope negotiation. It enables the project manager and
the user to work together to identify and flag (label) those functions which are: mandatory
for the first release of the application; essential but not mandatory; or optional and could be
held over to a subsequent release.
The scope of the different scenarios can then be quickly determined by measuring the
functional size of each scenario. For example, the project size can be objectively measured
to determine what the size (and cost and duration) would be if all functions are
implemented, if only mandatory functions are implemented, or if only mandatory and
essential functions are implemented. This allows the user to make more informed decisions
on which functions will be included in each release of the application, based on their relative
priority compared to what is possible given the time, cost and resource constraints of the
project.
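This labelling-and-summing approach can be sketched over a small functional hierarchy. The function names, labels, and point values below are invented for illustration:

```python
# Hypothetical functional hierarchy: (function name, priority label, function points).
functions = [
    ("Enter order", "mandatory", 12),
    ("Amend order", "essential", 8),
    ("Order history report", "optional", 6),
    ("Customer maintenance", "mandatory", 10),
]

def scenario_size(functions, included_labels):
    """Functional size of a release containing only the flagged labels."""
    return sum(fp for _, label, fp in functions if label in included_labels)

all_fp = scenario_size(functions, {"mandatory", "essential", "optional"})   # 36
mandatory_fp = scenario_size(functions, {"mandatory"})                      # 22
mand_ess_fp = scenario_size(functions, {"mandatory", "essential"})          # 30
```

Each what-if scenario is then just a different label set passed to the same function, giving the user an objective size (and hence cost and duration) comparison per release option.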
Evaluating Requirements
Functionally sizing the requirements for the application quantifies the different types of
functionality delivered by an application. The function point count assigns function points to
each of the function types: External Inputs, Outputs, Enquiries, and Internal and External
Files.
Industry figures available from the ISBSG repository for projects measured with IFPUG
function points indicate that complete applications tend to have consistent and predictable
ratios of each of the function types. The profile of functionality delivered by each of the
function types in a planned application can be compared to that of the typical profile from
implemented applications, to highlight areas where the specifications may be incomplete or
there may be anomalies.
The following pie chart illustrates the function point count profile for a planned Accounts
Receivable application compared to that from the ISBSG data. The reporting functions
(outputs) are lower than predicted by industry comparisons. Incomplete specification of
reporting functions is a common phenomenon early in a project's lifecycle and highlights the
potential for substantial growth (creep) later in the project as the user identifies all their
reporting needs.
Estimating Project Resource Requirements
Once the scope of the project is agreed, the estimates for effort, staff resources, costs and
schedules need to be developed. If productivity rates (hours per function point, $cost per
function point) from previous projects are known, then the project manager can use the
function point count to develop the appropriate estimates. If your organisation has only just
begun collecting these metrics and does not have sufficient data to establish its own
productivity rates, then the ISBSG industry data can be used in the interim.
Allocating Testing Resources
The functional hierarchy developed as part of the function point count during project
development can assist the testing manager to identify high complexity functional areas
which may need extra attention during the testing phase. Dividing the total function points
for each functional area by the total number of functions allocated to that group of functions,
enables the assessment of the relative complexity of each of the functional areas.
The effort to perform acceptance testing and the number of test cases required is related to
the number and complexity of the user functions within a functional area. Quantifying the
relative size of each functional area will enable the project manager to allocate appropriate
testing staff and check the relative number of test cases assigned.
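The relative-complexity calculation described above (total function points for an area divided by its number of functions) can be sketched as follows; the area names and figures are invented:

```python
# Hypothetical functional areas from a function point count.
areas = {
    "Order Processing": {"function_points": 180, "functions": 12},
    "Reporting": {"function_points": 60, "functions": 10},
}

def relative_complexity(area):
    """Average function points per function within the area: higher values
    flag areas that may need extra testing attention."""
    return area["function_points"] / area["functions"]

for name, area in areas.items():
    print(name, relative_complexity(area))
```

Here "Order Processing" averages 15 points per function versus 6 for "Reporting", suggesting proportionally more testing effort and test cases for the former.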
Risk Assessment
Many organisations have large legacy software applications that, due to their age, are unable
to be quickly enhanced to meet changing business needs. Over time, these applications have
been patched and expanded until they have grown to monstrous proportions. Frustrated by long
delays in implementing changes, lack of support for their technical platform and expensive
support costs, management will often decide to redevelop the entire application. For many
organisations, this strategy of rebuilding their super-large applications has proved to be a
disaster, resulting in cancellation of the project mid-development. Industry figures show that
the risk of project failure rapidly increases with project size. Projects less than 500 function
points have a risk of failure of less than 20% in comparison with projects over 5,000 function
points which have a probability of cancellation close to 40%. This level of risk is
unacceptable for most organisations.

Assessing planned projects for their delivered size in function points enables management to
make informed decisions about the risk involved in developing large, highly integrated
applications or adopting a lower risk phased approach described below.
Phasing Development
If the project manager decides on a phased approach to the project development, then related
modules may be relegated to different releases. This strategy may require temporary
interfacing functionality to be built in the first release, to be later decommissioned when the
next module is integrated. The function point count allows project managers to develop
what-if scenarios and quantify the project scope of each phase as a means of making
objective decisions. Questions to which quantitative answers can be provided are:
o how much of the interfacing functionality can be avoided by implementing all of the
related modules in release one?
o what is the best combination of potential modules to group within a release to
minimise the development of temporary interfacing functions?
If it is decided to implement the application as a phased development, then the size of each
release can be optimised to that which is known to be manageable. This can be easily done
by labelling functions with the appropriate release and performing what-if scenarios by
including and excluding functions from the scope of the count for the release.
FPA Uses and Benefits in Project Construction
Monitoring Functional Creep
Function point analysis provides project management with an objective tool by which
project size can be monitored for change over the project's lifecycle.
As new functions are identified, or functions are removed or changed during the project, the
function point count is updated and the impacted functions appropriately flagged. The
project scope can be easily tracked and reported at each of the major milestones.
If the project size exceeds the limits allowed in the initial estimates, then this provides an
early warning that new estimates may be necessary or, alternatively, highlights a need to
review the functionality to be delivered by this release.
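A minimal sketch of this milestone check follows. The baseline size, current size, and 10% tolerance are assumed values for illustration, not figures from the text:

```python
# Hypothetical inputs: size agreed at the initial estimate versus size
# after change requests at the current milestone.
baseline_fp = 500
current_fp = 560
tolerance = 0.10   # assumed allowable growth before re-estimating

# Proportional growth in functional size since the baseline count.
growth = (current_fp - baseline_fp) / baseline_fp   # 0.12

if growth > tolerance:
    print(f"Warning: scope has grown {growth:.0%}; "
          "re-estimate or review the functionality for this release")
```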
Assessing and Prioritising Rework
Function Point Analysis allows the project manager to objectively and quantitatively
measure the scope of impact of a change request, and to estimate the resulting impact on
project schedule and costs. This immediate feedback to the user on the impact of the rework
allows them to evaluate and prioritise change requests.
The cost of rework is often hidden in the overall project costs, and users and developers have
no means to quantify its impact on the overall project productivity rates. Function point
analysis enables the project manager to measure the functions that have been reworked due
to user-initiated change requests. The results provide valuable feedback to the business on
the potential cost savings of committing user resources early in the project to establish an
agreed set of requirements and minimising change during the project life-cycle.
CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

FPA Uses and Benefits after Software Implementation


Planning Support Resources and Budgets
The number of personnel required to maintain and support an application is strongly related
to the application's size. Knowing the functional size of the applications portfolio allows
management to confidently budget for the deployment of support resources. For example,
within an Australian financial organisation the average maintenance assignment scope
(number of function points supported per person) is 833 function points per person. The
assignment scope has been found to be negatively influenced by the age of the application
and the number of users, i.e. as both these parameters increase the assignment scope
decreases. Capers Jones' figures show similar assignment scopes: for ageing,
unstructured applications with high complexity, an assignment scope of 500 function points
per person is not unusual, whereas for newer, structured applications, skilled staff can
support around 1,500 to 2,000 function points.
Once implemented, applications typically need constant enhancement in order to respond to
changes in the direction of an organisation's business activities. Function points can be used
to estimate the impact of these enhancements. The baseline function point count of the existing
application will facilitate these estimates. As the application size grows with time, this
growth will provide the justification to assign more support staff.
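A rough staffing sketch using the assignment-scope figures quoted above (833 FP per person on average, around 500 for ageing, complex portfolios); the 10,000 FP portfolio is invented:

```python
# Back-of-the-envelope support staffing from an assignment scope
# (function points supported per person). 833 FP/person is the average
# quoted above; 500 FP/person suits ageing, complex portfolios.
import math

def support_staff(portfolio_fp, assignment_scope=833):
    """Whole number of people needed to support a portfolio."""
    return math.ceil(portfolio_fp / assignment_scope)

print(support_staff(10_000))       # 13 people at 833 FP per person
print(support_staff(10_000, 500))  # 20 people for an ageing portfolio
```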
Benchmarking
The function point count of delivered functionality provides input into productivity and
quality performance indicators. These can then be compared to those of other in-house
development teams and implementation environments. Benchmarking internally and
externally with industry data enables identification of best practice. External benchmarking
data is readily available in the ISBSG Repository.
Identifying Best Practice
Project managers seeking best practice in their software development and support areas
recognise the need to adopt new tools, techniques and technologies to improve the
productivity of the process and the quality of the products they produce. Baselining current
practice enables management to establish current status and set realistic targets for
improvement. Ongoing measurement of productivity and quality key performance indicators
enable management to assess the impact of their implemented changes and to identify where
further improvements can be made. Function points are the most universally accepted
method to measure the output from the software process. They are a key metric within any
process improvement program because of their ability to normalise data from various
software development environments, combined with their ability to measure output from a
business perspective as compared to a technical perspective.
Planning New Releases
The functional hierarchy of the functionality delivered by an application can also assist the
support manager in planning and grouping change requests for each new release of the
application. The hierarchy illustrates closely related functions and their relative size. If the
impact of change is focused on a group of related functions, then development effort will be
reduced, particularly in the design, testing and documentation stages of the project. This
strategy of evaluating the scope of impact of a change request also reduces project risk by
restricting projects to a manageable size and focusing change on a restricted set of related
business functions.
Software Asset Valuation
Function Point Analysis is being used increasingly by organisations to support the valuation
of their software assets. In the past, software has been considered an expense rather than a
capital asset and, as such, was not included in an organisation's asset register. The most
commonly used software valuation method is based on the deprival method. This method
values the software based on what it would cost to replace in today's technical environment,
rather than what it cost originally to build. The industry build rate (dollar cost per function
point) is determined and the total replacement value is calculated based on the current
functional size of the application.
Since FPA provides a means of reliably measuring software, some organisations have
implemented accrual budgeting and accounting in their business units. Under this directive,
all assets must be valued based on deprival value and brought to account, thus ensuring
better accountability of the organisation's financial spending. Funding via budget allocation
is based on assets listed in their financial accounts and their depreciation. In the past, the
purchase price of the software was recorded as an expense within an accounting year. These
more recent accounting practices mean that it can now be valued as an asset and depreciated.
Publicly listed organisations have found that by using this accrual accounting method of
measuring software as an asset rather than an expense they can amortise the depreciation
over five years rather than artificially decrease the current year's profit by the total cost of
the software. This strategy has a dramatic effect on their share price because their software is
listed as a capital asset and so contributes to the overall worth of the company. The total cost
of that asset has a reduced impact on the current year's reported profit.
Outsourcing Software Production and Support
The benefit of functional size measurement in outsourcing contracts is that functional size
enables suppliers to measure the cost of a unit of output from the IT process to the business,
and enables them to negotiate on agreed outcomes with their client. Specifically, these
output-based metrics based on function point analysis have enabled suppliers to:
quantitatively and objectively differentiate themselves from their competitors
quantify extent of annual improvement and achievement of contractual targets
negotiate price variations with clients based on an agreed metric
measure financial performance of the contract based on unit cost of output
at contract renewal, be in a stronger bargaining position supported by an established
set of metrics
Conversely, these output based metrics based on function point analysis have
enabled clients to:
objectively assess supplier performance based on performance outputs delivered rather
than concentrating on inputs consumed.
establish quantitative performance targets and implement supplier penalties and
bonuses based on achievement of these targets
measure the difference between internal IT costs compared to the cost of outsourcing
based on similar output
quantitatively compare competing suppliers at contract tender evaluation stage.
Many international outsourcing companies use function point based metrics as part of their
client service level agreements. Whilst this method of contract management is relatively
new, its proponents are strong supporters of the usefulness of the technique. In our
experience once an outsourcing contract has been based on function point metrics,
subsequent contract renewals expand on their use.
Metrics initiatives have a high cost and need substantial investment, which is often
overlooked at contract price negotiation. Both the supplier and the client typically incur
costs. However, given the size of the penalties and bonuses associated with these contracts, it
soon becomes obvious that this investment is necessary.

Customising Packaged Software


Background
For selected MIS applications, implementing a packaged off-the-shelf solution is the most
cost-effective and time-efficient strategy to deliver the necessary functionality to the business.
All of the benefits and uses of Function Point Analysis which applied to in-house
development projects as described in the previous section can also apply to projects which
tailor a vendor supplied package to an organisation's specific business needs.
Experience shows that Function Point Counting of packages is not always as straightforward
as sizing software developed in-house, for the following reasons:
only the physical and technical functions are visible to the counter.
the logical user view is often masked by the physical implementation of the original
logical user requirements.
in most cases the functional requirements, functional specifications, and logical design
documentation are not delivered with the software. The counter may have to rely on
the User Manual or online help to assist in interpreting the user view.
The modelling of the logical business transactions often requires the function point counter
to work with the client to identify the logical transactions. They do this by investigating the
user's functional requirements and interpreting the logical transactions from the package's
physical implementation.
In most cases, the names of the logical files accessed by the application's transactions are not
supplied by the package vendor.
The function point counter will need to develop the data model by analysing the data items
processed by the application.
However, with sufficient care, a reasonably accurate function point count of packaged
applications can usually be obtained.
Estimating Package Implementations
The estimates for a package solution need to be refined for each implementation depending
on the percentage of the functionality which is:
native to the package and implemented without change
functionality within the package which needs to be customised for this installation
functionality contained within the organisation's existing applications which needs to be
converted to adapt to the constraints of the package
to be built as new functions in addition to the package functions
to be built as new functions to enable interfacing to other in-house applications
not to be delivered in this release.
The productivity rates for each of these different development activities (to implement,
customise, enhance or build) are usually different. This complexity of assigning an
appropriate productivity factor can be compounded when the package provides utilities
which enable quick delivery based on changes to rule tables. Change requests, which can be
implemented by changing values in rule-based tables, can be implemented very efficiently
compared to a similar user change request that requires source code modification. It is
recommended that these different types of activities are identified, and effort collected
against them accordingly, so that productivity rates for the different activity types can be
determined.
The functions can be flagged for their development activity type and their relative
contributions to the functional size calculated. This will enable fine-tuning of the project
estimates.
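One possible sketch of this fine-tuning, with the activity types, sizes and productivity rates all invented for illustration:

```python
# Hypothetical fine-tuning of a package-implementation estimate: flag the
# functional size by development activity type and apply a separate
# productivity rate (FP per person-month) to each type. All figures are
# invented for illustration.
sized_functions = {
    # activity type: function points flagged with that type
    "implement native, unchanged": 300,
    "customise package functions": 120,
    "convert existing functions":   80,
    "build new functions":          60,
}

productivity = {  # FP delivered per person-month, per activity type
    "implement native, unchanged": 60.0,
    "customise package functions": 15.0,
    "convert existing functions":  12.0,
    "build new functions":          8.0,
}

effort_pm = sum(fp / productivity[activity]
                for activity, fp in sized_functions.items())
print(round(effort_pm, 1))  # 27.2 person-months in total
```

A single blended productivity rate applied to the 560 FP total would give a quite different answer, which is why separating the activity types matters.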
Another area of concern when developing estimates for package integration is the need to
determine the extent that the application module needs to interface with existing
functionality. The function point count measures the external files accessed by transactions
within this application. A high percentage of interface files (>10%) suggests a high degree of
coupling between this application and existing applications. A high degree of interfacing
tends to have a significant negative impact on productivity rates and needs to be considered
when developing estimates.
3.6 COCOMO II
COCOMO I

In this section the model of COCOMO I, also called COCOMO'81, is presented. The
underlying software lifecycle is a waterfall lifecycle. Detailed information about the ratings
as well as the cost drivers can be found in Boehm's original work. Boehm proposed three
levels of the model: basic, intermediate, detailed.
The basic COCOMO'81 model is a single-valued, static model that computes software
development effort (and cost) as a function of program size expressed in estimated thousand
delivered source instructions (KDSI).
The intermediate COCOMO'81 model computes software development effort as a function
of program size and a set of fifteen "cost drivers" that include subjective assessments of
product, hardware, personnel, and project attributes.
The advanced or detailed COCOMO'81 model incorporates all characteristics of the
intermediate version with an assessment of the cost drivers' impact on each step (analysis,
design, etc.) of the software engineering process.
The COCOMO'81 models depend on two main equations:
1. Development effort: MM = a * KDSI^b
where MM (man-month / person-month / staff-month) is one month of effort by one person.
In COCOMO'81 there are 152 hours per person-month; depending on the organization, this
value may differ from the standard by 10% to 20%.
2. Effort and development time (TDEV): TDEV = 2.5 * MM^c
The coefficients a, b and c depend on the mode of development. There are three modes of
development:

Table 1: Development modes

Development Mode   Size     Innovation   Deadline/Constraints   Dev. Environment
Organic            Small    Little       Not tight              Stable
Semi-detached      Medium   Medium       Medium                 Medium
Embedded           Large    Greater      Tight                  Complex hardware/customer interfaces

(Innovation, deadline/constraints and development environment together form the project
characteristics.)

BASIC COCOMO

The basic COCOMO applies the parameterised equations (MM = a * KDSI^b,
TDEV = 2.5 * MM^c) without much detailed consideration of project characteristics.

Basic COCOMO    a     b      c
Organic         2.4   1.05   0.38
Semi-detached   3.0   1.12   0.35
Embedded        3.6   1.20   0.32

Overview 1: COCOMO I; equations and parameters of the basic COCOMO I
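The equations and the parameter table above can be transcribed directly; the 32 KDSI project is an invented input:

```python
# Direct transcription of the basic COCOMO'81 equations and the
# parameter table above (a, b, c per development mode).
PARAMS = {  # mode: (a, b, c)
    "organic":       (2.4, 1.05, 0.38),
    "semi-detached": (3.0, 1.12, 0.35),
    "embedded":      (3.6, 1.20, 0.32),
}

def basic_cocomo(kdsi, mode="organic"):
    a, b, c = PARAMS[mode]
    mm = a * kdsi ** b    # effort: MM = a * KDSI^b (man-months)
    tdev = 2.5 * mm ** c  # schedule: TDEV = 2.5 * MM^c (months)
    return mm, tdev

mm, tdev = basic_cocomo(32, "organic")  # a 32 KDSI organic project
print(round(mm, 1), round(tdev, 1))     # about 91.3 MM and 13.9 months
```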
INTERMEDIATE COCOMO
The same basic equation for the model is used, but fifteen cost drivers are rated on a scale of
'very low' to 'very high' to calculate the specific effort multiplier; each of them returns an
adjustment factor which, multiplied together, yields the total EAF (Effort Adjustment Factor).
The adjustment factor is 1 for a cost driver that is judged as normal. In addition to the EAF,
the model parameter "a" is slightly different in Intermediate COCOMO from the basic
model. The parameter "b" remains the same in both models.
Intermediate COCOMO   a     b      c
Organic               3.2   1.05   0.38
Semi-detached         3.0   1.12   0.35
Embedded              2.8   1.20   0.32

(MM = a * KDSI^b and TDEV = 2.5 * MM^c, as before.)

Overview 2: COCOMO I; equations and parameters of the intermediate COCOMO I

The man-month correction is now:

Equation 1: intermediate COCOMO; man-month correction
MM_Korr = EAF * a * KDSI^b
Note: Only the Intermediate form has been implemented by USC in a calibrated software
tool.
ADVANCED, DETAILED COCOMO
The Advanced COCOMO model computes effort as a function of program size and a set of
cost drivers weighted according to each phase of the software lifecycle. The Advanced model
applies the Intermediate model at the component level, and then a phase-based approach is
used to consolidate the estimate [Fenton, 1997]. The four phases used in the detailed
COCOMO model are: requirements planning and product design (RPD), detailed design
(DD), code and unit test (CUT), and integration and test (IT). Each cost driver is broken down
by phases as in the example shown in Table 2.
Table 2: Analyst capability effort multiplier for detailed COCOMO

Cost Driver   Rating      RPD    DD     CUT    IT
ACAP          Very Low    1.80   1.35   1.35   1.50
              Low         0.85   0.85   0.85   1.20
              Nominal     1.00   1.00   1.00   1.00
              High        0.75   0.90   0.90   0.85
              Very High   0.55   0.75   0.75   0.70
Estimates for each module are combined into subsystems and eventually an overall project
estimate. Using the detailed cost drivers, an estimate is determined for each phase of the
lifecycle.
ADA COCOMO
COCOMO has continued to evolve and improve since its introduction. Under the
"ADA COCOMO" umbrella: the model named "ADA_87" assumes that the Ada
programming language is being used. The use of Ada made it easier to develop highly
reliable systems and to manage complex applications. The APM_88 model is based upon a
different process model called the Ada Process Model. Among the assumptions are:
Ada is used to produce compiler-checked package specifications by the Product
Design Review (PDR)
Small design teams are used
More effort is spent in requirements analysis and design
Less effort is spent in coding, integration, and testing
EXAMPLE OF INTERMEDIATE COCOMO
Required: a database system for an office automation project.
Project mode = organic (a=3.2, b=1.05, c=0.38), with 4 modules to implement:

data entry         0.6 KDSI
data update        0.6 KDSI
query              0.8 KDSI
report generator   1.0 KDSI
System SIZE        3.0 KDSI

Cost drivers are rated as follows (all others nominal, 1.0):

cost driver          level   EAF
complexity           high    1.15
storage              high    1.06
experience           low     1.13
prog capabilities    low     1.17

MM_Korr = (1.15 * 1.06 * 1.13 * 1.17) * 3.2 * 3.0^1.05
        = 1.61 * 3.2 * 3.17
        = 16.33
TDEV = 2.5 * 16.33^0.38 = 7.23 (> 7 months to complete)

How many people should be hired?
MM_Korr / TDEV = team members: 16.33 / 7.23 = 2.26 (> 2 team members)
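The worked example can be reproduced in a few lines (the printed effort differs marginally from the text's 16.33 because the text rounds the intermediate values 1.61 and 3.17):

```python
# The worked example above, in code: organic mode (a=3.2, b=1.05,
# c=0.38), total size 3.0 KDSI, and the four non-nominal cost drivers
# multiplied together into the EAF.
a, b, c = 3.2, 1.05, 0.38
kdsi = 0.6 + 0.6 + 0.8 + 1.0     # data entry, update, query, report generator
eaf = 1.15 * 1.06 * 1.13 * 1.17  # complexity, storage, experience, capabilities

mm_korr = eaf * a * kdsi ** b    # corrected effort in person-months
tdev = 2.5 * mm_korr ** c        # development time in months
print(round(mm_korr, 2))         # 16.35 (16.33 in the text, via rounding)
print(round(tdev, 2))            # 7.23 -> more than 7 months
print(round(mm_korr / tdev, 2))  # 2.26 -> more than 2 team members
```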
ADVANTAGES OF COCOMO'81
COCOMO is transparent; one can see how it works, unlike other models such as SLIM
Cost drivers are particularly helpful to the estimator to understand the impact of the
different factors that affect project costs
DRAWBACKS OF COCOMO'81
It is hard to accurately estimate KDSI early on in the project, when most effort
estimates are required
KDSI is actually not a size measure but a length measure
It is extremely vulnerable to mis-classification of the development mode
Success depends largely on tuning the model to the needs of the organization, using
historical data, which is not always available
COCOMO II
At the beginning an overall context is given, where the need for the reengineering of
COCOMO I is stressed and the focused issues for the new COCOMO version are
presented.
REENGINEERING COCOMO I NEEDS
new software processes
new phenomena: size, reuse
need for decision making based on incomplete information
FOCUSED ISSUES
The reengineering process focussed on issues such as:
1. non-sequential and rapid-development process models
2. reuse-driven approaches involving commercial-off-the-shelf (COTS) packages
3. reengineering [reused, translated code integration]
4. applications composition
5. application generation capabilities
6. object oriented approaches supported by distributed middleware
7. software process maturity effects
8. process-driven quality estimation
STRATEGY
1. Preserve the openness of the original COCOMO
2. Key the structure of COCOMO II to the future software marketplace sectors
3. Key the inputs and outputs of the COCOMO II submodels to the level of information
available
4. Enable the COCOMO II submodels to be tailored to a project's particular process strategy
(early prototyping stage [application composition model], Early Design stage,
post-architecture stage)

COCOMO II provides a family (the COCOMO suite) of increasingly detailed software cost
estimation models, each tuned to the sectors' needs and the type of information available to
support software cost estimation.
THREE PRIMARY PREMISES
PARTICULAR PROCESS DRIVERS: current and future software projects tailor their
processes to their particular process drivers (there is no single, preferred software life cycle
model anymore); process drivers include COTS/reusable software availability, degree of
understanding of requirements and architectures, and other schedule constraints such as size
and required reliability.
INFORMATION: consistency between the granularity of the software model and the
information available.
RANGE ESTIMATES: coarse-grained cost driver information in the early project stages,
and increasingly fine-grained information in later stages. Not point estimates of software cost
and effort, but rather range estimates tied to the degree of definition of the estimation inputs.
DIFFERENCES BETWEEN COCOMO I AND COCOMO II
The major differences between COCOMO I AND COCOMO II are:
COCOMO'81 requires software size in KDSI as an input, but COCOMO II is based on
KSLOC (logical code). The major difference between DSI and SLOC is that a single
Source Line of Code may be several physical lines. For example, an "if-then-else"
statement would be counted as one SLOC, but might be counted as several DSI.
COCOMO II addresses the following three phases of the spiral life cycle: applications
development, early design and post-architecture.
COCOMO'81 provides point estimates of effort and schedule, but COCOMO II
provides likely ranges of estimates that represent one standard deviation around the
most likely estimate.
The estimation equation exponent is determined by five scale factors (instead of the
three development modes).
Changes in cost drivers are:
Added cost drivers (7): DOCU, RUSE, PVOL, PLEX, LTEX, PCON, SITE
Deleted cost drivers (5): VIRT, TURN, VEXP, LEXP, MODP
Retained ratings altered to reflect more up-to-date software practices
Data points in COCOMO I: 63; in COCOMO II: 161.
COCOMO II adjusts for software reuse and reengineering where automated tools are
used for translation of existing software, but COCOMO'81 made little accommodation
for these factors.
COCOMO II accounts for requirements volatility in its estimates.
COCOMO II MODEL DEFINITION


COCOMO II provides a three-stage series of models for estimation of software projects:
APPLICATION COMPOSITION MODEL: for the earliest phases or spiral cycles
[prototyping, and any other prototyping occurring later in the life cycle].
EARLY DESIGN MODEL: for the next phases or spiral cycles. Involves exploration of
architectural alternatives or incremental development strategies. The level of detail is
consistent with the level of information available and the general level of estimation
accuracy needed at this stage.
POST-ARCHITECTURE MODEL: once the project is ready to develop and sustain a fielded
system it should have a life-cycle architecture, which provides more accurate information on
cost driver inputs, and enables more accurate cost estimates.
DEVELOPMENT EFFORT ESTIMATES
SIZING
A good size estimate is very important for a good model estimate. However, determining size
can be very challenging because projects are generally composed of new, reused (with or
without modifications) and automatically translated code. The baseline size in COCOMO II
is a count of new lines of code.
SLOC is defined such that only source lines that are DELIVERED as part of the product are
included; test drivers and other support software (as long as they aren't documented as
carefully as source code) are excluded. One SLOC is one logical source statement of code
(e.g. declarations are counted, comments are not). In COCOMO II effort is expressed in
person months (PM). A person month is the amount of time one person spends working on
the software development project for one month. Code size is expressed in thousands of
source lines of code (KSLOC). The goal is to measure the amount of intellectual work put
into program development. Counting source lines of code (SLOC) takes account of new
lines of code (reused code has to be adjusted). There are two possibilities: either to count the
source lines of code (with the Software Engineering Institute (SEI) checklists, or tool
supported), or to count the unadjusted function points and to convert them via "backfiring"
tables to source lines of code. Further information on counting the unadjusted function
points (UFP), as well as the ratios for the conversion, can be found in the literature.
THE SCALE FACTORS
The most significant input to the COCOMO II model is size. Size is treated as a special cost
driver in that it has an exponential factor, E. This exponent is an aggregation of five scale
factors. They are only used at the project level, for both the Early Design and the
Post-Architecture models. All scale factors have qualitative rating levels ('extra low' to
'extra high').
Each scale factor is the subjective weighted average of its characteristics. The ratings can be
found in the COCOMO II documentation. The five scale factors are:
1. PREC Precedentedness (how novel the project is for the organization)
2. FLEX Development Flexibility
3. RESL Architecture / Risk Resolution
4. TEAM Team Cohesion
5. PMAT Process Maturity
Scale Factors (Wi): annotation
PREC: if a product is similar to several previously developed projects, then the
precedentedness is high.
FLEX: conformance needs with requirements / external interface specifications. PREC and
FLEX describe much the same influences that the original Development Mode did, and are
largely intrinsic to a project and uncontrollable.
RESL: combines Design Thoroughness and Risk Elimination (two scale factors in Ada
COCOMO).
TEAM: accounts for the sources of project turbulence and entropy caused by difficulties in
synchronizing the project's stakeholders.
PMAT: rated at project start, in one of two ways: 1. by the results of an organized evaluation
based on the SEI CMM; 2. by the 18 Key Process Areas in the SEI CMM. RESL, TEAM and
PMAT identify management controllables by which projects can reduce diseconomies of
scale by reducing sources of project turbulence, entropy and rework.
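For orientation, this is how the five scale factors enter the COCOMO II effort equation. The equation form and the constants A = 2.94 and B = 0.91 follow the published COCOMO II.2000 calibration; the sample scale-factor weights and the 100 KSLOC size are assumed inputs for illustration:

```python
# Sketch of how the five scale factors enter the COCOMO II effort
# equation: PM = A * Size^E * product(EM_i), with E = B + 0.01 * sum(SF_j).
# A = 2.94 and B = 0.91 are the published COCOMO II.2000 calibration
# constants; the sample scale-factor weights below are assumed ratings.
A, B = 2.94, 0.91

def cocomo2_effort(ksloc, scale_factors, effort_multipliers=()):
    e = B + 0.01 * sum(scale_factors)  # exponent from the five scale factors
    pm = A * ksloc ** e                # nominal effort in person-months
    for em in effort_multipliers:      # post-architecture cost drivers
        pm *= em
    return pm

sf = [3.72, 3.04, 4.24, 3.29, 4.68]   # PREC, FLEX, RESL, TEAM, PMAT (assumed)
print(round(cocomo2_effort(100, sf), 1))  # effort for a 100 KSLOC project
```

Lower scale-factor ratings shrink the exponent toward B, which is how the model rewards precedentedness, architecture/risk resolution, team cohesion and process maturity with smaller diseconomies of scale.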
COCOMO II MODELING METHODOLOGY
Preparation includes parameter definition and how to use the parameters to produce effective
estimates. The methodology tries to minimize the risk of lost expert time and maximize the
quality of estimates. Here the question arises which parameters are significant (most
estimate-efficient), in which way and with which rating scale.
Seven modeling steps:
1. analyse existing literature
2. review software cost modelling literature
3. gain insight on improved functional forms and potentially significant parameters
4. parameter definition issues (e.g. size)
5. identification of potential new parameters: Process Maturity and Multisite Development
6. continuation of a number of parameters from COCOMO I
7. dropping of such COCOMO I parameters as turnaround time and modern programming
practices (subsumed by process maturity)
The Bayesian approach was used for COCOMO II and is (or will be) reused for COCOTS,
COQUALMO, COPSEMO and CORADMO.
THE ROSETTA STONE
The ROSETTA STONE is a system that updates COCOMO I so that it can be used with
COCOMO II models. The ROSETTA STONE permits users to translate project files from
COCOMO 81 to COCOMO II (backwards compatibility).
EMERGING EXTENSIONS
Because of the rapid changes in software engineering development, not all directions of
impact could take place in the COCOMO II model, so some emerging extensions were
required to overcome the deficiencies. All of them are complementary to the COCOMO II
model and some of them are still experimental; their calibration and counting rules aren't yet
robust enough, and further research still has to be done. In the following section they are
presented briefly. For further particulars please consult the list of references.
The discussed extensions are:
estimating the cost of software COTS integration (COCOTS)
Application Composition Model
phase distributions of schedule and effort (COPSEMO)
rapid application development effort and schedule adjustments (CORADMO)
quality in terms of delivered defect density (COQUALMO)
effects of applying software productivity strategies /improvement (COPROMO)
System Engineering (COSYSMO)
3.7 Top Down and Bottom Up Estimation
There are two approaches to estimating unit costs, top-down and bottom-up, which can be
combined to form a mixed approach. Generally, a bottom-up approach is used to estimate
the costs of service usage, whereas top-down costing is more amenable to estimating
society-level costs, which are often intangible and where data is scarce.
Top-down unit cost estimation
The top-down approach is based on a simple calculation: divide total expenditure (quantum
of funding available) for a given area or policy by total units of activity (e.g. patients served)
to derive a unit cost. The units of activity are specific to the services that are being costed,
for example the cost of a prison place, GP consultation, or social work assessment. Typically
this approach uses aggregate, budgetary data to estimate a unit cost. The advantages of the
top-down approach are:
Availability of data: the availability of budgetary data means that top-down approaches
can be applied easily;
Simplicity: the calculation required to estimate unit costs is easy to understand and direct,
providing a simple way to quantify the administrative and overhead costs associated with
a range of public services and
Low cost: the availability of aggregate cost data means that the time and costs required to
estimate a top-down unit cost are minimal.
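The calculation itself is a single division; the budget and activity figures below are invented for illustration:

```python
# The top-down calculation in one line: total expenditure for a service
# area divided by total units of activity. Figures are invented.
def top_down_unit_cost(total_expenditure, units_of_activity):
    return total_expenditure / units_of_activity

# e.g. a hypothetical 12m budget spread over 40,000 GP consultations
print(top_down_unit_cost(12_000_000, 40_000))  # 300.0 per consultation
```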

There are, however, two main limitations associated with a top-down approach. First, it does
not identify what drives costs and therefore often masks the underlying factors that
determine why unit costs vary within a single yet heterogeneous group of service users, for
example children in care. Second, top-down costing cannot be used to reliably forecast how
costs might rise or fall as a result of changes in the way that people use services (e.g. the
intensity and duration of service usage) or how costs might change due to improvements in
outcomes. Therefore, using top-down unit costs may incorrectly estimate the savings from
SIB interventions.
Bottom-up unit cost estimation
The bottom-up approach provides a greater level of granularity than the top-down method. It
involves identifying all of the resources that are used to provide a service and assigning a
value to each of those resources. These values are summed and linked to a unit of activity to
derive a total unit cost; this provides a basis for assessing which costs can be avoided
as a result of reduced demand.
The advantages of using a bottom-up approach are:
Transparency: detailed cost data allows potential errors to be investigated and their
impact tested; this facilitates the quality assurance process;
Granularity: detailed cost data can highlight variations in cost data, and enable
practitioners to explore the drivers of variation and determine whether, for example, some
service users account for a disproportionate share of costs; and
Versatility: the methodology enables a practitioner to forecast how costs may change as a
result of a reduction in service usage or demand.
Estimating efficiency savings through a bottom-up approach to unit cost estimation is a more
robust method of estimating benefits to the commissioner, particularly those related to time
savings and reductions in demand for services. However, the main disadvantage associated
with the bottom-up approach is that it is labour intensive; the cost, time and expertise
required to apply it may be prohibitive for providers.
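A minimal sketch of the bottom-up calculation, with the resource names and values invented for illustration:

```python
# Minimal bottom-up sketch: value every resource used to provide the
# service, sum the values, and link the total to a unit of activity.
# Resource names and figures are invented for illustration.
resources = {       # annual cost of each resource
    "staff time": 540_000,
    "premises":   120_000,
    "equipment":   60_000,
    "overheads":   80_000,
}
units_of_activity = 8_000  # e.g. assessments delivered per year

unit_cost = sum(resources.values()) / units_of_activity
print(unit_cost)           # 100.0 per assessment
```

Because each resource is itemised, a forecast for reduced demand can re-run the same sum with only the avoidable resources included.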
The strengths and weaknesses of top-down and bottom-up unit cost estimation are
summarised in the table below. In practice, a combined approach may be most practicable
for SIB developers due to time constraints and the availability of cost data. Regardless of
which approach is used, the costs avoided through a SIB should take into account how
individuals (or groups of individuals) use a service. Service users are heterogeneous and
their service usage will reflect this, both in intensity and duration, and this will impact on
the costs of service provision. Costs should therefore consider the specific cohort to be
targeted by a SIB intervention.
Approach to unit cost estimation             Bottom-Up           Top-Down

Granularity/Transparency                     High                Low
Credibility                                  High                Low
Ease of exploring variation in costs         High                Low
Cost of data collection                      Medium-High         Low
Data requirements                            High: requires      Low: budgetary
                                             detailed, local     data is often
                                             data that is        available
                                             often unavailable
Level of approximation (e.g. assumptions)    Low-Medium          Medium-High
Forecasting (changes in cost following
the introduction or redesign of a service)   Medium-High         Low

3.8 Work Breakdown Structure


A work breakdown structure (WBS), in project management and systems engineering, is a
deliverable-oriented decomposition of a project into smaller components.
A work breakdown structure element may be a product, data, service, or any combination
thereof. A WBS also provides the necessary framework for detailed cost estimating and
control along with providing guidance for schedule development and control.

Elements of each WBS Element:


1. The scope of the project, the "deliverables" of the project.
2. Start and end time of the scope of the project.
3. Budget for the scope of the project.
4. Name of the person responsible for the scope of the project.

Example from MIL-HDBK-881, which illustrates the first three levels of a typical aircraft
system.[11]
Defense Materiel Item categories from MIL-STD-881C are:
Aircraft Systems WBS
Electronic Systems WBS
Missile Systems WBS
Ordnance Systems WBS
Sea Systems WBS
Space Systems WBS
Surface Vehicle Systems WBS
Unmanned Air Vehicle Systems WBS
Unmanned Maritime Systems WBS
Launch Vehicle Systems WBS
Automated Information Systems WBS
The common elements identified in MIL-STD-881C, Appendix L are: Integration, assembly,
test, and checkout; Systems engineering; Program management; System test and evaluation;
Training; Data; Peculiar support equipment; Common support equipment; Operational/Site
activation; Industrial facilities; Initial spares and repair parts. The standard also includes
additional common elements unique to Space Systems, Launch Vehicle Systems and
Automated Information Systems.
In 1987, the Project Management Institute (PMI) documented the expansion of these
techniques across non-defense organizations. The Project Management Body of
Knowledge (PMBOK) Guide provides an overview of the WBS concept, while the "Practice
Standard for Work Breakdown Structures" is comparable to the DoD handbook, but is
intended for more general application.[12]
3.9 Macro and Micro Plans
The Macro or Top-Down approach can provide a quick but rough estimate:
- Done when the time and expense of a detailed estimate are an issue
- Usually occurs during the conception stage, when a full design and WBS are not available
- Requires experienced personnel to do the estimate
- Can be highly inaccurate
A Micro or Bottom-Up approach can provide a fairly accurate estimate, but is time
consuming:
- Takes into account the project design and a roll-up of WBS elements
- May require multiple personnel and time to complete
- If done properly, a bottom-up estimate can yield accurate cost and time estimates
Steps to developing the estimates (start with a Macro estimate, then refine with a
Micro estimate):
- Develop the general project definition
- Perform a macro cost and time estimate
- Develop the detailed project definition and WBS
- Roll up the WBS elements as part of a micro estimate
- Establish the project schedules
- Reconcile differences between the macro and micro estimates
Macro Estimates
- Scaling: Given a cost for a previous project, an estimate for a new project can be
scaled from the known cost. For example, NASA at times uses spacecraft weight to
estimate total cost.
- Apportion: Given a similar previous project, costs for major subunits of the new
project would be proportional to similar subunits in the previous project.
- Weighted Variables: Certain types of projects can be characterized by specific
parameters (e.g. number of inputs, number of detector channels). Historical costs and
times for single units of these parameters are weighted by the numbers required for the
new project.

- Learning Curve: If the same task is repeated a number of times, there will be a cost/time
savings relative to the first time the task is done.
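The Scaling and Learning Curve methods above reduce to simple arithmetic. Here is a hedged sketch with hypothetical figures; the cost numbers, size driver, and the 80% learning rate are assumptions for illustration, not values from the text.

```python
import math

# Scaling: estimate a new project from a known cost, scaled by a size driver
# (e.g. NASA scaling total cost from spacecraft weight). Figures hypothetical.
def scaled_estimate(known_cost, known_size, new_size):
    return known_cost * (new_size / known_size)

print(scaled_estimate(known_cost=12_000_000, known_size=300, new_size=450))
# 18000000.0

# Learning curve: with learning rate r, each doubling of the number of
# repetitions multiplies the per-unit time by r (classic unit-curve model).
def unit_time(first_unit_time, n, learning_rate=0.8):
    b = math.log(learning_rate, 2)   # learning exponent (negative)
    return first_unit_time * n ** b

print(round(unit_time(100, 4), 1))   # 64.0 hours for the 4th unit at an 80% curve
```

At an 80% curve, unit 2 takes 80 hours and unit 4 takes 64 hours, i.e. each doubling of repetitions cuts the per-unit time by the same factor.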
Micro Estimates
- Template: Uses historical data to establish detailed costs and schedules for project
subunits. A new project composed of some combination of these subunits can then be
quickly estimated.
- Ratio: Similar to the Macro ratio method, but applied to specific tasks associated with
project subunits. For example, if it takes 1 day to build and test a particular sensor unit,
then an instrument with 10 sensors would take 2 technicians 5 days to complete.
- WBS Roll-up: Times and costs associated with the lowest-level WBS work packages are
estimated, and then these are added or rolled up to yield the costs for higher-level units. This
method provides the most accurate estimates, at the expense of the time devoted to
developing the estimate.
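The WBS roll-up described above is simply a recursive sum over the WBS tree. A minimal sketch follows; the work packages and cost figures are hypothetical.

```python
# Hypothetical WBS: leaves carry estimated work-package costs; interior
# nodes roll up (sum) the costs of their children.
wbs = {
    "1 Instrument": {
        "1.1 Sensor unit": {"1.1.1 Build": 4_000, "1.1.2 Test": 1_500},
        "1.2 Electronics": {"1.2.1 Design": 6_000, "1.2.2 Assembly": 2_500},
        "1.3 Integration": 3_000,
    }
}

def roll_up(node):
    """Sum leaf costs up through the WBS hierarchy."""
    if isinstance(node, dict):
        return sum(roll_up(child) for child in node.values())
    return node  # leaf: a work-package cost estimate

print(roll_up(wbs))  # 17000
```

The same recursion applies to time estimates; rolling up durations rather than costs yields the inputs for the project schedule.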
3.10 Planning Poker
Planning poker, also called Scrum poker, is a consensus-based technique for estimating,
mostly used to estimate effort or relative size of development goals in software development.
In planning poker, members of the group make estimates by playing numbered cards face down on the table, instead of speaking them aloud. The cards are revealed, and the estimates
are then discussed. By hiding the figures in this way, the group can avoid the cognitive bias
of anchoring, where the first number spoken aloud sets a precedent for subsequent estimates.
Planning poker is a variation of the Wideband Delphi method. It is most commonly used
in agile software development, in particular the Scrum and Extreme Programming
methodologies.
Process
The reason
The reason to use Planning poker is to avoid the influence of the other participants. If a
number is spoken, it can sound like a suggestion and influence the other participants' sizing.
Planning poker should force people to think independently and propose their numbers
simultaneously. This is accomplished by requiring that all participants show their card at the
same time.
Equipment
Planning poker is based on a list of features to be delivered, several copies of a deck of cards
and optionally, an egg timer that can be used to limit time spent in discussion of each item.
The feature list, often a list of user stories, describes some software that needs to be
developed.
The cards in the deck have numbers on them. A typical deck has cards showing
the Fibonacci sequence including a zero: 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89; other decks use
similar progressions.
Planning Poker card deck


The reason for using the Fibonacci sequence is to reflect the inherent uncertainty in
estimating larger items. Several commercially available decks use the sequence: 0, ½, 1, 2, 3,
5, 8, 13, 20, 40, 100, and optionally a ? (unsure) and a coffee cup (I need a break). Some
organizations use standard playing cards of Ace, 2, 3, 5, 8 and King, where King means:
"this item is too big or too complicated to estimate." "Throwing a King" ends discussion of
the item for the current sprint.
Smartphones allow developers to use apps instead of physical card decks. When teams are
not in the same geographical location, collaborative software can be used as a replacement
for physical cards.
Procedure
At the estimation meeting, each estimator is given one deck of the cards. All decks have
identical sets of cards in them.
The meeting proceeds as follows:
A Moderator, who will not play, chairs the meeting.
The Product Manager provides a short overview. The team is given an opportunity to ask
questions and discuss to clarify assumptions and risks. A summary of the discussion is
recorded by the Project Manager.
Each individual lays a card face down representing their estimate. Units used vary - they
can be days duration, ideal days or story points. During discussion, numbers must not be
mentioned at all in relation to feature size, to avoid anchoring.
Everyone calls their cards simultaneously by turning them over.
People with high estimates and low estimates are given a soap box to offer their
justification for their estimate, and then discussion continues.
Repeat the estimation process until a consensus is reached. The developer who is likely
to own the deliverable has a large portion of the "consensus vote", although the
Moderator can negotiate the consensus.
To ensure that discussion is structured, the Moderator or the Project Manager may at any
point turn over the egg timer; when it runs out, all discussion must cease and another
round of poker is played. The structure in the conversation is re-introduced by the soap
boxes.
The cards are numbered as they are to account for the fact that the larger an estimate is, the
more uncertainty it contains. Thus, if a developer wants to play a 6, he is forced to reconsider
and either conclude that some of the perceived uncertainty does not exist and play a 5, or
accept a conservative estimate accounting for the uncertainty and play an 8.
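The forcing effect described above (there is no 6 card, so the estimator must commit to 5 or 8) can be sketched by snapping a raw gut-feel estimate to the deck. This is an illustrative sketch, not part of the method itself; the deck values follow the text.

```python
# The modified Fibonacci deck described above; a raw estimate must be
# snapped to one of these card values, forcing the estimator to commit.
DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_cards(raw):
    """Return the card(s) closest to a raw estimate; two cards on a tie."""
    best = min(abs(card - raw) for card in DECK)
    return [card for card in DECK if abs(card - raw) == best]

print(nearest_cards(6))    # [5] -- 5 is nearer, but a cautious estimator may still play 8
print(nearest_cards(6.5))  # [5, 8] -- exactly between: reconsider the uncertainty
```

Note how the gaps widen as the values grow: a raw estimate of 30 snaps to 20 or 40, a much coarser choice than 5 versus 8, which mirrors the greater uncertainty of large items.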
Planning poker benefits
Planning poker is a tool for estimating software development projects. It is a technique that
minimizes anchoring by asking each team member to play their estimate card such that it
cannot be seen by the other players. After each player has selected a card, all cards are
exposed at once.
A study by Moløkken-Østvold and Haugen[6] found that [the] set of control tasks in the same
project, estimated by individual experts, achieved similar estimation accuracy as the
planning poker tasks. However, for both planning poker and the control group, measures of
the median estimation bias indicated that both groups had unbiased estimates, as the typical
estimated task was perfectly on target.
3.11Wideband Delphi
The Wideband Delphi estimation method is a consensus-based technique for estimating
effort. It derives from the Delphi method, which was developed in the 1950s and 1960s at
the RAND Corporation as a forecasting tool. It has since been adapted across many industries
to estimate many kinds of tasks, ranging from statistical data collection results to sales and
marketing forecasts.
Wideband Delphi Process
Barry Boehm and John A. Farquhar originated the Wideband variant of the Delphi method in
the 1970s. They called it "wideband" because, compared to the existing Delphi method, the
new method involved greater interaction and more communication between those
participating. The method was popularized by Boehm's book Software Engineering
Economics (1981). Boehm's original steps from this book were:
1. Coordinator presents each expert with a specification and an estimation form.
2. Coordinator calls a group meeting in which the experts discuss estimation issues with
the coordinator and each other.
3. Experts fill out forms anonymously.
4. Coordinator prepares and distributes a summary of the estimates.
5. Coordinator calls a group meeting, specifically focusing on having the experts discuss
points where their estimates vary widely.
6. Experts fill out forms, again anonymously, and steps 4 to 6 are iterated for as many
rounds as appropriate.
A variant of Wideband Delphi was developed by Neil Potter and Mary Sakry of The Process
Group. In this process, a project manager selects a moderator and an estimation team with
three to seven members. The Delphi process consists of two meetings run by the moderator.
The first meeting is the kickoff meeting, during which the estimation team creates a work
breakdown structure (WBS) and discusses assumptions. After the meeting, each team
member creates an effort estimate for each task. The second meeting is the estimation
session, in which the team revises the estimates as a group and achieves consensus. After the
estimation session, the project manager summarizes the results and reviews them with the
team, at which point they are ready to be used as the basis for planning the project.
Choose the team. The project manager selects the estimation team and a moderator. The
team should consist of 3 to 7 project team members. The team should include
representatives from every engineering group that will be involved in the development of
the work product being estimated.
Kickoff meeting. The moderator prepares the team and leads a discussion to brainstorm
assumptions, generate a WBS and decide on the units of estimation.
Individual preparation. After the kickoff meeting, each team member individually
generates the initial estimates for each task in the WBS, documenting any changes to the
WBS and missing assumptions.
Estimation session. The moderator leads the team through a series of iterative steps to
gain consensus on the estimates. At the start of the iteration, the moderator charts the
estimates on the whiteboard so the estimators can see the range of estimates. The team
resolves issues and revises estimates without revealing specific numbers. The cycle
repeats until either no estimator wants to change his or her estimate or the estimators
agree that the range is acceptable.
Assemble tasks. The project manager works with the team to collect the estimates from
the team members at the end of the meeting and compiles the final task list, estimates and
assumptions.
Review results. The project manager reviews the final task list with the estimation team.
3.12 Documenting the Plan
To foster a successful planning phase, here are seven planning documents:
1. Project management plan -- This is used as a reference index, encompassing all planning
and project documents.
2. High-level project schedule plan -- This document captures high-level project phases
and key milestones. It is the document most project stakeholders will see or want to see.
3. Project team planning -- This document provides a "who-is-doing-what" view of the
project. This document fosters efficient project execution and effective project
communication.
4. Scope plan -- The scope plan documents the project requirements, the agreed scope and
the Requirements Traceability Matrix (RTM) summary.

5. Detailed project work plan -- This keeps track of the activities, work packages,
resources, durations, costs, milestones, project's critical path, etc. It will be an essential
document and work guideline for your core project team.
6. Quality assurance planning -- This document tracks the quality standards your project
deliverables will have to align to. These may typically include product testing approach and
tools, quality policies, quality checklists, deviations definitions, quality metrics, product
defect severity grades, acceptance criteria, cost of poor quality, etc.
7. Risk planning -- This document contains the project risks and the related mitigation
plans, as well as the project opportunities and the related exploiting plans. The importance of
this document is one of the most underestimated in project planning. Be prepared to have a
contingency plan in case something goes wrong, or to take advantage of opportunities when
they arise.
Start with this checklist when you sit down to plan for your next project-planning
phase. Depending on your project's needs, fine-tune the checklist and tailor it by adding
and removing planning assets, and determining the planning time frame, the underlying
details and rigor.
Revisit this planning exercise, learn from it and enhance it, to continuously improve your
project planning skills.
3.13 Tracking the Plan
Project tracking

It is helpful to see an example of project tracking that does not include earned value
performance management. Consider a project that has been planned in detail, including a
time-phased spend plan for all elements of work. Figure 1 shows the cumulative budget
(cost) for this project as a function of time (the blue line, labeled PV). It also shows the
cumulative actual cost of the project (red line) through week 8. To those unfamiliar with
EVM, it might appear that this project was over budget through week 4 and then under
budget from week 6 through week 8. However, what is missing from this chart is any
understanding of how much work has been accomplished during the project. If the project
was actually completed at week 8, then the project would actually be well under budget and
well ahead of schedule. If, on the other hand, the project is only 10% complete at week 8, the
project is significantly over budget and behind schedule. A method is needed to measure
technical performance objectively and quantitatively, and that is what EVM accomplishes.
Project tracking with EVM
Consider the same project, except this time the project plan includes pre-defined methods of
quantifying the accomplishment of work. At the end of each week, the project
manager identifies every detailed element of work that has been completed, and sums the PV
for each of these completed elements. Earned value may be accumulated monthly, weekly, or
as progress is made.
Earned value (EV)

Figure 2 shows the EV curve (in green) along with the PV curve from Figure 1. The chart
indicates that technical performance (i.e., progress) started more rapidly than planned, but
slowed significantly and fell behind schedule at weeks 7 and 8. This chart illustrates the
schedule performance aspect of EVM. It is complementary to critical path or critical
chain schedule management.
Figure 3 shows the same EV curve (green) with the actual cost data from Figure 1 (in red). It
can be seen that the project was actually under budget, relative to the amount of work
accomplished, since the start of the project. This is a much better conclusion than might be
derived from Figure 1.
Figure 4 shows all three curves together, which is a typical EVM line chart. The best way
to read these three-line charts is to identify the EV curve first, then compare it to PV (for
schedule performance) and AC (for cost performance). It can be seen from this illustration
that a true understanding of cost performance and schedule performance relies first on
measuring technical performance objectively. This is the foundational principle of EVM.
3.14 Earned Value Method (EVM)
Earned value management (EVM), or earned value project/performance management
(EVPM), is a project management technique for measuring project performance and
progress in an objective manner.
Earned value management is a project management technique for measuring project
performance and progress. It has the ability to combine measurements of:
- Scope
- Schedule, and
- Costs
In a single integrated system, Earned Value Management is able to provide accurate
forecasts of project performance problems, which is an important contribution for project
management.
Early EVM research showed that the areas of planning and control are significantly impacted
by its use; and similarly, using the methodology improves both scope definition as well as
the analysis of overall project performance. More recent research studies have shown that the
principles of EVM are positive predictors of project success.[1] Popularity of EVM has grown
significantly in recent years beyond government contracting, in which sector its importance
continues to rise[2] (e.g., recent new DFARS rules), in part because EVM can also surface in
and help substantiate contract disputes.[4]
Essential features of any EVM implementation include
1. a project plan that identifies work to be accomplished,
2. a valuation of planned work, called Planned Value (PV) or Budgeted Cost of Work
Scheduled (BCWS), and
3. pre-defined earning rules (also called metrics) to quantify the accomplishment of
work, called Earned Value (EV) or Budgeted Cost of Work Performed (BCWP).
EVM implementations for large or complex projects include many more features, such as
indicators and forecasts of cost performance (over budget or under budget) and schedule
performance (behind schedule or ahead of schedule). However, the most basic requirement
of an EVM system is that it quantifies progress using PV and EV.
As a short illustration of one of the applications of EVM, consider the following example.
Project A has been approved with a duration of 1 year and a budget of X. It was also
planned that after 6 months the project will have spent 50% of the approved budget. If, 6
months after the start of the project, the project manager reports that 50% of the budget has
been spent, one might initially think that the project is perfectly on plan. However, in reality
the provided information is not sufficient to reach such a conclusion: within this time the
project may have spent 50% of the budget whilst finishing only 25% of the work (which
would mean the project is not doing well), or it may have spent 50% of the budget whilst
completing 75% of the work (which would mean the project is doing better than planned).
EVM is meant to address such and similar issues.
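The illustration above can be made concrete with the standard EVM indices (cost and schedule variance, CPI and SPI). A minimal Python sketch, assuming a hypothetical budget of 100,000; the names and figures are illustrative, not from the text.

```python
# Standard EVM indicators for the 6-month review of the example project.
def evm_indicators(pv, ev, ac):
    """Return the usual EVM variances and performance indices."""
    return {
        "cost_variance": ev - ac,      # CV > 0 means under budget
        "schedule_variance": ev - pv,  # SV > 0 means ahead of schedule
        "cpi": ev / ac,                # CPI > 1 means cost-efficient
        "spi": ev / pv,                # SPI > 1 means ahead of schedule
    }

budget = 100_000
pv = 0.50 * budget   # planned to have spent 50% by month 6
ac = 0.50 * budget   # actually spent 50%

# Case 1: only 25% of the work complete -> project is doing badly.
case1 = evm_indicators(pv, ev=0.25 * budget, ac=ac)
# Case 2: 75% of the work complete -> project is doing better than planned.
case2 = evm_indicators(pv, ev=0.75 * budget, ac=ac)

print(case1["cpi"], case1["spi"])  # 0.5 0.5
print(case2["cpi"], case2["spi"])  # 1.5 1.5
```

With identical spend, the two cases produce opposite verdicts, which is exactly why spend alone cannot tell whether the project is on plan.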

UNIT IV
CONFIGURATION AND QUALITY MANAGEMENT
4.1 Identifying Artifacts to be Configured
Identifying the artifacts that make up an application
The artifacts that make up a typical Endeca application definition include the following:
The AppConfig.xml file


The AppConfig.xml file describes the application's provisioning information and is
stored in the EAC Central Server. The Deployment Template control scripts
use AppConfig.xml as the authoritative source for application definition. The Deployment
Template stores a copy of the AppConfig.xml file in the [appdir]/config/script directory.
Although you can modify an application configuration with the Workbench, we recommend
that modifications only be made in the AppConfig.xml file. That way, the application
configuration will be saved on disk, ready for sharing between environments. You can use
the Workbench for other tasks that do not involve modifying the configuration, such as
reviewing the configuration, and starting or stopping individual components.
Note: Some parts of the AppConfig.xml file include settings that are environment specific,
such as the application's name, file system paths, and host addresses in the environment.
These settings should be collected and stored in a custom file. For more information about
how to create this file, see the topic about Creating a custom file for environment-specific
settings.
The instance configuration
The instance configuration is a set of files that control the ITL process and the data loaded
into the MDEX Engine servers. The instance configuration files are controlled by the
Developer Studio, and optionally by the Workbench.
These files include configuration data such as dimension definition, search configuration, the
Forge pipeline, and Page Builder landing pages.
Page Builder templates
Page Builder templates are used to drive dynamic landing pages that can be created in Page
Builder. They are defined by xml files stored by the Workbench, and accessed through
the emgr_update command utility.
Command-line scripts
An application deployment typically includes command-line scripts that perform common
operations related to the application's functionality. By convention, these scripts are stored in
the Deployment Template's [appdir]/control directory. The Deployment Template includes
scripts such as baseline_update and set_baseline_data_ready_flag.
You can create additional scripts under the [appdir]/control directory. These scripts, together
with their input data and output directories, are a part of the application definition. For
example, a developer might create scripts to crawl web pages in preparation for a baseline
update. These scripts might take as input a seed-list file, and create an output file in a custom
directory under [appdir].
These command-line scripts, along with their input data and output directories, should be
shared among the development, staging and production environments.
Library files
Many parts of an application use library files. For example, a Forge pipeline using a Java or
Perl manipulator typically requires access to library files implementing those manipulators.
BeanShell scripts may use application- or Endeca-specific Java classes. By convention,
library files are kept under [appdir]/config/lib.
Forge state files
Forge state files reside in the [appdir]/data/state directory.
In most cases these files do not need to be included as part of an application definition.
However, when an application uses dimension values from auto-generated or external
dimensions, then Forge state files do need to be synchronized across the environments. In
this situation, the state files contain the IDs of these dimension values and ensure that the
same dimension value always gets the same ID no matter how many times Forge is run.
These dimension values may be used in a variety of ways, including dynamic business rules,
landing pages, dimension ordering, and precedence rules.
In other words, Forge state files should be identified as part of the application definition if
the application uses auto-generated dimensions or external dimensions, and values from
these dimensions are referenced anywhere in the application configuration (for example, in
dynamic business rules, Page Builder landing pages, explicit dimension ordering, or in
precedence rules).
4.2 Naming Conventions and Version Control
This process describes the deliverable types, naming conventions and version control
mechanisms to be applied to deliverables produced by the project.
The following table describes the configurable item types used within the IFS project.
Document deliverables marked "As above" follow the naming convention defined for
Document - Specifications:

Business Area: 5 characters (e.g. HR or CeDICT)
Project: 4-6 characters (e.g. Tavern, OHSW)
Deliverable Type: 2 or 3 characters:
  UT  = Unit Task Specification
  PM  = Project Management deliverable
  TPR = Test Plan & Result
  CR  = Change Request
  PF  = Process Flow
  OC  = Organisation chart
  RR  = Resource Request
  PR  = Presentations
  PS  = Project Standard
  MN  = Minutes
  AD  = Architecture Deliverable
  AF  = Acceptance Form
  DR  = Deliverable Review Form
  DI  = Diagram
  ST  = Strategy document
Description: brief description of the deliverable
Version: the character "v" followed by major and minor numbering, OR the date in
yymmdd format. The date format is used for deliverables that are to be kept at a point
in time.
File Type: as per application
Reviews: if deliverable review comments and changes are provided within the deliverable
itself, i.e. via Track Changes, the reviewer's initials are added to the file name.

Example: HR OHSW-PM QualityPlan v0.1.doc
Review example: HR OHSW-PM QualityPlan v0.1 PW.doc

Configuration Item Type        Application Used     Naming Convention            Version Control
Document - Specifications      MS Word              As defined above             Manual via version numbering
Document - Test Plans          MS Word              As above                     Manual via version numbering
Document - Test Results        MS Word              As above                     Manual via version numbering
Document - Project Management  MS Word              As above                     Manual via version numbering
Document - Resource Request    MS Word              As above                     Manual via date versioning
Document - Meeting Minutes     MS Word              As above                     Manual via date versioning
Presentations                  MS PowerPoint        As above                     Manual via version numbering
Process Diagrams               Visio                As above                     Manual via version numbering
Organisation Charts            Visio                As above                     Manual via date versioning
Logs and Registers             Excel                As above; register held      Manual via date versioning
                                                    within project Status Report
Change Requests                MS Word              As above                     Manual via date versioning
Risks                          MS Word              As above                     Manual via version numbering
Issues                         MS Word              As above                     Manual via version numbering
Defects                        TBC
Assignment Descriptions        MS Word              As above                     Manual via date versioning
Work Schedules                 MS Project or Excel  Descriptive name, e.g.       Manual via date versioning
                                                    HR OHSW-yyyymmdd

Manual Version Control


If the deliverable version is controlled by the date in the deliverable file name, then for each
new version of the deliverable the current date is used in the file name. If there are multiple
versions created for the same deliverable on the same day, then an alphabetic character is
appended to the date, starting at "a".
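The naming convention above lends itself to a small helper. This is a hypothetical sketch, not part of the IFS standard; the function name and parameters are illustrative.

```python
from datetime import date

# Hypothetical builder for the naming convention described above:
# "<BusinessArea> <Project>-<Type> <Description> <version>.<ext>",
# where version is either "v<major>.<minor>" or a yymmdd date with an
# optional suffix letter for multiple versions on the same day.
def deliverable_name(area, project, dtype, desc, ext,
                     version=None, when=None, suffix=""):
    if version is not None:                        # e.g. (0, 1) -> "v0.1"
        ver = "v%d.%d" % version
    else:                                          # date versioning: yymmdd[a, b, ...]
        ver = (when or date.today()).strftime("%y%m%d") + suffix
    return "%s %s-%s %s %s.%s" % (area, project, dtype, desc, ver, ext)

print(deliverable_name("HR", "OHSW", "PM", "QualityPlan", "doc", version=(0, 1)))
# HR OHSW-PM QualityPlan v0.1.doc
```

The same helper covers date-versioned items: passing `when=date(2013, 6, 21)` yields a `130621` version segment, and `suffix="a"` handles a second version created the same day.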
If the deliverable version is controlled by major and minor numbering the following process
is followed:
1. The Original draft of the deliverable is versioned using zero as the major number and 1 as
the minor number, i.e. v0.1
2. For each revision following internal reviews the minor number is incremented, i.e. v0.1
becomes v0.2 and then v0.3 etc.
3. Once the deliverable has completed internal review and is to be distributed to the business
representatives for review, the minor number is incremented.
4. For each revision pertaining to business representatives reviews the minor number is
incremented.
5. Following business representative review the deliverable is updated and when ready for
acceptance the major number is set to 1 and the minor number set to zero.
6. If the deliverable requires a revision due to changes identified in the acceptance process
the major number remains unchanged and the minor number is incremented.
7. The version numbering for changes to deliverables after acceptance follow the same
process except that the starting number is the deliverable accepted version number. Upon
completion of reviews the major number is incremented by one and the minor number is
set to zero, e.g. v1.6 becomes v2.0.
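The numbering rules above can be sketched as a tiny state-transition function. The event labels here are hypothetical names for the steps listed; the transitions follow the process described.

```python
# Hypothetical sketch of the manual version-numbering rules above.
def next_version(major, minor, event):
    """Advance a deliverable version 'v{major}.{minor}' for a given event."""
    if event == "revision":       # steps 2, 4, 6: any review revision
        return major, minor + 1
    if event == "accepted":       # steps 5, 7: accepted (v0.x -> v1.0, v1.6 -> v2.0)
        return major + 1, 0
    raise ValueError("unknown event: %s" % event)

v = (0, 1)                        # step 1: original draft is v0.1
v = next_version(*v, "revision")  # internal review revision -> v0.2
v = next_version(*v, "revision")  # business review revision -> v0.3
v = next_version(*v, "accepted")  # ready for acceptance -> v1.0
print("v%d.%d" % v)               # v1.0
```

Note that the same "accepted" transition covers both step 5 and the post-acceptance rule in step 7, since setting v0.x to v1.0 and v1.6 to v2.0 are both "increment major, zero minor".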
Automated Version Control

Approval Cycle
Role            Name             Signature            Date
Reviewer(s):
Approver(s):

Change History
Version (State)   Author           Change Description   Date
0.1               Peter Woolley    Original draft       21/6/2013
4.3 Configuration Control


Configuration control is an important function of the configuration management discipline.
Its purpose is to ensure that all changes to a complex system are performed with the
knowledge and consent of management. The scope creep that results from ineffective or
nonexistent configuration control is a frequent cause of project failure.
Configuration control tasks include initiating, preparing, analysing, evaluating and
authorising proposals for change to a system (often referred to as "the configuration").
Configuration control has four main processes:
1. Identification and documentation of the need for a change in a change request
2. Analysis and evaluation of a change request and production of a change proposal
3. Approval or disapproval of a change proposal
4. Verification, implementation and release of a change.

The Configuration Control Process

Why Configuration Control is Important


Configuration control is an essential component of a project's risk management strategy. For
example, uncontrolled changes to software requirements introduce the risk of cost and
schedule overruns.
Scenario - Curse of the Feature Creep
A project misses several key milestones and shows no sign of delivering anything.
WHY?
The customer regularly talks directly to software developers asking them to make
'little changes' without consulting the project manager.
The developers are keen to show off the new technology they are using. They slip in
the odd 'neat feature' that they know the customer will love.
Solution: Implement configuration control. Document all requests for change and have them
considered by a Configuration Control Board.
4.4 Quality Assurance Techniques
SQA
Software Quality Assurance (SQA) consists of a means of monitoring the software
engineering processes and methods used to ensure quality. It does this by means of audits of
the quality management system under which the software system is created. These audits are
backed by one or more standards, usually ISO 9000. It is distinct from software quality
control, which includes reviewing requirements documents and software testing. SQA
encompasses the entire software development process, which includes processes such as
software design,
coding, source code control, code reviews, change management, configuration
management, and release management. Whereas software quality control is a control of
products, software quality assurance is a control of processes.
Software quality assurance is related to the practice of quality assurance in product
manufacturing. There are, however, some notable differences between software and a
manufactured product. These differences stem from the fact that the manufactured product is
physical and can be seen, whereas the software product is not visible. Therefore its function,
benefit and costs are not as easily measured. What's more, when a manufactured product
rolls off the assembly line, it is essentially a complete, finished product, whereas software is
never finished. Software lives, grows, evolves, and metamorphoses, unlike its tangible
counterparts. Therefore, the processes and methods to manage, monitor, and measure its
ongoing quality are as fluid and sometimes elusive as are the defects that they are meant to
keep in check. SQA is also responsible for gathering and presenting software metrics.
For example, the Mean Time Between Failure (MTBF) is a common software metric (or
measure) that tracks how often the system is failing. This software metric is relevant for the
reliability software characteristic and, by extension, the availability software characteristic.
SQA may gather these metrics from various sources, but note the important pragmatic point
of associating an outcome (or effect) with a cause. In this way SQA can measure the value or
consequence of having a given standard process or procedure. Then, in the form of
continuous process improvement, feedback can be given to the various process teams
(Analysis, Design, Coding etc.) and a process improvement can be initiated.
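MTBF is simply total operating time divided by the number of observed failures. The helper below is a minimal sketch of that calculation, not a function from any standard library.

```python
def mtbf(total_operating_hours, failure_count):
    """Mean Time Between Failures: operating time divided by failures."""
    if failure_count == 0:
        # With no observed failures, MTBF cannot be estimated this way.
        raise ValueError("no failures observed; MTBF is undefined here")
    return total_operating_hours / failure_count

# A system that ran 1,000 hours and failed 4 times:
print(mtbf(1000, 4))  # 250.0 hours between failures, on average
```

A higher MTBF indicates better reliability, which in turn supports the availability characteristic mentioned above.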
Overview of methods
Software Quality Assurance takes several forms. A brief list of testing methods that should
be considered:
Methods:
Black box testing - not based on any knowledge of internal design or code. Tests are based
on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths and conditions.
Unit testing - the most micro scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a
well-designed architecture with tight code; may require developing test driver modules or
test harnesses.
Incremental integration testing - continuous testing of an application as new functionality
is added;
requires that various aspects of an application's functionality be independent enough to work
separately before all parts of the program are completed, or that test drivers be developed as
needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they
function together correctly. The parts can be code modules, individual applications, client
and server applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.
Functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing).
System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing; the macro end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use, such
as interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.
Sanity (smoke) testing - typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing effort. For example, if the
new software is crashing systems every 5 minutes, bogging down systems to a crawl, or
corrupting databases, the software may not be in a sane enough condition to warrant further
testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially near
the end of the development cycle. Automated testing tools can be especially useful for this
type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site under
a range of loads to determine at what point the system's response time degrades or fails.
Stress testing - term often used interchangeably with load and performance testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
Performance testing - term often used interchangeably with stress and load testing.
Ideally performance testing (and any other type of testing) is defined in requirements
documentation or QA or Test Plans.
Usability testing - testing for user-friendliness. Clearly this is subjective, and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
Failover testing - typically used interchangeably with recovery testing.
Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc.; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
Context-driven testing - testing driven by an understanding of the environment, culture,
and intended use of software. For example, the testing approach for life-critical medical
equipment software would be completely different than that for a low-cost computer game.
User acceptance testing - determining if software is satisfactory to an end-user or
customer.
Comparison testing - comparing software weaknesses and strengths to competing
products.
Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users or
others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes (bugs) and retesting with the original test
data/cases to determine if the bugs are detected. Proper implementation requires large
computational resources.
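The mutation testing idea from the last item can be shown in miniature: mutate an operator in the source of a function under test, re-run the original test, and check whether the injected bug is detected. The `add` function and the one-assertion "suite" below are deliberately trivial illustrations.

```python
# A tiny illustration of the mutation-testing idea: mutate an operator in the
# source of a function under test, re-run the test, and check the bug is caught.
import types

SOURCE = "def add(a, b):\n    return a + b\n"

def run_tests(module):
    """Our 'test suite': returns True if all assertions pass."""
    try:
        assert module.add(2, 3) == 5
        return True
    except AssertionError:
        return False

def load(source):
    # Execute the (possibly mutated) source into a fresh throwaway module.
    mod = types.ModuleType("candidate")
    exec(source, mod.__dict__)
    return mod

original_passes = run_tests(load(SOURCE))  # suite passes on the original code
# Introduce the mutation '+' -> '-' and see whether the suite kills the mutant:
mutant_caught = not run_tests(load(SOURCE.replace("a + b", "a - b")))

print(original_passes, mutant_caught)  # True True
```

If `mutant_caught` were False, the test data would be too weak to notice that class of defect, which is exactly what mutation testing measures.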
4.5 Peer Reviews
Peer Reviews and CMMI
CMMI does not dictate specific peer review techniques, but instead requires that:
A written policy about peer reviews is required
Resources, funding, and training must be provided
Peer reviews must be planned
The peer review procedures to be used must be documented
SEI-CMMI Checklist for Peer Reviews
Are peer reviews planned?
Are actions associated with defects that are identified during peer reviews tracked
until they are resolved?
Does the project follow a written organizational policy for performing peer reviews?
Do participants of peer reviews receive the training required to perform their roles?
Are measurements used to determine the status of peer review activities?
Are peer review activities and work products subjected to Software Quality Assurance
review and audit?
Peer review is a traditional organizational function designed to contribute to improving the
quality of work products and the appropriate use of project resources.
Quality Management in CMMI 1.3
Process Areas
Causal Analysis and Resolution (CAR)
Configuration Management (CM)
Decision Analysis and Resolution (DAR)
Integrated Project Management (IPM)
Measurement and Analysis (MA)
Organizational Performance Management (OPM)
Organizational Process Definition (OPD)
Organizational Process Focus (OPF)
Organizational Process Performance (OPP)
Organizational Training (OT)
Process and Product Quality Assurance (PPQA)
Product Integration (PI)
Project Monitoring and Control (PMC)
Project Planning (PP)
Quantitative Project Management (QPM)
Requirements Development (RD)
Requirements Management (REQM)
Risk Management (RSKM)
Supplier Agreement Management (SAM)
Technical Solution (TS)
Validation (VAL)
Verification (VER)

[Figures: PPQA - Product and Process Quality Assurance; PPQA for Agile development;
CM - Configuration Management; MA - Measurement and Analysis; VER - Verification;
VAL - Validation]

Peer Review
Reviews performed by peers in the development team
Can range from Fagan's inspections to simple buddy checks
Peer Review Items
Participants / Roles
Schedule
4.6 Fagan Inspection
Researchers and Influencers
Fagan
Johnson
Ackermann
Gilb and Graham
Weinberg
Wiegers
Inspection, Walkthrough or Review?
An inspection is a visual examination of a software product to detect and identify software
anomalies, including errors and deviations from standards and specifications
A walkthrough is a static analysis technique in which a designer or programmer leads
members of the development team and other interested parties through a software product,
and the participants ask questions and make comments about possible errors, violation of
development standards, and other problems
A review is a process or meeting during which a software product is presented to project
personnel, managers, users, customers, user representatives, or other interested parties for
comment or approval
Families of Review Methods
Method Family       Typical Goals                      Typical Attributes
Walkthroughs        Minimal overhead;                  Little/no preparation;
                    developer training;                informal process;
                    quick turnaround                   no measurement; not FTR!
Technical Reviews   Requirements elicitation;          Formal process;
                    ambiguity resolution;              author presentation;
                    training                           wide range of discussion
Inspections         Detect and remove all defects      Formal process; checklists;
                    efficiently and effectively        measurements; verify phase

Informal vs. Formal


Informal
Spontaneous
Ad-hoc
No artifacts produced
Formal
Carefully planned and executed
Reports are produced
In reality, there is also a middle ground between informal and formal techniques
Cost-Benefit Analysis
Fagan reported that IBM inspections found 90% of all defects for a 9% reduction in
average project cost
Johnson estimates that rework accounts for 44% of development cost
Finding defects, finding defects early and reducing rework can impact the overall cost
of a project
Cost of Defects
What is the impact of the annual cost of software defects in the US?
$59 billion
Estimated that $22 billion could be avoided by introducing a best-practice defect detection
infrastructure
Gilb project with jet manufacturer
Initial analysis estimated that 41,000 hours of effort would be lost through faulty
requirements
Manufacturer concurred because:
10 people on the project using 2,000 hours/year
Project is already one year late (20,000 hours)
Project is estimated to take one more year (another 20,000 hours)
Software Inspections
Why are software inspections not widely used?
Lack of time
Not seen as a priority
Not seen as value added (measured by LOC)
Lack of understanding of formalized techniques
Improper tools used to collect data
Lack of training of participants
Pits programmer against reviewers
Twelve Reasons Conventional Reviews are Ineffective
1. The reviewers are swamped with information.
2. Most reviewers are not familiar with the product design goals.
3. There are no clear individual responsibilities.
4. Reviewers can avoid potential embarrassment by saying nothing.
5. The review is a large meeting; detailed discussions are difficult.
6. Presence of managers silences criticism.
7. Presence of uninformed reviewers may turn the review into a tutorial.
8. Specialists are asked general questions.
9. Generalists are expected to know specifics.
10. The review procedure reviews code without respect to structure.
11. Unstated assumptions are not questioned.
12. Inadequate time is allowed.
Fagan's Contributions
Design and code inspections to reduce errors in program development (1976)
A systematic and efficient approach to improving programming quality
Continuous improvement: reduce initial errors and follow-up with additional
improvements
Beginnings of formalized software inspections

Fagan's Six Major Steps

1. Planning: form team, assign roles
2. Overview: inform team about product (optional)
3. Preparation: independent review of materials
4. Examination: inspection meeting
5. Rework: author verifies defects and corrects them
6. Follow-up: moderator checks and verifies corrections

Fagan's Team Roles

Fagan recommends that a good-sized team consists of four people:
Moderator: the key person; manages the team and offers leadership
Readers, reviewers and authors
Designer: programmer responsible for producing the program design
Coder/Implementer: translates the design to code
Tester: writes and executes test cases

Common Inspection Processes

4.7 Unit, Integration, System, and Acceptance Testing


Testing methods
Static vs. dynamic testing
There are many approaches available in software testing. Reviews, walkthroughs,
or inspections are referred to as static testing, whereas actually executing programmed code
with a given set of test cases is referred to as dynamic testing. Static testing is often implicit,
taking the form of proofreading, or occurring when programming tools/text editors check
source code structure or when compilers (pre-compilers) check syntax and data flow as static
program analysis. Dynamic testing takes place when the program itself is run. Dynamic
testing may begin before the program is 100% complete in order to test particular sections of
code, applied to discrete functions or modules. Typical techniques for this are either using
stubs/drivers or execution from a debugger environment.
Static testing involves verification, whereas dynamic testing involves validation. Together
they help improve software quality. Among the techniques for static analysis, mutation
testing can be used to ensure the test-cases will detect errors which are introduced by
mutating the source code.
The box approach
Software testing methods are traditionally divided into white- and black-box testing. These
two approaches are used to describe the point of view that a test engineer takes when
designing test cases.
White-Box testing
Main article: White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box
testing and structural testing) tests internal structures or workings of a program, as opposed
to the functionality exposed to the end-user. In white-box testing an internal perspective of
the system, as well as programming skills, are used to design test cases. The tester chooses
inputs to exercise paths through the code and determine the appropriate outputs. This is
analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the
software testing process, it is usually done at the unit level. It can test paths within a unit,
paths between units during integration, and between subsystems during a system-level test.
Though this method of test design can uncover many errors or problems, it might not detect
unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
API (application programming interface) testing - testing of the application using public
and private APIs
Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test
designer can create tests to cause all statements in the program to be executed at least
once)
Fault injection methods - intentionally introducing faults to gauge the efficacy of testing
strategies
Mutation testing methods
Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any
method, including black-box testing. This allows the software team to examine parts of a
system that are rarely tested and ensures that the most important function points have been
tested.[23] Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
100% statement coverage ensures that all code paths or branches (in terms of control
flow) are executed at least once. This is helpful in ensuring correct functionality, but not
sufficient since the same code may process different inputs correctly or incorrectly.
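The last point can be made concrete with a short example. The function below is invented for illustration: a single test executes every statement, yet a defect remains for other inputs.

```python
# 100% statement coverage is not sufficient: both statements below are executed
# by a single test, yet the function is still wrong for another input.
def safe_divide(a, b):
    # BUG: should guard against b == 0, but does not.
    result = a / b
    return result

# This one test executes every statement (100% statement coverage)...
assert safe_divide(10, 2) == 5.0

# ...but the same, fully covered code still fails on a different input:
try:
    safe_divide(1, 0)
    print("no error")
except ZeroDivisionError:
    print("ZeroDivisionError despite full statement coverage")
```

This is why coverage figures measure how much of the code the test suite touched, not whether the code is correct.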
Black-box testing
Main article: Black-box testing

Black-box testing treats the software as a "black box", examining functionality without
any knowledge of internal implementation. The testers are only aware of what the
software is supposed to do, not how it does it. Black-box testing methods
include: equivalence partitioning, boundary value analysis, all-pairs testing, state
transition tables, decision table testing, fuzz testing, model-based testing, use
case testing, exploratory testing and specification-based testing.
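Two of the methods just listed, equivalence partitioning and boundary value analysis, can be sketched against a hypothetical rule: a valid exam mark is an integer from 0 to 100 inclusive. The validator below is invented for this illustration.

```python
# Equivalence partitioning and boundary value analysis for a hypothetical
# rule: a valid exam mark is an integer from 0 to 100 inclusive.
def is_valid_mark(mark):
    return isinstance(mark, int) and 0 <= mark <= 100

# Equivalence partitioning: one representative test per partition.
assert is_valid_mark(-5) is False   # partition: below the valid range
assert is_valid_mark(50) is True    # partition: inside the valid range
assert is_valid_mark(150) is False  # partition: above the valid range

# Boundary value analysis: test at and around the edges, where defects cluster.
for mark, expected in [(-1, False), (0, True), (1, True),
                       (99, True), (100, True), (101, False)]:
    assert is_valid_mark(mark) is expected

print("all partition and boundary cases pass")
```

Note that the tests use only the stated requirement, never the function's internals, which is what makes them black-box tests.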
Specification-based testing aims to test the functionality of software according to the


applicable requirements.[25] This level of testing usually requires thorough test cases to be
provided to the tester, who then can simply verify that for a given input, the output value
(or behavior), either "is" or "is not" the same as the expected value specified in the test
case. Test cases are built around specifications and requirements, i.e., what the
application is supposed to do. It uses external descriptions of the software, including
specifications, requirements, and designs to derive test cases. These tests can
be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is
insufficient to guard against complex or high-risk situations.
One advantage of the black box technique is that no programming knowledge is required.
Whatever biases the programmers may have had, the tester likely has a different set and
may emphasize different areas of functionality. On the other hand, black-box testing has
been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not
examine the source code, there are situations when a tester writes many test cases to
check something that could have been tested by only one test case, or leaves some parts
of the program untested.
This method of test can be applied to all levels of software
testing: unit, integration, system and acceptance. It typically comprises most if not all
testing at higher levels, but can also dominate unit testing as well.
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was
happening at the point of software failure by presenting the data in such a way that the
developer can easily find the information he or she requires, and the information is
expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test
failure), rather than just describing it, greatly increases clarity and understanding. Visual
testing therefore requires the recording of the entire test process, capturing everything
that occurs on the test system in video format. Output videos are supplemented by
real-time tester input via picture-in-a-picture webcam and audio commentary from
microphones.
Visual testing provides a number of advantages. The quality of communication is
increased dramatically because testers can show the problem (and the events leading up
to it) to the developer as opposed to just describing it and the need to replicate test
failures will cease to exist in many cases. The developer will have all the evidence he or
she requires of a test failure and can instead focus on the cause of the fault and how it
should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in
their development of software, since agile methods require greater communication
between testers and developers and collaboration within small teams.
Ad hoc testing and exploratory testing are important methodologies for checking
software integrity, because they require less preparation time to implement, while the
important bugs can be found quickly. In ad hoc testing, where testing takes place in an
improvised, impromptu way, the ability of a test tool to visually record everything that
occurs on a system becomes very important.
Visual testing is gathering recognition in customer acceptance and usability testing,
because the test can be used by many individuals involved in the development
process. For the customer, it becomes easy to provide detailed bug reports and feedback,
and for program users, visual testing can record user actions on screen, as well as their
voice and image, to provide a complete picture at the time of software failure for the
developer.
Grey-box testing
Main article: Gray box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of
internal data structures and algorithms for purposes of designing tests, while executing
those tests at the user, or black-box level. The tester is not required to have full access to
the software's source code. Manipulating input data and formatting output do not qualify
as grey-box, because the input and output are clearly outside of the "black box" that we
are calling the system under test. This distinction is particularly important when
conducting integration testing between two modules of code written by two different
developers, where only the interfaces are exposed for test.
However, tests that require modifying a back-end data repository such as a database or a
log file do qualify as grey-box, as the user would not normally be able to change the
data repository in normal production operations. Grey-box testing may also
include reverse engineering to determine, for instance, boundary values or error
messages.
By knowing the underlying concepts of how the software works, the tester makes
better-informed testing choices while testing the software from outside. Typically, a grey-box
tester will be permitted to set up an isolated testing environment with activities such as
seeding a database. The tester can observe the state of the product being tested after
performing certain actions such as executing SQL statements against the database and
then executing queries to ensure that the expected changes have been reflected. Grey-box
testing implements intelligent test scenarios, based on limited information. This will
particularly apply to data type handling, exception handling, and so on.
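The grey-box pattern described above, seeding a database, exercising the operation under test, then querying the database to confirm the expected change, can be sketched with an in-memory SQLite database. The schema and the `deactivate_user` function are illustrative assumptions, not code from a real system.

```python
# A grey-box test: seed a database, exercise the operation under test,
# then query the database to confirm the expected state change.
import sqlite3

def deactivate_user(conn, user_id):
    """Operation under test: flags a user account as inactive."""
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    conn.commit()

# Set up an isolated testing environment and seed it (the grey-box privilege).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 1)")
conn.commit()

deactivate_user(conn, 1)  # exercise the behaviour as a user would...

# ...but verify it with knowledge of the underlying table structure:
active = conn.execute("SELECT active FROM users WHERE id = 1").fetchone()[0]
assert active == 0
print("user 1 deactivated")
```

The test drives the system from outside, yet uses internal knowledge (the table layout) to verify the outcome, which is exactly the grey-box middle ground.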

Testing levels
There are generally four recognized levels of tests: unit testing, integration testing,
system testing, and acceptance testing. Tests are frequently grouped by where they are
added in the software development process, or by the level of specificity of the test. The
main levels during the development process as defined by the SWEBOK guide are unit-,
integration-, and system testing that are distinguished by the test target without implying
a specific process model.[32] Other test levels are classified by the testing objective.
Unit testing
Unit testing, also known as component testing, refers to tests that verify the functionality
of a specific section of code, usually at the function level. In an object-oriented
environment, this is usually at the class level, and the minimal unit tests include the
constructors and destructors.
These types of tests are usually written by developers as they work on code (white-box
style), to ensure that the specific function is working as expected. One function might
have multiple tests, to catch corner cases or other branches in the code. Unit testing alone
cannot verify the functionality of a piece of software, but rather is used to ensure that the
building blocks of the software work independently from each other.
Unit testing is a software development process that involves synchronized application of
a broad spectrum of defect prevention and detection strategies in order to reduce software
development risks, time, and costs. It is performed by the software developer or engineer
during the construction phase of the software development lifecycle. Rather than replacing
traditional QA focuses, it augments them. Unit testing aims to eliminate construction errors
before code is promoted to QA; this strategy is intended to increase the quality of the
resulting software as well as the efficiency of the overall development and QA process.
Depending on the organization's expectations for software development, unit testing
might include static code analysis, data flow analysis, metrics analysis, peer code
reviews, code coverage analysis and other software verification practices.
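A minimal developer-written unit test in the white-box style described above might look like the following. The `Stack` class is a stand-in example invented for illustration, not code from this course.

```python
import unittest

class Stack:
    """A tiny class under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()          # exercises the constructor
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)   # normal path

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()              # corner case / error branch

# Run the two tests programmatically and report the outcome.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(StackTest))
print("all unit tests passed:", result.wasSuccessful())
```

Note that one unit has more than one test, covering both the normal path and a corner case, as the paragraph above recommends.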
Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces
between components against a software design. Software components may be integrated
in an iterative way or all together ("big bang"). Normally the former is considered a better
practice since it allows interface issues to be located more quickly and fixed. Integration
testing works to expose defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components
corresponding to elements of the architectural design are integrated and tested until the
software works as a system.
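The iterative (non-"big bang") approach can be sketched by first integrating a module against a stub of its dependency, then swapping in the real component. The module and class names below are invented for illustration.

```python
# Iterative integration: the report module is first integrated against a stub
# of the data layer, so interface defects surface early.

class DataLayerStub:
    """Stands in for the real data layer during early integration."""
    def fetch_totals(self):
        return {"widgets": 3, "gadgets": 7}

class RealDataLayer:
    def __init__(self, rows):
        self._rows = rows
    def fetch_totals(self):
        totals = {}
        for name in self._rows:
            totals[name] = totals.get(name, 0) + 1
        return totals

def build_report(data_layer):
    """Module under integration: depends only on the fetch_totals() interface."""
    totals = data_layer.fetch_totals()
    return ", ".join(f"{k}={v}" for k, v in sorted(totals.items()))

# Step 1: integrate against the stub and confirm the interface contract works.
assert build_report(DataLayerStub()) == "gadgets=7, widgets=3"

# Step 2: swap in the real component; the same interface contract still holds.
print(build_report(RealDataLayer(["widgets", "widgets", "gadgets"])))
# gadgets=1, widgets=2
```

Because each component is integrated one at a time, a mismatch in `fetch_totals()` would be located immediately rather than buried in a "big bang" failure.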

Component interface testing


The practice of component interface testing can be used to check the handling of data
passed between various units, or subsystem components, beyond full integration testing
between those units.[35][36] The data being passed can be considered as "message packets"
and the range or data types can be checked, for data generated from one unit, and tested
for validity before being passed into another unit. One option for interface testing is to
keep a separate log file of data items being passed, often with a timestamp logged to
allow analysis of thousands of cases of data passed between units for days or weeks.
Tests can include checking the handling of some extreme data values while other
interface variables are passed as normal values.[35] Unusual data values in an interface can
help explain unexpected performance in the next unit. Component interface testing is a
variation of black-box testing, with the focus on the data values beyond just the related
actions of a subsystem component.
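The "message packet" checking described above can be sketched as follows: data crossing a unit boundary is validated and written to a timestamped log for later analysis. The units, the packet format, and the validity rule are illustrative assumptions.

```python
# Component interface testing: data passed between units is checked for
# validity and written to a timestamped log for later analysis.
import datetime

interface_log = []  # in a real project this would be a separate log file

def check_and_log(packet):
    """Validate one 'message packet' crossing the unit boundary, then log it."""
    valid = (isinstance(packet.get("temp_c"), (int, float))
             and -50 <= packet["temp_c"] <= 150)
    interface_log.append((datetime.datetime.now().isoformat(), packet, valid))
    return valid

def sensor_unit():
    return {"temp_c": 21.5}

def display_unit(packet):
    return f"{packet['temp_c']} degrees C"

packet = sensor_unit()
assert check_and_log(packet)       # a normal value passes the interface check
print(display_unit(packet))        # 21.5 degrees C

# An extreme value at the interface is flagged rather than silently passed on,
# which can explain unexpected behaviour in the receiving unit:
assert not check_and_log({"temp_c": 999})
```

Keeping every logged packet with its timestamp is what allows the "thousands of cases over days or weeks" analysis the text mentions.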
System testing
System testing, or end-to-end testing, tests a completely integrated system to verify that it
meets its requirements. For example, a system test might involve testing a logon interface,
then creating and editing an entry, plus sending or printing results, followed by summary
processing or deletion (or archiving) of entries, then logoff.
In addition, the software testing should ensure that the program, as well as working as
expected, does not also destroy or partially corrupt its operating environment or cause
other processes within that environment to become inoperative (this includes not
corrupting shared memory, not consuming or locking up excessive resources and leaving
any parallel processes unharmed by its presence).
Acceptance testing
Finally, the system is delivered to the user for acceptance testing.
Testing Types
Installation testing
An installation test assures that the system is installed correctly and works on the actual
customer's hardware.
Compatibility testing
A common cause of software failure (real or perceived) is a lack of its compatibility with
other application software, operating systems (or operating system versions, old or new),
or target environments that differ greatly from the original (such as
a terminal or GUI application intended to be run on the desktop now being required to
become a web application, which must render in a web browser). For example, in the
case of a lack of backward compatibility, this can occur because the programmers
develop and test software only on the latest version of the target environment, which not
all users may be running. This results in the unintended consequence that the latest work
may not function on earlier versions of the target environment, or on older hardware that
earlier versions of the target environment was capable of using. Sometimes such issues
can be fixed by proactively abstracting operating system functionality into a separate
program module or library.
Smoke and sanity testing
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to
determine whether there are any basic problems that will prevent it from working at all.
Such tests can be used as a build verification test.
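A build verification smoke test can be sketched in a few lines. The sketch below is illustrative only: the App class and the checks it runs are hypothetical stand-ins for a real application, not part of any particular system.

```python
# Minimal smoke-test sketch: a few fast checks that the build works at all.
# The application under test is a hypothetical stand-in defined inline.

class App:
    """Stand-in for the application under test."""
    def start(self):
        self.running = True
        return True

    def home_page(self):
        return "<html>Welcome</html>" if self.running else ""

def smoke_test(app):
    """Return True only if every basic 'does it run at all?' check passes."""
    checks = [
        app.start(),                   # the program launches
        "Welcome" in app.home_page(),  # the main screen renders
    ]
    return all(checks)

if __name__ == "__main__":
    print(smoke_test(App()))  # True means the build passes verification
```

If any check fails, deeper testing of this build is pointless, which is exactly the decision a smoke test is meant to support.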
Regression testing
Regression testing focuses on finding defects after a major code change has occurred.
Specifically, it seeks to uncover software regressions, such as degraded or lost features,
including old bugs that have come back. Such regressions occur whenever software
functionality that was previously working correctly stops working as intended.
Typically, regressions occur as an unintended consequence of program changes, when the
newly developed part of the software collides with the previously existing code. Common
methods of regression testing include re-running previous sets of test-cases and checking
whether previously fixed faults have re-emerged. The depth of testing depends on the
phase in the release process and the risk of the added features. They can either be
complete, for changes added late in the release or deemed to be risky, or be very shallow,
consisting of positive tests on each feature, if the changes are early in the release or
deemed to be of low risk. Regression testing is typically the largest test effort in
commercial software development, due to checking numerous details in prior software
features, and even new software can be developed while using some old test-cases to test
parts of the new design to ensure prior functionality is still supported.
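The practice of re-running previous sets of test cases can be sketched as follows. The divide function and its historical defect are hypothetical examples invented for illustration, not drawn from any project.

```python
# Regression-test sketch: re-run a stored suite of (input, expected) cases
# against the current version of a function after a code change.

def divide(a, b):
    if b == 0:            # fix for an old defect: guard against ZeroDivisionError
        return None
    return a / b

# Previously passing test cases, kept and re-run after every change.
regression_suite = [
    ((10, 2), 5.0),
    ((9, 3), 3.0),
    ((1, 0), None),       # the old bug: must not crash on division by zero
]

def run_regression(fn, suite):
    """Return the cases whose current result differs from the expected one."""
    failures = [(args, want, fn(*args))
                for args, want in suite if fn(*args) != want]
    return failures       # an empty list means no regression was detected

print(run_regression(divide, regression_suite))  # []
```

Keeping the case for the previously fixed fault in the suite is what lets this catch the bug's re-emergence.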
Acceptance testing
Acceptance testing can mean one of two things:
1. A smoke test is used as an acceptance test prior to introducing a new build to the
main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on
their own hardware, is known as user acceptance testing (UAT). Acceptance
testing may be performed as part of the hand-off process between any two phases
of development.
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an
independent test team at the developers' site. Alpha testing is often employed for
off-the-shelf software as a form of internal acceptance testing, before the software goes
to beta testing.

Beta testing
Beta testing comes after alpha testing and can be considered a form of external user
acceptance testing. Versions of the software, known as beta versions, are released to a
limited audience outside of the programming team. The software is released to groups of
people so that further testing can ensure the product has few faults or bugs. Sometimes,
beta versions are made available to the open public to increase the feedback field to a
maximal number of future users.
Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of the code.
These are usually found in the code requirements documentation, although some
development methodologies work from use cases or user stories. Functional tests tend to
answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a
specific function or user action, such as scalability or other performance, behavior under
certain constraints, or security. Testing will determine the breaking point, the point at
which extremes of scalability or performance leads to unstable execution. Non-functional
requirements tend to be those that reflect the quality of the product, particularly in the
context of the suitability perspective of its users.
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that
the software functions properly even when it receives invalid or unexpected inputs,
thereby establishing the robustness of input validation and error-management
routines. Software fault injection, in the form of fuzzing, is an example of failure testing.
Various commercial non-functional testing tools are linked from the software fault
injection page; there are also numerous open-source and free software tools available that
perform destructive testing.
Software performance testing
Performance testing is generally executed to determine how a system or sub-system
performs in terms of responsiveness and stability under a particular workload. It can also
serve to investigate, measure, validate or verify other quality attributes of the system,
such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate
under a specific load, whether that be large quantities of data or a large number of users.
This is generally referred to as software scalability. The related load testing activity,
when performed as a non-functional activity, is often referred to as endurance testing.
Volume testing is a way to test software functions even when certain components
(for example a file or database) increase radically in size. Stress testing is a way to test
reliability under unexpected or rare workloads. Stability testing (often referred to as load

or endurance testing) checks whether the software can function continuously for an
acceptable period or longer.
There is little agreement on what the specific goals of performance testing are. The terms
load testing, performance testing, scalability testing, and volume testing, are often used
interchangeably.
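As a rough illustration of load measurement, one can time the same operation at increasing workloads and watch how response time grows. The handle_request function below is a hypothetical stand-in for real work, and the acceptable thresholds would be project-specific.

```python
# Load-test sketch: measure response time of an operation as the workload
# (number of records) grows, to observe how performance degrades.

import time

def handle_request(records):
    """Hypothetical operation under test; sorting stands in for real work."""
    return sorted(records)

def measure(load):
    """Return wall-clock seconds taken to handle `load` records."""
    records = list(range(load, 0, -1))
    start = time.perf_counter()
    handle_request(records)
    return time.perf_counter() - start

results = {load: measure(load) for load in (1_000, 10_000, 100_000)}
for load, seconds in results.items():
    print(f"{load:>7} records: {seconds:.4f}s")
```

In a real load test the measurements would be compared against agreed response-time targets, and the run repeated over a long period for endurance testing.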
Real-time software systems have strict timing constraints. To test if timing constraints are
met, real-time testing is used.
Usability testing
Usability testing is to check if the user interface is easy to use and understand. It is
concerned mainly with the use of the application.
Accessibility testing
Accessibility testing may include compliance with standards such as:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
Security testing is essential for software that processes confidential data to
prevent system intrusion by hackers.
Internationalization and localization
The general ability of software to be internationalized and localized can be automatically
tested without actual translation, by using pseudolocalization. It will verify that the
application still works, even after it has been translated into a new language or adapted
for a new culture (such as different currencies or time zones).
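Pseudolocalization itself is easy to sketch. The transformation rules below (accenting vowels, padding by roughly a third, and bracketing the string) are one common convention chosen for illustration, not a standard.

```python
# Pseudolocalization sketch: transform UI strings into accented, padded
# look-alikes so layout and encoding problems surface without real translation.

ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudolocalize(s: str) -> str:
    # Accent vowels to reveal hard-coded/untranslated text, pad ~30% to
    # simulate longer target languages, and bracket to spot truncation.
    padded = s.translate(ACCENTED) + "·" * max(1, len(s) // 3)
    return f"[{padded}]"

print(pseudolocalize("Save file"))  # [Sàvé fîlé···]
```

Any string that appears in the pseudolocalized UI without accents and brackets is likely hard-coded, and any clipped bracket reveals a layout that cannot absorb longer translations.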
Actual translation to human languages must be tested, too. Possible localization failures
include:
Software is often localized by translating a list of strings out of context, and the
translator may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent if the project is translated by several
people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical
in the target language.
Untranslated messages in the original language may be left hard coded in the source
code.
Some messages may be created automatically at run time and the resulting string may
be ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut which has no function on the source
language's keyboard layout, but is used for typing characters in the layout of the target
language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes which are appropriate in the source language may be
inappropriate in the target language; for example, CJK characters may become
unreadable if the font is too small.
A string in the target language may be longer than the software can handle. This may
make the string partly invisible to the user or cause the software to crash or
malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently named system configuration
files and environment variables and different formats for date and currency.
Development testing
Development Testing is a software development process that involves synchronized
application of a broad spectrum of defect prevention and detection strategies in order to
reduce software development risks, time, and costs. It is performed by the software
developer or engineer during the construction phase of the software development
lifecycle. Rather than replacing traditional QA focuses, it augments them. Development
Testing aims to eliminate construction errors before code is promoted to QA; this strategy
is intended to increase the quality of the resulting software as well as the efficiency of the
overall development and QA process.
Depending on the organization's expectations for software development, Development
Testing might include static code analysis, data flow analysis, metrics analysis, peer code
reviews, unit testing, code coverage analysis, traceability, and other software verification
practices.
A/B testing
Concurrent testing
Conformance testing or type testing
In software testing, conformance testing verifies that a product performs according to its
specified standards. Compilers, for instance, are extensively tested to determine whether
they meet the recognized standard for that language.
Testing process
Traditional waterfall development model
A common practice of software testing is that testing is performed by an independent
group of testers after the functionality is developed, before it is shipped to the
customer.[41] This practice often results in the testing phase being used as a project buffer
to compensate for project delays, thereby compromising the time devoted to testing.[42]

Another practice is to start software testing at the same moment the project starts and it is
a continuous process until the project finishes.[43]
Further information: Capability Maturity Model Integration and Waterfall model
Agile or Extreme development model
In contrast, some emerging software disciplines such as extreme programming and
the agile software development movement, adhere to a "test-driven software
development" model. In this process, unit tests are written first, by the software
engineers (often with pair programming in the extreme programming methodology). Of
course these tests fail initially, as they are expected to. Then, as code is written, it passes
incrementally larger portions of the test suites. The test suites are continuously updated as
new failure conditions and corner cases are discovered, and they are integrated with any
regression tests that are developed. Unit tests are maintained along with the rest of the
software source code and generally integrated into the build process (with inherently
interactive tests being relegated to a partially manual build acceptance process). The
ultimate goal of this test process is to achieve continuous integration where software
updates can be published to the public frequently. [44] [45]
This methodology increases the testing effort done by development, before reaching any
formal testing team. In some other development models, most of the test execution occurs
after the requirements have been defined and the coding process has been completed.
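The test-first cycle described above can be sketched with Python's unittest module. The slugify function is a hypothetical feature: its tests are written first and would fail until the function below is implemented to satisfy them.

```python
# Test-driven sketch: the unit tests are written first and fail until the
# code is written to satisfy them.

import unittest

def slugify(title: str) -> str:
    # Implementation written *after* the tests below, just enough to pass.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Written first; these initially failed (slugify did not exist yet).
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_runs_of_spaces(self):
        self.assertEqual(slugify("a   b"), "a-b")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True once slugify satisfies both tests
```

In continuous integration, exactly such a suite would run automatically on every check-in, which is what makes frequent publication of updates feasible.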
Top-down and bottom-up
Bottom Up Testing is an approach to integrated testing where the lowest level
components (modules, procedures, and functions) are tested first, then integrated and
used to facilitate the testing of higher level components. After the integration testing of
lower level integrated modules, the next level of modules will be formed and can be used
for integration testing. The process is repeated until the components at the top of the
hierarchy are tested. This approach is helpful only when all or most of the modules of the
same development level are ready. This method also helps to determine the
levels of software developed and makes it easier to report testing progress in the form of
a percentage.
Top Down Testing is an approach to integrated testing where the top integrated modules
are tested and the branch of the module is tested step by step until the end of the related
module.
In both, method stubs and drivers are used to stand-in for missing components and are
replaced as the levels are completed.
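How a stub and a driver stand in for missing components can be sketched as follows; all names (the tax service, total_price) are hypothetical examples.

```python
# Integration-testing sketch: a stub replaces a lower-level component that is
# not ready yet (top-down), and a driver exercises a lower-level component
# whose real callers do not exist yet (bottom-up).

def tax_rate_stub(region):
    """Stub replacing the unfinished tax service; returns canned data."""
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

def total_price(net, region, rate_lookup=tax_rate_stub):
    """Higher-level module under test, wired to the stub for now."""
    return round(net * (1 + rate_lookup(region)), 2)

def driver_for_total_price():
    """Driver: a throwaway caller used to exercise total_price in isolation."""
    return [total_price(100.0, "EU"), total_price(100.0, "US")]

print(driver_for_total_price())  # [120.0, 107.0]
```

As integration proceeds, the stub is replaced by the real tax service and the driver by the real calling code, exactly as the text describes.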
A sample testing cycle
Although variations exist between organizations, there is a typical cycle for
testing.[46] The sample below is common among organizations employing the Waterfall
development model. The same practices are commonly found in other development
models, but might not be as clear or explicit.
Requirements analysis: Testing should begin in the requirements phase of
the software development life cycle. During the design phase, testers work to
determine what aspects of a design are testable and with what parameters those tests
work.
Test planning: Test strategy, test plan, testbed creation. Since many activities will be
carried out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets, test scripts
to use in testing software.
Test execution: Testers execute the software based on the plans and test documents
then report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final
reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: Or Defect Analysis, is done by the development team usually
along with the client, in order to decide what defects should be assigned, fixed,
rejected (i.e. found software working properly) or deferred to be dealt with later.
Defect Retesting: Once a defect has been dealt with by the development team, it is
retested by the testing team. This is also known as resolution testing.
Regression testing: It is common to have a small test program built of a subset of
tests, for each integration of new, modified, or fixed software, in order to ensure that
the latest delivery has not ruined anything, and that the software product as a whole is
still working correctly.
Test Closure: Once the test meets the exit criteria, the activities such as capturing the
key outputs, lessons learned, results, logs, documents related to the project are
archived and used as a reference for future projects.
Automated testing
Many programming groups are relying more and more on automated testing, especially
groups that use test-driven development. There are many frameworks to write tests in,
and continuous integration software will run tests automatically every time code is
checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways
they think of doing it), it can be very useful for regression testing. However, it does
require a well-developed test suite of testing scripts in order to be truly useful.
Testing tools
Program testing and fault detection can be aided significantly by testing tools
and debuggers. Testing/debug tools include features such as:
Program monitors, permitting full or partial monitoring of program code including:
Instruction set simulator, permitting complete instruction level monitoring and
trace facilities

Program animation, permitting step-by-step execution and
conditional breakpoint at source level or in machine code
Code coverage reports
Formatted dump or symbolic debugging, tools allowing inspection of program
variables on error or at chosen points
Automated functional GUI testing tools are used to repeat system-level tests through
the GUI
Benchmarks, allowing run-time performance comparisons to be made
Performance analysis (or profiling tools) that can help to highlight hot spots and
resource usage
Some of these features may be incorporated into an Integrated Development
Environment (IDE).
Measurement in software testing
Usually, quality is constrained to such topics as correctness, completeness, and security,
but can also include more technical requirements as described under the ISO standard
ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability,
compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to
assist in determining the state of the software or the adequacy of the testing.
4.8 Test Data and Test Cases
Test plan
A test specification is called a test plan. The developers are well aware of what test plans
will be executed, and this information is made available to management and the
developers. The idea is to make them more cautious when developing their code or
making additional changes. Some companies have a higher-level document called
a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test
documents. It is used to change tests when related source documents are changed, to
select test cases for execution when planning for regression tests by considering
requirement coverage.
Test case
A test case normally consists of a unique identifier, requirement references from a
design specification, preconditions, events, a series of steps (also known as actions) to
follow, input, output, expected result, and actual result. Clinically defined, a test case is
an input and an expected result.[47] This can be as pragmatic as 'for condition x your
derived result is y', whereas other test cases describe in more detail the input scenario
and what results might be expected. It can occasionally be a series of steps (but often

steps are contained in a separate test procedure that can be exercised against multiple
test cases, as a matter of economy) but with one expected result or expected outcome.
The optional fields are a test case ID, test step, or order of execution number, related
requirement(s), depth, test category, author, and check boxes for whether the test is
automatable and has been automated. Larger test cases may also contain prerequisite
states or steps, and descriptions. A test case should also contain a place for the actual
result. These steps can be stored in a word processor document, spreadsheet, database,
or other common repository. In a database system, you may also be able to see past
test results, who generated the results, and what system configuration was used to
generate those results. These past results would usually be stored in a separate table.
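The fields described above might be captured in a data structure like the following sketch; the field names are illustrative, not a standard schema, and a real tool would add the audit fields (author, configuration, history) mentioned in the text.

```python
# Test-case record sketch: the fields described above as a data structure,
# as a test-management tool or spreadsheet might store them.

from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    requirement_ref: str
    preconditions: list
    steps: list
    input_data: str
    expected_result: str
    actual_result: str = ""      # filled in during execution
    automatable: bool = False

tc = TestCase(
    case_id="TC-07",
    requirement_ref="REQ-2 logout",
    preconditions=["user is logged in"],
    steps=["click 'Logout'", "confirm dialog"],
    input_data="session token",
    expected_result="user returned to login screen",
    automatable=True,
)
tc.actual_result = "user returned to login screen"
print(tc.actual_result == tc.expected_result)  # True: the test case passes
```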
Test script
A test script is a procedure, or programming code, that replicates user actions. Initially
the term was derived from the product of work created by automated regression test
tools. A test case is the baseline from which test scripts are created using a tool or a program.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often
also contains more detailed instructions or goals for each collection of test cases. It
definitely contains a section where the tester identifies the system configuration used
during testing. A group of test cases may also contain prerequisite states or steps, and
descriptions of the following tests.
Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of
a particular feature. All the test values and changeable environmental components are
collected in separate files and stored as test data. It is also useful to provide this data to
the client along with the product or project.
Test harness
The software, tools, samples of data input and output, and configurations are all
referred to collectively as a test harness.
4.9 Bug Tracking
Bug tracking systems as a part of integrated project management systems
Bug and issue tracking systems are often implemented as a part of integrated project
management systems. This approach allows bug tracking and fixing to be included in the
general product development process, supports fixing bugs in several product versions, and
enables automatic generation of a product knowledge base and release notes.
Distributed bug tracking
Some bug trackers are designed to be used with distributed revision control software. These
distributed bug trackers allow bug reports to be conveniently read, added to the database or
updated while a developer is offline.[3] Fossil and Veracity both include distributed bug
trackers.
Recently, commercial bug tracking systems have also begun to integrate with distributed
version control. FogBugz, for example, enables this functionality via the source-control tool,
Kiln.[4]
Although wikis and bug tracking systems are conventionally viewed as distinct types of
software, ikiwiki can also be used as a distributed bug tracker. It can manage documents and
code as well, in an integrated distributed manner. However, its query functionality is not as
advanced or as user-friendly as some other, non-distributed bug trackers such
as Bugzilla.[5] Similar statements can be made about org-mode, although it is not wiki
software as such.
Bug tracking and test management
While traditional test management tools such as HP Quality Center and IBM Rational
Quality Manager come with their own bug tracking systems, other tools integrate with
popular bug tracking systems.
4.10 Causal Analysis
Causal analysis and resolution improves quality and productivity by preventing the
introduction of defects or problems and by identifying and appropriately incorporating the
causes of superior process performance.
The Causal Analysis and Resolution process area involves the following activities:
Identifying and analyzing causes of selected outcomes. The selected outcomes can
represent defects and problems that can be prevented from happening in the future or
successes that can be implemented in projects or the organization.
Taking actions to complete the following:
o Remove causes and prevent the recurrence of those types of
defects and problems in the future
o Proactively analyze data to identify potential problems and
prevent them from occurring
o Incorporate the causes of successes into the process to improve
future process performance
Reliance on detecting defects and problems after they have been introduced is not cost
effective. It is more effective to prevent defects and problems by integrating Causal
Analysis and Resolution activities into each phase of the project.
Since similar outcomes may have been previously encountered in other projects or in earlier
phases or tasks of the current project, Causal Analysis and Resolution activities are
mechanisms for communicating lessons learned among projects.

Types of outcomes encountered are analyzed to identify trends. Based on an understanding
of the defined process and how it is implemented, root causes of these outcomes and their
future implications are determined.
Since it is impractical to perform causal analysis on all outcomes, targets are selected by
tradeoffs on estimated investments and estimated returns of quality, productivity, and cycle
time.
Measurement and analysis processes should already be in place. Existing defined measures
can be used, though in some instances new measurement definitions, redefinitions, or
clarified definitions may be needed to analyze the effects of a process change.
Refer to the Measurement and Analysis process area for more information about aligning
measurement and analysis activities and providing measurement results.
Causal Analysis and Resolution activities provide a mechanism for projects to evaluate their
processes at the local level and look for improvements that can be implemented.
When improvements are judged to be effective, the information is submitted to the
organizational level for potential deployment in the organizational processes.
The specific practices of this process area apply to a process that is selected for quantitative
management. Use of the specific practices of this process area can add value in other
situations, but the results may not provide the same degree of impact to the organization's
quality and process performance objectives.
UNIT V
SOFTWARE PROCESS DEFINITION AND MANAGEMENT
5.1 Process Elements
Accountability
Each deliverable has an owner and a customer (e.g. Requirements is owned by the
product manager; the lead architect is the customer).
Each engineer or group of engineers responsible for a set of features or components
should support them during Beta and early releases.
Team Spirit
Collaboration and proper diffusion of knowledge and information is key to productivity.
Management should intervene only in case of deviation from standards and baseline.
Performance Metrics

Overview
The basic process is to define a baseline for key performance metrics following a clear
but flexible set of procedures.
Alerts, analysis, and corrective/contingency actions should be planned when deviation
from the baseline occurs.
Quality Coverage
Each development artifact (Requirements, Use Cases, Design, Source Code, Test Cases,
Unit/Black-box Tests, Build, Release, Usability) must have one or more quality criteria,
such as % approved or compliant deliverables.
The overall quality index could be weighted by deliverable, with higher weights for early
deliverables, which have more impact on the overall acceptance of the product by the
customer.
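Under the scheme above, a weighted quality index might be computed as follows; the artifacts, weights, and % approved scores are invented purely for illustration.

```python
# Weighted quality-index sketch: each artifact has a percent-approved score,
# and earlier deliverables carry higher weights because they have more impact
# on overall acceptance of the product.

artifacts = {
    # artifact: (weight, percent approved)
    "Requirements": (0.30, 90),
    "Design":       (0.25, 80),
    "Source Code":  (0.20, 85),
    "Test Cases":   (0.15, 70),
    "Release":      (0.10, 95),
}

def quality_index(scores):
    """Weighted average of percent-approved scores across all artifacts."""
    total_weight = sum(w for w, _ in scores.values())
    return sum(w * pct for w, pct in scores.values()) / total_weight

print(round(quality_index(artifacts), 1))  # 84.0
```

A drop in the index between builds then points at the artifacts (and phases) whose quality criteria are slipping.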
Functionality Coverage
Percentage of requirements validated by customers
Percentage of user scenarios validated by customers
Productivity
Ratio of failed build, release, test cycle, reworked requirements
Accuracy of initial estimation
Defect turnaround time
Customer escalation turnaround time
Activity-based Costing
Average cost to fix a defect
Average cost of a failed build
Average cost of failed release
Average cost of incorrect requirement
Quality Function Deployment (QFD)

Probes
Probes are used to collect statistics on the development process
API or Web services for statistics
Alerts on deviation
Review & Traceability

Procedures
Here is a sample of procedures for the software development process:

Creating/updating coding standards

Creating/updating design guideline

Creating/updating requirements

Creating/updating use cases

Creating/updating build scripts

Creating/updating test cases

Creating/updating user documentation

Creating/updating training guide

Unit testing
Black-box testing
Defect resolution
Data Integrity tests
Performance testing
Functional testing

Configuration management test lab

Test automation
Build automation
Release automation

Requirements review
Code review
Design review
Test cases/plan review

Build
Release
Customer escalation
Update Product Roadmap
Build vs buy
Hiring
Engineering Suggestions
Reusability of components
Orientation for new engineers
Hands-on training
Management by objectives
Performance appraisal
Source code management
Process Management
Project management

Selection of offshore teams


Synchronization of development context with offshore teams

Automation & Tools


Here is a sample list of automation tools that can be used in software engineering:

Build environment/portal

Requirements tracking system

Defect database

Test configuration management

Development environment

Test harness

Unit Test generator

Execution profiler

Code coverage analyzer


Black-box automation tool


Version control
Collaboration platform

5.2 Process Architecture
Process architecture is the structural design of general process systems and applies to
fields such as computers (software, hardware, networks, etc.), business processes
(enterprise architecture, policy and procedures, logistics, project management, etc.),
and any other process system of varying degrees of complexity.[1]
Processes are defined as having inputs, outputs and the energy required to transform
inputs to outputs. Use of energy during transformation also implies a passage of time:
a process takes real time to perform its associated action. A process also requires space
for input/output objects and transforming objects to exist: a process uses real space.
A process system is a specialized system of processes. Processes are composed of
processes. Complex processes are made up of several processes that are in turn made
up of several processes. This results in an overall structural hierarchy of abstraction. If
the process system is studied hierarchically, it is easier to understand and manage;
therefore, process architecture requires the ability to consider process systems
hierarchically. Graphical modeling of process architectures is considered by Dualistic
Petri nets. Mathematical consideration of process architectures may be found
in CCS and the π-calculus.
The structure of a process system, or its architecture, can be viewed as a dualistic
relationship of its infrastructure and suprastructure.[1][2] The infrastructure describes a
process system's component parts and their interactions. The suprastructure considers
the super system of which the process system is a part. (Suprastructure should not be
confused with superstructure, which is actually part of the infrastructure built for
(external) support.) As one traverses the process architecture from one level of
abstraction to the next, infrastructure becomes the basis for suprastructure and vice
versa as one looks within a system or without.
Requirements
Requirements for a process system are derived at every hierarchical level.[2] Black-box
requirements for a system come from its suprastructure. Customer requirements are
black-box requirements near, if not at, the top of a process architecture's hierarchy.
White-box requirements, such as engineering rules, programming syntax, etc., come
from the process system's infrastructure.
Process systems are a dualistic phenomenon of change/no-change or form/transform
and as such, are well-suited to being modeled by the bipartite Petri Nets modeling
system and in particular, process-class Dualistic Petri nets, where processes can be
simulated in real time and space and studied hierarchically.
5.3 Relationship Between Elements
5.4 Process Modeling
The term process model is used in various contexts. For example, in business process
modeling the enterprise process model is often referred to as the business process model.

Process models are processes of the same nature that are classified together into a model.
Thus, a process model is a description of a process at the type level. Since the process model
is at the type level, a process is an instantiation of it. The same process model is used
repeatedly for the development of many applications and thus, has many instantiations. One
possible use of a process model is to prescribe how things must/should/could be done in
contrast to the process itself which is really what happens. A process model is roughly an
anticipation of what the process will look like. What the process shall be will be determined
during actual system development.[2]
The goals of a process model are to be:
Descriptive
Track what actually happens during a process
Take the point of view of an external observer who looks at the way a process has
been performed and determines the improvements that must be made to make it
perform more effectively or efficiently.
Prescriptive
Define the desired processes and how they should/could/might be performed.

Establish rules, guidelines, and behavior patterns which, if followed, would lead to the
desired process performance. They can range from strict enforcement to flexible
guidance.
Explanatory
Provide explanations about the rationale of processes.
Explore and evaluate the several possible courses of action based on
rational arguments.
Establish an explicit link between processes and the requirements that the model
needs to fulfill.
Pre-defines points at which data can be extracted for reporting purposes.
Classification of process models
By coverage
There are five types of coverage where the term process model has been defined
differently:[3]
Activity-oriented: related set of activities conducted for the specific purpose of product
definition; a set of partially ordered steps intended to reach a goal.[4]
Product-oriented: series of activities that cause sensitive product transformations to reach
the desired product.[5]
Decision-oriented: set of related decisions conducted for the specific purpose of product
definition.
Context-oriented: sequence of contexts causing successive product transformations under
the influence of a decision taken in a context.
Strategy-oriented: allows building models representing multi-approach processes and planning different possible ways to elaborate the product, based on the notion of intention and strategy.[6]
By alignment
Processes can be of different kinds.[2] These definitions correspond to the various ways in
which a process can be modelled.
Strategic processes
investigate alternative ways of doing a thing and eventually produce a plan for doing
it
are often creative and require human co-operation; thus, alternative generation and
selection from an alternative are very critical activities
Tactical processes
help in the achievement of a plan
are more concerned with the tactics to be adopted for actual plan achievement than
with the development of a plan of achievement
Implementation processes
are the lowest level processes

are directly concerned with the details of the what and how of plan implementation
By granularity
Granularity refers to the level of detail of a process model and affects the kind of guidance,
explanation and trace that can be provided. Coarse granularity restricts these to a rather
limited level of detail whereas fine granularity provides more detailed capability. The nature
of granularity needed is dependent on the situation at hand.[2]
Project managers, customer representatives, and general, top-level, or middle management require a rather coarse-grained process description, as they want to gain an overview of time, budget, and resource planning for their decisions. In contrast, software engineers, users, testers, analysts, or software system architects will prefer a fine-grained process model, where the details of the model can provide them with instructions and important execution dependencies, such as the dependencies between people.
While notations for fine-grained models exist, most traditional process models are coarse-grained descriptions. Process models should, ideally, provide a wide range of granularity (e.g. Process Weaver).[2][7]
By flexibility

Figure: Flexibility of method construction approaches.[8]
It was found that while process models were prescriptive, in actual practice departures from the prescription can occur.[6] Thus, frameworks for adapting methods evolved so that systems development methods match specific organizational situations and thereby improve their usefulness. The development of such frameworks is also called Situational Method Engineering.
Method construction approaches can be organized in a flexibility spectrum ranging from 'low' to 'high'.[8] Lying at the 'low' end of this spectrum are rigid methods, whereas at the 'high' end there is modular method construction. Rigid methods are completely pre-defined and leave little scope for adapting them to the situation at hand. On the other hand, modular methods can be modified and augmented to fit a given situation. Selecting a rigid method allows each project to choose its method from a panel of rigid, pre-defined methods, whereas selecting a path within a method consists of choosing the appropriate path for the situation at hand. Finally, selecting and tuning a method allows each project to select methods from different approaches and tune them to the project's needs.[9]
Quality of methods
As the quality of process models is discussed here, there is a need to elaborate on the quality of modeling techniques as an important ingredient in the quality of process models. In most existing frameworks created for understanding quality, the line between the quality of modeling techniques and the quality of models resulting from the application of those techniques is not clearly drawn. This section therefore concentrates on both the quality of process modeling techniques and the quality of process models, to clearly differentiate the two. Various frameworks have been developed to help in understanding the quality of process modeling techniques; one example is the Quality-based Modeling Evaluation framework, known as the Q-ME framework, which is argued to provide a set of well-defined quality properties and procedures to make an objective assessment of these properties possible.[10] This framework also has the advantage of providing a uniform and formal description of the model elements within one or different model types using one modeling technique.[10] In short, this makes possible an assessment of both the product quality and the process quality of modeling techniques with regard to a set of previously defined properties.
Quality properties that relate to business process modeling techniques, as discussed in [10], are:
Expressiveness: the degree to which a given modeling technique is able to denote the models of any number and kinds of application domains.
Arbitrariness: the degree of freedom one has when modeling one and the same domain.
Suitability: the degree to which a given modeling technique is specifically tailored for a specific kind of application domain.
Comprehensibility: the ease with which the way of working and way of modeling are understood by participants.
Coherence: the degree to which the individual sub-models of a way of modeling constitute a whole.
Completeness: the degree to which all necessary concepts of the application domain are represented in the way of modeling.
Efficiency: the degree to which the modeling process uses resources such as time and people.
Effectiveness: the degree to which the modeling process achieves its goal.
To assess the quality of the Q-ME framework, it was used to illustrate the quality of the Dynamic Essential Modeling of Organization (DEMO) business modeling technique.
The evaluation of the Q-ME framework against the DEMO modeling technique revealed shortcomings of Q-ME. One in particular is that it does not include a quantifiable metric to express the quality of a business modeling technique, which makes it hard to compare the quality of different techniques in an overall rating.
There is also a systematic approach to the quality measurement of modeling techniques known as complexity metrics, suggested by Rossi et al. (1996). The technique's meta-model is used as a basis for the computation of these complexity metrics. In comparison to the quality framework
proposed by Krogstie, quality measurement focuses more on the technical level than on the individual model level.[11]
Cardoso, Mendling, Neuman, and Reijers (2006) used complexity metrics to measure the simplicity and understandability of a design. This is supported by later research by Mendling et al., who argued that, without quality metrics to help question the quality properties of a model, a simple process can be modeled in a complex and unsuitable way. This in turn can lead to lower understandability, higher maintenance costs, and perhaps inefficient execution of the process in question.[12]
The quality of a modeling technique is thus important in creating models that are themselves of quality, and it contributes to the correctness and usefulness of models.
Quality of models
The earliest process models reflected the dynamics of the process, with a practical process obtained by instantiation in terms of relevant concepts, available technologies, specific implementation environments, process constraints, and so on.[13]
An enormous amount of research has been done on the quality of models, but less focus has been directed towards the quality of process models. Quality issues of process models cannot be evaluated exhaustively; however, four main guidelines and frameworks are in practice for this purpose: top-down quality frameworks, bottom-up metrics related to quality aspects, empirical surveys related to modeling techniques, and pragmatic guidelines.[14]
Hommes quotes Wang et al. (1994)[11] in stating that all the main characteristics of model quality can be grouped under two headings, namely correctness and usefulness. Correctness ranges from the model's correspondence to the phenomenon that is modeled to its correspondence to the syntactical rules of the modeling language, and it is independent of the purpose for which the model is used.
Usefulness, in contrast, can be seen as the model being helpful for the specific purpose for which it was constructed in the first place. Hommes also makes a further distinction between internal correctness (empirical, syntactical, and semantic quality) and external correctness (validity).
A common starting point for defining the quality of a conceptual model is to look at the linguistic properties of the modeling language, of which syntax and semantics are most often applied.
A broader approach is based on semiotics rather than linguistics, as was done by Krogstie using the top-down quality framework known as SEQUAL.[15][16] It defines several quality aspects based on relationships between a model, knowledge externalization, the domain, a modeling language, and the activities of learning, taking action, and modeling.
The framework does not, however, provide ways to determine the various degrees of quality, although it has been used extensively for business process modeling in empirical tests.[17] According to previous research done by Moody et al.[18] with use of the conceptual model

125

CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

quality framework proposed by Lindland et al. (1994) to evaluate the quality of process models, three levels of quality[19] were identified:
Syntactic quality: assesses the extent to which the model conforms to the grammar rules of the modeling language being used.
Semantic quality: whether the model accurately represents user requirements.
Pragmatic quality: whether the model can be understood sufficiently by all relevant stakeholders in the modeling process; that is, the model should enable its interpreters to make use of it for fulfilling their needs.
From this research it was noticed that the quality framework was both easy to use and useful in evaluating the quality of process models; however, it had limitations in regard to reliability and in identifying defects. These limitations led to refinement of the framework through subsequent research by Krogstie. The refined framework is the SEQUAL framework (Krogstie et al., 1995; refined further by Krogstie & Jørgensen, 2002), which included three more quality aspects:
Physical quality: whether the externalized model is persistent and available for the
audience to make sense of it.
Empirical quality: whether the model is modeled according to the established regulations
regarding a given language.
Social quality: This regards the agreement between the stakeholders in the modeling
domain.
The dimensions of the conceptual quality framework are as follows.[20] The modeling domain is the set of all statements that are relevant and correct for describing a problem domain. Language extension is the set of all statements that are possible given the grammar and vocabulary of the modeling languages used. Model externalization is the conceptual representation of the problem domain; it is defined as the set of statements about the problem domain that are actually made. Social actor interpretation and technical actor interpretation are the sets of statements that actors (human model users and the tools that interact with the model, respectively) think the conceptual representation of the problem domain contains.
Finally, participant knowledge is the set of statements that the human actors involved in the modeling process believe should be made to represent the problem domain. These quality dimensions were later divided into two groups that deal with the physical and social aspects of the model.
In later work, Krogstie et al.[15] stated that while the extension of the SEQUAL framework fixed some of the limitations of the initial framework, other limitations remain. In particular, the framework is too static in its view of semantic quality, mainly considering models rather than modeling activities, and comparing these models to a static domain rather than seeing the model as a facilitator for changing the domain.

126

CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

Also, the framework's definition of pragmatic quality is quite narrow, focusing on understanding, in line with the semiotics of Morris, while newer research in linguistics and semiotics has focused beyond mere understanding, on how the model is used and how it impacts its interpreters.
The need for a more dynamic view in the semiotic quality framework is particularly evident when considering process models, which themselves often prescribe or even enact actions in the problem domain; hence a change to the model may also change the problem domain directly. Krogstie et al. (2006) discuss the quality framework in relation to active process models and suggest a revised framework on this basis, making SEQUAL more appropriate for active process models by redefining physical quality with a narrower interpretation than in previous research.[15]
The other framework in use is the Guidelines of Modeling (GoM),[21] based on general accounting principles, which comprises six principles. Correctness is the first. Clarity deals with the comprehensibility and explicitness (system description) of model systems; comprehensibility relates to the graphical arrangement of the information objects and therefore supports the understandability of a model. Relevance relates the model to the situation being presented. Comparability involves the ability to compare models, that is, semantic comparison between two models. Economic efficiency requires that the cost of the design process be at least covered by the proposed cost savings and revenue increases.
Since the purpose of organizations is in most cases the maximization of profit, this principle defines the borderline for the modeling process. The last principle, systematic design, defines that there should be an accepted differentiation between diverse views within modeling. Correctness, relevance, and economic efficiency are prerequisites of model quality and must be fulfilled, while the remaining guidelines are optional but desirable.
The two frameworks, SEQUAL and GoM, share the limitation that they cannot easily be used by people who are not competent in modeling. They provide major quality metrics but are not easily applicable by non-experts.
The use of bottom-up metrics related to quality aspects of process models tries to bridge this gap for non-experts in modeling, but it is mostly theoretical, and no empirical tests have been carried out to support their use.
Most experiments carried out relate to the relationship between metrics and quality aspects, and these works have been done individually by different authors: Canfora et al. study the connection mainly between count metrics (for example, the number of tasks or splits) and the maintainability of software process models;[22] Cardoso validates the correlation between control-flow complexity and perceived complexity; and Mendling et al. use metrics to predict control-flow errors such as deadlocks in process models.[12][23]
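To make the idea of such bottom-up metrics concrete, the following is a rough sketch of Cardoso's control-flow complexity (CFC) metric, one of the count metrics mentioned above. The representation of a process model as a list of (split type, fan-out) pairs is a made-up simplification for illustration:

```python
def control_flow_complexity(splits):
    """Cardoso-style CFC: an XOR-split contributes its fan-out,
    an OR-split contributes 2**n - 1 possible continuation sets,
    and an AND-split contributes 1 (all branches are taken)."""
    cfc = 0
    for kind, fan_out in splits:
        if kind == "XOR":
            cfc += fan_out
        elif kind == "OR":
            cfc += 2 ** fan_out - 1
        elif kind == "AND":
            cfc += 1
        else:
            raise ValueError(f"unknown split type: {kind}")
    return cfc

# A toy model with one XOR-split of 3 branches and one OR-split of 2 branches:
print(control_flow_complexity([("XOR", 3), ("OR", 2)]))  # 3 + (2**2 - 1) = 6
```

A higher CFC value suggests more possible execution states after the splits, which is the kind of property the experiments above correlate with perceived complexity and error-proneness.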

127

CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

The results reveal that an increase in the size of a model appears to have a negative impact on its quality and comprehensibility. Further work by Mendling et al. investigates the connection between metrics and understanding.[24][25] While some metrics are confirmed regarding their impact, personal factors of the modeler, such as competence, are also revealed as important for understanding the models.
The empirical surveys carried out so far still do not give clear guidelines or ways of evaluating the quality of process models, but a clear set of guidelines is necessary to guide modelers in this task. Pragmatic guidelines have been proposed by different practitioners, even though it is difficult to provide an exhaustive account of such guidelines from practice.
In [26], ten tips for process modeling are summarized; many technical definitions and rules are provided, but it does not teach how to create process models that are effective in their primary mission: maximizing shared understanding of the as-is or to-be process. Most of the guidelines are not easily put into practice, but the 'label activities verb-noun' rule had been suggested by other practitioners before and was analyzed empirically. From this research,[27] the value of process models depends not only on the choice of graphical constructs but also on their annotation with textual labels, which need to be analyzed. It was found that verb-noun labeling results in models that are better understood than alternative labeling styles.
From this earlier research on evaluating process model quality, it has been seen that a process model's size, structure, modularity, and the expertise of the modeler have an impact on its overall understandability.[24][28] Based on these findings, a set of guidelines was presented:[29] the Seven Process Modeling Guidelines (7PMG). These guidelines use the verb-object style, as well as guidelines on the number of elements in a model, the application of structured modeling, and the decomposition of a process model. The guidelines are as follows:
G1 Minimize the number of elements in a model
G2 Minimize the routing paths per element
G3 Use one start and one end event
G4 Model as structured as possible
G5 Avoid OR routing elements
G6 Use verb-object activity labels
G7 Decompose a model with more than 50 elements
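Several of these guidelines are mechanically checkable. The sketch below checks three of them (G3, G6, G7) against a toy process model; the dict-based model format and function name are hypothetical stand-ins, not any standard notation:

```python
def check_7pmg(model, max_elements=50):
    """Return a list of 7PMG guideline violations for a toy process model.
    `model` is a dict with 'elements' (activity labels), 'start_events',
    and 'end_events' lists."""
    issues = []
    n = len(model["elements"])
    if n > max_elements:                       # G7: large models should be decomposed
        issues.append(f"G7: {n} elements > {max_elements}; decompose the model")
    if len(model["start_events"]) != 1 or len(model["end_events"]) != 1:
        issues.append("G3: use exactly one start and one end event")
    for label in model["elements"]:
        if " " not in label:                   # G6: expect 'verb object' labels
            issues.append(f"G6: label '{label}' is not in verb-object style")
    return issues

model = {
    "elements": ["Check invoice", "Approve payment", "Archiving"],
    "start_events": ["start"],
    "end_events": ["done"],
}
print(check_7pmg(model))   # flags the single-word label "Archiving"
```

This also illustrates the validity limitation discussed below: such checks inspect only how content is organized and labeled, never whether the right content is in the model.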
7PMG still has limitations in its use. The first is a validity problem: 7PMG does not relate to the content of a process model, but only to the way this content is organized and represented. It suggests ways of organizing different structures of the process model while the content is kept intact, but the pragmatic issue of what must be included in the model is still left out. The second limitation relates to the prioritizing of the guidelines: the derived ranking has a small empirical basis, as it relies on the involvement of only 21 process modelers.
This could be seen, on the one hand, as a need for wider involvement of process modelers' experience, but it also raises the question of what alternative approaches may be available to arrive at a prioritized ranking of the guidelines.[29]
5.5 Process Definition Techniques


Software Engineering Process Concepts
3.1.1 Themes
Dowson [35] notes that "all process work is ultimately directed at software process assessment and improvement." This means that the objective is to implement new or better processes in actual practice, be they individual, project, or organizational practices.
We describe the main topics in the software process engineering (i.e., the meta-level that has
been alluded to
earlier) area in terms of a cycle of process change, based on the commonly known PDCA
cycle. This cycle highlights that individual process engineering topics are part of a larger
process to improve practice, and that process
evaluation and feedback is an important element of process engineering.
Software process engineering consists of four activities as illustrated in the model in Figure
1. The activities are
sequenced in an iterative cycle allowing for continuous feedback and improvement of the
software process.
The Establish Process Infrastructure activity consists of establishing commitment to
process implementation and
change (including obtaining management buy-in), and putting in place an appropriate
infrastructure (resources and
responsibilities) to make it happen. The activities Planning of Process Implementation and
Change and Process Implementation and Change are the core ones in process
engineering, in that they are essential for any long-lasting benefit from process engineering
to accrue. In the planning activity the objective is to understand the current business
objectives and process needs of the organization, identify its strengths and weaknesses, and
make a plan for process implementation and change. In Process Implementation and
Change, the objective is to execute the plan, deploy new processes (which may involve, for
example, the deployment of tools and training of staff), and/or change existing processes.
The fourth activity, Process Evaluation is concerned with finding out how well the
implementation and change went; whether the expected benefits materialized. This is then
used as input for subsequent cycles. At the centre of the cycle is the Process Experience
Base. This is intended to capture lessons from past iterations of the cycle (e.g., previous
evaluations, process definitions, and plans). Evaluation lessons can be qualitative or
quantitative. No assumptions are made about the nature or technology of this Process
Experience Base, only that it be a persistent storage. It is expected that during subsequent
iterations of the cycle, previous experiences will be adapted and reused. It is also important
to continuously re-assess the utility of information in the experience base to ensure that
obsolete information does not accumulate. With this cycle as a framework, it is possible to
map the topics in this knowledge area to the specific activities where they would be most
relevant. This mapping is also shown in Figure 1. The bulleted boxes contain the Knowledge
Area topics. It should be noted that this cycle is not intended to imply that software process
engineering is relevant to only large organizations. To the contrary, process-related activities
can, and have been, performed successfully by small organizations, teams, and individuals.
The way the activities defined in the cycle are performed would be different depending on
the context. Where it is relevant, we will present examples of approaches for small
organizations.
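The four activities and the central experience base can be sketched as a simple loop. The activity names come from the text above; the list-of-dicts "experience package" format is a made-up illustration, not a prescribed data model:

```python
CYCLE = [
    "Establish Process Infrastructure",
    "Planning of Process Implementation and Change",
    "Process Implementation and Change",
    "Process Evaluation",
]

experience_base = []      # persistent store: lessons from past iterations

def run_cycle(iteration):
    """Run the four activities in order; evaluation results are stored
    in the experience base for adaptation and reuse in later passes."""
    prior_lessons = list(experience_base)      # earlier experience available for reuse
    log = []
    for activity in CYCLE:
        log.append(activity)                   # each activity would produce plans,
                                               # deployments, or evaluations in practice
    experience_base.append({
        "iteration": iteration,
        "reused": len(prior_lessons),          # how many prior packages were available
        "evaluation": f"lessons from iteration {iteration}",
    })
    return log

print(run_cycle(1)[-1])                 # the cycle always ends with Process Evaluation
run_cycle(2)
print(experience_base[1]["reused"])     # the second pass could reuse one prior package
```

The point of the sketch is the feedback structure: each pass ends in evaluation, and each subsequent pass starts with more accumulated experience than the last.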

Figure 1 A model of the software process engineering cycle, and the relationship of its
activities to the KA topics.
The circles are the activities in the process engineering cycle. The square in the middle of the
cycle is a data store.
The bulleted boxes are the topics in this Knowledge Area that map to each of the activities in
the cycle. The numbers refer to the topic sections in this chapter. The topics in this KA are as
follows:
Process Infrastructure: This is concerned with putting in place an infrastructure for
software process
engineering.
Process Measurement: This is concerned with quantitative techniques to diagnose software
processes;
to identify strengths and weaknesses. This can be performed to initiate process
implementation and change, and afterwards to evaluate the consequences of process
implementation and change.
Process Definition: This is concerned with defining processes in the form of models, plus
the automated support that is available for the modeling task, and for enacting the models
during the software process.
Qualitative Process Analysis: This is concerned with qualitative techniques to analyze
software processes, to
identify strengths and weaknesses. This can be performed to initiate process implementation
and change, and afterwards to evaluate the consequences of process implementation and
change.
Process Implementation and Change: This is concerned with deploying processes for the
first time and with changing existing process. This topic focuses on organizational change. It
describes the paradigms, infrastructure, and critical success factors necessary for successful
process implementation and change. Within the scope of this topic, we also present some
conceptual issues about the evaluation of process change. The main, generally accepted,
themes in the software engineering process field have been described by Dowson in [35]. His
themes are a subset of the topics that we cover in this KA. Below are Dowson's themes:
Process definition: covered in topic 3.4 of this KA breakdown
Process assessment: covered in topic 3.3 of this KA breakdown
Process improvement: covered in topics 3.2 and 3.6 of this KA breakdown
Process support: covered in topic 3.4 of this KA breakdown
We also add one theme in this KA description, namely the qualitative process analysis
(covered in topic 3.5).
3.1.2 Terminology
There is no single universal source of terminology for the software engineering process field, but good sources that define important terms are [51][96] and the vocabulary (Part 9) in the ISO/IEC TR 15504 documents [81].
3.2 Process Infrastructure

131

CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

At the initiation of process engineering, it is necessary to have an appropriate infrastructure in place. This includes
having the resources (competent staff, tools and funding), as well as the assignment of
responsibilities. This is an
indication of management commitment to and ownership of the process engineering effort.
Various committees may
have to be established, such as a steering committee to oversee the process engineering
effort. It is widely recognized that a team separate from the developers/maintainers must be
set up and tasked with process analysis, implementation and change [16]. The main reason
for this is that the priority of the developers/maintainers is to produce systems or releases,
and therefore process engineering activities will not receive as much attention as they
deserve or need. This, however, should not mean that the project organization is not involved
in the process engineering effort at all. To the contrary, their involvement is essential.
Especially in a small organization, outside help (e.g., consultants) may be required to assist
in making up a process team. Two types of infrastructure have been used in practice: the
Experience Factory [8][9] and the Software Engineering Process Group [54]. The IDEAL
handbook [100] provides a good description of infrastructure for process improvement in
general.
3.2.1 The Software Engineering Process Group
The SEPG is intended to be the central focus for process improvement within an organization. The SEPG typically has the following ongoing activities:
Obtains and maintains the support of all levels of management
Facilitates software process assessments (see below)
Works with line managers whose projects are affected by changes in software engineering
practice
Maintains collaborative working relationships with software engineers
Arranges and supports any training or continuing education related to process
implementation and Change
Tracks, monitors, and reports on the status of particular improvement efforts
Facilitates the creation and maintenance of process definitions
Maintains a process database
Provides process consultation to development projects and management
Participates in integrating software engineering processes with other organizational
processes, such as systems engineering
Fowler and Rifkin [54] suggest the establishment of a steering committee consisting of line
and supervisory
management. This would allow management to guide process implementation and change,
align this effort with
strategic and business goals of the organization, and also provide them with visibility.
Furthermore, technical
working groups may be established to focus on specific issues, ranging from selecting a new design method to setting up a measurement program.
3.2.2 The Experience Factory
The concept of the EF separates the project organization (e.g., the software development
organization) from the
improvement organization. The project organization focuses on the development and
maintenance of applications. The EF is concerned with improvement. Their relationship is
depicted in Figure 2. The EF is intended to institutionalize the collective learning of an organization by developing, updating, and delivering to the project organization experience packages (e.g., guide books, models, and training courses). The project organization offers to the experience factory its products, the plans used in their development, and the data gathered during development and operation. Examples of experience packages include:
resource models and baselines (e.g., local cost models, resource allocation models)
change and defect baselines and models (e.g., defect prediction models, types of defects
expected for the
application)
project models and baselines (e.g., actual vs. expected product size)
process definitions and models (e.g., process models for Cleanroom, Ada waterfall model)
method and technique evaluations (e.g., best method for finding interface faults)
products and product parts (e.g., Ada generics for simulation of satellite orbits)
quality models (e.g., reliability models, defect slippage models, ease of change models),
and
lessons learned (e.g., risks associated with Ada development).

133

CP7301

SOFTWARE PROCESS AND PROJECT MANAGEMENT

Figure 2 The relationship between the Experience Factory and the project organization as implemented at the Software Engineering Laboratory at NASA/GSFC. This diagram is reused here from [10] with permission of the authors.
3.3 Process Measurement
Process measurement, as used here, means that quantitative information about the process is collected, analyzed, and interpreted. Measurement is used to identify the strengths and weaknesses of processes, and to evaluate processes after they have been implemented and/or changed (e.g., to evaluate the ROI from implementing a new process).
An important assumption made in most process engineering work is illustrated by the path diagram in Figure 3. Here, we assume that the process has an impact on process outcomes. Process outcomes could be, for example, product quality (faults per KLOC or per FP), maintainability (effort to make a certain type of change), productivity (LOC or FP per person-month), time-to-market, the extent of process variation, or customer satisfaction (as measured through a customer survey). This relationship depends on the particular context (e.g., size of the organization, or size of the project).
Not every process will have a positive impact on all outcomes. For example, the introduction of software inspections may reduce testing effort and cost, but may increase interval time if each inspection introduces large delays due to the scheduling of large inspection meetings [131]. Therefore, it is preferable to use multiple process outcome measures that are important for the organization's business. In general, we are most concerned about the process outcomes. However, in order to achieve the process outcomes that we desire (e.g., better quality, better maintainability, greater customer satisfaction), we have to implement the appropriate process.
Of course, it is not only the process that has an impact on outcomes. Other factors, such as the capability of the staff and the tools that are used, play an important role. Furthermore, the extent to which the process is institutionalized or implemented (i.e., process fidelity) is important, as it may explain why good processes do not give the desired outcomes. One can measure the quality of the software process itself, or the process outcomes. The methodology in Section 3.3.1 is applicable to both. We will focus in Section 3.3.2 on process measurement, since the measurement of process outcomes is more general and applicable in other Knowledge Areas.

3.3.1 Methodology in Process Measurement
A number of guides for measurement are available [108][109][126]. All of these describe a goal-oriented process for defining measures. This means that one should start from specific information needs and then identify the measures that will satisfy those needs, rather than start from specific measures and try to use them. A good practical text on establishing and operating a measurement program has been produced by the Software Engineering Laboratory [123]; it also discusses the cost of measurement. Texts that present experiences in implementing measurement in software organizations include [86][105][115]. An emerging international standard that defines a generic measurement process is also available (ISO/IEC CD 15939: Information Technology - Software Measurement Process) [82].
Two important issues in the measurement of software engineering processes are the reliability and validity of measurement. Reliability is concerned with random measurement error. Validity is concerned with the ability of the measure to really measure what we think it is measuring. Reliability becomes important when there is subjective measurement, for example, when assessors assign scores to a particular process. There are different types of validity that ought to be demonstrated for a software process measure, but the most critical one is predictive validity, which is concerned with the relationship between the process measure and the process outcome. A discussion of both of these issues, and of different methods for addressing them, can be found in [40][59]. An IEEE Standard describes a methodology for validating metrics (IEEE Standard for a Software Quality Metrics Methodology, IEEE Std 1061-1998) [76].
An overview of existing evidence on the reliability of software process assessments can be found in [43][49], and on predictive validity in [44][49][59][88].
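Predictive validity is, at bottom, a statistical relationship between a process measure and a process outcome. A minimal sketch of how it might be checked, using a hand-rolled Pearson correlation (the assessment scores and defect densities below are hypothetical data, not taken from any cited study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a process measure (e.g., an assessment
    score) and a process outcome (e.g., defect density)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: higher capability scores should go with lower defect density.
scores  = [1, 2, 3, 4, 5]
defects = [9.0, 7.5, 6.0, 4.0, 2.5]
r = pearson_r(scores, defects)   # strongly negative: evidence of predictive validity
```

A strong (here, negative) correlation is evidence that the process measure predicts the outcome; in practice one would also test its statistical significance.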
3.3.2 Process Measurement Paradigms
Two general paradigms that are useful for characterizing the type of process measurement that can be performed have been described by Card [21]. The distinction made by Card is a useful conceptual one, although there may be overlaps in practice. The first is the analytic paradigm. This is characterized as relying on quantitative evidence to determine where improvements are needed and whether an
improvement initiative has been successful. The second, the benchmarking paradigm, depends on identifying an excellent organization in a field and documenting its practices and tools. Benchmarking assumes that if a less proficient organization adopts the practices of the excellent organization, it will also become excellent. Of course, both paradigms can be followed at the same time, since they are based on different types of information. We use these paradigms as general titles to distinguish between different types of measurement.
3.3.2.1 Analytic Paradigm
The analytic paradigm is exemplified by the Quality Improvement Paradigm (QIP), consisting of a cycle of understanding, assessing, and packaging [124].
Experimental and Observational Studies
Experimentation involves setting up controlled or quasi-experiments in the organization to evaluate processes [101]. Usually, one would compare a new process with the current process to determine whether the former has better process outcomes. Correlational (nonexperimental) studies can also provide useful feedback for identifying process improvements (for example, see the study described by Agresti [2]).
Process Simulation
The process simulation approach can be used to predict process outcomes if the current process is changed in a certain way [117]. Initial data about the performance of the current process needs to be collected, however, as a basis for the simulation.
Orthogonal Defect Classification
Orthogonal Defect Classification is a technique that can be used to link faults found with potential causes. It relies on a mapping between fault types and fault triggers [22][23]. There exists an IEEE Standard on the classification of faults (or anomalies) that may also be useful in this context (IEEE Standard for the Classification of Software Anomalies, IEEE Std 1044-1993) [74].
Statistical Process Control
Placing the software process under statistical process control, through the use of control charts and their interpretations, is an effective way to identify stability, or otherwise, in the process. One recent book provides a good introduction to SPC in the context of software engineering [53].
The Personal Software Process
This defines a series of improvements to an individual's development practices in a specified order [70]. It is bottom-up in the sense that it stipulates personal data collection and improvements based on the data interpretations.
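To make the statistical process control idea concrete, here is a minimal sketch of an XmR (individuals and moving range) chart calculation; the defect-density series is invented, and 2.66 is the standard XmR chart constant:

```python
# XmR (individuals and moving range) control limits for a per-release
# defect-density series; the data values are illustrative assumptions.

def xmr_limits(values):
    """Return (LCL, centerline, UCL) for an individuals chart."""
    n = len(values)
    mean = sum(values) / n
    # Average moving range between consecutive observations.
    mr_bar = sum(abs(b - a) for a, b in zip(values, values[1:])) / (n - 1)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128).
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

densities = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 6.5]   # last release looks unusual
lcl, cl, ucl = xmr_limits(densities[:-1])          # limits from the stable history
out_of_control = densities[-1] > ucl or densities[-1] < lcl
```

A point outside the limits signals a special cause of variation worth investigating, which is exactly the stability question SPC asks of a process.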
3.3.2.2 Benchmarking Paradigm

This paradigm involves measuring the maturity of an organization or the capability of its processes. The benchmarking paradigm is exemplified by the software process assessment work. A general introductory overview of process assessments and their application is provided in [135].
Process assessment models
An assessment model captures what are believed to be good practices. The good practices may pertain to technical software engineering activities only, or may also encompass, for example, management, systems engineering, and human resources management activities.
Architectures of assessment models
There are two general architectures for an assessment model that make different assumptions about the order in which processes must be measured: the continuous and the staged architectures [110]. At this point it is not possible to recommend one approach over the other; they have considerable differences, and an organization should evaluate which is most pertinent to its needs and objectives when selecting a model.
Assessment models
The most commonly used assessment model in the software community is the SW-CMM
[122]. It is also
important to recognize that ISO/IEC 15504 is an emerging international standard on software
process
assessments [42][81]. It defines an exemplar assessment model and conformance
requirements on
other assessment models. ISO 9001 is also a common model that has been applied by
software organizations
(usually in conjunction with ISO 9000-1) [132]. Other notable examples of assessment
models are Trillium
[25], Bootstrap [129], and the requirements engineering capability model [128]. There are
also maturity models for other software processes available, such as for testing [18][19][20],
a measurement maturity model [17], and a maintenance maturity model [36] (although, there
have been many more capability and maturity models that have been defined, for example,
for design, documentation, and formal methods, to name a few). A maturity model for
systems engineering has also been developed, which would be useful where a project or
organization is involved in the development and maintenance of systems including software
[39]. The applicability of assessment models to small organizations is addressed in
[85][120], where assessments models tailored to small organizations are presented.
Process assessment methods


In order to perform an assessment, a specific assessment method needs to be followed. In addition to producing a quantitative score that characterizes the capability of the process (or maturity of the organization), an important purpose of an assessment is to create a climate for change within the organization [37]. In fact, it has been argued that the latter is the most important purpose of doing an assessment [38]. The most well-known method that has a reasonable amount of publicly available documentation is the CBA IPI [37]. This method focuses on assessments for the purpose of process improvement using the SW-CMM. Many other methods are refinements of this for particular contexts. Another well-known method using the SW-CMM, but for supplier selection, is the SCE [6]. The activities performed during an assessment, the distribution of effort on these activities, as well as the atmosphere during an assessment, are different if it is for the purpose of improvement versus contract award. Requirements on both types of methods that reflect what are believed to be good assessment practices are provided in [81][99].
There have been criticisms of various models and methods following the benchmarking paradigm, for example [12][50][62][87]. Most of these criticisms were concerned with the empirical evidence supporting the use of assessment models and methods. However, since the publication of these articles, there has been an accumulation of systematic evidence supporting the efficacy of process assessments [24][47][48][60][64][65][66][94].
3.4 Process Definition
Software engineering processes are defined for a number of reasons, including: facilitating human understanding and communication, supporting process improvement, supporting process management, providing automated process guidance, and providing automated execution support [29][52][68]. The types of process definitions required will depend, at least partially, on the reason. It should be noted also that the context of the project and organization will determine the type of process definition that is most important. Important variables to consider include the nature of the work (e.g., maintenance or development), the application domain, the structure of the delivery process (e.g., waterfall, incremental, evolutionary), and the maturity of the organization. There are different approaches that can be used to define and document the process. Under this topic, the approaches that have been presented in the literature are covered, although at this time there is no data on the extent to which
these are used in practice.


3.4.1 Types of Process Definitions
Processes can be defined at different levels of abstraction (e.g., generic definitions vs. tailored definitions, descriptive vs. prescriptive vs. proscriptive). The differentiation amongst these has been described in [69][97][111].
Orthogonal to the levels above, there are also types of process definitions. For example, a process definition can be a procedure, a policy, or a standard.
3.4.2 Life Cycle Framework Models
These framework models serve as a high-level definition of the phases that occur during development. They are not detailed definitions, but only the high-level activities and their interrelationships. The common ones are: the waterfall model, throwaway prototyping model, evolutionary prototyping model, incremental/iterative development, spiral model, reusable software model, and automated software synthesis (see [11][28][84][111][113]). Comparisons of these models are provided in [28][32], and a method for selection amongst many of them in [3].
3.4.3 Software Life Cycle Process Models
Definitions of life cycle process models tend to be more detailed than framework models. Another difference is that life cycle process models do not attempt to order their processes in time. Therefore, in principle, the life cycle processes can be arranged to fit any of the life cycle frameworks. The two main references in this area are ISO/IEC 12207: Information Technology - Software Life Cycle Processes [80] and ISO/IEC TR 15504: Information Technology - Software Process Assessment [42][81]. Extensive guidance material for the application of the former has been produced by the IEEE (Guide for Information Technology - Software Life Cycle Processes - Life cycle data, IEEE Std 12207.1-1998, and Guide for Information Technology - Software Life Cycle Processes - Implementation Considerations, IEEE Std 12207.2-1998) [77][78]. The latter defines a two-dimensional model, with one dimension being processes and the second a measurement scale to evaluate the capability of the processes. In principle, ISO/IEC 12207 would serve as the process dimension of ISO/IEC 15504. The IEEE standard on developing life cycle processes also provides a list of processes and activities for development and maintenance (IEEE Standard for Developing Software Life Cycle Processes, IEEE Std 1074-1991) [73], and provides examples of mapping them to life cycle framework models. A standard that
focuses on maintenance processes is also available from the IEEE (IEEE Standard for Software Maintenance, IEEE Std 1219-1992) [75].
3.4.4 Notations for Process Definitions
Different elements of a process can be defined, for example, activities, products (artifacts), and resources [68]. Detailed frameworks that structure the types of information required to define processes are described in [4][98]. There are a large number of notations that have been used to define processes. They differ in the types of information defined in the above frameworks that they capture. A text that describes different notations is [125].
Because there is no data on which of these has been found to be most useful or easiest to use under which conditions, this Guide covers what are seemingly popular approaches in practice: data flow diagrams [55], in terms of process purpose and outcomes [81], as a list of processes decomposed into constituent activities and tasks defined in natural language [80], Statecharts [89][117] (also see [63] for a comprehensive description of Statecharts), ETVX [116], Actor-Dependency modeling [14][134], SADT notation [102], Petri nets [5], IDEF0 [125], rule-based [7], and System Dynamics [1]. Other process programming languages have been devised, and these are described in [29][52][68].
3.4.5 Process Definition Methods
These methods specify the activities that must be performed in order to develop and maintain a process definition. These may include eliciting information from developers to build a descriptive process definition from scratch, and tailoring an existing standard or commercial process. Examples of methods that have been applied in practice are [13][14][90][98][102]. In general, there is a strong similarity amongst them, in that they tend to follow a traditional software development life cycle.
3.4.6 Automation
Automated tools either support the execution of the process definitions, or they provide guidance to humans performing the defined processes. In cases where process analysis is performed, some tools allow different types of simulations (e.g., discrete event simulation). There exist tools that support each of the above process definition notations. Furthermore, these tools can execute the process definitions to provide automated support to the actual processes, or to fully automate them in some instances. An overview of process modeling tools can be found in [52], and of process-centered environments in [57][58]. Recent
work on the application of the Internet to the provision of real-time process guidance is described in [91].
3.5 Qualitative Process Analysis
The objective of qualitative process analysis is to identify the strengths and weaknesses of the software process. It can be performed as a diagnosis before implementing or changing a process. It could also be performed after a process is implemented or changed, to determine whether the change has had the desired effect. Below we present two techniques for qualitative analysis that have been used in practice, although it is plausible that new techniques will emerge in the future.
3.5.1 Process Definition Review
Qualitative evaluation means reviewing a process definition (either a descriptive or a prescriptive one, or both), and identifying deficiencies and potential process improvements. Typical examples of this are presented in [5][89]. An easily operational way to analyze a process is to compare it to an existing standard (national, international, or professional body), such as ISO/IEC 12207 [80]. With this approach, one does not collect quantitative data on the process; or, if quantitative data is collected, it plays a supportive role. The individuals performing the analysis of the process definition use their knowledge and capabilities to decide what process changes would potentially lead to desirable process outcomes.
3.5.2 Root Cause Analysis
Another common qualitative technique that is used in practice is Root Cause Analysis. This involves tracing back from detected problems (e.g., faults) to identify the process causes, with the aim of changing the process to avoid the problems in the future. Examples of this for different types of processes are described in [13][27][41][107]. With this approach, one starts from the process outcomes and traces back along the path in Figure 3 to identify the process causes of the undesirable outcomes. The Orthogonal Defect Classification technique described in Section 3.3.2.1 can be considered a more formalized approach to root cause analysis using quantitative information.
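A first pass at root cause analysis of a fault log can be as simple as tallying the process cause traced for each fault; a minimal sketch, where the fault identifiers and cause labels are hypothetical:

```python
from collections import Counter

# Hypothetical fault log: each detected fault traced back to a process cause.
fault_log = [
    ("F-101", "ambiguous requirement"),
    ("F-102", "missed design review"),
    ("F-103", "ambiguous requirement"),
    ("F-104", "coding standard not applied"),
    ("F-105", "ambiguous requirement"),
]

# Tally causes to find where a process change would pay off most.
causes = Counter(cause for _, cause in fault_log)
dominant_cause, count = causes.most_common(1)[0]
# Here "ambiguous requirement" dominates, pointing at the requirements process.
```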
3.6 Process Implementation and Change
This topic describes the situation when processes are deployed for the first time (e.g., introducing an inspection

process within a project, or a complete methodology, such as Fusion [26] or the Unified Process [83]), and when current processes are changed (e.g., introducing a tool, or optimizing a procedure). In both instances, existing practices have to be modified. If the modifications are extensive, then changes in the organizational culture may be necessary.
3.6.1 Paradigms for Process Implementation and Change
Two general paradigms that have emerged for driving process implementation and change are the Quality Improvement Paradigm (QIP) [124] and the IDEAL model

5.6 ETVX (Entry, Task, Validation, Exit)


Requirements Phase
(E)ntry Criteria
Business Requirements available for review
(T)asks
Review Business Requirements
Log all discrepancies/questions in Bug Tracker
Provide high-level test estimates
(V)alidation
All items in the bug tracker are closed with valid comments
Business Requirements document is updated
(E)xit
Signed-off Business Requirements document
High-level test estimates are accepted by all stakeholders
Design Phase
(E)ntry Criteria
Signed-off Business Requirements
Functional (FS) & Technical(TS) Specifications available for review
(T)asks
Review FS & TS
Raise questions/discrepancies in Bug Tracker to clarify if anything is ambiguous
Provide detailed Test Estimates
Review project plan and provide feedback to Program Manager
Create master test plan and automation test plan and get them reviewed by appropriate stakeholders
Generate traceability matrix
Collaborate with all test teams to have complete coverage on integration areas
(V)alidation
All items in the bug tracker are in closed state
All review comments of master test plan and automation test plan are closed
(E)xit
Signed-off TS
Signed-off FS
Detailed test estimates are accepted by all stakeholders and incorporated appropriately in the schedule
Signed-off project plan
Signed-off MTP
Signed-off ATP
Build/Coding Phase
(E)ntry Criteria
Signed-off TS
Signed-off FS
Signed-off MTP
Signed-off ATP
Availability of Test Environment details
(T)asks
Write test cases to cover entire functionality and affected areas both from UI and DB
perspective
Get the test cases reviewed by appropriate stakeholders and get sign off
Work with Operations /Support team to get test environments
Validate SQL scripts against test cases in Dev/Unit Test environments
(V)alidation
Test cases review
Sanity check of test environments
Sanity check of SQL scripts and UI automation scripts
(E)xit
Signed-off test cases
SQA environments are available for build deployment
Validated SQL scripts against test cases
Validated UI automation scripts
Stabilization/Testing Phase
(E)ntry Criteria
Signed-off test cases
Validated UI automation scripts
Validated SQL scripts against test cases
SQA environments are available for build deployment
(T)asks
Execute test cases both manual and automation
Publish a daily report with test case execution progress as well as updated bug information
Raise bugs in Bug Tracker appropriately
Track bugs to closure
Collaborate with all test teams to have complete coverage on integration areas
(V)alidation
Execution of test cases is completed
All appropriate bugs are tracked to closure
(E)xit
All the BVTs passed on all the builds/patches
Code Complete Build:
Test should be able to execute 100% of test cases by end of the code freeze build, and 80% of test cases should pass
By start of Test Final build:
All failed test cases of the code-complete build, plus other planned test cases, should pass
All bugs raised during code-complete build execution should be resolved and closed
No S1 & S2 bugs should be in proposed/active/resolved state
Active S3 & S4 bug count should be within a permissible limit (e.g., 5% of total bugs) or moved to future releases
Any known issues arising from technical constraints should have an agreed resolution
Test Final Build:
All planned test cases for final build should pass
No S1 & S2 bugs should be in proposed/active/resolved state
No S3 & S4 bugs should be in proposed/active/resolved state
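The code-complete exit criteria above lend themselves to a mechanical gate check; a minimal sketch, where the parameter names and sample counts are illustrative assumptions:

```python
# Sketch of the code-complete exit gate: 100% of test cases executed,
# at least 80% passing, no open S1/S2 bugs, and active S3/S4 bugs within
# 5% of total bugs. Parameter names and sample counts are illustrative.

def code_complete_gate(executed: int, passed: int, total_cases: int,
                       open_s1_s2: int, active_s3_s4: int, total_bugs: int) -> bool:
    return (executed == total_cases            # 100% of test cases executed
            and passed >= 0.80 * total_cases   # at least 80% passing
            and open_s1_s2 == 0                # no open severity-1/2 bugs
            and active_s3_s4 <= 0.05 * total_bugs)  # S3/S4 within 5% limit

ok = code_complete_gate(executed=200, passed=170, total_cases=200,
                        open_s1_s2=0, active_s3_s4=2, total_bugs=60)
```

Encoding the gate this way makes the validation step of the ETVX pattern repeatable rather than a matter of judgment at each build.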

5.7 Process Baselining


There are many different steps that organizations follow in benchmarking. However, most
baselining processes have these four steps:
1. Develop a clearly defined baseline in your organization: This means that all of the attributes
involved in your baseline are defined. In our example of defects per lines of code, clearly
defining what is meant by defect and a line of code would meet the objective of this step.
2. Identify the organizations you desire to baseline against: Many factors come into this decision, such as: do you want to benchmark within your industry; do you want to benchmark what you believe are leading organizations; do you want to benchmark an organization that
uses the same tools that are used in your organization; and do you want to benchmark against organizations with a similar culture?
3. Compare baseline calculations: Compare how your baseline is calculated versus the baseline calculation in the company you want to benchmark against. Benchmarking is only effective when you benchmark against an organization that has calculated its baseline using approximately the same approach that your organization used.
4. Identify the cause of baseline variance in the organization you benchmarked against: When you find a variance between the baseline calculation in your company and the baseline calculation in the organization you are benchmarking against, you need to identify the cause of the variance. For example, if your organization was producing 20 defects per thousand lines of code, and you benchmarked against an organization that only had 10 defects per thousand lines of code, you would want to identify the cause of the difference. If you cannot identify the cause of the difference, there is little value in benchmarking. Let us assume that the company you benchmarked against had a different process for requirements definition than your organization; for example, assume they use JAD (joint application development) and you did not. Learning this, you may choose to adopt JAD in your organization as a means of reducing your development defect rates.
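The defect-rate comparison in step 4 can be sketched in a few lines, using the example's own figures of 20 versus 10 defects per KLOC (the line counts chosen below are illustrative):

```python
# Step 4 in numbers: compare your defect baseline with the benchmark
# partner's and report the variance to be explained. Figures follow the
# 20 vs. 10 defects/KLOC example; the LOC totals are illustrative.

def defects_per_kloc(defects: int, loc: int) -> float:
    return defects / (loc / 1000)

ours   = defects_per_kloc(defects=800, loc=40_000)   # 20.0 defects/KLOC
theirs = defects_per_kloc(defects=400, loc=40_000)   # 10.0 defects/KLOC
variance = ours - theirs                             # 10.0 defects/KLOC to explain
```

The point of the calculation is only to quantify the gap; the value of benchmarking lies in then finding the process difference (such as JAD) that explains it.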
5.8 Process Assessment and Improvement
Software process improvement (SPI) started in the 1990s from the process-based approach to software development. The main problem of product-centred development was the ignoring of activities that had no visible results, regarding them as unimportant. The process-based approach in software development puts emphasis on organisation development and the reaching of business goals. A shared understanding of software processes creates a feeling of unity among the developers in a company and a continuity in the development, which in turn guarantee higher production capability and quality of the results.
As the first promoter of software process improvement, Watts Humphrey, said, the main problems in software development are not caused by insufficient skills, but by unawareness of how to use the best available methods and an inability to efficiently solve detailed problems related to the process and product. The result of software process improvement will be the following of a detailed description of activities in every situation of development.
Software process improvement begins with an assessment of software processes. Different process models and standards have been created for software process assessment. We employ the two most widely used models: CMMI (Capability Maturity Model Integration) and ISO/IEC 15504 (SPICE, Software Process Improvement and Capability Determination). The processes related to development are assessed on the basis of a benchmark model, i.e., each specific process to be assessed in relation to the development
will be compared to the requirements described in the benchmark model, at the same time taking into account the unique character of the development company or project. Process assessment is project-based: a recently ended project is chosen and the activities related to its development will be evaluated.


Assessment starts with a meeting for developers that introduces software process improvement and assessment, and in the course of which the project and the processes considered by the developers to be the most important for assessment are chosen. Often these tend to be the processes that the developers consider to have been insufficient in several projects. The detailed assessment of the chosen project and processes will be done during an interview with the developers who took part in the project. Software process assessment also ends with a development team meeting, where the capabilities of the assessed processes are described according to the process models, the shortcomings of the processes are identified, and a process improvement plan is put together. Despite the fact that the processes are assessed in a specific project, the development processes will be improved in each of the following development projects. It is only constant improvement that leads to software productivity growth in a company.
5.9 CMMI
A maturity level is a well-defined evolutionary plateau toward achieving a mature software
process. Each maturity level provides a layer in the foundation for continuous process
improvement.
In CMMI models with a staged representation, there are five maturity levels, designated by the numbers 1 through 5:
1. Initial
2. Managed
3. Defined
4. Quantitatively Managed
5. Optimizing
CMMI Staged Representation - Maturity Levels

The following sections give more detail about each maturity level; the next section then lists all the process areas related to these maturity levels.
Maturity Level Details:
Maturity levels consist of a predefined set of process areas. The maturity levels are measured
by the achievement of the specific and generic goals that apply to each predefined set of
process areas. The following sections describe the characteristics of each maturity level in
detail.
Maturity Level 1 - Initial
At maturity level 1, processes are usually ad hoc and chaotic. The organization usually does
not provide a stable environment. Success in these organizations depends on the competence
and heroics of the people in the organization and not on the use of proven processes.
Maturity level 1 organizations often produce products and services that work; however, they
frequently exceed the budget and schedule of their projects.
Maturity level 1 organizations are characterized by a tendency to overcommit, to abandon processes in times of crisis, and an inability to repeat their past successes.
Maturity Level 2 - Managed
At maturity level 2, an organization has achieved all the specific and generic goals of the
maturity level 2 process areas. In other words, the projects of the organization have ensured
that requirements are managed and that processes are planned, performed, measured, and
controlled.
The process discipline reflected by maturity level 2 helps to ensure that existing practices are
retained during times of stress. When these practices are in place, projects are performed and
managed according to their documented plans.
At maturity level 2, requirements, processes, work products, and services are managed. The
status of the work products and the delivery of services are visible to management at defined
points.
Commitments are established among relevant stakeholders and are revised as needed. Work
products are reviewed with stakeholders and are controlled.
The work products and services satisfy their specified requirements, standards, and
objectives.
Maturity Level 3 - Defined
At maturity level 3, an organization has achieved all the specific and generic goals of the
process areas assigned to maturity levels 2 and 3.
At maturity level 3, processes are well characterized and understood, and are described in
standards, procedures, tools, and methods.
A critical distinction between maturity level 2 and maturity level 3 is the scope of standards,
process descriptions, and procedures. At maturity level 2, the standards, process descriptions,
and procedures may be quite different in each specific instance of the process (for example,
on a particular project). At maturity level 3, the standards, process descriptions, and
procedures for a project are tailored from the organization's set of standard processes to suit a
particular project or organizational unit. The organization's set of standard processes includes
the processes addressed at maturity level 2 and maturity level 3. As a result, the processes
that are performed across the organization are consistent except for the differences allowed
by the tailoring guidelines.
Another critical distinction is that at maturity level 3, processes are typically described in
more detail and more rigorously than at maturity level 2. At maturity level 3, processes are
managed more proactively using an understanding of the interrelationships of the process
activities and detailed measures of the process, its work products, and its services.
Maturity Level 4 - Quantitatively Managed
At maturity level 4, an organization has achieved all the specific goals of the process areas
assigned to maturity levels 2, 3, and 4 and the generic goals assigned to maturity levels 2
and 3.
At maturity level 4, subprocesses are selected that significantly contribute to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques.
Quantitative objectives for quality and process performance are established and used as
criteria in managing processes. Quantitative objectives are based on the needs of the
customer, end users, organization, and process implementers. Quality and process
performance are understood in statistical terms and are managed throughout the life of the
processes.
For these processes, detailed measures of process performance are collected and statistically
analyzed. Special causes of process variation are identified and, where appropriate, the
sources of special causes are corrected to prevent future occurrences.
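The statistical analysis described above can be sketched with a simple control chart. As an illustrative example only (CMMI does not prescribe any particular technique, and the function names and data below are hypothetical), an individuals (XmR) chart flags measurements that fall outside the three-sigma control limits as candidate special causes of variation:

```python
def xmr_limits(xs):
    """Centre line and 3-sigma control limits for an individuals (XmR) chart.

    Sigma is estimated from the average moving range divided by the
    d2 constant (1.128 for subgroups of size 2), the standard XmR approach.
    """
    mean = sum(xs) / len(xs)
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128
    return mean, mean - 3 * sigma, mean + 3 * sigma

def special_causes(xs):
    """Indices of measurements outside the control limits."""
    _, lcl, ucl = xmr_limits(xs)
    return [i for i, x in enumerate(xs) if x < lcl or x > ucl]

# Hypothetical weekly defect densities: the spike in week 6 falls above
# the upper control limit and is flagged for causal analysis.
densities = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2, 12.0, 4.1, 3.7, 4.0]
print(special_causes(densities))  # -> [6]
```

Points inside the limits reflect common-cause variation and would not normally trigger corrective action at this level.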
Quality and process performance measures are incorporated into the organization's
measurement repository to support fact-based decision making in the future.
A critical distinction between maturity level 3 and maturity level 4 is the predictability of
process performance. At maturity level 4, the performance of processes is controlled using
statistical and other quantitative techniques, and is quantitatively predictable. At maturity
level 3, processes are only qualitatively predictable.
Maturity Level 5 - Optimizing
At maturity level 5, an organization has achieved all the specific goals of the process areas
assigned to maturity levels 2, 3, 4, and 5 and the generic goals assigned to maturity levels 2
and 3.
Processes are continually improved based on a quantitative understanding of the common
causes of variation inherent in processes.
Maturity level 5 focuses on continually improving process performance through both
incremental and innovative technological improvements.
Quantitative process-improvement objectives for the organization are established,
continually revised to reflect changing business objectives, and used as criteria in managing
process improvement.
The effects of deployed process improvements are measured and evaluated against the
quantitative process-improvement objectives. Both the defined processes and the
organization's set of standard processes are targets of measurable improvement activities.
Optimizing processes that are agile and innovative depends on the participation of an
empowered workforce aligned with the business values and objectives of the organization.
The organization's ability to rapidly respond to changes and opportunities is enhanced by
finding ways to accelerate and share learning. Improvement of the processes is inherently
part of everybody's role, resulting in a cycle of continual improvement.
A critical distinction between maturity level 4 and maturity level 5 is the type of process
variation addressed. At maturity level 4, processes are concerned with addressing special
causes of process variation and providing statistical predictability of the results. Though
processes may produce predictable results, the results may be insufficient to achieve the
established objectives. At maturity level 5, processes are concerned with addressing common
causes of process variation and changing the process (that is, shifting the mean of the process
performance) to improve process performance (while maintaining statistical predictability) to
achieve the established quantitative process-improvement objectives.
Maturity Levels Should Not be Skipped:
Each maturity level provides a necessary foundation for effective implementation of
processes at the next level.
Higher level processes have less chance of success without the discipline provided by lower
levels.
The effect of innovation can be obscured in a noisy process.
Higher maturity level processes may be performed by organizations at lower maturity levels,
with the risk of not being consistently applied in a crisis.
Maturity Levels and Process Areas:
Here is a list of the process areas defined for a software organization at each maturity
level. These process areas may differ from one organization to another. This section gives
only the names of the related process areas; for more detail about these process areas, go
through the CMMI Process Areas chapter.
Level 5 - Optimizing
Focus: Continuous Process Improvement
Key Process Areas: Organizational Innovation and Deployment; Causal Analysis and Resolution
Result: Highest Quality / Lowest Risk

Level 4 - Quantitatively Managed
Focus: Quantitative Management
Key Process Areas: Organizational Process Performance; Quantitative Project Management
Result: Higher Quality / Lower Risk

Level 3 - Defined
Focus: Process Standardization
Key Process Areas: Requirements Development; Technical Solution; Product Integration;
Verification; Validation; Organizational Process Focus; Organizational Process Definition;
Organizational Training; Integrated Project Management (with IPPD extras); Risk Management;
Decision Analysis and Resolution; Integrated Teaming (IPPD only); Organizational
Environment for Integration (IPPD only); Integrated Supplier Management (SS only)
Result: Medium Quality / Medium Risk

Level 2 - Managed
Focus: Basic Project Management
Key Process Areas: Requirements Management; Project Planning; Project Monitoring and
Control; Supplier Agreement Management; Measurement and Analysis; Process and Product
Quality Assurance; Configuration Management
Result: Low Quality / High Risk

Level 1 - Initial
Focus: Process is informal and ad hoc
Key Process Areas: none
Result: Lowest Quality / Highest Risk

5.10 Six Sigma

Six Sigma has the following two key methodologies:
DMAIC: refers to a data-driven quality strategy for improving processes. This methodology
is used to improve an existing business process.
DMADV: refers to a data-driven quality strategy for designing products and processes. This
methodology is used to create new product designs or process designs in such a way that the
result is a more predictable, mature and defect-free performance.
There is one more methodology called DFSS - Design For Six Sigma. DFSS is a data-driven
quality strategy for designing or re-designing a product or service from the ground up.
Sometimes a DMAIC project may turn into a DFSS project because the process in question
requires complete redesign to bring about the desired degree of improvement.
DMAIC Methodology:
This methodology consists of the following five steps:
Define --> Measure --> Analyze --> Improve --> Control
Define: Define the problem or project goals that need to be addressed.
Measure: Measure the problem and the process from which it was produced.
Analyze: Analyze the data and process to determine the root causes of defects and
opportunities.
Improve: Improve the process by finding solutions to fix, diminish, and prevent future
problems.
Control: Implement, control, and sustain the improvement solutions to keep the process
on the new course.
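The Measure phase is commonly summarized with two basic Six Sigma metrics: defects per million opportunities (DPMO) and the corresponding sigma level. As a hedged sketch (the figures below are hypothetical, and the 1.5-sigma shift is the conventional Six Sigma adjustment, not something defined in this text), these can be computed with Python's standard library:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities (DPMO)."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level: the normal quantile of the process yield,
    plus the conventional 1.5-sigma long-term shift. By this convention,
    3.4 DPMO corresponds to a 6-sigma process."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Hypothetical Measure-phase data: 38 defects found across 1,000 units,
# each unit having 10 defect opportunities.
d = dpmo(defects=38, units=1_000, opportunities_per_unit=10)
print(d)                         # -> 3800.0 DPMO
print(round(sigma_level(d), 1))  # roughly a 4.2-sigma process
```

Tracking the sigma level before and after the Improve phase gives a quantitative check that the change actually moved process performance.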

DMAIC is a data-driven quality strategy used to improve processes. It is an integral part of a
Six Sigma initiative, but in general it can be implemented as a standalone quality improvement
procedure or as part of other process improvement initiatives such as lean.
DMAIC is an acronym for the five phases that make up the process:
Define the problem, improvement activity, opportunity for improvement, the project goals,
and customer (internal and external) requirements.
Measure process performance.
Analyze the process to determine root causes of variation and poor performance (defects).
Improve process performance by addressing and eliminating the root causes.
Control the improved process and future process performance.
The DMAIC process easily lends itself to the project approach to quality improvement
encouraged and promoted by Juran.
Excerpted from The Certified Quality Engineer Handbook, Third Edition, ed. Connie M.
Borror, ASQ Quality Press, 2009, pp. 321-332.

In the subsequent section we will give complete details of the DMAIC methodology.


DMADV Methodology:
This methodology consists of the following five steps:
Define --> Measure --> Analyze --> Design --> Verify
Define: Define the problem or project goals that need to be addressed.
Measure: Measure and determine the customers' needs and specifications.
Analyze: Analyze the process to meet the customers' needs.
Design: Design a process that will meet the customers' needs.
Verify: Verify the design performance and its ability to meet customer needs.
DFSS Methodology:
DFSS - Design For Six Sigma is a separate and emerging discipline related to Six Sigma
quality processes. It is a systematic methodology that utilizes tools, training and
measurements to enable us to design products and processes that meet customer expectations
and can be produced at Six Sigma quality levels.
This methodology can have the following five steps:
Define --> Identify --> Design --> Optimize --> Verify
Define: Identify the customer and the project.
Identify: Define what the customers want, or what they do not want.
Design: Design a process that will meet the customers' needs.
Optimize: Determine the process capability and optimize the design.
Verify: Test, verify, and validate the design.
