
IT Project Management
Directorate of Distance & Online Education
MSc-IT Sem-4
Table of Contents
Chapter 1 Software Project and Risk Management
1.0 Introduction
1.1 The state of IT project management
1.2 Context of project management
1.3 Need of project management and project goals
1.4 Project life cycle and IT development
1.5 Some ways of categorizing software projects
1.6 Problems with software projects and Management control
1.7 Requirement specification
1.8 Information and control in organization
1.9 Introduction to Step Wise project planning
1.10 The Software Risk
1.11 Risk Identification and Risk Analysis
1.12 Project Risk Impact Analysis
1.13 Risk Mitigation Guidelines
1.14 Risk Watch Guidelines
Chapter 2 Software Project Initiation & Effort Estimation
2.0 Software Project Initiation
2.1 Cost-benefit analysis
2.2 Cost-Benefit Examination Techniques
2.3 Cash flow forecasting
2.4 Basis of software estimation
2.5 Problems with IT Project Estimation
2.6 Software process and project metrics
2.7 Function Point (FP) and Line of Code (LOC) Metrics
2.8 Constructive Cost Model (COCOMO)
Chapter 3 Software Project Activity Planning & Resource Allocation
3.0 Project management activities
3.1 Work Breakdown and Schedule
3.2 Time-table of a project
3.3 Composition of project team
3.4 Resource Allocation in Software Project Management
3.5 Developing Project Schedule
3.6 Project Management Software Tool
3.7 RISK PLAN
3.8 Developing the Project Budget
3.9 Quality planning
3.10 Quality Management
3.11 History of Quality Programs in the Software Industry
Chapter 4 Managing Change and organizing team
4.0 Software configuration management
4.1 Needs for personnel
4.2 Personnel management
4.3 Co-operation with upper management in planning a project
4.4 Release of a software
4.5 Project Management Tips
Chapter 5 Software Quality
5.1 Introduction to Software Quality
5.2 Software Quality Measurement Concepts Overview
5.3 Techniques to enhance s/w quality
5.4 Capability Maturity Model
5.5 CMMI Process Areas and Capability Levels
5.6 Six Sigma and quality management solutions
5.7 Process Management vs. Project Management
Chapter 6 Overview of Management Information System and Decision Making
6.0 Introduction to MIS
6.1 MIS Definition
6.2 ROLE OF THE MANAGEMENT INFORMATION SYSTEM
6.3 IMPACT OF THE MANAGEMENT INFORMATION SYSTEM
6.4 MANAGEMENT INFORMATION SYSTEM AND COMPUTER
6.5 Decision Making Systems
6.6 Methods for Deciding Decision Alternatives
6.7 BEHAVIOURAL CONCEPTS IN DECISION MAKING
6.8 ORGANISATION DECISION MAKING
6.9 MIS AND DECISION MAKING CONCEPTS
6.10 Bias in information
6.11 Internal versus external information
Chapter 1 Software Project and Risk Management

1.0 Introduction:
Defining what a project is: You have been handed a project by your organization. Your job now is to
effectively manage the project to completion. For your project to be successful, you need to understand what
exactly constitutes a project, and which criteria are used to determine whether a project is successful or not.

A project has the following characteristics:

• A start and end date: projects have dates that specify when project activities start and when they end.
• Resources: time, money, people and equipment used by the project. For example, to produce a brochure you will need a team (designers, copywriters, creative directors, etc.), equipment (computers, printers, paper, delivery trucks, etc.) and money to pay the salaries/fees, buy equipment, and so on.
• An outcome: a project has a specific outcome such as a new highway, a satellite, a new office building, a new piece of software, and so on.
Project success criteria: Whatever its size, a project’s success is based on three main criteria, represented as the three corners of a triangle: outcome, time and budget.

Figure: Success Triangle of a Project (corners: Outcome, Time, Budget)

Your project will therefore be deemed successful if it:

• Delivers the outcome with an agreed-upon quality.
• Does not overrun its end date.
• Remains within budget (cost of resources).

Note, however, that outcome, time and budget are interrelated, and during a project you may need to make trade-offs between them. For example, if you want to get something done more quickly, you may have to put more money into your project for additional resources.

1.1 The state of IT project management:

The problem with project management, particularly with IT projects, is that it does not have a particularly good reputation. Complaints about over-budget, behind-schedule and under-performing, if not outright cancelled, projects are rife in both the public and private sectors, and add grist to the media mill. This is particularly so with IT projects. From a project management perspective, the problem with IT projects is that they are too visible. The argument that there should be more appreciation for IT because it enables virtually every element of commercial and bureaucratic activity is also its biggest downside: when IT fails, everybody knows about it.
According to a 2001 OECD report, as little as 28 percent of IT projects undertaken in the US were
successful in relation to budget, functionality and timeliness. An equivalent number of IT projects
were cancelled. The OECD recognized that problems with IT projects represented a significant
economic, political, efficiency and effectiveness risk to government, and that IT implementations
that do not achieve their objectives put at risk e-government initiatives. More recent reports have
substantiated this view.
Why are organizations using project management?

Today’s highly competitive business environment forces organizations to make high-quality products at a
lower cost and in a shorter duration. Organizations therefore are increasingly using project management
because it allows you to plan and organize resources to achieve a specified outcome within a given
timeframe. The techniques of project management also help you manage and anticipate risks in a
structured manner. Surveys of organizations using project management have shown that project
management allows for better utilization of resources, shorter development times, reduced costs,
interdepartmental cooperation that builds synergies across the organization, and a better focus on results
and quality.
Software project management is the process of planning, organizing, staffing, monitoring,
controlling and leading a software project. Every software project must have a manager who leads
the development team and is the interface with the initiators, suppliers and senior management.

The project manager:

1. Produces the Software Project Management Plan (SPMP).
2. Defines the organizational roles and allocates staff to them.
3. Controls the project by informing staff of their part in the plan.
4. Leads the project by making the major decisions and by motivating staff to perform well.
5. Monitors the project by measuring progress.
6. Reports progress to initiators and senior managers.

There are three main points that are most important to a successful project:

1. A project must meet customer requirements.
2. A project must be under budget.
3. A project must be on time.

1.2 Context of project management:

The biggest difference between software and the products of other kinds of projects is that software is not physical. Software consists of ideas, designs, instructions and formulae. Creating software is almost entirely a cognitive activity. The artifacts we can see and measure, such as code files (how strange to use a collection of arbitrary symbols visible through a computer as an example of something relatively real), stand in for the real stuff, not the other way around. Still, software only matters when it appears as something real, even as barely real as colored squiggles on a computer screen. Maintaining that connection from thought stuff to real stuff is one of software’s peculiar challenges.

Project initiating process: Your project has been selected, and you have been appointed as the Project Manager. You should now use the Project Charter or commercial contract to set the wheels in motion. At a minimum, your Project Charter should:

• Designate you as the Project Manager with the authority to use resources to bring the project to completion -- this is formally done by the project sponsor/main stakeholders.
• Provide a short description of the result, outcome, product or services to be produced by the project.
• Refer to the commercial contract as the basis for initiating the project (if there is such a formal contract).

After having reviewed the Project Charter, do the following:

• Ask the Project Sponsor and main stakeholders to share with you any emails, letters, memos, project feasibility studies, meeting minutes, requirements or other documents related to the project.
• If a similar kind of project has already been completed, get your hands on all the documentation that was produced for that project, and set up a meeting with its project manager to ask for advice.
The SOW (Statement of Work)
The next thing that you want to do is start working on your Statement of Work (SOW), a crucial document that you will constantly update and use as a baseline for your project. Depending on the size and complexity of the project, and your knowledge of the subject matter, you will need to organize meetings with the stakeholders in order to refine the SOW and get it approved. A well-thought-out SOW generally contains the following sections:
An Executive Summary: Provides a short overview of the purpose of the project, its background, its scope and sometimes a high-level project plan.

Objectives:
The majority of project management literature recommends SMART objectives, that is, objectives that are:

• Specific: your objectives must be clear enough that anyone who reads them can interpret them without ambiguity.
• Measurable: you should be able to measure whether you are meeting the objectives or not.
• Achievable: do not attempt more than you can realistically accomplish.
• Realistic: do you have the resources to achieve your objective?
• Time-specific: specify the date by which an objective will be attained.
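As an illustration, the documentable SMART criteria can be checked mechanically. The sketch below is illustrative only: the `Objective` fields and the `smart_gaps` helper are assumptions, not standard tooling, and Achievable/Realistic need human judgement, so they are not checked.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Objective:
    """One project objective, with fields mirroring the SMART criteria."""
    description: str             # Specific: unambiguous statement of the goal
    metric: Optional[str]        # Measurable: how progress will be measured
    target_date: Optional[date]  # Time-specific: when it should be attained

def smart_gaps(obj: Objective) -> list[str]:
    """Return the SMART criteria this objective fails to document."""
    gaps = []
    if not obj.description.strip():
        gaps.append("Specific")
    if not obj.metric:
        gaps.append("Measurable")
    if obj.target_date is None:
        gaps.append("Time-specific")
    return gaps

vague = Objective("Improve the website", metric=None, target_date=None)
good = Objective("Cut page-load time to under 2 s",
                 metric="median load time", target_date=date(2025, 6, 30))
print(smart_gaps(vague))  # ['Measurable', 'Time-specific']
print(smart_gaps(good))   # []
```

A check like this catches objectives that cannot be measured or have no deadline before they reach the SOW.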

1.3 Need of project management and project goals:


A project typically goes through four phases: initiation, planning, execution and closure.
The role of the project manager in project management is one of great responsibility. It is the project manager’s job to direct and supervise the project from beginning to end. Here are some of the project manager’s other roles:

1. The project manager must define the project, reduce it to a set of manageable tasks, obtain appropriate and necessary resources, and build a team or teams to perform the project work.
2. The project manager must set the final goal for the project and must motivate the team to complete the project on time.
3. The project manager must have technical skills, covering areas such as financial planning and contract management, and must see that creative thinking and problem-solving techniques are promoted.
4. No project ever goes 100% as planned, so project managers must learn to adapt to change.
There are many things that can go wrong with project management. These are commonly called barriers. Here are some possible barriers:

1. Poor communication
   o Many times a project fails because the project team does not know exactly what needs to be done or what has already been done.
2. Disagreement
   o The project must meet all elements in a contract.
   o The customer and project manager must agree on numerous elements.
3. Failure to comply with standards and regulations.
4. Inclement weather.
5. Union strikes.
6. Personality conflicts.
7. Poor management.
8. Poorly defined project goals.

1.4 Project life cycle and IT development:


There are various software development approaches defined and designed which are employed during the development of software; these approaches are also referred to as "Software Development Process Models" (e.g. the waterfall model, incremental model, V-model, iterative model, etc.). Each process model follows a particular life cycle in order to ensure success in the process of software development.
Software life cycle models describe the phases of the software cycle and the order in which those phases are executed. Each phase produces deliverables required by the next phase in the life cycle: requirements are translated into a design; code is produced according to the design, which is called the development phase; and after coding and development, testing verifies the deliverables of the implementation phase against the requirements.

Every software development life cycle model has the following six phases:

1. Requirement gathering and analysis
2. Design
3. Implementation or coding
4. Testing
5. Deployment
6. Maintenance
1) Requirement gathering and analysis: Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine requirements such as: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during a requirements gathering phase. After requirement gathering, these requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied.
Finally, a Requirement Specification document is created which serves as a guideline for the next phase of the model.

2) Design: In this phase the system and software design is prepared from the requirement
specifications which were studied in the first phase. System Design helps in specifying hardware and
system requirements and also helps in defining overall system architecture. The system design
specifications serve as input for the next phase of the model.

3) Implementation / Coding: On receiving the system design documents, the work is divided into modules/units and actual coding is started. Since the code is produced in this phase, it is the main focus for the developer. This is the longest phase of the software development life cycle.
4) Testing: After the code is developed, it is tested against the requirements to make sure that the product actually solves the needs gathered during the requirements phase. During this phase, unit testing, integration testing, system testing and acceptance testing are done.

5) Deployment: After successful testing the product is delivered / deployed to the customer for their
use.

6) Maintenance: Once customers start using the developed system, actual problems come up and need to be solved from time to time. This process of caring for the developed product is known as maintenance.
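The defining feature of these phases is that each one’s deliverable becomes the next one’s input. That hand-off can be sketched as a simple pipeline. This is an illustrative sketch only: the function names and stand-in deliverable strings are made up, not a standard API.

```python
# Each SDLC phase consumes the previous phase's deliverable and
# produces its own, so the model can be expressed as a chain of steps.

def gather_requirements(idea):
    return f"SRS for '{idea}'"            # Requirement Specification document

def design(srs):
    return f"design based on [{srs}]"     # system/software design specs

def implement(design_doc):
    return f"code built from [{design_doc}]"

def run_tests(code):
    return f"tested [{code}]"             # verified against the requirements

def deploy(tested_code):
    return f"deployed [{tested_code}]"

PHASES = [gather_requirements, design, implement, run_tests, deploy]

def run_sdlc(idea):
    """Feed each phase's deliverable into the next phase."""
    deliverable = idea
    for phase in PHASES:
        deliverable = phase(deliverable)
    return deliverable

print(run_sdlc("payroll system"))
```

The nesting of the final string mirrors why a defect in an early deliverable propagates into every later one.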

1.5 Some ways of categorizing software projects:


Desktop project management software gives individual users the most responsive and highly graphical interface. Desktop applications normally store their data in a local file, although some allow collaboration between users or store their data in a central database. A simple file-based project plan can be shared between users if it is stored on a networked drive and only one user accesses it at any given time.
Web-based project management software can be accessed through an intranet or extranet using a web browser and has all the usual advantages and disadvantages of web applications:

• Can be accessed from any type of computer without installing software
• Ease of access control
• Provides multi-user facilities
• Only one software version and installation needs to be maintained
• Typically slower to respond than desktop applications
• Limited graphical capability compared to desktop applications
• Project information is not available offline
Single-user project management systems work on the basis that only one person will need to edit the project plan at any time. This may be the case in small organizations, or where only a few people are involved in project planning. Desktop applications usually come into this category.
Collaborative project management systems are designed to support multiple users modifying different sections of the plan at once, e.g. updating the areas they are personally responsible for so that those estimates get integrated into the overall plan. Web-based tools often fall into this category, but they can only be used when the user is online. Some client-server-based software tools replicate project and task information through a central server when users connect to the network.
Integrated systems combine project management or project planning with many other aspects of company operations: e.g. bug-tracking issues can be assigned to each project, the list of project customers becomes a customer relationship management module, and each person on the project plan has their own task lists, calendars, messaging, etc. associated with their projects.

1.6 Problems with software projects and Management control:

Here are seven project management problems that designers and developers often face, as well as
how to deal with them when they arise.
1. Client Gives You Vague, Ever-changing Requirements:
Fickle clients can be a huge hassle. If a client doesn’t know what they want until a certain stage is
complete, then schedule those decision points into the project as milestones. It is important to have a
clear path mapped out from start to finish because it forces the client to be specific with their
requirements, as well as keeping the project on track.
Be clear at the outset about what your task is going to be on the project and how much leeway is
available. If you will need to be compensated for big revisions or changes in direction, then set a
clear outline about the number of adjustments you can make before you need to charge more. If you
can, quantify these adjustments with a number; it makes it much easier to keep track of things.
2. Your Client Is Slow with Communication
People are busy, but it’s tough for you to move forward on a project if you can never get answers from the person you’re working with.
The good news is that you will drastically increase your response rate if you do a little bit of work ahead of time. Instead of waiting for the back-and-forth discourse to finally take place, simply start moving in the direction that you think is best and then seek verification. This strategy makes it easy for your client to quickly say yes (or no).
Here is an example:
Hi Mark,
Last time we spoke, you mentioned that we needed to make a decision on task X. I went ahead and
started doing Y since that sounded best based on our previous discussion. If you’re happy with that, I
can move forward and we can review the progress as scheduled on Friday.
Sound good?
- John
The beauty of this framework is that it shifts the client’s mindset from "What decision am I going to make?" to "Should I say yes or no?" Saying yes or no is much easier than thinking up a new solution (which, as the hired professional, is your job).
Additionally, you will get a response much faster because there is now a time constraint on the work. If they like what you’re doing, then they will give you the go-ahead. If they don’t, then they know that they need to get back to you right away because, otherwise, things will be moving in the wrong direction.
However, it’s very important to use sound judgment. Obviously, you won’t be able to work ahead and then ask for approval on all aspects of the project, especially those that will cost a lot of time and resources to update should the client say no. That said, you’ll be surprised how much quicker things get done by making it easy for your clients to say, "Yes."
3. The Project Doesn’t Start On Time
Maybe you had a slow month, but now you’re swamped. You know you need to take on the work when you can get it, but now you’re worried that you won’t be able to start all of your projects on time as you promised. Or perhaps your client says you’re a top priority, but tomorrow a different project becomes more important.
If the holdup is on your end, then it’s important that you do something to jump-start the project, even if it’s in a really small way. Give the client a call to discuss their expectations and set a more realistic timeframe for the first milestone. This could take as little as a few minutes, but it makes the client feel like things have started. However, beware of doing this more than once. That’s known as stringing the client along; clients don’t take that too well, and for good reason.
If the holdup is on their end, then you need to communicate very clearly how that alters things moving forward. Be sure to let them know exactly how this change affects the completion dates of future milestones, and check the revised schedule against commitments to other projects.
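The knock-on effect of a delay on later milestones can be worked out mechanically: add the slippage to every remaining milestone date and report the new dates to the client. A minimal sketch, where the milestone names and dates are made-up examples:

```python
from datetime import date, timedelta

# Remaining milestones as (name, planned date) pairs -- illustrative data.
milestones = [
    ("First draft",    date(2025, 3, 1)),
    ("Client review",  date(2025, 3, 15)),
    ("Final delivery", date(2025, 4, 1)),
]

def shift_schedule(milestones, delay_days):
    """Push every remaining milestone out by the delay, in days."""
    shift = timedelta(days=delay_days)
    return [(name, planned + shift) for name, planned in milestones]

# A 10-day client-side holdup moves every later date by 10 days.
for name, new_date in shift_schedule(milestones, 10):
    print(f"{name}: now due {new_date.isoformat()}")
```

In practice dependencies between tasks make the arithmetic less uniform, but the principle of restating every affected date, rather than just the next one, is the same.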
4. You Try to Manage Every Project the Same Way
No two projects ever have the same circumstances, requirements, and needs. Situations, people, and goals change over time.
Instead of squeezing every project into the same template, spend some time crafting milestones
specific to the needs of each project. Every job requires specific milestones that meet the schedules
of all parties involved. Resist using the standard "2 weeks until X" type of thinking.
To put it simply, your schedule changes all the time, right? That means the way you plan your
projects needs to change as well.
5. The Client Doesn’t Like What You Created
If this happens often, then there is a communication issue that needs to be addressed. Make sure you
understand not just the technical requirements of a project, but also the underlying rationale of your
clients. Why did they decide to do this in the first place? What are they hoping your work will enable
them to do when all is said and done? How do they see your project fitting in with their overall
strategic vision?
Good project managers create a shared vision between all parties. It’s your responsibility to understand the direction of your particular project as well as the overall strategy of your client, and then to make sure those two items match up.
6. Your Point of Contact Doesn’t Seem to Care About Your Project
Working on a project that isn’t high on a client’s priority list can be frustrating. In some cases, the person responsible for communicating with you has little to no interest in your project. The completed product will have no direct effect on their job; they are hard to ask questions of, even harder to get answers from, and they provide minimal guidance.
This issue is best solved ahead of time.
When screening potential clients, do your best to find out if the contact person has a vested interest in the project. Pay attention to their awareness of potential problems or risks you could run into, their level of urgency when scheduling the project in their calendar, and their desire to communicate with you quickly and consistently from the beginning. If they brush these issues to the side, then it is worth your time to talk with someone else and establish a second point of contact before deciding whether to take on the project or to avoid it altogether.

7. Too Much Time is Spent Solving Problems After Projects Are "Live"
There are bound to be a few bugs here and there, but this is a classic problem caused by focusing too
much on production, and not enough on testing. If this continually becomes an issue, then there are
two possible solutions.
First, schedule in more time to test your projects from the start. Double your typical testing time if
needed. Yes, it will stretch your schedule further, but in the long run, it will save you from the
countless little problems that prevent your days from being productive.
Second, if your ongoing issues are a result of clients constantly wanting you to tweak something here and there, then you need to be clearer about what you do and don’t provide with your services. When you set guidelines with a client at the beginning of a project, state very clearly that your work ends after the final product is created and handed off, and that additional service after delivery will cost extra.

1.7 Requirement specification:


A software requirements specification describes the essential behavior of a software product from a
user's point of view.
The purpose of the SRS is to:
1. Establish the basis for agreement between the customers and the suppliers on what the
software product is to do. The complete description of the functions to be performed by the
software specified in the SRS will assist the potential user to determine if the software specified
meets their needs or how the software must be modified to meet their needs
2. Provide a basis for developing the software design. The SRS is the most important
document of reference in developing a design
3. Reduce the development effort. The preparation of the SRS forces the various concerned
groups in the customer's organization to thoroughly consider all of the requirements before design
work begins. A complete and correct SRS reduces effort wasted on redesign, recoding and retesting.
Careful review of the requirements in the SRS can reveal omissions, misunderstandings and
inconsistencies early in the development cycle when these problems are easier to correct
4. Provide a basis for estimating costs and schedules. The description of the product to be
developed as given in the SRS is a realistic basis for estimating project costs and can be used to
obtain approval for bids or price estimates
5. Provide a baseline for validation and verification. Organisations can develop their test
documentation much more productively from a good SRS. As a part of the development contract, the
SRS provides a baseline against which compliance can be measured
6. Facilitate transfer. The SRS makes it easier to transfer the software product to new users or
new machines. Customers thus find it easier to transfer the software to other parts of their
organization and suppliers find it easier to transfer it to new customers
7. Serve as a basis for enhancement. Because the SRS discusses the product but not the
project that developed it, the SRS serves as a basis for later enhancement of the finished product. The
SRS may need to be altered, but it does provide a foundation for continued product evaluation.

1.8 Information and control in organization:


An information system has been defined in terms of two perspectives: one relating to its function, the other relating to its structure. From a functional perspective, an information system is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for supporting inference making. From a structural perspective, an information system consists of a collection of people, processes, data, models, technology and partly formalized language, forming a cohesive structure which serves some organizational purpose or function.
The functional definition has its merits in focusing on what actual users, from a conceptual point of view, do with the information system while using it: they communicate with experts to solve particular problems. The structural definition makes clear that information systems are socio-technical systems, i.e., systems consisting of humans, behavior rules, and conceptual and technical artifacts.
An information system can be defined technically as a set of interrelated components that
collect (or retrieve), process, store, and distribute information to support decision making and control
in an organization. In addition to supporting decision making, coordination, and control, information
systems may also help managers and workers analyze problems, visualize complex subjects, and
create new products.
Three activities in an information system produce the information that organizations need to make decisions, control operations, analyze problems, and create new products or services. These activities are input, processing, and output. Input captures or collects raw data from within the organization or from its external environment. Processing converts this raw input into a more meaningful form. Output transfers the processed information to the people who will use it or to the activities for which it will be used. Information systems also require feedback, which is output that is returned to appropriate members of the organization to help them evaluate or correct the input stage.

Figure: Functions of an information system
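The input, processing, output and feedback activities described above can be sketched as a small pipeline. This is an illustrative example with made-up data; the stage names mirror the text, not any standard API.

```python
# Minimal sketch of the input -> processing -> output cycle with feedback.

def input_stage(raw_records):
    """Capture raw data from the organization or its environment."""
    return [r.strip().lower() for r in raw_records if r.strip()]

def processing_stage(captured):
    """Convert raw input into a more meaningful form: a count per item."""
    summary = {}
    for item in captured:
        summary[item] = summary.get(item, 0) + 1
    return summary

def output_stage(summary):
    """Transfer processed information to the people who will use it."""
    return [f"{name}: {count}" for name, count in sorted(summary.items())]

def feedback(summary):
    """Return output to the organization to help correct the input stage,
    e.g. flag items seen only once as possible data-entry errors."""
    return [name for name, count in summary.items() if count == 1]

raw = ["Order-A ", "order-a", "ORDER-B", "", "order-c"]
summary = processing_stage(input_stage(raw))
print(output_stage(summary))   # ['order-a: 2', 'order-b: 1', 'order-c: 1']
print(feedback(summary))       # ['order-b', 'order-c']
```

Note how the feedback list feeds back into the input stage: the flagged items prompt the organization to check and correct its raw data.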

Components of Information Systems

• People resources: end users and IS specialists (system analysts, programmers, data administrators, etc.).
• Hardware: physical computer equipment and associated devices, machines and media.
• Software: programs and procedures.
• Data: data and knowledge bases.
• Networks: communications media and network support.

People Resources
• End users: people who use an information system or the information it produces. They can be accountants, salespersons, engineers, clerks, customers, or managers. Most of us are information system end users.
• IS specialists: people who actually develop and operate information systems. They include systems analysts, programmers, testers, computer operators, and other managerial, technical, and clerical IS personnel. Briefly, systems analysts design information systems based on the information requirements of end users, programmers prepare computer programs based on the specifications of systems analysts, and computer operators operate large computer systems.
Hardware Resources
• Machines: computers and other equipment, along with all data media, the objects on which data is recorded and saved.
• Computer systems: consist of a variety of interconnected peripheral devices. Examples are microcomputer systems, midrange computer systems, and large computer systems.
Software Resources: Software resources include all sets of information processing instructions. This generic concept of software includes not only the programs which direct and control computers, but also the sets of information processing instructions people follow, called procedures. Software resources include:
• System software, such as an operating system.
• Application software: programs that direct processing for a particular use of computers by end users.
• Procedures: operating instructions for the people who will use an information system. Examples are instructions for filling out a paper form or using a particular software package.
Data Resources
• Data resources include data (the raw material of information systems) and databases. Data can take many forms, including traditional alphanumeric data, composed of numbers and alphabetical and other characters that describe business transactions and other events and entities.
• Text data, consisting of sentences and paragraphs used in written communications; image data, such as graphic shapes and figures; and audio data, the human voice and other sounds, are also important forms of data.
Data resources must meet the following criteria:
• Comprehensiveness: all the data about the subject are actually present in the database.
• Non-redundancy: each individual piece of data exists only once in the database.
• Appropriate structure: the data are stored in such a way as to minimize the cost of expected processing and storage.
Network Resources: Telecommunications networks consist of computers,
communications processors, and other devices interconnected by communications media
and controlled by communications software. The concept of Network Resources
emphasizes that communications networks are a fundamental resource component of all
information systems. The network resources are:
• Communications media: such as twisted pair wire, coaxial cable, fiber-optic cable,
microwave systems, and communication satellite systems.
• Network support: This generic category includes all of the people, hardware,
software, and data resources that directly support the operation and use of a
communications network. Examples include communications control software such
as network operating systems and Internet packages.
Figure: Components of Information System

1.9 Introduction to Step Wise Project Planning:


A project lifecycle typically has the following processes as defined by the Project
Management Institute (PMI):

This book assumes that your project has already been selected, and that a Project Charter has
been produced. A Project Charter is generally a document that provides a short description of
the project and designates the Project Manager. Sometimes a commercial contract also leads
to the initiation of project especially in firms specialized in providing professional/consulting
services.
Initiating

During the initiating process, you will refine the project goals, review the expectations of all
stakeholders, and determine assumptions and risks in the project. You will also start project
team selection -- if the project team has been imposed, then you need to familiarize yourself
with their skill set and understand their roles in the project. At the end of this phase you will
produce a Statement of Work (SOW), which is a document that provides a description of the
services or products that need to be produced by the project.
Planning

During the planning process, you will detail the project in terms of its outcome, team
members‘ roles and responsibilities, schedules, resources, scope and costs. At the end of this
phase, you will produce a project management plan, which is a document that details how
your project will be executed, monitored and controlled, and closed. Such a document also
contains a refined project scope, and is used as the project baseline.
Executing:

During the executing process, you apply your project management plan. In other words, you direct
direct your team so that it performs the work to produce the deliverables as detailed in the
plan. The executing process also involves implementing approved changes and corrective
actions.
Controlling and monitoring

During the controlling and monitoring process, you supervise project activities to ensure that
they do not deviate from the initial plan and scope. When this happens, you will use a change
control procedure to approve and reject change requests, and update the project plan/scope
accordingly. The controlling and monitoring phase also involves getting approval and signoff
for project deliverables.
Closing:

During the closing process, you formally accept the deliverables and shut down the project or
its phases. You will also review the project and its results with your team and other
stakeholders of the project. At the end of the project you will produce a formal project closure
document and a project evaluation report.

1.10 The Software Risk:


The proactive management of risks throughout the software development lifecycle is
important for project success. The software industry is fraught with failed and delayed
projects, most of which far exceed their original budget. The Standish Group reported that
only 28 percent of software projects are completed on time and on budget. Over 23 percent of
software projects are cancelled before they ever get completed, and 49 percent of projects cost
145 percent of their original estimates. In hindsight, many of these companies indicated that
their problems could have been avoided or strongly reduced if there had been an explicit early
warning of the high-risk elements of the project. Many projects fail either because simple
problems were reported too late or because the wrong problem was addressed. The risk
management process can be broken down into two interrelated phases, risk assessment and
risk control. These phases are further broken down: risk assessment involves risk
identification, risk analysis, and risk prioritization; risk control involves risk planning, risk
mitigation, and risk monitoring (Boehm, 1989). Each of these will be discussed in this section.
It is essential that risk management be done iteratively, throughout the project, as a part of the
team‘s project management routine.
1.11 Risk Identification and Risk Analysis:
In the risk identification step, the team systematically enumerates as many project risks as possible to
make them explicit before they become problems. There are several ways to look at the kinds of
software project risks, as shown in Table 1. It is helpful to understand the different types of risk so that
a team can explore the possibilities of each of them. Each of these types of risk is described below.

Generic Risks                          Product-Specific Risks

Project Risks      Product Risks      Business Risks

Factors to consider: people, size, process, technology, tools, organizational,
managerial, customer, estimation, sales, support

Table 1: General Categories of Risk

Generic risks are potential threats to every software project. Some examples of generic risks are
changing requirements, losing key personnel, or bankruptcy of the software company or of the
customer. It is advisable for a development organization to keep a checklist of these types of risks.
Teams can then assess the extent to which these risks are a factor for their project based upon the
known set of programmers, managers, customers, and policies. Product-specific risks can be
distinguished from generic risks because they can only be identified by those with a clear
understanding of the technology, the people, and the environment of the specific product. An example
of a product-specific risk is the availability of a complex network necessary for testing. Generic and
product-specific risks can be further divided into project, product, and business risks. Project risks are
those that affect the project schedule or the resources (personnel or budgets) dedicated to the project.
Product risks are those that affect the quality or performance of the software being developed. Finally,
business risks are those that threaten the viability of the software, such as building an excellent
product no one wants or building a product that no longer fits into the overall business strategy of the
company.
There are some specific factors to consider when examining project, product, and business risks.
People risks are associated with the availability, skill level, and retention of the people on the
development team.
Size risks are associated with the magnitude of the product and the product team. Larger products
are generally more complex with more interactions. Larger teams are harder to coordinate.
Process risks are related to whether the team uses a defined, appropriate software development
process and to whether the team members actually follow the process.
Technology risks are derived from the software or hardware technologies that are being used as
part of the system being developed. Using new or emerging or complex technology increases the
overall risk.
Tools risks, similar to technology risks, relate to the use, availability, and reliability of support
software used by the development team, such as development environments and other Computer-
Aided Software Engineering (CASE) tools.
Organizational and managerial risks are derived from the environment where the software is
being developed. Some examples are the financial stability of the company and threats of company
reorganization and the potential of the resultant loss of support by management due to a change in
focus or a change in people.
Customer risks are derived from changes to the customer requirements, customers‘ lack of
understanding of the impact of these changes, the process of managing these requirements changes,
and the ability of the customer to communicate effectively with the team and to accurately convey
the attributes of the desired product.
Estimation risks are derived from inaccuracies in estimating the resources and the time required to
build the product properly.
Sales and support risks involve the chances that the team builds a product that the sales force does
not understand how to sell or that is difficult to correct, adapt, or enhance.
Risk Analysis:
After risks have been identified and enumerated, the next step is risk analysis. Through risk
analysis, we transform the risks that were identified into decision-making information. In turn, each
risk is considered and a judgment made about the probability and the seriousness of the risk. For
each risk, the team must do the following:
• Assess the probability of a loss occurring. Some risks are very likely to occur; others are very
unlikely. Establish and utilize a scale that reflects the perceived likelihood of a risk. Depending
upon the degree of detail desired and/or possible, the scale can be numeric, based on a percentage
scale, such as "10 percent likely to lose a key team member," or based on categories, such as:
very improbable, improbable, probable, or frequent. If a categorical assignment is used, the team
should establish a set numerical probability for each qualitative value (e.g., very improbable = 10
percent, improbable = 25 percent).
• Assess the impact of the loss if the loss were to occur. Delineate the consequences of the risk,
and estimate the impact of the risk on the project and the product. Similar to the probability
discussion above, the team can choose to assign numerical monetary values to the magnitude of
loss, such as $10,000 for a two-week delay in schedule. Alternately, categories may be used and
assigned values, such as 1 = negligible, 2 = marginal, 3 = critical, or 4 = catastrophic.

Determining the probability and the magnitude of the risk can be difficult and can seem to be arbitrarily chosen.
One means of determining the risk probability is for each team member to estimate each of these values
individually. Then, the input of individual team members is collected in a round robin fashion and reported to the
group. Sometimes the collection and reporting is done anonymously. Team members debate the logic behind the
submitted estimates. The individuals then re-estimate and iterate on the estimate until assessment of risk
probability and impact begins to converge. This means of converging on the probability and estimate is called
the Delphi Technique (Gupta and Clarke, 1996). The Delphi Technique is a group consensus method that is
often used when the factors under consideration are subjective.
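Once the Delphi estimates of probability and impact converge, the two values are commonly combined into a single "risk exposure" figure (probability of loss multiplied by magnitude of loss) and used to rank the risks. A minimal sketch of this calculation, using hypothetical risks and dollar values chosen for illustration only:

```python
# Sketch: ranking risks by exposure (probability x magnitude of loss).
# The risk names, probabilities, and loss figures are hypothetical.

def risk_exposure(probability, impact):
    """Risk exposure = probability of the loss x magnitude of the loss."""
    return probability * impact

risks = [
    # (risk description, probability of occurring, estimated loss in dollars)
    ("Key team member leaves",   0.10, 40_000),
    ("Requirements change late", 0.25, 25_000),
    ("Test network unavailable", 0.50,  8_000),
]

# Sort from highest to lowest exposure to build a prioritized risk list.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)

for name, p, loss in ranked:
    print(f"{name}: exposure = ${risk_exposure(p, loss):,.0f}")
```

A high-probability, low-impact risk can rank above a low-probability, high-impact one (or vice versa), which is exactly why the combined exposure figure, rather than either value alone, is used for prioritization.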

1.12 Project Risk Impact Analysis:


Project Risk Impact Analysis is a risk management database designed to help the project team identify,
prioritize, and communicate project risk. The database is an Excel spreadsheet with detailed project risk
information (riskreport.xls). Detailed instructions for completing the companion spreadsheet are contained in this
section of the document.
Risk impact analysis is a plan for identifying, quantifying, analyzing, mitigating, and reporting project risks. This
section includes descriptions of risks and corresponding mitigation actions that have been identified. It guides the
project-wide risk reduction efforts. It is applicable to all projects and its requirements affect all functions of a
project management office.
The questions "How Much?" and "How Long?" must be answered by most organizations before specific project
risk information is known. As a result, project estimates inherently include uncertainties, assumptions, and risks.
Successful project planning and implementation requires risk management, change management, and meaningful
contingency planning.
Risk management helps to align the expectations of the project stakeholders and the Project Manager regarding
project process, issue resolution, and project outcome. Clients often have risks or constraints involuntarily imposed
upon them; they are often taking project risks they do not even know they are taking, due to poor articulation of the
risks and their possible impact on the project. As a result, clients are often surprised by negative consequences and
unmet expectations. It is the Project Manager‘s job to identify and to articulate the potential risks and their possible
impacts to the client. The clients then assume the risks on a voluntary basis and can be actively involved in
assisting with risk management.
Overview of Risk Reporting

To provide visibility of risks and progress in mitigating them, the following reports should be distributed on a
regular basis as part of the normal project status reporting system:

Table 2 - Risk Reporting Sections

TITLE                  LEVEL                    DESCRIPTION
Risk Watch List        Organization & Project   Lists risks to facilitate monitoring risks
                                                and initiating risk responses.
Risk Mitigation Plan   Organization & Project   Lists avoidance/mitigation actions to take
                                                if and when risks occur.
Risk Profile           Project                  Displays planned, actual and projected
                                                progress in reducing risks.

Management Contribution to Risk Management

The keys to effective risk resolution are early identification, communication, and risk management. All
issues and risks must be identified and recorded in one place for easy reference by every project team
member. Every user and team member must be aware of outstanding issues and accept ownership for
their existence (and possible resolution). Finally, the Project Manager must manage and control the issues
through an established documented procedure.

The Project Manager must use a structured approach to resolving issues and problems. By clearly defining the
underlying problem (root cause), by identifying alternative solutions, and by objectively evaluating the
consequences, the Project Manager can minimize adverse effects on the project. Three (3) major issue types are
relevant to any major project:
 Business,
 Technical, and
 Team.
The project stakeholders review the business and technical issues while the team issues remain internal to the
project team. When defining and resolving technical issues, priority is a factor. Prioritization of technical issues is
handled using a 1-5 scale.
Resolving issues is an ongoing process that occurs throughout the life cycle of any project. Expect, however, that
some issues cannot be resolved within the scope of a project. Still, the
Project Manager should identify those issues that cannot be resolved and develop action plans to resolve them at a
later time.
When resolving issues, a priority should be assigned to help determine the appropriate resolution. Low, medium,
high, and emergency can be assigned to issues associated with the project.
Risk Management Response

Risk management responses include one or more of the following:

 Risk avoidance,
 Transfer or sharing of risk (insurance),
 Risk prevention, or
 Development of a risk mitigation and/or contingency plan.

Mitigation of Global Risks

The cost / benefit and funding requirements of both potential and encountered risks should be documented in the
finalized business requirements.
Appropriate measures should be taken to protect each party's interests, incorporated into contractual arrangements.
This will be achieved through the project Statement of Work (SOW) or Document of Understanding (DOU).

Mitigation of Scope Related Risks

The scope of a project should be completely defined, via Statements of Work and the Project Management Plan, to
help avoid inadvertent requirements omissions, errors, and misunderstandings. Management is expected to honor
its commitments and to provide the necessary resources required to have a positive and timely outcome. There
must be well-defined and enforced acceptance requirements in order to have a successful outcome.

Mitigation of Timeline-Related Risks

There must be specific support in providing resources outside of the immediate development group via internal /
external contract agreements and coordination with organization management. Everyone must agree to multiple
phases of a project in order to achieve short-term objectives.
Mitigation of Cost-Related Risks

The financial situation must continue to be assessed and justified, based on up-to-date business case and economic
evaluations. All costs should be reviewed with the section responsible for funding the software development
effort.
Mitigation of Quality/Performance Risks

Acceptance criteria and quality and technical performance criteria (as defined by the requirements and state
standards) must be documented. Any State performance standard must be followed under a client/server project.

1.13 Risk Mitigation Guidelines:


Risks can arise from any aspect of a project. Thus, a complete identification of all project risks can only be
obtained by involving a sufficient number of people to ensure that in-depth competence and experience is applied
to the process for all significant aspects of the project scope.
Some project risks can be identified by simply deducing the defined project risks that are applicable to the project.
It may be necessary to restate these risks in the context of the project scope. Other project risks can only be
identified by carefully analyzing the project management plan, project schedule and requirements.
Some of the risk mitigation actions can be incorporated into the Project Management Plan in conjunction with
detailed project planning activities. Other risk mitigation actions represent contingency plans to be implemented
only if the risk actually occurs.
A secondary use of the Risk Mitigation Plan occurs if and when implemented risk responses do not prove
effective. When this happens, the Risk Mitigation Plan provides information on other, alternative risk responses
that should be reconsidered for implementation.

1.14 Risk Watch Guidelines


One option is to include a rating for each risk that is a combination of the Level of Impact and Level of Confidence
ratings.
Another option is to identify and include in this section "Risk Triggers", which are symptoms to watch for that
signal potential or actual occurrence of the risk.
A simple scheme should be used to quantify risks. The quantification process is somewhat subjective, regardless of
the method used to assign the numerical values. Using a more complex quantification scheme will probably not do
much to reduce this inherent subjectivity.
Consideration should be given to devising a scheme for quantifying the impact of joint occurrence of multiple,
closely related risks. The effect of such occurrences may be much more significant than is implied by simply
summing their individual values. The likelihood of such occurrences is usually calculated by multiplying the
individual risks‘ estimated probability of occurrence by each other. The usual handling of joint risks is:
To combine them and reassess the resulting, more global risk
To define an additional risk described as a joint occurrence risk and assess it in terms of the incremental impact
of the joint occurrence over and above that of the individual risks.
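The joint-occurrence arithmetic described above can be sketched as follows. The two risks, their probabilities, and their impact figures are hypothetical, and the probability calculation assumes the risks are statistically independent:

```python
# Sketch: assessing the joint occurrence of two closely related risks.
# All figures are hypothetical; independence of the risks is assumed.

p_schedule_slip = 0.30   # probability of a significant schedule slip
p_key_loss      = 0.20   # probability of losing a key developer

# Likelihood that both occur, obtained by multiplying the individual
# probabilities (valid only under the independence assumption).
p_joint = p_schedule_slip * p_key_loss

impact_slip  = 10_000    # loss if the schedule slips alone
impact_loss  = 15_000    # loss if the key developer leaves alone
impact_joint = 60_000    # loss if both occur together (more than the sum)

# Incremental impact of the joint occurrence over and above the
# simple sum of the individual impacts.
incremental = impact_joint - (impact_slip + impact_loss)

print(f"P(joint) = {p_joint:.2f}, incremental impact = ${incremental:,}")
```

The point of the incremental figure is that treating the two risks separately would understate the combined loss; the joint-occurrence risk is therefore assessed as its own entry on the risk list.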
Consideration should also be given to relating each risk to the corresponding level of Project planning. This will
prove helpful in keeping management attention to each risk at an appropriate level of detail.
Chapter 2 Software Project Initiation & Effort Estimation

2.0 Software Project Initiation:


Software Project Initiation starts after the organization acquires a project from one of its clients. The objectives of
Software Project Initiation are to ensure that:
 Ownership for project execution, delivery and customer acceptance is entrusted to a Software Project
Manager (SPM), who is provided with support commitments from the service departments of the organization.
 The project is started on the right footing (a project well started is half completed).
 The experience of the organization is brought to bear upon the project.
Software Project Initiation (SPI) activities are shared between the organization and the Software Project Manager
(SPM).
Normally an organization that is organized for executing software development projects will have a department
entrusted with the responsibility of acting as the repository of project records, as well as the nodal agency for
initiating and closing of projects.
a) Assumptions for project initiation:

Before initiating a project the following questions should be answered:


 Are the resources necessary for completing the project available?
 Is the probability of obtaining financing high enough (that is, is the need for the project outcome widely
enough recognized, are the external conditions favourable, etc.)?
 Does the project contribute to achieving the strategic goals of the institution, or will it exhaust the institution?

Project initiation is justified only after having satisfactory answers to all these and some other questions. We
will discuss availability of resources separately in this chapter.

b) Additionally, there are some general suggestions for Software Project Initiation:
 Before initiating the first project, one should take part as a partner in a project managed by someone else;
 Before initiating a big project, one should have managed small-scale projects;
 Before initiating an international project, one should have participated in international projects.

Existence of a good idea is the most important prerequisite for project initiation. One should have a clear
understanding of what the main goal of the project will be and what problems will be solved by achieving this
goal. The project should be innovative: it should apply methods or procedures, or develop products, that have
not been applied or developed before.
c) Determination of the objective of a project

Decision-makers who have authority to approve projects often do not have enough time to go deeply into
details. Therefore the features that will be considered first – first of all the objective of the project and
sometimes the name of the project as well – should be elaborated with double care. Otherwise, "a lot of time
can be wasted in producing a very good plan to achieve the wrong objective."
The main objective is important because the whole planning of project activities is based on the project objectives.
In this sense the planning can be considered as a backward process, from outcome/objectives to activities.
A number of factors should be taken into account when determining the objective:
1. Market needs (for example, production of digital content).
2. Institutional needs (an educational institution develops a new teaching tool for reducing teaching costs).
3. Customer needs (an ICT company offers to set up a WiFi network in an airport waiting room).
4. Technological opportunities (producing video games after the introduction of high-performance personal
computers).
5. Social needs (public Internet access points in remote areas).
6. Legislation (a web-based support system on handling property rights for content developers).

A clear objective is necessary not only for decision-makers but for getting support by the project partners and
for forming project team.
For example, a master's student proposed to develop an IT model for a small company. Discussing this with
his supervisor, it turned out that the student was the only person in a company of 60 employees who took
care of IT systems and offered support to colleagues. Finally it was decided to devote the master's thesis to IT
risk management in a small company.
The formation of the objective needs time, dialogue and energy. Potential users of the project's outcome should
be involved in the project initiation process from the very beginning; they are able to evaluate the outcome from
a different perspective.
The objective and its formulation should be understandable for customers and adequate, reflecting the most
significant aspects. It is recommended to apply the SMART principle. According to this, the objective and its
formulation should be:

 Simple. Everybody who has basic knowledge of the area should understand what exactly the project is
aiming to accomplish.
 Measurable. It should be possible to measure to what extent the project goal has been achieved.
 Agreed. The outcome should meet the customers'/end users' needs and should solve some problems.
Agreement is based on information exchange with the customers and, as a side effect, increases the devotion
of the project team.
 Realistic. The objective should correspond to the resources (including knowledge) available. One should not
plan outcomes or activities that require much more knowledge than the project team actually has; this can
cause an unexpected need to perform additional research or education. If some tasks must nevertheless be
performed in an area where competence is lacking, it is recommended to consider acquiring the necessary
goods and services from outside the performing organization.
 Timed. Is the planned duration sufficient for achieving the project goal? What compensation mechanisms
are available in case unexpected delays occur?

The SMART principles are applicable to the project's activities as well. For example, a bank wanted to raise
the number of customers and hired experts to figure out the most effective ways of improving the image of
the bank. The experts suggested increasing customer friendliness. How to measure this? It turned out that
informal conversation (about weather, habits, etc.) with the customers was the best indicator. The whole
personnel attended a seminar where relevant examples, procedures, etc. were discussed. The number of
customers started to rise very rapidly.

2.1 Cost-benefit analysis:


As information technology (IT) projects face more cost and schedule scrutiny (hard start/finish dates, scope,
budget, etc.), executive managers frequently require business cases to justify new project expenditures.
Project approvals are tough to come by unless you have a well-documented plan on paper. Project executives
do not want to commit the big bucks until you have demonstrated your schedule of expenditures and the
corresponding return on the investment. A business case will help sell your project to the executive by
providing this information or, if you've managed to obtain funding this year without a business case, what
better way to secure future funding than to develop a business case on an existing project. During a multiyear
project, it's inevitable that someone in the organization will ask, "Why are we spending on this project? What
are the benefits?" You can quickly answer these questions with the business case that's in your hip pocket.
Developing the business case will force you to answer the tough questions that are guaranteed to be asked at
some point in your project. This section discusses the reasons for doing a cost-benefit analysis, the steps
involved in the analysis, and the typical benefit areas and trends for information technology in the energy
delivery industry.
Why Cost - Benefit analysis?
Why should you do a cost-benefit analysis for your project? In the authors‘ recent experience, executive
managers have commonly required cost-benefit analyses, particularly for information technology (IT)
projects. According to the Project Management Institute, IT projects frequently overpromise and
underdeliver. Executive managers have become aware of this performance issue and the cost-benefit analysis
is their guarantee that the project team has carefully evaluated the project before commencement, studying
the whole life cycle costs and the expected benefits. Utility regulators are imposing heightened scrutiny as
well. In today‘s unbundled energy delivery environment, regulators want more information about projects
that will be added to the rate base and more detail regarding how the projects will benefit the consumer.
There is a high likelihood that executive management and/or the regulator will require a cost-benefit analysis
for your next project.
Information technology has tremendous potential to effect positive change in an organization. Business
processes, which may have been static for years, can be radically transformed with the advent of IT. The
cost-benefit framework will assist the company in documenting the promised transformations. After all, the
project‘s benefits rely on effective transformation. Therefore it is essential to thoroughly document the
benefits to prove the credibility of the project. Stakeholders will continually reference a quality benefits
analysis throughout the project, adopting it as their mental model for business transformation success. The
benefits portion of your analysis should be communicated to your project team, vendor partners, end users,
and everyone else involved in the project. The project team‘s understanding of the benefits targets will align
their goals just as it aligned yours. George S. Patton, the famous military general, attributed his success not to
his own military prowess but to the execution of his plan by his troops. Patton maintained that effectively
communicating the plan to his troops was the single most important component of the Allied victory in
Europe. IT can effect change only when the strategy is shared with stakeholders.

Finally, the cost-benefit analysis can be used as a measuring device once the IT project begins to deliver its
promised functionality. The type of analysis performed before the project is known as ex-ante. Ex-ante is
your prediction of how the project will progress from a cost-benefit standpoint. When the project progresses
toward completion, you can implement an auditing scheme to check the actual costs and the actual
attainment of benefits. The ex-ante analysis becomes the baseline for your audit. When the project is
complete, users have been trained, and new processes are in place you then measure actual performance
against the ex-ante analysis. Once the new systems are in place, you cannot go back and measure how the
company previously performed. Here, the ex-ante cost-benefit analysis is invaluable for the auditing function
because you invested the time to study the "business-as-usual" case before implementation.
2.2 Cost - Benefit Examination Techniques:

You have accepted the advantages of cost-benefit analysis and are committed to conducting an analysis for one of
your current or planned projects, but where do you begin?

This section discusses the major activities, as follows:


Determine your audience and its requirements.
Determine the project scope.
Determine the baseline or "do nothing" cost of business.
Estimate comprehensive lifecycle project costs.
Determine benefits.
Schedule the investment and benefits.
Analyze the cash streams.
Communicate the results of your analysis with your audience and gain acceptance.
Assign responsibility for benefits attainment using the A-R-C-I methodology (which stands for
Accountable, Responsible, Contributing, and Informed).
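The "Analyze the cash streams" step above typically boils down to discounting each period's scheduled costs and benefits back to a net present value (NPV). A minimal sketch of that calculation, using hypothetical cash flows and an assumed 10 percent discount rate:

```python
# Sketch: NPV of a project's scheduled cash streams.
# The cash flows and discount rate below are hypothetical.

def npv(rate, net_flows):
    """Discount each period's net cash flow (benefits minus costs)
    back to present value and sum them; period 0 is undiscounted."""
    return sum(flow / (1 + rate) ** t for t, flow in enumerate(net_flows))

# Year 0: -100,000 initial investment; years 1-4: 40,000 net benefit per year.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]

value = npv(0.10, flows)   # 10 percent discount rate, an assumed figure
print(f"NPV = ${value:,.0f}")
```

A positive NPV indicates that the discounted benefits exceed the discounted costs over the analysis horizon; the same ex-ante figures later serve as the baseline for the audit of actual benefits attainment.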

a) Understand the audience:


Determining your audience and understanding their expectations is one of the cornerstones that will ensure
the ultimate success and acceptance of your cost-benefit analysis. Who will be the recipient of your study?
What format are they seeking? What level of detail are they expecting? Your audience will help you
determine the level of detail needed in your analysis. This can range anywhere from simple lists of costs and
benefits to a detailed time-phased, cash-stream analysis. The level of effort required to produce the different
levels of detail varies considerably, so understand the requirements before you produce overkill.

b) Write the scope:


Establish an understanding of the business problem you are trying to solve with the information technology.
Companies do not implement technology for its own sake; they implement technology to solve a particular
problem, improve a certain process, make a task more efficient, etc. Document these business drivers. Then
list your project‘s deliverables and how they support the business drivers. Also list what the project will not
deliver. Setting the scope will require quite a bit of effort and socialization of your results with the company.
The scope must balance the needs of the users with the constraints established by your management.
Establish the baseline cost of business:

One component of a detailed cost-benefit analysis is the baseline cost
of business. Baseline means what the company spends today in the domain under which the project‘s
information technology will operate. Recalling the business drivers mentioned above, how much does the
company spend today on those business processes, materials, equipment, old information systems, etc.?
Anything that your information technology project will ultimately affect belongs in the baseline. By
establishing the baseline you will know the size of the domain of your project and, once you have calculated
the benefits, what the benefits are in that domain. The baseline is important because you can use it to
effectively communicate the magnitude of the business domain and today‘s expenditures in that domain.

c) Determine project cost:


You‘ve written the scope and you understand the business domain and drivers; now it‘s time to estimate the
project‘s costs. There are two things to keep in mind here. One, ensure that the
analysis includes all the cost categories.
Typical cost categories are as follows:
Software
Software configuration and customization
Integration with other systems
Hardware (clients and servers)
Local-area and/or wide-area network
Data
Internal staff
Delivery services (project management, change management, etc.)
Training (both core software and business process context training)
Hardware and software maintenance

Two, express how much it will cost to maintain the system after implementation. This is known as the full
lifecycle cost. Calculating the full lifecycle cost and accounting for each of the abovementioned cost areas
demonstrates to management that the analysis is comprehensive. Your estimates of project cost will aid the
project team in evaluating software vendor price proposals, forcing the team to consider whether proposals
that are out of the budget range will adversely affect the project‘s financial performance.

d) Determine the benefits:


Going back to the scope and the business domain, list the benefits that the project will target. Then for each
listed benefit, write the quantification method. Next calculate the ex-ante benefit. Determining the benefit
amount of a software project can be challenging. One asset you already have is the baseline cost of business.
Against the baseline you can calculate percentage benefits estimates using benchmarks for the business
domain. Sources of benefits benchmarks are industry trade groups, software vendors, consulting companies
who specialize in the business domain, benchmarking institutes, and research firms. Section 3 of this paper
discusses the types of benefits you can target with an information technology project in the electric and gas
utility business domain.
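The benchmark approach described above can be sketched in a few lines. The dollar figures and benchmark percentage below are hypothetical, purely to illustrate applying a benchmark to the baseline.

```python
# Sketch of the benchmark approach: an ex-ante benefit is estimated as a
# percentage (taken from an industry benchmark) applied to the baseline
# cost of business. All figures are hypothetical.

def ex_ante_benefit(baseline_cost, benchmark_pct):
    """Annual benefit implied by applying a benchmark percentage to the baseline."""
    return baseline_cost * benchmark_pct

# $4.2M annual baseline spend in the affected domain; a 6% efficiency
# benchmark from a trade-group study:
print(ex_ante_benefit(4_200_000, 0.06))  # 252000.0
```

In practice you would repeat this for each listed benefit, using a benchmark sourced from the trade groups, vendors, or research firms mentioned above.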

e) Schedule the investment and benefits:


At this step, the analysis contains the baseline cost of business, the ex-ante project costs, and the ex-ante
project benefits. Depending on the level of detail required for the analysis, you may be done! But, if a more
detailed cash-stream analysis is required you must schedule the costs and benefits over the expected project
implementation and maintenance duration. This requires a fairly detailed project schedule. The schedule first
specifies the timing of the project investment and results in a statement of negative cash flow. Next the
schedule indicates the timing of the benefits, which results in positive cash flow. Adding the two cash flows
together produces the net project cash stream. The project‘s financial indicators can be calculated using this
cash stream.
f) Analyze the cash stream:
The project‘s net cash stream will contain a dollar value for each year starting with the first year of the
project and ending with the terminus of the study period, usually 10 years. The first years are negative
reflecting the investment required to conduct the project, then the benefits kick in and the project eventually
reaches a break-even point at which the net cash stream becomes positive. The values will remain positive for
the duration of the study period unless you expect a significant reinvestment later on. The cash stream is the
cornerstone of the financial analysis. From it you can calculate Internal Rate of Return (IRR), Net Present
Value (NPV), and Payback.
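The financial indicators named above can be computed directly from the net cash stream. The sketch below uses a hypothetical cash stream; the bisection IRR assumes the stream has the usual single sign change (investment years followed by benefit years).

```python
# Illustrative calculation of NPV, IRR, and payback from a hypothetical
# 10-year net project cash stream (first years negative, later positive).

def npv(rate, cash_stream):
    """Net Present Value: discount each year's cash flow back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_stream))

def irr(cash_stream, lo=-0.9, hi=1.0, tol=1e-6):
    """Internal Rate of Return: the rate at which NPV = 0 (bisection search)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_stream) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_year(cash_stream):
    """First year in which the cumulative cash stream turns positive."""
    total = 0
    for year, cf in enumerate(cash_stream):
        total += cf
        if total > 0:
            return year
    return None  # never breaks even within the study period

# Hypothetical net cash stream in $k: two investment years, then benefits.
stream = [-500, -200, 150, 250, 300, 300, 300, 300, 300, 300]
print(npv(0.10, stream))      # NPV at a 10% discount rate
print(payback_year(stream))   # break-even year
```

The same cash stream feeds all three indicators, which is why the text calls it the cornerstone of the financial analysis.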

g) Communicate results with audience and gain acceptance:

The cost-benefit analysis has little value unless it is communicated to the company. If you do not effectively
explain the results of the analysis, no one will take action, and your project will not progress. The authors
have found that personal one-on-one meetings with the audience are necessary to reinforce the study‘s
credibility. Your audience will be much more accepting of the information if they understand the details
behind it. They will also offer constructive criticism, which you can use to further refine your case and
bolster its credibility. An effective means of communicating your analysis is with charts and graphs that
clearly depict the financial parameters of the project and the benefits.

h) Assign benefits accountability:


If your company is serious about achieving the projected benefits, the company must establish an
accountability mechanism wherein individual managers are held accountable for the attainment of each listed
benefit. The benefits must show on the managers‘ personal radar screens. All other parties involved must
understand their role in delivering the benefits. One successful accountability methodology is known as
"A-R-C-I."

Accountable - Makes the decision. The person ultimately accountable. Includes strategic authority, yes/no,
veto and assignment powers, and final approval.
Responsible - Performs the work. The person(s) assigned the job by the "A." Includes tactical
responsibility for doing the work and completing the tasks.
Contributing - Communicates the work (two-way). The person(s) who provide special support or should
be consulted in making decisions or doing work.
Informed – Explains the work (one-way). The person(s) needing to be informed at key decision points
during the work. The work‘s providers, customers, and beneficiaries.

To use this methodology, develop a matrix that lists each targeted benefit down the left and the letters A, R, C,
and I in individual columns across the top. At the intersecting cells in the matrix, fill in the names of the
people who are assigned to each benefit. Table 1 illustrates an A-R-C-I matrix.
TABLE 1 A-R-C-I Responsibility Matrix

Benefit Description                                   | Accountable | Responsible | Contributing    | Informed
1. Achieve 12% labor reduction in work design process | Grabowski   | Omoto       | Gay, Noonan     | Design Team
2. Improve SAIFI index by 1 point                     | Jones       | Faust       | All Dispatchers | Outage Center
3. Reduce field staff time in office by 20%           | Houston     | Omoto       | Noonan, Blakely | All Field Staff
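The matrix lends itself to a simple data structure. The sketch below represents two of the Table 1 assignments as a dictionary and looks up one person's roles; it is illustrative only, using the names from the table.

```python
# A minimal sketch of the A-R-C-I responsibility matrix (entries taken
# from Table 1), keyed by benefit description.
arci_matrix = {
    "Achieve 12% labor reduction in work design process": {
        "Accountable": ["Grabowski"],
        "Responsible": ["Omoto"],
        "Contributing": ["Gay", "Noonan"],
        "Informed": ["Design Team"],
    },
    "Improve SAIFI index by 1 point": {
        "Accountable": ["Jones"],
        "Responsible": ["Faust"],
        "Contributing": ["All Dispatchers"],
        "Informed": ["Outage Center"],
    },
}

def roles_of(person):
    """List (benefit, role) pairs assigned to a given person."""
    return [(benefit, role)
            for benefit, roles in arci_matrix.items()
            for role, people in roles.items()
            if person in people]

print(roles_of("Omoto"))
```

A lookup like this makes it easy to show each manager exactly which benefits sit on their "personal radar screen."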

i) Benefit Categories:

Your benefits catalog will probably contain a lengthy list of benefits. Your audience will grasp the benefits
catalog much easier if you assign each benefit to a category. Table 2 lists a set of categories that the authors
have used in practice along with an example benefit for each category.

TABLE 2: Benefit Categories and Examples

Benefit Category    | Example
Labor efficiency    | Reduce crews' office delay time
Equipment           | Postpone capital expenditure by effective load
Cost avoidance      | Supplant planned Y2K upgrade to existing information
Revenue generation  | Enable performance-based rates with accurate outage
Customer service    | Commit to exact appointment times with
Service reliability | Analyze network problem areas with accurate outage statistics
Regulatory          | Create regulatory reports more efficiently with ready
After categorizing the benefits, you may also need to classify your benefits as either "Capital" or "Expense."
This further classification will assist the company‘s finance department in understanding the accounts that
the benefits are associated with.

j) Information Technology and Benefits:


Information technology is actively driving significant benefits in the Energy Delivery industry. Utilities
repeatedly rate information technology as a strategic business enabler. But which technologies drive which
benefits? The following table depicts a typical set of Energy Delivery information technologies and the
benefits that the individual technologies drive.
TABLE 3 Benefits Derived from Energy Delivery Information Technology

Trends in benefits analysis:

These trends are as follows:


IT systems integration is a multiplier of benefits. Sharing of information between systems creates a synergy
unknown to silos of automation. In order to achieve the synergistic benefits of integration the implementation
must take into account how IT will interact within the business process. Therefore, business domain
knowledge is a requisite.
In addition to labor efficiency, projects are proving savings in other areas. For example, the savings
resulting from maintaining an accurate asset database are beginning to overshadow savings related to more
efficient record keeping. Asset management is king.
The industry desires to put decision making at the economic level, not the engineering or accounting level.
Accurate economic decision making requires precise, timely asset data.
Good asset data drives capital equipment savings and enables reliability-centered maintenance.
Automation is infusing the field workforce. Savings are significant in the mobile workforce management
arena.
Strategic benefits alone are justifying projects. Examples of strategic benefits are:
Customer service
Regulatory compliance

2.3 Cash flow forecasting

Cash flow forecasts are required to be issued periodically with payment certificates and as requested by the
Project Manager. It is a key aspect of the financial management of a business, planning its future cash
requirements to avoid a crisis of liquidity.

Why is cash flow forecasting important? If a business runs out of cash and is not able to obtain new
finance, it will become insolvent. It is no excuse for management to claim that they didn't see a cash flow
crisis coming. So in business, "cash is king". Cash flow is the life-blood of all businesses – particularly start-
ups and small enterprises. As a result, it is essential that management forecast (predict) what is going to
happen to cash flow to make sure the business has enough to survive. How often management should
forecast cash flow is dependent on the financial security of the business. If the business is struggling, or is
keeping a watchful eye on its finances, the business owner should be forecasting and revising his or her cash
flow on a daily basis. However, if the finances of the business are more stable and 'safe', then forecasting and
revising cash flow weekly or monthly is enough. Here are the key reasons why a cash flow forecast is so
important:
Identify potential shortfalls in cash balances in advance – think of the cash flow forecast as an "early
warning system". This is, by far, the most important reason for a cash flow forecast.
Make sure that the business can afford to pay suppliers and employees. Suppliers who don't get paid will
soon stop supplying the business; it is even worse if employees are not paid on time.
Spot problems with customer payments – preparing the forecast encourages the business to look at how
quickly customers are paying their debts. Note – this is not really a problem for businesses (like retailers) that
take most of their sales in cash/credit cards at the point of sale.
As an important discipline of financial planning – the cash flow forecast is an important management
process, similar to preparing business budgets.
External stakeholders such as banks may require a regular forecast. Certainly, if the business has a bank
loan, the bank will want to look at the cash flow forecast at regular intervals.
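The "early warning system" idea above can be sketched as a short forecast routine. The opening balance, receipts, and payments below are hypothetical monthly figures in $k.

```python
# Illustrative monthly cash flow forecast: the "early warning system"
# flags any month whose closing balance goes negative. All figures are
# hypothetical.

def forecast(opening_balance, receipts, payments):
    """Return a list of (month, closing_balance, shortfall?) tuples."""
    rows, balance = [], opening_balance
    for month, (cash_in, cash_out) in enumerate(zip(receipts, payments), start=1):
        balance += cash_in - cash_out
        rows.append((month, balance, balance < 0))
    return rows

receipts = [20, 25, 15, 30]   # expected cash in per month ($k)
payments = [30, 20, 25, 20]   # expected cash out per month ($k)
for month, balance, shortfall in forecast(5, receipts, payments):
    flag = "  <-- potential shortfall" if shortfall else ""
    print(f"Month {month}: closing balance {balance}k{flag}")
```

Spotting the negative months in advance gives the business time to arrange finance or delay payments before the shortfall actually occurs.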

2.4 Basis of software estimation:


An effective software estimate provides the information needed to design a workable software development
plan. How well the project is estimated is ultimately the key to project success. An effective software
estimate provides important information for making project decisions, projecting performance, and defining
objectives and plans. Without the proper guidance in a project, the results could be disastrous. Viable
estimation is extremely valuable for project success for nearly any software project from small agile projects
to huge projects.

The main focus of this section is how to make software projects more successful by properly estimating and
planning costs, schedules, risks, and resources. Most estimates are prepared early on in the life cycle of a
project, when there are typically a large number of undefined areas related to the project.
Following steps are considered for estimation of software project:

Step One: Establish Estimate Scope and Purpose:

Define and document estimate expectations. When all participants understand the scope and purpose of the
estimate, you‘ll not only have a baseline against which to gauge the effect of future changes; you‘ll also head
off misunderstandings among the project group and clear up contradictory assumptions about what is
expected. Documenting the application specifications, including technical details, external dependencies and
business requirements, will provide valuable input for estimation.

The resources required to complete the project are among the most important things to estimate. The more detailed the specs,
the better. Only when these requirements are known and understood can you establish realistic development
costs. An estimate should be considered a living document; as data changes or new information becomes
available, it should be documented and factored into the estimate in order to maintain the project‘s integrity.
Step Two: Establish Technical Baseline, Ground Rules, and Assumptions:

To establish a reasonable technical baseline, you must first identify the functionality included in the estimate.
If detailed functionality is not known, ground rules and assumptions should clearly state what is and isn‘t
included in the estimate. Issues of COTS, reuse, and other assumptions should be documented as well.
Ground rules and assumptions form the foundation of the estimate and, although in the early stages of the
estimate they are preliminary and therefore rife with uncertainty, they must be credible and documented.
Review and redefine these assumptions regularly as the estimate moves forward.
Step Three: Collect Data:

Any estimate, by definition, encompasses a range of uncertainty, so you should express estimate inputs as
least, likely and most rather than characterizing them as single data points. Using ranges for inputs permits
the development of a viable initial estimate even before you have defined fully the scope of the system you
are estimating. Certain core information must be obtained in order to ensure a consistent estimate. Not all
data will come from one source and it will not all be available at the same time, so a comprehensive data
collection form will aid your efforts. As new information is collected, you will already have an organized
and thorough system for documenting it.

Step Four: Software Sizing:

If you lack the time to complete all the activities described in the ten-step process, prioritize the estimation
effort: Spend the bulk of the time available on sizing (sizing databases and tools like SEER-AccuScope
can help save time in this process). Using an automated software cost and schedule tool like SEER-SEM
can provide the analyst with time-saving tools (SEER-SEM knowledge bases save time in the data
collection process). Size is generally the most significant (but certainly not the only) cost and schedule
driver. The overall scope of a software project is defined by identifying not only the amount of new software
that must be developed, but also the amount of preexisting, COTS, and other software that
will be integrated into the new system. In addition to estimating product size, you will need to estimate any
rework that will be required to develop the product, which will generally be expressed as source lines of
code (SLOC) or function points, although there are other possible units of measure. To help establish the
overall uncertainty, the size estimate should be expressed as a least—likely—most range.

Predicting Size
Whenever possible, start the process of size estimation using formal descriptions of the requirements such
as the customer‘s request for proposal or a software requirements specification. You should re-estimate
the project as soon as more scope information is determined. The most widely used methods of estimating
product size are:

Expert opinion— this is an estimate based on recollection of prior systems and assumptions regarding
what will happen with this system, and the experts‘ past experience.

Analogy — a method by which you compare a proposed component to a known component it is thought
to resemble, at the most fundamental level of detail possible. Most matches will be approximate, so for
each closest match, make additional size adjustments as necessary. A relative sizing approach such as
SEER-AccuScope can provide viable size ranges based on comparisons to known projects.
Formalized methodology— Use of automated tools and/or pre-defined algorithms such as counting the
number of subsystems or classes and converting them to function points.
Statistical sizing — provides a range of potential sizes that is characterized by least, likely, and most.
Use the Galorath sizing methodology to quantify size and size uncertainty. This includes preparing as
many size estimates as time permits, putting them all in a table, and then choosing the size range from the
variety of sources.
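One common way to collapse a least-likely-most size range into a single expected value is the PERT (beta-distribution) weighting. This particular formula is an assumption on my part; the text prescribes expressing the range but not a specific calculation.

```python
# PERT / beta-distribution weighting (an assumed convention, not mandated
# by the text) for turning a least-likely-most size range into an expected
# value and a rough standard deviation.

def expected_size(least, likely, most):
    """Expected size: the likely value weighted 4x against the extremes."""
    return (least + 4 * likely + most) / 6

def size_std_dev(least, most):
    """Rough one-sigma uncertainty implied by the range."""
    return (most - least) / 6

# Hypothetical SLOC range for a new component:
least, likely, most = 8_000, 12_000, 22_000
print(expected_size(least, likely, most))  # 13000.0
print(size_std_dev(least, most))
```

Carrying the standard deviation forward, rather than just the expected value, is what lets later steps quantify the overall estimate uncertainty.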

Step Five: Prepare Baseline Estimate:

Budget and schedule are derived from estimates, so if an estimate is not accurate, the resulting schedules and
budgets are likely to be inaccurate also. Given the importance of the estimation task, developers who want to
improve their software estimation skills should understand and embrace some basic practices. First, trained,
experienced, and skilled people should be assigned to size the software and prepare the estimates. Second, it
is critically important that they be given the proper technology and tools. And third, the project manager
must define and implement a mature, documented, and repeatable estimation process.
To prepare the baseline estimate there are various approaches that can be used, including guessing (which is
not recommended), using existing productivity data exclusively, the bottom-up approach, expert judgment,
and cost models.

Bottom-Up Estimating: Bottom-up estimating, which is also referred to as "grassroots" or "engineering"


estimating, entails decomposing the software to its lowest levels by function or task and then summing the
resulting data into work elements. This approach can be very effective for estimating the costs of smaller
systems. It breaks down the required effort into traceable components that can be effectively sized, estimated,
and tracked; the component estimates can then be rolled up to provide a traceable estimate that is comprised
of individual components that are more easily managed. You thus end up with a detailed basis for your
overall estimate.
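The roll-up described above is mechanically simple. The sketch below uses a hypothetical two-function work breakdown with effort in hours; the names and figures are illustrative only.

```python
# Minimal sketch of bottom-up estimating: effort is estimated for the
# lowest-level components, then rolled up through the work breakdown
# structure. Component names and hours are hypothetical.
wbs = {
    "Order entry": {"UI forms": 120, "Validation": 80, "Persistence": 60},
    "Reporting":   {"Query layer": 90, "Report templates": 70},
}

def roll_up(wbs):
    """Sum component estimates into per-function and total effort (hours)."""
    per_function = {f: sum(parts.values()) for f, parts in wbs.items()}
    return per_function, sum(per_function.values())

per_function, total = roll_up(wbs)
print(per_function)  # {'Order entry': 260, 'Reporting': 160}
print(total)         # 420
```

Because each leaf component is separately sized and tracked, variances later in the project can be traced back to the specific component whose estimate was off.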

Software cost models: Different cost models have different information requirements. However, any cost
model will require the user to provide at least a few — and sometimes many — project attributes or
parameters. This information describes the project, its characteristics, the team‘s experience and training
levels, and various other attributes the model requires to be effective, such as the processes, methods, and
tools that will be used.

Parametric cost models provide a means for applying a consistent method for subjecting uncertain situations
to rigorous mathematical and statistical analysis. Thus they are more comprehensive than other estimating
techniques and help to reduce the amount of bias that goes into estimating software projects. They also
provide a means for organizing the information that serves to describe the project, which facilitates the
identification and analysis of risk.

A cost model uses various algorithms to project the schedule and cost of a product from specific inputs.
Those who attempt to merely estimate size and divide it by a productivity factor are sorely missing the mark.
The people, the products, and the process are all key components of a successful software project. Cost
models range from simple, single formula models to complex models that involve thousands of calculations.
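At the simple end of that range, a parametric model is a calibrated formula of the form effort = a × size^b, adjusted by attribute multipliers. The coefficients and factors below are illustrative placeholders, not taken from any calibrated model in the text.

```python
# Sketch of a simple, single-formula parametric cost model:
# effort = a * (size ** b), scaled by multiplicative attribute factors.
# The coefficients (a, b) and factors are illustrative, not calibrated.

def parametric_effort(kloc, a=3.0, b=1.12, factors=()):
    """Nominal effort in person-months, scaled by attribute multipliers."""
    effort = a * (kloc ** b)
    for f in factors:          # e.g. team experience, schedule pressure
        effort *= f
    return effort

nominal = parametric_effort(32)                        # nominal estimate
adjusted = parametric_effort(32, factors=(0.9, 1.15))  # experienced team, tight schedule
print(nominal, adjusted)
```

The exponent b > 1 captures the diseconomy of scale (larger systems cost disproportionately more), and the multipliers are where the people- and process-related inputs enter, which is exactly why size divided by a flat productivity factor misses the mark.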

Step Six: Quantify Risks and Risk Analysis:

It is important to understand what a risk is and that a risk, in itself, does not necessarily pose a threat to a
software project if it is recognized and addressed before it becomes a problem. Many events occur during
software development. Risk is characterized by a potential loss of time, quality, money, control, understanding, and
so on. The loss associated with a risk is called the risk impact. We must have some idea of the probability
that the event will occur. The likelihood of the risk, measured from 0 (impossible) to 1 (certainty) is called
the risk probability. When the risk probability is 1, then the risk is called a problem, since it is certain to
happen. For each risk, we must determine what we can do to minimize or avoid the impact of the event. Risk
control involves a set of actions taken to reduce or eliminate a risk.

Risk management enables you to identify and address potential threats to a project, whether they result from
internal issues or conditions or from external factors that you may not be able to control. Problems associated
with sizing and estimating software potentially can have dramatic negative effects. The key word here is
potentially, which means that if problems can be foreseen and their causes acted upon in time, effects can be
mitigated. The risk management process is the means of doing so.
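The quantification described above combines risk probability and risk impact into a single exposure figure, which lets risks be ranked. The risk entries and dollar impacts below are hypothetical.

```python
# Sketch of risk quantification: each risk has a probability (0..1) and
# an impact (the loss, here in $k); exposure is their product, and risks
# are ranked by exposure. All entries are hypothetical.
risks = [
    ("Requirements creep",     0.7, 120),
    ("Key developer leaves",   0.2, 200),
    ("COTS integration fails", 0.4,  90),
]

def rank_by_exposure(risks):
    """Return (name, exposure) pairs sorted from highest to lowest exposure."""
    scored = [(name, prob * impact) for name, prob, impact in risks]
    return sorted(scored, key=lambda r: r[1], reverse=True)

for name, exposure in rank_by_exposure(risks):
    print(f"{name}: expected loss {exposure:.0f}k")
```

Note how ranking by exposure differs from ranking by impact alone: the highest-impact risk here (the departure) is not the one most worth controlling first.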

Step Seven: Estimate Validation and Review:

At this point in the process, your estimate should already be reasonably good. It is still important to validate
your methods and your results, which is simply a systematic confirmation of the integrity of an estimate. By
validating the estimate, you can be more confident that your data is sound, your methods are effective, your
results are accurate, and your focus is properly directed.
There are many ways to validate an estimate. Both the process used to build the estimate and the estimate
itself must be evaluated. Ideally, the validation should be performed by someone who was not involved in
generating the estimate itself, who can view it objectively. The analyst validating an estimate should employ
different methods, tools and separately collected data than were used in the estimate under review.

When reviewing an estimate you must assess the assumptions made during the estimation process. Make sure
that the adopted ground rules are consistently applied throughout the estimate. Below-the-line costs and the
risk associated with extraordinary requirements may have been underestimated or overlooked, while
productivity estimates may have been overstated. The slippery slope of requirements creep may have created
more uncertainty than was accounted for in the original estimate.

A rigorous validation process will expose faulty assumptions, unreliable data and estimator bias, providing a
clearer understanding of the risks inherent in your projections. Having isolated problems at their source, you
can take steps to contain the risks associated with them, and you will have a more realistic picture of what
your project will actually require to succeed.
Despite the costs of performing one, a formal validation should be scheduled into every estimation project,
before the estimate is used to establish budgets or constraints on your project process or product engineering.
Failing to do so may result in much greater downstream costs, or even a failed project.

Step Eight: Generate A Project Plan:

The process of generating a project plan includes taking the estimate and allocating the cost and schedule to a
function and task-oriented work breakdown structure.
To avoid tomorrow‘s catastrophes, a software manager must confront today‘s challenges. A good software
manager must possess a broad range of technical software development experience and domain knowledge,
and must be able to manage people and the unique dynamics of a team environment, recognize project and
staff dysfunction, and lead so as to achieve the expected or essential result.
Some managers, mainly due to lack of experience, are not able to evaluate what effects their decisions will
have over the long run. They either lack necessary information, or incorrectly believe that if they take the
time to develop that information the project will suffer as a result. Other managers make decisions based on
what they think higher management wants to hear. This is a significant mistake. A good software manager
will understand what a project can realistically achieve, even if it is not what higher management wants. His
job is to explain the reality in language his managers can understand. Both types of "problem manager,"
although they may mean well, either lead a project to an unintended conclusion or, worse, drift down the
road to disaster.
Software management problems have been recognized for decades as the leading causes of software project
failures. In addition to the types of management choices discussed above, three other issues contribute to
project failure: bad management decisions, incorrect focus, and destructive politics. Models such as SEER-
SEM handle these issues by guiding you in making appropriate changes in the environment related to people,
process, and products.

Step Nine: Document Estimate and Lessons Learned:

Each time you complete an estimate and again at the end of the software development, you should document
the pertinent information that constitutes the estimate and record the lessons you learned. By doing so, you
will have evidence that your process was valid and that you generated the estimate in good faith, and you
will have actual results with which to calibrate your estimation models. Be sure to document any missing or
incomplete information and the risks, issues, and problems that the process addressed and any complications
that arose. Also document all the key decisions made during the conduct of the estimate and their results and
the effects of the actions you took. Finally, describe and document the dynamics that occurred during the
process, such as the interactions of your estimation team, the interfaces with your clients, and trade-offs you
had to make to address issues identified during the process.

You should conduct a lessons-learned session as soon as possible after the completion of a project while the
participants‘ memories are still fresh. Lessons-learned sessions can range from two team members meeting to
reach a consensus about the various issues that went into the estimation process to highly structured meetings
conducted by external facilitators who employ formal questionnaires. No matter what form it may take, it is
always better to hold a lessons-learned meeting than not, even if the meeting is a burden on those involved.
Every software project should be used as an opportunity to improve the estimating process.

Step Ten: Track Project throughout Development:

In-process information should be collected and the project should be tracked and compared to the original
plan. If projects vary far from their plans, refined estimates should be prepared.
Ideally, the following attributes of a software project would be tracked:
Cost, in terms of staff effort, phase effort and total effort.
Defects found or corrected, and the effort associated with them.
Process characteristics such as development language, process model and technology.
Project dynamics including changes or growth in requirements or code and schedule.
Project progress (measuring performance against schedule, budget, etc.)
Software structure in terms of size and complexity
Earned value, combined with quality and growth measures, can be used to forecast completion very accurately
and to flag areas where managers should spend time controlling.
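The earned-value forecasting mentioned above rests on three tracked quantities: planned value (PV), earned value (EV), and actual cost (AC). The figures below are hypothetical.

```python
# Sketch of earned-value tracking: from planned value (PV), earned value
# (EV), and actual cost (AC) we derive the cost and schedule performance
# indices and a completion forecast. All figures are hypothetical ($k).

def earned_value_indices(pv, ev, ac, budget_at_completion):
    cpi = ev / ac    # cost efficiency: value earned per unit spent
    spi = ev / pv    # schedule efficiency: value earned vs. planned
    eac = budget_at_completion / cpi   # estimate at completion, assuming
                                       # current cost efficiency continues
    return cpi, spi, eac

cpi, spi, eac = earned_value_indices(pv=400, ev=360, ac=450, budget_at_completion=1000)
print(f"CPI={cpi:.2f} SPI={spi:.2f} EAC={eac:.0f}")
```

A CPI or SPI below 1.0 is precisely the flag the text refers to: it points managers at the areas where control effort is needed before the overrun materializes.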

2.5 Problems with IT Project Estimation:


The greatest risk during estimation of an IT project is management‘s persistent demand for an estimate too early in
the process, with the result that estimation is mistaken for bargaining. Before project start, the estimation
of effort, costs, dates and duration is the basis for detailed planning as well as for the measurement of project
success.

a) Some pragmatic hints which help to sharpen the consciousness for the problem of estimation:

The earlier the estimation, the larger its inaccuracy.

Every estimation is better than no estimation.

The better estimations are documented, the better the chance to gain experience in estimation.

The more documented estimations are available, the better future projects can be estimated.

The more precise the information available about an object to be estimated, the more precise the estimation
can be.

The estimated objects should be kept small and the work units independent.

The communication factor is mostly forgotten.

Estimation should help in decision making and shouldn‘t be an end in itself.

The requirements for controlling estimations should be met, i.e. estimations should be repeated with
growing knowledge during project progress in order to update and refine the estimation and to
document the experiences for future estimations. Only through such consistent management of estimation
can expertise in estimation be gained.

b) Basic Parameters of Estimation:


From the premises mentioned above it can be learned that an estimation
- should be repeated;

A follow-up estimation allows – with more precise information – a better estimate. Comparison with the preceding
estimation delivers experiences for future estimations.
Continuous tracking of the estimation allows the installation of an early warning system for deviations and
supports the transparency of actual changes (e.g., measurement of requirements creep, usually about 1 – 3 % each
month of project duration).
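Creep in that 1–3 % per month range compounds noticeably over a project's duration, which is why tracking it matters. The sketch below assumes monthly compounding; the requirement count and rate are hypothetical.

```python
# Sketch of the requirements-creep measurement: with creep of roughly
# 1-3% per month compounding, the requirements base grows noticeably
# over the project duration. Figures are hypothetical.

def creep_growth(initial_size, monthly_rate, months):
    """Requirements size after compounding monthly creep."""
    return initial_size * (1 + monthly_rate) ** months

# 500 requirements, 2% creep per month, 18-month project:
print(round(creep_growth(500, 0.02, 18)))  # 714
```

A roughly 40 % larger requirements base over 18 months is exactly the kind of deviation the early-warning tracking is meant to surface before it invalidates the original estimate.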

-should be performed in more than one variant;

The use of multiple estimation methods allows comparison of estimations from diverse viewpoints, reduces the
inaccuracy of the estimation and sharpens sensitivity to estimation error.

- should be queried critically;

In any case, the parameters of estimation must be transparent because they strongly influence the result of
estimation a priori. Developments in client-server environments or host programming with 4GL-languages must
be estimated differently from usual host COBOL developments. Large companies are different from small firms
with only a few staff. For Lines of Code it must be clearly documented whether comments in programs are counted
and, when generators are used, whether the generated commands are counted. Generally, standards must be provided for time accounting;
e.g., how many hours in a person day, person month or person year.

- should be controllable;

Only controllable estimations give the chance to compare, and allow feed-forward learning from past
estimations to future estimations.
- should be documented;

The main problem of estimation is the lack of available documentation and hence the
lack of experience from past estimations. The better estimations are documented,
the more precise future estimations can be and the more expertise in estimation can be gained.

c) Estimation Methods:

Most estimation methods deliver as their result a figure that measures the size of the object to be estimated.
From this, a time-related figure (effort) is elaborated, from which costs can be derived. The total effort should be
divided among the project phases according to a percentage method. Extrapolating the total effort from the
actual effort of the first project phase via the percentage method can be used in parallel as a comparative estimate.

There exist many well-known and valuable estimation methods. The literature suggests that the Function Point
method is the champion in comparisons of the methods. Besides it, COCOMO (Constructive Cost Model, LOC-based) is
in common use. In principle, these two approaches to estimating effort exist: based on requirements or on program
size.

There also exist tools that support the estimation process as well as the different estimation
methods (e.g. Checkpoint, Function Point Workbench).

The problem with LOC methods is that LOC figures only become available in a late phase of the project, and that
coding makes up only about 10% of system development. Once the coding phase is reached, some LOC methods
exist for estimating the effort of component and integration testing.
The advantage of the Function Point method can be seen in the fact that several variants of it exist: the Data
Point Method, the Object Point Method, Mark II, the Full Function Point Method, and IFPUG 4.1. IFPUG is the
International Function Point User Group, which published an international standard for this most commonly used
method; it takes as its object of estimation the requirements analysis document designed from the user's
perspective.
The problem of the Function Point method is that the requirements analysis documents in early phases of the
project are not precise enough.
An important result is a study performed by Jeffrey (1987), who found that effort in projects grows
approximately linearly up to a size of about 10 person-years, and exponentially beyond that.
Many estimation methods can be found in the literature; the following are the better known:

- The Analogy method

Comparison of the size in LOC with that of completed projects (project post-mortems).

-The Relational Method

Comparison of indices from project post-mortems, e.g. COBOL = 100, Assembler = 130, PL/I = 85; or Skill = 100,
120, or 90.
- The Weighting Method

Estimation with formulas which give different weights to different parameters and / or phases of system
development.
- The method of parametrical estimation equations

Estimation with formulas for parameters which strongly influence the effort of system development; e.g., the
formula of Putnam (SLIM = Software Life Cycle Management).
- The Multiplication Method

The average productivity of programmers in LOC is multiplied by the estimated LOC.

- The Percentage Method

The effort is divided proportionally among the phases of system development, e.g. (two example distributions):

No.  Phase                           %      No.  Phase                           %
1.   Requirements Analysis          10      1.   Requirements Analysis          11
2.   Requirements Specification     30      2.   Requirements Specification     11
3.   DP-Concept                     30      3.   Logical System Specification    5
4.   Coding                         25      4.   Physical Design                10
5.   Delivery                        5      5.   Coding and Module Test         46
                                            6.   Implementation                  5
                                            7.   System Test                    12
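As a minimal sketch of the percentage method, the phase shares below follow the first distribution above; the 20 person-month total is a made-up example figure:

```python
# Percentage method: split a total effort estimate across phases.
# Phase shares follow the first distribution shown above.
PHASE_SHARE = {
    "Requirements Analysis": 0.10,
    "Requirements Specification": 0.30,
    "DP-Concept": 0.30,
    "Coding": 0.25,
    "Delivery": 0.05,
}

def split_effort(total_person_months):
    """Return the effort (person-months) allotted to each phase."""
    return {phase: total_person_months * share
            for phase, share in PHASE_SHARE.items()}

efforts = split_effort(20.0)  # e.g. a 20 person-month project
```

As the text notes, the same shares can be used in reverse: the actual effort of the first phase, divided by its share, gives a comparative estimate of the total.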

- The COCOMO Method


A three-step, LOC-based method developed by Barry Boehm at TRW.

2.6 Software process and project metrics:

a) Overview

Software process and project metrics are quantitative measures that enable software engineers to gain insight
into the efficiency of the software process and the projects conducted using the process framework. In
software project management, we are primarily concerned with productivity and quality metrics. There are
four reasons for measuring software processes, products, and resources (to characterize, to evaluate, to
predict, and to improve).
b)Process and Project Metrics

Metrics should be collected so that process and product indicators can be ascertained.
Process metrics are used to provide indicators that lead to long-term process improvement.
Project metrics enable a project manager to
o Assess the status of an ongoing project
o Track potential risks
o Uncover problem areas before they become critical
o Adjust work flow or tasks
o Evaluate the project team's ability to control the quality of software work products
c) Process Metrics

Private process metrics (e.g. defect rates by individual or module) are known only to the individual or
team concerned.
Public process metrics enable organizations to make strategic changes to improve the software process.
Metrics should not be used to evaluate the performance of individuals.
Statistical software process improvement helps an organization discover where it is strong and
where it is weak.

d) Statistical Process Control

1. Errors are categorized by their origin
2. Record the cost to correct each error and defect
3. Count the number of errors and defects in each category
4. Compute the overall cost of errors and defects for each category
5. Identify the category with the greatest cost to the organization
6. Develop plans to eliminate the most costly class of errors and defects, or at least reduce their frequency
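The tallying in steps 1 to 5 can be sketched as follows; the defect records are made-up examples:

```python
# Sketch of steps 1-5 above: tally the correction cost of defects by
# origin category and pick the costliest category. Records are invented.
from collections import defaultdict

defects = [  # (origin category, cost to correct)
    ("requirements", 120), ("design", 80), ("coding", 30),
    ("requirements", 150), ("coding", 25), ("design", 60),
]

cost_by_category = defaultdict(int)
count_by_category = defaultdict(int)
for category, cost in defects:
    cost_by_category[category] += cost   # steps 2 and 4
    count_by_category[category] += 1     # step 3

# Step 5: the category with the greatest total cost
costliest = max(cost_by_category, key=cost_by_category.get)
```

Step 6, the improvement plan, then targets the `costliest` category first.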
e) Project Metrics
A software team can use software project metrics to adapt project workflow and technical activities.
Project metrics are used to avoid development schedule delays, to mitigate potential risks, and to assess
product quality on an on-going basis.
Every project should measure its inputs (resources), outputs (deliverables), and results (effectiveness of
deliverables).
f) Software Measurement

Direct process measures include cost and effort.

Direct product measures include lines of code (LOC), execution speed, memory size, and defects reported over
some time period.
Indirect product measures examine the quality of the software product itself (e.g. functionality,
complexity, efficiency, reliability, maintainability).
g) Size-Oriented Metrics

Size-oriented metrics are derived by normalizing (dividing) any direct measure (e.g. defects or human effort)
associated with the product or project by LOC.
Size-oriented metrics are widely used, but their validity and applicability are widely debated.
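For example, normalizing a defect count by size (the counts here are invented):

```python
# Size-oriented normalization: divide a direct measure by KLOC.
def per_kloc(measure, loc):
    """Normalize a direct measure (defects, effort, cost) by KLOC."""
    return measure / (loc / 1000.0)

# 31 defects found in a 12,100-LOC product
defect_density = per_kloc(31, 12_100)
```

The same helper normalizes effort or cost, giving comparable per-KLOC figures across projects of different sizes.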
h)Function-Oriented Metrics
Function points are computed from direct measures of the information domain of a business software
application and an assessment of its complexity.
Once computed, function points are used like LOC to normalize measures for software productivity,
quality, and other attributes.
The relationship of LOC and function points depends on the language used to implement the software.

i) Reconciling LOC and FP Metrics


The relationship between lines of code and function points depends upon the programming language that is
used to implement the software and the quality of the design.
Function points and LOC-based metrics have been found to be relatively accurate predictors of software
development effort and cost.
However, to use LOC and FP for estimation, a historical baseline of information must be established.

j) Object-Oriented Metrics
Number of scenario scripts (NSS)
Number of key classes (NKC)
Number of support classes (e.g. UI classes, database access classes, computations classes, etc.)
Average number of support classes per key class
Number of subsystems (NSUB)

k) Use Case-Oriented Metrics


Use cases describe (indirectly) user-visible functions and features in a language-independent manner
The number of use cases is directly proportional to the LOC size of the application and to the number of test cases needed
However, use cases do not come in standard sizes, so their use as a normalization measure is suspect
Use-case points have been suggested as a mechanism for estimating effort

l) WebApp Project Metrics


Number of static Web pages (Nsp)
Number of dynamic Web pages (Ndp)
Customization index: C = Nsp / (Ndp + Nsp)
Number of internal page links
Number of persistent data objects
Number of external systems interfaced
Number of static content objects
Number of dynamic content objects
Number of executable functions
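The customization index above can be computed directly; the page counts here are made-up examples:

```python
# Customization index for a WebApp: C = Nsp / (Ndp + Nsp).
# C close to 1 indicates a mostly static site; close to 0, mostly dynamic.
def customization_index(n_static, n_dynamic):
    return n_static / (n_dynamic + n_static)

c = customization_index(30, 10)  # 30 static pages, 10 dynamic pages
```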

m) Software Quality Metrics

Factors assessing software quality come from three distinct points of view (product operation, product
revision, product modification).
Software quality factors requiring measures include
o correctness (defects per KLOC)
o maintainability (mean time to change)
o integrity (threat and security)
o usability (easy to learn, easy to use, productivity increase, user attitude)
Defect removal efficiency (DRE) is a measure of the filtering ability of the quality assurance and control
activities as they are applied throughout the process framework
DRE = E / (E + D), where
E = number of errors found before delivery of the work product
D = number of defects found after work product delivery
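For example, a team that finds 90 errors before delivery and sees 10 defects escape to the customer has DRE = 90 / (90 + 10) = 0.9:

```python
# Defect removal efficiency: DRE = E / (E + D).
def dre(errors_before, defects_after):
    return errors_before / (errors_before + defects_after)

efficiency = dre(90, 10)  # 90 caught before delivery, 10 escaped
```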
n) Arguments for Software Metrics
If you don't measure, you have no way of determining any improvement
By requesting and evaluating productivity and quality measures, software teams can establish meaningful
goals for process improvement
Software project managers are concerned with developing project estimates, producing high-quality
systems, and delivering the product on time
Using measurement to establish a project baseline helps make each of these tasks possible
o) Metrics for Small Organizations

Most software organizations have fewer than 20 software engineers.


The best advice is to choose simple metrics that provide value to the organization and don't require a lot of
effort to collect.
Even small groups can expect a significant return on the investment required to collect metrics, if this
activity leads to process improvement.

p) Establishing a Software Metrics Program


1. Identify business goals
2. Identify what you want to know
3. Identify sub-goals
4. Identify sub-goal entities and attributes
5. Formalize measurement goals
6. Identify quantifiable questions and indicators related to sub-goals
7. Identify the data elements that need to be collected to construct the indicators
8. Define the measures to be used and create operational definitions for them
9. Identify the actions needed to implement the measures
10. Prepare a plan to implement the measures

2.7 Function Point (FP) and Line of Code (LOC) Metrics:


Objective

The objective of this section is to discuss the differences between the two most common sizing metrics:
Function Points (FP) and Lines of Code (LOC). This also offers insight into the advantages of using
Function Points for measuring the size of software.

Introduction:

One of the most important activities in the early stages of software development is estimation. The size of the
software, be it Function Points or Lines of Code, plays a pivotal role in this process and forms the basis for
deriving a number of metrics to measure various aspects of the software throughout the development cycle.
Hence, measuring the size of software becomes critical. Though many other sizing measures are in practice,
such as objects, classes, modules, screens, and programs, Lines of Code and Function Points are the most
widely used.

Function Points
Function Point Analysis is an objective and structured technique to measure software size by quantifying its
functionality provided to the user, based on the requirements and logical design. This technique breaks the
system into smaller components so they can be better understood and analyzed. Function Point count can be
applied to Development projects, Enhancement projects, and existing applications as well. There are 5 major
components of Function Point Analysis which capture the functionality of the application. These are:
External Inputs (EIs), External Outputs (EOs), External Inquiries (EQs), Internal Logical Files (ILFs) and
External Interface Files (EIFs). The first three are treated as Transactional Function Types and the last two are
called Data Function Types. Function Point Analysis consists of performing the following steps:

Determine the type of Function Point count


Determine the application boundary
Identify and rate transactional function types to calculate their contribution to the Unadjusted Function
Point count (UFP)
Identify and rate the data function types to calculate their contribution to the UFP
Determine the Value Adjustment Factor (VAF) by using General System Characteristics (GSCs)
Finally, calculate the adjusted Function Point count
Each of the components of Function Point Analysis is explained in brief in the following sub-sections.

(a)External Input (EI)

External Input is an elementary process in which data crosses the boundary from outside to inside. This data
may come from a data input screen or another application. The data may be used to maintain one or more
internal logical files. The data can be either control information or business information.

(b) External Output (EO)

External Output is an elementary process in which derived data passes across the boundary from inside to
outside. Additionally, an EO may update an internal logical file. The data creates reports or output files sent
to other applications. These reports and files are created from information contained in one or more internal
logical files and external interface files. Derived Data is data that is processed beyond direct retrieval and
editing of information from internal logical files or external interface files.

(c) External Inquiry (EQ)


External Inquiry is an elementary process with both input and output components that results in data retrieval
from one or more internal logical files and external interface files. The input process does not update or
maintain any FTRs (Internal Logical Files or External Interface Files) and the output side does not contain
derived data.

(d)Internal Logical File (ILF)

Internal Logical File is a user identifiable group of logically related data that resides entirely within the
application boundary and is maintained through External Inputs. Even though it is not a rule, at least one
external output and/or external inquiry should include the ILF as an FTR.

(e) External Interface File (EIF)

External Interface File is a user identifiable group of logically related data that is used for reference purposes
only. The data resides entirely outside the application boundary and is maintained by external inputs of
another application. That is, the external interface file is an internal logical file for another application. At
least one transaction, external input, external output or external inquiry should include the EIF as a File Type
Referenced.

Rating the Transactional and Data Function Types

Each of the identified components is assigned a rating (as Low, Average, and High). Transactional Function
Types are given the rating depending upon the number of Data Element Types (DET), File Types Referenced
(FTR) associated with them. Data Function Types are assigned ratings based on the number of Data Element
Types (DET), and Record Element Types (RET) associated. A DET is a unique user recognizable, non-
recursive (non-repetitive) field. A DET is information that is dynamic and not static. A dynamic field is read
from a file or created from DETs contained in an FTR. A RET is user recognizable sub group of data
elements within an ILF or an EIF. An FTR is a file type referenced by a transaction. An FTR must also be an
internal logical file or external interface file.

The total number of EIs, EOs, EQs, ILFs, and EIFs, after applying the weights corresponding to their ratings
(Low, Average, and High), gives the Unadjusted Function Point count (UFP).
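The weighting step can be sketched in code. The weights below are the standard IFPUG values for each component and rating; the component counts are made-up example figures:

```python
# Unadjusted Function Point count: sum of component counts weighted by
# rating. Weights are the standard IFPUG values; counts are invented.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},
    "EO":  {"low": 4, "average": 5,  "high": 7},
    "EQ":  {"low": 3, "average": 4,  "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_fp(counts):
    """counts: {component: {rating: number of components}}."""
    return sum(WEIGHTS[comp][rating] * n
               for comp, ratings in counts.items()
               for rating, n in ratings.items())

ufp = unadjusted_fp({
    "EI":  {"low": 3, "average": 2},   # 3 simple + 2 average inputs
    "EO":  {"average": 4},             # 4 average outputs
    "ILF": {"low": 1, "high": 1},      # 2 internal logical files
})
```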

General System Characteristics (GSCs)

The value adjustment factor (VAF) is calculated based on 14 General System Characteristics that rate the
general functionality of the application being counted. The GSCs are: Data communications, Distributed data
processing, Performance, Heavily used configuration, Transaction rate, On-line data entry, End-user
efficiency, On-line update, Complex processing, Reusability, Installation ease, Operational ease, Multiple
sites, and Facilitate change. The degree of influence of each characteristic has to be determined as a rating on
a scale of 0 to 5 as defined below.

Influence Rating

Not present, or no influence 0

Incidental influence 1

Moderate influence 2

Average influence 3

Significant influence 4

Strong influence throughout 5

Once all the GSCs have been rated, the Total Degrees of Influence (TDI) is obtained by summing up all the
ratings. The Value Adjustment Factor is then calculated using the formula:

VAF = 0.65 + TDI/100

Final FP Count

After determining the Unadjusted Function Point count (UFP) out of transactional and data function types,
and calculating the Value Adjustment Factor (VAF) by rating the general system characteristics, the final
Function Point count can be calculated using the formula:

FP = Unadjusted Function Point count (UFP) * Value Adjustment Factor (VAF)
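A minimal sketch of the last two steps, with made-up GSC ratings and an assumed UFP of 59:

```python
# VAF = 0.65 + TDI/100, then FP = UFP * VAF.
# The 14 GSC ratings (0-5 each) below are example values only.
gsc_ratings = [3, 2, 4, 1, 3, 5, 2, 1, 4, 2, 0, 3, 1, 2]

tdi = sum(gsc_ratings)        # Total Degrees of Influence
vaf = 0.65 + tdi / 100.0      # Value Adjustment Factor
ufp = 59                      # assumed Unadjusted FP count
fp = ufp * vaf                # final adjusted Function Point count
```

Since each of the 14 GSC ratings lies between 0 and 5, TDI lies between 0 and 70, so the VAF can adjust the UFP by at most ±35%.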

Lines of Code
Lines of code (often referred to as Source Lines of Code, SLOC or LOC) is a software metric used to
measure the amount of code in a software program. LOC is typically used to estimate the amount of effort
that will be required to develop a program, as well as to estimate productivity once the software is produced.
Measuring software size by the number of lines of code has been in practice since the inception of software.

There are two major types of LOC measures: physical LOC and logical LOC. The most common definition
of physical LOC is a count of "non-blank, non-comment lines" in the text of the program's source code.
Logical LOC measures attempt to measure the number of "statements", but their specific definitions are tied
to specific computer languages (one simple logical LOC measure for C-like languages is the number of
statement-terminating semicolons). It is much easier to create tools that measure physical LOC, and physical
LOC definitions are easier to explain. However, physical LOC measures are sensitive to logically irrelevant
formatting and style conventions, while logical LOC is less sensitive to formatting and style conventions.
Unfortunately, LOC measures are often stated without giving their definition, and logical LOC can often be
significantly different from physical LOC.
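A minimal physical-LOC counter along these lines, assuming "#" as the comment marker (as in Python); a real tool must know each language's comment syntax:

```python
# Physical LOC: count non-blank, non-comment lines of source text.
def physical_loc(source_text):
    count = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        # Skip blank lines and lines that are only a comment.
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = "# header comment\nx = 1\n\ny = x + 1  # trailing comment\n"
loc = physical_loc(sample)
```

Note how the result depends on the counting rules: a logical-LOC counter for the same sample might report a different figure, which is exactly why the definition in use must always be stated.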

There are several cost, schedule, and effort estimation models which use LOC as an input parameter,
including the widely-used Constructive Cost Model (COCOMO) series of models invented by Dr. Barry
Boehm. While these models have shown good predictive power, they are only as good as the estimates
(particularly the LOC estimates) fed to them.

Function Points – Advantages & Disadvantages

Advantages

Function Point Analysis (FPA) provides the best objective method for sizing software projects, and for managing the
size during development. Following are some of the many advantages that FPA offers.

(a) Helps Comparison: Since Function Points measure systems from a functional perspective, they are
independent of technology. Regardless of the language, development method, or hardware/platform used, the
number of FP for a system will remain constant. The only variable is the amount of effort needed to deliver a
given set of FP; therefore, Function Point Analysis can be used to determine whether a tool, an environment,
or a language is more productive compared with others within an organization or among organizations. This is
a critical point and one of the greatest values of Function Point Analysis.

(b) Helps Monitor Scope Creep: Function Point Analysis can provide a mechanism to track and monitor
scope creep. FP counts at the end of requirements, analysis, design, code, testing and deployment can be
compared. The FP count at the end of requirements and/or designs can be compared to FP actually delivered.
If the project has grown, there has been scope creep. The amount of growth is an indication of how well
requirements were gathered by and/or communicated to the project team. If the amount of growth of projects
declines over time it is a natural assumption that communication with the user has improved.

(c) Ease of Contract Negotiations: From a customer viewpoint, Function Points can be used to help specify
to a vendor, the key deliverables, to ensure appropriate levels of functionality will be delivered, and to
develop objective measures of cost-effectiveness and quality. They are most effectively used with fixed price
contracts as a means of specifying exactly what will be delivered. From a vendor perspective, successful
management of fixed price contracts depends absolutely on accurate representations of effort. Estimation of
this effort (across the entire life cycle) can occur only when a normalized metric such as the one provided by
Function Points is applied.

(d) Handling Volatility: The advantage that Function Points bring to early estimation is that they are
derived directly from the requirements and hence show the current status of requirements completeness. As
new features are added, the function point total will go up accordingly. If the organization decides to remove
features or defer them to a subsequent release, the function point metric can also handle this situation very
well, and reflect true state.

(e) Use of Historic Data: Once project size has been determined in Function Points, estimates for duration,
effort, and other costs can be computed by using historic data. Since FP is independent of languages or tools,
data from similar past projects can be used to produce consistent results, unlike Lines of Code data, which is
much more tightly tied to languages and requires many other parameters to be taken into account.

(f) Availability of Empirical Formulae: Unlike lines of code, FP can be used more effectively to develop
many predictive formulae, such as defect potential and maintenance effort, which can help pinpoint opportunities
for improvement. Capers Jones estimates that Function Points raised to the 1.2 power (FP^1.2) approximates the
number of test cases. That is, test cases grow at a faster rate than Function Points. This is logically valid
because as an application grows, the number of interrelationships within the application becomes more
complex, requiring more test cases. Many empirical formulae suggested by Capers Jones are
in wide use among FP practitioners.
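For example, applying this rule of thumb to an application of 1,000 Function Points:

```python
# Rule of thumb quoted above: test cases ~ FP ** 1.2.
def estimated_test_cases(function_points):
    return function_points ** 1.2

n = estimated_test_cases(1000)  # roughly 3981 test cases
```

Because the exponent exceeds 1, doubling the FP count more than doubles the estimated test cases, which matches the argument that interrelationships grow faster than size.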

(g) Enables Better Communication: FP can help improve communication with senior management, since it
speaks in terms of functionality rather than implementation details, technical aspects, or physical code.
Furthermore, Function Points are easily understood by the non-technical user. This helps communicate
sizing information to a user (or customer) as well.
(h) Offers Better Benchmarking: Since FP is independent of language, development methodology,
programming practices, and technology domain, projects using FP become better candidates for
benchmarking across organizations and geographies.

Disadvantages

Function Points offer a vast number of benefits by capturing the size of the software from a functionality
standpoint. FPA does have some disadvantages; however, organizations can easily overcome these problems
by practicing FPA consistently over a period of time.

(a) Requires Manual Work: Due to its very nature, Function Points have to be counted manually. The
counting process cannot be automated.

(b) Necessitates Significant Level of Detail: A great level of detail is required to estimate software size
in terms of Function Points. Information on inputs, outputs, screens, database tables, and even records and
fields is required to perform FPA accurately. Typically, requirements are not clear to this level of detail at
the beginning of a development project.

(c) Requires Experience: Function Point Analysis requires a good deal of experience if it is to be done
precisely. FPA inherently requires sufficient knowledge of the counting rules, which are comparatively
difficult to understand.

Lines of Code – Advantages & Disadvantages

Advantages

(a) Scope for Automation of Counting: Since a Line of Code is a physical entity, manual counting effort can
easily be eliminated by automating the counting process. Small utilities may be developed for counting the
LOC in a program. However, a code-counting utility developed for a specific language cannot be used for
other languages due to the syntactical and structural differences among languages.

(b) An Intuitive Metric: A Line of Code serves as an intuitive metric for measuring the size of software because
it can be seen and its effect can be visualized. A Function Point is a more abstract metric which cannot be
imagined as a physical entity; it exists only in the logical space. Thus, LOC comes in handy to express the
size of software among programmers with low levels of experience.

Disadvantages
(a) Lack of Accountability: The lines of code measure suffers from some fundamental problems. First and
foremost, it is inaccurate to measure the productivity of a development project by the outcome of only one of
its phases (the coding phase), which usually accounts for only 30% to 35% of the overall
effort.

(b) Lack of Cohesion with Functionality: Though experiments have repeatedly confirmed that effort is
highly correlated with LOC, functionality is less well correlated with LOC. That is, skilled developers may
be able to deliver the same functionality with far less code, so one program with fewer LOC may exhibit more
functionality than another similar program. In particular, LOC is a poor productivity measure of individuals,
since a developer who writes only a few lines can still be more productive than a developer creating more
lines of code.

(c) Adverse Impact on Estimation: As a consequence of the problem presented under point (a), estimates
based on lines of code can go badly wrong.

(d) Developer's Experience: The implementation of a specific piece of logic differs based on the level of
experience of the developer. Hence, the number of lines of code differs from person to person. An experienced
developer may implement certain functionality in fewer lines of code than a developer of relatively less
experience, though they use the same language.

(e) Difference in Languages: Consider two applications that provide the same functionality (screens, reports,
databases), one written in C++ and the other in a language like COBOL. The number of function points would
be exactly the same, but aspects of the applications would be different. The lines of code needed to develop
them would certainly not be the same, and as a consequence the amount of effort required to develop the
application (hours per function point) would differ. Unlike Lines of Code, the number of Function Points
remains constant.

(f) Advent of GUI Tools: With the advent of GUI-based languages/tools such as Visual Basic, much
development work is done by drag-and-drop and a few mouse clicks, and the programmer writes virtually no
code most of the time. It is not possible to account for the code that is automatically generated in this case.
This invites huge variations in productivity and other metrics across different languages, making Lines of
Code increasingly irrelevant in the context of GUI-based languages/tools, which are prominent in the present
software development arena.

(g) Far from OO Development: A Line of Code has no meaning in Object-Oriented development, where
everything is treated in terms of objects and classes. Since an object is a true representation of data and
functionality, and so is a Function Point, FPA remains more relevant for Object-Oriented software
development.

(h) Problems with Multiple Languages: In today's software scenario, a single language is rarely used for
development. Very often, a number of languages are employed, depending upon the complexity and
requirements. Tracking and reporting productivity and defect rates poses a serious problem in this case,
since defects cannot be attributed to a particular language after integration of the system. Function
Points stand out as the best measure of size in this case.

(i) Lack of Counting Standards: There is no standard definition of what a line of code is. Do comments
count? Are data declarations included? What happens if a statement extends over several lines? These are
questions that often arise. Though organizations like SEI and IEEE have published some guidelines in an
attempt to standardize counting, it is difficult to put them into practice, especially in the face of new
languages being introduced every year.

Remarks:

There are many uses of Function Points beyond estimating schedule, effort, and cost as discussed in the
preceding sections. Many organizations use function points and software metrics just to report
organizational-level trends. Many project teams report data to a central metrics group and never see the data
again; it is equivalent to reporting data into a black hole. If project managers begin to understand how
Function Points can be used to estimate costs, productivity, quality, test cases, and maintenance
costs, they will be more likely to invest in counting Function Points and make effective use of
FP.

On the other hand, any metrics that we use should be indicators of performance, not exact measures of
performance. They should provide enough granularity to show general trends, identify problem areas, and
demonstrate progress. Trying to make metrics too perfect causes them to be reported two to three months
after they are taken. As a consequence, too much time is being spent on precision and not enough on action.
Metrics should be used in such a way that they aid efficient project tracking and monitoring and they should
act as good indicators.

2.8 Constructive Cost Model (COCOMO):


The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed
by Barry Boehm. Almost every final-year project report ("black book") includes a COCOMO estimate for
its project.

Cost Analysis
For a given set of requirements it is desirable to know how much it will cost to develop the software to
satisfy the given requirements, and how much time development will take. These estimates are needed before
development is initiated. The primary reason for cost and schedule estimation is to enable the client or
developer to perform a cost-benefit analysis and for project monitoring and control. Cost in a project is due
to the requirements for software, hardware and human resources. Most cost estimates are determined in terms
of Person month (PM).

We have used COCOMO (Constructive Cost Model). The Intermediate COCOMO model computes software
development effort as a function of program size and a set of "cost drivers" that include subjective
assessments of product, hardware, personnel and project attributes. This model estimates the total effort in
terms of person-months of the technical project staff. The important steps in this analysis are:

Obtain an initial estimate of the development effort from an estimate of the thousands of delivered lines of
source code (KLoC).
The initial estimate (also called the nominal estimate) is determined by an equation of the form used
in the static single-variable models, using KLoC as the measure of size.
To determine the initial effort Ei in person-months, the equation used is:

Ei = a * (KLoC)^b

where a and b are constants determined by the type of the project. Since this project is of the Windows-based
type, a = 1.40, b = 0.60, and the size is 0.874 KLoC. Thus the value of Ei is:
Ei = 1.40 * (0.874)^0.60 = 1.40 * 0.9224 ≈ 1.2913 PM
Determine a set of 15 multiplying factors from different attributes of the product which are:

Table: Cost Estimation Table


Cost Drivers Very low Low Normal High Very
High

Product Attribute

RELY, required reliability 0.75 0.88 1.00 1.15 1.40


DATA, database size 0.94 1.00 1.08 1.16
CPLX, product complexity 0.70 0.85 1.00 1.15 1.30
Computer Attribute

TIME, execution time 1.00 1.11 1.30


constraint
STOR, main storage constraint 1.00 1.06 1.21

VITR, virtual machine volatility 0.87 1.00 1.15 1.30

TURN, computer turnaround 0.87 1.00 1.07 1.15


time
Personnel Attribute
ACAP, analyst capability 1.46 1.19 1.00 0.86 0.71

AEXP, application experience


1.29 1.13 1.00 0.91 0.82

PCAP, programmer capability


1.42 1.17 1.00 0.86 0.70

VEXP, virtual machine 1.21 1.10 1.00 0.90


experience
LEXP, programming language
1.14 1.07 1.00 0.95
experience
Project Attributes
MODP, modern programming
1.24 1.10 1.00 0.91 0.82
practices
TOOL, use of SW tools 1.24 1.10 1.00 0.91 0.83

SCHED, development 1.23 1.08 1.00 1.04 1.10


schedule

Adjust the effort estimate by multiplying the initial estimate with the entire multiplying factor.

We have taken the factors:

 Reliability

Complexity
Time Constraints
Turnaround time
Analyst capability
Programmer capability
Programming language experience
Modern Programming practices
Use of SW tools
Development Schedule
Based on these factors we have calculated, Effort Adjustment Factor (EAF) as follows:
EAF = 1.15 * 0.85 * 1.00 * 0.87 * 1.00 * 1.00 * 1.07 * 1.10 * 0.91 * 1.00
= 0.91087

The final effort estimate, E is determined by multiplying the initial estimate by the EAF:
E=EAF * Ei

= 0.91087 * 0.73416

= 0.6687 Person Month

We take the assumption charges are 40 rupees per day.

Total estimation = 191 * 0.6687 * 40

= 5100 Rupees.
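As a cross-check, the intermediate COCOMO calculation above can be sketched in a few lines of code. This is a minimal sketch: the constants a = 1.40 and b = 0.60, the 0.874 KLoC size and the cost-driver multipliers are the values assumed in this example, not standard constants.

```python
# Intermediate COCOMO sketch using the constants and cost-driver
# multipliers assumed in the worked example above.

A, B = 1.40, 0.60          # project-type constants (assumed for this example)
KLOC = 0.874               # size in thousands of delivered source lines

# Cost-driver multipliers chosen in the example.
eaf_factors = [1.15, 0.85, 1.00, 0.87, 1.00, 1.00, 1.07, 1.10, 0.91, 1.00]

def intermediate_cocomo(kloc, a, b, factors):
    """Return (nominal effort, EAF, adjusted effort) in person-months."""
    nominal = a * kloc ** b            # Ei = a * (KLoC)^b
    eaf = 1.0
    for f in factors:                  # EAF is the product of all multipliers
        eaf *= f
    return nominal, eaf, nominal * eaf

ei, eaf, e = intermediate_cocomo(KLOC, A, B, eaf_factors)
print(f"Ei = {ei:.2f} PM, EAF = {eaf:.5f}, E = {e:.2f} PM")
```

Running this reproduces the figures used above: Ei ≈ 1.29 PM, EAF ≈ 0.91087, E ≈ 1.18 PM.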
Chapter 3 Software Project Activity Planning & Resource Allocation

3.0 Project management activities:

a) Introduction
Project management activities are the activities that are the responsibility of the project manager and that are
usually performed (if not delegated) by the project manager. There is no fixed list or classification of project
management activities. Some of them are listed below:
1. Planning, organizing and coordinating the work of the project team.
2. Acquiring and allocating human and other resources.
3. Controlling project execution, tracking and reporting progress.
4. Solving problems and conflicts both inside the project team and with other parties.
5. Assessing and controlling risk.
6. Informing the project team and other involved parties about the status of the project, as well as about
successes and problems.
7. Creating the necessary work environment.
8. Encouraging devotion, enthusiasm and creativity inside the project team.
Probably the most systematic approach to project management activities is presented in Project Management
Maturity Model (PMMM).

b) Project management artefacts

In addition to managerial competences, project managers should be able to use and develop a number of
instruments and to master the necessary techniques (including computing skills). Project management artefacts are
documents that regulate the project execution. Depending on the project size and type, the list of necessary artefacts
can vary, but most often the following artefacts are present:

1. Needs analysis and/or feasibility study.


2. Project charter.
3. Terms of reference/scope statement.
4. Work breakdown structure and/or project schedule.
5. Project management plan and/or responsibilities assignment document.
6. Communications plan.
7. Resource management plan.
8. Change control plan.
9. Risk management plan and/or table/database of risks.
10. Lessons learned document/database.

Taking into account that project management covers a broad range of competences and activities, the skills and
knowledge necessary for project management are needed by everybody who must:
Perform a task during a certain period of time;
Deal with complex problems requiring solutions by activities that will run partly in parallel;
Accomplish the tasks with limited resources;
Co-operate in performing tasks with other people;
Take into account the changing needs of the customers etc.

c) Project management process groups

Project management is an integrative undertaking that deals with different types of activities. All activities have
certain common features: they should be initiated, planned, executed, controlled and closed. These features are
applicable at different levels, from a single action up to the whole project.

Initiating processes are processes that start the project, each of its phases, activities or actions. Even project
closing needs to be initiated: activities should be started to confirm that the outcome satisfies the needs of the
customers, that the necessary project documentation is present, etc.
Planning processes are processes that are necessary for performing the executing processes. Planning processes
include scope planning, activity definition and sequencing, schedule composition, resource planning, cost
estimation, budgeting, etc.
Executing processes are processes that coordinate people and other resources to carry out the plan.
Controlling processes are monitoring and measuring processes ensuring that project objectives are met and
that corrective actions are taken when necessary.

Closing processes are processes that lead a project or its phase to an orderly end. The processes related to an
undertaking can overlap to a smaller or larger extent on the time-scale. In general, initiating processes are
performed before planning processes, planning processes before executing processes, and executing processes
before closing processes. Controlling processes usually cover the whole time-scale of the undertaking.

d) Project managers competency development


After the Project Management Institute developed a systematic approach to the project management knowledge
areas (the PMBOK Guide), the institute also developed guidance for developing project managers' competencies:
the Project Manager Competency Development Framework (PMCD Framework). It is applicable to all project
managers, regardless of the nature, type, size, or complexity of their projects.
The PMCD Framework considers competences in three separate dimensions (denoted by K, P and B, respectively):
1) Project Management Knowledge (what a project manager brings to a project through knowledge and
understanding of project management);
2) Project Management Performance (what a project manager is able to demonstrate in the ability to successfully
manage a project);
3) Personal Competency (the core personality characteristics underlying a person's capability to manage a project,
adopted from the Spencer Model).
The competences in each dimension are structured as follows:
Units → clusters → elements → performance criteria → examples of assessment guidelines.

The project management knowledge/performance competences provide a basis and guidance for developing the
instruments required for developing and assessing these competences.

3.1. Work Breakdown and Schedule

In this part we describe the breakdown of the project into activities and identify the milestones and
deliverables associated with each activity.
1. Requirements analysis

The requirements submitted by the customer group will be analyzed, and a requirements specification
report will be produced as a project deliverable.
2. Hardware and Software Installation

Hardware and Software will be determined and this task will be accomplished by delivering hardware and
software specifications (development platform).

3. Database Design

The database of the system shall be designed. The milestone will be the database design report.

4. User interface design

The user interface shall be designed and a report on the user interface design will be delivered.

5. Interface Design

This specifies how the application program will connect to the database system. An interface design report
shall be given as a milestone. The deliverable for the three design activities (database design, user interface
design and interface design) will be the software design specifications.

6. Database Implementation

Depending on the relational database system specified in the design phase, the database tables and
relationships will be implemented. Documentation for the database tables and table relationships shall be
delivered.

7. Software Implementation

The actual programming will be carried out. CASE tools may be used to generate a skeleton program from the
design. The deliverable here will be a working system.

8. System Testing

The system will be tested in the presence of our customer. A system testing report will be delivered as a
milestone.

9. System Manual

This activity will involve writing the User's Guide manual.

10. System delivery and User training

The activity will include delivering the system to the customer by installing it in their office and giving
training on how to use the system.

TASK DURATIONS AND DEPENDENCIES

TID   Task                                          Duration   Dependencies
T1    Requirement Analysis and Specification (M1)   3          -
T2    Hardware and Software Installation            2          -
T3    Database Design (M2)                          3          T1 (M1)
T4    User Interface Design (M3)                    4          T1 (M1)
T5    Interface Design                              2          T3 (M2)
T6    Database Implementation (M4)                  3          T3
T7    Software Implementation (M5)                  20         T4, T5
T8    System Testing                                4          T7 (M5)
T9    Develop User's Manual (M6)                    6          T7 (M5)
T10   System Delivery and User Training             3          T7, T9
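The table above can be fed into a short forward-pass calculation to find each task's earliest finish time and the overall project length. This is a sketch: the durations and dependencies come from the table, but T10's garbled second dependency is assumed here to be T9.

```python
# Forward-pass schedule calculation over the task table above.
# Durations are in the table's (unspecified) time units.

tasks = {                      # task -> (duration, [predecessors])
    "T1": (3, []),   "T2": (2, []),
    "T3": (3, ["T1"]), "T4": (4, ["T1"]), "T5": (2, ["T3"]),
    "T6": (3, ["T3"]), "T7": (20, ["T4", "T5"]), "T8": (4, ["T7"]),
    "T9": (6, ["T7"]), "T10": (3, ["T7", "T9"]),   # T9 dependency assumed
}

finish = {}                    # earliest finish time of each task

def earliest_finish(tid):
    """Earliest finish = duration + max(earliest finish of predecessors)."""
    if tid not in finish:
        dur, preds = tasks[tid]
        finish[tid] = dur + max((earliest_finish(p) for p in preds), default=0)
    return finish[tid]

project_length = max(earliest_finish(t) for t in tasks)
print(project_length)  # → 37
```

With these figures the long T1 → T4 → T7 → T9 → T10 chain dominates, giving a minimum project length of 37 units.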

3.2 Time-table of a project:


Presenting the project time-table in graphical form gives a good overview of the progress. The time-table:

 describes which activities should be started, and in what order;

 helps to concentrate attention on critical tasks;

 makes it possible to spot bottlenecks and correct activities before problems escalate;

 is a tool for enhancing co-ordination between the partners;

 increases devotion and the partners' feeling of mutual dependence;

 and, most importantly, is a tool for completing the project on time.

To avoid time overruns, the following principles should be taken into account:

People tend to complete tasks by the fixed deadline even if they could be completed sooner. When
too much time is planned for a task, some time will be wasted or, vice versa, the task will be completed
but not handed over before the deadline. Sometimes it is not possible to start subsequent activities before the
planned date because the necessary conditions are not fulfilled (for example, if contracts with third parties are
made for a later, fixed, date).

Activities should be concentrated: performing several tasks at the same time means that a lot of time is
needed to complete each task, and therefore it is not possible to start subsequent, dependent, tasks. It is
also suggested that tasks should not be divided into subtasks that are performed in interleaved order: for
example, the scheme AAABBBCCC should be used instead of ABCABCABC.

Time overruns accumulate; time saved in one stage or activity does not shorten the whole project. This is
why it is necessary to plan some additional time at the end of the project. One possible algorithm for
determining the duration of the project is the following: 1) find a period of time that is sufficient for
completing the project; 2) plan the project activities into 50% of the time found; 3) add 25% of the time
initially found to the end of the project (the project buffer).

Double attention should be paid to the activities that form the critical chain (or critical path): activities
that depend on each other and determine the end date of the project. The dependencies are of two types:

1) A subsequent activity cannot be started until a preceding activity is completed, because it uses the
result of the preceding activity;

2) A subsequent activity uses the same resources as a preceding activity.

To assure the timely start of activities in the critical chain, it is suggested to plan incoming buffers to
absorb possible delays. If, in the course of the project, a delay grows larger than the proportional share of
the project buffer reserved for it, some extra measures should be developed and applied.

Time-tables are usually presented by Gantt charts.

A Gantt chart (or bar chart, first used in 1917 by Henry Gantt) is a two-dimensional table where the horizontal
axis represents the flow of time and the vertical axis represents activities. The duration of each activity is
represented by a horizontal bar. Bar charts can also be used for progress assessment while running the project
(by painting the bars that correspond to finished tasks in a different color). Bar charts can be used for planning
other resources as well (people, infrastructure, money, etc.).

Example: a bar chart for developing and piloting a new university course over 16 weeks (unit of measurement:
one week), with the following activities and approximate durations in weeks:
1. Planning seminars (3); 2. Revision of work plan (3); 3. Needs analysis (3); 4. Contracts with developers and
teachers/tutors (2); 5. Course development (6); 6. Creation of technical infrastructure (5); 7. Testing of a
module (2); 8. Composition of a questionnaire (4); 9. Feedback analysis (3); 10. Modification of the course (2);
11. Marketing the course (3); 12. Development of a support system for the course (3); 13. Piloting the
course (3); 14. Assessment of the course (1).

To visualize dependencies between activities, arrows from the preceding activity to the subsequent activity
are used. Milestones can be represented by activities with no duration (that is, a dot or bold line in the
corresponding cell).

Other possibilities for presenting the schedule are the Critical Path Method (CPM) and the Program Evaluation
and Review Technique (PERT), which were developed in the 1950s, or their modifications. The structure of the
project can be represented by a graph where vertices represent the milestones/subgoals and edges represent
activities leading to these milestones/subgoals. Each vertex is represented by a circle; inside the circle are
written: 1) the milestone/subgoal or its number and 2) the earliest and latest possible dates of achieving the
milestone/subgoal. Each arrow is marked with the name/number of the activity and its expected duration.
Arrow diagrams cannot contain loops.
3.3 Composition of project team
Selecting project partners is almost as decisive for success as selecting a spouse. Indeed, planning and
running a project is a collaborative undertaking where the result is determined by the quality and effectiveness of
every participant. When choosing partners, there are some general principles that it is reasonable to follow:

1. The partners should be motivated.

Motivated partners are interested in achieving the project goals, and their attitude to project activities is not
merely formal. Motivated partners support each other in solving problems. Attention should be paid to every
sign of a lack of motivation.
For example, an institute was invited to take part in a project consortium applying for a TEMPUS JEP;
however, over two months it was not possible to find a meeting date suitable for this institute to discuss the
application. Instead, another institute was chosen as a partner.

2. Partners who already have experience in running similar projects are preferred.

Having partners who already have experience reduces the risk of problems emerging, above all when
running projects in the framework of (relatively regulated) programs.

3. Leading institutions (persons) are preferred as partners.

With leading institutions as partners:

 new competencies can be acquired and the quality of the project's outcome is expected to be higher;
 there are good opportunities to be invited to participate in other projects and to build up cooperation with
third parties;
 it is easier to be accepted into professional communities.

One should also mention some possible threats. Leading institutions run a huge number of projects in
parallel, and their contribution to the project can therefore be smaller than planned. To avoid this, the tasks
of each partner should be discussed and agreed in detail. For example, possible replacements should be
discussed for each "risky" person. It is a real challenge to find a good balance between principles 1 and 3,
because very often the leading institutions are not very motivated to devote much time to one particular
project (out of hundreds of projects). On the other hand, small and relatively unknown institutions can be
very motivated, because these institutions need to establish themselves in the professional community and
consequently cannot afford failures.
The following principle is generally accepted: if a highly recognized person/institution does not harmonize
with the project team, it is wise to abandon the partnership.

4. It is wise to prefer partners with whom there has already been good cooperation.

The strengths and weaknesses of such partners are known, and this can be taken into account already in the
planning phase of the project. With completely new partners there is always a risk of not being understood,
of being interpreted differently, or simply of the partner not having the necessary knowledge. For example,
one of our partners did not come to an important meeting because of domestic problems; the tickets were
lost and there was huge trouble signing off the costs. There are also partners who do not answer e-mails
or do not keep deadlines.

5. It is important that the interests of a single partner do not dominate over the project's objective.

In many cases a partner has tried to solve internal problems that did not belong to the scope of the project
using the project's resources. For example, a university proposed that the project should cover the study fees
of some visiting students.
6. It is suggested that the partners complement each other and have different experience, competences
and approaches (academic, pragmatic, commercial, etc.).

Diversity is a good precondition for the emergence of new quality. Novice actors who are not bound by
traditional approaches can therefore sometimes make a remarkable contribution to the project. On the
other hand, this principle is a source of certain risks as well.

For example, one task of a European project relied to a great extent on one expert. After the expert
moved to another institution, the partner institution was not able to perform the task, and this partner was
replaced.

7. The partners should accept the conditions set by the donors.

For example, one of the partners always came to meetings by plane, in business class, while the
financing regulations accepted economy class only.
In another project, one partner submitted an invoice for transferring intellectual property rights to the
project consortium, although according to the agreement signed by the partners, all the outcomes of the
project were owned by the project consortium. After this failed, the partner started to use double salary
rates. Finally, the partner was replaced.
To prevent such cases, it is suggested, and even required by most European programs, that the partners
sign agreements on mutual responsibilities. The last example shows that this will not always prevent
conflicts.

Depending on the type of project, additional principles apply. For example, for EU research and
development projects, the Irish expert Sean McCarthy listed the following indicators of a good partner:

 researchers who are doing top-level research only;
 end users who have a vision of how the outcome of the project can be applied;
 PhD students whose doctoral theses are to a great extent based on the project;
 research administrators who have proven to be effective project managers.
At the same time he characterised people who often create problems and therefore should not be taken into
project teams:
 energetic project planners who are interested in getting the financing, not so much in running the project;
 partners who will not take responsibility;
 dominating researchers who pretend to be the only key person in the project;
 formalistic researchers who take care first of all of the project documentation, not so much of the quality
of the outcome;
 partners with a "fuzzy" structure who always delegate different people to the meetings;
 incorrect partners who do not honour agreements and perform the tasks at their own discretion;
 partners who constantly need to be reminded of their tasks and who leave the tasks to the last minute.

Partly different aspects should be considered if you are invited to become a partner. For example, the
following problems can arise:

1. Your work and competence will be exploited.


This means that the amount of work expected from you and the amount of resources allocated for it do not
correspond to each other. For example, a foreign university offered a partner about 25% of the total work in
a project with only 3% of the budget.
It is also possible to benefit from a weak partner. For example, one university was not able to co-ordinate a
joint project with five participating universities; the partners agreed to change the coordinating university
and redesigned the budget accordingly.

2. Incompetent co-ordination of the project can harm not only the image of the coordinating
institution but also the image of the partners.
The possible arguments and preferences of the evaluating experts should also be taken into account when
choosing partners. For example, according to Sean McCarthy, the composition of the consortium caused
rejection in about 75% of cases for EU 5th Framework projects, because competence and experience in
running similar projects were the most important aspects assessed.
Another example was a project submitted to the 6th Framework program by six open universities for the
development of conceptual models for distance teaching. As most distance teaching in Europe is performed
by traditional universities, the project was not accepted.
Risks caused by the project team are considered among the most dangerous; the project manager should
be able to replace members of the project team who are not performing well enough, and be able to
estimate the costs incurred by replacements.

General suggestions:

1. Not too many partners! The complexity of co-ordination grows rapidly with the number of partners
(the number of communication channels between n partners is n(n-1)/2).
2. What matters are the people working in an institution, not the institution as such.
For example, a researcher applied for a long-term grant to stay at a German university. As there were many
applicants for Germany (and only one grant available) and no applicants for Japan (which also offered a
long-term grant), the researcher was proposed to go to Japan. The researcher rejected this proposal because
Japan did not belong to the leading countries in the researcher's area of research. Similarly, Göttingen
University belonged to the leading universities in the world before World War II; according to commonly
accepted ratings, today this university is not even among the twenty best-performing universities in Europe!

3.4 Resource Allocation in Software Project Management:

Project Insight gives project managers power over the management of resource allocation for software
development, marketing, product development teams and more. Assigning team members to business goals,
projects and individual tasks is simple and easy with our PMI and PMBOK® Guide compliant solution. Mass
assign team members' tasks grouped by skill set, department or resource type, or handle resource allocation
management for a single person. It is equally simple to change a resource on a set of project tasks as well.
Our portfolio system allows resource allocation managers and project managers to use project level and/or
cross project resource allocation to manage workloads in order to achieve their goals. The software
application reports evenly divide the work (hours) among the workdays (duration) scheduled for the tasks to
calculate the total work or effort assigned to a resource within a specified date range.
a) Efficient Resource Allocation and Workload Management

Resource information may be accessed from the 'Resources' tab within a project to review the availability of
resources. Project Insight, a web project management tool, provides real-time resource allocation data
based on the allocation of assignments to project tasks system-wide. Project managers can also view all
resources across all projects in Project Insight. This information is accessed in 'My Reports' under 'Cross
Project Resource Allocation.' Data may be hidden or displayed according to each person's preferences, supporting a
wide variety of applications for these reports. Hundreds of permutations of resource allocation reports are
available.
Other project management software applications claim to have extensive resource allocation capabilities in
their marketing materials; however, they often fall short. Project Insight not only allows resource managers
or project managers to see the total workload each resource has per day, week or other time period, it allows
them to drill down on all of the projects and tasks that are causing the over allocation in one view. Tasks can
easily be reassigned using Project Insight's simple drag and drop functionality. It's perfect for the
management of all kinds of goals, tasks and projects including IT projects, interactive or marketing projects,
product development projects, professional services and more. All tasks are efficiently managed with proper
resource allocation and tracking, down to the last detail.
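The underlying workload arithmetic (spreading each task's hours evenly over its scheduled workdays, then summing per resource per day) can be sketched as follows. The task data and the 8-hour daily capacity are hypothetical illustrations, not Project Insight's actual data model.

```python
# Per-day workload sketch: each task's hours are spread evenly over its
# workdays, then summed per day to spot over-allocation.
# Task data and the 8-hour/day capacity are hypothetical.

tasks = [                      # (task name, total hours, list of workdays)
    ("Design review", 12, ["Mon", "Tue", "Wed"]),
    ("Bug fixing",     8, ["Tue", "Wed"]),
    ("Documentation",  6, ["Wed", "Thu"]),
]
CAPACITY = 8.0                 # available hours per workday

load = {}
for name, hours, days in tasks:
    per_day = hours / len(days)        # even split across the duration
    for day in days:
        load[day] = load.get(day, 0.0) + per_day

for day in ["Mon", "Tue", "Wed", "Thu"]:
    flag = "OVER-ALLOCATED" if load[day] > CAPACITY else "ok"
    print(f"{day}: {load[day]:.1f} h {flag}")
```

With these figures, Wednesday carries 11 hours of work against an 8-hour capacity, which is exactly the kind of over-allocation a cross-project resource report is meant to surface.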

b) Time Tracking and Expense Tracking

Straightforward time entry allows team members to enter their daily time and expenses in under five
minutes. Now project managers have updates on tasks and projects in real time instead of wasting time
asking team members for task status. Team members may customize their own time worksheets and
even set the time entry grid as their home page.

Enter time and percentage complete on all tasks in one place


Account for non-project time like customer meetings, holidays, and administrative time
Submit time sheets or expense reports for approval
Attach scanned receipts to expenses
Report on all time and expenses across multiple projects
Roll up reports by customer, resource, department, organization and more
Create invoices from approved time and expenses
Export to QuickBooks or Microsoft Excel
Pass time, expenses or invoices to other enterprise systems through Web Services APIs
3.5 Developing Project Schedule:

Can you imagine starting a long car trip to an unfamiliar destination without a map or navigation system?
You're pretty sure you have to make some turns here and there, but you have no idea when or where, or how
long it will take to get there. You may arrive eventually, but you run the risk of getting lost, and feeling
frustrated, along the way.
Essentially, driving without any idea of how you're going to get there is the same as working on a project
without a schedule. No matter the size or scope of your project, the schedule is a key part of project
management. The schedule tells you when each activity should be done, what has already been completed,
and the sequence in which things need to be finished.

Luckily, drivers have fairly accurate tools they can use. Scheduling, on the other hand, is not an exact
process. It's part estimation, part prediction, and part 'educated guessing.'
Because of the uncertainty involved, the schedule is reviewed regularly, and it is often revised while the
project is in progress. It continues to develop as the project moves forward, changes arise, risks come and go,
and new risks are identified. The schedule essentially transforms the project from a vision to a time-based
plan.

Schedules also help you do the following:

They provide a basis for you to monitor and control project activities.
They help you determine how best to allocate resources so you can achieve the project goal.
They help you assess how time delays will impact the project.
They show where excess resources are available to allocate to other projects.
They provide a basis to help you track project progress.

Project managers have a variety of tools for developing a project schedule, from the relatively simple process
of action planning for small projects to the use of Gantt charts and network analysis for large projects. Here,
we outline the key tools you will need for schedule development.

3.6 Project Management Software Tool:


There are many project scheduling software products which can do much of the tedious work of calculating
the schedule automatically, and plenty of books and tutorials dedicated to teaching people how to use them.
However, before a project manager can use these tools, he should understand the concepts behind the work
breakdown structure (WBS), dependencies, resource allocation, critical paths, Gantt charts and earned value.
These are the real keys to planning a successful project.
a) Allocate Resources to the Tasks:

The first step in building the project schedule is to identify the resources required to perform each of the
tasks required to complete the project. A resource is any person, item, tool, or service that is needed by the
project that is either scarce or has limited availability. Many project managers use the terms "resource" and
"person" interchangeably, but people are only one kind of resource. The project could include computer
resources (like shared computer room, mainframe, or server time), locations (training rooms, temporary
office space), services (like time from contractors, trainers, or a support team), and special equipment that
will be temporarily acquired for the project. Most project schedules only plan for human resources—the
other kinds of resources are listed in the resource list, which is part of the project plan.
One or more resources must be allocated to each task. To do this, the project manager must first assign the
task to people who will perform it. For each task, the project manager must identify one or more people on
the resource list capable of doing that task and assign it to them. Once a task is assigned, the team member
who is performing it is not available for other tasks until the assigned task is completed. While some tasks
can be assigned to any team member, most can be performed only by certain people. If those people are not
available, the task must wait.
b) Identify Dependencies:
Once resources are allocated, the next step in creating a project schedule is to identify dependencies between
tasks. A task has a dependency if it involves an activity, resource, or work product that is subsequently
required by another task. Dependencies come in many forms: a test plan can‘t be executed until a build of the
software is delivered; code might depend on classes or modules built in earlier stages; a user interface can‘t
be built until the design is reviewed. If Wideband Delphi is used to generate estimates, many of these
dependencies will already be represented in the assumptions. It is the project manager‘s responsibility to
work with everyone on the engineering team to identify these dependencies. The project manager should start
by taking the WBS and adding dependency information to it: each task in the WBS is given a number, and
the number of any task that it is dependent on should be listed next to it as a predecessor. The following
figure shows the four ways in which one task can be dependent on another.
Figure: Task Dependency

c) Create the Schedule:

Once the resources and dependencies are assigned, the software will arrange the tasks to reflect the
dependencies. The software also allows the project manager to enter effort and duration information for each
task; with this information, it can calculate a final date and build the schedule.
The most common form for the schedule to take is a Gantt chart. The following figure shows an example:

Figure: Gantt chart

Each task is represented by a bar, and the dependencies between tasks are represented by arrows. Each arrow
either points to the start or the end of the task, depending on the type of predecessor. The black diamond
between tasks D and E is a milestone, or a task with no duration. Milestones are used to show important
events in the schedule. The black bar above tasks D and E is a summary task, which shows that these tasks
are two subtasks of the same parent task. Summary tasks can contain other summary tasks as subtasks. For
example, if the team used an extra Wideband Delphi session to decompose a task in the original WBS into
subtasks, the original task should be shown as a summary task with the results of the second estimation
session as its subtasks.
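The dependency-driven scheduling just described can be sketched in a few lines of code. The sketch below is a simplified illustration of a forward pass over finish-to-start dependencies, not any particular tool's algorithm; the task names and durations are invented:

```python
# Sketch of a forward pass over finish-to-start dependencies.
# Each task has a duration (in days) and a list of predecessor names.
# Assumes the dependency graph has no cycles.
tasks = {
    "A": {"duration": 3, "predecessors": []},
    "B": {"duration": 2, "predecessors": ["A"]},
    "C": {"duration": 4, "predecessors": ["A"]},
    "D": {"duration": 1, "predecessors": ["B", "C"]},
}

def build_schedule(tasks):
    """Compute the earliest (start, finish) day for each task."""
    schedule = {}

    def finish(name):
        if name not in schedule:
            t = tasks[name]
            # A task may start only after all its predecessors have finished.
            start = max((finish(p) for p in t["predecessors"]), default=0)
            schedule[name] = (start, start + t["duration"])
        return schedule[name][1]

    for name in tasks:
        finish(name)
    return schedule

schedule = build_schedule(tasks)
for name, (start, end) in sorted(schedule.items()):
    print(f"Task {name}: day {start} to day {end}")
```

Task D here starts only on day 7, when both B and C are complete; the end date of the whole project is simply the largest finish value.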
3.7 RISK PLAN:
A risk plan is a list of all risks that threaten the project, along with a plan to mitigate some or all of those
risks. Some people say that uncertainty is the enemy of planning. If there were no uncertainty, then every
project plan would be accurate and every project would go off without a hitch. Unfortunately, real life
intervenes, usually at the most inconvenient times. The risk plan is an insurance policy against uncertainty.
Once the project team has generated a final set of risks, they have enough information to estimate two things:
a rough estimate of the probability that the risk will occur, and the potential impact of that risk on the project
if it does eventually materialize. The risks
must then be prioritized in two ways: in order of probability, and in order of impact. Both the
probability and impact are measured using a relative scale by assigning each a number between 1 and
5.
These numbers are arbitrary; they are simply used to compare the probability or impact of one risk
with another, and do not carry any specific meaning. The numbers for probability and impact are
assigned to each risk; a priority can then be calculated by multiplying these numbers together. It is
equally effective to assign a percentage as a probability (i.e. a risk is 80% likely to occur) and a real
duration for impact (i.e. it will cost 32 man-hours if the risk occurs). However, many teams have
trouble estimating these numbers, and find it easier to just assign an arbitrary value for comparison.
Many people have difficulty prioritizing, but there is a simple technique that makes it much easier.
While it‘s difficult to rank all of the risks in the list at once, it is usually not hard to pick out the one
that‘s most likely to occur. Assign that one a probability of 5. Then select the one that‘s least likely to
occur and assign that one a probability of 1. With those chosen, it‘s much easier to rank the others
relative to them. It might help to find another 5 and another 1, or if those don‘t exist, find a 4 and a 2.
The rest of the probabilities should start to fall in place. Once that‘s done, the same can be done for the
impact.
After the probability and impact of each risk have been estimated, the team can calculate the priority
of each risk by multiplying its probability by its impact. This ensures that the highest priority is
assigned to those risks that have both a high probability and impact, followed by either high-
probability risks with a low impact or low-probability risks with a high impact. This is generally the
order in which a good project manager will want to try to deal with them: it allows the most serious
risks to rise to the top of the list.
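The prioritization scheme just described is easy to mechanize. The sketch below (the risk names and ratings are invented for illustration) multiplies each risk's 1-5 probability by its 1-5 impact and sorts the list so the most serious risks rise to the top:

```python
# Each risk carries a relative probability and impact, both on a 1-5 scale.
risks = [
    {"name": "Key developer leaves",       "probability": 2, "impact": 5},
    {"name": "Requirements change late",   "probability": 5, "impact": 4},
    {"name": "Test hardware arrives late", "probability": 3, "impact": 2},
]

# Priority = probability x impact; higher numbers are dealt with first.
for risk in risks:
    risk["priority"] = risk["probability"] * risk["impact"]

risks.sort(key=lambda r: r["priority"], reverse=True)
for r in risks:
    print(f'{r["priority"]:2d}  {r["name"]}')
```

Note how a high-probability, high-impact risk (20) outranks both a low-probability, high-impact one (10) and a medium-probability, low-impact one (6), exactly the ordering the text calls for.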
3.8 Developing the Project Budget:
If scheduling is an art, then costing could be considered a black art. Some projects are relatively straightforward to cost, but most are not. Even simple figures such as the cost per man-hour of labor can be difficult to calculate.
Accounting, costing and budgeting are extensive topics in themselves. Some fundamental principles to
keep in mind are derived from accounting practices:
• The 'prudence' concept – you should be pessimistic in your accounts ("anticipate no profit and provide for all possible losses"). Provide yourself with a margin for error rather than showing only the best possible financial position. It's the old maxim once again: promise low, deliver high.

• The 'accruals' concept – revenue and costs are accrued, matched with one another, and attributed to the same point in the schedule. For example, if the costs of hardware appear in your budget at the point where you pay the invoice, then ALL the costs for hardware should be "accrued" when the invoice is received.

• The 'consistency' concept – this is similar to accruals but it emphasizes consistency over different periods. If you change the basis on which you count certain costs, you either need to revise all previous financial accounts in line with this or annotate the change appropriately so people can make comparisons on a like-for-like basis.
Note that the principles are listed in order of precedence. If the principle of consistency comes into
conflict with the principle of prudence, the principle of prudence is given priority.
Costing:
At a basic level the process of costing is reasonably simple. You draw up a list of all your possible expenditure and put a numerical value against each item; the total represents the tangible cost of your project. You may, however, also need to consider "intangible" items.
Tangible costs:

• Capital expenditure – any large asset of the project which is purchased outright. This usually includes plant, hardware, software and sometimes buildings, although these can be accounted for in a number of ways.

• Lease costs – some assets are not purchased outright but are leased to spread the cost over the life of the project. These should be accounted for separately from capital expenditure, since the project or company does not own these assets.

• Staff costs – all costs for staff must be accounted for, and this includes (but is not limited to): salary and pension (superannuation) costs; insurance costs; recruitment costs; anything which can be tied directly to employing, training and retaining staff.

• Professional services – all large-scale projects require the input of one or more professional groups such as lawyers or accountants. These are normally accounted for separately, since a close watch needs to be kept on expenditure in this area. Without scrutiny the costs of a consulting engineer, accountant or lawyer can quickly dwarf other costs.

• Supplies and consumables – regular expenditure on supplies is often best covered by a single item in your budget under which these figures are accrued. They are related to overheads below.

• One-off costs – one-off costs apply to expenditure which is not related to any of the above categories but occurs on an irregular basis. Staff training might be an example. While it might be appropriate to list this under staff costs, you might wish to track it independently as an irregular cost. The choice is yours, but the principles of prudence and consistency apply.

• Overheads – sometimes called indirect costs, these are costs which are not directly attributable to any of the above categories but nevertheless impact upon your budget. For example, it may not be appropriate to reflect the phone bill for your project in staff costs, yet it still has to be paid and accounted for. Costing for overheads is usually done as a rough percentage of one of the other factors, such as staff costs.
Intangible costs:

It has become fashionable to account for "intangible" assets on the balance sheets of companies, and possibly also of projects. The argument goes like this: some contributions to a project are extremely valuable but cannot necessarily have a tangible value associated with them. Should you then account for them in the budget or costing? The "prudence" principle says yes, but practicality says no. If you are delving into this murky area of accountancy, you should seek professional help and advice.

Typical things you might place in the budget under intangibles are "goodwill" and "intellectual property". Personnel-related figures are a frequent source of intangible assets, so you might find things like "management team", "relationships" and "contacts" on an intangibles balance sheet.
Budgeting:

Once you have costed your project you can then prepare an appropriate budget to secure the requisite funds and plan your cash flow over the life of the project. An accurate cost model will of course entail a fairly detailed design, or at the very least a requirement specification, so that you can determine your scope of work. This is normally completed well into the design phase of the project.
You must be extremely careful with initial estimates and always follow the "promise low, deliver high" commandment.
Costing and budgeting follow the iterative life cycle as do other tasks within the project. As you refine
your design, so you will need to refine the costing which is based upon it.
As in scheduling, you need to build in adequate contingency (reserves) to account for unexpected expenditure. For example, a failure on the critical path may delay a task so that a milestone (like a software purchase) falls due in the month after it was scheduled; this can wreck your carefully planned cash flow. But if you have carefully budgeted your project, such variations should be relatively easy to spot and cope with as they arise.
Just as in scheduling you should have regular budget reviews which examine the state of your finances
and your expenditure to date and adjust the planned budget accordingly.
Regardless of circumstance, a number of basic philosophies can help your budgeting immensely by protecting it from subjective review. By understanding these concepts, and making sure that everyone involved understands them, you'll be on the right track to an accurate projection:

Project costs and project budgets are two different things. Always start by identifying project costs.
Project costs are not defined solely in monetary amounts. Include actual amounts, with shipping and taxes, for software or hardware purchases that must be made. If you're pro-rating the costs of using pre-existing hardware and software tools, include them as a number of hours. Likewise, developer effort costs are recorded in hours, not dollars.
Once you‘ve laid out your costs, identify your risks and assign a percentage reflecting how much each
risk factor may affect the project as a whole, or a portion of the project. Each development team
should have a risk value assigned to it, to cover reasonable costs such as hiring the occasional
contractor to get a timeline under control, unforeseen overtime, and so on.
Your budget, then, is the total of the costs, as transcribed into a monetary figure, plus the total risk
percentage of that cost. Define conversion values that you use to represent equipment pro-rating and
development times.
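The costs-plus-risk idea above can be illustrated numerically. In the sketch below, all figures and the hourly conversion rate are invented assumptions, chosen only to show the mechanics:

```python
# Hypothetical figures: direct purchases plus effort converted to money.
hardware_and_software = 12000.00   # purchases, incl. shipping and taxes
developer_hours = 800              # effort is tracked in hours, not dollars
hourly_rate = 45.00                # assumed conversion value for effort

# Total project cost, with hours converted to a monetary figure.
costs = hardware_and_software + developer_hours * hourly_rate

# A risk percentage reflecting how much risk factors may affect the project
# (contingency for contractors, unforeseen overtime, and so on).
risk_percentage = 0.15             # assumed 15% contingency

# Budget = costs, transcribed into money, plus the risk percentage of that cost.
budget = costs * (1 + risk_percentage)
print(f"Costs:  ${costs:,.2f}")
print(f"Budget: ${budget:,.2f}")
```

Here the costs come to $48,000.00 and the budget, with contingency, to $55,200.00; the conversion values (hourly rate, risk percentage) are exactly the figures the team must define and agree on up front.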
Your budget is not an invoice. Once you‘ve determined the hard figures involved, leave it up to your
company‘s business representatives to make adjustments for profits. Make sure they understand your
figures reflect actual costs.
A budget should always be labeled as an estimate, until it is finalized and approved. This helps to
manage expectations and prevent miscommunications from being written in stone.
A single person does not create a budget. At the very least, all of the following should be consulted:
lead developer, project manager, and a business-side driver.
3.9 Quality planning:
Quality management is the process of ensuring that the outcome of the project satisfies certain requirements. These requirements are determined by the objective of the project and by the needs and expectations of the parties concerned. The needs and expectations should be determined already during the initial analysis; it is important that the customer's needs and quality expectations are understood and documented. However, the needs and expectations most often change during the project, especially if the objective is formulated in very general terms (for example, "modernization of teacher training"). Moreover, different actors can have different needs and expectations. Therefore:

a) Quality is a relative, not an absolute, category.
Quality management relies heavily on the following features:

• Cooperation with the customers (including pilot trials). For example, feedback from students in developing a new university course, or feedback from companies in developing a curriculum.
• Competences (including the qualification of project team members and external experts). For example, the competences of a university teacher in running a course.
• Cooperation inside the project team (including assigning feasible tasks and organizing reporting). For example, cooperation between university teachers to ensure that there are no significant gaps or repetitions in a curriculum.
The PMBOK Guide considers the following three major project quality management processes: quality planning, quality assurance and quality control.
As competence is an important component of the quality system, the project plan (or a separate Project Quality Plan) should explain the principles by which the project team was selected and how cases will be handled if a lack of competence emerges.
For effective quality management, the following requirements should be satisfied:

• The activities are adequately documented;
• The documents (including those in preparation) are available to the project team.
The following examples stress the role of competence in quality assurance:

1) The Internet was used in the 1999 elections to the Estonian Parliament. As the technical infrastructure was not able to process the massive flow of data, the whole system was very slow and the final results were considerably delayed.

2) A case similar to 1) happened when a TV show chose mobile phone numbers at random and the first person to phone in from a selected number would win a car. As soon as a number was drawn, people started to phone its owner to announce the chance to win, blocking the phone for outgoing calls; moreover, a number of people started to phone the TV show itself and blocked incoming calls from the winners.

3) In the 1990s the automobile company Opel had a strategy of producing as many components as possible itself. Consequently, quality dropped considerably.

3.10 Quality Management
The problem of quality management is not what people don’t know about it. The problem is what they think
they do know. The prevalent attitude in the software industry assumes that everyone already knows how to
build quality software; therefore, no formal programs are required. This attitude could not be further from
reality. Building quality software is not a talent one is born with; it is a learned skill and requires sound
management practices to ensure delivery.
Quality does not mean the same thing to all people. It is important that each software development
organization form a common definition of software quality practices for their group. Will the customer
tolerate a certain defect level? As customers of software, we all have. You can purchase any piece of software
off the shelf and you will find defects in the product. Not a single Microsoft product has been released without
defects as it is rushed to market in an effort to beat competitors. We make our purchase and wait for the next
release to fix the errors and introduce new functionality with new defects. These defects can range from
merely annoying to life threatening. Consider, for example, the consequences of defects in aerospace
software or in software that supports surgical procedures. Defects can cost billions of dollars if they are in
financial software. How much quality testing is enough? One must further ask, how much would an error cost
the customer? Testing practices should be as rigorous as the financial impact of the defect.
If we are producing a piece of desktop software for sale on the market at a retail price of $29.99, and an error
would not be more than an irritant to the customer, it would not be cost effective to spend the time and
money to ensure that the product was completely error free.
Some software developers continue to believe that software quality is something you begin to worry about
after code has been generated. This attitude, too, obstructs the production of quality software. A typical
software development project is already running late by the time the coding begins and the whole team is
frantically trying to make up for lost time. If most of the software testing is
reserved for the later part of the project, it typically is cut short to deliver the product on time. Consequently,
defects are delivered to the customer.
Software quality is an "umbrella" activity and should be practiced throughout the entire software
development lifecycle. It is not just a testing activity that is performed before the software is shipped.
Software quality is not just a job for the Software Quality Assurance (SQA) group or team; EVERY member of
the software engineering team performs quality assurance.
3.11 History of Quality Programs in the Software Industry
During the 1940’s the U.S. was heavily entrenched in product manufacturing. Little importing or exporting
occurred at this time. Therefore, manufacturing firms had a captive and eager U.S. marketplace in which to sell their products. These products were not of the highest quality. Regardless, the manufacturers could not make the goods fast enough. Demand exceeded supply.
Dr. W. Edwards Deming began to lecture U.S. manufacturers on ways to improve the quality of their products
through the use of metrics and continuous process improvement techniques. In a market with little
competition and in which demand exceeded supply, U.S. manufacturers showed little interest in Deming’s
ideas.
During this time, Japan was rebuilding after WWII and was very interested in competing in the world market.
Dr. Deming took his quality message to Japan where it was well received. They implemented his programs and
were able to produce higher quality products. The U.S. began to ease import restrictions and during the 60’s
and 70’s these higher quality products appeared in the U.S. marketplace. The U.S. manufacturers quickly lost
market share to the Japanese.
When Dr. Deming returned to the U.S. in the 1970’s to deliver his quality improvement message, the U.S.
manufacturers were ready to listen. During the 1980’s these programs were known as Total Quality
Management (TQM). However, the U.S. wanted a quick fix and did not fully comprehend the paradigm shift
necessary to implement TQM. Many manufacturers invested large budgets in training programs, but failed to
commit to real change. Even today, products bearing the names of Sony, Mitsubishi, Fuji, Toyota, and Honda represent quality in the U.S. marketplace.

Many software development organizations attempted to implement Dr. Deming's TQM programs in the construction of software. Many of the practices and principles of manufacturing do apply to software development. However, the processes were different enough that the software development community realized it needed to extend the principles and establish models for software quality management. The Software Engineering Institute borrowed much from Dr. Deming's TQM methods to establish the Capability Maturity Model (CMM). The CMM is the foundation for establishing sound, continuous quality-improvement models for software development.

Chapter 4 Managing Change and Organizing Team

4.0 Software configuration management:

a) Introduction

Software configuration management is also referred to as source control, change management, and version control. Software configuration management systems are commonly used in software
development groups in which several developers are concurrently working on a common set of
files. If two developers change the same file, that file might be overwritten and critical code
changes lost. Software configuration management systems are designed to avoid this inherent
problem with sharing files in a multiuser environment. Any software configuration management
system creates a central repository to facilitate file sharing. Each file to be shared must be added
to the central repository to create the first version of the file. After a file is part of the central
repository, users can access and update it, creating new versions.
b) Benefits of software configuration management
If you have not used a software configuration management system or are not that familiar with
the concept, you might wonder whether it is appropriate to use software configuration
management on your project. Test automation is a software development effort. Every time a test
script is created, whether through recording or coding, a file is generated that contains code.
When created, developed, or edited, that code is a valuable test asset. A team environment
presents the risk of losing functioning code or breaking test scripts by overwriting files. A
software configuration management system offers a way to overcome this risk. Every time a file
changes, a new version is created and the original file is preserved.
For the team that is new to software configuration management, all of the essential features for
versioning test scripts are available through the Functional Tester interface. This integration
simplifies the use and adoption of software configuration management.
4.1 Needs for personnel:
Depending on the software development model, the needs for personnel vary a lot during the development process, both in terms of amount and qualification. Higher quality and more experience are needed in the initial phase. The distribution of total workload and duration can be, for example, the following:

Activity                                     Work load   Duration

Determination and analysis of requirements      10%        15%
General design                                  10%        15%
Detailed design                                 20%        20%
Coding                                          30%        25%
Testing                                         20%        15%
Implementation                                  10%        10%

In fact, the duration of phases is very difficult to measure because activities that belong to different phases are sometimes combined and performed in parallel (for example, a small piece of new code is tested immediately by the developer). Comparing the numbers in the two columns indicates the relative need for personnel; for example, the fact that the workload for general design (10%) is smaller than its duration (15%) means that relatively fewer people are involved in general design. It is estimated that in the classical waterfall development model testing can take up to 40% of the total workload. For bigger software projects, Brooks's rule for estimating duration can be applied: 1/3 of the total time will be devoted to planning, 1/6 to coding, 1/4 to component testing, and 1/4 to system (integration) testing.
Personnel costs are usually the biggest costs in software development projects. Therefore the project team should be composed with care; changes in personnel in later phases can be very expensive (according to a study performed in the USA in 1990, replacing a single software developer during a software project cost 20,000-100,000 USD).
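The relationship between the workload and duration columns in the distribution table above can be made explicit: the average relative staffing level of a phase is its share of the workload divided by its share of the duration. A quick sketch using the table's figures:

```python
# Workload and duration shares (in %) from the distribution table above.
phases = {
    "Requirements":    (10, 15),
    "General design":  (10, 15),
    "Detailed design": (20, 20),
    "Coding":          (30, 25),
    "Testing":         (20, 15),
    "Implementation":  (10, 10),
}

# Relative staffing level = workload share / duration share.
# Values below 1.0 mean relatively fewer people work in that phase;
# values above 1.0 mean the phase is relatively heavily staffed.
for phase, (workload, duration) in phases.items():
    print(f"{phase:15s} staffing level {workload / duration:.2f}")
```

General design comes out at about 0.67 (few people over a longer period), while coding comes out at 1.20 (many people over a shorter period), which is exactly the pattern the text describes.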
Another aspect that determines the need for personnel is the quality of the people involved. However, it is relatively complicated to take this aspect into account because actual practice is not necessarily coherent with the knowledge and skills of the people, as was clearly shown by Gunnar Piho [Piho, 2003]. For example, 95% of IT specialists agree on the necessity of having a holistic quality system; in fact only a few software companies have one.
In order to reduce the risk of time overruns, some experts suggest planning only up to 75% of available resources (including personnel).

4.2 Personnel management
In this section we discuss aspects of effective personnel management. The basic idea is that a project manager has a responsibility to ensure a better quality of human resources as one of the project's outcomes. A company will suffer considerable losses if, for example, five people leave a project or company because of poor management. This is why the following aspects are important:

• Every team member should have opportunities for his/her professional development;
• The ability to keep and consolidate a project team is one of the quality indicators of a project manager.

It is better to spend more effort finding competent team members than to hire inexperienced persons and hope for their professional development. According to a widespread opinion, top-quality developers are up to ten times more effective than low-level developers. A study of 31 software development teams showed that harmony between the project team members – that is, how smooth the cooperation between them is – is the most significant social success factor of a software project. This is why people who cause problems in interacting with colleagues should not be invited onto a project team, even if they are good experts. Another study (Carl E. Larson, Frank M. J. LaFasto), based on the analysis of 75 projects, showed that a weak ability to deal with problematic people was considered the biggest weakness of project managers.
A project manager should have full authority to compose the project team and should not immediately agree to take people suggested, for example, by upper management.
Roger Woolfe of Gartner Group suggests that preference in choosing project team members should be given to personal characteristics; these are difficult to change, while technical skills can be acquired relatively quickly. He proposes 25 key competences of an IT organization, ten of which are personal competences (another six describe technical skills and nine describe business processes).
The different roles of people should be taken into account as well in composing a project team; all basic roles should be present. Rob Thomsett determines the following eight basic roles:
1. Chairman: determines the basic methods of project execution; is able to determine the
strengths and weaknesses of a project team and the most effective usage of every single
person.
2. Shaper: formulates the results of discussions and other joint activities; this role is usually taken by the project manager or lead designer.
3. Plant: suggests new ideas and strategies, tries to implement new technology and find new
solutions.
4. Monitor-evaluator: analyses possibilities to solve problems as well as suitability to
implement new technologies.
5. Company worker: executes the tasks; most analysts, programmers, testers, etc. belong to this category.
6. Team worker: helps and motivates the team members, tries to improve interpersonal
relations and strengthen the team spirit.
7. Resource investigator: organizes communication with the partners outside the project
team, tries to find additional resources; has personal contacts to a broad range of people
that he also intensively exploits.
8. Completer: observes and motivates the project team members to be goal oriented; tries to minimize the emergence of mistakes and the domination of personal interests over the project's interests.
It should be assured that the roles project people had or have in some other context will not dominate in forming the project team. For example, at Victoria University (Australia) students themselves formed the project teams for a software engineering project; as a rule the teams consisted of groups of friends. The success rate of the projects varied a lot because the roles in friendship communities and in software engineering are totally different. In subsequent years, when the project teams were composed by the university teacher, the success rate was considerably higher.
Effective personnel management assumes effective time management as well. The main tool here is monitoring the time usage of the project team members, especially during the initial phase of a project. This improves the quality of time estimation for further activities as well as for planning new projects. Special software for time management has been developed as well (see, for example, time-accounting programs on the cited web site). Sometimes – for solving an urgent problem – it is suggested to form a small (1-2 person) temporary "tiger team" that is freed from other duties for that time. Here the possibly different attitudes of people should be taken into account: some perceive membership in a "tiger team" as a promotion, others as just a disturbance to their main duties.

4.3 Co-operation with upper management in planning a project
As was previously shown (see Appendix 6), the success of a project depends heavily on the support of the upper management of the institutions involved. Therefore, cooperation between project managers and upper management is vitally important in all phases of a project. Studies have revealed that there exist some general schemes/attitudes that are relatively often used by project managers and chief executive officers (CEOs). In the following we list some of them:
1. For securing timely execution of a project, the project manager tries to reserve somewhat more time for the project than is ultimately necessary.
2. As CEOs are aware of such attempts to increase the duration of a project during planning, they often cut it down, sometimes even without the necessary analysis or explanation.
3. A CEO has a personal opinion about the adequate duration of a project but does not tell it, to avoid responsibility. The project plan will not be accepted until the timetable is close enough to the CEO's opinion.
4. A CEO criticizes a project plan without knowing the details. He hopes that better solutions will be found in the next version of the project plan.
5. A CEO has promised to deliver software by a certain date; he insists on completing the work before that date, as otherwise his prestige will suffer.
6. An institution is interested in getting a contract to develop certain software and makes an unrealistic offer with a dumping price. This can have several unpleasant consequences, such as "political" agreements with the customer, low-quality software, overspending, replacement of the project manager (if a scapegoat has to be found), etc.
These kinds of actions are mainly based on political decisions and do not take the real possibilities into account. The extent to which such political decisions are made depends first of all on the personal capabilities of the project manager.
There are also some indirect methods used by CEOs to check the quality of the planning and execution of projects. One of these methods is called the "alcohol test". It consists of asking various (sometimes unexpected) questions of the project manager and team members, the answers to which allow one to decide whether the project is realistic or running smoothly enough. Among these questions can, for example, be the following:
Who are the main customers of the project?
What exactly are your responsibilities in the project?
What are the most significant risks of the project?
What are the most significant external factors that can influence the project?
What size will the software have? How did you calculate it?
What knowledge does the project team have of the area the software is being developed for?
To guard against unpleasant situations, the project manager and other team members should constantly ask these kinds of questions of themselves and of each other, and try to find the answers.

4.4 Release of a software
Different techniques are used to decide when software is ready for release; we describe three of
them here.
1) Counting errors. Errors are divided by severity into three groups: critical, significant,
and cosmetic. The simplest method is to decide on the basis of error density (measured as
the average number of errors per 1000 lines of code). If, for example, the density was 7-9
in previous projects and only 250 errors have been detected in a new software product of
50 000 lines of code, then the software is most probably not ready for release, since many
errors likely remain undetected. By estimating the average time spent correcting an error,
one can estimate the total time needed to correct the remaining errors.
2) Predicting the number of errors using two test groups. If the two groups detect M and N
errors respectively, of which L are detected by both groups, then the total number of
errors is estimated to be M*N/L. Using two test groups is relatively costly; this approach
is mainly used for developing critical software where a small number of errors is vitally
important.
3) The probability method (error seeding). A number of known errors are deliberately
introduced into the software, and the total number of errors is estimated from the number
detected: if M errors were seeded and N errors were detected, of which L came from the
seeded set, then the total number of errors is estimated to be M*N/L. The seeded errors
should be representative in some sense (and should subsequently be removed!).
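Techniques 2 and 3 share the same estimator form. A minimal sketch in Python (the function
names, the historical-density band, and the worked numbers are our own illustrations, not from
the text):

```python
def remaining_errors(errors_found, kloc, hist_density=(7, 9)):
    """Technique 1: if past projects averaged 7-9 errors per KLOC,
    a program of `kloc` thousand lines is expected to contain
    low*kloc .. high*kloc errors; return how many likely remain."""
    low, high = hist_density
    return low * kloc - errors_found, high * kloc - errors_found

def capture_recapture(m, n, l):
    """Techniques 2 and 3: if two independent means of detection find
    M and N errors with L in common, the total number of errors is
    estimated as M*N/L (the Lincoln-Petersen estimator)."""
    if l == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    return m * n / l

lo, hi = remaining_errors(250, 50)     # the text's example: 100-200 errors remain
total = capture_recapture(40, 30, 20)  # groups find 40 and 30, 20 shared -> 60.0
```

The same `capture_recapture` call serves error seeding: M seeded errors, N detected, L of them
from the seeded set.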
The decision to release software should be based on more than one indicator. Additionally, a
checklist of the different activities necessary for release should be used. The checklist can
contain anywhere from a few items up to several hundred. For a general-purpose software product
it could consist, for example, of the following (responsible person in brackets):

Update the version information (developer)


Remove information necessary for testing from the code (developer)
Remove the seeded errors (developer)
Check that all registered errors are removed (tester)
Install the program from a CD (tester)
Install the program from Internet (tester)
Install the program from CD into a computer where an earlier version has been installed
(tester)
Check that the installation program creates the correct Windows registry entries (tester)
Uninstall the program (tester)
Finalize the list of distribution files (release group)
Synchronize the date and time of all release files (release group)
Burn the final program CD (release group)
Check that all the program files are present on CD (release group)
Perform the virus check (release group)
Perform the check of bad sectors (release group)
Create a spare copy and apply the change management scheme (release group)
Check the version of readme.txt file on CD (documents group)
Check the version of help files on CD (documents group)
Check the copyright, license and other juridical materials (project manager).

All key persons in the project team should sign a protocol certifying the readiness of the software.
The project's history is based on project logs and basic data and contains both quantitative and
qualitative information; the opinions of project team members can be collected with specially
designed questionnaires in which certain aspects are assessed, for example, on a Likert scale. The
history document should be discussed at a project team general meeting with the aim of gaining
maximum benefit for subsequent projects.

4.5 Project Management Tips


a) Getting Started – Initiation

1. Develop a solid business case for your projects. Where appropriate, ensure you obtain
senior managers' agreement before you start the project. Research shows that too many
projects are started without a firm reason or rationale. Developing a business case
will identify whether the project is worth working on.
2. Ensure your project fits with the key organizational or departmental agenda or your
personal strategy. If not, why do it? Stick to priority projects.
3. Carry out risk analysis at a high level at the initiation stage. Avoid going into great detail
here – more an overview focusing on the key risks.
4. Identify at this early stage key stakeholders. Consider how much you need to consult or
involve them at the business case stage. Seek advice if necessary from senior managers
5. Where appropriate, involve finance people in putting the business case together. They
can be great allies in helping crunch the numbers which should give credibility to your
business case.
b) Defining Your Project

6. Produce a written project definition statement (sometimes called a PID) and use it to
inform stakeholders – see point 13. This document is 'your contract' to carry out the
project and should be circulated to key stakeholders.
7. Use the project definition statement to prevent scope creep. Use it in the review
process to stop yourself going beyond the scope of the project.

8. Identify in detail what will and will not be included in the project scope. Avoid wasting
time by working on those areas which should not be included – identify these in the PID.
9. Identify who fulfils which roles in your project. Document them in the PID, including a
paragraph describing what each person does.

10. Identify who has responsibility for what in the project e.g. project communications is the
responsibility of AD. This helps reduce doubt early in the life of the project.

11. Think 'team selection' – give some thought to who should be in your team. Analyse
whether they have the skills required to carry out their roles; if not, ensure they
receive the right training. Check they are available for the duration of the project.
NOTE: this includes any contractors you may need to use.
12. Form a group of Project Managers. The Project Manager role can sometimes be very
lonely! Give support to each other by forming a group of Project Managers.

13. Identify who the stakeholders are for your project – those affected and impacted by the
project. This should be an in-depth analysis which needs updating regularly.
14. Recognize early in the life of the project what is driving it. Is it a drive to
improve quality, reduce costs, or hit a particular deadline? You can only have one.
Discuss with the sponsor what is driving the project and ensure you stick to it
throughout. Keep 'the driver' in mind especially when you monitor and review.

15. Hold a kick off meeting (Start up Workshop) with key stakeholders, sponsor, project
manager project team. Use the meeting to help develop the PID (see Tip 6). Identify
risks and generally plan the project. If appropriate hold new meetings at the start of a
new stage.
16. Ensure you review the project during the Defining Your Project Stage – involve your
sponsor or senior manager in this process. Remember to check progress against the
business case.

c) Delivery Planning

17. Create a work breakdown structure (WBS) for the project. A WBS is a key element you
will need to develop your plan. It lists out all of the activities you will need to undertake
to deliver the project. Post it notes can be a great help in developing your WBS.
18. Group tasks under different headings once you have a list. This will enable you to
identify the chunks of work that need to be delivered, as well as put together the Gantt
chart and milestone chart.
19. Identify dependencies (or predecessors) of all activities. This will let you put together the
Gantt and milestone charts. Ensure you write them down otherwise you are trying to
carry potentially hundreds of options in your head.
20. Estimate how long each activity will take. Be aware that research shows we are
notoriously bad at estimating. If you estimate a task will take 3 days, state how
confident you are of delivering in 3 days as a percentage.
21. For example: "I am only 40% certain I can deliver in 3 days." You should aim for 80%;
if you do not believe you can achieve 80%, then re-estimate.
22. Identify the critical path for the project. The critical path identifies those activities which
have to be completed by the due date in order to complete the project on time.
23. Communicate, communicate, communicate! Delivering a project effectively means you
need to spend time communicating with a wide range of individuals. Build a
communication plan and review it regularly and include it in your Gantt chart.
24. Are you involved in a major change project? If you are, think through the implications of
this on key stakeholders and how you may need to influence and communicate with
them.
25. Conduct Risk Assessment – carry out a full risk analysis and document it in a risk
register. Regularly review each risk to ensure you are managing them, rather than them
managing you. Appoint a person to manage each risk.
26. Develop a Gantt chart and use it to monitor progress against the plan and to involve key
stakeholders in the communications process.
27. Draw up a milestone plan. These are stages in the project. You can use the milestone
dates to check the project is where it should be. Review whether activities have been
delivered against the milestone dates and take a look forward at what needs to be
achieved to deliver the next milestone.
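The critical path in tip 22 is simply the longest chain of dependent activities through the
project network. A minimal sketch with invented task data (task names and durations are purely
illustrative):

```python
# Tasks mapped to (duration in days, list of predecessor tasks).
tasks = {
    "spec":   (3, []),
    "design": (5, ["spec"]),
    "build":  (10, ["design"]),
    "docs":   (4, ["design"]),
    "test":   (6, ["build", "docs"]),
}

memo = {}

def earliest_finish(name):
    """Earliest finish of `name`: its duration plus the latest-finishing
    predecessor. The chain that determines this maximum is the critical path."""
    if name not in memo:
        dur, preds = tasks[name]
        memo[name] = dur + max((earliest_finish(p) for p in preds), default=0)
    return memo[name]

project_length = max(earliest_finish(t) for t in tasks)  # -> 24 days
```

Here "docs" has slack (it finishes on day 12 while "build" runs until day 18), so the critical
path is spec -> design -> build -> test.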

d) Project Delivery – Monitoring and Reviewing Project

28. Have a clear project management monitoring and reviewing process – agreed by senior
managers - the project sponsor and the project Board, if you have one.
29. Ensure your organization‘s corporate governance structure and your project management
monitoring and control structure are compatible. If you do not know whether this is the
case then seek senior management involvement.
30. Be aware early in the project of what will be monitored, how it will be monitored, and
how frequently.
31. Keep accurate records of your project not only for audit purposes but to ensure you have
documents which enable you to monitor changes.
32. Use a Planned v. Actual form. It is easy to create and allows you to monitor progress
on specific tasks in terms of time and money. Link these forms into milestone reviews.
33. Identify with your sponsor the type of control that is needed – loose or tight or a variation
of these, e.g. tight at the start, loose in the middle, tight at the end. Ensure the system you
develop reflects the type of control intended.
34. Agree a system for project changes – have an agreed process for monitoring and
approving changes. Use change control forms and obtain formal sign-off (agreement) from
the sponsor before actioning a change. Assess the impact of the change on the project
scope as well as on the 'key driver' – quality, cost, and time.
35. Appoint someone to be responsible for project quality especially in larger projects.
Review quality formally with the client at agreed milestone dates.
36. Make certain you have agreed who can sanction changes in the absence of your sponsor.
If you have not agreed this, what will you do in their absence?
37. Set a time limit for project meetings to review progress. Have an agenda with times
against each item, and summarize after each item and at the end of the meeting.

38. Produce action points against each item on the agenda and circulate within 24 hours of
the meeting. Use these action points to help in the creation of your next agenda.
39. Review the items on the critical path, checking they are on schedule. Review risks,
review your stakeholders and your communication plans, and check whether you are still
on track to deliver on time, to budget, and to the required quality standard.

40. Set a tolerance figure and monitor against it, e.g. a tolerance of ±5% means that as
long as you are within the 5% limit you do not have to report formally. If you exceed
the 5% limit (cost or time) then you need to report this to the agreed person –
probably your sponsor.

41. Report progress at the end of each stage – are you on schedule for time, cost, and
quality? Ensure that if something is off schedule, the person responsible for delivering
it suggests ways to bring it back on time, within budget, or up to the required quality
standard.

42. Develop an issues log to record items that may be causing concern. Review at your
project meetings.

43. See whether you are still delivering the original project benefits when reviewing your
project. If not, consider re-scoping or if appropriate abandoning the project. Do not be
afraid of abandoning a project. Better to abandon now rather than waste valuable time,
money, and resources working on something no longer required. If you close a project
early – hold a project review meeting to identify learning.

44. Produce one-page reports highlighting key issues. Agree the areas to include with the
Sponsor before writing a report.
45. Use a series of templates to support the monitoring process, e.g. milestone reporting,
change control, log, planned v. actual. Contact info@progectagency.com for more
information.

46. Apply traffic lights to illustrate how you are progressing – red, amber and green. Use
these in conjunction with milestone reports.

47. Engender honest reporting against specific deliverables, milestones, or a critical path
activity. If you do not have honest reporting imagine the consequences.
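The planned-versus-actual tolerance check in tips 32 and 40 can be sketched in a few lines (the
function name and the figures are illustrative assumptions, not part of any standard template):

```python
def needs_report(planned, actual, tolerance=0.05):
    """Tip 40: flag a task for formal reporting when the planned-vs-actual
    variance (time or cost) exceeds the agreed tolerance, here +/-5%."""
    variance = (actual - planned) / planned
    return abs(variance) > tolerance, variance

# A task budgeted at 10,000 that actually cost 10,800 is 8% over:
flag, var = needs_report(planned=10_000, actual=10_800)
# flag is True, so the variance must be reported to the agreed person
```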

e) Closedown and Review

48. Agree well in advance a date to hold a post project review meeting. Put this onto the
Gantt chart.

49. Invite key stakeholders, sponsor, and project team to the post project review. If the date is
in their diary well in advance it should make it easier for them to attend

50. Focus your meeting on learning – identifying what you can use on the next project.
Share the learning with others in the organization.

51. Check whether you have delivered the original project objectives and benefits and not
gone out of scope.

52. Make sure that you have delivered against budget, quality requirements and the end
deadline.

53. Understand how well you managed risks and your key stakeholders. Use questionnaires
to obtain feedback.

54. Prepare a list of unfinished items. Identify who will complete these after the project and
circulate to any stakeholders.

55. Hand over the project formally to another group (it is now their day job) - if appropriate.
You may need to build this into the project plan and involve them early in the plan and at
different stages throughout the project.

56. Write an end of project report and circulate. Identify in the report key learning points.

57. Close the project formally. Inform others you have done this and who is now responsible
for dealing with day to day issues.
58. Celebrate success with your team! Recognize achievement, there is nothing more
motivating.

f) General Tips

59. But what is a project? Why worry about whether something is a project? Why not use some of
the project management processes, e.g. stakeholder analysis or traffic lights, to manage your
work? The key principle is to deliver the piece of work using the appropriate tools. We use the
term 'project-based working' to describe this approach.

60. Get trained! Research shows that only 61% of people have received any project
management training.

61. Ensure you have the buy-in of senior managers for your project. You will need to work
hard to influence upwards and get their support.

62. What about the day job? Projects get in the way of the day job, and the day job gets in
the way of projects! Many people have found that by applying project-based working to
day-to-day activities and by being more rigorous about project work, more is achieved.

63. Identify early on in the life of the project the priority of your projects. Inevitably there
will be a clash with another project or another task. Use your project management skills
to deliver and your senior management contacts to check out the real priority of the
project.

64. Discover how project management software can help. But you will still need to develop
the business case and produce a project definition alongside planning what will go into
the software. Many project managers use simple Excel spreadsheets or charts in Word to
help deliver their projects.
Chapter 5 Software Quality

5.1 Introduction to Software Quality:


The goal of every commercial software development project is to ship a high-quality product on
time and within budget. According to a recent research study, only 16.2% of software projects
are completed on time and on budget. Companies developing commercial software (especially
software that is not subject to strictly enforced standards and contractual requirements) will
seriously consider spending precious resources and time on a quality assurance program only if
they believe that the approach to quality will provide a substantial return on investment
without eroding expected profits. Companies also expect quality assurance to mitigate the risks
of deploying a software product of questionable quality without impacting the schedule and
budget of their projects.

a) Definitions of software quality


Software quality means different things to different people. This makes the entire concept highly
context-dependent. For example, in the context of automobiles, a Mercedes Benz or a Cadillac
may be symbols of high quality, but how many of us can really afford to buy one of these fine
vehicles? Given a somewhat less ambitious budget, a Toyota or a Chevy might serve most of our
needs with adequate quality. Just as there is no one vehicle that satisfies everyone‘s needs, so too
there can be no one universally-accepted definition of quality.
Even so, it is important to formalize your definition of software quality so that everyone
understands your priorities and relates your sense of quality to their own.

The IEEE, ISO, and several other agencies and individuals have offered definitions of software
quality. Some of these definitions – "conformance to requirements", "meeting users'
composite expectations", "value to some person", and "fitness for use" – are useful but
extremely vague, because there is no definite way to state whether or not the final software
product conforms to them.
In an attempt to impart formalism and to provide a systematic definition for software quality,
the ISO 9126 defines 21 attributes that a quality software product must exhibit. These
attributes are arranged in six areas: functionality, reliability, usability, efficiency,
maintainability, and portability. Recent advances in software quality measurement techniques
allow us to measure some of these attributes. However, there still seems to be no
straightforward means to measure the remaining attributes, let alone derive a metric for
overall quality based on these attributes. Without clear methods and measures, we are back to
square one, with no means to say anything quantitative about the final software product’s
quality.

In the end, we have only a vague idea of how to define software quality, but nothing concrete.
We have some idea of how to measure it, but no clearly defined methods. What we are left with is
a grab bag of diverse standards and methods. Faced with this bewildering array of definitions and
standards, software projects, already under the heavy burden of schedule and budget pressures,
confront a dilemma: how to create and assess the quality of software products.

b) Software does not "break" like physical systems


Physical systems tend to degrade with time and might eventually break due to aging; they
might also break down due to attrition. For example, the timing belt in your car wears out after
a certain number of miles. Manufacturers can predict the approximate number of miles (for
example, 60,000 miles for a Toyota) that the timing belt will last because they understand its
physical characteristics very well. Timing belts may sometimes even break down, due to
excessive stress, well before the predicted time. On the other hand, software does not "break",
nor does it degrade with time. You can run a program any number of times without wearing it
out and without noticing any degradation in its performance (though the program might behave
differently in response to changing environmental stimuli). Software reliability may actually
improve with time as bugs are fixed.

c) Software is digital
Software is digital and discrete, not analog and continuous. In continuous analog systems, it is
possible to predict the immediate future behavior based on historical data. For example, if a car
is travelling due north on a freeway at any point in time t, there is reasonable assurance
(barring a major accident) that it will continue to travel in the same direction at time t + Δt.
Physical constraints and the laws of physics ensure that the car will not suddenly change its
direction from north to south. On the other hand, imagine a virtual car controlled by software.
Assume the direction in which the car is traveling is represented internally by a Boolean
variable DIRECTION, whose value is 0 (false) when the car is going north and 1 (true) when it is
going south. All it takes for the car to change direction from north to south is one computer
instruction that flips that single bit from 0 to 1. What if there is a bug in the software that
flips the bit unexpectedly under certain conditions? Suddenly, your virtual car and your virtual
world turn upside-down: at one instant the car is travelling north, and the next instant it is
going south!
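The scenario can be sketched in a few lines of code; the trigger condition is a deliberately
invented defect, purely for illustration:

```python
# Direction of the virtual car held in a single boolean flag.
NORTH, SOUTH = False, True

direction = NORTH
for t in range(5):
    if t == 3:                     # a hypothetical latent bug: an unrelated
        direction = not direction  # condition flips the bit - one instruction
    print(f"t={t}: heading {'south' if direction else 'north'}")
```

No physical law resists the change: the car heads north for three steps and south from the
fourth step onward.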

There are two implications of this unique nature of software on software quality/reliability.
First, traditional reliability models of physical systems or hardware systems and process ideas
from the manufacturing world are not directly applicable to software. Second, no matter how
good a process we may apply to software development, a single bug in the product may
possibly invalidate everything else that works correctly. The car direction example above serves
to clarify this point very well. A single latent implementation bug can unexpectedly and
suddenly lead to a serious malfunction of a physical system.

d) A good development process does not guarantee good product


Companies are championing the “process improvement” mantra as if it were a magic solution
to all their software development and quality problems. A process improvement methodology
is based on establishing, and following, a set of software standards. Most of these software
standards usually neglect the final product itself and instead concentrate mainly on the
development process. Again, as mentioned earlier, this is directly the result of the
manufacturing view of quality, where the focus is on "doing it right the first time”. The
emphasis in all these process standards is on conformance to processes rather than to
specifications. A standard process is certainly necessary but is not sufficient. There is no hard
evidence that conformance to process standards guarantees good products. Many companies
are finding out the hard way that good processes don’t always result in a quality product .

e) Product assessment
Whereas the manufacturing view examines the process of producing a product, a product view
of quality looks at the product itself. The product assessment advocates stress the fact that in
the end, what runs on a computer is not the process, but the software product.

There are two ways of directly examining the product—static analysis and dynamic analysis.
A static view considers the product's inherent characteristics. This approach assumes that
measuring and controlling internal product properties (metrics) will result in improved quality
of the product in use. Unfortunately, this is not always true. Much of the legacy code developed
over 20 years ago is still functioning correctly and reliably today. Legacy code was originally
written in the unstructured languages of the 1970s (e.g., Fortran, BASIC) and made extensive use
of the much-maligned GOTO construct; such software is sometimes termed spaghetti software.
Modern structural metric tools, when applied to spaghetti software, would probably turn out
some pretty dismal structural quality measures – a quality assessment far removed from
reality.

A dynamic view considers the product’s behavior. Dynamic analysis requires the execution of
software. Testing is a widely recognized form of dynamic analysis. The easiest way to assess
software quality is to test the product with every conceivable input and ascertain that it
produces the expected output every time. Such exhaustive testing has been long shown to be
both practically and theoretically impossible to achieve. Real-world testing constrained by
schedule/budget pressures may hit a limited portion of the input space or cover a limited
portion of the software. Quality assessments based on the results of such testing will again be
far removed from reality.

f) The path to the future


The discussion in the previous section clearly points to one irrefutable fact: assessing software
quality is extremely hard and expensive, and the state of practice falls short of expectations. As
noted, there are some inherently difficult problems with software quality assessments that defy
solutions. However, all is not lost. The field of software assurance is advancing rapidly and a
number of software quality/reliability groups both in the academic and commercial worlds are
actively researching next generation techniques and tools to better create and assess software
quality. There are dozens of software quality and testing companies producing various tools for
software metrics, GUI testing, coverage analysis, and test management. Some companies are
producing tools that support the entire development process while integrating testing and
quality assurance processes. This is a rapidly expanding market and it is estimated that the
burgeoning test tool industry is on pace to hit one billion dollar revenue by the year 2000. A
detailed listing of categories of tools available in the market today and some leading companies
supplying these tools is available at http://www.stlabs.com/marick/faqs/tools.htm.
While all these test tools, methodologies, and theories are useful, they will not be effective
unless software projects focus on quality right from the beginning and everyone works towards
creating a product of the highest possible quality. Bringing in fancy tools will not solve the
poor-quality problem.

5.2 Software Quality Models:


A quality model is the set of characteristics and sub-characteristics, and the relationships
between them, that provides the basis for specifying quality requirements and for evaluating the
quality of a product or component. Of course, the quality model used depends on the kind of
target product to be evaluated. Moreover, the international standards that address software
product quality have proved too general to deal with the specific characteristics of software
components: while some of their characteristics are appropriate for evaluating components,
others are not well suited to that task.

We propose a quality model based on ISO 9126 that defines a set of quality attributes for the
effective evaluation of COTS components. Domain-specific systems are likely to require
additional qualities beyond the ones listed below. We also describe some other quality models
for reference.

a) McCall’s Quality Model:

Software quality can be categorized in different ways. Cavano and McCall were pioneers in this
respect and presented a Software Quality Factor Framework, later named McCall's model – one of
the first classifications of software quality attributes. The model's main focus was on the
final product and on identifying the key attributes of quality from the user's point of view;
these key attributes are normally external attributes. It splits 11 quality attributes into
three groups:

Product Operations Attributes.

Product Revision Attributes.

Product Transition Attributes.

Product Operations Attributes   Product Revision Attributes   Product Transition Attributes
Correctness                     Maintainability               Portability
Reliability                     Flexibility                   Reusability
Efficiency                      Testability                   Interoperability
Integrity
Usability

Table - The quality factors of the McCall Quality model

As discussed earlier, there are several different taxonomies of quality attributes, none of which
is necessarily more correct or useful than the others in general, because different types of
systems have different quality requirements. ISO has produced a standard, ISO 9126, of quality
attributes, grouping 21 sub-attributes into six main categories: Functionality, Maintainability,
Usability, Efficiency, Reliability and Portability.

b) ISO IEC 9126 Quality Model :


The ISO 9126 quality model was proposed as an international standard for software quality
measurement in 1992. It is a derivation of the McCall model and provides a generic definition
of software quality in terms of six main desirable characteristics. ISO 9126 is the most
commonly used quality standard model in industry. Each main characteristic covers several
sub-characteristics, as shown in the table below.
Characteristics     Sub-characteristics
Functionality       Suitability, Accuracy, Interoperability, Compliance, Security
Reliability         Maturity, Recoverability, Fault Tolerance
Usability           Learnability, Understandability, Operability
Efficiency          Time behavior, Resource behavior
Maintainability     Stability, Analyzability, Changeability, Testability
Portability         Installability, Conformance, Replaceability, Adaptability

Table: ISO 9126 Quality Characteristics
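The ISO 9126 taxonomy can also be held as a simple data structure for checklist-style
assessments (a sketch; spellings follow the table above rather than the standard's exact
wording):

```python
# ISO 9126: six characteristics, each with its sub-characteristics.
ISO_9126 = {
    "Functionality":   ["Suitability", "Accuracy", "Interoperability",
                        "Compliance", "Security"],
    "Reliability":     ["Maturity", "Recoverability", "Fault Tolerance"],
    "Usability":       ["Learnability", "Understandability", "Operability"],
    "Efficiency":      ["Time behavior", "Resource behavior"],
    "Maintainability": ["Stability", "Analyzability", "Changeability",
                        "Testability"],
    "Portability":     ["Installability", "Conformance", "Replaceability",
                        "Adaptability"],
}

# Six areas and 21 sub-characteristics, as stated in section 5.1.
total = sum(len(subs) for subs in ISO_9126.values())  # -> 21
```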


c) Quality Model for COTS components:

Most of McCall's quality factors are included in the ISO 9126 classification, or are at least
covered by similar quality attributes. However, not all the characteristics of a software product
as defined by ISO 9126 are directly applicable to COTS components. The component quality model
proposed here is therefore based on ISO 9126, with some adaptations for components. The model
also incorporates marketing characteristics and some relevant component information not
supported in other quality models. The complete description of the component quality model and
its characteristics can be seen in Table 2.3, which classifies the quality characteristics into
two classes:
i) the quality characteristics that are observable at runtime, i.e. at component
execution time, and
ii) the quality characteristics that are observable during the product life cycle, i.e.
during component and component-based system development.
Characteristics     Sub-characteristics (Runtime)       Sub-characteristics (Life cycle)
Functionality       Accuracy, Security                  Suitability, Interoperability, Compliance
Reliability         Recoverability                      Maturity
Usability           -                                   Learnability, Understandability, Operability
Efficiency          Time behavior, Resource behavior    -
Maintainability     -                                   Changeability, Testability
Portability         -                                   Replaceability

Table: Quality Model for COTS components

As we can see, this is basically the ISO quality model (Table 2.2) with some quality attributes
removed – several Portability, Maintainability and Reliability sub-characteristics disappear –
while other characteristics, such as Usability, change their meaning in this new context. The
following list discusses the main changes in the quality model for COTS components as compared
to the ISO 9126 model.

1. Functionality. This characteristic has a similar meaning for components as for general
software products. It expresses the ability of a component to provide the services and functions
stated in its specification when used under the specified conditions. The sub-characteristic
Compliance in this model indicates whether former versions of the component are compatible with
its current version, i.e., whether the component will work when integrated in a context where a
prior version worked correctly.

2. Reliability. The definition of reliability originates from the probability that a system
will fail within a given period of time. The probability of failure depends directly on the
usage profile and context of the module under consideration. Reliability also depends on the
software architecture and on how components are assembled; a fault-tolerant, redundant
architecture improves the reliability of the assembled component-based system. The Maturity
sub-characteristic is measured in terms of the number of commercial versions and the time
intervals between them, while Recoverability measures whether the component is able to recover
from unexpected failures, and how it implements these recovery mechanisms.

3. Usability. The degree of ease with which a user can learn to operate, prepare inputs for, and
interpret outputs from a system or component. This characteristic and all its sub-characteristics
are perhaps the best example of a completely different meaning for software components as
compared to other general software products. The reason is that, in CBSD, the end-users of
components are the application developers and system designers who have to build applications
with them, rather than the people who have to interact with them. Thus, the usability of a
component should be interpreted as its ability to be used by the application developer when
constructing a software product or system with it. Under this characteristic we have included
attributes that measure a component's usability during the design of applications.

4. Efficiency. We adopt the definition and classification proposed by ISO 9126 (which
distinguishes between Time behavior and Resource behavior), although many people prefer to
speak of Performance and use other sub-classifications. Efficiency is a directly composable,
architecture-related attribute. It is affected by the component technology, mainly through
resource usage by the run-time system, but also by the interaction mechanisms. Good efficiency
means low memory, processor, and communication medium usage, but it is also potentially in
conflict with many higher-prioritized quality attributes. In any case, the attributes we have
identified for this characteristic are applicable independently of the name or sub-classification
used.

5. Maintainability. It is related to the activities of people and not of the system itself. This
characteristic describes the ability of a software product to be modified. Modifications include
corrections, improvements or adaptations to the software, due to changes in the environment, in
the requirements, or in the functional specifications. Component technologies might provide
support for dynamic upgrading/deployment of components, which can improve the
maintainability of a system. The user of a component (i.e. the developer) does not need to do the
internal modifications but he does need to adapt it, re-configure it, and perform the testing of the
component before it can be included in the final product. Thus, changeability and testability are
defined as sub-characteristics that must be measured for components.

6. Portability. This characteristic is defined as the ability of a software system to be transferred
from one operating environment to another. In CBSD, portability is intrinsic to the nature of
components, which are in principle designed and developed to be reused in different
environments. In CBSD, re-use means not only using a component more than once, but also
using the same component in different environments.

5.2 Software Quality Measurement Concepts Overview:

There is, however, no general consensus on defining and categorizing software product
quality characteristics. At the highest level, the main goal of a software measurement process is
to satisfy certain information needs by identifying the entities and the attributes of these
entities (which are the targets of the measurement process). Attributes and information
needs are related through measurable concepts (which belong to a Quality Model). According
to Fenton, attributes can be external or internal: attributes whose value depends on the
environment in which the software operates are external, as opposed to attributes that do not
depend on this environment, which are internal.

Then, we need to measure these attributes using metrics. A metric relates a defined
measurement approach and a measurement scale. A metric is expressed in units, and can be
defined for more than one attribute. Three kinds of metrics can be distinguished: direct metrics,
indirect metrics, and indicators.
A measurement approach is a generalization of the different approaches used by the three
kinds of metrics for obtaining their respective measures. A direct metric applies a measurement
method. An indirect metric uses a measurement function (which rests upon other direct and/or
indirect metrics). Finally, an indicator uses an analysis model (based on decision criteria) to
obtain a measure that satisfies an information need.

Finally, the act of measuring software is defined as a set of operations that aim at determining a
value of a measure, for a given attribute of an entity, using a measurement approach. Measures
are obtained as the result of performing measurements (actions).
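The three kinds of metrics can be illustrated with a small sketch. The following is a hypothetical example (the metric names and the 5-defects-per-KLOC threshold are illustrative assumptions, not part of any standard): lines of code is a direct metric, defect density is an indirect metric derived from it, and an indicator applies a decision criterion to the resulting measure.

```python
# Hypothetical sketch of direct metrics, indirect metrics and indicators.

def loc(source: str) -> int:
    """Direct metric: applies a measurement method (count non-blank lines)."""
    return sum(1 for line in source.splitlines() if line.strip())

def defect_density(defects: int, lines: int) -> float:
    """Indirect metric: a measurement function over other measures
    (defects per thousand lines of code, KLOC)."""
    return defects / (lines / 1000)

def quality_indicator(density: float, threshold: float = 5.0) -> str:
    """Indicator: an analysis model with a decision criterion
    (the 5.0 threshold is an assumed example value)."""
    return "acceptable" if density <= threshold else "needs review"

source = "def f(x):\n    return x + 1\n"
print(loc(source))                      # direct measure: 2
density = defect_density(2, 400)        # 2 defects in 400 LOC -> 5.0 per KLOC
print(density, quality_indicator(density))
```

The chain mirrors the text: method, then function over other metrics, then analysis model satisfying an information need.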

In this chapter we concentrate on a particular quality model, ISO 9126, which is defined in
terms of set of characteristics and sub-characteristics, as well as the relationships between
them, that provide the basis for specifying quality requirements and for evaluating quality. The
entities of our study will be software components, no matter whether they are in-house
developed, or acquired as COTS components. Since the model proposed by ISO 9126 is a
generic quality model for any software product, we need to particularize it for software
components.

5.3 Techniques to enhance software quality:

a) Software Tester:

This is the single most important thing. Every software project needs a tester, dedicated to
finding errors and assuring quality. The cost of her/his pay is insignificant compared to the time
needed to fix the errors that are found later in the client's live environment. Not to mention, it
will be better for the company image to release applications that have fewer bugs, and to have
happy customers. The tester will inevitably be the person who knows the software best. She is
the one who knows the business rules behind reports, what should happen if the user clicks this
or that button, or which SQL procedure will be executed if a certain window opens. It is
impossible to develop quality software without a tester. The tester can (and will) be the one who
deals with customer support, and, as she knows the application best, she can also write the help
and documentation, tasks that programmers abhor and try to avoid at any cost.
b) Software Analyst and architect:
The article about the waterfall software development model was originally published as a
negative example, but over time it has become clear that there is no better substitute for it in
big, complicated projects.
Iterative development, agile development and extreme programming can reduce cost and
development time, but they almost always come with a higher error rate and less customer
satisfaction (more iterations mean constant updates for the client; a fast development cycle
means that documentation and help are almost inevitably outdated, and so forth). Also,
iterative development means that programmers must always work very closely with the client
and have excellent knowledge of both the program and the business rules behind it.
Furthermore, the more complicated the program is (and becomes), the less "payoff" there will
be from iterative development. In short: for small and medium-sized projects, iterative
development can be an excellent idea, but for big, highly complicated applications it is a
deathtrap. Also, iterative programming requires very good analysts and software architects.
Where I am going with all this: hire good business analysts and architects. Every mistake made
during pre-development is costlier, both financially and time-wise, than a mistake made during
development. Most Estonian software companies use a weird mix of iterative development
and the waterfall model, which seems to work best for medium-size companies and medium-size
projects; however, they have not actually thought about the model or documented it, and
software development just happens.

c) Software code review:


Code review is very cheap to implement, and invaluable for quality. Another programmer will
look randomly at a programmer's code (non-blocking review). It does not matter whether that
other programmer will spot all the mistakes (he almost certainly won't), but knowing that
another programmer will look at his code, all programmers will pay more attention and write
higher quality code. It adds just a few percent to the development time, but forces programmers
to write clean, commented, readable and reusable code.

d) Code rules
Not draconian, horrible rules like "one comment per every three lines of code", but guidelines,
some of which need to be enforced ("every method/procedure needs to have an introductory
comment"), and some that are more relaxed ("string variable names should start with s
("sName"), integer variable names with i ("iAmount")"). Rules should be created by mutual
agreement, separately for each programming language (C# and SQL cannot have the same
rules). Rules need to be easily accessible (an intranet wiki is perfect for this).
This makes it easier to debug the code, and a new programmer can understand the code much
faster. Zero cost to implement (a few hours for someone to write a draft, an hour for all the
programmers to discuss it).

e) Unit testing
Unit testing is more time-costly and only feasible in certain situations. However, it will reduce
the number of "accidental bugs" almost to zero. At minimum, all programmers should be
familiar with unit testing and create unit tests for central procedures, which the tester or a
programmer can run after changes. Unit testing is central to iterative programming (test-driven
development, TDD).
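A minimal example of a unit test for a "central procedure" might look like the following. The procedure and its expected values are hypothetical, chosen only to show the shape of a test that anyone can rerun after a change:

```python
import unittest

# Hypothetical central procedure under test.
def vat_inclusive_price(net: float, rate: float = 0.20) -> float:
    """Return the gross price given a net price and a VAT rate."""
    return round(net * (1 + rate), 2)

class VatPriceTest(unittest.TestCase):
    """Unit tests that a tester or programmer can rerun after changes."""

    def test_default_rate(self):
        self.assertEqual(vat_inclusive_price(100.0), 120.0)

    def test_zero_rate(self):
        self.assertEqual(vat_inclusive_price(50.0, rate=0.0), 50.0)

# Run with: python -m unittest <module_name>
```

If a later change silently alters the rounding or the default rate, these tests fail immediately instead of the bug surfacing in the client's live environment.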

f) Miscellaneous small things

Automated testing. Great for repetitive testing, but the downside is that it may pass over
certain unforeseeable mistakes and UI issues. Many issues with Windows Vista stem from
relying overly on automated testing. It cannot replace a human tester, but can be utilized
for tedious basic testing.
Version control. Needs to be used religiously, also for the database. It will not improve
quality per se, but will help to deal with errors faster, by making it easy to revert to a
working version, see changes between versions easily, et cetera.

5.4 Capability Maturity Model


The Capability Maturity Model (CMM) was developed at the Software Engineering Institute of
Carnegie Mellon University. It supports the structured management of software development
and is based on quality assurance processes and process improvement activities developed in
manufacturing by Juran, Shewhart and Deming. It was developed to cover the software
development lifecycle, and it classifies the maturity of the software development management
process into five levels:
i) Initial. The software process is characterized as ad hoc, and occasionally even chaotic. Few
processes are defined, and success depends on individual effort and heroics.
ii) Repeatable. Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on projects
with similar applications.
iii) Defined. The software process for both management and engineering activities is
documented, standardized, and integrated into a standard software process for the organization.
All projects use an approved, tailored version of the organization's standard software process for
developing and maintaining software.
iv) Managed. Detailed measures of the software process and product quality are collected. Both
the software process and products are quantitatively understood and controlled.
v) Optimizing. Continuous process improvement is enabled by quantitative feedback from the
process and from piloting innovative ideas and technologies.

5.5 CMMI Process Areas and Capability Levels


CMMI consists of four process areas:

Project Management: planning, control, monitoring, risk management
Support: configuration management, tracking service issues
Engineering: requirements management, integration, verification
Process Management: process focus and definition, training, service culture

Prescriptive detailed activities are provided for all these areas.

CMMI defines new capability levels:

Capability Level                  Implementation in a Service Organization
Level 0: Incomplete               The organization implements only some applicable specific
                                  practices.
Level 1: Performed                The organization lacks the necessary processes for sustaining
                                  service levels.
Level 2: Managed                  The organization manages and reacts, but is not able to
                                  strategically predict costs of services and compete with lean
                                  competitors.
Level 3: Defined                  The organization anticipates changes in its environment and
                                  plans, but still lacks the ability to forecast changing costs
                                  and schedules of services.
Level 4: Quantitatively Managed   The organization statistically forecasts and manages
                                  performance against selected cost, schedule and customer
                                  satisfaction levels.
Level 5: Optimizing               The organization can reduce operating costs by improving
                                  current process performance or by introducing innovative
                                  services to maintain its competitive edge.

5.6 Six Sigma and quality management solutions


Six Sigma:
The term "Six Sigma" is a statistical term that refers to 3.4 defects per million opportunities, or
99.99966 percent accuracy. Developed in the 1980s by Motorola, Six Sigma is a measure of
quality that strives for near perfection and is a disciplined, data-driven approach and
methodology for eliminating defects in any process, from manufacturing to transactional and
from product to service. Companies can achieve an incredibly high level of performance with
the Six Sigma rigor and its data-driven approach to problem solving and business process
improvement, since the focus is on driving what is most critical to customers, resulting in
increased performance and profitability.

Six Sigma Methodologies:

Six Sigma methodologies provide businesses with the tools to improve the capability of their
business processes and starts by asking fundamental questions based around customer
requirements. Applying rigorous analysis to all processes in the business, Six Sigma can assess
whether customer requirements are being met. Since 'Metrics' lie at the heart of Six Sigma, the
basic approach is to measure performance on an existing process, compare it with a statistically
valid ideal and figure out how to eliminate any variation each time the process fails to deliver
and a defect is found. Six Sigma rigorously works towards uncovering the root cause of these
defects and eliminating them time and again, resulting in reduced defects, declining costs and
ultimately achieving a highly improved state of customer satisfaction.
The methodology provides a logical sequence for applying and repackaging existing problem
solving tools and concepts. Various quality management tools are applied at each step and a
project sponsor review is recommended at conclusion of each step before moving onto the next.

The five essential steps:

Define
- Map the process and understand customer needs, feedback and business objectives.
- Identify CTQs (critical-to-quality characteristics) that customers consider to have the most
  impact on quality (the projects that will have the most impact, versus those that could stand
  improvement but are not critical).

Measure
- Identify the key internal processes that influence CTQs and measure the defects currently
  generated relative to those processes.
- Create and stratify frequency plots and conduct Pareto analysis (80/20).
- Calculate starting sigma levels.

Analyze
- Discover why defects are generated by identifying the key variables that are most likely to
  create process variation.
- Create a focused problem statement.
- Explore potential causes.
- Organize potential causes.
- Collect data.
- Use statistical methods to qualify cause-and-effect relationships.

Improve
- Identify the maximum acceptable ranges of the key variables.
- Validate a system for measuring deviations of the variables.
- Modify the process to stay within the acceptable range.
- Create possible solutions for root causes.
- Select solutions.
- Develop and pilot plans.
- Implement.
- Measure results.
- Evaluate.

Control
- Put tools in place to ensure that the key variables remain within the maximum acceptable
  ranges over time.
- Develop and document standard practices.
- Train staff teams.
- Monitor performance.
- Create a process for updating procedures.
- Summarize and communicate results.
- Recommend future plans.
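The Pareto (80/20) analysis from the Measure step can be sketched in a few lines. The defect categories and counts below are hypothetical, invented only to illustrate how the "vital few" categories are separated from the "trivial many":

```python
from collections import Counter

# Hypothetical defect log gathered during the Measure step.
defects = (["ui"] * 48 + ["data"] * 27 + ["auth"] * 12 +
           ["perf"] * 8 + ["docs"] * 5)

def pareto(counts: Counter, cutoff: float = 0.8) -> list[str]:
    """Return the 'vital few' categories that together account for at
    least `cutoff` (default 80%) of all recorded defects."""
    total = sum(counts.values())
    vital, cumulative = [], 0
    for category, n in counts.most_common():
        vital.append(category)
        cumulative += n
        if cumulative / total >= cutoff:
            break
    return vital

print(pareto(Counter(defects)))  # ['ui', 'data', 'auth']
```

Here three of the five categories account for over 80% of defects, so the Analyze and Improve steps would focus on those first.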

Six Sigma is used for unknown causes or situations, and for problems that are neither
commonplace nor well defined. It is also used when a broad-spectrum approach is inappropriate
and other problem-solving methods fail, and in complex situations that have many variables.
Six Sigma Belts
The Six Sigma Belts are based on the level of competence in understanding and applying the
related tools. Competencies for each belt can vary by organization and training institution:

- Green Belt (GB): basic analytical tools; works on less complex projects.
- Black Belt (BB): emphasis on application and analysis; works on projects with help from
  Green Belts.
- Master Black Belt (MBB): understands the application and the statistical theory behind it,
  trains other belts and leads project reviews.

The adoption of Six Sigma helps define the process for problem solving and works on a proven
methodology to solve problems.

The results are consistent and the focus is always on the bottom line, and as with all adoptions it
requires a cultural change in order to gain best results.

Six Sigma failures related to organization/management:

- Failure to create a vision and concept related to customer expectations
- Failure to follow up on the annual operating plan
- Lack of leadership at the executive level
- Lack of priority from business executives, who fail to show up for report-outs
- Non-goal-based deployment of Six Sigma
- Lack of an effective plan to deploy Six Sigma
- Lack of a detailed change process
- Lack of metrics in place for management participation
- Lack of metrics for Champions
- Failure of Champions to show up for report-outs
- Failure to provide feedback even with metrics in place
- Lack of multiple projects selected and queued for each MBB, BB or GB
- Failure to communicate deployment plans effectively through the organization
- Lack of a rewards or recognition program
- Lack of programs to retain trained personnel
- Failure of the project selection process to identify projects related to business objectives
- Lack of accountability, with middle management providing little support
- Failure to buy in at the Process Owner level
- Failure to hold suppliers accountable for sending bad material due to price considerations
- Failure to implement across organization functions/departments, such as design and
  marketing, after the launch of the operation
- Buying cheap software to save money on the deployment
Steps towards Six Sigma successes:
Assembling your team:
Top management must have an expanded role and demonstrate leadership in directing and
supporting the Six Sigma initiative through a variety of methods from establishing and
monitoring achievement of programs to communicating customer requirements to the work
force. Top management must be involved during the implementation driving from the top and at
the same time empowering a multidisciplinary team of individuals from within the organization
who understand the critical processes involved, and take ownership of solutions, controls and
procedures. Many of the most successful corporations today have CEOs and presidents who are
trained and certified in Six Sigma. Top management should also choose the right people for
implementing the projects, by matching skills sets to projects and select the best and brightest to
participate in the Six Sigma program.
Corporate Goals and Objectives:
It is absolutely critical that all Six Sigma activities contribute to corporate goals and objectives
and are aligned with the organization‘s mission, responsibilities and policies - covering each
element of the business, purpose and scope of activity the organization performs.
Training, Certification and Infrastructure:
To ensure effective Six Sigma implementation, it is essential that employees are trained and
certified in various roles - Champion, Black Belt and Master Black Belt. They must be trained to
identify potential problems and initiate appropriate countermeasures and most importantly have a
Six Sigma deployment strategy.

Continual Improvement:
Identifying opportunities for continuous improvement by constantly tracking critical customer
complaints and feedback, finding areas that can be detrimental to improvement and removing
blocks towards achieving the goals and objectives towards overall corporate and organizational
improvement. Applying and monitoring these activities will go a long way toward driving
continual improvement.

Execution and Accountability: Meticulous execution and complete accountability are critical to
Six Sigma success and can be achieved by communicating the process across the organization.
Six Sigma deliverables need to be incorporated into every employee's performance objectives,
and training/certification must be a prerequisite for advancement in the organization. Project
status reports must be sent out regularly and at frequent intervals across the various levels of
the business to ensure buy-in and tracking at all times.

The Business Focus: Six Sigma is good for business, delivering business results that can
accelerate growth, reduce costs and ultimately deliver extraordinary profits to stakeholders.
Manufacturing industries, health care and many other sectors have adopted Six Sigma processes
to improve performance and deliver unparalleled quality and excellence in products, services
and delivery to customers.

5.7 Process Management vs. Project Management

Confusion abounds about the differences and similarities between process management and
project management. There is a lot of literature in project management circles purporting that we
should be creating organizations that are led by projects and project management, and forming
Project Management Environments to support these. But there are also circles that purport that all
work is a process, and that we should be creating organizations that are led by process management
and, in turn, form Quality Management Environments for support.
Definition of a Project:
The Project Management Institute's Body of Knowledge defines a Project as, "A temporary endeavor
undertaken to create a unique product, service, or result." Temporary means that every project has a
definite beginning and a definite end date. Unique means that the product or service is different in
some distinguishing way from similar products or services. By examining this definition we
understand that projects are:
- Time-bound and have a customer.
- Have clear beginning and end states. Projects can be as short as half a day or as long as a
  number of years; longer projects are often broken down into phases or stages, each one
  becoming a project unto itself.
- Follow a specific cycle of Initiation, Definition, Planning, Execution and Close.

Definition of a Process:
A process is an ongoing, repeatable series of activities performed for a customer. By examining
this definition we understand that processes are:
- On-going, with no clearly defined beginning and end states.
- Customer driven.
- Repeatable.

All projects are managed. All processes should be continuously analyzed for improvement or
reengineering.
Project Management:
Project Management is the application of knowledge and expertise to the development of Project
Scope and a Project Plan, which meets or exceeds stakeholder requirements.
Process Improvement:
Process Improvement is the examination of a business process in order to better meet customer &
quality requirements.
Business Process Reengineering:
Business Process Reengineering is the fundamental re-thinking and re-designing of a business
process in order to exceed customer and quality requirements.
By examining the definition for project management it can be determined that the management of a
project is a process. The management of a project follows a consistent series of steps that ensures it
is successfully managed and meets the project's customer requirements. However, the process is not
subject to an improvement process. If the project management methodology (or process) is followed,
it is assumed that the project will successfully meet its defined deliverables.
By examining the definition for business process improvement and business process reengineering,
you can see that all work is a process and can be improved or reengineered in order to meet the
continuously changing needs of the customers (internal or external) for whom the process has been
designed.
Through our work in Quality Management and Project Management we have found that all work is a
process. It can be flowcharted, measured and improved. Organizations that are quality driven will
map all of their work processes. It then becomes easy to determine who does what and when they
have to do it, in order to ensure customer requirements are met. These flowcharts can replace job
descriptions. All employees can examine the flowcharts and immediately determine where their job
fits into the work to be done. As well, they can easily see where their work comes from and when
they're finished, where their part of the process then goes.
Now, back to our earlier position. Project management circles suggest that Project Management
Offices (PMOs) should be put in place to oversee projects, ensuring they are properly resourced and
prioritized. PMOs also help to lead the way towards the creation of a Project Management
Environment within the organization.
A more current approach, albeit one with few examples so far as it is very leading-edge in its
thinking, is to merge process management with project management and create a Strategic Change
Management Office. It would oversee all process management (process improvement, reengineering,
benchmarking studies, ISO/QS 9000, Six Sigma initiatives, etc.) together with project management.
Because individuals are assigned to teams, and these teams are involved in some form of process
management and/or project management, this Change Management Office would oversee the link of
these to the organization's strategic direction (established through the process of Strategic Planning)
and the resourcing of these.
In reviewing the definitions and literature, it becomes apparent that the correct thinking is that all
work is a process and that projects fit into the framework of process management. Dr. W. Edwards
Deming, quality management theorist, consultant and author, once said, "If you can't describe what
you are doing as a process, you don't know what you are doing."
Chapter 6 Overview of Management Information System and Decision Making

Management Information System (MIS) Concepts:

6.0 Introduction to MIS:

The concept of the MIS has evolved over a period of time comprising many different facets of
the organizational function. MIS is a necessity of all the organizations. The initial concept of
MIS was to process data from the organization and present it in the reports at regular intervals.
The system was largely capable of handling the data from collection to processing. It was more
impersonal, requiring each individual to pick and choose the processed data and use it for his
requirements. This concept was further modified when a distinction was made between data and
information. Information is a product of the analysis of data. This concept is similar to that of a
raw material and a finished product. What is needed is information, not a mass of data.
However, the data can be analyzed in a number of ways, producing different shades and
specifications of the information as a product. It was, therefore, demanded that the system
concept be an individual- oriented, as each individual may have a different orientation towards
the information. This concept was further modified, that the system should present information
in such a form and format that it creates an impact on its user, provoking a decision or an
investigation. It was later realized that even though such an impact was a welcome development,
some sort of selective approach was necessary in the analysis and reporting. Hence, the concept
of exception reporting was incorporated into MIS, and a norm for an exception had to evolve
in the organization. The concept remained valid only to the extent that the norm for an
exception remained true and effective. Since the environment is competitive and ever-changing,
fixing the norm for an exception becomes a futile exercise, at least for the people
in the higher echelons of the organization. The concept then evolved that the system should
be capable of handling need-based exception reporting. This need may be that of an individual
or of a group of people. This called for keeping all data together in such a form that it can be
accessed by anybody and processed to suit their needs. The concept is that the data is one
but it can be viewed by different individuals in different ways.
Over a period of time, when these conceptual developments were taking place, the
concept of end-user computing using multiple databases emerged. This concept brought a
fundamental change in MIS: decentralization of the system, with the user of the
information becoming independent of computer professionals. When this became a reality, the
concept of MIS changed to that of a decision-making system. The job of the computer
department is to manage the information resource and leave the task of information processing
to the user. The concept of MIS in today's world is a system which handles databases, provides
computing facilities to the end user and gives a variety of decision-making tools to the user of
the system.

The concept of MIS gives high regard to the individual and his ability to use information.
An MIS gives information through data analysis. While analyzing the data, it relies on many
academic disciplines. These include theories, principles and concepts from Management
Science, Psychology and Human Behavior, making the MIS more effective and useful. These
academic disciplines are used in designing the MIS and in evolving the decision support tools
for modeling and decision-making.
The MIS, therefore, is a dynamic concept subject to change, time and again, with a
change in the business management process. It continuously interacts with the internal and the
external environment of the business and provides a corrective mechanism in the

6.1 MIS Definition

The Management Information System (MIS) is a concept of the last decade or two. It has been
understood and described in a number of ways. It is also known as the Information System, the
Information and Decision System, and the Computer-based Information System.

The MIS has more than one definition, some of which are given below.

• The MIS is defined as a system which provides information support for decision
making in the organization.
• The MIS is defined as an integrated system of man and machine for providing the
information to support the operations, the management and the decision making
function in the organization.
• The MIS is defined as a system based on the database of the organization evolved
for the purpose of providing information to the people in the organization.

• The MIS is defined as a Computer – based Information System.


Though there are a number of definitions, all of them converge on one single point, i.e., the MIS
is a system to support the decision-making function in the organization. The difference lies in
defining the elements of the MIS. However, in today's world the MIS is a computerized business
processing system generating information for the people in the organization, to meet their
information needs for decision making and to achieve the corporate objectives of the
organization.

In any organization, small or big, a major portion of the time goes into collecting and processing
data and documenting it for the people. Hence, a major portion of the overheads goes into this
kind of unproductive work in the organization. Every individual in an organization is
continuously looking for some information which is needed to perform his/her task. Hence, the
information is people-oriented, and it varies with the nature of the people in the organization.
The difficulty in handling this multiple requirement of the people is due to a couple of reasons.
The information is a processed product meant to fulfill an imprecise need of the people. It takes
time to search the data, and may require a difficult processing path. It has a time value, and
unless processed on time and communicated, it has no value. The scope and the quantum of information
are individual-dependent, and it is difficult to conceive the information as a well-defined product
for the entire organization. Since people are instrumental in any business transaction, a
human error is possible in conducting it. Since a human error is difficult to control, the
difficulty arises in ensuring a hundred per cent quality assurance of information in terms of
completeness, accuracy, validity, timeliness and meeting the decision making needs.
In order to get a better grip on the activity of information processing, it is necessary to have a
formal system which should take care of the following points:

• Handling of voluminous data.
• Confirmation of the validity of data and transactions.
• Complex processing of data and multidimensional analysis.
• Quick search and retrieval.
• Mass storage.
• Communication of the information to the user on time.
• Fulfilling the changing needs of information.

The management information system uses computers and communication technology to deal
with these points of supreme importance.

6.2 ROLE OF THE MANAGEMENT INFORMATION SYSTEM


The role of the MIS in an organization can be compared to the role of the heart in the body. The
information is the blood and the MIS is the heart. In the body, the heart plays the role of supplying
pure blood to all the elements of the body, including the brain. The heart works faster and
supplies more blood when needed. It regulates and controls the incoming impure blood,
processes it and sends it to the destination in the quantity needed. It fulfills the need of blood
supply to the human body in the normal course and also in a crisis.

The MIS plays exactly the same role in the organization. The system ensures that appropriate
data is collected from the various sources, processed, and sent further to all the needy
destinations. The system is expected to fulfill the information needs of an individual, a group of
individuals, and the management functionaries: the managers and the top management.

The MIS satisfies the diverse needs through a variety of systems such as Query Systems,
Analysis Systems, Modeling Systems and Decision Support Systems. The MIS helps in Strategic
Planning, Management Control, Operational Control and Transaction Processing.

The MIS helps the clerical personnel in the transaction processing and answers their queries on
the data pertaining to the transaction, the status of a particular record and references on a variety
of documents. The MIS helps the junior management personnel by providing the operational data
for planning, scheduling and control, and helps them further in decision making at the operations
level to correct an out-of-control situation. The MIS helps the middle management in short-term
planning, target setting and controlling the business functions. It is supported by the use of the
management tools of planning and control. The MIS helps the top management in goal setting,
strategic planning, and evolving the business plans and their implementation.

The MIS plays the role of information generation, communication, problem identification and
helps in the process of decision making. The MIS, therefore, plays a vital role in the
management, administration and operations of an organization.

6.3 IMPACT OF THE MANAGEMENT INFORMATION SYSTEM

Since the MIS plays a very important role in the organization, it creates an impact on the
organization's functions, performance and productivity.
The impact of the MIS on the functions is in its management. With a good support, the management
of marketing, finance, production and personnel becomes more efficient. The tracking and
monitoring of the functional targets becomes easy. The functional managers are informed about
the progress, achievements and shortfalls, and about the probable trends in the various aspects of
business. This helps in forecasting and long-term perspective planning. The manager's attention
is brought to a situation which is exceptional in nature, inducing him to take an action or a
decision in the matter. A disciplined information reporting system creates a structured data and a
knowledge base for all the people in the organization. The information is available in such a
form that it can be used straight away, or after blending it with analysis, saving the manager's
valuable time.
The MIS creates another impact in the organization, which relates to the understanding of the
business itself. The MIS begins with the definition of a data entity and its attributes. It uses a
dictionary of data, entities and attributes, respectively, designed for information generation in the
organization. Since all the information systems use the dictionary, there is a common understanding
of terms and terminology in the organization, bringing clarity in the communication and a similar
understanding of events across the organization.
The MIS calls for a systemization of the business operations for an effective system design.

A well-designed system with a focus on the manager makes an impact on the managerial
efficiency. The fund of information motivates an enlightened manager to use a variety of tools of
the management. It helps him to resort to such exercises as experimentation and modeling. The
use of computers enables him to use the tools and techniques which are impossible to use manually.
The ready-made packages make this task simpler. The impact is on the managerial ability to
perform. It improves the decision making ability considerably.
Since the MIS works on the basic systems such as transaction processing and databases, the
drudgery of the clerical work is transferred to the computerized system, relieving the human
mind for better work. It will be observed that a lot of manpower is engaged in this activity in the
organization. If you study the individual's time utilization and its application, you will find that
seventy per cent of the time is spent in recording, searching, processing and communication.
This is a large overhead in the organization. The MIS has a direct impact on this overhead. It
creates an information-based work culture in the organization.

6.4 MANAGEMENT INFORMATION SYSTEM AND COMPUTER

Translating the real concept of the MIS into reality is technically an infeasible proposition
unless computers are used. The MIS relies heavily on the hardware and software capacity of the
computer and its ability to process, retrieve and communicate with no serious limitations.
The variety of hardware having distinct capabilities makes it possible to design the MIS
for a specific situation. For example, if the organization needs a large database and very little
processing, a computer system is available for such a requirement. Suppose the organization has
multiple business locations at long distances, and the need is to bring the data to one place,
process it, and then send the information to the various locations; it is possible to have a computer
system with a distributed data processing capability. If the distance is too long, then the
computer systems can be hooked up through a satellite communication system. The ability of the
hardware to store data and process it at a very fast rate helps to deal with the data volumes, their
storage and access effectively. The ability of the computer to sort and merge helps to organize
the data in a particular manner and process it for complex, lengthy computations. Since the
computer is capable of digital, graphic, word, image, voice and text processing, it is exploited to
generate information and present it in a form which is easy to understand for the information
user.
The ability of a computer system to provide security of data brings confidence in the
management for the storage of data on a magnetic medium in an impersonal mode. The computer
system provides facilities such as READ ONLY access, where you cannot delete or update the data.
It provides access to the selected information through a password and layered access facilities.
The confidential nature of the data and information can be maintained in a computer system. With
this ability, the MIS becomes a safe application in the organization.

The software, an integral part of a computer system, further enhances the hardware
capability. The software is available to handle procedural and nonprocedural data processing.
For example, if you want to use a formula to calculate a certain result, an efficient language is
available to handle the situation. If you do not use a formula but have to resort every time to a
new procedure, the nonprocedural languages are available.

The software is available to transfer the data from one computer system to another. Hence,
you can compute the results at one place and transfer them to a computer located at another place
for some other use. The computer system, being configurable to the specific needs, helps to
design a flexible MIS.

The advancement in computers and the communication technology has made distance, speed,
volume and complex computing an easy task. Hence, designing the MIS for a specific need and
simultaneously designing a flexible and open system becomes possible, thereby saving a lot of
drudgery of development and maintenance.

6.5 Decision Making Systems

The decision making systems can be classified in a number of ways. There are two types of
systems based on the manager’s knowledge about the environment. If the manager operates in a
known environment then it is a closed decision making system. The conditions of the closed
decision making system are:

(a) The manager has a known set of decision alternatives and knows their outcomes fully in
terms of value, if implemented.

(b) The manager has a model, a method or a rule whereby the decision alternatives can be
generated, tested, and ranked.

(c) The manager can choose one of them, based on some goal or objective.

A few examples are a product mix problem, an examination system to declare pass or fail, or
the acceptance of fixed deposits.
If the manager operates in an environment not known to him, then the decision making
system is termed as an open decision making system. The conditions of this system are:

(a) The manager does not know all the decision alternatives.

(b) The outcome of the decision is also not known fully. The knowledge of the outcome may be
a probabilistic one.

(c) No method, rule or model is available to study and finalize one decision among the set of
decision alternatives.

(d) It is difficult to decide an objective or a goal and, therefore, the manager resorts to that
decision, where his aspirations or desires are met best.

Deciding on the possible product diversification lines, the pricing of a new product, and the
plant location, are some decision making situations which fall in the category of the open
decision making systems.
The MIS tries to convert every open system to a closed decision making system by
providing information support for the best decision. The MIS gives the information support
whereby the manager knows more and more about the environment and the outcomes, and is able
to generate the decision alternatives, test them and select one of them. A good MIS achieves this.
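The closed-system cycle just described (generate the alternatives, test them against a known value model, rank them, choose one) can be sketched in a few lines of code. The product mixes and unit profits below are purely hypothetical illustration values, not taken from the text.

```python
# Sketch of a closed decision making system: all alternatives and their
# outcomes are known, so the choice reduces to ranking by a known model.
# The product mixes and unit profits are hypothetical illustration values.

def choose_best(alternatives, value_of):
    """Rank the known alternatives by the known value model; pick the top one."""
    return max(alternatives, key=value_of)

# Hypothetical product mix problem: (units of product A, units of product B).
mixes = [(10, 0), (6, 4), (0, 8)]

def profit(mix):
    a, b = mix
    return 30 * a + 40 * b  # known, fixed unit profits

best = choose_best(mixes, profit)
print(best, profit(best))  # (6, 4) 340
```

An open system cannot be coded this way: the list of alternatives and the value model are themselves unknown, which is exactly the gap the MIS tries to close with information support.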
a) Types of Decisions

The types of decisions are based on the degree of knowledge about the outcomes or the
events yet to take place. If the manager has full and precise knowledge of the event or outcome
which is to occur, then the decision making is not a problem at all. If the manager has
full knowledge, it is a situation of certainty. If he has partial knowledge, or probabilistic
knowledge, it is decision making under risk. If the manager does not have any knowledge
whatsoever, it is decision making under uncertainty.
A good MIS tries to convert a decision making situation under uncertainty into a situation
under risk, and further to certainty. Decision making in the operations management is a situation
of certainty. This is mainly because the manager in this field has fairly good knowledge of
the events which are to take place, has full knowledge of the environment, and has predetermined
decision alternatives for choice or for selection.

Decision making at the middle management level is of the risk type. This is because of the
difficulty in forecasting an event with hundred per cent accuracy and the limited scope of
generating the decision alternatives.

At the top management level, it is a situation of total uncertainty, on account of insufficient
knowledge of the external environment and the difficulty in forecasting business growth on a
long-term basis.

A good MIS design gives adequate support to all the three levels of management.
b) Nature of Decision

Decision making is a complex situation. To resolve the complexity, the decisions are
classified as programmed and non-programmed decisions.

If a decision can be based on a rule, method or even guidelines, it is called a programmed
decision. If the stock level of an item is 200 numbers, then the decision to raise a purchase
requisition for 400 numbers is a programmed-decision-making situation. The decision maker
here is told to make a decision based on the instructions or on the rule of ordering a quantity of
400 items when the stock level reaches 200.
If such rules can be developed wherever possible, then the MIS itself can be designed to
make a decision and even execute it. The system in such cases plays the role of a decision maker
based on a given rule or method. Since the programmed decision is made through the MIS, the
effectiveness of the rule can be analyzed, and the rule can be reviewed and modified from time to
time for an improvement. The programmed decision making can be delegated to a lower level in
the management cadre.
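The stock-level rule above is simple enough for the MIS to execute by itself. A minimal sketch, using the reorder level of 200 and order quantity of 400 quoted in the text:

```python
# Programmed decision: the system applies the ordering rule itself,
# so the decision can be delegated to the MIS.

REORDER_LEVEL = 200     # raise a requisition when stock falls to this level
REORDER_QUANTITY = 400  # fixed quantity prescribed by the rule

def purchase_requisition(stock_level):
    """Return the quantity to requisition, or 0 when no action is needed."""
    return REORDER_QUANTITY if stock_level <= REORDER_LEVEL else 0

print(purchase_requisition(200))  # 400: the rule fires at the reorder level
print(purchase_requisition(350))  # 0: stock is still above the reorder level
```

Because the rule is explicit in code, its effectiveness can be analyzed and the constants revised over time, exactly as the text suggests.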

A decision which cannot be made by using a rule or a model is a non-programmed
decision. Such decisions are infrequent, but the stakes are usually larger. Therefore, they cannot
be delegated to the lower level. The MIS in the non-programmed-decision situation can help to
some extent in identifying the problem and giving the relevant information to handle the specific
decision making situation. The MIS, in other words, can develop decision support systems for the
non-programmed-decision-making situations.

6.6 Methods for Deciding Decision Alternatives

There are several methods to help the manager decide among the alternatives. The methods
basically are search processes to select the best alternative upon satisfying certain goals.

Three methods for selection of decision alternatives with the goals in view are:
(a) Optimization Techniques; (b) Payoff Analysis; and (c) Decision Tree Analysis.

All the operational research models use optimization techniques to decide on the decision
alternatives. When a decision making situation can be expressed in terms of the decision versus the
probable event and its payoff value, then it is possible to construct a matrix of the decisions
versus the events, described by a value for each combination. The manager can then apply
criteria such as the maximum expected value, the maximum profit, the minimum loss or the
minimum regret.

The method of the decision tree can be adopted if the decision making situation can be
described as a chain of decisions. The process of the decision making is sequential, and a chain of
decisions achieves the end result.

The use of both the payoff matrix and the decision tree requires a probabilistic knowledge of
the occurrence of events. In many situations this knowledge is not available, and the MIS has to
provide the information support in this endeavor.

a) Optimization techniques
Linear Programming, Integer Programming, Dynamic Programming, Queuing Models,
Inventory Models, Capital Budgeting Models and so on are examples of optimization
methods. These methods are used in cases where the decision making situation is closed and
deterministic, and requires optimizing the use of resources under conditions of constraints. To
handle these situations, software packages are available. These methods are termed operational
research (OR) methods.

All the OR methods attempt to balance two aspects of business under conditions of
constraint. In the Linear Programming model, the use of resources versus the demand is balanced to
maximize the profit. In the Inventory Model, the cost of holding inventory versus the cost of
procuring the inventory is balanced under the constraint of capital and meeting the demand
requirement. In the Queuing Model, the cost of the waiting time of the customer versus the cost of
an idle time of the facility is balanced under the constraint of investment in the facility and the
permissible waiting time for the customer. In the Capital Budgeting Model, the return on
investment is maximized under the capital constraint versus the utility of the investment. The MIS
supports the formulation of a model, and then using it for the decision making.
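As an illustration of this balancing, the classical Economic Order Quantity (EOQ) formula resolves the Inventory Model's trade-off between ordering cost and holding cost in closed form. The demand and cost figures below are hypothetical, chosen only to show the calculation.

```python
from math import sqrt

# Economic Order Quantity: the order size at which the annual ordering
# cost and the annual holding cost balance, Q* = sqrt(2 * D * S / H).
# D: annual demand, S: cost per order, H: holding cost per unit per year.

def eoq(annual_demand, order_cost, holding_cost):
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical figures: 10,000 units a year, Rs 50 per order, Rs 4 to hold a unit.
q = eoq(annual_demand=10_000, order_cost=50, holding_cost=4)
print(round(q))  # 500 units per order
```

At Q* = 500, ordering 20 times a year costs Rs 1,000, and holding an average of 250 units also costs Rs 1,000: the two opposing costs are exactly balanced.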

b) The payoff analysis

When all the alternatives and their outcomes are not known with certainty, the decision is made
with the help of payoff analysis. The payoff matrix is constructed where the rows show the
alternatives and the columns show the conditions, or the states of nature, with their probability of
occurrence. The intersection of a column and a row shows the value of the outcome resulting from
that alternative and that state of nature. A typical payoff matrix for a pricing decision is given
in Table 1.

Table 1 Payoff Matrix I

Your decision            Competitor: No change   Increase   Decrease   Expected gain
Probability                            0.50        0.20       0.30
No change in the price                   4           5          8           5.40
Increase the price                       6           4          3           4.70
Decrease the price                      10          12          4           8.60

For example, if the decision chosen is no change in the price, and the competitor also does not
change the price, then your gain is '4'. The decision is taken by choosing the decision
alternative which has the maximum expected value of the outcome. Since the expected value
of the third alternative is the highest, the decision would be to decrease the price.
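The expected-gain column of Table 1 is simply a probability-weighted sum of each row, and the choice rule is to take the maximum. The calculation can be reproduced as:

```python
# Expected-value choice over the payoff matrix of Table 1.
probabilities = [0.50, 0.20, 0.30]  # competitor: no change / increase / decrease

payoffs = {
    "no change in the price": [4, 5, 8],
    "increase the price":     [6, 4, 3],
    "decrease the price":     [10, 12, 4],
}

def expected_value(row, probs):
    """Probability-weighted sum of one row of the payoff matrix."""
    return sum(p * v for p, v in zip(probs, row))

best = max(payoffs, key=lambda d: expected_value(payoffs[d], probabilities))
print(best)  # decrease the price
print(round(expected_value(payoffs[best], probabilities), 2))  # 8.6
```

The three expected values come out as 5.40, 4.70 and 8.60, matching the table, so the rule selects the third alternative.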

The concept of utility relates to the money value as considered by the decision maker. Utility is
measured in terms of utiles. Money has a value of a different degree to different decision makers,
depending upon the amount and also the manner in which it is received. If rupee one is equal to
one utile, then Rs 100 million is not 100 million utiles but could be much more. The utile value
will be different if the money is received in one lot as against in parts over several years. The
utility function is different for different decision makers. The utile value of utility has an
influence on the risk taking ability of the decision maker. A well-placed manager with a sound
business will tend to gamble, or take more risk, than a manager not so well placed in the business.
In such decision making situations, the monetary values of the outcomes are replaced by the utile
values, suited to the decision maker's utility function. In our example of pricing, if we replace the
values by utiles, the matrix would be as given below in Table 2.

Table 2 Payoff Matrix II

Your decision            Competitor: No change   Increase   Decrease   Expected utility
Probability                            0.50        0.20       0.30
No change in the price                   4          50        200          72.00
Increase the price                     200           4        400         220.80
Decrease the price                     100          20          4          55.20

Since the highest expected utility is 220.80 utiles, the decision would be to increase the price.
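Re-running the same expected-value rule on the utile values of Table 2 shows how the utility function flips the decision from "decrease" to "increase":

```python
# Expected utility over the utile values of Table 2: replacing money by
# utiles changes the chosen alternative from "decrease" to "increase".
probabilities = [0.50, 0.20, 0.30]

utilities = {
    "no change in the price": [4, 50, 200],
    "increase the price":     [200, 4, 400],
    "decrease the price":     [100, 20, 4],
}

def expected_utility(row, probs):
    """Probability-weighted sum of one row of utile values."""
    return sum(p * u for p, u in zip(probs, row))

best = max(utilities, key=lambda d: expected_utility(utilities[d], probabilities))
print(best)  # increase the price
print(round(expected_utility(utilities[best], probabilities), 2))  # 220.8
```

The outcomes are the same three pricing moves as before; only the numbers attached to them have changed, which is enough to reverse the choice.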

c) Decision tree analysis


When a decision maker must make a sequence of decisions, the decision tree analysis is
useful in selecting the set of sequential decisions.

The method of analysis can be explained by an example. The decision tree is drawn in
Fig. 6.2 with the help of symbols.

The symbols used in the tree denote the decision point, the chance event, and the probability.

Let us take an example of investment in production capacity for a planning period of five
years.

[Fig. 6.2 Decision Tree: a first-phase choice between Large Capacity and Small Capacity, each
followed by a Collaboration / No collaboration decision, under High Demand (HD) or Low
Demand (LD), with branch values such as 7.9, 9.2 and 8.2.]
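The rollback logic behind such a tree is: a chance node is worth the probability-weighted average of its branches, and a decision node is worth its best branch. A small sketch of this logic follows; the probabilities and payoff values are illustrative assumptions, not the figures of Fig. 6.2.

```python
# Decision tree rollback: chance nodes take probability-weighted averages,
# decision nodes take the best branch. All figures are hypothetical.

def chance(*branches):
    """branches are (probability, value) pairs; returns the expected value."""
    return sum(p * v for p, v in branches)

def decide(**options):
    """Returns the (name, value) of the best option at a decision point."""
    return max(options.items(), key=lambda kv: kv[1])

# First phase: choose the capacity; each choice then faces a demand chance node.
large = chance((0.6, 12.0), (0.4, 4.0))  # high demand pays 12, low demand 4
small = chance((0.6, 8.0), (0.4, 7.0))   # high demand pays 8, low demand 7

best, value = decide(large_capacity=large, small_capacity=small)
print(best)  # large_capacity
```

Working backwards from the leaves in this way handles an arbitrarily long chain of decisions, which is exactly the sequential situation the text describes.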

6.7 BEHAVIOURAL CONCEPTS IN DECISION MAKING

A manager, being a human being, behaves in a peculiar way in a given situation. The response of
one manager may not be the same as that of two other managers, as they differ on the
behavioral platform. Even though tools, methods and procedures are evolved, the decision is
many a time influenced by personal factors such as behavior.

Managers differ in their approach towards decision making in the organization, and,
therefore, they can be classified into two categories, viz., the achievement-oriented, i.e., looking
for excellence, and the task-oriented, i.e., looking for the completion of the task somehow. The
achievement-oriented manager will always opt for the best and, therefore, will be enterprising in
every aspect of the decision making. He will endeavor to develop all the possible alternatives. He
would be scientific, and, therefore, more rational. He would weigh all the pros and cons and then
conclude.
The manager's personal values will definitely influence the decision ultimately. Some of the
managers show a nature of risk avoidance. Their behavior shows a distinct pattern indicating a
conservative approach to decision making: a path of low risk or no risk. Further, even though
decision making tools are available, the choice of the tools may differ depending on the motives
of the manager. The motives are not apparent, and hence, are difficult to understand. A rational
decision in the normal course may turn out to be different on account of the motives of the
manager.

The behavior of the manager is also influenced by the position he holds in the
organization. The behavior is influenced by a fear and an anxiety that the personal image may
be tarnished and the career prospects in the organization may be spoiled due to a defeat or a
failure. The managerial behavior, therefore, is a complex mix of the personal values, the
atmosphere in the organization, the motives and the motivation, and the resistance to change.
Such behavior sometimes overrides normal rational decisions based on business and economic
principles.

The interplay of the different decision making styles of all the managers in the organization
shapes up the organizational decision making. The rationale of a business decision will largely
depend upon the individuals, their positions in the organization and their inter-relationships with
other managers.
If two managers are placed in two decision making situations, and if their objectives are
in conflict, the managers will arrive at a decision objectively, satisfying individual goals. Many a
time, they may make a conscious decision, disregarding the rationality required in a business
decision, to meet their personal goals and to satisfy their personal values. If the manager is
enterprising, he will make objectively rational decisions. But if the manager is averse to taking
risk, he will make a decision which will be subjectively rational, as he would act with limited
knowledge and also be influenced by his risk averseness. Thus, it is clear that if the attitudes and
the motives are not consistent across the organization, the decision making process slows down
in the organization.

6.8 ORGANISATION DECISION MAKING

An organization is an arrangement of individuals having different goals. Each individual enjoys
different powers and rights because of his position, function and importance in the organization.
Since there is an imbalance in the power structure, the different individuals cannot equally
influence the organizational behavior, the management process and the setting of business goals.
Ultimately, what emerges is a hierarchy of goals, which may be conflicting, self-defeating and
inconsistent.

The corporate goals and the goals of the departments/divisions, or the functional goals, many
a time are in conflict. If the organization is a system, and its departments/divisions or functions
are its subsystems, then unless the system's objectives and the subsystems' objectives are aligned
and consistent with each other, the corporate goals are not achieved.
In case of inconsistent goals, the conflict in the organization increases, affecting the
organization's overall performance. The organizational decision making should help in the
resolution of such conflicts. Otherwise, the organization suffers from indecision. The
organizational behavior theory provides different methods for resolving or avoiding such
conflicting goals, as explained in Table 6.3.
Table 6.3
Methods of Conflict Resolution

(a) Allowing local rationality in the setting of goals.
Explanation: Where the functional interdependence is minimum and the goals/objectives/targets
do not significantly influence the corporate goals.
Example: Security, Time office functions, Legal, Commercial, Administrative functions.

(b) Permission to set goals which can be achieved with an acceptable decision making rule and systems.
Explanation: Where there is functional dependence, to set local goals which will not adversely
affect the goals of the dependent functions.
Example: Production versus Sales versus Materials functions can evolve decision rules to meet
the local goals without affecting the goals of the dependent functions, or the corporate goals.

(c) Permission to achieve the goals in a sequential manner.
Explanation: If the goals are conflicting, they are resolved in a sequential manner, one at a time.
It is a deliberate decision to ignore the conflicting goals within a bounded rationality.
Example: Maximization of profit, quality level, customer satisfaction, leadership image, etc.
6.9 MIS AND DECISION MAKING CONCEPTS:

It is necessary to understand the concepts of decision making as they are relevant to the design of
the MIS. The Simon Model provides a conceptual design of the MIS and decision making,
wherein the designer has to design the system in such a way that the problem is identified in
precise terms. That means the data gathered for data analysis should be such that it provides
diagnostics and also provides a path to bring the problem to the surface.

In the design phase of the model, the designer is to ensure that the system provides models for
decision making. These models should provide for the generation of decision alternatives, test
them and pave the way for the selection of one of them. In the choice phase, the designer must
help to select the criteria for choosing one alternative amongst the many.
The concept of programmed decision making is the finest tool available to the MIS designer,
whereby he can transfer decision making from a decision maker to the MIS and still retain the
responsibility and accountability with the decision maker or the manager. In case of non-
programmed decisions, the MIS should provide the decision support systems to handle the
variability in the decision making conditions. The decision support systems provide a
generalized model of decision making.
The concept of decision making systems, such as the closed and the open systems, helps the
designer in providing design flexibility. The closed systems are deterministic and rule based;
therefore, the design needs only limited flexibility, while in an open system, the design should
be flexible to cope with the changes required from time to time.
The methods of decision making can be used directly in the MIS provided the method to be
applied has been decided. A number of decision making problems call for optimization, and OR
models are available which can be made a part of the system. The optimization models are static
and dynamic, and both can be used in the MIS. Some of the problems call for a competitive
analysis, such as a payoff analysis. In these problems, the MIS can provide the analysis based on
the gains, the regrets and the utility.

The concepts of the organizational and behavioral aspects of decision making provide an
insight to the designer to handle the organizational culture and the constraints in the MIS. The
concepts of the rationality of a business decision, the risk averseness of the managers and the
tendency to avoid uncertainty make the designer conscious of the human limitations, and
prompt him to provide a support in the MIS to handle these limitations. The reliance on
organizational learning makes the designer aware of the strength of the MIS and makes him
provide the channels in the MIS to make the learning process more efficient.

The relevance of the decision making concepts is significant in the MIS design. The
significance arises out of the complexity of decision making, the human factors in the decision
making, the organizational and behavioral aspects, and the uncertain environments. The MIS
design addressing these significant factors turns out to be the best design.

6.10 Bias in information

While choosing the appropriate method of communicating information, care has to be taken
to see that it is not biased. For example, while using the techniques of classification or filtering
of the information, it should not happen that certain information gets eliminated or does not get
classified; that is, a deliberate bias in covering certain information is to be avoided. This
bias enters because people try to block sensitive information which affects them. To
overcome this problem, a formal structure of organization should be adopted, and the type of
information and its receiver should be decided by the top management.
Many a time the data and the information are suppressed, but the inferences are communicated,
with no or little possibility of verification or rethinking. In this case, the one who draws the
inferences may have a bias in the process of collection, processing and presentation of data
and information. Though the deliberate enforcement of the inference on the receiver avoids the
possibility of multiple inferences, the processor's bias is forced on the receiver. For example,
organizations have departments like Corporate Planning, Market Research, R and D, HRD and
so on, which collect the data, analyze it for the company and communicate the inferences. In all
these cases, personal bias, organizational bias and management bias may be reflected in the
entire process of collection, processing and communication of the inference.
Table 7.3 Methods to Avoid Misuse of Information

(a) Delayed delivery of information.
Reason: The possibility of an immediate action or decision is reduced; the information will have
only a knowledge value.
Example: Sales report to the sales representative, or a copy of the invoice to the sales representative.

(b) Change in the format and content of the report.
Reason: Provide only that information which may be needed; hence the misuse is averted.
Example: Sales information to the operations management; sales versus target for the middle
management; sales with a trend analysis to the top management.

(c) Suppression and filtering of the information of a confidential and sensitive nature.
Reason: To avoid the risk of exposure and the misuse of information for achieving undesirable goals.
Example: The price and cost information; drawing and design information.

(d) Suppress the details and references of data and information.
Reason: Make it difficult to collect and process the data at the user end to meet the personal
needs of information.
Example: Statistical reports with no references.

(e) Truncated or lopsided presentation.
Reason: Make it difficult to read through the information and avoid its probable misuse.
Example: A focus on high value sales and production, with the details suppressed.
The presentation of the information itself can generate a bias and influence the user. For example, if the information is presented in alphabetical order and the list is lengthy, the first few entries will get more attention. If the information is presented on a criterion of exception, the choice of the exception and of the deviation from it creates a bias by design. For a quick grasp, information is often presented in graphical form; the choice of scale, the size of the graphic and the colour introduce a bias in the reader's mind.
The bias which may creep in inadvertently through the information system design can be tackled by making the design flexible so far as reporting is concerned: allow the manager or the decision maker to choose the classification or filtering criteria, the scope of information, the method of analysis and the presentation of the inference. However, a balance needs to be maintained between the flexibility of the design, its cost and its benefits to the managers. Bias aside, information must have certain attributes that increase its utility, as shown in Table 7.4.
Table 7.4 Attributes of Information

The accuracy in representation
   The test of accuracy is how closely the information represents a situation or event. The degree of precision decides the accuracy of representation.

The form of presentation
   Forms are qualitative or quantitative, numeric or graphic, printed or displayed, summarized or detailed.

The frequency of reporting
   How often is the information needed? How often does it need to be updated?

The scope of reporting
   The coverage of information in terms of entities, area and range, and the interest shown by the recipient or the decision maker.

The scope of collection
   Internal to the organization, or external to it.

The time scale
   The information may relate to the past, the current or the future, and can cover the entire time span.

The relevance to decision making
   The information has relevance to a situation and to decision making; irrelevant information is merely data.

Completeness for the decision considerations
   Information which covers all aspects of the decision situation, by way of scope, transactions and period, is complete.

The timeliness of reporting
   The receipt of information on time, or when needed, is highly useful. Information arriving late loses its utility as it is outdated.
Redundancy is the repetition of parts of a message in order to circumvent distortions or transmission errors. Redundancy, therefore, is sometimes considered an essential feature to ensure that the information is received and digested.

In MIS, a limited redundancy of data and information is therefore inevitable. It should be used carefully, so that the reports are not crowded with information.
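The idea of redundancy as a guard against transmission errors can be sketched with a simple repetition code (an illustrative example, not part of the text): each message is transmitted several times and the receiver takes a majority vote per character position, so a single corrupted copy does not distort the information.

```python
from collections import Counter

def send_with_redundancy(message, copies=3):
    """Transmit the same message multiple times (deliberate redundancy)."""
    return [message] * copies

def receive_with_majority_vote(copies):
    """Recover the message by a majority vote on each character position."""
    return "".join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*copies)
    )

# Simulate one corrupted copy among three transmissions.
copies = send_with_redundancy("SALES REPORT")
copies[1] = "SALEX REPORT"                 # a single-character transmission error
print(receive_with_majority_vote(copies))  # SALES REPORT
```

The same trade-off the text describes appears here: the extra copies protect the message, but they triple the volume transmitted, which is why redundancy in MIS reports must stay limited.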

6.11 Internal versus external information

The information generated through the internal sources of the organization is termed as
internal information, while the information generated through the Government reports, the industry
surveys, etc. is termed as external information, as the sources of the data are outside the
organization.
The timely information, the recurring information and the internal information are the prime areas for computerization, and they contribute qualitatively to the MIS.
The timing and the accuracy of the action information are usually important. The mix of internal and external information changes with the level of the management decision: at the top management level the stress is more on the external information, while at the middle and operational management levels the stress is more on the internal information. Figure 7.2 shows the source and kind of information required vis-à-vis the level of management in the organization.

Source of          Organization          Structured
Information        Structure             Information
External           TOP MGT               Low
                   MIDDLE MGT
Internal           OPERATIONAL MGT       High

Fig. 7.2 Organization and Information

The information can also be classified in terms of its application.


Planning information

Certain standards, norms and specifications are used in the planning of any activity; such information is called planning information. Time standards, operational standards and design standards are examples of planning information.

Control information

Reporting the status of an activity through a feedback mechanism is called control information. When such information shows a deviation from the goal or the objective, it will induce a decision or an action leading to control.
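The feedback mechanism just described can be sketched as a simple deviation check (an illustrative sketch, not from the text; the `tolerance` threshold and the report fields are assumed for the example): the actual status is compared against the goal, and a deviation beyond tolerance flags the need for a control action.

```python
def control_report(activity, target, actual, tolerance=0.05):
    """Compare actual performance against the goal; flag a deviation
    that should induce a control decision or action."""
    deviation = (actual - target) / target
    return {
        "activity": activity,
        "deviation_pct": round(deviation * 100, 1),
        "action_needed": abs(deviation) > tolerance,
    }

report = control_report("monthly sales", target=100_000, actual=91_500)
print(report)
# {'activity': 'monthly sales', 'deviation_pct': -8.5, 'action_needed': True}
```

Only the reports where `action_needed` is true reach the decision maker, which is the exception-reporting behaviour the text attributes to control information.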

Knowledge information

A collection of information through the library reports and the research studies to build up a knowledge base as an information source for decision making is known as knowledge information. Such a collection is not directly connected to decision making, but the need of knowledge is perceived as a power or strength of the organization.

The information can also be classified based on its usage. When the information is used by
everybody in the organization, it is called the organization information. When the information has
a multiple use and application, it is called the database information. When the information is used
in the operations of a business it is called the functional or the operational information.

Employee and payroll information is organization information, used by a number of people in a number of ways. Material specifications or supplier information is database information, stored for multiple users; such information may need security or an access control. Information like sales or production statistics is functional information, meeting the operational needs of those functions.
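The usage-based classification above can be represented as a simple tagging scheme. This is an illustrative sketch: the three categories come from the text, while the enum and the catalogue structure are assumptions made for the example.

```python
from enum import Enum

class Usage(Enum):
    ORGANIZATION = "used by everybody in the organization"
    DATABASE = "multiple use and application; may need access control"
    FUNCTIONAL = "used in the operations of a business"

# Examples from the text, tagged by their usage class.
catalogue = {
    "employee and payroll information": Usage.ORGANIZATION,
    "material specifications": Usage.DATABASE,
    "supplier information": Usage.DATABASE,
    "sales statistics": Usage.FUNCTIONAL,
}

for item, usage in catalogue.items():
    print(f"{item}: {usage.name}")
```

Tagging each information entity this way makes it easy to attach the right policy, for instance applying access control to everything classified as database information.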
