
CHAPTER 1 : QUALITY CONCEPTS AND PRACTICES

1.1 INTRODUCTION
The concept of software quality is more complex than most people tend to believe, even though the term is familiar to laypeople and IT professionals alike. A dictionary typically defines quality as a set of characteristics that allows us to rank things as better or worse than other similar ones, and in many cases it associates the word with the idea of excellence. This idea of quality does little to help engineers improve results in their fields of activity. In the world of industrial quality in general, a transition from a rigid concept to an adaptive one took place many years ago. The prevailing view is now closer to the traditional idea of beauty: “it is in the eye of the beholder”. Absolute concepts are therefore rejected in favour of customer satisfaction as the main inspiration. For example, customers commonly use the following characteristics as indicators of “quality” (i.e. excellence):

Product nature
Reputation of raw materials
Manufacturing location
Manufacturing method
Point-of-sale standing (e.g., a sophisticated restaurant versus the usual pub)
Price
Results

To understand the landscape of software quality it is essential to answer the often-asked question: what is quality? Once the concept of quality is understood, it becomes easier to understand the different quality frameworks available on the market. As many prominent authors and researchers have already answered that question, we do not have the ambition of introducing yet another answer; instead, we answer the question by studying the answers that some of the more prominent gurus of the quality management community have provided. By learning from those who have gone down this path before us, we can identify two major camps when discussing the meaning and definition of (software) quality:

i) Conformance to specification: quality defined as a matter of products and services whose measurable characteristics satisfy a fixed specification – that is, conformance to a specification defined in advance.

ii) Meeting customer needs: quality identified independently of any measurable characteristics; that is, quality is defined as the capability of a product or service to meet customer expectations, whether explicit or not.

Quality software saves a significant amount of time and money. Because it has fewer defects, less time is spent in the testing and maintenance phases. Greater reliability contributes to higher customer satisfaction as well as lower maintenance costs, and because maintenance represents a large portion of all software costs, the overall cost of the project will most likely be lower than that of similar projects.

1.1.1 Definition of Quality


Quality is defined by International organizations as follows:

“Quality comprises all characteristics and significant features of a product or an activity which
relate to the satisfying of given requirements”. (German Industry Standard DIN 55350 Part 11)

“Quality is the totality of features and characteristics of a product or a service that bears on its ability to satisfy the given needs” (ANSI Standard ANSI/ASQC A3/1978).

High quality software usually conforms to the user requirements. A customer’s idea of quality may cover a breadth of features: conformance to specifications, good performance on the target platform(s) and configurations, complete satisfaction of operational requirements (even if not specified!), compatibility with all the end-user equipment, no negative impact on the existing end-user base at introduction time, and so on.
1.2 COST OF QUALITY
In recent years organizations have been focusing much attention on quality management. There are many different aspects of quality management, but this section focuses on the cost of quality. The costs associated with quality are divided into two categories: costs due to poor quality and costs associated with improving quality. Prevention costs and appraisal costs are costs associated with improving quality, while failure costs result from poor quality. Management must understand these costs to create a quality improvement strategy. An organization’s main goal is to survive while maintaining high-quality goods or services; with a comprehensive understanding of the costs related to quality, this goal can be achieved.

Quality costs are defined as the summation of all quality-related costs over the life of a product. Customers prefer products or services with high quality and a reasonable price. To ensure that customers will receive a product or service that is worth the money they will spend, firms should invest in prevention and appraisal. Prevention costs are associated with preventing defects and imperfections from occurring. Consider the Johnson and Johnson (J&J) safety seals that appear on all of their products with the message, “if this safety seal is open do not use.” This is a preventive measure because, in the overall analysis, it is less costly to add the safety seals during production than to undergo a possible cyanide scare. The focus of a prevention cost is to assure quality and minimize or avoid the likelihood of an event with an adverse impact on the company’s goods, services or daily operations. This also includes the cost of establishing a quality system. A quality system should include the following three elements: training, process engineering, and quality planning. Quality planning means establishing a production process in conformance with design specifications and procedures, and designing the proper test procedures and equipment. Consider establishing training programs for employees to keep them current on emerging technologies, such as updated computer languages and programs.

Appraisal costs are direct costs of measuring quality. In this case, quality is defined as the
conformance to customer expectations. This includes: lab testing, inspection, test equipment and
materials, and costs associated with assessment for ISO 9000 or other quality awards. A common example of appraisal costs is the expense of inspections. An organization should inspect its products, and incoming goods from suppliers, before they reach the customer. This is also known as acceptance sampling, a technique used to verify that
products meet quality standards.
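As a rough illustration of acceptance sampling, the sketch below draws a random sample from an incoming lot and accepts the lot only if the number of defectives found stays within an acceptance number. The plan parameters and the defect rate are made-up values, not taken from any published sampling table.

```python
import random

def inspect_lot(lot, sample_size=50, acceptance_number=2):
    """Single-sampling acceptance plan: draw a random sample from the lot
    and accept it if the defectives found do not exceed the threshold.
    The plan parameters here are illustrative, not from a published table."""
    sample = random.sample(lot, min(sample_size, len(lot)))
    defects_found = sum(1 for item in sample if item["defective"])
    return defects_found <= acceptance_number

# Hypothetical incoming lot: 1,000 items with a 3% defect rate.
lot = [{"id": i, "defective": random.random() < 0.03} for i in range(1000)]
print("Lot accepted:", inspect_lot(lot))
```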

Failure costs are separated into two different categories: internal and external. Internal failure costs are expenses incurred when defects are found before the product reaches the customer, for example on the production line; they include the cost of troubleshooting and the loss of production resulting from idle time, whether of manpower or of the production process itself. External failure costs are associated with product failure after the completion of the production process. An excellent example of external failure costs is the J&J cyanide scare. The company incurred expenses in response to customer fears of tampering with purchased J&J products. However, J&J managed to survive the incident, in part because of its method of corrective action.

Understanding the cost of quality is extremely important in establishing a quality management strategy. After defining the three major costs of quality and discussing their application, we can examine how they affect an organization. The more an organization invests in preventive measures, the more it is able to reduce failure costs. Furthermore, an investment in quality improvement benefits the company's image, performance and growth. This is summed up by the Ludvall-Juran quality cost model, which applies the law of diminishing returns to these costs. The model shows that prevention and appraisal costs have a direct relationship with quality conformance, meaning they increase as quality conformance increases, while quality conformance has an inverse relationship with failure costs, meaning that as quality conformance increases, failure costs decrease. Understanding these relationships and applying the cost-of-quality process enables an organization to decrease failure costs and ensure that its products and services continue to meet customer expectations. Some companies that have achieved this goal include Neiman-Marcus, Rolex, and Lexus.
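The shape of this trade-off can be sketched numerically. The cost functions below are invented purely for illustration (they are not empirical data and are not taken from Juran's writings); the point is that prevention and appraisal spending rises as conformance approaches 100%, failure costs fall, and the total cost of quality reaches a minimum somewhere in between.

```python
def cost_of_quality(conformance):
    """Toy cost model (illustrative numbers, not empirical data).
    conformance: fraction of output conforming to requirements, 0 <= c < 1.
    Prevention/appraisal costs grow as conformance approaches 100%;
    failure costs shrink as conformance improves."""
    prevention_appraisal = 10 / (1 - conformance)   # rises steeply near 100%
    failure = 500 * (1 - conformance)               # falls as quality improves
    return prevention_appraisal + failure

# Scan conformance levels to find the lowest total cost of quality.
levels = [round(0.80 + 0.01 * i, 2) for i in range(19)]  # 0.80 .. 0.98
for c in levels:
    print(f"conformance {c:.2f}: total cost {cost_of_quality(c):7.1f}")
best = min(levels, key=cost_of_quality)
print("Lowest total cost of quality at conformance", best)
```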

Philip Crosby states that quality is free. As discussed, the costs related to achieving quality are traded off between the prevention and appraisal costs and the failure costs. Therefore, spending on prevention and appraisal to improve quality allows an organization to minimize, or be free of, the failure costs that result from poor quality. In summation, understanding the cost of quality helps companies to develop quality conformance as a useful strategic business tool that improves their products, services and image. This leverage is vital in achieving the goals and mission of a successful organization.
1.3 TOTAL QUALITY MANAGEMENT
Total Quality Management is a management approach that originated in the 1950s and has steadily become more popular since the early 1980s. Total Quality is a description of the culture,
attitude and organization of a company that strives to provide customers with products and
services that satisfy their needs. The culture requires quality in all aspects of the company's
operations, with processes being done right the first time and defects and waste eradicated from
operations.

Total Quality Management, TQM, is a method by which management and employees can
become involved in the continuous improvement of the production of goods and services. It is a
combination of quality and management tools aimed at increasing business and reducing losses
due to wasteful practices.

Some of the companies who have implemented TQM include Ford Motor Company, Phillips
Semiconductor, SGL Carbon, Motorola and Toyota Motor Company.

1.3.1 TQM Definition


“TQM is a management philosophy that seeks to integrate all organizational functions
(marketing, finance, design, engineering, production, customer service, etc.) to focus on
meeting customer needs and organizational objectives”.

TQM views an organization as a collection of processes. It maintains that organizations must


strive to continuously improve these processes by incorporating the knowledge and experiences
of workers. The simple objective of TQM is "Do the right things, right the first time, every
time". TQM is infinitely variable and adaptable. Although originally applied to manufacturing
operations, and for a number of years only used in that area, TQM is now becoming recognized
as a generic management tool, just as applicable in service and public sector organizations. There
are a number of evolutionary strands, with different sectors creating their own versions from the
common ancestor. TQM is the foundation for activities, which include:

Commitment by senior management and all employees


Meeting customer requirements
Reducing development cycle times
Just In Time/Demand Flow Manufacturing
Improvement teams
Reducing product and service costs
Systems to facilitate improvement
Line Management ownership
Employee involvement and empowerment
Recognition and celebration
Challenging quantified goals and benchmarking
Focus on processes / improvement plans
Specific incorporation in strategic planning

This shows that TQM must be practiced in all activities, by all personnel, in Manufacturing,
Marketing, Engineering, R&D, Sales, Purchasing, HR, etc.

Figure 1.1: TQM Interface


The core of TQM is the customer-supplier interfaces, both externally and internally, and at each
interface lie a number of processes. This core must be surrounded by commitment to quality,
communication of the quality message, and recognition of the need to change the culture of the
organization to create total quality. These are the foundations of TQM, and they are supported by
the key management functions of people, processes and systems in the organization.

1.3.2 Principles of TQM


The key principles of TQM are as following:

Management Commitment
Plan (drive, direct)

Do (deploy, support, participate)

Check (review)

Act (recognize, communicate, revise)

Employee Empowerment
Training

Suggestion scheme

Measurement and recognition

Excellence teams

Fact Based Decision Making


SPC (statistical process control)

DOE, FMEA

The 7 statistical tools

TOPS (FORD 8D - Team Oriented Problem Solving)

Continuous Improvement
Systematic measurement and focus on CONQ

Excellence teams

Cross-functional process management

Attain, maintain, improve standards

Customer Focus
Supplier partnership

Service relationship with internal customers

Never compromise quality

Customer driven standards

1.3.3 The Concept of Continuous Improvement by TQM


TQM is mainly concerned with continuous improvement in all work, from high level strategic
planning and decision-making, to detailed execution of work elements on the shop floor. It stems
from the belief that mistakes can be avoided and defects can be prevented. It leads to continuously improving results in all aspects of work, as a result of continuously improving the capabilities of people, processes, technology and machines.

Continuous improvement must deal not only with improving results, but more importantly with
improving capabilities to produce better results in the future. The five major areas of focus for
capability improvement are demand generation, supply generation, technology, operations and
people capability.

A central principle of TQM is that mistakes may be made by people, but most of them are
caused, or at least permitted, by faulty systems and processes. This means that the root cause of
such mistakes can be identified and eliminated, and repetition can be prevented by changing the
process.
There are three major mechanisms of prevention:

i. Preventing mistakes (defects) from occurring (Mistake - proofing or Poka-Yoke).


ii. Where mistakes can't be absolutely prevented, detecting them early to prevent them being
passed down the value added chain (Inspection at source or by the next operation).
iii. Where mistakes recur, stopping production until the process can be corrected, to prevent
the production of more defects. (Stop in time).

The basis for TQM implementation is the establishment of a quality management system which
involves the organizational structure, responsibilities, procedures and processes. The most
frequently used guidelines for quality management systems are the ISO 9000 international
standards, which emphasize the establishment of a well- documented, standardized quality
system. The role of the ISO 9000 standards within the TQM circle of continuous improvement is
presented in the following figure.

Figure 1.2: Role of ISO 9000

Continuous improvement is a circular process that links the diagnostic, planning, implementation
and evaluation phases. Within this circular process, the ISO 9000 standards are commonly
applied in the implementation phase. An ISO 9000 quality system also requires the establishment
of procedures that standardize the way an organization handles the diagnostic and evaluation
phases. However, the ISO 9000 standards do not prescribe particular quality management
techniques or quality-control methods. Because it is a generic organizational standard, ISO 9000
does not define quality or provide any specifications of products or processes. ISO 9000
certification only assures that the organization has in place a well-operated quality system that
conforms to the ISO 9000 standards. Consequently, an organization may be certified but still
manufacture poor-quality products.

1.3.4 Implementation Principles and Processes of TQM


A preliminary step in TQM implementation is to assess the organization's current reality.
Relevant preconditions have to do with the organization's history, its current needs, precipitating
events leading to TQM, and the existing employee quality of working life. If the current reality
does not include important preconditions, TQM implementation should be delayed until the
organization is in a state in which TQM is likely to succeed.

If an organization has a track record of effective responsiveness to the environment, and if it has
been able to successfully change the way it operates when needed, TQM will be easier to
implement. If an organization has been historically reactive and has no skill at improving its
operating systems, there will be both employee skepticism and a lack of skilled change agents. If
this condition prevails, a comprehensive program of management and leadership development
may be instituted. A management audit is a good assessment tool to identify current levels of
organizational functioning and areas in need of change. An organization should be basically
healthy before beginning TQM. If it has significant problems such as a very unstable funding
base, weak administrative systems, lack of managerial skill, or poor employee morale, TQM
would not be appropriate.

However, a certain level of stress is probably desirable to initiate TQM. People need to feel a
need for a change. Kanter (1983) addresses this phenomenon by describing the building blocks that are present in effective organizational change. These forces include departures from
tradition, a crisis or galvanizing event, strategic decisions, individual "prime movers," and action
vehicles. Departures from tradition are activities, usually at lower levels of the organization,
which occur when entrepreneurs move outside the normal ways of operating to solve a problem.
A crisis, if it is not too disabling, can also help create a sense of urgency which can mobilize
people to act. In the case of TQM, this may be a funding cut or threat, or demands from
consumers or other stakeholders for improved quality of service. After a crisis, a leader may
intervene strategically by articulating a new vision of the future to help the organization deal
with it.

A plan to implement TQM may be such a strategic decision. Such a leader may then become a
prime mover, who takes charge in championing the new idea and showing others how it will help
them get where they want to go. Finally, action vehicles are needed: mechanisms or structures that enable the change to occur and become institutionalized.

1.3.5 The building blocks of TQM


Everything we do is a Process: the transformation of a set of inputs, which can include actions, methods and operations, into the desired outputs, which satisfy the customers’ needs and
expectations. In each area or function within an organization there will be many processes taking
place, and each can be analyzed by an examination of the inputs and outputs to determine the
action necessary to improve quality. In every organization there are some very large processes,
which are groups of smaller processes, called key or core business processes. These must be
carried out well if an organization is to achieve its mission and objectives. The section on
Processes discusses processes and how to improve them, and Implementation covers how to
prioritize and select the right process for improvement.
Figure 1.3: The TQM blocks

The only point at which true responsibility for performance and quality can lie is with the people
who actually do the job or carry out the process, each of which has one or several suppliers and
customers.

An efficient and effective way to tackle process or quality improvement is through teamwork.
However, people will not engage in improvement activities without commitment and recognition
from the organization’s leaders, a climate for improvement and a strategy that is implemented
thoughtfully and effectively. The section on People expands on these issues, covering roles
within teams, team selection and development and models for successful teamwork.

An appropriate documented Quality Management System will help an organization not only
achieve the objectives set out in its policy and strategy, but also, and equally importantly, sustain
and build upon them. It is imperative that the leaders take responsibility for the adoption and
documentation of an appropriate management system in their organization if they are serious
about the quality journey. The Systems section discusses the benefits of having such a system,
how to set one up and successfully implement it.

Once the strategic direction for the organization’s quality journey has been set, it needs
Performance Measures to monitor and control the journey, and to ensure the desired level of
performance is being achieved and sustained. They can, and should, be established at all levels in the organization, ideally cascaded down, and are most effectively undertaken as team activities; this is discussed in the section on Performance.

1.4 APPROACHES TO QUALITY


Organizations have continually looked for new ways to improve consistency and quality in their
products and services. Management fads may come and go but many of the underlying ideas
around quality remain the same. Here's how the works of Deming, Juran and Crosby remain at
the heart of quality approaches like TQM and Six Sigma.

1.4.1 TQM Approach


TQM was described in detail in section 1.3; this sub-section presents it in the context of approaches to quality. Deming's views on quality are believed by many to have laid the foundations for Total Quality Management (TQM); however, the works of Feigenbaum, Ishikawa and Imai have also had an impact.

TQM focuses on achieving quality through engraining the philosophy within an organization,
although it does not form a system or a set of tools through which to achieve this. Companies
adopting a TQM philosophy should see their competitiveness increase, establish a culture of
growth, offer a productive and successful working environment, cut stress and waste and build
teams and partnerships.

The principles of TQM have been laid out in the ISO 9000 family of standards from the
International Organization for Standardization. Adopted by over one million companies in 176
countries worldwide, the standards lay down the requirements of a quality management system,
but not how these should be met.

Eight principles make up the ISO 9000 standards. These are:

i. Organizations should be consumer focused by understanding their needs and meeting


their requirements
ii. Strong leadership should ensure the organization understands its purpose and direction
iii. People at all levels should be involved in the quality process for the organization to reap
the greatest benefit
iv. A process approach should be taken to activities and any related resources
v. Interrelated processes should be identified as a system to boost efficiency in meeting
objectives
vi. Organizations should strive for continual improvement
vii. Decisions should be based on factual information
viii. A mutually beneficial relationship should be created between organizations and suppliers
But standards alone are often not enough for companies to reach their quality goals, hence the development of more structured processes like Six Sigma.

1.4.2 Six Sigma


Whereas TQM is a philosophy of quality, Six Sigma is a definitive measurement of quality, or at least that is how it started. Motorola pioneered Six Sigma over two decades ago, and in that time it has evolved from a simple metric (3.4 defects per one million opportunities, originally applied mainly to manufacturing) into a methodology and management system adopted by numerous business sectors. By aiming for 3.4 defects per million rather than zero, it diverges from the zero-defects model proposed by Crosby, which many see as unattainable and in some cases demotivating.
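The metric itself is simple arithmetic: count defects, divide by the total number of defect opportunities, and scale to one million. The sketch below uses made-up figures; the function name and the counts are illustrative, not taken from any Motorola material.

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities (DPMO), the basic Six Sigma metric."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Hypothetical inspection results: 57 defects across 12,500 units,
# each unit offering 8 opportunities for a defect.
print(f"DPMO = {dpmo(57, 12_500, 8):.1f}")
# About 570 DPMO, which the usual conversion tables place between a 4.5 and
# 5 sigma process; a six sigma process would sit at roughly 3.4 DPMO.
```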

As Deming said in his 14 principles of quality management, companies should “eliminate slogans, exhortations, and targets for the workforce asking for zero defects and new levels of productivity. Such exhortations only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force.” Sitting at the heart of the Six Sigma philosophy is the DMAIC model for process improvement: define opportunity, measure performance, analyse opportunity, improve performance, control performance.

Alternatively the DMADV (define, measure, analyse, design, verify) system is used for the
creation of new processes which fit with the Six Sigma principles. Motorola believes that even combining the methodology and the metric is "still not enough to drive desired breakthrough improvements and results that are sustainable over time", and therefore advocates the use of the Six Sigma management system, which aligns management strategy with improvement efforts. Companies which have successfully implemented Six Sigma, such as GE, have reported savings running into millions of dollars, and Six Sigma is now being combined with lean manufacturing processes to great effect.

But it is highly unlikely that any of these interpretations represents the end goal for quality management, which, as the methodologies themselves teach, must always strive for continuous improvement.

1.5 SUMMARY
Quality plays a very important role in every aspect of software development and is key to the successful implementation of software. As an attribute of an item, quality refers to
measurable characteristics - things we are able to compare to known standards such as length,
color, electrical properties, and malleability. However, software, largely an intellectual entity, is
more challenging to characterize than physical objects. Nevertheless, measures of a program’s
characteristics do exist. These properties include cyclomatic complexity, cohesion, number of
function points, lines of code, and many others. When we examine an item based on its
measurable characteristics, two kinds of quality may be encountered: quality of design and
quality of conformance. TQM encourages participation amongst shop floor workers and
managers. TQM is an approach to improving the competitiveness, effectiveness and flexibility of
an organization for the benefit of all stakeholders. It is a way of planning, organizing and
understanding each activity, and of removing all the wasted effort and energy that is routinely
spent in organizations. It ensures the leaders adopt a strategic overview of quality and focus on
prevention not detection of problems. All senior managers must demonstrate their seriousness
and commitment to quality, and middle managers must, as well as demonstrating their
commitment, ensure they communicate the principles, strategies and benefits to the people for
whom they have responsibility. Only then will the right attitudes spread throughout the
organization.
Assignment-Module 1

1. Quality is __________

a. Conformance to specification
b. Meeting customer needs
c. Both of them
d. None of them

2. The __________ model shows the direct relationship between prevention and appraisal costs and quality conformance.

a. Waterfall
b. Spiral
c. Ludvall-Juran
d. None of the above

3. __________ states that quality is __________.

a. Philip Crosby, free


b. Stalling, expensive
c. Dromey, conformance
d. Lexus, failure

4. The objective of TQM is __________.

a. Do the right things, right the first time, every time


b. Do the right time, right the first things, every things
c. Do the right time, right the first things, every right
d. None of the above
5. An __________ quality system also requires the establishment of procedures that
standardize the way an organization handles the diagnostic and evaluation phases.

a. ISO/IEC 9126
b. ISO 9001
c. IEEE
d. ISO 9000

6. “Mistakes may be made by people, but most of them are caused, or at least permitted, by faulty systems and processes” is a principle of __________.

a. Quality
b. TQM
c. Six Sigma
d. ISO 9000

7. The principles of TQM have been laid out in a set of __________ principles that make up the __________ standards.

a. Six, ISO 9000


b. Two, ISO 9126
c. Eight, ISO 9001
d. Eight, ISO 9000

8. TQM is a __________ of quality and Six Sigma is a __________ of quality.

a. Philosophy, definitive measurement


b. Conformance, requirements
c. Measurement, performance
d. None of them
9. Deming suggested ___________ principles of quality management.

a. Ten
b. Six
c. Three
d. Fourteen

10. At the heart of the Six Sigma philosophy is the ___________ model for process improvement.

a. DMAIC
b. ISO 9126
c. McCall
d. ISO 9000
Key - Module 1

1. c
2. c
3. a
4. a
5. d
6. b
7. d
8. a
9. d
10. a
CHAPTER 2 : SOFTWARE QUALITY

2.1 SOFTWARE DEVELOPMENT PROCESS


A large and growing number of software development organizations implement process
methodologies. Many of them are in the defense industry, which in the U.S. requires a rating
based on 'process models' to obtain contracts. The international standard for describing the
method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207.
A decades-long goal has been to find repeatable, predictable processes that improve productivity
and quality. Some try to systematize or formalize the seemingly unruly task of writing software.
Others apply project management techniques to writing software. Without project management,
software projects can easily be delivered late or over budget. With large numbers of software
projects not meeting their expectations in terms of functionality, cost, or delivery schedule,
effective project management appears to be lacking. Organizations may create a Software
Engineering Process Group (SEPG), which is the focal point for process improvement.
Composed of line practitioners who have varied skills, the group is at the center of the
collaborative effort of everyone in the organization who is involved with software engineering
process improvement.

2.1.1 System/Information Engineering and Modeling


As software is always part of a larger system (or business), work begins by establishing the
requirements for all system elements and then allocating some subset of these requirements to
software. This system view is essential when the software must interface with other elements
such as hardware, people and other resources. System is the basic and very critical requirement
for the existence of software in any entity. So if the system is not in place, the system should be
engineered and put in place. In some cases, to extract the maximum output, the system should be
re-engineered and spruced up. Once the ideal system is engineered or tuned, the development
team studies the software requirement for the system.
2.1.2 Software Development Life Cycle
A software development process, also known as a software development life cycle (SDLC), is a structure imposed on the development of a software product. Similar terms include
software life cycle and software process. There are several models for such processes, each
describing approaches to a variety of tasks or activities that take place during the process. Some
people consider a life-cycle model a more general term and a software development process a
more specific term. For example, there are many specific software development processes that
'fit' the spiral life-cycle model. ISO/IEC 12207 is an international standard for software life-cycle
processes. It aims to be the standard that defines all the tasks required for developing and
maintaining software.

2.1.3 Processes
More and more software development organizations implement process methodologies. The
Capability Maturity Model (CMM) is one of the leading models. Independent assessments can be
used to grade organizations on how well they create software according to how they define and
execute their processes. There are dozens of others, with other popular ones being ISO 9000, ISO
15504, and Six Sigma. There are several models for such processes, each describing approaches
to a variety of tasks or activities that take place during the process.

2.1.4 Software development activities


The activities of the software development process can be represented in the form of a waterfall model. There are several other models to represent this process.

2.1.5 Process Activities/Steps


Software Engineering processes are composed of many activities, notably the following:

2.1.5.1 System/Information Engineering and Modeling

As software is always part of a larger system (or business), work begins by establishing the
requirements for all system elements and then allocating some subset of these requirements to
software. This system view is essential when the software must interface with other elements
such as hardware, people and other resources. System is the basic and very critical requirement
for the existence of software in any entity. So if the system is not in place, the system should be
engineered and put in place. In some cases, to extract the maximum output, the system should be
re-engineered and spruced up. Once the ideal system is engineered or tuned, the development
team studies the software requirement for the system.

2.1.5.2 Requirements Analysis

Extracting the requirements of a desired software product is the first task in creating it. While
customers probably believe they know what the software is to do, it may require skill and
experience in software engineering to recognize incomplete, ambiguous or contradictory
requirements. Customers typically have an abstract idea of what they want as an end result, but
not what software should do. Skilled and experienced software engineers recognize incomplete,
ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code
may help reduce the risk that the requirements are incorrect.

Once the general requirements are gathered from the client, an analysis of the scope of the
development should be determined and clearly stated. This is often called a scope document.
Certain functionality may be out of scope of the project as a function of cost or as a result of
unclear requirements at the start of development. If the development is done externally, this
document can be considered a legal document so that if there are ever disputes, any ambiguity of
what was promised to the client can be clarified.

2.1.5.3 Specification

Specification is the task of precisely describing the software to be written, in a mathematically


rigorous way. In practice, most successful specifications are written to understand and fine-tune
applications that were already well-developed, although safety-critical software systems are
often carefully specified prior to application development. Specifications are most important for
external interfaces that must remain stable.

2.1.5.4 Software architecture

The architecture of a software system refers to an abstract representation of that system.


Architecture is concerned with making sure the software system will meet the requirements of
the product, as well as ensuring that future requirements can be addressed.

2.1.5.5 Implementation

Reducing a design to code may be the most obvious part of the software engineering job, but it is
not necessarily the largest portion.

2.1.5.6 Testing

Testing of parts of software, especially where code by two different engineers must work
together, falls to the software engineer. Different testing methodologies are available to unravel
the bugs that were committed during the previous phases. Different testing tools and
methodologies are already available. Some companies build their own testing tools that are tailor
made for their own development operations.
2.1.5.7 Documentation

An important task is documenting the internal design of software for the purpose of future
maintenance and enhancement. This may also include the writing of an API, be it external or
internal. The software engineering process chosen by the developing team will determine how
much internal documentation (if any) is necessary. Plan-driven models (e.g., Waterfall) generally
produce more documentation than agile models.

2.1.5.8 Training and Support

A large percentage of software projects fail because the developers fail to realize that it doesn't
matter how much time and planning a development team puts into creating software if nobody in
an organization ends up using it. People are occasionally resistant to change and avoid venturing into an unfamiliar area, so as part of the deployment phase it is very important to hold training classes for the most enthusiastic software users (to build excitement and confidence), then shift the training towards the neutral users intermixed with the avid supporters, and finally to incorporate the rest of the organization into adopting the new software. Users will have lots of questions and software problems, which leads to the next phase of the software life cycle.

2.1.5.9 Maintenance

Maintaining and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. The software
will definitely undergo change once it is delivered to the customer. There can be many reasons
for this change to occur. Change could happen because of some unexpected input values into the
system. In addition, the changes in the system could directly affect the software operations. The
software should be developed to accommodate changes that could happen during the post
implementation period.

Not only may it be necessary to add code that does not fit the original design but just determining
how software works at some point after it is completed may require significant effort by a
software engineer. About 60% of all software engineering work is maintenance, but this statistic
can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to
do new things, which in many ways can be considered new work.

2.2 SOFTWARE DEVELOPMENT MODELS OR PROCESS MODELS
A decades-long goal has been to find repeatable, predictable processes or methodologies that
improve productivity and quality. Some try to systematize or formalize the seemingly unruly task
of writing software. Others apply project management techniques to writing software. Without
project management, software projects can easily be delivered late or over budget. With large
numbers of software projects not meeting their expectations in terms of functionality, cost, or
delivery schedule, effective project management is proving difficult. Several models exist to
streamline the development process. Each one has its pros and cons, and it's up to the
development team to adopt the most appropriate one for the project. Sometimes a combination of
the models may be more suitable.

2.2.1 Waterfall Model


The best-known and oldest process is the waterfall model, where developers follow these steps in
order. They state requirements, analyze them, design a solution approach, architect a software
framework for that solution, develop code, test, deploy, and maintain. These steps are described
in detail in section 2.1. After each step is finished, the process proceeds to the next step. The
waterfall model shows a process, where developers are to follow these phases in order:

i. Requirements specification (Requirements analysis)


ii. Software design
iii. Implementation and Integration
iv. Testing (or Validation)
v. Deployment (or Installation)
vi. Maintenance
In a strict Waterfall model, after each phase is finished, it proceeds to the next one. Reviews may
occur before moving to the next phase which allows for the possibility of changes (which may
involve a formal change control process). Reviews may also be employed to ensure that the
phase is indeed complete; the phase completion criteria are often referred to as a "gate" that the
project must pass through to move to the next phase. Waterfall discourages revisiting and
revising any prior phase once it's complete. This "inflexibility" in a pure Waterfall model has
been a source of criticism by supporters of other more "flexible" models.
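The gate idea can be sketched as a simple ordered pipeline in which each phase must pass its review before the next one starts. The phase names follow the list above; the review callables are placeholders standing in for real completion criteria.

```python
# Minimal sketch of phase gates in a strict waterfall: each phase must pass
# its completion criteria (a review) before the project moves on.
PHASES = [
    "Requirements specification",
    "Software design",
    "Implementation and integration",
    "Testing",
    "Deployment",
    "Maintenance",
]

def run_waterfall(gate_reviews):
    """gate_reviews maps a phase name to a callable returning True when the
    phase's completion criteria are met (placeholder reviews here)."""
    for phase in PHASES:
        print(f"Entering phase: {phase}")
        if not gate_reviews.get(phase, lambda: True)():
            print(f"Gate failed at '{phase}'; work halts until criteria are met.")
            return False
    return True

# Illustrative run in which the design review fails.
run_waterfall({"Software design": lambda: False})
```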

2.2.2 Prototyping Model


This is a cyclic version of the linear model. In this model, once the requirement analysis is done
and the design for a prototype is made, the development process gets started. Once the prototype
is created, it is given to the customer for evaluation. The customer tests the package and gives
his/her feedback to the developer, who refines the product according to the customer’s exact
expectation. After a finite number of iterations, the final software package is given to the
customer. In this methodology, the software is evolved as a result of periodic shuttling of
information between the customer and developer. This is the most popular development model in
the contemporary IT industry. Most of the successful software products have been developed
using this model – as it is very difficult (even for a whiz kid!) to comprehend all the
requirements of a customer in one shot. There are many variations of this model skewed with
respect to the project management styles of the companies. New versions of a software product
evolve as a result of prototyping.

2.2.3 Spiral model


The key characteristic of a Spiral model is risk management at regular stages in the development
cycle. In 1988, Barry Boehm published a formal software system development "spiral model,"
which combines some key aspects of the waterfall model and rapid prototyping methodologies,
but provided emphasis in a key area many felt had been neglected by other methodologies:
deliberate iterative risk analysis, particularly suited to large-scale complex systems.
The Spiral is visualized as a process passing through some number of iterations, with the four
quadrant diagram representative of the following activities:

i. Formulate plans: identify software targets, select alternatives for implementing the program, and clarify the project's development constraints;
ii. Risk analysis: analytically assess the selected alternatives, and consider how to identify and eliminate risks;
iii. Implementation of the project: carry out software development and verification;
iv. Evaluation and planning: evaluate the results of the iteration and plan the next phase of the spiral.

Because the spiral model is risk-driven and emphasizes options and constraints, it supports software reuse and helps make software quality a specific goal integrated into product development. However, the spiral model has some restrictive conditions, as follows:

i. The spiral model emphasizes risk analysis, and thus requires customers to accept this
analysis and act on it. This requires both trust in the developer as well as the willingness
to spend more to fix the issues, which is the reason why this model is often used for
large-scale internal software development.
ii. If the implementation of risk analysis will greatly affect the profits of the project, the
spiral model should not be used.
iii. Software developers have to actively look for possible risks and analyze them accurately for the spiral model to work.

The first stage is to formulate a plan to achieve the objectives with these constraints, and then
strive to find and remove all potential risks through careful analysis and, if necessary, by
constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether
to terminate the project or to ignore the risks and continue anyway. Finally, the results are
evaluated and the design of the next phase begins.
2.2.4 Strengths and Weaknesses of the Waterfall, Prototype and Spiral Models

(i) Waterfall Model

Strengths:
• Emphasizes completion of one phase before moving on
• Emphasizes early planning, customer input, and design
• Emphasizes testing as an integral part of the life cycle
• Provides quality gates at each life cycle phase

Weaknesses:
• Depends on capturing and freezing requirements early in the life cycle
• Depends on separating requirements from design
• Feedback is only from the testing phase to any previous stage
• Not feasible in some organizations
• Emphasizes products rather than processes

(ii) Prototyping Model

Strengths:
• Requirements can be set earlier and more reliably
• Requirements can be communicated more clearly and completely between developers and clients
• Requirements and design options can be investigated quickly and with low cost
• More requirements and design faults are caught early

Weaknesses:
• Requires a prototyping tool and expertise in using it – a cost for the development organization
• The prototype may become the production system

(iii) Spiral Model

Strengths:
• Promotes reuse of existing software in early stages of development
• Allows quality objectives to be formulated during development
• Provides preparation for eventual evolution of the software product
• Eliminates errors and unattractive alternatives early
• Balances resource expenditure
• Does not involve separate approaches for software development and software maintenance
• Provides a viable framework for integrated hardware-software system development

Weaknesses:
• The process needs, or is usually associated with, Rapid Application Development, which is difficult in practice
• The process is more difficult to manage and needs a very different approach from the waterfall model (the waterfall model has management techniques such as GANTT charts to assess progress)

2.2.5 Iterative processes


Iterative development prescribes the construction of initially small but ever larger portions of a
software project to help all those involved to uncover important issues early before problems or
faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.
Agile software development processes are built on the foundation of iterative development. To
that foundation they add a lighter, more people-centric viewpoint than traditional approaches.
Agile processes use feedback, rather than planning, as their primary control mechanism. The
feedback is driven by regular tests and releases of the evolving software.

Agile processes seem to be more efficient than older methodologies, using less programmer time
to produce more functional, higher quality software, but have the drawback from a business
perspective that they do not provide long-term planning capability. In essence, they say that they
will provide the most bang for the buck, but won't say exactly when that bang will be.

Extreme Programming, XP, is the best-known agile process. In XP, the phases are carried out in
extremely small (or "continuous") steps compared to the older, "batch" processes. The
(intentionally incomplete) first pass through the steps might take a day or a week, rather than the
months or years of each complete step in the Waterfall model. First, one writes automated tests,
to provide concrete goals for development. Next is coding (by a pair of programmers), which is
complete when all the tests pass, and the programmers can't think of any more tests that are
needed. Design and architecture emerge out of refactoring, and come after coding. Design is
done by the same people who do the coding. The incomplete but functional system is deployed
or demonstrated for the users (at least one of which is on the development team). At this point,
the practitioners start again on writing tests for the next most important part of the system.
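The test-first step can be illustrated with a tiny, self-contained example. The function under test and its expected behaviour are invented for illustration; the point is only the order of events: the automated test is written first, fails, and the slice of development is complete once it passes.

```python
import unittest

# Step 1 (write the test first): this test existed before shopping_total() did,
# and defines the concrete goal for the next small slice of development.
class ShoppingTotalTest(unittest.TestCase):
    def test_total_applies_percentage_discount(self):
        self.assertEqual(shopping_total([100.0, 50.0], discount=0.10), 135.0)

# Step 2 (write just enough code to make the test pass).
def shopping_total(prices, discount=0.0):
    return round(sum(prices) * (1 - discount), 2)

if __name__ == "__main__":
    unittest.main()
```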

While Iterative development approaches have their advantages, software architects are still faced
with the challenge of creating a reliable foundation upon which to develop. Such a foundation
often requires a fair amount of upfront analysis and prototyping to build a development model.
The development model often relies upon specific design patterns and entity relationship
diagrams (ERD). Without this upfront foundation, Iterative development can create long term
challenges that are significant in terms of cost and quality.

Critics of iterative development approaches point out that these processes place what may be an
unreasonable expectation upon the recipient of the software: that they must possess the skills and
experience of a seasoned software developer. The approach can also be very expensive, akin to...
"If you don't know what kind of house you want, let me build you one and see if you like it. If
you don't, we'll tear it all down and start over." A large pile of building-materials, which are now
scrap, can be the final result of such a lack of up-front discipline. The problem with this criticism
is that the whole point of iterative programming is that you don't have to build the whole house
before you get feedback from the recipient. Indeed, in a sense conventional programming places
more of this burden on the recipient, as the requirements and planning phases take place entirely
before the development begins, and testing only occurs after development is officially over.

2.2.6 Rapid Application Development (RAD) Model


The RAD model is a linear sequential software development process that emphasizes an
extremely short development cycle. The RAD model is a “high speed” adaptation of the linear
sequential model in which rapid development is achieved by using a component-based
construction approach. Used primarily for information systems applications, the RAD approach
encompasses the following phases:

(i) Business modeling

The information flow among business functions is modeled in a way that answers the following
questions:

What information drives the business process?

What information is generated?

Who generates it?

Where does the information go?

Who processes it?


(ii) Data modeling

The information flow defined as part of the business modeling phase is refined into a set of data
objects that are needed to support the business. The characteristics (called attributes) of each object are identified and the relationships between these objects are defined.
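The outcome of data modeling can be sketched with two hypothetical data objects, their attributes, and the one-to-many relationship between them (all names here are invented for illustration).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    # Attributes identified for the hypothetical 'Order' data object.
    order_id: int
    amount: float

@dataclass
class Customer:
    # Attributes identified for the hypothetical 'Customer' data object.
    customer_id: int
    name: str
    orders: List[Order] = field(default_factory=list)  # one-to-many relationship

# One customer related to two of their orders.
alice = Customer(1, "Alice", [Order(101, 250.0), Order(102, 75.5)])
print(alice.name, "has", len(alice.orders), "orders")
```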

(iii) Process modeling

The data objects defined in the data-modeling phase are transformed to achieve the information
flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

(iv) Application generation

The RAD model assumes the use of RAD tools such as VB, VC++, Delphi, etc., rather than
creating software using conventional third generation programming languages. The RAD model
works to reuse existing program components (when possible) or create reusable components
(when necessary). In all cases, automated tools are used to facilitate construction of the software.

(v) Testing and turnover

Since the RAD process emphasizes reuse, many of the program components have already been
tested. This minimizes the testing and development time.

2.2.7 Component Assembly Model


Object technologies provide the technical framework for a component-based process model for
software engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithms that are used to manipulate the data. If properly designed
and implemented, object oriented classes are reusable across different applications and computer
based system architectures. Component Assembly Model leads to software reusability. The
integration/assembly of the already existing software components accelerate the development
process. Nowadays many component libraries are available on the Internet. If the right
components are chosen, the integration aspect is made much simpler.

2.2.8 Process improvement models


2.2.8.1 Capability Maturity Model Integration

The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on
best practice. Independent assessments grade organizations on how well they follow their
defined processes, not on the quality of those processes or the software produced. CMMI has
replaced CMM.

2.2.8.2 ISO 9000

ISO 9000 describes standards for a formally organized process to manufacture a product and the
methods of managing and monitoring progress. Although the standard was originally created for
the manufacturing sector, ISO 9000 standards have been applied to software development as
well. Like CMMI, certification with ISO 9000 does not guarantee the quality of the end result,
only that formalized business processes have been followed.

2.2.8.3 ISO/IEC 15504

ISO/IEC 15504 Information technology — Process assessment also known as Software Process
Improvement Capability Determination (SPICE), is a "framework for the assessment of software
processes". This standard is aimed at setting out a clear model for process comparison. SPICE is
used much like CMMI. It models processes to manage, control, guide and monitor software
development. This model is then used to measure what a development organization or project
team actually does during software development. This information is analyzed to identify
weaknesses and drive improvement.
2.2.9 Formal methods

Formal methods are mathematical approaches to solving software (and hardware) problems at
the requirements, specification, and design levels. Formal methods are most likely to be applied
to safety-critical or security-critical software and systems, such as avionics software. Software
safety assurance standards, such as DO-178B, DO-178C, and Common Criteria demand formal
methods at the highest levels of categorization.

For sequential software, examples of formal methods include the B-Method, the specification
languages used in Automated theorem proving, RAISE, VDM, and the Z notation.
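A full formal specification in a notation such as Z or B is beyond the scope of a short example, but the flavour of a precondition/postcondition-style specification can be hinted at with executable assertions. This is only an informal analogue written in ordinary code, not a formal method in the strict sense; the function and conditions are invented for illustration.

```python
def withdraw(balance, amount):
    """Informal, executable analogue of a pre/postcondition specification
    (only a hint of the style; real formal methods use dedicated notations
    such as Z or B, together with machine-checked reasoning)."""
    # Precondition: the amount is positive and covered by the balance.
    assert amount > 0 and amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: money is conserved and the balance never goes negative.
    assert new_balance == balance - amount and new_balance >= 0, "postcondition violated"
    return new_balance

print(withdraw(100, 30))  # 70
```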

Another emerging trend in software development is to write a specification in some form of logic
(usually a variation of FOL), and then to directly execute the logic as though it were a program.
The OWL language, based on Description Logic, is an example. There is also work on mapping
some version of English (or another natural language) automatically to and from logic, and
executing the logic directly. Examples are Attempto Controlled English, and Internet Business
Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support
bidirectional English-logic mapping and direct execution of the logic is that they can be made to
explain their results, in English, at the business or scientific level.

2.3 SOFTWARE QUALITY ATTRIBUTES

2.3.1 Introduction
Quality attributes are the overall factors that affect run-time behavior, system design, and user
experience. They represent areas of concern that have the potential for application wide impact
across layers and tiers. Some of these attributes are related to the overall system design, while
others are specific to run time, design time, or user centric issues. The extent to which the
application possesses a desired combination of quality attributes such as usability, performance,
reliability, and security indicates the success of the design and the overall quality of the software
application.
When designing applications to meet any of the quality attribute requirements, it is necessary to
consider the potential impact on other requirements. You must analyze the tradeoffs between
multiple quality attributes. The importance or priority of each quality attribute differs from
system to system; for example, interoperability will often be less important in a single use
packaged retail application than in a line of business (LOB) system.

This chapter lists and describes the quality attributes that you should consider when designing
your application. To get the most out of this chapter, use the table below to gain an
understanding of how quality attributes map to system and application quality factors, and read
the description of each of the quality attributes. Then use the sections containing key guidelines
for each of the quality attributes to understand how that attribute has an impact on your design,
and to determine the decisions you must make to address these issues. Keep in mind that the
list of quality attributes in this chapter is not exhaustive, but provides a good starting point for
asking appropriate questions about your architecture.

2.3.2 Common Quality Attributes


The following table describes the quality attributes covered in this chapter. It categorizes the
attributes in four specific areas linked to design, runtime, system, and user qualities. Use this
table to understand what each of the quality attributes means in terms of your application design.

Design Qualities

Conceptual Integrity: Conceptual integrity defines the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as factors such as coding style and variable naming.

Maintainability: Maintainability is the ability of the system to undergo changes with a degree of ease. These changes could impact components, services, features, and interfaces when adding or changing functionality, fixing errors, and meeting new business requirements.

Reusability: Reusability defines the capability for components and subsystems to be suitable for use in other applications and in other scenarios. Reusability minimizes the duplication of components and also the implementation time.

Run-time Qualities

Availability: Availability defines the proportion of time that the system is functional and working. It can be measured as a percentage of the total system downtime over a predefined period. Availability will be affected by system errors, infrastructure problems, malicious attacks, and system load.

Interoperability: Interoperability is the ability of a system, or of different systems, to operate successfully by communicating and exchanging information with other external systems written and run by external parties. An interoperable system makes it easier to exchange and reuse information internally as well as externally.

Manageability: Manageability defines how easy it is for system administrators to manage the application, usually through sufficient and useful instrumentation exposed for use in monitoring systems and for debugging and performance tuning.

Performance: Performance is an indication of the responsiveness of a system to execute any action within a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place within a given amount of time.

Reliability: Reliability is the ability of a system to remain operational over time. Reliability is measured as the probability that a system will not fail to perform its intended functions over a specified time interval.

Scalability: Scalability is the ability of a system either to handle increases in load without impact on the performance of the system, or to be readily enlarged.

Security: Security is the capability of a system to prevent malicious or accidental actions outside of the designed usage, and to prevent disclosure or loss of information. A secure system aims to protect assets and prevent unauthorized modification of information.

System Qualities

Supportability: Supportability is the ability of the system to provide information helpful for identifying and resolving issues when it fails to work correctly.

Testability: Testability is a measure of how easy it is to create test criteria for the system and its components, and to execute these tests in order to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.

User Qualities

Usability: Usability defines how well the application meets the requirements of the user and consumer by being intuitive, easy to localize and globalize, providing good access for disabled users, and resulting in a good overall user experience.

The following sections describe each of the quality attributes in more detail, and provide
guidance on the key issues and the decisions you must make for each one:

Availability
Conceptual Integrity
Interoperability
Maintainability
Manageability
Performance
Reliability
Reusability
Scalability
Security
Supportability
Testability
User Experience / Usability

Availability

Availability defines the proportion of time that the system is functional and working. It can be
measured as a percentage of the total system downtime over a predefined period. Availability
will be affected by system errors, infrastructure problems, malicious attacks, and system load.
The key issues for availability are:

A physical tier such as the database server or application server can fail or become
unresponsive, causing the entire system to fail. Consider how to design failover support for
the tiers in the system. For example, use Network Load Balancing for Web servers to
distribute the load and prevent requests being directed to a server that is down. Also, consider
using a RAID mechanism to mitigate system failure in the event of a disk failure. Consider if
there is a need for a geographically separate redundant site to failover to in case of natural
disasters such as earthquakes or tornados.
Denial of Service (DoS) attacks, which prevent authorized users from accessing the system,
can interrupt operations if the system cannot handle massive loads in a timely manner, often
due to the processing time required, or network configuration and congestion. To minimize
interruption from DoS attacks, reduce the attack surface area, identify malicious behavior,
use application instrumentation to expose unintended behavior, and implement
comprehensive data validation. Consider using the Circuit Breaker or Bulkhead patterns to
increase system resiliency.
Inappropriate use of resources can reduce availability. For example, resources acquired too
early and held for too long cause resource starvation and an inability to handle additional
concurrent user requests.
Bugs or faults in the application can cause a system wide failure. Design for proper exception
handling in order to reduce application failures from which it is difficult to recover.
Frequent updates, such as security patches and user application upgrades, can reduce the
availability of the system. Identify how you will design for run-time upgrades.
A network fault can cause the application to be unavailable. Consider how you will handle
unreliable network connections; for example, by designing clients with occasionally-
connected capabilities.
Consider the trust boundaries within your application and ensure that subsystems employ
some form of access control or firewall, as well as extensive data validation, to increase
resiliency and availability.
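
The Circuit Breaker pattern suggested above can be sketched in a few lines. The following Python fragment is only an illustrative sketch, not part of any particular framework; the CircuitBreaker class, its thresholds, and the call_backend placeholder are assumptions made for the example. After a number of consecutive failures it stops calling the dependency and fails fast until a cool-down period has passed, which keeps the rest of the system responsive.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open the circuit after repeated failures,
    then retry the dependency only after a cool-down period."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold  # failures before opening
        self.reset_timeout = reset_timeout          # seconds to stay open
        self.failure_count = 0
        self.opened_at = None                       # time the circuit opened

    def call(self, func, *args, **kwargs):
        # If the circuit is open, fail fast until the cool-down has elapsed.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: dependency unavailable")
            self.opened_at = None        # half-open: allow one trial call
            self.failure_count = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.time()
            raise
        self.failure_count = 0           # success resets the failure count
        return result

# Hypothetical usage: wrap calls to an unreliable backend service.
breaker = CircuitBreaker(failure_threshold=3, reset_timeout=30.0)

def call_backend():
    raise ConnectionError("backend down")   # placeholder for a real remote call

try:
    breaker.call(call_backend)
except Exception as exc:
    print("request failed:", exc)
```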

Conceptual Integrity

Conceptual integrity defines the consistency and coherence of the overall design. This includes
the way that components or modules are designed, as well as factors such as coding style and
variable naming. A coherent system is easier to maintain because you will know what is
consistent with the overall design. Conversely, a system without conceptual integrity will
constantly be affected by changing interfaces, frequently deprecating modules, and lack of
consistency in how tasks are performed. The key issues for conceptual integrity are:
Mixing different areas of concern within your design. Consider identifying areas of
concern and grouping them into logical presentation, business, data, and service layers as
appropriate.
Inconsistent or poorly managed development processes. Consider performing an
Application Lifecycle Management (ALM) assessment, and make use of tried and tested
development tools and methodologies.
Lack of collaboration and communication between different groups involved in the
application lifecycle. Consider establishing a development process integrated with tools
to facilitate process workflow, communication, and collaboration.
Lack of design and coding standards. Consider establishing published guidelines for
design and coding standards, and incorporating code reviews into your development
process to ensure guidelines are followed.
Existing (legacy) system demands can prevent both refactoring and progression toward a
new platform or paradigm. Consider how you can create a migration path away from
legacy technologies, and how to isolate applications from external dependencies. For
example, implement the Gateway design pattern for integration with legacy systems.
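
The Gateway design pattern mentioned in the last point can be illustrated with a minimal sketch. The LegacyInventorySystem and InventoryGateway classes below are hypothetical; the point is that application code depends only on the gateway, so the legacy interface is isolated and can later be refactored or replaced without touching its callers.

```python
class LegacyInventorySystem:
    """Stand-in for an existing (legacy) system with an awkward interface."""
    def QRYSTK(self, item_code):                  # legacy naming convention
        return {"CODE": item_code, "QTY_ON_HAND": 42}

class InventoryGateway:
    """Gateway: the rest of the application depends only on this class,
    so the legacy system can be isolated and replaced behind it."""
    def __init__(self, legacy):
        self._legacy = legacy

    def stock_level(self, item_code: str) -> int:
        record = self._legacy.QRYSTK(item_code)
        return int(record["QTY_ON_HAND"])         # translate to a clean result

# Application code talks to the gateway, never to the legacy API directly.
gateway = InventoryGateway(LegacyInventorySystem())
print(gateway.stock_level("A-100"))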

Interoperability

Interoperability is the ability of a system or different systems to operate successfully by communicating and exchanging information with other external systems written and run by
external parties. An interoperable system makes it easier to exchange and reuse information
internally as well as externally. Communication protocols, interfaces, and data formats are the
key considerations for interoperability. Standardization is also an important aspect to be
considered when designing an interoperable system. The key issues for interoperability are:

Interaction with external or legacy systems that use different data formats. Consider how you
can enable systems to interoperate, while evolving separately or even being replaced. For
example, use orchestration with adaptors to connect with external or legacy systems and
translate data between systems; or use a canonical data model to handle interaction with a
large number of different data formats.
Boundary blurring, which allows artifacts from one system to diffuse into another. Consider
how you can isolate systems by using service interfaces and/or mapping layers. For example,
expose services using interfaces based on XML or standard types in order to support
interoperability with other systems. Design components to be cohesive and have low
coupling in order to maximize flexibility and facilitate replacement and reusability.
Lack of adherence to standards. Be aware of the formal and de facto standards for the domain
you are working within, and consider using one of them rather than creating something new
and proprietary.
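
One way to picture the adapter and canonical data model suggestions above is the small sketch below, in which two hypothetical partner formats (XML and JSON) are translated into one internal representation. The class names, field names, and payloads are invented for illustration only.

```python
import json
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class CanonicalOrder:
    """Canonical data model used inside the application."""
    order_id: str
    amount: float

class XmlOrderAdapter:
    """Adapter for a partner that sends orders as simple XML strings."""
    def to_canonical(self, payload: str) -> CanonicalOrder:
        root = ET.fromstring(payload)
        return CanonicalOrder(root.findtext("id"), float(root.findtext("amount")))

class JsonOrderAdapter:
    """Adapter for a partner that sends orders as JSON."""
    def to_canonical(self, payload: str) -> CanonicalOrder:
        data = json.loads(payload)
        return CanonicalOrder(data["orderId"], float(data["total"]))

# Each external format is translated once, at the boundary, into the canonical model.
orders = [
    XmlOrderAdapter().to_canonical("<order><id>17</id><amount>99.5</amount></order>"),
    JsonOrderAdapter().to_canonical('{"orderId": "18", "total": 12.0}'),
]
print(orders)
```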

Maintainability

Maintainability is the ability of the system to undergo changes with a degree of ease. These
changes could impact components, services, features, and interfaces when adding or changing
the application’s functionality in order to fix errors, or to meet new business requirements.
Maintainability can also affect the time it takes to restore the system to its operational status
following a failure or removal from operation for an upgrade. Improving system maintainability
can increase availability and reduce the effects of run-time defects. An application’s
maintainability is often a function of its overall quality attributes, but there are a number of key
issues that can directly affect maintainability:

Excessive dependencies between components and layers, and inappropriate coupling to concrete classes, prevent easy replacement, updates, and changes; and can cause changes to
concrete classes to ripple through the entire system. Consider designing systems as well-
defined layers, or areas of concern, that clearly delineate the system’s UI, business processes,
and data access functionality. Consider implementing cross-layer dependencies by using
abstractions (such as abstract classes or interfaces) rather than concrete classes, and minimize
dependencies between components and layers.
The use of direct communication prevents changes to the physical deployment of
components and layers. Choose an appropriate communication model, format, and protocol.
Consider designing a pluggable architecture that allows easy upgrades and maintenance, and
improves testing opportunities, by designing interfaces that allow the use of plug-in modules
or adapters to maximize flexibility and extensibility.
Reliance on custom implementations of features such as authentication and authorization
prevents reuse and hampers maintenance. To avoid this, use the built-in platform functions
and features wherever possible.
The logic code of components and segments is not cohesive, which makes them difficult to
maintain and replace, and causes unnecessary dependencies on other components. Design
components to be cohesive and have low coupling in order to maximize flexibility and
facilitate replacement and reusability.
The code base is large, unmanageable, fragile, or over complex; and refactoring is
burdensome due to regression requirements. Consider designing systems as well defined
layers, or areas of concern, that clearly delineate the system’s UI, business processes, and
data access functionality. Consider how you will manage changes to business processes and
dynamic business rules, perhaps by using a business workflow engine if the business process
tends to change. Consider using business components to implement the rules if only the
business rule values tend to change; or an external source such as a business rules engine if
the business decision rules do tend to change.
The existing code does not have an automated regression test suite. Invest in test automation
as you build the system. This will pay off as a validation of the system’s functionality, and as
documentation on what the various parts of the system do and how they work together.
Lack of documentation may hinder usage, management, and future upgrades. Ensure that you
provide documentation that, at minimum, explains the overall structure of the application.
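
The advice above to depend on abstractions rather than concrete classes can be shown with a brief sketch. The repository and service names below are hypothetical; what matters is that the business-layer class is coupled only to an abstract interface, so the data-access implementation can be swapped without changes rippling upward.

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Abstraction the business layer depends on; no concrete data-access class leaks upward."""
    @abstractmethod
    def find_name(self, customer_id: int) -> str: ...

class SqlCustomerRepository(CustomerRepository):
    """One concrete implementation; it can be replaced without touching the business layer."""
    def find_name(self, customer_id: int) -> str:
        return f"customer-{customer_id}"      # placeholder for a real query

class GreetingService:
    """Business-layer component coupled only to the abstraction."""
    def __init__(self, repository: CustomerRepository):
        self._repository = repository

    def greet(self, customer_id: int) -> str:
        return "Hello, " + self._repository.find_name(customer_id)

print(GreetingService(SqlCustomerRepository()).greet(7))
```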

Manageability

Manageability defines how easy it is for system administrators to manage the application, usually
through sufficient and useful instrumentation exposed for use in monitoring systems and for
debugging and performance tuning. Design your application to be easy to manage, by exposing
sufficient and useful instrumentation for use in monitoring systems and for debugging and
performance tuning. The key issues for manageability are:

Lack of health monitoring, tracing, and diagnostic information. Consider creating a health
model that defines the significant state changes that can affect application performance, and
use this model to specify management instrumentation requirements. Implement
instrumentation, such as events and performance counters, that detects state changes, and
expose these changes through standard systems such as Event Logs, Trace files, or Windows
Management Instrumentation (WMI). Capture and report sufficient information about errors
and state changes in order to enable accurate monitoring, debugging, and management. Also,
consider creating management packs that administrators can use in their monitoring
environments to manage the application.
Lack of runtime configurability. Consider how you can enable the system behavior to change
based on operational environment requirements, such as infrastructure or deployment
changes.
Lack of troubleshooting tools. Consider including code to create a snapshot of the system’s
state to use for troubleshooting, and including custom instrumentation that can be enabled to
provide detailed operational and functional reports. Consider logging and auditing
information that may be useful for maintenance and debugging, such as request details or
module outputs and calls to other systems and services.
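
As a rough illustration of exposing instrumentation for monitoring, the sketch below uses Python's standard logging module rather than the Windows-specific Event Log or WMI mentioned above; the event names, fields, and log file name are invented for the example.

```python
import logging
import time

# Route instrumentation to whatever monitoring system is in use; here, a log file.
logging.basicConfig(filename="app-health.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orders")

def process_order(order_id: str) -> None:
    start = time.perf_counter()
    try:
        # ... real business work would happen here ...
        log.info("order processed id=%s duration_ms=%.1f",
                 order_id, (time.perf_counter() - start) * 1000)
    except Exception:
        # Capture enough context for administrators to diagnose the failure.
        log.exception("order failed id=%s", order_id)
        raise

process_order("A-100")
```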

Performance

Performance is an indication of the responsiveness of a system to execute specific actions in a given time interval. It can be measured in terms of latency or throughput. Latency is the time
taken to respond to any event. Throughput is the number of events that take place in a given
amount of time. An application’s performance can directly affect its scalability, and lack of
scalability can affect performance. Improving an application’s performance often improves its
scalability by reducing the likelihood of contention for shared resources. Factors affecting
system performance include the demand for a specific action and the system’s response to the
demand. The key issues for performance are:

Increased client response time, reduced throughput, and server resource over utilization.
Ensure that you structure the application in an appropriate way and deploy it onto a system or
systems that provide sufficient resources. When communication must cross process or tier
boundaries, consider using coarse-grained interfaces that require the minimum number of
calls (preferably just one) to execute a specific task, and consider using asynchronous
communication.
Increased memory consumption, resulting in reduced performance, excessive cache misses
(the inability to find the required data in the cache), and increased data store access. Ensure
that you design an efficient and appropriate caching strategy.
Increased database server processing, resulting in reduced throughput. Ensure that you
choose effective types of transactions, locks, threading, and queuing approaches. Use
efficient queries to minimize performance impact, and avoid fetching all of the data when
only a portion is displayed. Failure to design for efficient database processing may incur
unnecessary load on the database server, failure to meet performance objectives, and costs in
excess of budget allocations.
Increased network bandwidth consumption, resulting in delayed response times and
increased load for client and server systems. Design high performance communication
between tiers using the appropriate remote communication mechanism. Try to reduce the
number of transitions across boundaries, and minimize the amount of data sent over the
network. Batch work to reduce calls over the network.
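
A caching strategy such as the one recommended above might, in its simplest form, look like the following time-to-live cache. The TtlCache class and load_product function are illustrative assumptions for this sketch, not a production design.

```python
import time

class TtlCache:
    """Very small time-to-live cache to avoid repeated trips to the data store."""
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._entries = {}                        # key -> (value, expiry time)

    def get_or_load(self, key, loader):
        entry = self._entries.get(key)
        if entry and entry[1] > time.time():      # cache hit, still fresh
            return entry[0]
        value = loader(key)                       # cache miss: hit the data store
        self._entries[key] = (value, time.time() + self.ttl)
        return value

def load_product(product_id):
    print("querying database for", product_id)    # placeholder for a real query
    return {"id": product_id, "price": 9.99}

cache = TtlCache(ttl_seconds=30)
cache.get_or_load("p-1", load_product)   # first call queries the store
cache.get_or_load("p-1", load_product)   # second call is served from the cache
```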

Reliability

Reliability is the ability of a system to continue operating in the expected way over time.
Reliability is measured as the probability that a system will not fail and that it will perform its
intended function for a specified time interval. The key issues for reliability are:

The system crashes or becomes unresponsive. Identify ways to detect failures and
automatically initiate a failover, or redirect load to a spare or backup system. Also, consider
implementing code that uses alternative systems when it detects a specific number of failed
requests to an existing system.
Output is inconsistent. Implement instrumentation, such as events and performance counters,
that detects poor performance or failures of requests sent to external systems, and expose
information through standard systems such as Event Logs, Trace files, or WMI. Log
performance and auditing information about calls made to other systems and services.
The system fails due to unavailability of other externalities such as systems, networks, and
databases. Identify ways to handle unreliable external systems, failed communications, and
failed transactions. Consider how you can take the system offline but still queue pending
requests. Implement store and forward or cached message-based communication systems that
allow requests to be stored when the target system is unavailable, and replayed when it is
online. Consider using Windows Message Queuing or BizTalk Server to provide a reliable
once-only delivery mechanism for asynchronous requests.
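
The store-and-forward idea described above can be sketched very simply: queue outgoing messages locally and replay them when the target system comes back. The transport function, message format, and online/offline flag below are hypothetical; a real system would use a durable queue or a messaging product rather than an in-memory list.

```python
from collections import deque

class StoreAndForwardSender:
    """Queue messages locally when the target system is unavailable and
    replay them once it comes back online."""
    def __init__(self, transport):
        self._transport = transport          # callable that actually delivers
        self._pending = deque()

    def send(self, message):
        self._pending.append(message)
        self.flush()

    def flush(self):
        while self._pending:
            message = self._pending[0]
            try:
                self._transport(message)
            except ConnectionError:
                return                       # target still down; keep messages queued
            self._pending.popleft()          # delivered; remove from the queue

# Hypothetical transport that fails while the target system is offline.
target_online = False

def http_post(message):
    if not target_online:
        raise ConnectionError("target unavailable")
    print("delivered:", message)

sender = StoreAndForwardSender(http_post)
sender.send({"event": "order-created"})      # queued, not lost
target_online = True
sender.flush()                               # replayed once the target is back
```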

Reusability

Reusability is the probability that a component will be used in other components or scenarios to
add new functionality with little or no change. Reusability minimizes the duplication of
components and the implementation time. Identifying the common attributes between various
components is the first step in building small reusable components for use in a larger system.
The key issues for reusability are:

The use of different code or components to achieve the same result in different places; for
example, duplication of similar logic in multiple components, and duplication of similar logic
in multiple layers or subsystems. Examine the application design to identify common
functionality, and implement this functionality in separate components that you can reuse.
Examine the application design to identify crosscutting concerns such as validation, logging,
and authentication, and implement these functions as separate components.
The use of multiple similar methods to implement tasks that have only slight variation.
Instead, use parameters to vary the behavior of a single method.
Using several systems to implement the same feature or function instead of sharing or
reusing functionality in another system, across multiple systems, or across different
subsystems within an application. Consider exposing functionality from components, layers,
and subsystems through service interfaces that other layers and systems can use. Use
platform agnostic data types and structures that can be accessed and understood on different
platforms.
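
The suggestion above to use parameters to vary the behavior of a single method, rather than writing several near-duplicate methods, can be illustrated as follows; the export_csv function is a made-up example.

```python
# Instead of near-duplicate methods such as export_csv_with_header(),
# export_csv_without_header(), export_csv_semicolon(), ... one method
# takes parameters that vary its behavior.
def export_csv(rows, delimiter=",", include_header=True, header=("id", "name")):
    lines = []
    if include_header:
        lines.append(delimiter.join(header))
    for row in rows:
        lines.append(delimiter.join(str(value) for value in row))
    return "\n".join(lines)

print(export_csv([(1, "widget"), (2, "gadget")]))
print(export_csv([(1, "widget")], delimiter=";", include_header=False))
```
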
Scalability

Scalability is the ability of a system to either handle increases in load without impact on the
performance of the system, or the ability to be readily enlarged. There are two methods for
improving scalability: scaling vertically (scale up), and scaling horizontally (scale out). To scale
vertically, you add more resources such as CPU, memory, and disk to a single system. To scale
horizontally, you add more machines to a farm that runs the application and shares the load. The
key issues for scalability are:

Applications cannot handle increasing load. Consider how you can design layers and tiers for
scalability, and how this affects the capability to scale up or scale out the application and the
database when required. You may decide to locate logical layers on the same physical tier to
reduce the number of servers required while maximizing load sharing and failover
capabilities. Consider partitioning data across more than one database server to maximize
scale-up opportunities and allow flexible location of data subsets. Avoid stateful components
and subsystems where possible to reduce server affinity.
Users incur delays in response and longer completion times. Consider how you will handle
spikes in traffic and load. Consider implementing code that uses additional or alternative
systems when it detects a predefined service load or a number of pending requests to an
existing system.
The system cannot queue excess work and process it during periods of reduced load.
Implement store-and-forward or cached message-based communication systems that allow
requests to be stored when the target system is unavailable, and replayed when it is online.
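
The data-partitioning suggestion above can be sketched as a stable, hash-based choice of database shard. The server names below are placeholders, and a real deployment would also need a strategy for rebalancing data when shards are added; this is a sketch of the idea only.

```python
import hashlib

# Hypothetical list of database servers the data is partitioned across.
SHARDS = ["db-server-0", "db-server-1", "db-server-2"]

def shard_for(customer_id: str) -> str:
    """Pick the database server that owns this customer's data.
    A stable hash keeps the same key on the same shard as load grows."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

for cid in ["cust-1", "cust-2", "cust-3"]:
    print(cid, "->", shard_for(cid))
```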

Security

Security is the capability of a system to reduce the chance of malicious or accidental actions
outside of the designed usage affecting the system, and prevent disclosure or loss of information.
Improving security can also increase the reliability of the system by reducing the chances of an
attack succeeding and impairing system operation. Securing a system should protect assets and
prevent unauthorized access to or modification of information. The factors affecting system
security are confidentiality, integrity, and availability. The features used to secure systems are
authentication, encryption, auditing, and logging. The key issues for security are:

Spoofing of user identity. Use authentication and authorization to prevent spoofing of user
identity. Identify trust boundaries, and authenticate and authorize users crossing a trust
boundary.
Damage caused by malicious input such as SQL injection and cross-site scripting. Protect
against such damage by ensuring that you validate all input for length, range, format, and
type using the constrain, reject, and sanitize principles. Encode all output you display to
users.
Data tampering. Partition the site into anonymous, identified, and authenticated users and use
application instrumentation to log and expose behavior that can be monitored. Also use
secured transport channels, and encrypt and sign sensitive data sent across the network.
Repudiation of user actions. Use instrumentation to audit and log all user interaction for
application critical operations.
Information disclosure and loss of sensitive data. Design all aspects of the application to
prevent access to or exposure of sensitive system and application information.
Interruption of service due to Denial of service (DoS) attacks. Consider reducing session
timeouts and implementing code or hardware to detect and mitigate such attacks.
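
The constrain, reject, and sanitize principles and the output-encoding advice above can be sketched as follows. The username rule is an invented example; for database access, parameterized queries rather than string concatenation would be the corresponding defence against SQL injection.

```python
import html
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")   # constrain: length, range, format

def validate_username(raw: str) -> str:
    """Reject input that does not match the expected format."""
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_comment(comment: str) -> str:
    """Encode output shown to users so script injection is neutralized."""
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
try:
    validate_username("bad name; DROP TABLE users")
except ValueError as exc:
    print("rejected:", exc)
```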

Supportability

Supportability is the ability of the system to provide information helpful for identifying and
resolving issues when it fails to work correctly. The key issues for supportability are:

Lack of diagnostic information. Identify how you will monitor system activity and
performance. Consider a system monitoring application, such as Microsoft System Center.
Lack of troubleshooting tools. Consider including code to create a snapshot of the system’s
state to use for troubleshooting, and including custom instrumentation that can be enabled to
provide detailed operational and functional reports.
Lack of tracing ability. Use common components to provide tracing support in code, perhaps
though Aspect Oriented Programming (AOP) techniques or dependency injection. Enable
tracing in Web applications in order to troubleshoot errors.
Lack of health monitoring. Consider creating a health model that defines the significant state
changes that can affect application performance, and use this model to specify management
instrumentation requirements. Implement instrumentation, such as events and performance
counters, that detects state changes, and expose these changes through standard systems such
as Event Logs, Trace files, or Windows Management Instrumentation (WMI). Capture and
report sufficient information about errors and state changes in order to enable accurate
monitoring, debugging, and management.

Testability

Testability is a measure of how well a system or its components allow you to create test criteria and
execute tests to determine if the criteria are met. Testability allows faults in a system to be
isolated in a timely and effective manner. The key issues for testability are:

Complex applications with many processing permutations are not tested consistently, perhaps
because automated or granular testing cannot be performed if the application has a
monolithic design. Design systems to be modular to support testing. Provide instrumentation
or implement probes for testing, mechanisms to debug output, and ways to specify inputs
easily. Design components that have high cohesion and low coupling to allow testability of
components in isolation from the rest of the system.
Lack of test planning. Start testing early during the development life cycle. Use mock objects
during testing, and construct simple, structured test solutions.
Poor test coverage, for both manual and automated tests. Consider how you can automate
user interaction tests, and how you can maximize test and code coverage.
Input and output inconsistencies; for the same input, the output is not the same and the output
does not fully cover the output domain even when all known variations of input are provided.
Consider how to make it easy to specify and understand system inputs and outputs to
facilitate the construction of test cases.
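
The use of mock objects during testing, recommended above, can be illustrated with a short unit test. OrderService and its payment gateway are hypothetical; the example assumes the dependency is injected so that a mock can stand in for the real system and the component can be tested in isolation.

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Component under test; it depends on a payment gateway passed in,
    which keeps it loosely coupled and easy to test in isolation."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.payment_gateway.charge(amount)

class OrderServiceTest(unittest.TestCase):
    def test_charges_the_gateway(self):
        gateway = Mock()                         # mock object replaces the real dependency
        gateway.charge.return_value = "receipt-1"
        service = OrderService(gateway)
        self.assertEqual(service.place_order(10), "receipt-1")
        gateway.charge.assert_called_once_with(10)

    def test_rejects_non_positive_amounts(self):
        with self.assertRaises(ValueError):
            OrderService(Mock()).place_order(0)

if __name__ == "__main__":
    unittest.main()
```
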
User Experience / Usability

The application interfaces must be designed with the user and consumer in mind so that they are
intuitive to use, can be localized and globalized, provide access for disabled users, and provide a
good overall user experience. The key issues for user experience and usability are:

Too much interaction (an excessive number of clicks) required for a task. Ensure you design
the screen and input flows and user interaction patterns to maximize ease of use.
Incorrect flow of steps in multi-step interfaces. Consider incorporating workflows where
appropriate to simplify multi-step operations.
Data elements and controls are poorly grouped. Choose appropriate control types (such as
option groups and check boxes) and lay out controls and content using the accepted UI
design patterns.

Feedback to the user is poor, especially for errors and exceptions, and the application is
unresponsive. Consider implementing technologies and techniques that provide maximum
user interactivity, such as Asynchronous JavaScript and XML (AJAX) in Web pages and
client-side input validation. Use asynchronous techniques for background tasks, and tasks
such as populating controls or performing long-running tasks.

2.4 HIERARCHICAL MODELS OF QUALITY


This section discusses the classical hierarchical models of quality provided by McCall and
Boehm. These models form the basis of most subsequent work in software quality.

2.4.1 What is a hierarchical model?


In order to compare quality in different situations, both qualitatively and quantitatively, it is
necessary to establish a model of quality. There have been many models suggested for quality.
Most are hierarchical in nature. In order to examine the nature of hierarchical models, consider
the methods of assessment and reporting used in schools. The progress of a particular student
has generally been recorded under a series of headings, usually subject areas such as Science,
English, Maths and Humanities.

A qualitative assessment is generally made, along with a more quantified assessment. These
measures may be derived from a formal test or examination, continuous assessment of
coursework or a quantified teacher assessment. In practice, the resulting scores are derived from
a whole spectrum of techniques. They range from those which may be regarded as objective and
transferable to those which are simply a more convenient representation of qualitative
judgements. In the past, these have been gathered together to form a traditional school report.
(Table 2.1)

The traditional school report often had an overall mark and grade, a single figure, generally
derived from the mean of the component figures, intended to provide a single measure of
success. In recent years, the assessment of pupils has become considerably more sophisticated
and the model on which the assessment is based has become more complicated. Subjects are
now broken down into skills, each of which is measured and the collective results used to give a
more detailed overall picture. For example, in English, pupils’ oral skills are considered
alongside their ability to read; written English is further subdivided into an assessment of style,
content and presentation. The hierarchical model requires another level of sophistication in order
to accommodate the changes (Figure 2.1). Much effort is currently being devoted to producing a
broader-based assessment, and in ensuring that qualitative judgements are as accurate and
consistent as possible. The aim is for every pupil to emerge with a broad-based ‘Record of
Achievement’ alongside their more traditional examination results.
Table 2.1 A traditional school report

Subject        Teacher's comments        Term grade (A-E)        Exam mark (%)
English
Maths
Science
Humanities
Languages
Technology
OVERALL

A hierarchical model of software quality is based upon a set of quality criteria, each of which has
a set of measures or metrics associated with it. This type of model is illustrated schematically in
Figure 2.2.
Examples of quality criteria typically employed include reliability, security and adaptability.
The issues relating to the criteria of quality are:

What criteria of quality should be employed?


How do they inter-relate?
How may the associated metrics be combined into a meaningful overall measure of quality?
2.4.2 THE McCALL AND BOEHM MODELS

2.4.2.1 The McCall Model

This model was first proposed by McCall in 1977. It was later adapted and revised as the
MQ model (Watts, 1987). Jim McCall produced this model (Figure 2.3) for the US Air Force
and the intention was to bridge the gap between users and developers. He tried to map the user
view with the developer's priority. The model is aimed at system developers, to be used during
the development process. However, in an early attempt to bridge the gap between users and
developers, the criteria were chosen in an attempt to reflect users’ view as well as developers’
priorities.

Figure 2.3 : Decomposition tree of McCall software quality model

With the perspective of hindsight, the criteria appear to be technically oriented, but they are
described by a series of questions which define them in terms acceptable to non-specialist
managers. The three perspectives of the model are described as:
Product revision

The product revision perspective identifies quality factors that influence the ability to change the
software product, these factors are:-

Maintainability, the ability to find and fix a defect.
Flexibility, the ability to make changes required as dictated by the business.
Testability, the ability to validate the software requirements.

Product transition

The product transition perspective identifies quality factors that influence the ability to adapt the
software to new environments:-

Portability, the ability to transfer the software from one environment to another.
Reusability, the ease of using existing software components in a different context.
Interoperability, the extent, or ease, to which software components work together.

Product operations

The product operations perspective identifies quality factors that influence the extent to which
the software fulfils its specification:-

Correctness, the functionality matches the specification.
Reliability, the extent to which the system fails.
Efficiency, system resource (including CPU, disk, memory, network) usage.
Integrity, protection from unauthorized access.
Usability, ease of use.

The McCall model, illustrated in Figure 2.4, identifies three areas of software work: product
operation, product revision and product transition. These are summarized in Table 2.2
Table 2.2 The three areas as addressed by McCall’s model (1977)

Product operation     requires that it can be learned easily, operated efficiently and that the results are those required by the user.

Product revision      is concerned with error correction and adaptation of the system. This is important because it is generally considered to be the most costly part of software development.

Product transition    may not be so important in all applications. However, the move towards distributed processing and the rapid rate of change in hardware is likely to increase its importance.
McCall’s model forms the basis for much quality work even today. For example, the MQ model
published by Watts (1987) is heavily based upon the McCall model. The quality characteristics
in this model are described as follows:

Utility is the ease of use of the software.
Integrity is the protection of the program from unauthorized access.
Efficiency is concerned with the use of resources, e.g. processor time, storage.
It falls into two categories: execution efficiency and storage efficiency.
Correctness is the extent to which a program fulfills its specification.
Reliability is its ability not to fail.
Maintainability is the effort required to locate and fix a fault in the program within its
operating environment.
Flexibility is the ease of making changes required by changes in the operating
environment.
Testability is the ease of testing the program, to ensure that it is error-free and meets its
specification.
Portability is the effort required to transfer a program from one environment to another.
Reusability is the ease of reusing software in a different context.
Interoperability is the effort required to couple the system to another system.

This study was carried out by the National Computer Centre (NCC). The characteristics and sub-characteristics of the McCall model are shown in the following figure.

The idea behind McCall’s Quality Model is that the quality factors synthesized should provide a
complete software quality picture. The actual quality metric is obtained by answering yes/no questions that are then put in relation to each other. That is, if you answer an equal number of "yes" and "no" on the questions measuring a quality criterion, you will achieve 50% on that criterion. The metrics can then be synthesized per quality criterion, per quality factor, or, if relevant, per product or service.
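
The yes/no scoring described above can be made concrete with a small calculation; the checklist questions below are invented purely for illustration.

```python
# Hypothetical checklist answers for one quality criterion (True = "yes").
answers = {
    "Is every module documented?": True,
    "Are coding standards followed?": False,
    "Is there an automated regression suite?": True,
    "Are all public interfaces reviewed?": False,
}

score = sum(answers.values()) / len(answers)    # fraction of "yes" answers
print(f"criterion score: {score:.0%}")          # two of four -> 50%
```
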
2.4.2.2 The Boehm Model

Barry W. Boehm (1978) also defined a hierarchical model of software quality characteristics, in
trying to qualitatively define software quality as a set of attributes and metrics (measurements).
Boehm’s model was defined to provide a set of ‘well-defined, well-differentiated characteristics
of software quality’. The model is hierarchical in nature but the hierarchy is extended, so that
quality criteria are subdivided. The first division is made according to the uses made of the
system. These are classed as ‘general’ or ‘as is’ utility, where the ‘as is’ utilities are a subtype of
the general utilities, roughly equating to the product operation criteria of McCall’s model. There
are two levels of actual quality criteria, the intermediate level being further split into primitive
characteristics, which are amenable to measurement. The model is summarized in Figure 2.5

At the highest level of his model, Boehm defined three primary uses (or basic software
requirements), these three primary uses are:-

As-is utility, the extent to which the as-is software can be used (i.e. ease of use, reliability and
efficiency).

Maintainability, ease of identifying what needs to be changed as well as ease of modification and retesting.

Portability, ease of changing software to accommodate a new environment.


These three primary uses had quality factors associated with them, representing the next level of Boehm's hierarchical model. These quality factors are further broken down into primitive constructs that can be measured; for example, Testability is broken down into accessibility, communicativeness, structure and self-descriptiveness. As with McCall's Quality Model, the
intention is to be able to measure the lowest level of the model.
2.5 PRACTICAL EVALUATION

Correctness was seen as an umbrella property encompassing other attributes. Two types of
correctness were consistently identified. Developers talked in terms of technical correctness,
which included factors such as reliability, maintainability and the traditional software virtues.
Computer users, however, talked of business correctness, of meeting business needs and criteria
such as timeliness, value for money and ease of transition.

This reinforced the existence of different views of quality. It suggests that these developers
emphasized conformance to specification, while users sought fitness for purpose. There was
remarkable agreement between the different organizations as to some of the basic findings.
In particular:

A basic distinction between business and technical correctness.


A recognition that different aspects of quality would influence each other.
The study confirmed that the relationships were often context and even project dependent.
The studies demonstrated that the relationships were often not commutative. Thus although
property A may reinforce property B, property B may not reinforce property A.

Table 2.4 Software quality criteria elicited from a large manufacturing company

Criteria                Definition

Technical correctness:  The extent to which a system satisfies its technical specification.
User correctness:       The extent to which a system fulfills a set of objectives agreed with the user.
Reliability:            The extent to which a system performs its intended function without failure.
Efficiency:             The computing resources required by a system to perform a function.
Integrity:              The extent to which data and software are consistent and accurate across systems.
Security:               The extent to which unauthorized access to a system can be controlled.
Understandability:      The ease of understanding code for maintaining and adapting systems.
Flexibility:            The effort required to modify a system.
Ease of interfacing:    The effort required to interface one system to another.
Portability:            The effort required to transfer a program from one hardware configuration and/or software environment to another or to extend the user base.
User consultation:      The effectiveness of consultation with users.
Accuracy:               The accuracy of the actual output produced, i.e., is it the right answer?
Timeliness:             The extent to which delivery fits with the deadlines and practices of users.
Time to use:            The time for the user to achieve a result.
Appeal:                 The extent to which a user likes the system.
User flexibility:       The extent to which the system can be adapted both to changes in user requirements and individual taste.
Cost/benefit:           The extent to which the system fulfils its cost/benefit specification both with regard to development costs and business benefits.
User friendliness:      The time to learn how to use the system and ease of use once learned.

2.5.1 Quality Assurance


Quality assurance (QA) refers to the planned and systematic activities implemented in a quality
system so that quality requirements for a product or service will be fulfilled. It is the systematic
measurement, comparison with a standard, monitoring of processes and an associated feedback
loop that confers error prevention. This can be contrasted with quality control, which is focused
on process outputs.

Two principles included in QA are: “Fit for purpose”, the product should be suitable for the
intended purpose; and “Right first time”, mistakes should be eliminated. QA includes
management of the quality of raw materials, assemblies, products and components, services
related to production, and management, production and inspection processes.

Suitable quality is determined by product users, clients or customers, not by society in general. It
is not related to cost, and adjectives or descriptors such as "high" and "poor" are not applicable. For example, a low-priced product may be viewed as having high quality because it is disposable, whereas another may be viewed as having poor quality because it is not disposable.

2.5.2 Quality Assurance Plan


The objective of quality assurance plan is to develop and design the activities related to quality
control project for the organization. It is a composite document containing all the information
related to the quality control activities. It is used to schedule the reviews and audits for checking
different business components and also to check the correctness of these testing procedures as
defined in the plan. The quality management team is fully responsible for building up the primary
design of the plan. To develop this plan, certain steps are followed, which are described below.

Step 1: To define the quality goals for the processes. These goals will be accepted
unconditionally by both the developer and the customer. These objectives are to be clearly
described in the plan, so that both the parties can understand easily the scope of the processes.
The developers might also set a standard to define the goals. If possible, the plan can also
describe the quality goals in terms of measurement. This will ultimately help to measure the
performance of the processes in terms of gradation.

Step 2: To define the organization and the roles and responsibilities of the participant activities.
It should include the reporting system for the outcome of the quality reviews. The quality team
should know where to submit the reports, directly to the developers or somebody else. In many
cases, the reports are submitted to the project review team, who in turn delivers the report to the
subsequent departments and keeps it in storage for records. Whatever is the process of reporting,
it should be well defined in the plan to avoid disputes or complications in the submission process
for reviews and audits.

Step 3: The subsidiary quality assurance plan: It includes the list of other related plans
describing project standards, which have references in any of the process. These subsidiary plans
are related to the quality standards of several business components and how they are related to
each other in achieving the collective qualitative objective. This information also helps to
determine the different types of reviews to be done and how often they will be performed.
Normally, the included referenced plans are identified below.

a. Documentation Plan
b. Measurement Plan
c. Risk Measurement Plan
d. Problem Resolution Plan
e. Configuration Management Plan
f. Product Development Plan
g. Test Plan
h. Subcontractor Management Plan etc.
Step 4: To identify the task and activities of the quality control team. Generally, this will include
following reviews:

a. Reviewing project plans to ensure that the project abides by the defined process.
b. Reviewing the project to ensure performance according to the plans.
c. Endorsement of variation from the standard process.
d. Assessing the improvement of the processes.

It is the responsibility of the quality manager, to fix the schedule for the reviews and audits to
conduct quality control. This schedule is also documented within the plan, so that task control
can be done at an individual level. Thus, the entire process of quality control is documented
within the plan. This helps as a guideline for the reviewers and developers, simultaneously.

2.5.3 Quality control


Quality control, or QC for short, is a process by which entities review the quality of all factors
involved in production. This approach places an emphasis on three aspects:

a. Elements such as controls, job management, defined and well managed processes,
performance and integrity criteria, and identification of records
b. Competence, such as knowledge, skills, experience, and qualifications
c. Soft elements, such as personnel integrity, confidence, organizational culture, motivation,
team spirit, and quality relationships.

Controls include product inspection, where every product is examined visually, and often using a
stereo microscope for fine detail before the product is sold into the external market. Inspectors
will be provided with lists and descriptions of unacceptable product defects such as cracks or
surface blemishes for example. Quality control emphasizes testing of products to uncover defects
and reporting to management who make the decision to allow or deny product release, whereas
quality assurance attempts to improve and stabilize production (and associated processes) to
avoid, or at least minimize, issues which led to the defect(s) in the first place.

Figure 2.7: Quality Management, Quality Assurance and Quality Control

2.5.4 Quality Assurance (QA)


Monitoring and measuring the strength of the development process is SQA. QA is the set of
support activities (including facilitation, training, measurement, and analysis) needed to provide
adequate confidence that processes are established and continuously improved to produce
products that meet specifications and are fit for use.

Following are some of the QA activities:

a. System development methodologies
b. Establish an estimation process
c. Sets up measurement programs to evaluate processes.
d. System maintenance process
e. Requirements definition process
f. Testing Process and standards
g. Identifies weaknesses in processes and improves them.
h. Management responsibility, frequently performed by staff function.
i. Concerned with all products produced by the process.
2.5.5 Quality Control (QC):
Quality Control is the process by which product quality is compared with applicable standards,
and the action taken when non-conformance is detected. Its main focus is defect detection and
removal. Quality Control is the validation of the Software product with respect to Customer
Requirements and Expectations. It is a process by which product quality is compared with
applicable standards, and the action taken when non-conformance is detected.
These activities begin at the start of the software development process with reviews of
requirements, and continue until all application testing is complete.

It is possible to have quality control without quality assurance. A testing team may be in place
to conduct system testing at the end of development.

Following are some of the QC activities:

a. Relates to specific product or service.
b. Implements the process
c. Verifies Specific attributes are there or not in product/service.
d. Identifies for correcting defects.
e. Detects, Reports and corrects defects
f. Concerned with specific product.

2.5.6 The Following Statements help differentiate Quality Control from Quality Assurance
Quality Control is concerned with specific Product or Service. And Quality Assurance is
concerned with all of the products that will ever be produced by a process.
QA does not assure quality, rather it creates and ensures the processes are being followed
to assure quality. QC does not control quality, rather it measures quality.
Quality control activities are focused on the deliverable itself. Quality assurance activities
are focused on the processes used to create the deliverable.
Quality Control identifies defects for the primary purpose of correcting defects and also
verifies whether specific attribute(s) are in, or are not in, a specific product or service.
While Quality Assurance identifies weaknesses in processes and improves them. Quality
Assurance sets up measurement programs to evaluate processes.
Quality Control is the responsibility of the Tester. Quality Assurance is a management
responsibility, frequently performed by a staff function.
Quality Assurance is sometimes called quality control because it evaluates whether
quality control is working. Quality Assurance personnel should never perform
quality control unless it is to validate quality control.
Quality Assurance is preventive in nature while Quality Control is detective in nature.

2.6 SUMMARY
All the different software development models have their own advantages and disadvantages.
Nevertheless, in the contemporary commercial software development world, the fusion of all
these methodologies is incorporated. Timing is very crucial in software development. If a delay
happens in the development phase, the market could be taken over by the competitor. Also if a
‘bug’ filled product is launched in a short period of time (quicker than the competitors), it may
affect the reputation of the company. So, there should be a tradeoff between the development
time and the quality of the product. Customers don’t expect a bug free product but they expect a
user-friendly product that they can give a thumbs-up to.

A better understanding of quality can be achieved by studying quality models. The initial quality models were hierarchical in nature. These hierarchies provide a better perspective on
quality characteristics. The models proposed by McCall and Boehm fall into the above category. The
perspectives in McCall model are- Product revision (ability to change), Product transition
(adaptability to new environments) and Product operations (basic operational characteristics). In
total, McCall identified 11 quality factors broken down by the 3 perspectives, as listed above.
For each quality factor McCall defined one or more quality criteria (a way of measurement), in
this way an overall quality assessment could be made of a given software product by evaluating
the criteria for each factor. Boehm’s model was defined to provide a set of ‘well-defined, well-
differentiated characteristics of software quality’. The model is hierarchical in nature but the
hierarchy is extended, so that quality criteria are subdivided. There are two levels of quality
criteria, the intermediate level being further split into primitive characteristics, which are
amenable to measurement in this model.
Assignment-Module 2

1. The ___________ describes the method of selecting, implementing and monitoring


the life cycle for software.
a. ISO/IEC 12207
b. ISO/IEC 9126
c. IEEE
d. ISO 9000

2. SEPG stands for ___________


a. Software Engineering Process Group
b. Software Engineering Product Groups
c. Six sigma Engineering Production Group
d. Software Experienced Product Group

3. SDLC stands for ___________


a. Software design life cycle
b. Software development life cycle
c. System development life cycle
d. System design life cycle

4. CMM stands for ___________


a. Capability Maturity Model
b. Capable Maturity Model
c. Complexity Mature Model
d. Capability Maintainable Model
5. Waterfall model is not suitable for___________
a. Small projects
b. Accommodating changes
c. Complex projects
d. None of the above

6. Which is not a software life cycle model


a. Waterfall model
b. Spiral model
c. Prototyping model
d. Capability Maturity Model

7. Which model is cyclic version of linear model


a. Waterfall model
b. Spiral Model
c. Prototyping model
d. None of them

8. Which is the most important feature of spiral model


a. Quality management
b. Risk management
c. Performance management
d. Evolutionary management

9. Which phase is not available in waterfall model


a. Coding
b. Testing
c. Maintenance
d. Abstraction
10. What are the hierarchical models
a. Mc call model
b. Boehm model
c. None of them
d. Both of them

Key - Module 2
1. a
2. a
3. b
4. a
5. b
6. d
7. c
8. b
9. d
10. d
