SOFTWARE ENGINEERING (DMC 1703)
NOTES
ANNA UNIVERSITY CHENNAI
UNIT I
INTRODUCTION
1.0 INTRODUCTION
Before we go deep into the concepts of Software Engineering, one should know the need for software engineering in software development, the industry's perception of software engineering and, finally, the scope of software engineering.
Conventionally, software is defined as a set of programs, data and documents. Software characteristics differ from hardware characteristics. Pressman clearly brings out the differences as follows:
- Software is a logical product and is engineered, whereas hardware is a physical product.
- Software does not wear out; hardware does.
- Software evolves and is subject to continual change, whereas hardware is not.
- Most software is custom built, whereas hardware is built for general purposes.
It is the software that harnesses the full potential of the hardware.
Software can be classified into different types depending on its functionality:
- System Software
- Application Software
- Real Time Software
- Embedded Software
System Software
System software, such as compilers, editors, operating systems and drivers, provides an environment for the user to carry out computations. It interacts with the computer hardware and with various users. The development of hardware and system software goes hand in hand. In view of the rapid development in hardware technologies, the need for system programs is growing in parallel. The development of system software is complex and requires a great deal of human effort.
Application Software
These software products aid business information processing. They are built on business logic that describes the interactions among business processes, and they access one or more large databases containing business information. These concepts will become clearer as we discuss different software engineering methodologies in subsequent sections.
Real Time Software
Real-time software is developed for mission-critical applications where responses must come within a stipulated time period. These are event-driven systems: every event triggers an action that must be completed within a constrained time period. Software development for such applications is quite challenging and requires sophisticated principles of software engineering.
Embedded Software
Industries want to automate many processes so that products come out of the organization within stipulated schedules. The software that monitors these activities therefore needs automatic control mechanisms. In these cases the software is embedded in read-only memory and is used to control and monitor the products.
Besides these, there is software that makes use of non-numerical algorithms to solve complex problems that may not be amenable to direct computation and analysis. Expert systems and knowledge-based systems come under this category. Some software is developed by mimicking the human brain; such systems are called neural network software.
Whatever the type of software, its development depends on the following factors:
- Schedules
- Cost
- Human Resources
All major organizations that develop the software mentioned above need to deliver it to clients on time, without breaking the schedules agreed upon between clients and organizations. Sometimes the delivery schedules may be very tight. The entire software product, which may be broken down into components/modules, may have to be delivered either in part or in full within the stipulated, agreed cost.
These components/modules need to be developed without mistakes or bugs. Since debugging effort and bug fixing involve cost, there may ultimately be cost overruns. The increase in cost may be many times the original cost, depending on the stage at which the mistakes are discovered. The impact on cost will be discussed in detail when we discuss software cost estimation.
Large and complex software requires a team of more than one member; team size generally varies from 10 to 200 members depending on the complexity of the project. Team constitution and team management are also vital for the success of a software development project. Determining the optimum team size requires careful analysis of the project. These concepts will be discussed in detail under software project planning.
Our ultimate aim is to produce high-quality (good) software without schedule slippages and cost overruns.
Having decided on this objective, we shall highlight the quality characteristics that software should possess and how we can achieve this objective. Here the principles of software engineering play a major role in developing quality software.
Learning Objectives
- To know what is software engineering and the need for it.
- To understand the basic concepts of Software Engineering
- To examine various process models and their uses and limitations
- To identify under what circumstances you can use these process models.
- To study the need for unified process modeling.
- To understand different techniques for unified process modeling.
- To identify essential drawbacks of popular process models and the need for agile process models.
- To study various agile software development process models.
- To appreciate the use of quantitative and objective approaches for software cost
estimation.
- To study various empirical models for cost estimation.
- To understand most popular and widely used techniques for estimating software
cost and effort
- To examine several factors for project control.
- To understand what exactly software project planning involves.
- To understand different types of risks and how risks can be prevented/reduced.
First let us define Software Engineering and answer the following questions:
- Why do we need Software Engineering?
- What is the scope of Software Engineering?
Problems are part and parcel of everyday life, and we always try to find solutions to them. Likewise, we come across problems in different domains such as finance, industry, insurance, agriculture and education, which require hardware-based solutions, software-based solutions, or combinations of both. The solutions to these problems may be simple or complex. When a problem is too big to solve, we decompose it into smaller problems that are amenable to easy solutions, and integrate the solutions of these smaller problems to arrive at the appropriate solution of the complex problem. This divide-and-conquer strategy is the philosophy followed in all complex software-based solutions. Traditionally, this approach is called the analysis and synthesis of a problem. Thus any problem-solving technique must have two sides: analyzing the problem to determine its nature, and then synthesizing a solution based on that analysis. Any solution to a given problem must be approached in a systematic manner with careful analysis of the problem. From the definition of software given in the introduction, it is obvious that software is a product that provides a solution to a problem specified by the customers/clients. The development of software requires an engineering approach. This important aspect was recognized at the 1968 NATO Software Engineering Conference held in Germany. Everybody at that time was convinced that software production should be an engineering activity.
Definition of Software Engineering
The IEEE defines Software Engineering as the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.
As we saw in the introduction, software is developed, unlike other physical products, which are manufactured. Each product, whether a physical product or a logical product like software, requires a sequence of steps or phases for its completion. The life cycle model for software development consists of the following phases, even though the name of each phase may vary from organization to organization:
- Requirement phase
- Specification phase
- Design phase
- Implementation Phase
- Integration Phase
- Maintenance Phase
- Retirement or termination phase
By and large, this is the generic software development life cycle. To find a solution to the problem we require a variety of methods, tools, procedures and process paradigms in order to have effective control over the overall software development and to produce quality software.
Methods provide the technical know-how for developing software. They may include several tasks such as project planning, estimation, software requirements analysis, selection of appropriate data structures, software architecture, algorithms, coding and testing.
A tool is an automated system that supports the methods explained previously. Tools are essential to enhance the productivity and quality of the software being developed.
A procedure is like a cookbook recipe. It is a combination of methods and tools to be adopted for software development.
Finally, a process paradigm provides the approach or framework for building the software. The process paradigm may follow a classical approach like Structured System Analysis and Design (SSAD) or Object Oriented Development, depending on the problem to be solved.
SSAD may be suitable for certain types of problems, and Object Oriented Analysis and Design may be suitable for other classes of problems. These concepts will become clearer in subsequent chapters.
From the preceding discussions it is easy to note that software engineering has many facets. Software engineering should not be construed as programming, although programming is an essential part of it. Mathematical methods play a role in program proving and correctness. Sound engineering practices are absolutely necessary to get useful, quality products. Psychological aspects play a role in enhancing communication between humans and machines, in other words human-computer interaction. Management concepts are needed to effectively control the whole development project.
A simple view of software development is given below.
Fig 1.1 A simplistic view of software development (Problem → Requirements Specification → Design Specification → Programme → Working Programme → Retirement or Termination, via the activities of analysis, design, implementation, testing and integration, and maintenance, guided by a process model)

The problem to be solved can be split into a problem domain, where the needs of the customers and the features of the problem are considered. In fact, they are the inputs used to obtain the required solution. What is required for the solution, such as the requirements specification and design specification, forms the solution domain.
Software Processes and Process Characteristics
Whether we develop software or manufacture a physical product, we always follow a sequence of steps to accomplish a set of tasks, and there is always an order in which these tasks are accomplished.
A software process is defined as a set of ordered tasks: a series of steps involving activities, constraints and resources that produce an output of some kind. A process usually involves a set of tools and techniques.
Every process has important characteristics, which are listed below:
- A process prescribes all of the major process activities.
- Each process has a set of resources and uses these resources subject to certain constraints (such as schedules).
- A process may be decomposed into sub-processes, so we can think of a hierarchical process model.
- Each process activity has entry and exit criteria, so that we know when the activity starts and ends.
- The activities are arranged in a sequential fashion.
- Every process has a set of guidelines.
- Every process has certain budgetary and resource constraints.
When the process involves building a product, we refer to the process as a life cycle. The software development process is sometimes called the software development cycle.
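The characteristics above — hierarchical decomposition into sub-processes and explicit entry/exit criteria per activity — can be sketched as a small data structure. This is an illustrative model only; the class and field names are my own, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    entry_criterion: str   # condition that must hold before the activity starts
    exit_criterion: str    # condition that signals the activity has ended
    subactivities: list = field(default_factory=list)  # hierarchical decomposition

# A two-level process hierarchy: "Design" decomposes into sub-activities.
design = Activity(
    "Design", "SRS approved", "design document reviewed",
    subactivities=[
        Activity("Architectural design", "SRS approved", "architecture baselined"),
        Activity("Detailed design", "architecture baselined", "module specs ready"),
    ],
)

def count_activities(a: Activity) -> int:
    """Total number of activities in the hierarchy, including the root."""
    return 1 + sum(count_activities(s) for s in a.subactivities)

print(count_activities(design))  # 3
```

The explicit entry and exit criteria are what let a project manager tell, at any moment, which activities have started and which have finished.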
Why are processes important?
They impose consistency and structure on a set of activities. A process is more than a procedure: it is a collection of procedures organized so that we build products that satisfy a set of goals or standards. Process structure helps us have better control over the activities.
Every software development organization maintains its own process document, which covers the following:
Process Document:
- Standards
- Tools
- Methods
- Sub-processes
Software development usually involves the following stages. A detailed discussion of these activities is provided in subsequent chapters.
- Requirements analysis and definition
- System design
- Program design
- Writing the programs (program implementation)
- Unit testing
- Integration testing
- System testing
- System delivery
- Maintenance
In a simple process model, the phases are depicted sequentially. For a given project the phases need not be sequential; we can also identify some parallel activities.
For the development of software, cost and human effort (team effort) are essential. There exist a number of algorithmic models that allow us to estimate the total cost, development time and effort required, in terms of person-months or person-years, for software development projects.
Distribution of cost and effort
Based on surveys of several projects, the percentage of total effort distributed among the phases is as follows:

Requirements phase        15%  }
Specification             10%  }  40%
Design                    15%  }
Coding                    20%     20%
Testing and maintenance   40%     40%

This is called the 40-20-40 rule of software development. It clearly demonstrates that the requirements and design phases, and testing and maintenance, are very important.
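The 40-20-40 rule gives a quick way to turn a total effort estimate into per-phase budgets. The sketch below simply applies the percentages from the table above to a hypothetical 100 person-month project.

```python
# Phase shares taken from the 40-20-40 rule discussed above.
PHASE_EFFORT = {
    "requirements": 0.15,
    "specification": 0.10,
    "design": 0.15,
    "coding": 0.20,
    "testing_and_maintenance": 0.40,
}

def distribute_effort(total_person_months: float) -> dict:
    """Split a total effort estimate across phases per the 40-20-40 rule."""
    return {phase: round(total_person_months * share, 1)
            for phase, share in PHASE_EFFORT.items()}

breakdown = distribute_effort(100)
print(breakdown["coding"])  # 20.0 person-months of a 100 person-month project
```

Note that coding receives only a fifth of the effort; the bulk goes to the phases before and after it, which is exactly the rule's message.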
Approximate relative cost of the phases of the software life cycle:

Requirements               2%
Specification (analysis)   5%
Design                     6%
Coding                     5%
Testing                    7%
Integration                8%
Maintenance                67%
Errors made during the requirements phase are the most costly to repair. In view of this, it is advisable to spend considerable effort and energy on the requirements phase rather than trying to remove errors during the time-consuming testing phase or, worse still, during maintenance.
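The economics of early error detection can be illustrated with cost-amplification factors: the later an error is found, the more it costs to fix relative to catching it at requirements time. The multipliers below are illustrative placeholders in the spirit of this discussion, not figures from the text.

```python
# Hypothetical cost-amplification factors, relative to a fix made during
# the requirements phase (factor 1). The exact numbers are illustrative.
FIX_COST_FACTOR = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "maintenance": 100,
}

def cost_to_fix(base_cost: float, phase_found: str) -> float:
    """Cost of repairing an error, scaled by the phase in which it is found."""
    return base_cost * FIX_COST_FACTOR[phase_found]

# An error that costs 100 units to fix at requirements time costs
# 10,000 units if it survives into maintenance under these factors.
print(cost_to_fix(100.0, "maintenance"))  # 10000.0
```

This kind of multiplier table is why the 67% maintenance share above is so alarming: much of it is repair of errors injected far earlier.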
According to Boehm (1987), successful projects followed a 60-15-25 distribution:
- 60% requirements engineering and design
- 15% implementation
- 25% testing
The message is clear: the longer you postpone coding, the earlier you are finished.
SOFTWARE PROCESS MODELS
A software process model is an abstract representation of the software process: a partially ordered set of activities undertaken to manage, develop and maintain a software system. It encompasses all the phases of the software development life cycle. The most elementary software process model is the Build-and-Fix model. Here, the entire system development often takes place in a rather ad hoc manner, relying on the skills and experience of the individual members performing the work. This may also be called an opportunistic approach.
Figure 1.2 Build-and-Fix Model (build a first version, then modify it until the client is satisfied; incorporate improvements, if any, until retirement)
This approach may work very well for small projects and student projects in a classroom. It is highly inappropriate for complex software projects where on-time delivery and high quality are expected. Besides, the cost of software development under this approach is greater than the cost of a properly specified and designed system.


Waterfall model
The waterfall model is generally attributed to Royce and is shown in the figure.
Figure 1.3 Waterfall Model (Requirements Gathering and Definition → Requirement Specification → Design → Implementation → Integration and Deployment → Maintenance, with verification and validation after every phase)
This model provides a classical framework for software development which accounts for the importance of requirements, design and the quality assurance (verification and validation, V&V) that needs to be done. Verification tells you whether the system meets the requirements; it is equivalent to the question "Are we building the system right?" and tries to assess the correctness of the transition to the next phase. Validation tells you whether the system meets the user's requirements: "Are we building the right system?" The model suggests that software development is to be done in sequential phases, though some parallel activities among phases can be identified. Before completing each phase, quality assurance must be done. The waterfall model places considerable emphasis on careful analysis before the system is actually built. Due attention must be given to the requirements phase, as it is the first step and at the same time very crucial.
All the requirements collected during this phase must satisfy all the customer's needs.

1. In the first phase, the system services, constraints and goals are identified by consultation with the users. These are converted into specifications and a Software Requirements Specification (SRS) document is prepared. This SRS document is the visible outcome of this phase.
2. In the design phase, the requirements are partitioned into hardware requirements and software requirements, and the overall system architecture is finalized.
3. In the implementation phase, through the design process, the requirement specifications are converted into a set of programs or program units. Each unit is tested separately using appropriate test cases designed from the requirements.
4. In the integration and system testing phase, the individual units are integrated and tested as a complete system to ensure that the software requirements are met. Suitable interfaces are designed across the program units so that complete integration is possible.
5. The operation and maintenance stage is the longest life cycle phase, as we have seen earlier in our discussions.
Maintenance involves correcting errors that were not discovered during the earlier phases. Major changes in the requirements and enhancement of services are also taken up during this phase. Sometimes, if these errors are serious, changes in the requirements may cause redesign, recoding, retesting and so on, which add to the overall software development cost.
One special feature of the waterfall model is that at the end of every phase there are visible outputs in the form of documents: the SRS, design documents, coding documents, the test plan document and the maintenance document. These documents are prepared after verification and validation at the end of every phase. In this sense, we can say that the process is visible. In spite of these advantages, there are some limitations of the waterfall model:
- Real projects rarely follow the sequential flow that the model proposes.
- At the beginning of most projects there is often a great deal of uncertainty about requirements and goals. The model does not accommodate this natural uncertainty very well.
- A working version of the system is available only at the end of the implementation and testing phases. This means the customers cannot use anything until the entire development is complete.
- Major design problems may not be detected until very late.
- It is a rigid and inflexible procedure for software development.
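The waterfall's defining property — no downstream phase starts until the previous phase clears its V&V gate — can be sketched as a strict pipeline. The `vv_pass` callable below is a stand-in for a phase review; it is an illustrative device, not part of the model's definition.

```python
# Phases in the order the waterfall model prescribes.
PHASES = ["requirements", "specification", "design",
          "implementation", "integration", "maintenance"]

def run_waterfall(vv_pass) -> list:
    """Run phases strictly in order; stop at the first phase whose
    verification-and-validation gate fails. `vv_pass` maps phase -> bool."""
    completed = []
    for phase in PHASES:
        if not vv_pass(phase):
            break              # rework needed; nothing downstream may start
        completed.append(phase)
    return completed

# If design fails its review, implementation onward never begins.
print(run_waterfall(lambda p: p != "design"))  # ['requirements', 'specification']
```

The sketch also makes the model's rigidity visible: a single failed gate stalls everything after it, which is precisely the limitation the bullet list above describes.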
1.2 RAPID PROTOTYPING
A rapid prototype is quickly designed and developed software that exhibits the key functionality of the specified product. A rapid prototype reflects the functionality that the client sees, such as input screens and reports, but may omit file updating and other internals, and may have areas that need improvement. The developer incorporates changes until both parties are convinced and satisfied that the needs of the customer are actually encapsulated in the rapid prototype. The rapid prototype may then be used as a basis for drawing up the specifications.
The purpose of the rapid prototype is to enable the client as well as the developer to agree as quickly as possible on what the software is supposed to do. If no agreement is reached, another prototype, a second version, has to be developed quickly; this may better satisfy the customer's needs. To achieve rapid development throughout the rapid prototyping process, very high-level languages such as Smalltalk, Prolog, Lisp and Java may be used. Another popular technique is to use hypertext. Some investigations also reveal that it is important to build a rapid prototype as early as possible in the object-oriented life cycle.
The rapid prototyping life cycle is given below.
Figure 1.4 Rapid Prototyping Process Cycle (rapid prototype → specification phase → design phase → implementation phase → integration phase → operation, with verification after each phase; changed requirements feed back into the cycle)
The major advantage of the rapid prototyping model is that development of the product proceeds in a sequential fashion, as in the waterfall model, but the feedback loops of the waterfall model are less likely to be needed, since the prototype has been validated through interactions with the client. Further, the specification document prepared after interaction with the client will be correct. Since the prototype is the product of a quick design, the developer may come to know of drawbacks in the design methodology and will have a chance to rectify them. The only requirement is that the developers must speed up the development of the prototypes and, in particular, the software development process.
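The build-show-refine loop at the heart of rapid prototyping can be sketched as follows. The "needs" sets and the one-change-per-round rule are hypothetical simplifications for illustration; real feedback rounds may address many items at once.

```python
def prototype_until_agreed(initial: set, client_needs: set, max_rounds: int = 10):
    """Refine a prototype until the client agrees it captures their needs.
    Each round, the client points out one missing need and the developer
    incorporates it; agreement is reached when nothing is missing."""
    prototype = set(initial)
    for rounds in range(max_rounds):
        missing = client_needs - prototype
        if not missing:
            return prototype, rounds   # agreement: basis for the specification
        prototype.add(sorted(missing)[0])  # incorporate one piece of feedback
    return prototype, max_rounds

proto, rounds = prototype_until_agreed(
    {"input screens"},
    {"input screens", "reports", "search"})
print(rounds)  # 2 feedback rounds were needed
```

The returned prototype is what the text calls the basis for drawing up the specifications: the agreed, client-validated picture of the system's externally visible behavior.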
1.3 INCREMENTAL MODEL
In order to keep the cost of software development under control and at the same time
to have interaction with the client during the software development life cycle, it is better to

develop the software in an incremental fashion. The product is designed, implemented, integrated and tested as a series of incremental blocks, where each block consists of code pieces from various modules interacting together to provide a specific functionality. For example, to build an operating system we can build the scheduler first and then the file management.
The process stops when the product achieves the functionality needed to meet all customer requirements.
Incremental development can also be used to control the over-functionality syndrome. Since users are often unable to formulate their needs, they tend to demand more from the developers, who may then spend considerable effort realizing features that are not really needed.
With the incremental approach, attention is focused on a system with essential features only; additional functionality is added if and when needed. The difficulties of managing projects are greatly reduced by the incremental process model.
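The increment-until-requirements-met loop described above can be sketched as follows; the operating-system increments echo the scheduler/file-management example in the text, and the names are illustrative only.

```python
def develop_incrementally(increments: list, required: set) -> list:
    """Grow the product one functional increment at a time; each release is
    integrated, tested and usable on its own. Stop once all customer
    requirements are covered, avoiding over-functionality."""
    delivered = set()
    releases = []
    for increment in increments:
        delivered.add(increment)            # design, implement, integrate, test
        releases.append(sorted(delivered))  # each release is deliverable
        if required <= delivered:
            break                           # all customer requirements met
    return releases

releases = develop_incrementally(
    ["scheduler", "file management", "networking"],
    required={"scheduler", "file management"})
print(len(releases))  # 2 releases sufficed; "networking" was never built
```

The early exit is the point: the unneeded "networking" increment is never started, which is exactly how incremental development controls the over-functionality syndrome.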
1.4 RAPID APPLICATION DEVELOPMENT
Rapid Application Development (RAD) has a lot in common with incremental models. It focuses on user involvement, prototyping, reuse, the use of automated tools and small development teams. The main difference is that it fixes a time frame within which the activities are to be completed. Here, contrary to other development models, the time frame is decided first and then the project tries to realize the required functionality within that time frame. Some trade-offs need to be made in order to realize the product within the stipulated time period.
The RAD life cycle consists of four phases:
Requirements Planning
User Design
Construction
Cutover
For smaller projects, requirements planning and user design can be combined in view of their commonality. Since the focus is on user involvement, two main techniques, Joint Requirements Planning (JRP) and Joint Application Design (JAD), are used as part of RAD. During JRP the requirements finalized with the end users are prioritized, since it is most likely that not all of them will be implemented in the first version of the system. This requirements prioritization is known as triage. In RAD, the triage process is used to make sure that the most important requirements are addressed first.
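Triage can be sketched as ranking requirements by priority and keeping only those that fit the fixed time frame. The "must/should/could" labels below are hypothetical priority names of my own, not part of any formal RAD vocabulary.

```python
# Hypothetical priority labels, lower rank = more important.
RANK = {"must": 0, "should": 1, "could": 2}

def triage(requirements: list, capacity: int) -> list:
    """Pick the top-priority requirements that fit the first release.
    `requirements` is a list of (name, priority) pairs; `capacity` is how
    many the fixed time frame allows."""
    ordered = sorted(requirements, key=lambda r: RANK[r[1]])
    return [name for name, _ in ordered[:capacity]]

reqs = [("reporting", "could"), ("login", "must"),
        ("audit trail", "should"), ("data entry", "must")]
print(triage(reqs, 2))  # ['login', 'data entry']
```

Everything cut by `capacity` is not discarded, only deferred to a later iteration, which is how RAD honors the time box without abandoning requirements.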
In JAD, an initial design of the system is first finalized through mutual agreement with the end user, and then a prototype is built for evaluation by the end user. The system is built by a SWAT team (Skilled With Advanced Tools) once the required functionality has been finalized. The SWAT team decides which functionality to implement in each iteration and constructs a series of evolutionary prototypes.
During the cutover phase, the final testing of the system takes place, users are trained and the system is installed.
1.5 THE SPIRAL MODEL
In the preceding sections we have seen that an iterative process is ideal for software development, because there is scope for improvement after each iteration; the requirements get refined at every iteration. Further, even though the waterfall model is a well-accepted software development process model, its inherent disadvantages mean that complex software development projects need more sophisticated models, ones that take care of the risks associated with large, complex software projects.
The spiral model was designed to include the best features of the waterfall and prototyping models and to introduce a new component, namely risk assessment. The term spiral describes the process that is followed as the development of the system takes place. As in the prototyping model, an initial version of the system is developed and then repeatedly modified based on the inputs received from the customer after evaluation. Unlike the prototyping model, the development of each version of the system is meticulously designed using the steps of the waterfall model. With each iteration around the spiral (beginning at the center and working outward), progressively more complete versions of the system are built.
Figure 1.5 Spiral Model
Risk assessment is included as a step in the development process as a means of evaluating each version of the system to determine whether or not development should continue. If the developer thinks that any identified risks are too great and overshadow the entire software development project, the project may be halted. For example, if a substantial increase in cost or project completion time is identified during risk analysis at one stage, the developer or the customer may decide that it does not make sense to continue the project, since the increased cost and extended time frame may not be acceptable to the customer/client. The project may then be impractical or infeasible.
The spiral model is made up of the following steps.
Project Objectives: The objectives and constraints are identified, and a set of alternative approaches is explored against these objectives and constraints.
Risk Assessment: The possible alternatives and associated risk factors are analysed thoroughly. Resolutions of the risks are evaluated and weighed when considering whether to continue the project. Prototyping may also be used to examine the technical feasibility of the product.
Engineering and Production: Once the detailed requirements are finalized, the software product is developed based on the waterfall model, after the risk analysis.
Planning and Management: The product so developed is given to the customer/client for evaluation. This provides another opportunity for the customer to look into the functionality of the product and find out whether it meets their needs/requirements. The customer then gives feedback to the developer.
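The spiral's distinctive risk gate — assess before each cycle, halt if the risk overshadows the project — can be sketched as a loop over iterations. The numeric risk scores and threshold are illustrative stand-ins for the qualitative judgment a real risk analysis produces.

```python
def spiral(iterations: list, risk_threshold: float):
    """Walk the spiral outward. `iterations` is a list of
    (version_name, assessed_risk) pairs; if an iteration's assessed risk
    exceeds the threshold, the project is halted rather than continued."""
    built = []
    for version, risk in iterations:
        if risk > risk_threshold:
            return built, "halted"     # risk overshadows the project
        built.append(version)          # engineer this version, plan the next
    return built, "completed"

versions, status = spiral(
    [("v0.1 prototype", 0.2), ("v0.5 partial", 0.4), ("v1.0 full", 0.9)],
    risk_threshold=0.6)
print(status, versions)  # halted ['v0.1 prototype', 'v0.5 partial']
```

Note that halting still leaves the earlier versions in hand; the spiral's partial products are not wasted even when the project stops, unlike a waterfall project cancelled mid-stream.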
Observations on the spiral model:
1. It is difficult to assess its strengths and weaknesses, as it is a relatively new process model.
2. The risk assessment component of the spiral model provides both developers and customers with a measuring tool that earlier process models do not have.
3. The practical nature of this tool helps make the spiral model a more realistic process model compared to others.
1.6 THE REUSE MODEL
The basic premise behind the Reuse model is that systems should be built using existing (reusable) components, as opposed to custom-building new components. It is well known that object-oriented designs are more suitable for developing reusable components; hence the Reuse model is clearly meant for object-oriented computing environments.
Within the Reuse model, libraries of software modules are maintained that can be copied for use in any system. These components are of two types:
- Procedural modules
- Database modules
When building a new system, the developer borrows a copy of a module from the system library and plugs it into a function or procedure. If a needed module is not available, the developer develops a new one and stores a copy in the system library for future use.
The Reuse model consists of the following steps.
Definition of requirements: Initial system requirements, which may be a subset of the complete system requirements, are collected in order to identify existing software components.
Definition of objects: The objects which can support the necessary system components are identified.
Collection of objects: The system libraries are searched to determine whether the needed objects are available. Copies of the needed objects are downloaded from the system.
Creation of customized objects: Objects that have been identified as needed but are not available in the library are created.
Prototype assembly: A prototype version of the system is developed using the existing components, modified where necessary.
Prototype evaluation: The prototype is evaluated to determine whether it adequately meets the customer's needs and requirements.
Requirement refinement: The requirements are further refined to come out with a more detailed version of the product.
Object refinement: The objects are refined to reflect the changes in the requirements.
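The library lookup at the core of the Reuse model — borrow a copy if the component exists, otherwise build it and store it for next time — can be sketched as follows. The library contents and module names are hypothetical.

```python
# A toy system library mapping component names to (placeholder) modules.
library = {"login": "<login module v1>", "report": "<report module v1>"}

def obtain_module(name: str, build=lambda n: f"<{n} module v1>"):
    """Return a component, preferring reuse over new development."""
    if name in library:
        return library[name], "reused"   # borrow a copy from the library
    module = build(name)                 # develop a new component
    library[name] = module               # store a copy for future systems
    return module, "created"

_, how1 = obtain_module("login")
_, how2 = obtain_module("billing")
print(how1, how2)            # reused created
print("billing" in library)  # True: now available for future reuse
```

The side effect on `library` is the essential point: every newly created component enlarges the pool available to the next project, so reuse compounds over time.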
A general criticism of the Reuse model is that its use is limited to object-oriented development environments.
The selection of an appropriate process model depends primarily on two factors.
Organizational Environment
Nature of the application
Four categories of environments can be identified based on the types of applications.
Unchanging environment: The requirements are unchanging for the lifetime of the system, for example scientific algorithms. In these applications the requirements are unambiguous and comprehensive. In such situations a waterfall model or spiral model may give good results.
Turbulent environment: The organization is undergoing constant change and the system requirements are always changing. Many business systems fall into this category. For these applications, the prototyping model or the Reuse model could be the choice.
Uncertain environment: The requirements of the system are unknown or uncertain. It is not possible to define the requirements accurately ahead of time because the situation is new. In such situations, perhaps a process model based on Artificial Intelligence may be ideal.
Adaptive environment: The environment may change in reaction to the system being developed, thus prompting a changed set of requirements. Teaching systems and expert systems fall into this category. In this type of situation, the process methodology must allow for a straightforward introduction of new rules.
1.7 UNIFIED PROCESS MODEL
Unified Process is a generic process framework that can be specialized for a very
large class of software systems, for different application areas, different types of
organizations, different competence levels and different project sizes.
The unified process is a component-based method, which implies that the software
system being built is made up of software components that can be reused and
interconnected via well-defined interfaces.
The unified process uses UML (Unified Modelling Language) when preparing all
blueprints of the software system. UML is an integral part of the unified process.
The essence of unified process can be captured in three keywords.
Use-case driven
Architecture Centric
Iterative and incremental
1.7.1 Use Case Driven
A use case is a piece of functionality in the system that gives a user a result of value.
The term user is very generic and may denote a human user or some other system. The
user interacts with the system to get some required results. All possible interactions
can be considered as a set of scenarios in a use case. By means of these interactions it
is possible to know the functional requirements; thus use cases capture functional
requirements. All the use cases together make up the use case model, which describes
the complete functionality of the system.
A Use case diagram displays the relationship among actors and use cases.
An actor represents a user or another system that will interact with the system you
are modeling. A use case is an external view of the system that represents some
action the user might perform in order to complete a task.
During the initial stage of a project most use cases should be defined, but as the
project continues more might become visible.
Use cases, stated simply, allow the description of sequences of events that, taken
together, lead to a system doing something useful.
During the 1990s use cases became one of the most common practices for capturing
functional requirements.
Use cases treat the system as a black box; the interactions with the system, including
system responses, are perceived from outside the system.
What the system must do is more important than how it is to be done.
Let us consider a user placing an order with a sales company. A simple use case
diagram is given below.
Figure 1.6 A Typical Use Case Diagram (use cases: Browse Catalogue and Select
Item, Call Sales Person, Give Shipping Info, Give Payment Info, Get Confirmation)
Use cases are not just a tool for specifying the requirements of a system. They also
drive its design, implementation and testing; that is, they drive the development process.
Based on the finalized use cases, the developers create a series of design and
implementation models that realize the use cases. The developers review each successive
model and test the implementation to make sure that the components of the implementation
model correctly implement the use cases.
Use case driven means that the development process follows a flow that is derived
from the use cases. It is to be noted that use cases are developed in tandem with the

system architecture. In other words, the use cases drive the system architecture and
the system architecture in turn influences the selection of use cases.
In view of this, the system architecture and the use cases mature during the entire
life cycle.
1.7.2 Architecture Centric
The software architecture is the description of the software components from which
the system is built and of the interactions among these components. It is an important
aspect of software development and covers both the static and dynamic aspects of the
system. The architecture is developed based on the needs of the users and other
stakeholders and on use cases.
The Software Architecture is also influenced by other factors such as.
Platform on which it will run
Database Management System
Operating System
Protocols for Network communication
Availability of reusable components
Deployment Considerations
Legacy Systems
Non-functional requirements
Now the question is: how are these use cases related to the architecture?
Every product has both function and form. The architecture, or form, should be designed
in such a way that it is flexible and scalable. In order to achieve this, the architecture
should work from a general understanding of the key use cases.
As the use cases are specified and mature, more of the architecture is developed.
This process continues until the architecture becomes stable.
1.7.3 Iterative and Incremental
As I have already mentioned, in order to solve a complex problem the best strategy
is divide and conquer. Thus a big project can be divided into several mini projects.
Each mini project is an iteration that results in an increment. The iterations must be
controlled in an effective manner and each iteration needs to be executed in a planned way.
Developers base the selection of what is to be implemented in an iteration upon two
factors.
1. Group of use cases that extend the usability of the product
2. Most important risks associated with iterations.
In every iteration, the developers identify and specify the relevant use cases, create a
design using the selected architecture, implement the design in components and verify that
the components satisfy the use cases. If an iteration meets the stipulated goals, we move
on to the next iteration. If it does not, the developers must revisit their previous
approaches and adopt a new one. In order to achieve better results in the development,
a project team should judiciously select the iterations required to reach the project
goal. In this way the iteration process is controlled.
Such controlled iteration processes have several benefits.
Controlled iteration reduces the cost risk.
It reduces the risk of not getting the product to market on the planned schedule.
Controlled iteration speeds up the work of the whole development effort.
Controlled iteration acknowledges a reality often ignored: that user needs and the
corresponding requirements cannot be fully defined up front.
In summary, the architecture provides the structure within which to guide the work of
the iterations, whereas the use cases define the goals and drive the work of each iteration.
1.7.4 Life cycle of the unified process
The life cycle consists of four phases
1. Inception
2. Elaboration
3. Construction
4. Transition
In the inception phase, the first phase of the software life cycle, the seed idea for the
development is brought to the point of being sufficiently well founded to warrant
entering the elaboration phase.
In the elaboration phase, the software architecture is defined.
In the construction phase the software is brought from an executable architectural
baseline to the point where it is ready to be transitioned to the user community.
In the transition phase, the software is delivered into the hands of the user community.
Each cycle results in a new release of the system, consisting of the source code embodied
in components that can be compiled and executed, plus manuals and associated deliverables.
The finished product includes the requirements, use cases, non-functional specifications
and test cases. It also includes the architecture and the artifacts modeled in the unified
modeling language. Since requirements often change due to several factors, the developers
need to undertake new cycles. To carry out the different cycles efficiently, the developers
need all the essential ingredients of the product, such as:
Use Case Model
Analysis Model
Design Model
Deployment Model
Implementation Model
Test Model
All the models are related in the following fashion.
Figure 1.4 Models of the Unified Process
Each cycle requires some time period for completion, and this period is divided into
the four phases. Through a sequence of models, stakeholders visualize what goes on in
these phases. Within each phase, managers or developers may break the work down
further into iterations and the ensuing increments. Each phase terminates in a milestone.
Milestones also help management and developers to monitor the progress of the
work as it passes through the four phases. By keeping track of the time and effort spent
on each phase, we develop data which is useful in estimating time and staff requirements.
We can identify five workflows: requirements, analysis, design, implementation and
test, and each workflow is carried out in each phase. For example, the quantum of work
for the requirements workflow is largely confined to the first two phases. The
approximate curve for the requirements work is shown in Figure 1.8.
Figure 1.8: The Five Work Flows
The curves approximate the extent to which the workflows are carried out in each
phase.
During the inception phase, a good idea about the product and the business case for
the product emerges. The use case model is used to identify the vision of the end product.
Some important risks are also identified and prioritized, and the elaboration phase is
planned in detail.
In the elaboration phase the system architecture is designed. The architecture is expressed
as views of all the models of the system, which together represent the whole system.
This implies that there are architectural views of the use case model, the analysis model,
the design model, the implementation model and the deployment model. The outcome of
this phase is the baseline architecture. At the end of the elaboration phase, the project
manager is in a position to plan all activities related to management. During the
construction phase this baseline grows to become a full-fledged system.
The transition phase covers the beta release of the product, customer training, correction
of defects after delivery and configuration control.
Summary: The unified process establishes a framework that integrates multifaceted
process elements such as cycles, phases, workflows, risk mitigation, quality control,
project management and configuration control.
1.8 AGILE PROCESSES AND MODELS
The process models that we have discussed in previous sections are considered to be
heavyweight processes with rigid, straitjacketed approaches. Further, much emphasis is
laid on documentation. Agile processes, which are lightweight processes, mainly focus on a faster

software development process. For the past 25 years a large number of different approaches
to software development have been introduced, of which only a few have survived to be
used nowadays. The concept of agile software development methods has attracted a lot
of interest among practitioners and lately also in academia. The introduction of the extreme
programming method, known as XP, has been widely acknowledged as the starting point
for the various agile software development approaches. There are also other methods
introduced since then that appear to belong to the same family of methodologies, e.g.
Crystal methods, Feature-Driven Development and Adaptive Software Development.
The initial experience reports from industry are predominantly positive; however, no
adequate data is available on the success of these methodologies.
Despite the high interest in the subject, no clear agreement has been reached on how to
distinguish agile software development from traditional approaches. The boundaries have
thus not been clearly established.
1.8.1 Agile Methodologies
All the methodologies described earlier in this chapter are based on the premise that
any software development process should be predictable and repeatable. One of the
major criticisms of the earlier methods is that they place more emphasis on procedures
and documentation. They are considered heavyweight and rigorous, with excessive
emphasis on structure. Agile software development methods try to remove these
restrictions.
McCauley (2001) argues that the underlying philosophy of process-oriented methods
is that the requirements of the software project are completely locked in and frozen before
design and software development commence. As such approaches are not always
feasible, there is also a need for flexible, adaptable and agile methods which allow the
developers to make late changes in the specifications. The proponents of agile software
development argue that, software development being essentially a human activity, there
will always be variations in processes and inputs, and the model should be flexible enough
to handle these variations. If a model cannot handle this with flexibility, there can be a lot
of wasted effort, or the final product may not meet the customer's needs. Hence the agile
methodologies advocate the principle "build short and build often".
That is, the given project is subdivided into sub-projects, and each sub-project is
developed and integrated into the already delivered system. In this fashion the customer's
continuous interaction is guaranteed and the customer is sure of getting useful and usable
systems. The sub-projects are chosen in such a way that they have short delivery cycles,
usually of the order of 3 to 4 weeks. The development team also gets continuous feedback.
Thus the focal values of agile methods are
Individuals and Interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan.
There are 4 important aspects to be considered in the agile software development.
First, the agile movement emphasizes the relationship and community of software
developers and the human role reflected in the contracts, as opposed to institutionalized
processes and development tools. Second, the vital objective of the software team is to
continuously turn out tested working software. The developers are urged to keep the code
simple, straight and technically as advanced as possible, thus lessening the documentation
burden to an appropriate level. Third, the relation and cooperation between the developers
and the clients is given preference over strict contracts. From a business point of view,
agile development is focused on delivering business value immediately as the project starts,
thus reducing the risks of non-fulfillment of the contract.
Fourth, the developer group, comprising both software developers and customer
representatives, should be well informed, competent and authorized to consider possible
adjustment needs emerging during the development process life cycle. In conclusion, we
can say that a software development method becomes agile when the software
development is incremental (small software releases with rapid cycles), co-operative
(customer and developers working constantly together with close communication),
straightforward (the method itself is easy to learn and to modify) and adaptive (able to
make last moment changes).
1.8.2 Agile Models
The more popular agile software development models are
Scrum
Dynamic Systems Development Method (DSDM)
Crystal Methods
Feature Driven Development (FDD)
Extreme Programming (XP)
SCRUM: It is a project management framework. It divides the development into
short cycles called sprints, in which a specified set of features is delivered. It
advocates daily team meetings for co-ordination and integration.
DSDM: It is characterized by nine principles
1. Active User Involvement
2. Team Empowerment
3. Frequent Delivery of Products
4. Fitness of Business Purposes
5. Iterative and Incremental development
6. All changes during development are reversible
7. Baselining of requirements at a high level
8. Integrated testing
9. Collaboration and Co-operation between stakeholders
CRYSTAL METHODOLOGIES: These are a set of configurable methodologies that
focus on the people factors in development. The configuration is carried out based on
project size, criticality and objectives. Some of the names used for the methodologies
are Clear, Yellow, Orange Web, Red etc.
FDD: It is a short-iteration framework for software development. It focuses on building
an overall object model, building a features list, planning by feature, designing by feature
and building by feature.
XP: This methodology is probably the most popular among the agile methodologies.
It is based on three important principles, viz. test first, continuous refactoring and pair
programming. One of the important concepts popularized by XP is pair programming:
code is always developed in pairs; while one person is keying in the code, the other
person reviews it.
A detailed discussion of the agile models is beyond the scope of this book.
1.9 SOFTWARE COST ESTIMATION
1.9.1 Introduction
We have described several process models for software development; developing
software is a project by itself. Generally, managing a software project requires meticulous
planning and execution by the team concerned. Whatever may be the business need that
the software project is addressing, structured and planned execution is of great importance.
As we have already mentioned, software is a knowledge product and is different from a
conventional physical product as far as development is concerned. In a similar way,
software projects are very different from traditional projects such as constructing a building
or sending a space rocket to Mars. In traditional projects we can see the progress of the
work done, whereas in software projects, until the code modules are complete in all
respects and sent for testing, it is practically difficult to determine how much real progress
is being made. Even after the coding of the modules is complete, the testing and integration
efforts may reveal problems that require extensive redesign and recoding. A well defined
project will have the following attributes:
Goal to be achieved
Resource required
Quality and performance
Costs involved
Time frame to complete
A good project management process is required to bind these attributes together.
The management process includes the key factors given below.
A good project planning process.
Well structured requirement specification document
Work breakdown structure
Allocation of adequate resources to the right job
Cost of project execution
Time schedule for delivery
The success of a software project mainly depends on how well you are able to
estimate the parameters of the project, such as cost and time schedules. There are several
methods for estimating the cost, effort and duration of a project. Understanding such
estimation methods is not enough; we should be able to apply this estimation
knowledge appropriately to different situations during the life cycle phases of the project.
The main objective of software project planning is to provide a framework that enables
the manager to develop reasonable estimates of resources, cost and schedule. In view of
the uncertainties, the software team must always update the estimates in order to have
effective control over the project.
1.9.2 Software cost estimation
The bulk of the cost of software development is due to human effort, and most
cost estimation methods focus on this aspect and give estimates in terms of person-months.
Accurate software cost estimates are crucial to both developers and customers. The
cost estimates must be satisfactory to the manager who executes the project and to the
client/customer, who is one of the important stakeholders. There should not be any
overestimate or underestimate, as either affects the organization. Overestimating may
result in too many resources being committed to the project or, during contract bidding,
in not winning the contract. Underestimating may result in winning the contract, but the
organization may incur a loss in executing the project and the manager of the project may
be forced into cost overruns.
Accurate cost estimation is important because
It helps the management to classify and prioritize development projects with respect
to the overall goals of the organization.
It can help to calculate the resources required to execute the project.
It can be used to assess the impact of changes and support replanning.
It facilitates the manager in having effective control over the project, since the
resources are better matched to real needs.
Customers will also be satisfied since the actual development costs may match
with the estimated costs.
Software cost estimation involves the determination of one or more of the following
- Effort (usually in person-months)
- Project duration (in Calendar time)
- Cost ( in Rupees)
Most cost estimation models attempt to generate an effort estimate, which can then be
converted into project duration and cost. Although effort and cost are closely related,
they are not necessarily related by a simple transformation function.
Effort is often measured in person-months of the programmers, analysts and project
managers. This effort estimate can be converted into a rupee cost figure by calculating an
average salary per unit time of the staff involved and then multiplying it by the estimated
effort required.
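As a sketch of this conversion, the effort figure and the salary rate below are hypothetical, chosen only for illustration:

```python
def project_cost(effort_person_months, avg_salary_per_month):
    """Convert an effort estimate (person-months) into a cost figure
    by multiplying it by the average salary per person-month."""
    return effort_person_months * avg_salary_per_month

# Hypothetical figures: 48 person-months at Rs. 60,000 per person-month
print(project_cost(48, 60_000))  # 2880000, i.e. Rs. 28.8 lakh
```

The same multiplication can of course use different rates per staff category; a single averaged rate is the simplest form of the conversion described above.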
There is a lot of confusion among practitioners on three fundamental issues:
Which software cost estimation model should be used?
Which software size measurement should be used: lines of code (LOC), function
points (FP) or feature points?
What is a good estimate?
Most practitioners rely on experience and expert judgment for cost estimation,
even though these methods may have the problems given below.
The method of deriving an estimate is not explicit and is not based on any data.
It is difficult to find highly experienced estimators for every new project.
Generally, the relationship between cost and system size is not linear. Cost tends
to increase exponentially with the size. The expert judgment method is appropriate
only when the sizes of the current project and past projects are similar.
Budget manipulations by management aimed at avoiding cost overruns make
experience and data from previous projects questionable.
In the last three decades many quantitative software cost estimation models have
been developed. These are broadly classified as empirical models and analytical models.
1.9.3 Empirical cost estimation models
Basically, these empirical models are based on data collected over a sample of
projects.
In view of this, a given estimation model is not appropriate for all classes of software
or all development environments. The results obtained from these empirical models
must be used judiciously.
A typical estimation model is derived from regression analysis on data collected from
past software projects. The structure of an empirical model is given by

    E = A + B x (e_v)^C

where A, B and C are empirical constants, E is the effort in person-months and e_v is
the estimation variable (either LOC or FP). Most cost estimation models are based on
a size measure such as LOC or FP, obtained from size estimation. The accuracy of the
size estimation directly impacts the accuracy of the cost estimation.
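The effort equation can be sketched in code as follows. Note that the constants A, B and C here are made-up values chosen purely for illustration; real values must be calibrated by regression analysis on an organization's historical project data:

```python
def estimated_effort(size, a, b, c):
    """Empirical effort model E = A + B * (e_v ** C),
    with E in person-months and size (e_v) in LOC or FP."""
    return a + b * (size ** c)

# Hypothetical calibration constants (not from any published model)
A, B, C = 5.0, 0.73, 1.16

# Effort for a project of 33.4 KLOC (size expressed here in KLOC)
effort = estimated_effort(33.4, A, B, C)
print(round(effort, 1))  # person-months
```

An exponent C greater than 1 reflects the superlinear growth of effort with size noted earlier: doubling the size more than doubles the size-dependent part of the effort.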
Even though the common size measurements have their own drawbacks, an organization
can make good use of any one of them as long as a consistent counting method is used.
Software cost estimation has been a major difficulty in software development for various
reasons. The reasons identified are given below.
Lack of a historical database of cost measurements
Software development involves several interrelated factors which are not well
understood.
Lack of trained estimators with necessary expertise
Little penalty is often associated with a poor estimate.
Cost estimation is an important aspect of the planning process. For example, in the
top-down planning approach the cost estimate is used to derive the project plan:
1. The project manager develops a characterization of the overall functionality, size,
process, environment, people and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software
cost estimation model.
3. The project manager partitions the effort estimate into a top-level work breakdown
structure. He also partitions the schedule into major milestone dates and determines
a staffing profile, which together form a project plan.
Having discussed several issues of cost estimation, we should understand the basic
steps of the cost estimation process:
1. Establish cost estimating objectives.
2. Generate a project plan for the required data and resources.
3. Pin down the software requirements, as changes in requirements may cause
additional costs.
4. Work out as much detail about the software system as feasible.
5. Use several independent cost estimation techniques to capitalize on their combined
strengths.
6. Compare the different estimates and iterate the estimation process during software
development.
7. After the project has started, monitor its actual cost and feed the results back to
project management.
1.9.4 Software Sizing
The software size is the most important factor that affects the software cost.
The accuracy of a software project estimate depends on several factors:
1. The degree to which the planner has properly estimated the size of the product to
be built.
2. The ability to transform the size estimate into human effort, project duration and
project cost.
3. The degree to which the project plan reflects the abilities of the software team.
4. The stability of the software requirements.
The lines of code and function points are the most popular size measures.
Line of Code: This is the number of lines of the delivered source code of the software,
excluding comments and blank lines, and is commonly known as LOC.
Even though LOC is programming-language dependent, it is the most widely used
software size metric. Since most of the cost models depend on LOC, the estimates need
to be as accurate as possible. Now the question is whether it is possible to calculate
the exact lines of code before the completion of the project. The answer is no. Estimating
the code size of a program before it is actually built is almost as hard as estimating the
cost of the program.
However, there are ways to estimate the code size:
1. Based on archival data available on completed similar projects.
2. Based on expert judgement.
For relatively similar projects, the estimate can be based on the data available on
completed projects. If you encounter an entirely new project for which you do not have
any data available, then the code size is estimated using expert judgement together with
the PERT
(Project Evaluation and Review Technique) technique. It involves expert judgement of
three possible code sizes: S_o (optimistic LOC), S_m (most likely LOC) and S_p
(pessimistic LOC). The expected size S is computed as the weighted average

    S = (S_o + 4 S_m + S_p) / 6        ... (1.1)
If you look at the equation, more weightage is given to the most likely value. Basically
we assume that the probability of the actual size falling outside S_o and S_p is very small.
If the software project has n components, then the estimate of the code size for each
component is calculated by equation 1.1; the sum of the code sizes for all n components
gives the expected code size for the software.
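As a sketch, the per-component application of equation 1.1 and the summation look like this in code; the component figures used here are those of the CAD example in Table 1.2 below:

```python
def expected_loc(s_opt, s_likely, s_pess):
    """PERT-style weighted average: S = (S_o + 4*S_m + S_p) / 6."""
    return (s_opt + 4 * s_likely + s_pess) / 6

# (optimistic, most likely, pessimistic) LOC per function, from Table 1.2
components = {
    "User Interface Control":   (1800, 2400, 2650),
    "2D Geometric Analysis":    (4100, 5200, 7400),
    "3D Geometric Analysis":    (4600, 6900, 8600),
    "Database Management":      (2950, 3400, 3600),
    "Computer Graphic Display": (4050, 4900, 6200),
    "Peripheral Control":       (2000, 2100, 2450),
    "Design Analysis":          (6600, 8500, 9800),
}

total = sum(expected_loc(*sizes) for sizes in components.values())
print(round(total))  # about 33,400; Table 1.2 lists rounded rows totalling 33,360
```

The small difference from the table's total comes only from the per-row rounding used in the table.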
ILLUSTRATIVE EXAMPLE (Pressman):
The following major software functions are identified for a computer-aided design
(CAD) database design.
User Interface and control facilities
Two Dimensional Geometric Analysis
Three Dimensional Geometric Analysis
Database Management
Computer Graphic Display Facilities
Peripheral Controls
Design Analysis Modules
Based on the method described earlier the estimation table is given as follows:
Table 1.2 Estimation of LOC

Function                     S_o     S_m     S_p     Expected (S)
User Interface Control       1800    2400    2650    2340
2D Geometric Analysis        4100    5200    7400    5380
3D Geometric Analysis        4600    6900    8600    6800
Database Management          2950    3400    3600    3350
Computer Graphic Display     4050    4900    6200    4950
Peripheral Control           2000    2100    2450    2140
Design Analysis              6600    8500    9800    8400
Total LOC                                            33360

FUNCTION POINTS: This is a measurement based on the functionality of the
program and was first introduced by Albrecht. The FP measure can be used to
1. Estimate the cost or effort required to design, code and test the software.
2. Predict the number of errors that will be encountered during testing.
3. Forecast the number of components and/or the number of projected source lines
in the implemented system.
In order to calculate FP, five measurable information domain characteristics of the
software and software complexity values are considered.
The five information domain values are
1. User input types: Data or control user input types
2. User Output types: Output data types to the user that leaves the system
3. Inquiry types: Interactive inputs requiring a response.
4. Internal file types: Files (Logical groups of Information) that are used and shared
inside the system.
5. External file types: Files that are passed or shared between the system and other
systems.
Each of these types is individually assigned one of three complexity levels, 1. simple,
2. medium or 3. complex, and given a weighting value W_ij that varies from 3 (simple)
to 15 (complex).
The unadjusted function point count (UFC) is given as

    UFC = sum(i=1 to 5) sum(j=1 to 3) N_ij x W_ij

where N_ij and W_ij are respectively the number and the weight of the types i with
complexity j. To compute FP, the following relationship is used:

    FP = UFC x [0.65 + 0.01 x sum(i=1 to 14) F_i]

where F_i (i = 1 to 14) are value adjustment factors (VAF) based on the responses to
the following questions.
1. Does the system require reliable backup and recovery?
2. Are specialized data communications required to transfer information to or from
the application?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?
8. Are the logical files updated on-line?
9. Are the inputs, outputs, files or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and for ease of use by the user?
Based on experience, the answers to these questions are given as ratings on a 0 to 5
scale as follows:
0 (No influence), 1 (Incidental), 2 (Moderate), 3 (Average), 4 (Significant), 5 (Essential)
It is to be noted that the constant values and the weighting factors that are applied to
the information domain values are determined empirically.
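A sketch of the FP computation follows. The weight table uses commonly published Albrecht-style values (spanning the 3-to-15 range mentioned above), and the counts and ratings are invented purely for illustration:

```python
# Weight table: rows = information domain type, columns = complexity
# (simple, medium, complex). Commonly published Albrecht-style values;
# treat them as illustrative assumptions.
WEIGHTS = {
    "inputs":         (3, 4, 6),
    "outputs":        (4, 5, 7),
    "inquiries":      (3, 4, 6),
    "internal_files": (7, 10, 15),
    "external_files": (5, 7, 10),
}

def function_points(counts, vaf_ratings):
    """counts: {type: (n_simple, n_medium, n_complex)};
    vaf_ratings: the 14 value adjustment factors, each rated 0..5."""
    assert len(vaf_ratings) == 14
    # UFC = sum over the 5 types and 3 complexity levels of N_ij * W_ij
    ufc = sum(n * w
              for t, ns in counts.items()
              for n, w in zip(ns, WEIGHTS[t]))
    # FP = UFC * [0.65 + 0.01 * sum(F_i)]
    return ufc * (0.65 + 0.01 * sum(vaf_ratings))

# Invented counts for a small hypothetical system
counts = {
    "inputs":         (6, 2, 0),
    "outputs":        (4, 1, 0),
    "inquiries":      (3, 0, 0),
    "internal_files": (2, 0, 0),
    "external_files": (1, 0, 0),
}
ratings = [3] * 14  # every adjustment factor rated "average"
print(round(function_points(counts, ratings), 2))  # 80.25
```

Here the UFC works out to 75 and the value adjustment multiplier to 0.65 + 0.01 x 42 = 1.07, giving 80.25 function points.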
1.10. SOFTWARE PROJECT MANAGEMENT AND PLANNING
Present-day software engineers are subject to a lot of pressure due to schedule slippages,
cost overruns and delivery schedules. Several reasons may be attributed to these pressures.
Part of the pressure comes from arbitrary and sometimes unrealistic deadlines or schedules
that are established by less experienced professionals and pursued by another set of
professionals who do not have enough experience. To avoid or reduce some of these
pressures, systematic planning and scheduling of activities are required. We should spend
a considerable time getting our act together: software project planning really helps the
managers and practitioners to have effective control over the entire project.
The first step of software project planning, namely estimation, has already been discussed
in the previous sections. As already emphasized, estimation provides the manager
with the information necessary to complete the remaining project planning activities, such
as
- Risk analysis
- Project scheduling
besides several management activities.
Then the question arises: what is management? Management involves the following
activities.
- Planning: deciding what is to be done
- Organizing: making arrangements
- Staffing: selecting the right people for the job
- Directing: giving instructions
- Monitoring: checking on progress
- Controlling: taking action to remedy hold-ups
- Innovating: coming up with new solutions
- Representing: liaising with users and other stakeholders
While performing these management activities for several software projects, the most
common challenges identified through a survey on several projects are as follows.
- Coping with deadlines (85%)
- Coping with resource constraints (83%)
- Communicating effectively among task groups (80%)
- Gaining commitment from team members (74%)
- Establishing measurable milestones (70%)
- Coping with changes (60%)
- Working out project plan agreement with their team (57%)
- Gaining commitment from management (45%)
- Dealing with conflict (42%)
- Managing vendors and sub-contractors (38%)
If you carefully analyze these results with their respective percentages, the major areas of
concern can be summarized as controlling:
- Planning
- Resource Management
- People Management
- Communication
- Project Schedules
In order to have effective control over these, we need to identify the potential risks of a
project as early as possible. It is unrealistic to assume that a software project will run
smoothly from start to finish. We should identify the risks associated with the software
project early on and provide measures to deal with them.
A risk is a possible future negative event that may affect the success of an effort.
Hence a risk is not a problem, but it may turn out to be a problem if it is not properly
attended to. Some common examples of risks may be categorized into project risks, technical
risks and business risks.
Project risks identify potential problems with:
- Budgets
- Schedules
- Personnel
- Resources
- Customers
- Requirements
- Project complexity
- Size and structure
Technical risks are:
- Potential design risks
- Implementation
- Interfacing
- Verification and maintenance problems
- Technical obsolescence
- Cutting-edge technologies
Business risks are identified as:
- Market risks
- Sales personnel's lack of knowledge of the product
- Inadequate support from senior level management
- Budget risks
There is a need for project managers to look into these risk factors and control these
risks. Practically, it is not possible to avoid some of these risks. Experience clearly reveals
that if we do not take care of risks at early stages, risks will dominate the managers. Hence,
at the project planning stage, a risk management strategy is required. A carefully drafted
risk management strategy involves the following steps.
1. Identify the risk factors.
2. Determine the risk exposure. For each risk we have to determine the probability
of occurrence of the risk (P) and its impact on the project in terms of loss in budget
or loss in effort (E). The risk exposure is then given by P x E.
3. Develop strategies to mitigate the risks which have the highest probability of
occurrence and the highest risk exposure.
There are three general strategies to mitigate risks:
1. avoidance 2. transfer 3. acceptance
We may avoid risks by taking enough precautions so that they may not occur. We
may transfer risks by looking for alternative solutions, such as a prototyping approach to
handle unstable requirements. Finally, if both options are not possible, we may accept
risks. In such cases we need a contingency plan to mitigate the risk so that the risk will not
become a major problem.
4. Handle risks. Risk factors must be monitored. Risk management is a cyclic process,
and occasionally risks must be handled by re-assessing the project and invoking contingency plans.
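The exposure computation in step 2 and the prioritization in step 3 can be sketched as follows; the risk names and numbers are invented purely for illustration:

```python
# Illustrative sketch: compute risk exposure P x E for a list of
# identified risks and rank them so that mitigation effort goes to
# the risks with the highest exposure first. The data is made up.

def rank_risks(risks):
    """risks: list of (name, probability, loss_in_effort) tuples."""
    exposures = [(name, p * e) for name, p, e in risks]
    return sorted(exposures, key=lambda item: item[1], reverse=True)

risks = [
    ("Key designer leaves mid-project", 0.3, 40),  # loss in person-weeks
    ("Requirements change late",        0.6, 30),
    ("Hardware delivered late",         0.1, 15),
]
for name, exposure in rank_risks(risks):
    print(f"{name}: exposure = {exposure:.1f} person-weeks")
```

Here the late requirements change ranks first (0.6 x 30 = 18 person-weeks), even though the loss from losing the designer would be larger, because its probability is higher.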
1.10.1 Project Planning Control
Since the project consists of a series of activities, we can represent the activities
graphically by a work breakdown structure (WBS). The WBS decomposes the project into
subtasks, down to the level needed for effective planning and control. A simple work
breakdown structure of a project is given below.
Figure 1.9 Work Breakdown Structure of the project
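The tasks named in Figure 1.9 (Project, broken into Requirements, Design, Test Plan, Code with parts Code A and Code B, and Test) can be written down as a simple nested structure. This is only an illustrative sketch in Python, not a standard WBS notation:

```python
# The work breakdown structure of Figure 1.9 as a nested dictionary;
# leaves are the lowest-level tasks used for planning and control.
wbs = {
    "Project": {
        "Requirements": {},
        "Design": {},
        "Test Plan": {},
        "Code": {"Code A": {}, "Code B": {}},
        "Test": {},
    }
}

def leaf_tasks(node):
    """Collect the leaf tasks, i.e. the work units actually scheduled."""
    leaves = []
    for name, children in node.items():
        leaves.extend(leaf_tasks(children) if children else [name])
    return leaves

print(leaf_tasks(wbs))
```

Planning and control then operate on the leaves, while the inner nodes serve only to group and roll up estimates.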
These activities may be sequential or parallel. Each activity consumes resources
such as people or computer time and has a fixed duration. Activities must often be executed
in a specific order. For example, design cannot start before the completion of the requirements
specification.
This type of relationship between activities can be expressed as constraints, which
are called precedence relations. Further, every activity has a starting time and a finishing
time. Project planning has to be done in such a way that the constraints are satisfied and
the resource limits are not exceeded. The set of activities and their constraints can also be
depicted in a network, called an activity network.
Figure 1.10 A typical Activity Network
Node [1] indicates the initial activity and node [8] the final activity. The number on top of
each arrow represents the duration of the activity in person-weeks or person-months. An
arrow from node A to node B indicates that activity A has to be finished before activity B
can start. These network diagrams are often termed PERT charts. PERT is an acronym for
Program Evaluation and Review Technique. These PERT charts are quite useful in the
management of several projects relating to the manufacturing industry.
From the PERT chart we may compute the earliest possible point in time at which the
project can be completed. Scheduling tools such as Gantt charts, PERT charts and
Critical Path Analysis are used for effective time management.
A sample Gantt chart for the Waterfall activities is given below.
Figure 1.11 Gantt-Chart
The slack time indicates that the corresponding activity may consume more than the
estimated time, or start later than the earliest possible starting time, without affecting the total
duration of the project.
Activities without slack time are on the critical path. If the activities on the critical path are
delayed, the total project gets delayed. Given a network, there exists at least one sequence
of activities that forms the critical path.
Using the information contained in the Gantt chart and knowledge of the personnel resources
required for each activity, it is possible to reassign resources to critical activities from
other activities which are not on the critical path.
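The forward and backward passes used to find slack times and the critical path can be sketched as follows. The activities follow the waterfall tasks above, but the durations are invented for illustration:

```python
# Critical path sketch for a small activity network (durations in
# person-weeks are invented). An edge (a, b) means activity a must
# finish before activity b can start.

durations = {"req": 3, "design": 4, "test_plan": 2, "code": 5, "test": 3}
edges = [("req", "design"), ("req", "test_plan"),
         ("design", "code"), ("code", "test"), ("test_plan", "test")]

preds = {a: [] for a in durations}
succs = {a: [] for a in durations}
for u, v in edges:
    preds[v].append(u)
    succs[u].append(v)

# Forward pass: earliest finish times (activities in topological order).
order = ["req", "design", "test_plan", "code", "test"]
earliest = {}
for a in order:
    earliest[a] = durations[a] + max((earliest[p] for p in preds[a]), default=0)

project_length = max(earliest.values())

# Backward pass: latest finish times; slack = latest - earliest.
latest = {}
for a in reversed(order):
    latest[a] = min((latest[s] - durations[s] for s in succs[a]),
                    default=project_length)

for a in order:
    slack = latest[a] - earliest[a]
    print(a, "slack =", slack, "(critical)" if slack == 0 else "")
```

In this example the test plan has 7 person-weeks of slack, while req, design, code and test have none, so they form the critical path and determine the 15-week project length.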
Summary of Unit I
In this unit, we looked into some fundamental concepts of Software Engineering.
The need and scope of software engineering were explained. All important process
models that are being used were explained. A brief discussion was provided on Agile software
development methodologies, and the essential concepts of project planning and control were
introduced.
Sample Questions UNIT I
1. Define Software Engineering?
2. What is the scope of Software Engineering?
3. What are the major phases in Software Development Project?
4. What are the important characteristics of software?
5. Describe the Waterfall model of Software Development?
6. Discuss the main difference between prototyping and incremental development?
7. How does the spiral model cover prototyping, incremental development and the
waterfall model?
8. What is unified process modeling?
9. Explain the importance of Agile models?
10. Discuss salient features of Agile processes?
11. What is function point? How do you calculate it?
12. Explain the disadvantages of LOC as productivity measure?
13. Distinguish between function oriented metrics and size oriented metrics?
14. Why should software cost models be recalibrated from time to time?
15. What is risk management?
16. What is a work breakdown structure?
17. Explain the various steps in project planning?
18. What do you mean by project scheduling?
19. What are the conditions for effective system control?
20. What are the reasons for software project failure?
UNIT II
REQUIREMENT ANALYSIS
2.1. INTRODUCTION
Most organizations are involved in the development of complex software with
different methodologies and different languages, but they have clear-cut goals: to develop
quality software on time and on budget that meets customers' real needs. To achieve
this, requirements engineering plays a major role. Requirements engineering is a key
problem area in the development of complex, software-intensive systems. The hardest
single part of building a software system is deciding what to build. No other part of the
work so cripples the resulting system if it is done wrong. No other part is more difficult to
rectify later. Some of the dominant issues involved in this problem area include:
- Achieving requirements completeness without unnecessarily constraining system
design.
- Analysis and validation difficulty
- Changing requirements over time
Many requirements errors are passed undetected to the later phases of the life cycle, and
correcting errors during or after implementation has been found to be extremely costly.
The Department of Defense (DoD) Software Technology plan of the USA states that early
defect fixes are typically two orders of magnitude cheaper than late defect fixes, and that
early requirements and design defects typically have more serious operational
consequences.
One way to reduce requirements errors is by improving requirements elicitation, an
activity often overlooked or only partially addressed by current requirements engineering
techniques.
In fact, studies point to a more than 60% failure rate for software development projects
in the US, with poor requirements as one of the top reasons. Further, studies show a high
percentage of project schedule overruns, with 80% due to creeping and changing requirements.
Effective requirements management practices ensure that all requirements are readily
available to all project team members and only changed under controlled conditions. A
common source of easily accessible, up-to-date requirements enables members of the project
teams to work more effectively. Requirements management is the set of activities
encompassing the collection, control, analysis, filtering and documentation of system
requirements.
This chapter basically focuses on these aspects.
Learning Objectives:
- To know the need for complete requirements
- To be aware of requirement collection strategies
- To identify and elaborate on several activities of requirements engineering
- To have some knowledge about requirements validation techniques
- To be aware of analysis tools and techniques.
2.2 SYSTEM ENGINEERING
Generally, software engineering is a subset of system engineering. It is better to learn
software engineering from the system engineering perspective. First of all, we should
know the basic definition of a system, system engineering and the important components of
system engineering.
Simply stated, a system is an integrated composite of people, products and processes
that provides a capability to satisfy a stated need or objective. A software system may be a
part of the whole system. Here we try to explain the general concepts of system
engineering. Broadly, these concepts may also be applied to software development.
System engineering consists of two significant disciplines: the technical knowledge
domain in which the system engineering operates, and system engineering management.
Three commonly used definitions of system engineering are provided by the best-
known technical standards that apply to this subject.
- A logical sequence of activities and decisions that transforms an operational need
into a description of system performance parameters (MIL-STD-499A, May 1974,
Engineering Management).
- An interdisciplinary approach that encompasses the entire technical effort and
evolves into and verifies an integrated and life cycle balanced set of system people,
products and process solutions that satisfy customer needs (EIA standard IS-
632, System Engineering, Dec 1994).
- An interdisciplinary collaborative approach that derives, evolves and verifies a life
cycle balanced system solution which satisfies customer expectations and meets
public acceptability (IEEE P1220, standard for application and management of
system engineering, September 1994).
In summary, system engineering is an interdisciplinary engineering management process
that evolves and verifies an integrated, life cycle balanced set of system solutions that satisfy
customer needs.
System engineering management is accomplished by integrating three major activities,
as given in figure 2.1.
- Development phasing, which controls the design process and provides baselines that
co-ordinate design efforts.
- A system engineering process that provides a structure for solving design problems
and tracking requirements flow through the design effort.
- Life cycle integration, which involves customers in the design process and ensures
that the system developed is viable throughout its life.
Figure 2.1 Three Activities of Systems Engineering Management
Each of these activities is necessary to achieve proper management of a development
effort. Phasing has two major purposes.
- It controls the design effort and is the major connection between the technical
management effort and the overall acquisition effort. It controls the design effort by
developing design baselines that govern each level of development.
- It interfaces with acquisition management by providing key events in the development
process where the design viability can be assessed. The viability of the baselines developed
is a major input for the acquisition management milestone decision. As a result, the timing
and co-ordination between technical development phasing and the acquisition schedule is
critical to maintain a healthy acquisition program.
The system engineering process is the heart of system engineering management. Its
purpose is to provide a structured but flexible process that transforms requirements into
specifications, architectures and baselines. The discipline of this process provides the control
and traceability to develop solutions that meet customer needs. The system engineering
process may be repeated one or more times during any phase of the development process.
Life cycle integration is necessary to ensure that the design solution is viable throughout
the life of the system. It includes the planning associated with product and process
development, as well as the integration of multiple functional concerns into the design and
engineering process. In this manner, product cycle times can be reduced and the need for
redesign and rework substantially reduced.
2.2.1 Development Phasing
Development usually progresses through distinct levels or stages.
- The concept level, which produces a system concept description
- The system level, which produces a system description in performance requirement
terms
- The subsystem/component level, which produces first a set of subsystem and component
product performance descriptions, then a set of corresponding detailed descriptions
of the product characteristics essential for their production.
The system engineering process is applied to each level of system development, one
level at a time, to produce these descriptions, commonly called configuration baselines.
This results in a series of configuration baselines, one at each development level. These
baselines become more detailed with each level.
2.2.2 System Engineering Process
The systems engineering process is a top-down, comprehensive, iterative and recursive
problem-solving process applied sequentially through all stages of development. This process
is used to:
- Transform needs and requirements into a set of system product and process
descriptions
- Generate information for decision makers
- Provide input for the next level of development
The fundamental system engineering activities are depicted in figure 2.2.
Figure 2.2 The System Engineering Process
The system engineering controls are used to track decisions and requirements, maintain
technical baselines, manage interfaces, manage risks, and track cost and schedule.
In the system engineering framework, we can identify eight primary life cycle functions.
- Development
- Manufacturing/Production/Construction
- Deployment
- Operation
- Support
- Disposal
- Training
- Verification
These eight primary life cycle functions are valid for all types of products, including
software. System engineering ensures that the correct technical tasks get done during
development through planning, tracking and co-ordinating. The output of each application
of the system engineering process is a major input to the next process application.
This section briefly described the general concepts of system engineering, several
activities of system engineering, and the system engineering process. The requirements
analysis given in system engineering processes is very similar to the requirements engineering
processes discussed in section 2.3.
2.3 REQUIREMENTS ENGINEERING
Before going into a discussion on Requirements Engineering process activities, let us
define what a requirement is.
A requirement is a function or characteristic of a system that is necessary: the
quantifiable and verifiable behaviors that a system must possess, and the constraints that a
system must work within, to satisfy an organization's objectives and solve a set of problems.
IEEE definition of requirement
1. A condition or capability needed by a user to solve a problem or achieve an
objective.
2. A condition or capability that must be met or possessed by a system or system
component to satisfy a contract, standard, specification or other formally imposed
documents.
3. A documented representation of a condition or capability as in (1) and (2).
Requirements do not consist only of functions; there are clearly non-functional
requirements as well as functional requirements.
1. Functional requirements (What)
2. Non-Functional requirements (How Well)
The IEEE standard Glossary of Software Engineering Terminology defines five other
types of requirements in addition to functional requirements.
1. Performance requirements
2. Interface requirements
3. Design requirements
4. Implementation requirements
5. Physical requirements
Requirements engineering is the disciplined application of scientific principles and
techniques for developing, communicating and managing requirements. Requirements
engineering is also defined as the systematic process of developing requirements through
an iterative process of analyzing a problem, documenting the resulting observations and
checking the accuracy of the understanding gained.
During the requirements engineering phase, we do not address the question of how to
achieve these user requirements in terms of system components and their interactions.
For this phase, different types of users may be the source of different types of requirements.
The end users will be the main source of information regarding the functional, task-related
requirements. Other requirements, e.g. those that relate to security, may well
be phrased by other stakeholders.
Requirements engineering can be decomposed into three major activities.
- Requirements elicitation
- Requirements specification
- Requirements validation
Good requirements should be:
- Necessary: something that must be included, or an important element of the system
for which other system components will not be able to compensate.
- Unambiguous: susceptible to only one interpretation.
- Concise: stated in declarative language that is brief and easy to read, yet conveys
the essence of what is required.
- Consistent: does not contradict other stated requirements, nor is it contradicted
by other requirements. In addition, it uses terms and language that mean the same
from one requirements statement to the next.
- Complete: stated entirely in one place and in a manner that does not require the
reader to look at additional text to know what the requirement means.
- Reachable: a realistic capability that can be implemented with the available
resources in the available time.
- Verifiable: it must be possible to determine that the requirement has been met through
one of the four possible methods: inspection, analysis, demonstration or test. Most
requirements should be testable. Testable requirements are an important
component of validation.
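One small, partial way to support the "unambiguous" and "verifiable" criteria in practice is to scan requirement statements for vague words. The following sketch and its word list are purely illustrative, not a standard technique from the text:

```python
# A deliberately simple illustration: flag requirement statements that
# contain vague words, which usually make a requirement unverifiable.
# The word list below is our own example, not a standard.

VAGUE_WORDS = {"fast", "user-friendly", "flexible", "adequate",
               "efficient", "approximately", "etc"}

def flag_vague(requirement):
    """Return the vague words found in a requirement statement, sorted."""
    words = {w.strip(".,").lower() for w in requirement.split()}
    return sorted(words & VAGUE_WORDS)

reqs = [
    "The system shall respond to a query within 2 seconds.",
    "The system shall be fast and user-friendly.",
]
for r in reqs:
    print(flag_vague(r) or "OK", "-", r)
```

The first statement passes because "within 2 seconds" is testable; the second is flagged, prompting the analyst to ask the stakeholder for measurable criteria.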
2.3.1 Requirements elicitation
Requirements elicitation has been defined as the process of identifying needs and bridging
the disparities among the involved communities for the purpose of defining and distilling
requirements to meet the constraints of these communities. Requirements elicitation serves
as a front end to system development. Requirements analysts, sponsors, developers,
end users and other stakeholders are involved with requirements elicitation to differing
degrees.
As we have already mentioned, requirements engineering can be broadly
decomposed into three activities.
- Elicit requirements from various individual sources.
- Ensure that the needs of all users are consistent and feasible.
- Validate that the requirements so derived are an accurate reflection of user needs.
The requirements elicitation process is complete only when the stakeholders are involved
in the process. We have to identify the relevant stakeholders, who are the sources of
requirements. The next step is to gather the wish list of each stakeholder. This wish list is
likely to originally contain ambiguities, inconsistencies, infeasible requirements and untestable
requirements. This information is to be documented after sufficiently refining the wish list
for each stakeholder. The list is typically high level, specific to the relevant problem domain
and stated in user-specific terms. This wish list is then integrated across the various
stakeholders to obtain their viewpoints. This provides another opportunity to resolve
conflicts between viewpoints. Consistency checking is an important part of the process.
The wish lists or goals are also checked for feasibility. Once the final wish list is ready, non-
functional requirements are determined. These activities are common to most of the process
definitions for requirements elicitation found in the literature. However, the means of achieving
these activities and iterating between them are still not well understood.
The resulting product from the elicitation phase is a subset of the goals from the various
stakeholders which describes a number of possible solutions. The remainder of the
requirements engineering process concerns the validation of this subset to see if it is what
the customer/client/user actually intended. The validation typically includes the creation of
models to foster understanding between the parties involved in requirements development.
The result of a successful requirements engineering process is a requirements specification.
The goodness or badness of a specification can be judged only relative to the user's goals
and the resources available.
Thus requirements elicitation is the first step in bridging the gap between the problem
domain and the solution that is ultimately constructed. Once the needs are understood
properly, the analysts, developers and customer can explore alternative solutions that will
address those needs. It should be stated very clearly that we should not start designing the
system until we understand the problem; otherwise we should expect to do considerable
design rework as the requirements become better understood. Elicitation, analysis,
specification and verification don't take place in a tidy linear sequence; these activities
are interleaved, incremental and iterative. To summarize the requirements, developers
follow these four steps.
1. The customer is asked questions and all responses are noted down (elicitation).
2. The information gathered from the customer is further processed to classify it into
various categories and transform the customer needs into software requirements
(analysis).
3. The input given by the customer is structured into written documents and diagrams
(specification).
4. Customer representatives are asked to review what has been finalized so far and
correct any possible errors (verification).
It is to be pointed out at this juncture that, due to the diversity of software development
projects and the consequent complexity of projects, there is no single formulaic approach
to requirements development.
However, there are some general guidelines to be followed for projects covering all
domains.
Karl E. Wiegers (1999), in his book on software requirements, suggested some general
guidelines for requirements elicitation.
2.4 SUGGESTED REQUIREMENTS DEVELOPMENT PROCESS
1. Define the project's vision and scope.
2. Identify user classes.
3. Identify appropriate representatives from the user classes.
4. Identify the requirements decision makers and their decision making process.
5. Select the elicitation techniques that you will use.
6. Apply the elicitation techniques to develop and prioritize the use cases for a portion
of the system
7. Gather information about quality attributes and other non-functional requirements
from users.
8. Elaborate the use cases (section 2.5.1) into the necessary functional requirements
9. Review the use-case descriptions and the functional requirements
10. Develop analysis models, if needed, to clarify the elicitation participants'
understanding of portions of the requirements.
11. Develop and evaluate user interface prototypes to help visualize requirements that
are not clearly understood.
12. Develop conceptual test cases from the use cases.
13. Use the test cases to verify the use cases, functional requirements, analysis models,
and prototypes.
14. Repeat steps 6 through 13 before proceeding with design and construction of
each portion of the system.
Elicitation is a highly collaborative activity. It is not a simple transcription of what
customers say they need; we must probe beneath the surface of the requirements the
customers present to understand their true needs.
2.4.1 Requirements Elicitation Process Model
In section 2.4 we have highlighted the essence of the requirements elicitation process. A
process model is proposed to make the concepts very clear. It also recognizes the
importance of communication between different stakeholders. Recognizing the importance
of communication is not enough. The backgrounds and motivations of the elicitation
participants are often very different, and the process model consists of two sets of activities
to address this diversity. One set of activities is user-oriented while the other is developer-
oriented. The two sets of activities are performed in parallel and can be grouped into tasks
associated with fact-finding, requirements gathering and classification, evaluation and
rationalization, prioritization, and integration and validation. These task groups may be
executed iteratively, as illustrated in figure 2.3.
Figure 2.3 Requirements Elicitation Process Model
2.5 REQUIREMENTS ELICITATION TECHNIQUES
There are two main sources of information for the requirements elicitation process:
the users and the application domain. As already discussed, the requirements elicitation
process is communication intensive. It is rather difficult to get complete information from
the user with reference to the application domain. Several techniques are used to elicit
requirements. Some of the important and widely used techniques, such as use cases,
communication techniques and prototyping, are discussed in detail in subsequent sections.
2.5.1 Use Cases
A use case is a technique for documenting the potential requirements of a new system
or a software change. Each use case provides one or more scenarios that reveal how the system
should interact with the end user or another system to achieve a specific business goal.
Use cases are simple to understand; they typically avoid technical jargon and mostly use
the language of the end user or domain expert. Use cases are jointly finalized by requirements
engineers and stakeholders. Thus use cases contain a textual description of all of the
ways in which the intended user could work with the software system. However, they do not
describe the internal workings of the system, nor do they explain how the system will be
implemented. They simply give the steps that a user follows to perform a task. They also
define various ways in which the user interacts with the system.
During the 1990s, use cases rapidly became the most common practice for capturing
functional requirements. Even though they originated from object-oriented systems,
their applicability is not restricted to object-oriented systems; use cases are not object-
oriented in nature.
A use case defines a goal-oriented set of interactions between external actors and the
system under consideration. Actors are parties outside the system that interact with the
system. An actor may be a class of users, roles users can play, or other systems. A primary
actor is one having a goal requiring the assistance of the system. A secondary actor is one
from which the system needs assistance.
A use case is initiated by a user with a particular goal in mind and completes successfully
when that goal is satisfied. It describes the sequence of interactions between actors and
the system necessary to deliver the service that satisfies the goal. It also includes possible
variants of this sequence, e.g. alternative sequences that may also satisfy the goal, as well as
sequences that may lead to failure to complete the service because of exceptional behaviors,
error handling, etc.
The system is treated as a black box, and the interactions with the system, including
system responses, are as perceived from outside the system.
Thus use cases capture who (actor) does what (interaction) with the system for
what purpose (goal) without dealing with the system internals. A complete set of use
cases specifies all the different ways to use the system and therefore defines all behaviors
required of the system.
A use case should:
- Describe a business task that serves a business goal
- Be at an appropriate level of detail
- Be short enough to be implemented by one software developer
Use cases can be very good for establishing functional requirements, but they are not suited
to capturing non-functional requirements. However, each use case should have an
associated performance-oriented non-functional requirement.
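As an illustrative sketch (our own structure, not a standard template), a use case of this kind can be recorded as a small data record holding the actor, the goal, the main success scenario and its exceptional variants:

```python
# One possible way to record a use case in code form: the actor, the
# goal, the ordered steps of the main scenario, and the exceptional
# variants. The field names and example data are our own illustration.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    primary_actor: str
    goal: str
    main_scenario: list = field(default_factory=list)  # ordered steps
    extensions: list = field(default_factory=list)     # failure/variant paths

summon_elevator = UseCase(
    name="Summon elevator",
    primary_actor="User",
    goal="Travel to another floor",
    main_scenario=[
        "User presses Up floor button",
        "Elevator arrives and doors open",
        "User enters and presses a floor button",
    ],
    extensions=["User presses the wrong direction button"],
)
print(summon_elevator.name, "-", len(summon_elevator.main_scenario), "steps")
```

Keeping the extensions separate from the main scenario mirrors the normal versus abnormal scenarios shown in Tables 2.1 and 2.2 below.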
Illustrative Example
As an illustrative example, we present here a use case for the elevator problem
discussed by Stephen R. Schach (1959).
The only interactions possible between users and elevators are a user pressing an elevator
button to summon an elevator, or a user pressing a floor button to request the elevator to
stop at a specific floor. Within the generic description of the overall functionality we can
extract a vast number of different scenarios, each representing one specific set of interactions.
A simple use case diagram for the elevator problem is given below.
Figure 2.4 Use Case Model: An Example (actors User and Elevator; use cases Press an elevator button and Press a floor button)
The scenario for this use case may provide the functional requirements for this elevator
problem. Table 2.1 depicts a normal scenario, that is, a set of interactions between users
and elevators that corresponds to the way we understand elevators should be used.
The scenario will provide most of the functional requirements to be implemented by the
developer. Table 2.2 is an abnormal scenario. It depicts what happens when a user
presses the UP button at floor 3 but actually wants to go to floor 1.
The scenarios will provide some exceptional cases to be handled while designing a
system. They are also part of the functional requirements.
Table 2.1 A Normal Scenario
1. User A presses Up floor button at floor 3 to request elevator.
User A wishes to go to floor 7.
2. Up floor button is turned on.
3. An elevator arrives at floor 3. It contains User B who has
entered the elevator at floor 1 and pressed the elevator button
for floor 9.
4. Up floor button is turned off.
5.Elevator doors open.
User A enters elevator.
6. User A presses elevator button for floor 7.
7. Floor 7 elevator button is turned on.
8. Elevator doors close.
9. Elevator travels to floor 7.
10. Floor 7 elevator button is turned off.
11. Elevator doors open to allow User A to exit elevator.
12. Timer starts.
User A exits.
13. Elevator doors close after time out.
14. Elevator proceeds to floor 9 with User B.
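The button behavior in the normal scenario above can be replayed as a tiny sketch. This is purely illustrative: the Button class and the mapping of steps to calls are assumptions made for the example, not part of the original specification.

```python
# Minimal sketch: replaying the button state changes of Table 2.1.
# The Button class and method names are illustrative assumptions.

class Button:
    def __init__(self, name):
        self.name = name
        self.lit = False

    def press(self):
        self.lit = True       # the button is turned on

    def clear(self):
        self.lit = False      # the button is turned off

up_3 = Button("Up floor button, floor 3")
elev_7 = Button("Elevator button, floor 7")

up_3.press()                  # steps 1-2: User A presses Up at floor 3
assert up_3.lit
up_3.clear()                  # steps 3-4: elevator arrives, button turned off
elev_7.press()                # steps 6-7: User A requests floor 7
assert elev_7.lit
elev_7.clear()                # steps 9-10: elevator reaches floor 7
assert not (up_3.lit or elev_7.lit)
```

Each numbered step of the scenario maps to exactly one state change, which is what makes a scenario a useful source of testable functional requirements.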
Table 2.2 An Abnormal Scenario
1. User A presses Up floor button at floor 3 to request elevator. User
A wishes to go to floor 1.
2. Up floor button is turned on.
3. An elevator arrives at floor 3. It contains User B, who has entered
the elevator at floor 1 and pressed the elevator button for floor 9.
4. Up floor button is turned off.
5. Elevator doors open.
User A enters elevator.
6. User A presses elevator button for floor 1.
7. Floor 1 elevator button is turned on.
8. Elevator doors close after timeout.
9. Elevator travels to floor 9.
10. Floor 9 elevator button is turned off.
11. Elevator doors open to allow User B to exit elevator.
12. Timer starts.
User B exits.
13. Elevator doors close after timeout.
14. Elevator proceeds to floor 1 with User A.
2.5.2 Use Case Template
There is no standard template for writing use cases. However, it is necessary to follow
certain conventions for writing use cases, as they are quite useful to identify the functional
requirements. Derek Coleman (1998) suggested a standard use case template. We shall
present here the template with minor modifications.
Table 2.3 A Standard Use Case Template
Use Case Use case identifier and reference number and
modification history
Each use case should have a unique name suggesting its
purpose. The name should express what happens when
the use case is performed. It is recommended that the
name be an active phrase, e.g. Place Order. It is
convenient to include a reference number to indicate
how it relates to other use cases. The name field should
also contain the creation and modification history of the
use case preceded by the keyword history.
Description Goal to be achieved by use case and sources for
requirement
Each use case should have a description that describes
the main business goals of the use case. The
description should list the sources for the requirement,
preceded by the keyword sources.
Actors List of actors involved in use cases
Lists the actors involved in the use cases. Optionally,
an actor may be indicated as primary or secondary.
Assumptions Conditions that must be true for use case to terminate
successfully
Lists all the assumptions necessary for the goal of the
use case to be achieved successfully. Each assumption
should be stated in a declarative manner, as a
statement that evaluates to true or false. If an
assumption is false then it is unspecified what the use
case will do. The fewer assumptions a use case has,
the more robust it is. Use case extensions can be
used to specify behavior when an assumption is false.
Steps Interactions between actors and system that are
necessary to achieve goal.
The sequence of interactions necessary to successfully
meet the goal. The interactions between the system and
actors are structured into one or more steps which are
expressed in natural language. A step has the form
<sequence number><interaction>
Conditional statements can be used to express alternate
paths through the use case. Repetition and concurrency
can also be expressed (see Coleman, 1997, for a
proposed approach to doing so).
Variations
(optional)
Any variations in the steps of a use case
Further detail about a step may be given by listing any
variations on the manner or mode in which it may
happen.
<step reference><list of variations separated by or>
Non-Functional List any non-functional requirements that the use case
must meet.
The non-functional requirements are listed in the form:
<Keyword>: <requirement>
Non-functional keywords include, but are not limited to:
Performance, Reliability, Fault Tolerance, Frequency
and Priority. Each requirement is expressed in natural
language or an appropriate formalism.
Issues List of issues that remain to be resolved.
List of issues awaiting resolution. There may also be
some notes on possible implementation strategies or
impact on other use cases.
2.5.3 Communication Techniques
Software requirements analysis begins with communication between the clients and
developers. Very often a large gap exists between communication and understanding. The
most commonly used analysis technique to bridge the gap is to conduct a preliminary meeting
and interview. There are two basic types of interview, namely structured and unstructured.
Structured interview
Specific preplanned close-ended questions are posed, e.g. how many salespersons
the company employs, or how fast a response time is required.
Unstructured interview
Open-ended questions are asked to encourage the person being interviewed to speak
out, e.g. why the current software is not adequate for business needs.
The interviewer prepares a report based on the interview and delivers a copy to the
clients so that the clients may clarify statements or add overlooked items.
Another way of eliciting needs is to send a questionnaire to the relevant members of
the organization. This technique is useful to collect data from more people. At the same
time, carefully thought-out answers convey much more information than the responses
given at the time of an interview.
A different way of obtaining information, particularly in a business environment, is to
examine the forms and documents used by the client, such as operating procedures and
job descriptions. Knowledge about how the client currently does business can be really
helpful in determining the client's needs.
Facilitated Application Specification Techniques (FAST)
In view of the inherent difficulties in requirements gathering, a number of investigators
have developed team-oriented approaches to requirements gathering that are applied in the
early stages of analysis and specification. One such technique is FAST. The basic guidelines of
FAST are:
- A meeting is conducted at a neutral site and attended by both clients and developers.
- Rules for preparation and participation are established.
- An agenda is circulated before the meeting.
- A facilitator other than the developer and client controls the meeting.
- A definition mechanism such as worksheets, flip charts etc. is used.
- The goal is to identify the problem, propose possible solutions, and explore different
approaches to the problem.
The FAST team comprises representatives from marketing, software and hardware
engineering, and manufacturing.
Such on-the-spot discussions of requirements really help to develop the system specification.
Quality Function Deployment
Quality function deployment (QFD) is a quality management technique developed in
Japan that translates the needs of the customer into technical requirements for software.
QFD identifies three types of requirements:
- Normal requirements: Objectives and goals stated for a product or system
during meetings with the customer.
- Expected requirements: Implicit requirements that the customer does
not explicitly state.
- Exciting requirements: Features that go beyond the customer's expectations and
prove to be very satisfying when present.
In meetings with customers, function deployment is used to determine the value of
each function that is required for the system.
Information deployment identifies both the data objects and events that the system
produces and consumes; these are closely related to function deployment. Task deployment
examines the behavior of the system or product within the context of its environment.
Finally, value analysis is conducted to determine the relative priority of requirements
during each of the three deployments discussed above.
2.5.4 Rapid Prototyping
The rapid prototyping paradigm was discussed in the first unit. It is a tool to gather
requirements through an iterative process. The key point is that a rapid prototype reflects
the functionality that the customer/client sees, such as input screens and reports.
He may not see the hidden aspects such as file updating. The developers change the
rapid prototype until both parties are convinced about the needs of the client. They
examine from the prototype whether the requirements of the client are accurately encapsulated
in the rapid prototype. Then the rapid prototype is used as the basis for drawing up the
specifications. Another important aspect of the rapid prototyping model is that the rapid
prototype must be built so as to incorporate changes easily. In order to achieve rapid prototyping,
fourth generation languages and interpreted languages such as Smalltalk, Prolog, Lisp,
Java and the UNIX shell are effectively used.
Another important aspect to be considered in a rapid prototype is the Human-Computer
Interface (HCI). The rapid prototype must be user friendly so that the client can
perform experiments on the rapid prototype of the HCI and inform the designers whether the
product will indeed be user friendly, that is, whether the designers have taken the necessary
human factors into account. The conventional form of rapid prototyping is discussed in
section 1.4 and the figure is reproduced here for completeness.
Rapid prototype life cycle is given below.
Figure 2.5 Rapid Prototyping Model (rapid prototype followed by the specification, design, implementation, integration and operation phases, each with verification or testing, and with changed requirements fed back into the cycle)
One approach to rapid prototyping is to dispense with the specification and use the rapid
prototype itself either as the specification or as a significant part of the specification.
Another approach is to give the changed requirements as an input to the design phase
itself. All that is required is to state what the prototype does and to list additional features
that the product must support, such as file updating, security and error handling. The
second approach has a specific disadvantage in the sense that it is exceedingly difficult to
change the design document to incorporate new specifications. In the absence of written
specifications, the maintenance team will not have a clear understanding of the current
specification on which the system has been designed.
Rapid prototypes are discarded early in the software process. Now the question is
whether it is wise to do that. The primary objective of building a rapid prototype is speed
of building. A rapid prototype is hurriedly, though correctly, put together rather than
carefully specified, designed and implemented. No proper specification document or
design document is available after rapid prototyping. In the absence of such documents,
the resulting code is difficult and expensive to maintain. It may appear that constructing a rapid
prototype and throwing it away is a wasteful exercise, but it is far cheaper to do this
than to try to convert a rapid prototype into the final product.
The question is more pertinent especially for real time systems, where performance
is an important criterion. Generally, in rapid prototypes, the performance issues are not
addressed. If the rapid prototype is refined into the final product, it is unlikely that the response
times and other timing constraints will be met. Since it is not advisable to refine the rapid
prototype into the final product, it is better to build the prototype in a different language from that
of the product. If the client specifies that the product must be developed in C++, the
prototype can be developed using hypertext. Once the prototype is accepted, the
product is designed in C++ and tested. Most organizations may feel that rapid
prototyping is a wasteful exercise since the prototype is discarded ultimately. As a
compromise, the rapid prototype could be adopted for building the final product provided
the prototype passes certain quality assurance tests. A more fundamental issue is that
managing the rapid prototyping model requires a major change in outlook for a manager
who is conventionally managing the waterfall model.
2.6 SOFTWARE REQUIREMENTS SPECIFICATION DOCUMENT
The software requirements specification (SRS) is sometimes called a functional specification,
a product specification or a system specification. The SRS plays a major role in subsequent
project planning, design and coding, system testing and user documentation. It should
describe the system's behavior under varying conditions. However, it should not cover
design, construction, testing or project management details other than known design and
implementation constraints.
Requirements lie at the heart of a well-run software project, supporting many of the other
technical and management activities.
The interaction of the software requirements specification with other project processes is
shown in figure 2.6.
Figure 2.6 Interaction of SRS with other activities
The SRS for the entire product need not be finalized before beginning development,
but the requirements should be captured for each increment before building that increment.
We follow the principle of incremental development. As already discussed in
chapter 1, incremental development is appropriate when the stakeholders cannot identify
all the requirements at the outset and it is required to show some functionality quickly to
the customer. However, we should have a baseline agreement for each set of requirements
before the team implements them. Baselining is the process of transitioning an SRS under
development into one that has been reviewed and approved.
We have to properly organize and write the SRS so that the different stakeholders
can understand it. From this viewpoint it is always advisable to follow some standard for
preparing the SRS document. Most organizations use the standard template given by
IEEE Standard 830-1998, IEEE Recommended Practice for Software Requirements
Specifications. Even though there may be some drawbacks with this template, it is
generally suitable for major projects.
Table 2.4 below illustrates the items in an SRS document as per the IEEE 830 standard.
Table 2.4 Template for SRS Document
1. Introduction
1.1 Purpose
1.2 Document Convention
1.3 Intended Audience and Reading Suggestions
1.4 Project Scope
1.5 References
2. Overall Descriptions
2.1 Product Perspective & Product Features
2.2 User Classes and Characteristics
2.3 Operating Environment
2.4 Design and Implementation Constraints
2.5 User Documentation
2.6 Assumptions and Dependencies
3 System Features
3.1 System Feature
3.2 Description and Priority
3.3 Stimulus/Response Sequence
3.4 Functional Requirements
4. External Interface Requirements
4.1 User Interface
4.2 Hardware Interfaces
4.3 Software Interfaces
4.4 Communication Interfaces
5. Other Non-functional Requirements
5.1 Performance Requirements
5.2 Safety Requirements
5.3 Security Requirements
5.4 Software Quality Attributes
6. Other Requirements
Instead of explaining each item separately, the following illustrative example due to
Karl E. Wiegers provides a clear explanation and good understanding of the preparation of
an SRS document.
Illustrative Example:
In a company X, the employees on an average spend one hour per day going to the
cafeteria to select the menu, purchase and eat lunch. About 20 minutes of this time is
spent walking to and from the working place. When the employees go out for lunch
they spend an average of 90 minutes off site. Some employees book orders for meals
through the phone, but they don't always get what they want because of shortages of certain
items. The cafeteria wastes a significant quantity of food that is not purchased and must be
thrown away. The same issues apply to breakfast and supper.
In view of this, many employees have requested a system that would permit a cafeteria
user to order meals on line, to be delivered to a designated company location at a specified
time and date. Such a system would save time for the employees, and they can get whatever
they prefer. This would improve both their quality of work life and their productivity.
Let us see how the SRS document is prepared as per the IEEE standard.
1. Introduction
1.1 Purpose
This SRS document describes all functional and non-functional requirements for the
release of the Cafeteria Ordering System (COS). Unless otherwise specified, all the requirements
specified here are of high priority and committed for release 1.0.
1.2 Project Scope and Product Features
The COS permits company employees to order meals from the company cafeteria on
line, to be delivered to specific campus locations. The major features expected
from the COS are:
FE1: Order meals from the cafeteria menu to be picked up or delivered.
FE2: Order meals from local restaurants to be delivered.
FE3: Create, view, modify and delete meal service subscriptions.
FE4: Register for meal payment options.
FE5: Request meal delivery.
FE6: Create, view, modify and delete cafeteria menus.
FE7: Order custom meals that aren't on the cafeteria menu.
FE8: Provide recipes and ingredient lists for custom meals from the cafeteria.
FE9: Provide system access through the Internet or through intranet access by authorized
employees.
The basic assumptions and dependencies for the COS are:
- Availability of the necessary infrastructure, such as LAN, printers, etc.
- Cafeteria staff and vehicles will be available to deliver meals within 15 minutes of receipt
of an order.
- The cafeteria has to operate its own on-line ordering system.
1.3 References
1. Wiegers, Karl: Cafeteria Ordering System Vision and Scope Document,
www.companyx.com/projects/cos/cos_vision.doc
2. ..
3. .
2. Overall Description
2.1 Product Perspective
The COS is a new system that replaces the current manual and telephone processes
for ordering and picking up lunches in the company cafeteria. The context diagram in figure
2.7 illustrates the external entities and system interfaces for release 1.0. The system is
expected to evolve over several releases, ultimately connecting to the Internet ordering
services of several local restaurants and to credit and debit card authorization services.
Figure 2.7 Context Diagram for Cafeteria Ordering System
2.2 User Classes and Characteristics
Patrons: Employees of the company; their pattern of ordering; current cafeteria
usage data; beneficiaries of this system.
Cafeteria Staff: Number of cafeteria employees who will deliver; delegation of the
work to various staff, such as receiving orders, preparing meals, packaging,
printing delivery instructions, requesting delivery, etc.
Manager: The cafeteria manager and his responsibilities; intimation regarding
daily specials; periodic menu editing.
Meal Deliverer: Picks up food and delivers it; submits confirmation that the meal
was delivered.
2.3 Operating Environment
- The COS shall operate with the following Web browsers: Microsoft Internet
Explorer v5.0, Netscape Communicator v4.7, etc.
- The COS shall operate on a server running Red Hat Linux and the Apache HTTP
Server.
- The COS shall permit user access from the corporate intranet and, if the user is
authorized for outside access through the corporate firewall, from an Internet
connection at the user's home.
2.4 Design and Implementation Constraints
- The system design, code and maintenance documentation shall conform to
standards.
- The system shall use the current corporate standard Oracle database engine.
- All HTML code shall conform to the HTML 4.0 standard.
- All scripts shall be written in Perl.
2.5 User Documentation
- The system shall provide an online, hierarchical and cross-linked help system in
HTML that describes and illustrates all system functions.
- The system shall provide an online tutorial in case of any difficulty in on-line ordering.
2.6 Assumptions and Dependencies
- Working hours of the cafeteria.
- Changes being made in the payroll system to accept payment requests.
- Inventory system updates.
3. System Features
3.1 Order Meals
3.1.1 Description and Priority
- A company employee whose identity has been verified can place an order.
- A patron may cancel or change his order if it has not yet been prepared.
3.1.2 Stimulus/Response Sequences
Stimulus: Patron requests to place an order for one or more meals.
Response: System queries the patron for details of meals, payment and delivery
instructions.
Stimulus: Patron requests to change a meal order.
Response: If the order status is Accepted, the system allows the user to edit a previous
meal order.
3.1.3 Functional Requirements
All functional requirements which are finalized should be presented here in an
unambiguous manner.
4. External Interface Requirements
4.1 User Interfaces
- The COS screen displays shall conform to the adopted user interface standard.
- The system should provide help facilities.
- Web pages should provide complete navigation.
4.2 Hardware Interfaces
Hardware interfaces should be given, if any.
4.3 Software Interfaces
- COS inventory System
- Payroll System
4.4 Communication Interfaces
- E-mail facility for confirmation of orders
5. Other Non-functional Requirements
5.1 Performance Requirements
- The system shall accommodate 400 users during peak hours (8.00 A.M. to 10.00
A.M.), with an estimated average session duration of 8 minutes.
- All Web pages should be downloadable in no more than 10 seconds over a 40
Kbps modem connection.
- Responses to queries shall take no longer than 7 seconds to load onto the screen
after the user submits the query.
- The system shall display confirmation messages to users within 4 seconds after
the user inputs his data.
Fig. 2.10 Partial Data Model for release 1.0 of the Cafeteria Ordering System
5.2 Safety Requirements
Document them, if any.
5.3 Security Requirements
- All financial transactions across the network should be encrypted.
- Every user requires a valid password.
- COS staff should be authorized to perform certain functions.
5.4 Software Quality Attributes
- The system uptime should be 99.9% between 5.00 A.M. and 12.00 midnight,
and 95% from midnight to 5.00 A.M.
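Quantitative requirements like these are easy to sanity-check with a little arithmetic. The sketch below derives two figures implied, not stated, by the requirements above: the largest page that can meet the 10-second deadline over a 40 Kbps modem, and the daily downtime allowed by 99.9% uptime over the 19-hour window.

```python
# Sketch: deriving implied limits from the quantitative requirements above.
# The variable names are illustrative; only the input figures come from the SRS.

link_kbps = 40                                  # modem speed in kilobits/second
deadline_s = 10                                 # page download deadline
max_page_kbit = link_kbps * deadline_s          # 400 kilobits
max_page_kb = max_page_kbit / 8                 # = 50 kilobytes per page, at most

window_s = 19 * 3600                            # 5.00 A.M. to midnight = 19 hours
allowed_downtime_s = window_s * (1 - 0.999)     # 0.1% of the window

print(max_page_kb, round(allowed_downtime_s, 1))   # 50.0 68.4
```

So a 99.9% uptime requirement translates to barely over a minute of outage per day in the covered window, a far stronger statement than the percentage alone suggests.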
Appendix A: Glossary of Data Items
We give here sample data items only:
Delivery instruction
Delivery location
Employee ID
Food Item Description
Meal order number
Meal Payment
Patron Name
Patron location
Patron E-mail
Quantity ordered
2.7 REQUIREMENTS SPECIFICATION TECHNIQUES
The SRS document that is prepared during requirements engineering serves both users
and developers (designers). For the user, the requirements specification is a clear and precise
description of the functionality that the system has to offer. For the developer/designer it
is the starting point for design. This document is agreed upon by the user as well
as the developer, and it needs to be kept throughout the software
development process. There are several specification representation techniques for the
better understanding of software requirements. Most often, the representation generated
is a set of semantic networks. Each such representation has various types of nodes and links
between nodes, distinguished by visual clues such as their shapes or natural-language labels.
Nodes typically represent things like processes, data stores or repositories, objects and
attributes. Nodes are joined by arrows representing relationships such as data flow, control
flow, abstraction, and so on. Typical examples of such techniques are:
- Data Flow Diagram
- Entity Relationship Diagram
2.7.1 Data Flow Diagram
The data flow diagram (DFD) is the basic tool of the structured systems analysis and
design technique (SADT).
A DFD identifies the transformational processes of a system, the collections (stores) of
data or material the system manipulates, and the flows of data or material between processes,
stores and the outside world.
Data flow modeling takes a hierarchical decomposition approach to system analysis,
which works well for transaction processing systems and other function-intensive applications.
By the addition of control flow elements, the DFD technique has been extended to permit
modeling of real time systems.
A DFD is a way to represent the steps involved in a current business process or the
operations of the proposed new system.
A high level DFD provides a holistic bird's-eye view of the data and processing
components in a multistep activity.
DFDs illustrate how the functional requirements in the SRS combine to let the user
perform specific tasks.
Data flow diagrams graphically represent the system using symbols for data sources
and destinations (external entities), processes, data flows and repositories or data stores.
The set of symbols commonly used is given in figure 2.8.
Figure 2.8 General Conventions for DFDs (symbols for process, external entity, data flow, and data store/repository)
As a simple example to illustrate the role of the data flow diagram, we shall take a
customer support system (CSS) which receives customers' complaints and renewal
requests for the products supplied by the company, and interacts with the service
department in order to provide service to the customers. The CSS generates periodic
reports for the management, and a list of new product series for the benefit of service
engineers and customers.
A simple context-level diagram (Level 0 DFD) is shown in figure 2.9.
Figure 2.9 A Context Level Diagram
We can identify the following processes in the first level DFD:
1. Register Customer Call
2. Pending Calls Processing
3. Customer Calls Processing
4. MIS Report Generation
5. New Sales Data Processing
6. AMC Renewal Processing
7. Customer Bills Creation and Processing
A sample Level 1 DFD and a sample Level 2 DFD are given in figures 2.10 and 2.11
below. The student is advised to take hints from the processes specified above and draw
the different levels of DFD.
Figure 2.10 A Sample Level 1- DFD
Figure 2.11 A Sample Level 2- DFD
DFDs are considered complete only when they are supported by a data dictionary
containing:
- Contents of the data flows
- Process descriptions
- Data store fields
- External entity descriptions
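The four kinds of data dictionary entry listed above can be captured in a simple structured form. The sketch below is illustrative only: the entry names and fields are assumptions based on the CSS example, not a prescribed notation, and the final loop shows the kind of completeness check a data dictionary enables.

```python
# Illustrative sketch of a data dictionary backing a DFD.
# Names and fields are assumptions drawn from the CSS example above.

data_dictionary = {
    "customer_call": {                       # contents of a data flow
        "kind": "data flow",
        "composition": ["customer_id", "product_id", "complaint_text"],
        "from": "Customer",
        "to": "Register Customer Call",
    },
    "Register Customer Call": {              # a process description
        "kind": "process",
        "description": "Validate the caller and log the complaint.",
    },
    "Calls": {                               # data store fields
        "kind": "data store",
        "fields": ["call_id", "customer_id", "status", "logged_at"],
    },
    "Customer": {                            # an external entity description
        "kind": "external entity",
        "description": "A customer of the company's products.",
    },
}

# Completeness check: every endpoint a data flow references must be defined.
for name, entry in data_dictionary.items():
    for end in ("from", "to"):
        if end in entry:
            assert entry[end] in data_dictionary, f"{name}: {entry[end]} undefined"
```

A check like this catches dangling references, one of the inconsistencies a bare diagram cannot reveal.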
The DFDs provide an intuitive description of the system, but they lack precision.
This is true in general: DFDs lack a precise meaning, mainly for the following
reasons.
1. DFDs lack precise semantics.
2. Control aspects are not properly defined. For example, consider the sample DFD
where the outputs of A, B and C are inputs to D, and D's output is input to E and F.
The DFD has several interpretations:
i) D may need all of A, B and C.
ii) D may need only one of A, B and C.
iii) D may output its result to either E or F.
iv) D may output the same data to both E and F.
v) D may output distinct data to E and F.
Another case, where DFDs leave the synchronization between components of a system
completely unspecified, is shown below: A's output is B's input.
Two interpretations are possible:
1. A produces a datum and then waits till B has consumed it.
(This is often the case when A and B denote arithmetic operations.)
2. A and B are autonomous activities that run at different speeds, but there is a buffering
mechanism between them which ensures that no data are lost or duplicated.
Different interpretations are generally possible for the control regime associated with
a DFD.
We have seen that the DFD is a useful notation for describing the operations used to access
and manipulate the data of a system, typically an information system.
However, this is often not enough to specify all the interesting features of the system.
A conceptual description of the structure of the data and their relations is also
necessary.
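The second interpretation above, two autonomous activities coupled by a buffer, can be sketched directly with a standard-library bounded queue. The process names are the A and B of the example; everything else is an illustrative assumption.

```python
# Sketch of interpretation 2: A and B run autonomously at different speeds,
# with a buffering mechanism ensuring no data are lost or duplicated.
import queue
import threading

buffer = queue.Queue(maxsize=4)   # the bounded buffer between A and B
received = []

def process_a():                  # A: produces data at its own speed
    for item in range(10):
        buffer.put(item)          # blocks only when the buffer is full
    buffer.put(None)              # sentinel: no more data

def process_b():                  # B: consumes at a different speed
    while True:
        item = buffer.get()
        if item is None:
            break
        received.append(item)

a = threading.Thread(target=process_a)
b = threading.Thread(target=process_b)
a.start(); b.start()
a.join(); b.join()

assert received == list(range(10))   # nothing lost, nothing duplicated
```

The point of the example is that the same arrow in a DFD admits both this implementation and a fully synchronous one, which is exactly the imprecision the text describes.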
2.7.2 Entity Relationship Diagram
The ER diagram describes the relationships among the data of an information system.
It takes care of user views and logical requirements.
The ER model is based on three primitive concepts:
- Entities
- Relations
- Attributes
During the requirements analysis stage, we use the ERD technique to represent
high level logical groups of information and the connections between these logical groups.
In the design phase of development we can use the ERD technique for
depicting physical files and tables and the relations between these files.
An entity (in the context of an ERD) is a real world object within the scope of the system
that we plan to build. For example, in a customer support system, Customer, Service
Engineer and Call are some of the entities.
For each entity we have to identify attributes, which are similar to the columns in a
file or table definition. Entities are related to each other via relationships. An ERD is a
pictorial representation of entities, attributes and their interrelationships. The conventions
and notations to be followed while drawing an ER diagram are given below.
Figure 2.12 Conventions & Notations used in ERD
- A is associated with one and only one B.
- A is associated with zero or one B.
- A is associated with one or more B.
- A is associated with zero, one or more B.
Figure 2.13 A Simple Example of an ERD (entities Customer, Call and Service Engineer)
The ERD acts as the primary input for the database design of a system.
ERDs look only at the relationships of data in the system, independent of the processing,
and cannot be used for functional modeling.
The ERD is an excellent tool for data modeling and needs to be used along with functional
modeling tools like DFDs to complete the picture.
Let us take another example of an ER diagram that describes the entities Student and
Class with a relationship Enrolled-in which may hold between a student and a class. A student
can be identified by a collection of attributes such as name, age and sex; thus every student
is characterized by a triple of values representing the student's name, age and sex. A
relation on two entities such as Student and Class is a set of pairs <a, b> where a is an
element of Student and b is an element of Class. The relation in the figure could represent the
fact that student a is enrolled in class b. The ER model can be obtained using any of the
elicitation techniques.
Figure 2.14 ER Diagram between Student and Class (Student with attributes Name, Age and Sex; Class with attributes Subject, Course ID and Max Enrollment; relationship Enrolled-in)
2.7.3 Finite-State Machines
Requirements specification techniques which model a system in terms of states and
transitions between states are called state-based modeling techniques. A simple formalism
for specifying states and state transitions is the Finite State Machine (FSM). An FSM
consists of a finite number of states and a set of possible transitions from one state to
another that occur on input signals from a finite set of possible stimuli.
Pictorially, FSMs are represented by a state transition diagram (STD). In a state transition
diagram, states are represented by bubbles with a label identifying the state, and transitions
are indicated as labeled arcs from one state to another, where the label denotes the stimulus
that triggers the transition.
Modeling a system in one large monolithic STD is not to be recommended.
The state transition diagram for meal order status in COS is given in figure 2.15. This
is only a typical STD to explain the concept.
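An FSM of this kind can be sketched as a transition table mapping (state, stimulus) pairs to next states. The sketch below uses the meal-order states of Figure 2.15; the particular stimuli and transitions are illustrative assumptions, not a complete specification of COS.

```python
# Transition table: (current_state, stimulus) -> next_state.
TRANSITIONS = {
    ("Incomplete", "order completed"): "Accepted",
    ("Accepted", "staff request delivery"): "Prepared",
    ("Prepared", "delivery requested"): "Pending Delivery",
    ("Pending Delivery", "meal delivered"): "Delivered",
    ("Accepted", "patron cancels"): "Canceled",
}

def step(state, stimulus):
    """Fire one transition; unrecognized stimuli leave the state unchanged."""
    return TRANSITIONS.get((state, stimulus), state)

state = "Incomplete"
for stimulus in ["order completed", "staff request delivery",
                 "delivery requested", "meal delivered"]:
    state = step(state, stimulus)
print(state)  # Delivered
```

Because the table is data, adding a state or transition does not change the interpreter code, which is one reason FSMs are convenient for specifying state-based requirements.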
Figure 2.15 State Transition Diagram for meal order status
2.8 VERIFYING REQUIREMENTS QUALITY
In the IEEE Glossary of Software Engineering Terminology, quality is defined as the
degree to which a system, component or process meets customer or user needs or
expectations. There is more to software success than just delivering the right functionality.
Users also have a lot of expectations about how well the software will work: how easy it is
to use, how quickly it runs, how reliable it is, and so on. Such characteristics are collectively
known as software quality attributes or quality factors. The quality factors can be measured
either subjectively or objectively, such as through ratings for each quality factor. The following

[Figure 2.15 shows the states Incomplete, Accepted, Prepared, Pending Delivery, Delivered and Canceled. Transitions include: system accepts completed order; cafeteria staff request delivery; meal deliverer delivers meal; patron cancels (do not charge); patron cancels (charge payment); and patron refuses delivery because order is incorrect.]
table gives several quality attributes which are important from the user's point of view and
from the developer's point of view.
It is not possible to have all the quality factors in one product. We may have to make
some trade-offs in this regard.
Availability: It refers to the percentage of the planned uptime during which the system
is actually available for use and fully operational.
Availability = MTTF / (MTTF + MTTR)
where MTTF is the mean time to failure and MTTR is the mean time to repair.
Uptime and downtime are two important concepts.
Availability requirements become more complex and more important for websites.
A typical availability requirement might read like this: The system shall be at least
99.5 percent available on weekdays between 6:00 a.m. and midnight.
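The availability formula above can be checked with a short computation; the MTTF and MTTR figures below are made-up values for illustration.

```python
def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR), expressed as a fraction of uptime."""
    return mttf_hours / (mttf_hours + mttr_hours)

# e.g. a failure every 995 hours on average, taking 5 hours to repair:
print(round(availability(995.0, 5.0), 3))  # 0.995, i.e. 99.5 percent
```

A system that fails often but is repaired quickly can thus have the same availability as one that fails rarely but takes long to repair.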
Efficiency: Efficiency is a measure of how well the system utilizes processor capacity,
disk space, memory and communication bandwidth.
If the system consumes all available resources, users will encounter degraded
performance, a visible indication of inefficiency.
Flexibility: It measures how easy it is to add new capabilities to the product. Also
known as extensibility, augmentability, extendability or expandability.
Software Quality Attributes
Important Primarily to Users: Availability, Efficiency, Flexibility, Integrity, Interoperability, Reliability, Robustness, Usability
Important Primarily to Developers: Maintainability, Portability, Reusability, Testability
If developers anticipate system enhancements they can choose design approaches
that maximize software flexibility.
Integrity: It deals with blocking unauthorized access to system functions, preventing
information loss, ensuring that the software is protected from virus infection, and protecting
the privacy of data entered into the system.
Integrity requirements do not have any tolerance for error. Data and access must be
protected without fail.
Integrity requirements include:
- User identity verification
- User privilege levels
- Access restrictions
- Data protection
Interoperability: It indicates how easily the product can exchange data and services
with other systems.
Reliability
The probability of the software executing without a failure for a specified period of
time is known as reliability. Software reliability can be estimated from the percentage of
operations that are completed correctly and the average length of time the system runs
before failing. Users expect the system to operate with high reliability.
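One simple way to estimate reliability from test data, as the text suggests, is the fraction of operations completed correctly; the operation counts below are hypothetical.

```python
def reliability(correct_operations, total_operations):
    """Estimate reliability as the proportion of correctly completed operations."""
    return correct_operations / total_operations

# e.g. 9,990 of 10,000 observed operations completed correctly:
print(reliability(9_990, 10_000))  # 0.999
```

More sophisticated reliability models also use the mean time to failure, but the proportion of correct operations is the simplest estimator consistent with the definition above.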
Robustness
Robustness is sometimes considered an aspect of reliability. Robustness is the
degree to which a system continues to function properly when confronted with invalid
inputs, defects or unexpected operating conditions. This property of the software is also
called fault tolerance. Robust software recovers gracefully.
Usability
This factor is also referred to as ease of use and human engineering. Users expect
user friendliness. Usability also encompasses how easy it is for new or infrequent users to
learn to use the product.
Attributes Important to Developers
Maintainability
It indicates how easy it is to correct a defect or modify the software. This concept is
very important as the software undergoes several versions due to changes suggested by
the users/stakeholders. Maintainability is measured in terms of the average time required to
fix a problem and the percentage of fixes that are made correctly.
Portability
It is the ability of the software to work in different operating environments. Portability
goals should identify those portions of the product that must be movable to other
environments, and the software should be developed accordingly.
Reusability
It indicates the relative effort involved in converting a software component for use in
other applications. Reusable software must be modular, well documented, and independent
of a specific application and operating environment. Object-oriented analysis and design
techniques are used to develop reusable components.
Testability
It is also known as verifiability and refers to the ease with which software components
or the integrated product can be tested to look for defects. Testability is very important as
the product undergoes several revisions, and each time it will undergo regression testing to
verify that the original functionalities are retained without any damage due to the changes.
Summary
In this unit we tried to explain all fundamental concepts in connection with requirements
engineering. A detailed discussion was provided on three important activities:
- Requirements elicitation
- Requirements specification
- Requirements validation
Several techniques that are currently being used are also given for clear understanding.
Further, requirement specification techniques and the quality factors that are to be considered
as a part of the requirements study are also discussed.
Sample Questions Unit II
1. Define system engineering.
2. What are the major types of activities in requirements engineering?
3. What is requirements elicitation?
4. What are the differences between functional and non-functional requirements?
5. What is requirements elicitation process?
6. Describe the elicitation technique called task analysis.
7. What is a use case? What are the advantages of use case diagrams?
8. How do you develop use cases?
9. What are the methods of requirements validation?
10. What is requirements analysis?
11. Why is requirements analysis very important?
12. What are the analysis models?
13. What is a data flow diagram? Why is it required?
14. What is ER diagram? Explain its importance?
15. List the major drawbacks of using natural languages for specifying requirements.
16. Explain the following concepts from ER modeling: entity, entity type, attribute,
attribute value and relationship.
17. Explain how prototyping is useful for validating requirements?
18. What exactly do you mean by system modeling?
19. Identify important activities in negotiating requirements?
20. What are the characteristics of SRS document?
UNIT III
SOFTWARE DESIGN
3.1. INTRODUCTION
In the previous unit, we discussed the problem and a solution to the problem. The
requirement analysis phase deals completely with the problem domain. In software design
we move from the problem domain to a solution domain. In other words, design is a
problem-solving activity. The outputs of the requirement analysis, such as functional
models, behavioral models, use cases etc., will be inputs to the design phase. The design
process is not an independent activity. It cannot be separated from either the preceding
requirements engineering phase or the subsequent documentation of the design in a
specification; these activities overlap. Further, the software design process is not analysis.
We can have several acceptable designs (solutions) to a problem rather than a single best
design (solution), depending on the trade-offs we make, such as between speed and
robustness. Changes in requirements may cause changes in the design or implementation.
3.1.1 The Design Process
Software design is an iterative process through which requirements are translated
into a blueprint for constructing the software.
Design and Software quality
- The design must implement all of the explicit requirements contained in the analysis
model, and it must accommodate all of the implicit requirements desired by the
customer.
- The design must be a readable, understandable guide for those who generate code
and for those who test and subsequently maintain the software.
- The design should provide a complete picture of the software, addressing the
data, functional and behavioral domains from an implementation perspective.
CRITERIA FOR A GOOD DESIGN
- A design should be modular, i.e. the software should be logically partitioned into
elements that perform specific functions and sub-functions.
- A design should be derived from the information obtained during software
requirement analysis.
- A design should lead to interfaces that reduce the complexity of connection between
modules and with the external environment.
DESIGN PRINCIPLES
Creative skill, past experience, a sense of what makes good software and an overall
commitment to quality are critical success factors for a competitive design.
The design model is the equivalent of an architect's plan for a house:
- Three-dimensional rendering of the house.
- Plumbing layout, circuit layout.
Basic design principles enable the software engineer to navigate the design process.
- The design process should not suffer from tunnel vision.
- The design should be traceable to the analysis model.
- The design should not reinvent the wheel; reuse existing design components.
- The structure of the software design should imitate the structure of the problem
domain.
- The design should exhibit uniformity and integration. Rules of style and format
should be defined for the design team.
- The design should be structured to accommodate change.
- The design should be structured to degrade gently, even when aberrant data, events,
or operating conditions are encountered. It should not simply "bomb".
- Design is not coding; coding is not design.
- The design should be assessed for quality as it is being created.
- The design should be reviewed to minimize conceptual errors.
When the design principles described above are properly applied the software engineer
creates a design that exhibits both external and internal quality factors. External Quality
Factors are seen by users such as speed, reliability, correctness and usability. Internal
Quality Factors are seen by designers, such as complexity, coupling and cohesion. These
concepts are explained later in subsequent sections.
3.2 DESIGN CONCEPTS
A set of fundamental design concepts has evolved over the years. These
design concepts provide the software engineer with different perspectives on the design
methodology he adopts. The design concepts discussed in this section provide the
necessary framework for getting the product to work in the right way.
Thus, the design activity is a fundamental phase in the software development process
that progressively maps all the requirements in the SRS document to a set of executable
statements which may be the final product. The output of the design activity is the software
design. The software design is nothing but a decomposition of the system into parts such that
each part has a lower complexity than the system as a whole. All these parts put together
constitute a solution to the user's problem. The complexity of the individual components
should not exceed the overall complexity of the whole problem. This decomposition can
be carried on until we reach a stage where implementation can be done in terms of a programming
language in a straightforward way. This type of module decomposition is called the top-down
strategy. Similarly, we have the bottom-up strategy. In this strategy the
design process consists of defining modules that can be iteratively
combined together to form higher-level components. The strategy is quite useful especially
when we reuse modules from a library to build a new system instead of building such a
system from scratch. The entire system is constructed by assembling lower-level components
in an iterative fashion.
Till now we have used the notion of a module in an intuitive way. We should understand
what constitutes a module, what the main characteristics of a module should be, and so
on. From the programming language perspective, a module may be considered an
identifiable program segment with respect to compilation. But with respect to software
design, a module is defined as an identifiable unit in the design. While decomposing the
system, we should take into consideration certain desirable features of decomposition,
irrespective of the type of system or the design method used. These features in some
sense are used as measures of quality design and also facilitate maintenance and reusability.
A clear separation of concerns into different modules and locality of implementation are
essential features of decomposition. Some of the inter-related issues that have an impact
on the above features are:
- Abstraction
- Information hiding
- Modularity
- Complexity
- System Structure
3.2.1 Abstraction
Abstraction means concealing some details and focusing only on the essential features.
After all, the concealed details are not necessary for the user, who needs only the required
output. How the system produced that output is not a great concern to him. Take, for
example, a sorting module. The user need not know how the sorting process takes place;
he needs only the sorted output.
There are two types of abstraction
- Procedural abstraction
- Data abstraction
3.2.2 Procedural abstraction
The hierarchical decomposition of the system into smaller systems, which are relatively
easy to design, is nothing but procedural abstraction. This decomposition is done for
convenience; the user need not know about it. Procedural abstraction is aimed at
finding a hierarchy in the program's control structure, which specifies the order in which the
steps have to be executed.
Data Abstraction: This is similar to procedural abstraction in nature. Here it gives the
hierarchy in the program's data. Programming languages offer some primitive data structures
for integers, real numbers, characters and so on. Depending on the need for advanced
data structures, we may make use of these building blocks to construct more complex data
structures such as stacks, binary trees etc. A data type such as binary tree is characterized
by a set of objects and a set of operations on these objects. Here also, the type of data
structures and the operations on them need not worry the user. When the type of objects
and the associated operations are encapsulated in one module, this may be termed
object-oriented design. We will also consider another type of abstraction, namely control
abstraction, which is implicit when procedural abstraction is used.
3.2.3 Information hiding
Software design involves a sequence of decisions, such as the number of modules,
the number of tasks, the type of algorithms to be used, the order in which the tasks are to
be completed, and so on. Important choices are to be made on what type of decisions can
be hidden from other parts of the system and what decisions are to be made known to
other parts of the system. This is an important concept of modularity. In a way, information
hiding is closely related to the notion of abstraction. If a module hides some decisions,
these decisions will not cross the boundary of the module; they are bound to that
module only.
3.2.4 Complexity
Complexity refers to the attributes of the software that affect the effort needed to
construct or change a piece of software. Design is an important activity which needs
some resources and some time to complete, and a design may need to be changed if
necessary. If the design is very complex, it is difficult to maintain. Then the question arises:
is it possible to measure the complexity and express it in quantifiable terms? In the software
metrics literature, several metrics are available, such as:
- Size-based complexity metrics, e.g. lines of code (LOC)
- Structure-based complexity metrics, e.g. McCabe's cyclomatic complexity
A detailed discussion is beyond the scope of these notes.
3.2.5 System Structure
Once we have completed the design process, we can represent the set of modules and
their mutual relationships by means of a graph. The nodes of the graph correspond to
modules and the edges denote relations between modules. From an abstract point of view,
the system structure can be described in terms of mathematical relationships. Several
mathematical concepts can be used to study the interrelationships among the modules.
A relation on a set of modules S is a subset of S x S. If two modules Mi and Mj are in S,
we represent the fact that the pair <Mi, Mj> is in a relation r by using the infix notation
Mi r Mj. Since we are interested in describing the mutual relationships among different
modules, we will always implicitly assume the relations of interest to be irreflexive. This
implies that Mi r Mi cannot hold for any module Mi in S.

The transitive closure of a relation r on S is again a relation, denoted r+. For a pair
<Mi, Mj>, the relation r+ can be defined recursively as follows:

    Mi r+ Mj if and only if Mi r Mj, or there is an element Mk in S such that
    Mi r Mk and Mk r+ Mj.

A relation is a hierarchy if and only if there are no two elements Mi, Mj such that
Mi r+ Mj and Mj r+ Mi.

The transitive closure of a relation captures the intuitive notion of direct and indirect
relationships.
For example, for two modules A and B, A r+ B, where r is the "calls" relation, implies
that A calls B either directly or indirectly through a chain of calls.
Mathematical concepts can usually be understood more effectively if we can give
them a graphical representation.
A relation can be represented in graphical form as a directed graph, where the nodes
represent modules and the directed arcs represent the relationship.
A relation is a hierarchy if and only if there are no cycles in the graph of the relation;
this type of graph is called a directed acyclic graph.
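The transitive closure and the hierarchy test can be computed mechanically. The sketch below operates on a small, hypothetical "calls" relation given as a set of pairs; the module names M1, M2, M3 are invented for illustration.

```python
def transitive_closure(relation):
    """Fixed-point closure of a relation given as a set of (a, b) pairs."""
    closure = set(relation)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                # If a relates to b and b relates to d, then a relates to d.
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def is_hierarchy(relation):
    """A relation is a hierarchy iff its closure never holds in both directions."""
    closure = transitive_closure(relation)
    return not any((b, a) in closure for (a, b) in closure)

calls = {("M1", "M2"), ("M2", "M3"), ("M1", "M3")}
print(is_hierarchy(calls))                   # True: no cycles
print(is_hierarchy(calls | {("M3", "M1")}))  # False: M1 -> M3 -> M1 is a cycle
```

Note that the hierarchy test is exactly the acyclicity test on the directed graph of the relation, matching the directed-acyclic-graph characterization above.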
Fig 3.1 Graph representation of a relation among modules
a) General b) Directed acyclic graph
3.3 FEATURES OF GOOD DESIGN: COHESION AND COUPLING
In an earlier section we explained modular design and why it is so important.
The concept of functional independence is a direct outgrowth of modularity and the
concepts of abstraction and information hiding. In functionally independent modules, each
module performs a single-minded function and has less interaction with other modules. In
other words, each module addresses a specific function of the requirements and has a
simple interface.
What are the advantages of functional independence?
Software with independent modules is easier to develop, since the functions may
be compartmentalized and the interfaces are simplified. If an error occurs in one module, that
particular module may be modified and compiled without affecting other modules, and
error propagation is confined to that module. Further, reusable modules are possible
because of this functional independence. Hence a good design should have functional
independence as far as possible. Good software is designed with an architecture that
maximizes its flexibility whilst minimizing the testing and maintenance issues of the software.
Thus the system should have a proper structure with good independence.
How do we assess the functional independence?
It is assessed by two qualitative criteria.
- Cohesion
- Coupling

Cohesion is an indication of the relative functional strength of a module.
Coupling is an indication of the relative independence among modules.
Cohesion is a natural extension of the information hiding described earlier. A cohesive
module performs a single task with little dependence on other modules. Coupling is an
indication of interconnection among modules in a software structure. Coupling depends on
the interface complexity between the modules, which acts as an entry point or reference for
the module. Hence we should always strive for high cohesion and low coupling for a good
design.
Yourdon and Constantine identify the following seven levels of cohesion, in decreasing
order of strength:
Functional High
Sequential
Communicational
Procedural
Temporal
Logical
Coincidental Low
- Coincidental cohesion: A module is said to have coincidental cohesion if it performs
a set of tasks that relate to each other very loosely. The module may contain a
random collection of functions that have been put into the module without any
proper thought.
- Logical cohesion: A module is said to be logically cohesive if all the elements of
the module perform similar operations. One example is a module that contains all
input routines. These routines do not call one another and they do not pass
information to each other.
- Temporal cohesion: When a module contains functions that are related by the fact
that all the functions must be executed at the same time, the module is said to
exhibit temporal cohesion, e.g. an initialization module.
- Procedural cohesion: A module exhibits procedural cohesion if it contains a number
of components that have to be executed in some given order, e.g. a module which
reads some data, then searches a table and then prints a result.
- Communicational cohesion: This occurs if the components of a module refer to or
operate on the same data/data structures, e.g. a set of functions defined on an
array or a stack.
- Sequential cohesion: The components of a module form parts of a sequence,
where the output from one element of the sequence is input to the next.

- Functional cohesion: Here all the components of a single module cooperate with
each other to carry out one single function, e.g. subroutines.
We can clearly see that achieving the highest possible cohesion between the components
of a module is not an easy task. We need to make a lot of trade-offs to make
software design a quality-driven activity.
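The contrast between the weakest and strongest levels can be illustrated with two hypothetical modules; both are invented examples, not code from any real system.

```python
# Coincidental cohesion: unrelated tasks thrown into one module
# without any proper thought.
class MiscUtils:
    def print_report(self, text): ...
    def reverse_string(self, s): return s[::-1]
    def open_socket(self, host): ...

# Functional cohesion: every statement cooperates toward one single
# function -- here, computing a square root by Newton's method.
def compute_square_root(x, tolerance=1e-9):
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2
    return guess

print(round(compute_square_root(9.0), 6))  # 3.0
```

`MiscUtils` is hard to name honestly, which is itself a symptom of coincidental cohesion; `compute_square_root` has one clear purpose and a simple interface.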
COUPLING
Coupling specifies the degree of interdependence between modules. If two modules
interchange a large amount of data, then they are highly interdependent; we can comprehend
such a set of modules only as a whole, and changes may cause a ripple effect: whenever
we want to change one module, the change will have a direct effect on other dependent
modules.
In loosely coupled modules, the modules are rather independent and easier to
comprehend.
Loose coupling is what is required for a good design.
The following types of coupling can be identified.
Content coupling High
Common Coupling
External Coupling
Control Coupling
Stamp Coupling
Data Coupling Low
- Content coupling: Here one module directly affects the working of another.
Content coupling occurs when a module changes another module's data, or when
control is passed from one module to the middle of another (a branch).
This type of coupling should always be avoided.
- Common coupling: Two modules are said to have common coupling if they share
a global data item.
- External coupling: Modules communicate through an external medium such as a
file.
- Control coupling: Control coupling exists between two modules if data from one
module is used to direct the order of instruction execution in another by passing
the necessary control information.
- Stamp coupling: This occurs when complete data structures are passed from one
module to another. With stamp coupling, the precise format of the data structures
is a common property of those modules.
- Data coupling: Two modules are data coupled if they communicate using an
elementary data item that is passed as a parameter between the two.
Nowadays most programming languages have more flexible means of passing
information from one module to another, so we may require a different set of
levels of coupling.
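A small sketch can contrast common coupling (a shared global) with data coupling (an elementary item passed as a parameter); the function names and the tax-rate scenario are invented for the example.

```python
# Common coupling: the function depends on a global shared with other
# modules -- the dependence is hidden from its interface and fragile.
tax_rate = 0.18

def price_with_tax_common(price):
    return price * (1 + tax_rate)   # hidden dependence on the global

# Data coupling: the elementary item travels explicitly as a parameter,
# so the dependence is visible in the interface.
def price_with_tax_data(price, rate):
    return price * (1 + rate)

print(round(price_with_tax_common(100.0), 2))       # 118.0
print(round(price_with_tax_data(100.0, 0.18), 2))   # 118.0
```

Both compute the same result, but only the data-coupled version can be understood, tested and reused without knowing anything about the rest of the program.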
3.4 THE DESIGN MODELS
The design model can be looked at from two different viewpoints:
- Analysis model
- Design model
While discussing these models we have to understand what the essential design tasks
in one model are and how they are converted to the other model. Further, we have to
understand how each element of the analysis model, at different levels of detail, is transformed
into a design equivalent and then refined iteratively. There is a hairline boundary between
the analysis model and the design model. In some cases it is possible to find a clean
distinction between these models, and in some cases it is not.
The elements of the design model might use the same UML diagrams that are used in
the analysis model. The only difference is that the UML diagrams used in the analysis
model are refined and elaborated as part of the design model. Pressman (2005) explained
the interrelationship between the analysis model and the design model through two
dimensions, namely the process dimension and the abstraction dimension. Figure 3.2
below is due to Pressman (2005).
Figure 3.2 Dimensions of Design Models (process dimension vs. abstraction dimension)
The elements in the design model are classified as follows.
- Data Design Elements
- Architectural Design Elements
- Interface Design Elements
- Component Level Design Elements
- Deployment level design elements
3.4.1 Data Design Elements
Data design creates a high-level abstract model of data and/or information. The
architecture of data is an important part of design and has some influence on the architecture of
[Figure 3.2 maps analysis model elements to design model elements. The analysis model comprises class diagrams, analysis packages, CRC models, collaboration diagrams, data flow diagrams, control flow diagrams, processing narratives, use-case text and use-case diagrams, activity diagrams, swim lane diagrams, state diagrams and sequence diagrams, together with requirements, constraints, interoperability targets and configuration. The design model comprises architecture elements (design class realizations, subsystems, collaboration diagrams), interface elements (technical interface design, navigation design, GUI design), component-level elements (component diagrams, design classes, activity diagrams, sequence diagrams) and deployment-level elements (deployment diagrams), each progressively refined.]
the software. The data design activity translates data objects defined in the analysis model
into appropriate data structures at the software component level. Wasserman (1980)
proposed a set of guidelines that may be followed to specify and design such data structures.
1. The systematic analysis principles applied to function and behavior should also be
applied to data.
2. All data structures and the operations to be performed on each should be identified.
3. A mechanism for defining the content of each data object should be established
and used to define both data and the operations applied to it.
4. Class diagrams define the data items (attribute) contained within a class and the
processing (operations) that are applied to these data items.
5. Low level data design decisions should be deferred until late in the design process.
A step wise refinement is advised for the design of data.
6. The representation of data structure should be known only to those modules that
must make direct use of the data contained within the structure.
7. A library of useful data structures and the operations that may be applied to them
should be developed.
8. A software design and programming language should support the specification
and realization of abstract data types.
These principles are important for component-level data design.
3.4.2 Architectural Design Elements
Architectural design provides an overall view of the software. The architectural
model is derived from three different sources: information about the application domain;
data flow diagrams, analysis classes and their relationships and collaborations; and
architectural patterns.
Software that is developed for computer-based systems exhibits one of many
architectural styles and architectural patterns.
Each architectural style covers the following:
- A set of components (e.g. databases, computational modules)
- A set of connectors that enable communication, co-ordination and co-operation
among components
- Constraints that specify how components can be integrated to form the system
- Semantic models that help to understand the overall properties of the system
Thus an architectural style is a transformation that is imposed on the design of an
entire system, while an architectural pattern imposes a transformation on the design of an
architecture. An architectural pattern focuses only on some aspect of the architecture rather
than on the architecture as a whole.
A pattern may impose a rule on the architecture specifying how the software will
handle some aspect of its functionality at the infrastructure level, e.g. concurrency or
interrupts.
While deciding on the required architecture, we have to consider styles as well as
patterns. We shall briefly discuss the architectural styles and patterns.
Architectural styles fall into several categories, based on the many systems that have
been built over the past several years:
- Data centred Architectures
- Data flow Architecture
- Call and return Architecture
- Object Oriented Architecture
- Layered Architecture
Data-centered architectures: A database or file is at the center of this architecture,
and all other components perform operations such as addition, deletion, modification and
updating on this centralized database. The client software accesses the database and
makes changes independently of other components; a mechanism needs to be incorporated
to inform other clients about the changes made by any one client. Client components
execute processes independently.
Data-flow architectures: This architecture is applied when input is transformed into
output through a series of computations.
A pipe and filter architecture is a typical example of this data-flow architecture; it is
given in figure 3.3.
Figure 3.3 Pipe and Filter Architecture
Each filter works independently of the components upstream and downstream of it.
A filter accepts data in a particular format, transforms it, and sends it through a
pipe to the next filter, ultimately producing output in a specified format.
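The filter chain described above can be sketched in code. This is a minimal illustration, not part of any standard library; the filter names (parse, validate, format_report) and the data format are assumptions made for the example.

```python
# A minimal pipe-and-filter sketch: each filter is an independent
# function that accepts data in one format and emits it in another.
# The "pipe" is simply generator composition over a stream of records.

def parse(lines):
    """Filter 1: raw text lines -> (name, value) pairs."""
    for line in lines:
        name, value = line.split(",")
        yield name.strip(), int(value)

def validate(records):
    """Filter 2: drop records with negative values."""
    for name, value in records:
        if value >= 0:
            yield name, value

def format_report(records):
    """Filter 3: pairs -> formatted output lines."""
    for name, value in records:
        yield f"{name}: {value}"

def pipeline(lines):
    # Filters are chained through generator "pipes"; each works
    # independently of the components upstream and downstream of it.
    return list(format_report(validate(parse(lines))))
```

Each filter can be replaced or reordered without touching the others, which is the main appeal of the style.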

Call and return architectures
This style yields a good program structure, which can be modified and scaled easily.
Two sub-types exist in this category:
- Main program/subprogram architecture. This defines a control hierarchy where
a main program invokes a number of other program components.
- Remote procedure call architecture. This is a distributed architecture where several
components can invoke and access other components through remote procedure
calls.
Object Oriented Architectures
In this architecture the components of a system encapsulate data and all operations
on that data; communication and co-ordination across the components is achieved
through message passing.
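A minimal sketch of this style, with hypothetical Account and ATM components: each encapsulates its own data, and coordination happens only through messages (method calls).

```python
# Illustrative object-oriented style: each component encapsulates its
# own data and operations, and components coordinate purely through
# message passing (method calls). Class names are hypothetical.

class Account:
    def __init__(self, balance):
        self._balance = balance          # data hidden inside the component

    def withdraw(self, amount):          # operation exposed as a message
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        return self._balance

class ATM:
    """A collaborating component that never touches Account's data
    directly; it only sends messages."""
    def __init__(self, account):
        self._account = account

    def dispense(self, amount):
        return self._account.withdraw(amount)
```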
Layered Architecture
In this layered architecture, we can identify distinct classes of services that can be
arranged hierarchically. The system can be depicted as a series of concentric circles
where services in one layer depend on the services of inner layers. This type of system
is divided into four layers: the core layer, utility layer, application layer and user
interface layer.
Figure 3.4 Layered Architecture



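The four layers can be sketched as follows. Every function name and the division of work between layers are assumptions made for illustration; the point is that each layer depends only on the services of the layers beneath it.

```python
# A toy layered system using the four layers named in the text.
# Each layer calls only the layers beneath it; inner layers never
# reach outward.

def core_store(db, key, value):          # core layer: raw storage
    db[key] = value
    return value

def utility_normalize(text):             # utility layer: shared helpers
    return text.strip().lower()

def app_save_user(db, name):             # application layer
    clean = utility_normalize(name)      # uses the utility layer
    return core_store(db, "user", clean) # uses the core layer

def ui_register(db, raw_name):           # user interface layer
    saved = app_save_user(db, raw_name)  # depends only on the app layer
    return f"registered: {saved}"
```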
These architectural styles are only a small subset of those available to the software
designer. Once requirements engineering uncovers the characteristics and constraints
of the system to be built, the architectural style, or combination of styles, that best fits
those characteristics and constraints can be chosen. In certain cases more than one
architectural style may be appropriate, and alternatives might be designed and evaluated.
For example, a layered architecture combined with a data-centered architecture may be
quite useful in a database application.
3.4.3 Interface Design Elements
The interface design elements for software tell how information flows into and out of
the system and how it is communicated among the components defined as a part of the
architecture.
Interface design can be divided into internal, external and human-computer interface
design. Internal interface design is concerned with the design of interfaces between
modules within the system. For example, the design of a system may be made up of a
data entry module, a data storage module and a data reporting module. The
interconnections between these three modules have to be identified clearly. For example,
when the data entry module receives some data that the storage module determines to
be invalid, there must be some module responsible for informing the system user of the
error and obtaining new data.
External interface design is concerned with the design of interfaces to other external
entities. A software application may require the input of some sensory data, which
requires interfacing with an external entity, for example a data sensor.
Finally, human-computer interface design is concerned with providing good usability
for the system user. Although aspects of usability include factors of cost and efficiency,
human factors issues have gained increasing consideration in the design of human-computer
interfaces. User interface design should be user-centered. The interface should ensure that
users can interact with the system on their terms. It should be logical and consistent,
and should include facilities to help users with the system and to recover from any mistakes
they may make. Graphical interfaces, where the user has multiple windows, menus, iconic
object representations and a pointing device, have the major advantage that they are easy
to use since they are intuitive and largely self-evident. A detailed explanation of their use
should not be required, and consistency can easily be maintained by using the same
representation for the same actions in different applications, such as the use of icons in
the Windows environment.
Many software development tools, for example programming languages, spreadsheets
and databases, provide user interface development tools. These allow you to design your
own user interface conforming to certain standards within the software. For example, we
may be able to define overlapping windows in various sizes, pull-down menus, dialogue
boxes and pointing device support.
3.4.4 Component level design elements
Once data, architectural and interface design have been completed, more detailed
attention is paid to the individual components of the system within component level
design, also referred to as procedural design. Thus the component level design for the
software fully describes the internal details of each software component. This means that
the data structures identified in the data model, the interfaces identified during interface
design and the individual modules determined within the architectural design now have
to be described in more detail. There are a number of ways in which this can be done.
It is possible to use pseudocode, within which simple natural language (e.g. English) is
combined with structured programming constructs in order to describe an algorithm.
Alternatively it is possible to use a graphical representation. Flow charts are a popular
and long-established modeling technique that is particularly useful for the representation
of the algorithms within a module. Within the context of object-oriented software
engineering, a component is represented in UML diagram form.
Within component level design, pseudocode, flow charts and UML aim to reduce the
level of abstraction of the design models. The resulting procedural specification can then
easily be translated into program code.
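As a sketch of that translation, the pseudocode fragment below (an assumed order-total algorithm, not from the text) is refined into program code almost line for line.

```python
# The pseudocode
#   set total to 0
#   for each item in the order
#       if the item is taxable then add price * (1 + rate) to total
#       else add price to total
#   return total
# translates directly into program code. The field layout
# (price, taxable) and the default tax rate are assumptions.

def order_total(items, rate=0.1):
    total = 0.0
    for price, taxable in items:
        if taxable:
            total += price * (1 + rate)
        else:
            total += price
    return round(total, 2)
```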
3.4.5 Deployment Level Design Elements
Deployment level design elements indicate how software functionality and subsystems
will be allocated within the physical computing environment such as control panels, servers
and desktops that will support the software.
During design, a UML deployment diagram is developed based on the typical application,
and the different subsystems are deployed to different physical environments. The
deployment diagram shows the computing environment but does not explicitly indicate
the configuration details.
3.5 PATTERN BASED SOFTWARE DESIGN
Software developers always look for components that can be reused in deriving
a complete solution. The basic assumption is that these components are expected to
meet the requirements of the design.
In software engineering, a design pattern is a general reusable solution. A design
pattern is not a finished design that can be transformed directly into code. It is a description
or template for how to solve a problem that can be used in many different situations.
Object-oriented design patterns typically show relationships and interactions between
classes or objects, without specifying the final application classes or objects that are
involved. Algorithms are not thought of as design patterns, since they solve computational
problems rather than design problems. It is to be noted that not all software patterns are
design patterns. Design patterns deal specifically with problems at the level of software
design. Other kinds of patterns, such as architectural patterns, describe problems and
solutions that have different scopes.
Reusing design patterns helps to prevent subtle issues that can cause major problems,
and it also improves code readability for coders and architects who are familiar with the
patterns.
The documentation for a design pattern describes the context in which the pattern is
used, the forces within the context that the pattern seeks to resolve, and the suggested
solution. Even though different templates are used for documenting design patterns, we
shall give a standard template. It contains the following sections:
- Pattern Name and Classification: A descriptive and unique name that helps in
identifying and referring to the pattern.
- Intent: A description of the goal behind the pattern and the reason for using it.
- Also known As: Other names for the pattern.
- Motivation (Forces): A scenario consisting of a problem and a context in which
this pattern can be used.
- Applicability: Situations in which this pattern is usable, the context for the pattern.
- Structure: A graphical representation of the pattern, class diagrams and Interaction
diagrams may be used for this purpose.
- Participants: A listing of the classes and objects used in the pattern and their roles
in the design.
- Collaboration: A description of how classes and objects used in the pattern interact
with each other.
- Consequences: A description of the results, side effects and trade offs caused by
using the pattern.
- Implementation: A description of an implementation of the pattern; the solution
part of the pattern.
- Sample Code: An illustration of how the pattern can be used in a programming
language
- Known Uses: Examples of real usages of the pattern.
- Related Patterns: Other patterns that have some relationship with the pattern;
discussion of the differences between the pattern and similar patterns.
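As a brief illustration of the template in use, here is a compact Observer pattern with a few of the sections above recorded as comments. The class and the sample use are illustrative, not from the text.

```python
# Pattern name: Observer.
# Intent: let many objects react when one object changes state,
#         without the subject knowing their concrete types.
# Participants: Subject (publisher); observers (any callable).
# Collaboration: the subject calls each attached observer in turn.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

# Sample code / known use: a logger observing a job queue.
log = []
queue = Subject()
queue.attach(lambda e: log.append(f"seen: {e}"))
queue.notify("job-42")
```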
3.6 COMPONENT LEVEL DESIGN
In section 3.4.4 we briefly explained the concept of component level design, which
focuses on the internals of the individual components.
The Object Management Group (OMG) UML specification defines a component as "a
modular, deployable and replaceable part of a system that encapsulates implementation
and exposes a set of interfaces".
From the object-oriented viewpoint, a component contains a set of collaborating
classes. Each class within a component is fully elaborated to include all attributes
and operations that are relevant to its implementation. As part of the design elaboration,
all interfaces (messages) that enable the classes to communicate and collaborate with
other design classes must also be defined. In order to achieve this, the designer starts
with the analysis model and elaborates analysis classes (for components that relate to
the problem domain) and infrastructure classes (for components that provide support
services for the problem domain).
As an illustrative example of design elaboration, let us discuss the example given
by Pressman (2005).
Consider the software that is to be built for a sophisticated print shop. The objective
of the software is to collect the customer's requirements at the front counter, cost a print
job and then pass the job on to an automated production facility. During requirements
engineering an analysis class called PrintJob was derived. The attributes and operations
defined during the analysis are noted at the top left of fig. 3.4. During the architectural
design, PrintJob is defined as a component within the software architecture and is
represented using the shorthand UML notation shown in the middle right of the figure.
You can see from the figure that PrintJob has two interfaces: computeJob, which provides
job costing capability, and initiateJob, which passes the job along to the production
facility. Component level design starts at this point. The details of the component PrintJob
must be elaborated to provide sufficient information to guide implementation. The
completed figure 3.4 contains more detailed attribute information as well as an expanded
description of the operations required to implement the component.
Figure 3.4 Elaboration of a design component
The elaboration activity is applied to every component defined as part of the
architectural design. Once it is completed, further elaboration is applied to each attribute,
operation and interface. The data structures appropriate for each attribute must be
specified. In addition, the algorithmic details required to implement the processing logic
associated with each operation are designed. Finally, the mechanisms required to
implement the interfaces are designed.
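A hedged sketch of what the elaborated PrintJob component might look like in code. The two interfaces follow the description above; the attribute names and the costing rule are assumptions, not Pressman's actual design.

```python
# Illustrative elaboration of the PrintJob component: attributes from
# the data design plus the two interfaces named in the text.

class PrintJob:
    def __init__(self, number_of_pages, price_per_page):
        self.number_of_pages = number_of_pages
        self.price_per_page = price_per_page

    def compute_job(self):
        """computeJob interface: job costing capability (assumed rule)."""
        return self.number_of_pages * self.price_per_page

    def initiate_job(self, production_queue):
        """initiateJob interface: pass the job along to production."""
        production_queue.append(self)
        return len(production_queue)
```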
3.6.1. The Conventional View
Conventional view: a component is a functional element of a program that incorporates
processing logic, the internal data structures that are required to implement the processing
logic, and an interface that enables the component to be invoked and data to be passed to
it.
A conventional component, also called a module, resides within the software
architecture and serves one of three important roles:
1. A control component that coordinates the invocation of all other problem domain
components,
2. A problem domain component that implements a complete or partial function that
is required by the customer, or
3. An infrastructure component that is responsible for functions that support the
processing required in the problem domain.
To illustrate the process of design elaboration for conventional components, once
again we shall consider the software to be built for a photocopying centre.
A set of data flow diagrams would be derived during analysis modeling. These data
flow diagrams are mapped into the structure chart shown in fig 3.5.
Figure 3.5 Structure chart for a conventional system
Here each box represents a software component. Shaded boxes are equivalent in
function to the operations defined for the PrintJob class discussed in the above section.
During component level design, each module in fig 3.5 is elaborated. The module
interface is defined explicitly. Each data or control object that flows across the interface
is represented. The data structures and algorithms for implementation are identified. To
make these concepts clear, let us consider an illustrative example describing the module
computePageCost and its interface. Fig 3.6 depicts the component level design using a
modified UML notation. The computePageCost module accesses data by invoking the
module getJobData, which allows all relevant data to be passed to the component, and a
database interface accessCostsDB, which enables the module to access a database
that contains all printing costs. The design elaboration continues until sufficient detail is
provided to guide construction of the component.
Figure 3.6 Component level design for computePageCost
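The collaboration just described can be sketched as follows. The function names mirror the modules in the text (computePageCost, getJobData, accessCostsDB), but their bodies and the cost table are simplified stand-ins.

```python
# Sketch of the computePageCost module and its two collaborators.
COSTS_DB = {"standard": 0.05, "color": 0.25}   # assumed cost table

def get_job_data(job):
    """Collaborating module: exposes the data the component needs."""
    return job["pages"], job["paper"]

def access_costs_db(paper):
    """Database interface: look up the per-page printing cost."""
    return COSTS_DB[paper]

def compute_page_cost(job):
    pages, paper = get_job_data(job)
    return pages * access_costs_db(paper)
```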
3.6.2. Designing Class-Based Components
The following principles can be used to guide the designer as each software component
is developed.
- The Open-Closed Principle (OCP). A module [component] should be open for
extension but closed for modification. The designer should specify the component
in a way that allows it to be extended without the need to make internal (code- or
logic-level) modifications to the component.
- The Liskov substitution Principle (LSP). Subclasses should be substitutable for
their base classes. A component that uses a base class should continue to function
properly if a class derived from the base class is passed to the component instead.
- The Dependency Inversion Principle (DIP). Depend on abstractions; do not depend
on concretions. Abstractions are the places where a design can be extended
without great complication.
- The Interface Segregation Principle (ISP). Many client-specific interfaces are better
than one general purpose interface. The designer should create a specialized
interface to serve each major category of clients.
- The Release Reuse Equivalency Principle (REP). The granule of reuse is the
granule of release. When classes or components are designed for reuse, an
implicit contract is established between the developer of the reusable entity
and the people who will use it.
- The Common Closure Principle (CCP). Classes that change together belong
together. Classes should be packaged cohesively. When some characteristic of
an area must change, it is likely that only those classes within the package will
require modification.
- The Common Reuse Principle (CRP). Classes that aren't reused together should
not be grouped together. When one or more classes within a package changes,
the release number of the package changes.
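Several of these principles can be seen in one small sketch: the reporter below is extended with a new output format by adding a class (OCP), depends only on an abstraction (DIP), and works for any well-behaved subclass (LSP). All names are illustrative.

```python
# Open for extension, closed for modification: new formats are added
# as new subclasses; report() itself never changes.

class Formatter:
    def render(self, data):
        raise NotImplementedError

class PlainFormatter(Formatter):
    def render(self, data):
        return ", ".join(data)

class HtmlFormatter(Formatter):          # extension, not modification
    def render(self, data):
        return "<ul>" + "".join(f"<li>{d}</li>" for d in data) + "</ul>"

def report(data, formatter):
    # Depends on the abstraction (DIP); any substitutable subclass
    # works (LSP).
    return formatter.render(data)
```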
3.6.3 Component Level Design Guidelines
- Components
o Naming conventions should be established for components that are specified as
part of the architectural model and then refined and elaborated as part of the
component level model.
- Interfaces
o Interfaces provide important information about communication and collaboration
(as well as helping us to achieve the OCP)
- Dependencies and Inheritance
o It is a good idea to model dependencies from left to right and inheritance from
bottom (derived classes) to top (base classes).
3.6.4 Cohesion
Cohesion implies that a component or class encapsulates only attributes and operations
that are closely related to one another and to the class or component itself.
- Levels of cohesion
- Functional: occurs when a module performs one and only one computation
and then returns a result.
- Layers: occurs when a higher layer accesses the services of a lower layer, but
lower layers do not access higher layers.
- Communicational: All operations that access the same data are defined within
one class.
- Sequential: Components or operations are grouped in a manner that allows
the first to provide input to the next and so on.
- Procedural: Components or operations are grouped in a manner that allows one
to be invoked immediately after the preceding one was invoked.
- Temporal: Operations that are performed to reflect a specific behavior or state.
- Utility: Components, classes, or operations that exist within the same category
but are otherwise unrelated are grouped together.
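Two of these levels can be illustrated with hypothetical code: a functionally cohesive operation that performs one computation, and a communicationally cohesive class whose operations all access the same data.

```python
# Functional cohesion: one computation, one result.
def page_count(words, words_per_page=300):
    return -(-words // words_per_page)   # ceiling division

# Communicational cohesion: every operation in the class works on
# the same data (the job record).
class JobRecord:
    def __init__(self, pages):
        self.pages = pages

    def add_pages(self, n):
        self.pages += n

    def is_large(self):
        return self.pages > 100
```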
3.6.5 Coupling
- Conventional View:
o The degree to which a component is connected to other components and to
the external world.
- OO View:
o A qualitative measure of the degree to which classes are connected to one
another; coupling should be kept as low as possible.
- Level of Coupling
o Content: Occurs when one component surreptitiously modifies data that is
internal to another component. This violates information hiding.
o Common: Occurs when a number of components all make use of a global
variable.
o Control: Occurs when operation A() invokes operation B() and passes a
control flag to B. The control flag then directs logical flow within B.
o Stamp: Occurs when Class B is declared as a type for an argument of an
operation of Class A. Because Class B is now a part of the definition of
Class A, modifying the system becomes more complex.
o Data: Occurs when operations pass long strings of data arguments. Testing
and maintenance become more difficult.
o Routine call: Occurs when one operation invokes another.
o Type use: Occurs when component A uses a data type defined in component
B.
o Inclusion or import: Occurs when component A imports or includes a package
or the content of component B.
o External: Occurs when a component communicates or collaborates with
infrastructure components (O/S function).
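Control coupling and a lower-coupling alternative can be contrasted in a short sketch; the functions are hypothetical.

```python
# Control coupling: the caller passes a flag that steers the callee's
# internal logic, so the caller must know how the callee works.
def render(record, as_html):
    if as_html:
        return f"<p>{record}</p>"
    return str(record)

# Lower (routine-call plus data) coupling: two focused operations;
# the caller simply picks one, passing only the data it owns.
def render_text(record):
    return str(record)

def render_html(record):
    return f"<p>{record}</p>"
```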
3.6.6 Conducting Component-Level Design
The steps discussed in this section provide a reasonable task set for designing a
component. It should be emphasized that (1) design classes in the problem domain are
usually custom-designed; however, if an organization has encouraged design for reuse,
there may be an existing component that fits the bill; (2) design classes corresponding to
the infrastructure domain can often be obtained from existing class libraries; and (3) a UML
collaboration diagram provides an indication of message passing between components.
- Step 1. Identify all design classes that correspond to the problem domain.
- Step 2. Identify all design classes that correspond to the infrastructure domain.
- Step 3. Elaborate all design classes that are not acquired as reusable components.
- Step 3a. Specify message details when classes or components collaborate.
- Step 3b. Identify appropriate interfaces for each component.
- Step 3c. Elaborate attributes and define data types and data structures required to
implement them.
- Step 3d. Describe processing flow within each operation in detail.
- Step 4. Describe persistent data sources (databases and files) and identify the
classes required to manage them.
- Step 5. Develop and elaborate behavioral representations for a class or component.
- Step 6. Elaborate deployment diagrams to provide additional implementation detail.
- Step 7. Factor every component-level design representation and always consider
alternatives.
In order to explain step 3a, let us consider figure 3.7 given below.
Figure 3.7 a) Collaboration diagram with messaging
b) Refactoring the interface and class definition of PrintJob
Fig 3.7 (a) illustrates a simple collaboration diagram for the printing system described
earlier. Three objects, ProductionJob, WorkOrder and JobQueue, collaborate to prepare a
print job for submission to the production stream. Messages are passed between objects
as shown by the arrows in the figure. As the design proceeds, each message is elaborated
by expanding its syntax.
Within the context of component level design, a UML interface is a group of externally
visible (i.e. public) operations. The interface contains no internal structure: it has no
attributes and no associations. In other words, an interface is the equivalent of an abstract
class that provides a controlled connection between design classes. The elaboration of
interfaces is illustrated in fig 3.4.
From fig 3.4 it can be argued that the interface initiateJob does not exhibit
sufficient cohesion. It performs three different sub-functions, namely:
- Building a work order
- Checking job priority
- Passing a job to production
Hence the interface design should be refactored. One way of doing this is to reexamine
the design classes and define a new class WorkOrder that now takes care of all activities
associated with the assembly of a work order. As shown in figure 3.7 (b), the operation
buildWorkOrder becomes a part of that class. The interface initiateJob would then
take the form shown in fig 3.7 (b). This interface is cohesive, focusing on one function.
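The refactoring described above might look like this in code. The class and method names follow the text (WorkOrder, buildWorkOrder, initiateJob), but the bodies are assumptions made for illustration.

```python
class WorkOrder:
    """Now owns all activities associated with assembling a work order."""
    def __init__(self, job_id):
        self.job_id = job_id
        self.built = False

    def build_work_order(self):
        self.built = True
        return self

class PrintJob:
    def initiate_job(self, job_id, production_queue):
        # Cohesive interface: delegate assembly to WorkOrder,
        # then submit the result to production.
        order = WorkOrder(job_id).build_work_order()
        production_queue.append(order)
        return order
```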
Figure 3.8 UML activity diagram for computePageCost
Fig 3.8 depicts a UML activity diagram for computePageCost. When activity
diagrams are used for component level design specification, they are generally represented
at a level of abstraction that is somewhat higher than source code.
The dynamic behavior of an object (an instantiation of a design class as the program
executes) is affected by events that are external to it and by the current state of the object.
This dynamic behavior is well represented by a state transition diagram using a UML
statechart, as given in fig 3.9.
Figure 3.9 State chart fragment for the print job class
Here each state may define entry/ and exit/ actions that occur as transitions into and
out of the state occur. In most cases these actions correspond to operations that are
relevant to the class being modeled. The do/ indicator provides a mechanism for
indicating activities that occur while in the state, and the include/ indicator provides a
mechanism for elaborating the behavior by embedding more statechart detail within the
definition of a state.
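One common way to realize such a statechart in code is a small transition table keyed by (state, event), with entry actions recorded as each transition fires. The states and events below are hypothetical, not the actual PrintJob statechart.

```python
class PrintJobStateMachine:
    # (current state, event) -> next state
    TRANSITIONS = {
        ("buildingJob", "submit"): "formingJob",
        ("formingJob", "jobComplete"): "submittingJob",
    }

    def __init__(self):
        self.state = "buildingJob"
        self.log = []                      # records entry actions

    def fire(self, event):
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"no transition for {event} in {self.state}")
        self.log.append(f"entry/{nxt}")    # entry action on the new state
        self.state = nxt
        return self.state
```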
This section briefly explained the guidelines to be followed for component level design.
One should not think that the component design is ever final: design is an iterative
process and there are always alternative design solutions.
3.7 USER INTERFACE DESIGN
The user interface of a system is the yardstick by which that system is judged. An
interface that is difficult to use results in a lot of user errors. Most computer users are
familiar with graphical user interfaces (GUIs), which support high-resolution color screens
and interaction using a mouse as well as a keyboard.
The software that is consciously developed is usually designed from the point of view
of the programmer, sometimes the marketing department, and only occasionally from the
user's point of view. The programmer has a different viewpoint, centering on technology
and programming methodology. The marketing department is focused on strategies that
improve the marketability of the software. Users tend to focus on their day-to-day tasks.
While it is the user's job to focus on tasks, the designer's job is to look beyond the task to
identify the user's goals. Therein lies the key to creating the most effective software
solutions. The software designer must be sensitive to and aware of the user's goals during
the software development process.
Essence of user interface design
There is no such thing as a good user interface in the abstract, just as there is no such
thing as a good furniture arrangement; goodness depends on how it will be used. Goal-directed
design is a boon: it is a powerful tool for answering the most important questions.
Interface design focuses on three areas:
1. Interfaces between software components
2. Interfaces between the software and external entities, such as producers and
consumers of information, other applications, etc.
3. Interfaces between a human (i.e. the user) and the computer.
In this section we shall focus on the third category, user interface design.
The following frequently asked questions are useful when designing user interfaces.
- What should be the form of the program?
- How will the user interact with the program?
- How can the program's functions be most effectively organized?
- How will the program introduce itself to first-time users?
- How can the program put an understandable and controllable face on technology?
- How can the program deal with problems?
- How will the program help frequent users become more expert?
- How can the program provide sufficient depth for expert users?
All these questions should be properly addressed while designing the software,
especially in the design of the user interface.
Software design is that portion of the development process that is responsible for
determining how the program will achieve the user's goals.
The questions answered by this design phase include:
1. What will the software program do?
2. What will it look like?
3. How will it communicate with the users?
User interface design encompasses items 2 and 3, although it is difficult to separate
them from item 1.
Alan Cooper (1995), in his book on the essentials of user interface design, gave a
nice description of three models which capture the real essence of user interface design.
While designing a user interface, we should always consider the user's perception of
the product that he wants; in other words, the user has his own vision of the software.
The designer has his own perception, which is technology based. The way in which the
product is designed involves good design methodology. Here we have:
Mental model: Reflects the user's vision.
Manifest model: The way the program represents its functioning to the users.
Implementation model: The actual method of how the software works.
The relationship between the three models is shown in figure 3.11.
Fig 3.11
The way the engineer must build the program is usually a given; we call this the
implementation model. The way the user perceives the program is usually beyond our
control; he will conjure up a likely image that we call the mental model. The way the
designer represents the program is the one thing we can change significantly. If we use
logic and reason to make the manifest model follow the implementation model, we will
create a poor interface. On the other hand, if we abandon that logic and make the manifest
model follow the user's imagination, the mental model shown on the right, we will create
a good interface.
Although software developers have absolute control over a program's manifest model,
considerations of efficiency will strongly influence their choice. Designers, on the other
hand, have considerable leeway in their choice of manifest model. The closer the manifest
model comes to the user's mental model, the easier he will find the program to use and to
understand. Generally, offering a manifest model that closely follows the implementation
model will reduce the user's ability to use and learn the program. The ability to tailor the
manifest model is a powerful lever that the software designer can use positively or
negatively. If the manifest model takes the trouble to closely represent the implementation
model, the user can get confused by useless facts. Conversely, if the manifest model
closely follows a likely mental model, it can take much of the complexity out of the user
interface. Most software conforms to its implementation model.
Understanding how software actually works will always help someone to use it. The
manifest model allows software creators to solve the problem by simplifying the apparent
way the software works. User interfaces that abandon implementation models to follow
mental models more closely are better.
User interfaces that conform to implementation models are bad. In Adobe Photoshop
the user can adjust the color balance of an illustration. A small dialog box, instead of
offering numeric settings (the implementation model), shows a series of small sample
images, each with a different color balance. The user can click on the image that best
represents the desired color setting. Because the user is thinking in terms of colors, not
in terms of numbers, the dialog more closely follows his mental model.
The Three Interface Paradigms
There are three dominant paradigms in the design of user interfaces.
- Technology Paradigm
- Metaphor Paradigm
- Idiomatic Paradigm
Technology paradigm: based on understanding how things work, a difficult
proposition.
Metaphor paradigm: based on intuiting how things work, a risky method.
Idiomatic paradigm: based on learning how to accomplish things, a natural human
process.
The technology paradigm of user interface design is simple and incredibly widespread
in the computer industry. It merely means that the interface is expressed in terms of its
construction, of how it was built. In order to use it successfully, the user must understand
how the software works.
Following the technology paradigm means user interface design based exclusively on
the implementation model.
When we talk about metaphors in the user interface design context, we usually mean
visual metaphors, e.g., tiny images on toolbar buttons.
Does a picture of an aeroplane mean send via airmail or airline reservations? The
metaphor paradigm relies on an intuitive connection, in which there is no need to understand
the mechanism of the software.
The idiomatic paradigm is based on the way we learn and use idioms, or figures of
speech.
Most elements of a GUI interface are idioms: buttons, caption boxes, close boxes,
screen splitters and dropdowns. The key observation about idioms is that they must be
learned; good ones need to be learned only once.
Principles for User Interface Design:
User Interface Design has as much to do with the study of people as it does with
technology issues.
1. Principle of User Profiling
This is based on the principle: know who your user is. A design that is better for a
technically skilled user might not be better for a non-technical businessman or an artist.
Basically we should answer the following questions.
- What are the user's goals?
- What are the user's skills and experience?
- What are the user's needs?
- How do we leverage the users' strengths and create an interface that helps them
achieve their goals?
These questions can be answered by direct interaction with real users. Direct contact
between end-users and developers has often radically transformed the development
process.
2. Principle of Metaphor
- Borrow behaviors from systems familiar to your users.
Frequently a complex software system can be understood more easily if the user interface is depicted in a way that resembles some commonplace system. The ubiquitous desktop metaphor is an overused but good example. Another is the tape-deck
metaphor seen on many audio and video player programs. There are several factors to
consider when using a metaphor.
- Once a metaphor is chosen, it should be spread widely throughout the interface
rather than used once at a specific point.
- Be aware that some metaphors don't cross cultural boundaries well. The common US mailbox, for example, may not be recognized in Europe.
3. The Principle of feature exposure
- Let the user see clearly what functions are available.
Some of the features that are quite useful in user interface design are:
- Tool bar
- Menu Item
- Sub Menu Item
- Dialog Box
- Secondary Dialog box
- Advanced User mode
- Scripted functions
4. The principle of Coherence
The behavior of the program should be internally and externally consistent. It is certainly arguable that an interface should be coherent: in other words logical, consistent and easily followed. Internal consistency means that the program's behaviors make sense with respect to other parts of the program. For example, if one attribute of an object (e.g. color) is modifiable using a pop-up menu, then it is to be expected that other attributes of the object would also be editable in a similar fashion.
External consistency means that the program is consistent with the environment in which it runs. This includes consistency with both the operating system and the typical suite of applications that run within that operating system. One of the most widely recognized forms of external coherence is compliance with user interface standards.
5. Principle of State Visualization
Changes in behavior should be reflected in the appearance of the program. Each change in the behavior of the program should be accompanied by a corresponding change in the appearance of the interface.
One of the most important kinds of state is the current selection, in other words the
object or set of objects that will be affected by the next command. It is important that this
internal state be visualized in a way that is consistent, clear, and unambiguous. For example, one common mistake seen in a number of multi-document applications is to forget to dim the selection when the window goes out of focus. The result is that a user looking at several windows at once, each with a similar-looking selection, may be confused as to exactly which selection will be affected when they hit the delete key. This is especially true if the user has been focusing on the selection highlight, and not on the window frame, and consequently has failed to notice which window has the focus.
6. The Principle of Shortcuts:
- Provide both concrete and abstract ways of getting a task done.
There are various levels of shortcuts, each one more abstract than its predecessor. For example, in the emacs editor commands can be invoked directly by name, by menu, by a modified keystroke combination or by a single keystroke. Each one of these is more accelerated than its predecessor.
7. The Principle of focus
- Some aspects of the UI attract attention more than others.
The mouse cursor is probably the most intensely observed object on the screen. It is not only a moving object, but mouse users quickly acquire the habit of tracking it with their eyes in order to navigate. This is why global state changes are often signaled by changes to the appearance of the cursor, such as the well-known hourglass cursor. It is nearly impossible to miss.
8. The Principle of Grammar:
- A user interface is a kind of language; know what the rules are.
Many of the operations within the user interface require both a subject (an object to be operated upon) and a verb (an operation to perform on the object). This naturally suggests that actions in the user interface form a kind of grammar. The two most common grammars are known as Action->Object and Object->Action. In the Action->Object case, the operation (or tool) is selected first; when a subsequent object is chosen, the tool immediately operates upon the object. The selection of the tool persists from one operation to the next, so that many objects can be operated on one by one without having to reselect the tool.
In the Object->Action case, the object is selected first and persists from one operation to the next. Individual actions are then chosen which operate on the currently selected object or objects.
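The two grammars can be sketched in code. This is only an illustrative model, not any real GUI toolkit; all class and method names here are invented:

```python
class ObjectActionEditor:
    """Object->Action: select the object first, then apply verbs to it."""
    def __init__(self):
        self.selection = None

    def select(self, obj):
        self.selection = obj          # the selection persists across operations

    def apply(self, action):
        # each chosen action operates on the currently selected object
        return f"{action}({self.selection})"


class ActionObjectEditor:
    """Action->Object: pick the tool first; it acts on each object chosen."""
    def __init__(self):
        self.tool = None

    def choose_tool(self, tool):
        self.tool = tool              # the tool persists, not the object

    def click(self, obj):
        # the active tool immediately operates on the object clicked
        return f"{self.tool}({obj})"
```

Notice how each grammar decides what persists between operations: the selection in one case, the tool in the other.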
9. The principle of help
- Understand the different kinds of help a user needs
There are five basic types of help, corresponding to the five basic questions that users
ask.
1. Goal-oriented: What kinds of things can I do with this program?
2. Descriptive: What is this? What does this do?
3. Procedural: How do I do this?
4. Interpretive: Why did this happen?
5. Navigational: Where am I?
For example, "About" boxes are one way of addressing questions of type 1. Questions of type 2 can be answered with a standard help browser, tool tips or other kinds of context-sensitive help. A help browser can also be useful in responding to questions of the third type, but these can sometimes be more efficiently addressed using cue cards, interactive guides, or wizards which guide the user through the process step-by-step. The fourth type has not been well addressed in current applications, although well-written error messages can help. The fifth type can be answered by proper overall interface design, or by creating an application road map. None of the solutions listed in this paragraph are final or ideal; they are simply the ones in common use by many applications today.
10. The Principle of Safety
- Let the user develop confidence by providing a safety net.
Each human mind has an envelope of risk, that is to say a minimum and maximum range of risk levels which it finds comfortable. In the case of computer interfaces, a level of risk that is comfortable for a novice user might make a power user feel uncomfortably swaddled in safety. Novice users need to be assured that they will be protected from their own lack of skills. At the same time, the expert user must also feel comfortable using the system.
11. The Principle of Context
- Limit user activity to one well defined context unless there is a good reason not to.
Each user action takes place within a given context: the current document, the current selection, the current dialog box. A set of operations that is valid in one context may not be valid in another. Even within a single document there may be multiple levels. For example, in a structured drawing application, selecting a text object (which can be moved or resized) is generally considered a different state from selecting an individual character within that text object. It is usually a good idea to avoid mixing these levels.
Another thing to keep in mind is the relationship between contexts. For example, if the user is working in a particular task space when a dialog box suddenly pops up asking for confirmation of an action, the user may be confused and left wondering how the new context relates to the old. Instead of the bare message "Are you sure?", a message such as "There are two documents unsaved. Do you want to quit anyway?" would help to keep the user anchored in the current context.
12. The Principle of User Testing
- Recruit help in spotting the inevitable defects in the design.
In many cases a good software designer can spot fundamental defects in a user interface. In some cases, however, a bug can only be detected while watching someone else use the program. User-interface testing, that is, testing the user interface with actual end users, has been shown to be an extraordinarily effective technique for discovering design defects. User testing can occur at any time during the project; however, it is often more efficient to build a mock-up or prototype of the application and test that before building the real program.
13. The Principle of Humility
- Listen to what ordinary people have to say.
A product built entirely from customer feedback is doomed to mediocrity, because what users want most are the features that they cannot anticipate. The designers should always keep in mind the limitations of the users: their core competencies, computer literacy and so on. The designers are advised to spend longer time with the users while they are actually using the computer.
14. The Principle of Aesthetics:
Lastly, the program that has been created should be readable and understandable. It should not be sluggish or slow; users don't like using programs that feel sluggish or slow.
3.7.1 Interface Design Issues
In the design of user interfaces four common design issues surface:
1. System response time
2. User help facilities
3. Error information handling
4. Command labeling
System response time is the primary complaint by the user. In general, system response time is measured from the point at which the user performs some control action (e.g. hits the return key or clicks a mouse) until the software responds with appropriate output or action. System response time has two important characteristics: length and variability. If the response time is too long, the user is frustrated; if the response time is too quick, the user may commit mistakes because he is being paced by the interface. Variability refers to the deviation from the average response time.
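As a rough illustration (not from the text), the two characteristics, length and variability, could be measured by timing a handler from the simulated control action to the response:

```python
import statistics
import time

def measure_response_times(handler, n=5):
    """Time a response handler n times; return (length, variability).

    length is the average response time; variability is the standard
    deviation from that average, the two characteristics named above.
    """
    samples = []
    for _ in range(n):
        start = time.perf_counter()   # the user's "control action"
        handler()                     # the software's response
        samples.append(time.perf_counter() - start)
    length = statistics.mean(samples)
    variability = statistics.stdev(samples)   # deviation from the average
    return length, variability
```

A UI with a slightly longer but steady response time is often perceived as better than one that is usually fast but occasionally stalls, which is why variability is tracked separately from length.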
Almost every user of an interactive system requires certain help facilities. Most software provides on-line help facilities that enable a user to get a question answered or resolve a problem without leaving the interface. Two different types of help facilities are encountered: integrated and add-on. Integrated help facilities are embedded in the software from the beginning. This improves the user friendliness of the software. An add-on facility is added to the software after it has been built. It is really an on-line user's manual with limited query capability.
A number of design issues arise regarding help facilities:
1. Will help be available for all system functions and at all times during system interaction?
2. How will the user request help? Options include a help menu, a special function key and a HELP command.
3. How will help be represented? Options include a separate window, a reference to a printed document, and one or two line suggestions produced in a fixed screen location.
4. How will the user return to normal interaction? Options include a return button displayed on the screen and a function key.
5. How will the help information be structured? Options include a flat structure in which all information is accessed through a keyword, and a layered hierarchical structure of information using hypertext.
Error messages should be clear, provide constructive advice for recovering from the error, and indicate any negative consequences of the error. They should be accompanied by an audible or visual cue. Realistically speaking, every user hates bad news in the form of an error message. However, an effective error message philosophy generally improves the quality of the interactive system.
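A minimal sketch of this error message philosophy; the helper name and message format are invented for illustration:

```python
def constructive_error(problem, advice, consequence=None):
    """Build an error message that states the problem, gives constructive
    recovery advice, and notes any negative consequence, per the
    guideline above."""
    parts = ["Problem: " + problem, "Try: " + advice]
    if consequence:
        parts.append("Note: " + consequence)
    return " | ".join(parts)
```

Compare the result with a bare "Save failed": the structured form tells the user what went wrong, what to do next, and what is at stake.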
Command labeling: A number of design issues arise when commands are provided as a mode of interaction, such as:
1. Will every menu option have a corresponding command?
2. What form will the commands take?
3. How difficult will it be to learn and remember the commands?
In most applications, interface designers provide a command macro facility that allows the user to store a sequence of commonly used commands under a user defined name.
Interface design guidelines:
Three categories of HCI design guidelines are suggested:
1. General interaction
2. Information display
3. Data entry
General interaction guidelines:
1. Be consistent in the formats of menu selection, command input and data display.
2. Offer meaningful feedback: visual or auditory feedback ensures that two-way communication exists.
3. Ask for verification of any nontrivial destructive action: "Are you sure?" messages are essential.
4. Permit easy reversal of most actions: use UNDO or REVERSE options.
5. Reduce the amount of information to be memorized between actions: memory load should be minimized.
6. Seek efficiency in dialogue, motion and thought: keystrokes should be minimized.
7. Forgive mistakes: provide fault tolerance.
8. Categorize activities by function and organize screen geography accordingly.
9. Provide help facilities that are context sensitive.
Information display: if the information provided by the human-computer interface is incomplete or ambiguous, the application will fail to meet the user's needs.
1. Display only relevant information.
2. Don't bury the user with data; use graphs and charts.
3. Use consistent labels and standard abbreviations.
4. Allow the user to maintain visual context.
5. Produce meaningful error messages.
6. Use windows to compartmentalize different types of information.
7. Use analog displays wherever necessary.
8. Consider the available geography of the display screen and use it efficiently, e.g. multiple windows.
Data entry guidelines:
9. Minimize the number of input actions required of the user, e.g. a single keystroke transformed into a more complex collection of data.
10. Maintain consistency between information display and data input.
11. Allow the user to customize input.
12. Interaction should be flexible but also tuned to the user's preferred mode of input.
13. The user should control the interactive flow.
14. Provide help to assist with all input actions.
15. Eliminate "Mickey Mouse" input.
Sample Questions: Unit III
1. Briefly give design guidelines.
2. What is the essence of information hiding?
3. What is the importance of high cohesion?
4. What is the importance of low coupling?
5. Explain the notions of coupling and cohesion.
6. What is the difference between procedural abstraction and data abstraction?
7. What are the different design methodologies?
8. What is top-down design?
9. What is bottom-up design?
10. What is the importance of modular decomposition?
11. What is software architecture?
12. What is the main purpose of software architecture?
13. What is the difference between the notion of software architecture and design patterns?
14. What is the difference between the logical view and the implementation view?
15. What is functional decomposition?
16. Is cyclomatic complexity a good indicator of system design?
17. What do you mean by component level design?
18. What are the advantages of component level design?
19. Give some guidelines for user interface design.
20. How do you evaluate a design?
UNIT IV
SOFTWARE TESTING AND MAINTENANCE
4.1. SOFTWARE TESTING INTRODUCTION
Nowadays, software has grown in complexity and size. The software is developed based on the Software Requirements Specification, which is always domain dependent. Accordingly every software product has a target audience. For example, banking software is different from video game software. Therefore when a corporate organization invests a large sum in making a software product, it must ensure that the product is acceptable to the end users or its target audience. This is where software testing comes into play. Software testing is not merely finding defects or bugs in the software; it is a completely dedicated discipline of evaluating the quality of the software. Good testing is at least as difficult as good design.
With the current state of the art, we are not able to develop and deliver fault-free software in spite of the tools and techniques that we make use of during development. Quality is not absolute; it is value to some person or product. With this in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behavior of the product against its specification. An important point to be noted at this juncture is that software testing is a separate discipline when compared to Software Quality Assurance (SQA), which encompasses all business process areas, not just testing. Software testing may be viewed as a sub-field of SQA.
Dr. Dave Gelperin and William C. Hetzel, in their classic article in Communications of the ACM (1988), classified the phases and goals of software testing as follows.
Until 1956 was the debugging-oriented period, when testing was often associated with debugging; there was no clear difference between testing and debugging. From 1957 to 1978 there was the demonstration-oriented period, when debugging and testing were distinguished; in this period it was shown that software satisfies the requirements. The time between 1978 and 1982 is known as the destruction-oriented period, where the goal was to find errors. 1983-1987 is classified as the evaluation-oriented period; the intention here is that during the software life cycle a product evaluation is provided, measuring
quality. From 1988 onwards it was seen as the prevention-oriented period, where tests were to demonstrate that software satisfies its specification, to detect faults and to prevent faults.
In general, software engineers distinguish software faults from software failures. In case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the correctness of the semantics of a computer program. A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of the software executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.
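A small illustrative example of this distinction: the function below contains a fault that lies dormant until a particular input condition is met, at which point it manifests as a failure.

```python
def average(values):
    # Fault: no guard for an empty list. The programming error is always
    # present, but it only manifests as a failure (ZeroDivisionError)
    # when an empty list actually reaches this code on the CPU.
    return sum(values) / len(values)

def safe_average(values):
    # Corrected version: the fault is removed, so no input can trigger
    # the failure.
    if not values:
        return 0.0
    return sum(values) / len(values)
```

Every call to `average` with a non-empty list behaves correctly, which is exactly why such faults survive testing: the failure appears only under the right execution conditions.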
Thus software testing is a process of executing a program or a system with the intent of finding errors. Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.
Learning Objectives
- To be aware of major software testing techniques
- To know different testing strategies
- To be aware of different tests in black-box testing
- To be aware of different tests in white-box testing
- To know the importance of test planning and test case design
- To be able to compare testing techniques with respect to their theoretical as well as practical value
- To be able to identify the contents and structure of test documentation
4.2 TESTING PRINCIPLES
A common practice of software testing is that it is performed by an independent group of testers after the functionality is developed but before it is delivered to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and to continue it until the project finishes. This is highly problematic in terms of controlling changes to software: if faults or failures are found partway into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system, and the software testers need to be accurate in knowing that they are testing the correct version and will need to re-test the part of the software where the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change partway through, in which instance previous successful tests may no longer meet the requirement and will need to be re-specified and redone. Clearly the possibilities for projects being delayed and running over budget are significant. It is commonly believed that the earlier a defect is found, the cheaper it is to fix. This is reasonable based on the risk of any given defect contributing to or being confused with further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable and therefore any testing after that point cannot be relied upon, even if there are no further actual software defects.
Testing Principles
Before applying the methods for designing effective test cases, a developer must understand the basic principles that guide the software testing process. Davis (1995) suggested a set of principles which are given below.
- All tests should be traceable to customer requirements.
- Tests should be planned long before the testing begins.
- The Pareto principle applies to software testing.
- Testing should begin in the small and progress towards testing in the large.
- Exhaustive testing is not possible.
- To be most effective, testing should be done by an independent third party.
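The first principle, traceability to customer requirements, can be sketched as follows; the requirement ID and the banking rule are invented for illustration:

```python
def withdraw(balance, amount):
    """REQ-7 (illustrative id): a withdrawal must never overdraw the account."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Each test case is tagged with the customer requirement it exercises,
# so every test is traceable back to a requirement.
TEST_CASES = [
    # (requirement id, inputs, predicted output)
    ("REQ-7", (100, 30), 70),
    ("REQ-7", (100, 100), 0),
]

def run_test_cases():
    results = []
    for req_id, (balance, amount), expected in TEST_CASES:
        results.append((req_id, withdraw(balance, amount) == expected))
    return results
```

With such tagging, a failed test immediately identifies which customer requirement is violated.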
4.2.1 Software Testing Axioms
1. It is impossible to test a program completely.
2. Software testing is a risk-based exercise.
3. Testing cannot show that bugs don't exist.
4. The more bugs you find, the more bugs there are.
5. Not all the bugs you find will be fixed.
6. Product specifications are never final.
4.3 PURPOSE OF SOFTWARE TESTING
Regardless of the limitations, testing is an integral part of the software development.
It is broadly deployed in every phase of the software development cycle. Typically more
than 50% of the development time is spent in testing. Testing is usually performed for the
following purposes.
- To improve quality
- For verification and validation
- For Software Reliability estimation
Quality means conformance to the specified design requirements. The minimum requirement of quality means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find out design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed is the purpose of debugging in the programming phase.
Typical software quality factors are listed in the table below. Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In the typical business system, usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. For our testing to be fully effective, it must be geared to measuring each relevant factor, thus forcing quality to become tangible and visible.
4.4 VERIFICATION & VALIDATION
Another important purpose of testing is verification and validation.
Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections and walkthroughs. Validation is the process of checking whether what has been specified is what the user actually wanted.
Verification: Have we built the software right? (i.e. does it match the specification?)
Validation: Have we built the right software? (i.e. is this what the customer wants?)
Verification is a quality process that is used to evaluate whether or not a product, service or system complies with a regulation, specification or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.
Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service or system accomplishes its intended requirements. This often involves acceptance and suitability with external customers. The comparison between verification and validation is given in the table below.
Typical software quality factors:

Functionality (Exterior Quality): Correctness, Reliability, Usability, Integrity
Engineering (Interior Quality): Efficiency, Testability, Documentation, Structure
Adaptability (Future Quality): Flexibility, Reusability, Maintainability
Table 5.1 Verification and Validation
Software reliability has important relations with many aspects of software, including the structure and the amount of testing it has been subjected to. Based on an operational profile, testing can serve as a statistical sampling method to gain failure data for reliability estimation.
4.4.1 V& V Planning and Documentation
Similar to other phases of software development, the testing activities need to be carefully planned and documented. As we have seen from the conventional software development life cycle, testing is the only parallel activity and it starts as soon as the requirements analysis phase is completed. Since test activities start early in the development life cycle and cover all subsequent phases, timely attention to the planning of these activities is of paramount importance. Precise descriptions of the various activities, responsibilities and procedures must be clearly documented. This document is called the software verification and validation plan. We shall follow IEEE standard 1012, where V&V activities for a Waterfall-like life cycle are given as follows.
Validation: Am I building the right product?
Verification: Am I building the product right?

Validation: Determining if the system complies with the requirements, performs the functions for which it is intended, and meets the organization's goals and user needs. It is traditional and is performed at the end of the project.
Verification: The review of interim work steps and interim deliverables during a project to ensure they are acceptable; determining if the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.

Validation: Am I accessing the right data (in terms of the data required to satisfy the requirement)?
Verification: Am I accessing the data right (in the right place; in the right way)?

Validation: A high level activity.
Verification: A low level activity.

Validation: Performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
Verification: Performed during development on key artifacts, like walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.

Validation: Determination of correctness of the final software product by a development project with respect to the user needs and requirements.
Verification: Demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle.
- Concept Phase
- Requirements Phase
- Design Phase
- Implementation Phase
- Test Phase
- Installation & Checkout Phase
- Operation & Maintenance Phase
Sample contents of the Verification & Validation plan according to IEEE Std 1012 are given below.
1. Purpose
2. Reference Documents
3. Definitions
4. Verification & Validation Overview
4.1. Organization
4.2 Master Schedule
4.3 Resource Summary
4.4 Responsibilities
4.5 Tools, Techniques and Methodologies
5. Life Cycle Verification and Validation
5.1 Management Verification and Validation
5.2 Requirement Phase Verification and Validation
5.3 Design Phase Verification and Validation
5.4 Implementation Phase Verification and Validation
5.5 Test Phase Verification and Validation
5.6 Installation and Checkout Phase Verification and Validation
5.7 Operation and Maintenance Phase Verification and Validation
6. Software Verification and Validation Reporting
7. Verification and Validation administrative Procedures
7.1 Anomaly Reporting & Resolution
7.2 Task Iteration Policy
7.3 Deviation Policy
7.4 Control Procedures
7.5 Standards, Practices and Conventions
The test design documentation specifies, for each software feature or combination of such features, the details of the test approach and identifies the associated tests. The test case documentation specifies inputs, predicted outputs and execution conditions for each test item. The test procedure documentation specifies the sequence of actions for the execution of each test. Finally, the test report documentation provides information on the results of the testing tasks.
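As an illustrative sketch (the field names and sample values are invented), a test case record carrying the items required by the test case documentation might look like:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # One record per test item, holding what the test case
    # documentation specifies for it.
    item: str                  # the test item being exercised
    inputs: tuple              # inputs supplied to the item
    predicted_output: object   # the predicted result
    execution_condition: str   # condition under which the test runs

case = TestCase(
    item="login",
    inputs=("alice", "secret"),
    predicted_output=True,
    execution_condition="user 'alice' exists in the test database",
)
```

Keeping these four pieces together per test case makes the test report straightforward: actual output is compared against `predicted_output` under the stated execution condition.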
4.5 TESTING PRINCIPLES
Davis (1995) suggests a set of testing principles:
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins. In fact test planning can begin as soon as the requirements phase is complete.
3. The Pareto principle applies to software testing. This principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program modules.
4. Testing should begin in the small and progress towards testing in the large. Initially testing starts with module testing and is subsequently extended to integration testing.
5. Exhaustive testing is not possible; such testing can never be performed in practice. Thus we need testing strategies, that is, some criteria for selecting significant test cases. A significant test case is a test case that has a high potential to uncover the presence of an error. Thus the successful execution of a significant test case increases the confidence in the correctness of the program.
The importance of significant test cases has been discussed earlier. The next questions are how to design a test case and what the attributes of a test case are. Test case design methods must provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors in software. Any product that has been engineered can be tested in one of two ways:
1. Knowing the specified functions that a product has been designed to perform, tests can be conducted to verify the functionality of each function and to search for possible errors in each function.
2. Knowing the internal workings of the product, tests can be conducted to check whether the internal operation performs according to specification and all internal components have been adequately exercised.
The first approach is called black-box testing and the second one is called white-box testing; they are discussed in detail in subsequent sections.
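The contrast can be illustrated on a small function (the triangle classifier is a common textbook example; the test cases shown are only a sketch, not a complete suite):

```python
def classify_triangle(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box cases: derived only from the specified function, without
# looking at the code -- one case per specified behaviour.
BLACK_BOX_CASES = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
]

# White-box cases: derived from the internal structure -- one case per
# branch, including each arm of the isosceles condition.
WHITE_BOX_CASES = [
    ((2, 2, 2), "equilateral"),
    ((2, 2, 9), "isosceles"),   # a == b arm
    ((9, 2, 2), "isosceles"),   # b == c arm
    ((2, 9, 2), "isosceles"),   # a == c arm
    ((2, 3, 4), "scalene"),
]
```

Note how the white-box cases exercise all three arms of the `or` condition, which the black-box cases, written without sight of the code, may miss.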
Structured Approach to Testing
The technical view of the development life cycle places testing immediately prior to operation and maintenance. In this strategy, an error discovered in the later parts of the life cycle must be paid for in several different ways:
1. Cost of developing the program erroneously, which may include wrong specification and coding.
2. The system must be tested to detect the errors.
3. The wrong specification and coding must be removed, and modified specification, coding and documentation must be added.
4. The system must be retested.
Studies have shown that the majority of system errors occur in the design phase, approximately 64%, and the remaining 36% occur in the coding phase. This means that almost two-thirds of the errors are specified and coded into the program before they can be detected.
The recommended testing process is given below as a life cycle chart showing the
verification activities for each phase.
At every phase, the structures produced at each phase are analyzed for internal
testability and adequacy. Test data sets are based on the structures. In addition to this the
following should be done at design and programming.
- Determine the structures that are consistent with the structures produced during
previous phases.
- Refine and redefine test data generated earlier.
Life Cycle Phase   Verification Activities
Requirements       - Determine verification approach
                   - Determine adequacy of requirements
                   - Generate functional test data
Design             - Determine consistency of design with requirements
                   - Determine adequacy of design
                   - Generate structural and functional test data
Program            - Determine consistency with design
                   - Determine adequacy of implementation
                   - Generate structural and functional test data for programs
Test               - Test application system
Installation       - Place tested system into production
Maintenance        - Modify and retest
Generally, people test a program until they have confidence in it. This is a nebulous
concept. Typically you will find many errors when you begin testing an individual module
or collection of modules. The detected error rate drops as you continue testing and fixing
bugs. Finally the error rate is low enough that you feel confident that you have caught all
the major problems. How you test your software depends on the situation.
4.6 SOFTWARE TEST PLANS
Large projects usually test their product in accordance with a software test plan, or at
least they say they do. The test plan is often filled with motherhood statements saying that
each module will be thoroughly tested, with special emphasis on values just inside and
outside the nominal input limits, and values clearly outside.
A test plan is a mandatory document. A good test plan goes a long way towards
reducing the risks associated with software development. By identifying areas that are riskier
than others, we can concentrate our testing efforts there. Historical data and bug and
testing reports from similar products or previous releases will identify areas to explore.
Bug reports from customers are important, but also look at bugs reported by the developers
themselves. The following are the components of a test plan.
- Test Plan
- Test Case
- Test Script
- Test Scenario
- Test run
The Test Plan covers the following:
o Scope, objectives and the approach to testing
o People and resources dedicated/allocated to testing
o Tools that will be used
o Dependencies and risks
o Categories of defects
o Test entry and exit criteria
o Measurements to be captured
o Reporting and communication processes
o Schedules and milestones
- Test case is a document that defines a test item and specifies a set of test inputs or
data, execution conditions, and expected results. The inputs/data used by a test
case should be both normal (intended to produce a good result) and intentionally
erroneous (intended to produce an error). A test case is generally executed
manually, but many test cases can be combined for automated execution.
- Test script is a step by step procedure for using a test case to test a specific unit of
code, function or capability.
- Test scenario is a chronological record of the details of the execution of a test
script. It captures the specification, tested activities and outcomes. This is used to
identify defects.
- Test run is a series of logically related groups of test cases or conditions.
4.7 SOFTWARE TESTING STRATEGIES
A testing strategy is a general approach to the testing process which integrates software
test case design methods into a series of steps that result in a quality software product. It is
a roadmap for the software developer as well as the customer, and it provides a framework
for performing software testing in an organized manner. In view of this, any testing strategy
should cover:
- Test planning
- Test case design
- Test execution
- Data collection and evaluation
Whenever a testing strategy is adopted, it is sensible to take an incremental approach
to system testing. Instead of integrating all the components and then performing system
integration testing straight away, it is better to test the system incrementally. The
software testing strategy should be flexible enough to permit the customizations that are
necessary for all components of a larger system. For this reason a template for software
testing (a set of steps into which we can place specific test case design methods) should
be defined for the software engineering process. Different strategies may be needed for
different parts of the system at different stages of the software testing process.
The flaws and deficiencies in the requirements may surface only at the implementation
stage. Testing after system implementation checks conformance with the
requirements and assesses the reliability of the system. It is to be noted that verification and
validation encompass a wide array of activities that include:
- Formal Technical Reviews
- Quality and Configuration
- Performance monitoring
- Simulation
- Feasibility study
- Documentation review
- Database review
- Algorithm analysis
- Development testing
- Qualification testing
- Installation testing
Testing Strategies
- Top-down testing: testing starts with the most abstract component and works
downwards through the hierarchy.
- Bottom-up testing: testing starts with the fundamental components and works
upwards.
Whatever testing strategy is adopted, it is better to follow an incremental approach
to testing: each module is tested independently before the next module is added to
the system.
Unit testing: Under unit testing, various tests are conducted on:
- Interfaces
- Local data structures
- Boundary conditions
- Independent paths
- Error handling paths
The module interface is tested to examine whether information flows properly into and
out of the module under test. Tests on local data structures examine whether data stored
temporarily maintains its integrity during all steps of an algorithm's execution.
Boundary conditions are tested to ensure that the module operates properly at the
boundaries established to limit or restrict processing. All independent paths (basis paths)
through the control structure are exercised, and finally all error handling paths are tested.
In addition to local data structures, the impact of global data on a module should be
ascertained during unit testing.
Myers (1979) has given a checklist of the parameters to be examined under the various
tests. For details, refer to Pressman (2007).
Selective testing of execution paths is an essential task during the unit test. Proper
test cases should be designed to uncover errors due to erroneous computations, incorrect
comparisons or improper control flow.
The most common errors in computations are:
- Incorrect arithmetic precedence
- Mixed mode operations
- Incorrect initialization
- Precision inaccuracy
- Incorrect symbolic representation
Test cases should also uncover errors such as:
- Comparison of different data types
- Incorrect logical operators or precedence
- Non-existent loop termination
- Failure to exit when divergent iteration is encountered
- Improperly modified loop variables
Unit test case design invariably begins after the module has been developed, reviewed,
and verified for correct syntax. Since a module is not a standalone program, driver and/or
stub software must be developed for each unit test.
A driver is nothing more than a main program that accepts test case data, passes
such data to the module to be tested, and prints the relevant results.
Stubs serve to replace modules that are subordinate to (called by) the module to be
tested. A stub is a dummy program that uses the subordinate module's interface, does
minimal data manipulation, and verifies entry and return. Drivers and stubs introduce some
overhead into the testing process and must be removed from the final product delivered
to the customer.
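As an illustration, a driver and a stub might look like the following sketch in Python. The module, function names, and discount logic are all invented; real drivers and stubs would be written in the project's own language and test framework.

```python
def fetch_customer_rating_stub(customer_id):
    """Stub: a dummy that replaces the real subordinate module. It uses the
    same interface, does minimal entry verification, and returns a canned value."""
    assert isinstance(customer_id, int)
    return "gold"

def compute_discount(order_total, customer_id, rating_lookup):
    """The unit under test. The subordinate call is injected as a parameter
    so the stub can stand in for the real module during the unit test."""
    rating = rating_lookup(customer_id)
    return order_total * (0.9 if rating == "gold" else 1.0)

def driver():
    """Driver: a main program that accepts test case data, passes it to the
    module under test, and collects the relevant results."""
    results = []
    for order_total, customer_id in [(100.0, 1), (250.0, 2)]:
        results.append(compute_discount(order_total, customer_id,
                                        fetch_customer_rating_stub))
    return results

print(driver())
```

Injecting the subordinate call as a parameter is one simple way to let a stub stand in; mocking libraries achieve the same effect without changing the unit's signature.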
Integration Testing
Integration testing is a systematic technique for constructing the program structure
while conducting tests to uncover errors associated with interfacing. Modules are integrated
by moving downwards through the control hierarchy.
- Non-incremental integration
All modules are combined in advance and the entire program is tested as a whole.
A whole set of errors is encountered at once.
                 M1
               /  |  \
             M2   M3   M4
            /  \         \
          M5    M6        M7
          |
          M8
- Incremental integration
The program is constructed and tested in small segments, so errors are easier to locate
and correct.
Top-down integration [incremental approach]
Modules are integrated by moving downward through the control hierarchy, beginning with
the main control module (main program). Modules subordinate to the main control module
are incorporated into the structure in either a depth-first or a breadth-first manner.
Depth-first manner
This integrates all modules on a major control path of the structure. Selection of a
major path depends on application-specific characteristics. In the figure given above,
depth-first integration suggests that the modules M1, M2 and M5 are integrated first, after
selecting the left-hand path. Then M6 or M8 is integrated later.
Breadth-first integration
This integrates all modules directly subordinate at each level, moving across the
structure horizontally. From the figure, M2, M3 and M4 would be integrated first, then
the next control level M5, M6 and so on.
Steps involved in the integration process
1. The main control module is used as a test driver. Stubs replace all modules directly
subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced one
at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
The top-down strategy appears simple and straightforward, but in practice logistical
problems can arise.
Problems involved in integration process
A problem occurs when processing at low levels in the hierarchy is required to adequately
test upper levels. Since stubs replace low-level modules at the beginning of top-down testing,
no significant data can flow upward in the program structure.
There are three choices to solve the problem:
1. Delay many tests until stubs are replaced with actual modules.
2. Develop stubs that perform limited functions simulating the actual module.
3. Integrate the software from the bottom of the hierarchy upward.
The first approach causes us to lose control over the correspondence between specific tests
and the incorporation of specific modules. The second approach is workable, but the stubs
become more and more complex.
Bottom-up integration
Bottom-up integration testing begins construction and testing with atomic modules
(modules at the lowest levels in the program structure).
Steps involved in bottom-up integration
1. Low-level modules are combined into clusters (builds) that perform a specific
software sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program
structure.
Advantages and disadvantages
1. Top-down integration strategy
The advantage is testing the major control functions early. The disadvantage is the need
for stubs and the testing difficulties associated with them.
2. Bottom-up integration strategy
The advantages are easy test case design and no need for stubs, since the modules
subordinate to a given level are always available. The disadvantage is that the program as
an entity does not exist until the last module is added.
Regression Testing
Regression testing is the activity that helps to ensure that changes do not introduce
unintended behaviour or additional errors.
The regression test suite contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by
the changes.
3. Tests that focus on the software components that have been changed.
The regression test suite should be designed to include only those tests that address one or
more classes of errors in each of the major program functions.
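The three classes can be sketched as a simple selection function. The test names, component tags, and changed-component set below are all invented for illustration.

```python
def select_regression_suite(all_tests, representative, changed):
    """all_tests maps a test name to the set of components it exercises;
    representative is the class-1 sample exercising all software functions;
    changed is the set of components modified (or affected) in this release."""
    suite = set(representative)              # class 1: representative sample
    for name, components in all_tests.items():
        if components & changed:             # classes 2 and 3: tests that touch
            suite.add(name)                  # changed or affected components
    return sorted(suite)

tests = {
    "t_login":  {"auth"},
    "t_report": {"reporting"},
    "t_export": {"reporting", "io"},
}
print(select_regression_suite(tests, ["t_login"], {"reporting"}))
```

Real regression selection usually relies on traceability data or coverage records to build the test-to-component mapping rather than hand-written tags.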
Integration test documentation
An overall plan for integration of the software and a description of the specific tests are
documented in a TEST SPECIFICATION, which includes:
Scope of Testing
Test Plan
Interface integrity
Functional Validity
Performance
Information Content
Test procedure
Actual Test Results
References and Appendices
Validation Testing
Validation testing succeeds when the software functions in a manner that can be reasonably
expected by the customer.
Validation test criteria
Both the test plan and test procedures are designed to ensure that all functional
requirements are met, all performance requirements are achieved, documentation is
correct and human-engineered, and other requirements (transportability, maintainability)
are met.
Configuration review
The intent of the review is to ensure that all elements of the software configuration
have been properly developed and are catalogued. This should have details to support the
maintenance phase of software life cycle.
Alpha and Beta Testing
The ALPHA TEST is conducted at the developer's site by the customer. Alpha tests
are conducted in a controlled environment.
The BETA TEST is conducted at one or more customer sites by the end user(s) of
the software. The developer is generally not present. This is a live application of the
software in an environment that cannot be controlled by the developer.
System Testing
System testing is actually a series of different tests whose primary purpose is to fully
exercise the computer-based system. They all work to verify that all system elements
have been properly integrated and perform allocated functions.
Recovery Testing
This is a system test that forces the software to fail in a variety of ways and verifies
that recovery is properly performed. If recovery requires human intervention, the mean
time to repair is evaluated.
Security Testing
Any computer-based system that manages sensitive information or causes actions
that improperly harm (or benefit) individuals is a target for improper or illegal penetration.
Security testing attempts to verify that the protection mechanisms built into a system will in
fact protect it from improper penetration.
Stress Testing
Stress tests are designed to confront programs with abnormal situations. Stress
testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume.
Glossary: Testing Terminology

Test Type
The primary intent or goal of the test: the type of bug the test is trying to find.

Installation - Confirm that the software is simple and straightforward to install.
Negative test - Confirm that the software fails gracefully if used incorrectly or if it receives incorrect input.
Performance - Confirm that the software meets a prescribed level of performance.
Positive or functionality - Confirm that a feature behaves according to the specifications.
Security - Confirm that the software allows only authorized users to access the system.
Stress - Confirm that the software or system is able to run under a loaded condition.
Usability - Confirm that the software is easy to use. Typically done in a controlled setting by observing users perform assigned tasks.

Scope of code tested
The scope of the software being tested: a portion of the software, a subsystem, or the entire system.

Unit test - Testing at the module level. A module may consist of one or more source modules. Typically done by the development team.
Integration or subsystem test - Testing the interactions between two or more modules.
System test - Testing the entire system (end to end).

Tester's knowledge
The knowledge of the underlying implementation the tester has when writing a test.

Black box - The tester has no knowledge of the program implementation.
White box or glass box - The tester has access to the program source code to help develop more effective test cases.

When run
As opposed to a specific type of test, certain tests can be categorized as collections or groups of existing test types.

Acceptance - Run when deciding to accept or reject software. For example, the testing team defines tests that must be run before accepting a new build for testing.
Beta test or User test - Testing done at user sites, typically to run against more configurations or real-world conditions.
Build verification - A subset of tests run as a sanity check to verify that the current build is minimally functional.
Configuration test - Verifying that a group of tests works on a defined list of hardware and/or software configurations.
Regression - A test run to verify that software modifications haven't broken previously tested functionality.
Test Architecture
How is the test executed or driven? Does it have to be run manually, or is it automated?

Check return code - Run a program and verify whether it passed or failed by checking the return code.
Command line - Tests run by inputting commands to a program's command line interface.
GUI tester - A testing tool that automates the recording and playback of testing scripts.
Library - A test written to exercise the program's application programming interface.

4.8 WHITE BOX/BLACK BOX TESTING
Black box and white box are test design methods. Black box test design treats the
system as a black box, so it does not explicitly use knowledge of the internal structure.
Black box test design is usually described as focusing on testing functional requirements.
Synonyms for black box include: behavioral, functional, opaque-box, and closed-box.
White box test design allows one to peek inside the box, and it focuses specifically on
using internal knowledge of the software to guide the selection of test data. Synonyms for
white box include: structural, glass-box, and clear-box.
While black box and white box are terms that are still in popular use, many people
prefer the terms behavioral and structural. Behavioral test design is slightly different
from black box test design because the use of internal knowledge is not strictly forbidden,
but it is still discouraged. In practice, it has not proven useful to use a single test design
method; one has to use a mixture of different methods so as not to be hindered by the
limitations of any particular one. Some call this gray-box or translucent-box test design,
but others wish we would stop talking about boxes altogether.
It is important to understand that these methods are used during the test design phase,
and their influence is hard to see in the tests once they are implemented. Note that any
level of testing (unit testing, system testing, etc.) can use any test design method. Unit
testing is usually associated with structural test design, but this is only because testers usually
do not have well-defined requirements at the unit level to validate.
4.8.1 White Box Testing
White box testing is a test case design method that uses the control structure of the
procedural design to derive test cases. Test cases can be derived that:
1. Guarantee that all independent paths within a module have been exercised at least
once,
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability
that a program path will be executed. We often believe that a logical path is not likely to be
executed when, in fact, it may be executed on a regular basis. Our unconscious assumptions
about control flow and data lead to design errors that can be detected only by path testing.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a
procedural design and use it as a guide for defining a basis set of execution paths. Test
cases that exercise the basis set are guaranteed to execute every statement in the program
at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the
derivation of the basis set. Each flow graph node represents one or more procedural
statements. The edges between nodes represent flow of control. An edge must terminate
at a node, even if the node does not represent any useful procedural statements. A region
in a flow graph is an area bounded by edges and nodes. Each node that contains a
condition is called a predicate node.
The Basis Set
An independent path is any path through the program that introduces at least one new
set of processing statements (it must move along at least one new edge in the path). The
basis set is not unique; any number of different basis sets can be derived for a given
procedural design. Cyclomatic complexity V(G) for a flow graph G is equal to:
1. The number of regions in the flow graph;
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes;
3. V(G) = P + 1, where P is the number of predicate nodes.
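The equivalence of these formulas can be checked on a small example. The toy flow graph below (a condition nested inside a loop; the node numbering is invented for illustration) has two predicate nodes and three regions:

```python
# Edges of a toy flow graph: node 2 is a loop test, node 2->3 / 2->4 is an
# if/else inside the loop body, node 5 closes the loop back to 2 or exits to 6.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {n for edge in edges for n in edge}
predicate_nodes = {2, 5}          # nodes with a condition (two outgoing edges)

E, N, P = len(edges), len(nodes), len(predicate_nodes)
v_from_edges = E - N + 2          # V(G) = E - N + 2
v_from_preds = P + 1              # V(G) = P + 1
print(v_from_edges, v_from_preds)
```

Both formulas give V(G) = 3, matching the three regions of the graph, so the basis set for this design contains three independent paths.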
Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph,
V(G) can be determined by counting the number of conditional statements in the
code and adding one.
3. Determine a basis set of linearly independent paths.
Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
Each test case is executed and its actual results are compared to the expected results.
Automatic Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation.
A software tool to do this can be developed using a data structure called a graph matrix.
A graph matrix is a square matrix whose size is equal to the number of nodes in the
flow graph. Each row and column corresponds to a particular node, and each matrix entry
corresponds to a connection (an edge) between nodes. By adding a link weight to each
matrix entry, more information about the control flow can be captured. In its simplest
form, the link weight is 1 if an edge exists and 0 if it does not, but other types of link
weights can be represented:
- the probability that an edge will be executed,
- the processing time expended during link traversal,
- the memory required during link traversal, or
- the resources required during link traversal.
Graph Theory algorithms can be applied to these graph matrices to help in the analysis
necessary to produce the basis set.
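A minimal sketch of such a graph matrix, assuming simple 0/1 link weights and a toy flow graph (node numbering invented for illustration); a real tool would also support the weighted variants listed above:

```python
def graph_matrix(n_nodes, edges):
    """Build a square matrix with link weight 1 where an edge exists, 0 otherwise.
    Nodes are numbered from 1."""
    m = [[0] * n_nodes for _ in range(n_nodes)]
    for a, b in edges:
        m[a - 1][b - 1] = 1
    return m

def cyclomatic_complexity(matrix):
    """V(G) = E - N + 2, with E read off as the sum of link weights."""
    n = len(matrix)
    e = sum(sum(row) for row in matrix)
    return e - n + 2

m = graph_matrix(6, [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)])
print(cyclomatic_complexity(m))
```

With probabilities or traversal costs as link weights, the same matrix supports the richer analyses mentioned above.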
LOOP TESTING
This white box testing technique focuses exclusively on the validity of loop constructs.
Four different classes of loops can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
SIMPLE LOOPS
The following tests should be applied to simple loops, where n is the maximum number
of allowable passes through the loop:
1. Skip the loop entirely.
2. Make only one pass through the loop.
3. Make m passes through the loop, where m < n.
4. Make n-1, n and n+1 passes through the loop.
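These pass counts can be generated mechanically. The sketch below uses an invented summing loop as the unit under test and assumes n = 10 allowable passes:

```python
def loop_test_values(n, m=None):
    """Pass counts for simple-loop testing: skip, one pass, m < n passes,
    and n-1, n, n+1 passes."""
    if m is None:
        m = n // 2          # some m strictly less than n
    return [0, 1, m, n - 1, n, n + 1]

def sum_first(values, passes):
    """Toy unit under test: a loop that sums at most `passes` items."""
    total = 0
    for i, v in enumerate(values):
        if i >= passes:
            break
        total += v
    return total

data = list(range(10))      # the loop allows at most n = 10 passes here
for p in loop_test_values(10):
    print(p, sum_first(data, p))
```

The n+1 case is the interesting one: it probes what the loop does when asked to exceed its allowable passes (here it simply consumes all ten items).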
NESTED LOOPS
The testing of nested loops cannot simply extend the simple-loop technique, since
this would result in a geometrically increasing number of test cases. One approach for
nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at
their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer
loops at minimums and other nested loops at typical values.
4. Continue until all loops have been tested.
CONCATENATED LOOPS
Concatenated loops can be tested as simple loops if each loop is independent of the
others. If they are not independent (e.g., the loop counter for one is used in the
other), then the nested approach can be used.
UNSTRUCTURED LOOPS
This type of loop should be redesigned, not tested.
OTHER WHITE BOX TECHNIQUES
Other white box testing techniques include:
1. Condition testing - exercises the logical conditions in a program.
2. Data flow testing - selects test paths according to the locations of definitions and
uses of variables in the program.
4.9 BLACK BOX TESTING
Black box testing attempts to derive sets of inputs that will fully exercise all the functional
requirements of a system. It is not an alternative to white box testing. This type of testing
attempts to find errors in the following categories:
1. Incorrect or missing functions,
2. Interface errors,
3. Errors in data structures or external database access,
4. Performance errors, and
5. Initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volumes can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box
testing tends to be applied during later stages. Test cases should be derived which:
1. Reduce the number of additional test cases that must be designed to achieve
reasonable testing, and
2. Tell us something about the presence or absence of classes of errors, rather than
about an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which
test cases can be derived. Equivalence partitioning strives to define a test case that uncovers
a class of errors, thereby reducing the number of test cases needed. It is based on an
evaluation of equivalence classes for an input condition. An equivalence class may be
defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid equivalence class
are defined.
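As a sketch of guideline 1, consider a hypothetical input field that must be an integer in the range 1..100 (the field and range are invented); the range yields one valid and two invalid equivalence classes, and one representative test case per class suffices:

```python
def classify(value, low=1, high=100):
    """Place an input in its equivalence class for a range condition."""
    if value < low:
        return "invalid-below"      # invalid class: below the range
    if value > high:
        return "invalid-above"      # invalid class: above the range
    return "valid"                  # the single valid class

# one representative test input per equivalence class
representatives = {-5: "invalid-below", 50: "valid", 250: "invalid-above"}
for value, expected in representatives.items():
    assert classify(value) == expected
print("all three equivalence classes covered")
```

Three test inputs cover behavior that naive testing might probe with dozens of values, which is exactly the economy the technique aims for.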
Boundary Value Analysis (BVA)
This method leads to a selection of test cases that exercise boundary values. It
complements equivalence partitioning, since it selects test cases at the edges of a class.
Rather than focusing solely on input conditions, BVA derives test cases from the output
domain as well. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include the values a and b,
and values just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed
to exercise the minimum and maximum numbers, as well as values just above and
just below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be
designed to exercise the data structures at their boundaries.
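Guideline 1 can be sketched as a small helper that emits the boundary test values for a range [a, b]; a step of 1 is assumed for integer inputs (a real harness would choose the step from the input's precision):

```python
def bva_values(a, b, step=1):
    """Boundary values for an input range [a, b]: a and b themselves,
    plus the values just below and just above each."""
    return [a - step, a, a + step, b - step, b, b + step]

print(bva_values(1, 100))
```

For the 1..100 range used earlier, this yields 0, 1, 2, 99, 100 and 101: the two invalid neighbours, both endpoints, and the values just inside them.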
Cause Effect Graphing Techniques
Cause effect graphing is a technique that provides a concise representation of logical
conditions and their corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module, and an
identifier is assigned to each.
2. A cause effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
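The four steps can be sketched for a toy module with two causes and one effect (the causes, effect, and rule are invented); the decision table is enumerated exhaustively and each rule becomes one test case:

```python
from itertools import product

# step 1: list causes (input conditions) and effects (actions) with identifiers
causes = ["C1: valid account", "C2: sufficient funds"]

def effect_e1(c1, c2):
    """Step 2 (the graph's logic): E1 (approve withdrawal) fires only
    when both causes hold."""
    return c1 and c2

# step 3: expand the logic into a decision table, one rule per cause combination
decision_table = [
    {"C1": c1, "C2": c2, "E1": effect_e1(c1, c2)}
    for c1, c2 in product([True, False], repeat=2)
]

# step 4: each rule is a test case (cause values = inputs, effect = expected result)
for rule in decision_table:
    print(rule)
```

In practice the graph notation is used precisely to avoid this exhaustive expansion when there are many causes; the full table here is affordable only because there are two.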
Unit, Component and Integration Testing
The definitions of unit, component, and integration testing are recursive.
Unit: The smallest compilable component. A unit typically is the work of one programmer
(at least in principle). It does not include any called sub-components (for procedural
languages) or communicating components in general.
Unit testing
In unit testing, the called components (or communicating components) are replaced
with stubs, simulators, or trusted components. Calling components are replaced with drivers
or trusted super-components. The unit is tested in isolation.
Component
A unit is a component. The integration of one or more components is also a component.
Component testing is the same as unit testing, except that all stubs and drivers are
replaced with the real thing.
Two (or more) components are said to be integrated when they have been compiled,
linked, and loaded together, and they have successfully passed the integration tests at the
interface between them.
Thus components A and B are integrated to create a new and larger component
(A, B). Note that this does not conflict with the idea of incremental integration; it just
means that A is a big component and B, the component added, is a small one.
Integration testing: Carrying out integration tests.
Integration tests for procedural languages:
This is easily generalized for OO languages by using the equivalent constructs for
message passing. In the following, the word "call" is to be understood in the most general
sense of a data flow and is not restricted to formal subroutine calls and returns; it includes,
for example, the passage of data through global data structures and/or the use of pointers.
Let A and B be two components in which A calls B.
Let Ta be the component-level tests of A.
Let Tb be the component-level tests of B.
Let Tab be the tests in A's suite that cause A to call B.
Let Tbsa be the tests in B's suite for which it is possible to sensitize A (the inputs are to A,
not B).
Then Tbsa + Tab is the integration test suite (+ denotes union).
Note
"Sensitize" is a technical term: it means choosing inputs that will cause a routine to go
down a specified path. The inputs are to A; not every input to A will cause A to traverse a
path in which B is called. Tbsa is the set of tests which do cause A to follow a path in which
B is called. The outcome of the test of B may or may not be affected.
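The suite definition above can be expressed directly with sets; the test identifiers below are invented for illustration:

```python
Ta = {"a1", "a2", "a3"}           # component-level tests of A
Tb = {"b1", "b2"}                 # component-level tests of B
Tab = {"a2", "a3"}                # tests in A's suite that cause A to call B
Tbsa = {"b1"}                     # tests of B sensitized through inputs to A

integration_suite = Tbsa | Tab    # Tbsa + Tab, where + is union
print(sorted(integration_suite))
```

Note that b2 is excluded: it is a test of B that cannot be sensitized through A's inputs, so it belongs to component testing of B, not to the integration suite.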
System Testing
When integration tests are completed, a software system has been assembled and its
major subsystems have been tested. At this point, the developers/testers begin to test the
system as a whole.
System test planning should begin at the requirements phase with the development of
a master test plan and requirements-based (black box) tests.
System testing evaluates both functional behavior and quality requirements such as
reliability, usability, performance and security. This phase of testing is especially useful for
detecting external hardware and software interface defects, for example those causing race
conditions, deadlocks, problems with interrupts and exception handling, and ineffective
memory usage. After the system test, the software will be turned over to users for evaluation
during acceptance testing or alpha/beta testing.
As to the difference between integration testing and system testing: system testing
specifically goes after behaviors and bugs that are properties of the entire system, as distinct
from properties attributable to components (unless, of course, the component in question
is the entire system). Examples of system testing issues: resource loss bugs, throughput
bugs, performance, security, recovery, and transaction synchronization bugs (often misnamed
"timing bugs").
Code Coverage Analysis
This gives a complete description of code coverage analysis (test coverage analysis),
a software testing technique.
Code coverage analysis is the process of:
- Finding areas of a program not exercised by a set of test cases,
- Creating additional test cases to increase coverage, and
- Determining a quantitative measure of code coverage, which is an indirect measure
of quality.
An optional aspect of code coverage analysis is:
Identifying redundant test cases that do not increase coverage.
A code coverage analyzer automates this process.
Coverage analysis is used to assure the quality of your set of tests, not the quality
of the actual product. Coverage analysis requires access to the test program's source
code and often requires recompiling it with a special command.
Coverage analysis has certain strengths and weaknesses. You must choose from a
range of measurement methods. You should establish a minimum percentage of coverage,
to determine when to stop analyzing coverage. Coverage analysis is one of many testing
techniques; you should not rely on it alone.
Code coverage analysis is sometimes called test coverage analysis. The two terms
are synonymous. The academic world more often uses the term "test coverage" while
practitioners more often use "code coverage". Likewise, a coverage analyzer is
sometimes called a coverage monitor.
Structural Testing and Functional Testing
Code coverage analysis is a structural testing technique (also called glass box
testing or white box testing). Structural testing compares test program behavior
against the apparent intention of the source code. This contrasts with functional
testing (black box testing), which compares test program behavior against a
requirements specification. Structural testing examines how the program works,
taking into account possible pitfalls in the structure and logic. Functional testing
examines what the program accomplishes, without regard to how it works internally.
Structural testing is also called path testing since you choose test cases that cause
paths to be taken through the structure of the program.
At first glance, structural testing seems unsafe, since structural testing cannot
find errors of omission. However, requirements specifications sometimes do not
exist, and are rarely complete. This is especially true near the end of the product
development time line, when the requirements specification is updated less frequently
and the product itself begins to take over the role of the specification. The
difference between functional and structural testing blurs near release time.
The Premise
The basic assumptions behind coverage analysis tell us about the strengths and
limitations of this testing technique. Some fundamental assumptions are listed below.
Faults relate to control flow and you can expose faults by varying the control
flow; for example, a programmer wrote f(c) rather than f(!c).
You can look for failures without knowing what failures might occur, and if all
tests are reliable, successful test runs imply program correctness. The tester
understands what a correct version of the program would do and can identify
differences from the correct behavior.
Other assumptions include achievable specifications, no faults of omission, and no
unreachable code.
Clearly these assumptions do not always hold. Coverage analysis exposes some
plausible faults but does not come close to exposing all classes of faults. Coverage
analysis provides more benefit when applied to an application that makes a lot of
decisions than to data-centric applications, such as a database application.
4.10 BASIC MEASURES
A large variety of coverage measures exist. Here is a description of some fundamental
measures and their strengths and weaknesses.
Statement Coverage
This measure reports whether each executable statement is encountered. Also known
as: line coverage, segment coverage and basic block coverage. Basic block coverage is
the same as statement coverage except the unit of code measured is each sequence of
non-branching statements.
The chief advantage of this measure is that it can be applied directly to object code
and does not require processing source code. Performance profilers commonly implement
this measure.
SOFTWARE ENGINEERING
NOTES
137 ANNA UNIVERSITY CHENNAI
The chief disadvantage of statement coverage is that it is insensitive to some
control structures. For example, statement coverage does not report whether loops
reach their termination condition, only whether the loop body was executed.
Statement coverage is completely insensitive to the logical operators.
Statement coverage cannot distinguish consecutive switch labels.
Test cases generally correlate more to decisions than to statements. You probably
would not have 10 separate test cases for a sequence of 10 non-branching statements;
you would have only one test case. Basic block coverage eliminates this problem.
One argument in favour of statement coverage over other measures is that faults are
evenly distributed through code; therefore the percentage of executable statements covered
reflects the percentage of faults discovered.
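The blind spot can be seen in a minimal sketch (the function and its inputs are invented for illustration): a single test with the condition true reaches every statement, so statement coverage reports 100%, yet the false branch is never exercised.

```python
def apply_discount(price, is_member):
    # A single test with is_member=True executes every statement here,
    # giving 100% statement coverage...
    if is_member:
        price = price * 0.9
    return price

# ...yet the is_member=False path was never taken by that test, so a fault
# that only appears on the untaken branch would go undetected.
print(apply_discount(100, True))   # 90.0 - the only test, full statement coverage
print(apply_discount(100, False))  # 100  - the case statement coverage missed
```

Decision coverage would flag the gap, because the if-statement never evaluated to false.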
Decision Coverage
This measure reports whether Boolean expressions tested in control structures (such
as the if-statement and while-statement) evaluated to both true and false. The entire
Boolean expression is considered one true-or-false predicate regardless of whether it
contains logical-and or logical-or operators.
Condition Coverage
Condition coverage reports the true or false outcome of each Boolean sub-expression,
separated by logical-and and logical-or if they occur. Condition coverage measures
the sub-expressions independently of each other.
This measure is similar to decision coverage but has better sensitivity to the control
flow.
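The distinction can be illustrated with a hypothetical decision `a and b` (note that in a short-circuiting language the second operand may not be evaluated at all when the first is false):

```python
def decision(a, b):
    return a and b

# Condition coverage: each sub-expression must evaluate to both True and
# False across the suite. These two tests achieve that for a and for b...
tests = [(True, False), (False, True)]
results = [decision(a, b) for a, b in tests]
print(results)   # [False, False]

# ...yet the decision as a whole was False in both tests, so condition
# coverage alone did not force the decision to evaluate both ways.
```

This is why condition coverage, despite its better sensitivity to the sub-expressions, does not by itself subsume decision coverage.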
Multiple Condition Coverage
Multiple condition coverage reports whether every possible combination of Boolean
sub-expressions occurs. As with condition coverage, the sub-expressions are separated
by logical-and and logical-or when present.
A disadvantage of this measure is that it can be tedious to determine the minimum
set of test cases required, especially for very complex Boolean expressions. An
additional disadvantage is that the number of test cases required can vary
substantially among conditions that have similar complexity.
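A short sketch shows the combinatorial growth; the three-term decision used as an example is hypothetical:

```python
from itertools import product

def multiple_condition_tests(n):
    """All 2**n combinations of truth values for n Boolean sub-expressions."""
    return list(product([False, True], repeat=n))

# For a decision with three sub-expressions, e.g. (a and b) or c, multiple
# condition coverage requires every one of the 2**3 = 8 combinations:
combos = multiple_condition_tests(3)
print(len(combos))   # 8
for a, b, c in combos:
    print(a, b, c, "->", (a and b) or c)
```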
Condition /Decision Coverage
Condition/decision coverage is a hybrid measure composed of the union of condition
coverage and decision coverage.
It has the advantage of simplicity but without the shortcomings of its component
measures.
Path Coverage
This measure reports whether each of the possible paths in each function has been
followed. A path is a unique sequence of branches from the function entry to the
exit. It is also known as predicate coverage.
Path coverage has the advantage of requiring very thorough testing. Path coverage
has two severe disadvantages. The first is that the number of paths is exponential to the
number of branches. For example, a function containing 10 if-statements has 1024 paths
to test. Adding just one more if-statement doubles the count to 2048. The second
disadvantage is that many paths are impossible to exercise due to relationships of data.
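The 1024 and 2048 figures follow directly from the exponential growth, as a quick check shows:

```python
def sequential_if_paths(n):
    """Paths through n sequential, independent if-statements: each statement
    contributes a taken/not-taken choice, doubling the path count."""
    return 2 ** n

print(sequential_if_paths(10))  # 1024
print(sequential_if_paths(11))  # 2048 - one more if-statement doubles it
```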
Function Coverage
This measure reports whether you invoked each function or procedure. It is useful
during preliminary testing to assure at least some coverage in all areas of the software.
Broad, shallow testing finds gross deficiencies in a test suite quickly.
Call Coverage
This measure reports whether you executed each function call. The hypothesis is that
faults commonly occur in interfaces between modules.
Data flow Coverage
This variation of path coverage considers only the sub-paths from variable
assignments to subsequent references of the variables. The advantage of this measure
is that the paths reported have direct relevance to the way the program handles data.
One disadvantage is that this measure does not include decision coverage. Another
disadvantage is complexity.
Object Code Branch Coverage
This measure gives results that depend on the compiler rather than on the program
structure since compiler code generation and optimization techniques can create object
code that bears little similarity to the original source code structure.
Loop Coverage
This measure reports whether you executed each loop body zero times, exactly
once, and more than once. The valuable aspect of this measure is determining whether
while-loops and for-loops execute more than once, information not reported by other
measures.
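A minimal illustration of the three loop-coverage cases, using an invented summing function:

```python
def total(items):
    s = 0
    for x in items:   # the loop body under measurement
        s += x
    return s

# Loop coverage asks for three classes of test per loop:
print(total([]))         # 0 - body executed zero times
print(total([5]))        # 5 - body executed exactly once
print(total([1, 2, 3]))  # 6 - body executed more than once
```

A test suite containing only the last call would satisfy statement and decision coverage of the loop, yet still miss the zero-iteration and single-iteration cases.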
Race Coverage
This measure reports whether multiple threads execute the same code at the same
time. It helps detect failure to synchronize access to resources. It is useful for testing
multithreaded programs as in an operating system.
Relational Operator Coverage
This measure reports whether boundary situations occur with relational operators
(<,<=, >,>=).
Weak Mutation Coverage
This measure is similar to relational operator coverage but much more general. It
reports whether test cases occur which would expose the use of wrong operators and
also wrong operands. It works by reporting coverage of conditions derived by
substituting (mutating) the program's expressions with alternate operators, such as
'-' substituted for '+', and with alternate variables substituted.
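A minimal sketch of the idea, with an invented expression: the original uses '+', the mutant substitutes '-', and a test input contributes to weak mutation coverage of that substitution only if it makes the two expressions differ.

```python
def original(x, y):
    return x + y

def mutant(x, y):
    return x - y   # '+' mutated to '-'

def exposes_mutation(x, y):
    """True when this test input distinguishes the original from the mutant."""
    return original(x, y) != mutant(x, y)

print(exposes_mutation(3, 0))  # False: with y == 0, 3+0 == 3-0, mutant hidden
print(exposes_mutation(3, 2))  # True:  5 != 1, the wrong operator is exposed
```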
Table Coverage
This measure indicates whether each entry in a particular array has been referenced.
This is useful for programs that are controlled by a finite state machine.
Comparing measures
You can compare relative strengths when a stronger measure includes a weaker
measure.
- Decision coverage includes statement coverage since exercising every branch must
lead to exercising statements.
- Condition/Decision coverage includes decision coverage and condition coverage
(by definition).
- Path coverage includes decision coverage.
- Predicate coverage includes path coverage and multiple condition coverage, as
well as most other measures.
Coverage Goal for Release
Each project must choose a minimum percent coverage for release criteria based on
available testing resources and the importance of preventing post-release failures. Clearly,
safety critical software should have a high coverage goal. You might set a higher coverage
goal for unit testing than for system testing since a failure in lower level code may affect
multiple high level callers.
Using statement coverage, decision coverage, or condition/decision coverage you
generally want to attain 80%-90% coverage or more before releasing. Some people feel
that setting any goal less than 100% coverage does not assure quality. However, you
expend a lot of effort attaining coverage approaching 100%. The same effort might
find more faults in a different testing activity, such as formal technical review.
Avoid setting a goal lower than 80%.
Intermediate Coverage Goals
Choosing good intermediate coverage goals can greatly increase testing productivity.
Your highest level of testing productivity occurs when you find the most failures
with the least effort. Effort is measured by the time required to create test cases,
add them to your test suite and run them. It follows that you should use a coverage
analysis strategy that increases coverage as fast as possible. This gives you the
greatest probability of finding failures sooner rather than later.
4.11 PLANNING FOR TESTING
The testing process can be divided into three phases: planning, acquisition, and execution
and evaluation. The planning phase provides an opportunity for the tester to determine
what to test and how to test it. The acquisition phase is the time during which the required
testing software is manufactured, data sets are defined and collected, and detailed test
scripts are written. During the execution and evaluation phase the test scripts are executed
and the results of that execution are evaluated to determine whether the product passed
the test.
Test Strategy
The project test plan should describe the overall strategy that the project will follow
for testing the final application and the products leading up to the completed application.
Strategic decisions that may be influenced by the choice of development paradigms and
process models include:
When to test? The test plan should show how the stages of the testing process, such
as component, integration and acceptance, correspond to stages of the development
strategy. For incremental development, incremental testing is a natural fit: testing
can begin as soon as some coherent unit is developed and continues on successively
larger units until the complete application is tested, and this approach provides
for earlier deliveries of executable units. When development delivers the complete
product at once, the big bang testing strategy, in which the first test is performed
on the complete product, is necessary. This strategy often results in costly
bottlenecks as small faults prevent the major system functionality from being
exercised.
Who will test? Independent testers and developer/testers. The test plan should
clearly assign responsibilities for the various stages of testing to project
personnel. The independent
tester brings a fresh perspective to how well the application meets the requirements. Using
such a person for the component test requires a long learning curve, which may not be
practical in a highly iterative environment. The developer brings knowledge of the
details of the program but also a bias concerning his/her own work. I favour involving
developers in testing as do many others, but this only works if there are clear guidelines
about what to test and how.
What will be tested? System and component testing. The test plan should provide
clear objectives for each type of testing. The amount of each type of testing will be
determined by various factors. For example, the higher the priority of reuse in the
project plan, the higher should be the priority of component testing in the testing
strategy. Component testing is a major resource sink, but it can have tremendous
impact on quality.
Detailed Test Plans
The major output of the planning phase is a set of detailed test plans. In a project
that has functional requirements specified by use cases, a test plan should be
written for each use case. There are a couple of advantages to this. First, since
many managers schedule development activity in terms of use cases, the functionality
that becomes available for testing will be in use case increments; this facilitates
determining which test plans should be utilized in a specific iteration. Second, this
approach improves the traceability from the test cases back into the requirements
model, so that changes to the requirements can be matched by changes to the test
cases.
4.11.1 Testing the Requirements Model
Writing the detailed test plans provides an opportunity for a detailed investigation
of the requirements model. A test plan for a use case requires the identification of
the underlying domain objects for each use case. Since an object will typically apply
to more than one use case, this gives the opportunity to locate inconsistencies in
the requirements model. Typical errors include conflicting defaults, inconsistent
naming, incomplete domain definitions and unanticipated interactions.
The individual test cases are constructed for each use case by identifying the domain
objects that cooperate to provide the use and by identifying the equivalence classes
for each object. The equivalence classes for a domain object can be thought of as a
subset of the states identified in the dynamic models of the object. Each test case
represents one combination of values of each domain object in the use case scenario.
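Constructing the combinations can be sketched with Python's itertools; the domain objects and their equivalence classes below are entirely hypothetical:

```python
from itertools import product

# Equivalence classes for the domain objects of one hypothetical use case.
equivalence_classes = {
    "account": ["empty", "active", "frozen"],
    "amount":  ["zero", "in-range", "over-limit"],
}

# Each test case is one combination of values of each domain object.
test_cases = list(product(*equivalence_classes.values()))
print(len(test_cases))   # 9 test cases (3 x 3)
print(test_cases[0])     # ('empty', 'zero')
```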
As the use case test plan is written, an input data specification table captures the
information required to construct the test cases. That information includes the class
from which the domain object is instantiated, the state space of the class, and
significant states (boundary values) for the objects. As the tester writes additional
test plans and encounters
additional objects from the same class, the information from one test plan can be
used to facilitate the completion of the current test plan.
4.11.2 Testing Interactions
Creating use case-level test plans also facilitates the identification and
investigation of interactions: situations in which one object affects another object,
or one attribute of an object affects other attributes of the same object. Certainly
many interactions are useful and necessary; that is how objects achieve their
responsibilities. However, there are also undesirable or unintended interactions,
where an object's state is affected by another object in unanticipated ways. Two
objects might share a component object because a pointer to the one object was
inadvertently passed to the two encapsulating objects, instead of a second new object
being created and passed to one of them. A change made in one of the encapsulating
objects is then seen by the other encapsulating object.
Even an intended interaction gone bad can cause trouble. For example, if an error
prevents the editing of a field, then it is more probable that the same, or a
related, error will prevent us from clearing that same field. This is due to the
intentional use of a single component object to handle both responsibilities.
The brute force technique for searching for unanticipated interactions is to test all
possible permutations of the equivalence classes entered in the input data
specification table. If this proves to be too much information, or requires too many
resources for the information gained, the tester can use all possible permutations of
successful executions but only include a single instance of error conditions and
exceptional situations. These selection criteria represent successively less thorough
coverage but also require fewer resources.
Since the tester often does not have access to the code, the identification of
interactions is partially a matter of intuition and inference. The resources required
for the permutation approach can be reduced further by making assumptions about where
interactions do not exist. That is, there is no need to consider different
combinations of object values if the value of one object does not influence the value
of another. So test cases are constructed to exercise permutations only within a set
of interacting objects.
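The saving can be sketched with hypothetical objects: permuting only within the interacting pair, and holding the non-interacting object at one representative value, shrinks the suite.

```python
from itertools import product

# Hypothetical equivalence classes; assume 'printer' does not interact
# with the other two objects.
classes = {
    "editor":  ["empty", "dirty"],
    "field":   ["valid", "invalid"],
    "printer": ["online", "offline"],
}

# Brute force: permute everything.
full = list(product(*classes.values()))
print(len(full))     # 8 combinations

# Reduced: permute only the interacting pair; fix 'printer' at one value.
reduced = [(e, f, "online")
           for e, f in product(classes["editor"], classes["field"])]
print(len(reduced))  # 4 combinations
```

The saving grows quickly as more non-interacting objects are held at single representative values.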
4.11.3 Allocation of Resources
Testing can be a resource-intensive activity. The tester may need to reserve special
hardware or he/she may have to construct large, complex data sets. The tester always
will have to spend large amounts of time verifying that expected results actually
correspond to the correct behavior. In this section I want to present two techniques
for determining which parts of the product should be tested more intensely than other
parts. This information will be used to reduce the amount of effort required while
only marginally affecting the quality of the resulting product.
Use Profile
One technique for allocating testing resources uses use profiles as the basis for
determining which parts of the application will be utilized the most, and then tests
those parts the most. The principle here is: test the most used parts of the program
over a wider range of inputs than lesser used portions, to ensure greatest user
satisfaction.
A use profile is simply a frequency graph that illustrates the number of times that an
end user function is used, or is anticipated to be used, in the actual operation of the program.
The profile can be constructed in a couple of ways. First, data can be collected from
actual use such as during usability testing. This results in a raw count profile. Second, a
profile can be constructed by reasoning about the meanings and responsibilities of the
system interface. The result is a relative ordering of the end user functions rather than a
precise frequency count.
The EXIT function for the system will be successfully completed exactly once per
invocation of the program, but the SAVE function may be used numerous times. It is
conceivable that the create FOOTNOTE function might not be used at all during a use
of the system. This results in a profile that indicates an ordering of SAVE, EXIT and
create FOOTNOTE. The SAVE function will be tested over a much wider range of inputs
than the create FOOTNOTE function.
A second approach to use profiling is to rate each use case on a scale. In projects
mentored by software architects, we use a form of a use case that includes fields
that record an estimate of how frequently the use will be activated and how critical
the use described in the use case scenario is to the operation of the system. An
estimate is also made of the relative complexity of each use case.
The frequency field can be used to support the first approach to ordering the use
cases. The criticality field can also be used to order the use cases. However,
neither of these attributes is really adequate by itself. For example, we might paint
a logo in the lower right hand corner of each window. This would be a relatively
frequent event, but should it fail the system will still be able to provide the
important functionality to the user. Likewise, attaching to the local database server
would happen very seldom, but its success is critical to the success of certain other
functions.
4.11.4 Risk Analysis
A second technique for allocating testing resources is based on the concept of risk.
A risk is anything that threatens the successful achievement of the project's goals.
The principle here is: test most heavily those portions of the system that pose the
highest risk to the project, to ensure that the most harmful faults are identified.
Risks are divided into three types: business, technical and project risks. Project
risks are largely managerial and environmental risks, such as an insufficient supply
of qualified personnel, that do not directly affect the testing process.
Business risks correspond to domain related concepts. For example, changes in IRS
reporting regulations would be a risk for an accounting system, because the system's
functionality must be altered to conform to the new regulations. This type of risk is
related to the functionality of the program and therefore to the system level
testing.
Technical risks include some implementation concepts. For example, a failure to use
pointers correctly is a technical risk. This type of risk is related to the
implementation of the program and hence to the component level testing process.
Applying Risk Analysis to System Testing
The output from the risk analysis process is a prioritized list of risks to the
project. This list must be translated into an ordering of the use cases. The ordering
in turn is used to determine the amount of testing applied to each use case. For the
purpose of system testing, consider those business risks that address the domain
within which the application is located.
The criticality value combined with the risk associated with the use case can produce
a ranking that identifies those use cases which describe behavior that is critical to
the success of the system but that is also most vulnerable to the risks faced by the
project. A highly critical use case that has a high risk should obviously receive a
high rating for the number of test cases to be generated, while a non-critical use
case with low risk should receive a low rating. There are several strategies possible
for combining the risk and criticality values when the result is not so obvious. An
averaging strategy would assign a medium rating to a low risk yet highly critical use
case, while a conservative strategy would assign a high rating to that same use case.
The choice of strategy is not important to our discussion and is domain and
application specific.
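The two combination strategies can be sketched as follows; the 1-3 rating scale and the example values are invented for illustration:

```python
# Ratings on a 1 (low) to 3 (high) scale.
def averaging(risk, criticality):
    """Average the two ratings, rounding to the nearest whole rating."""
    return round((risk + criticality) / 2)

def conservative(risk, criticality):
    """Take the worse (higher) of the two ratings."""
    return max(risk, criticality)

# A low-risk (1) but highly critical (3) use case:
print(averaging(1, 3))     # 2 - medium test rating
print(conservative(1, 3))  # 3 - high test rating
```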
The rating of the use case is used to determine which combinations of values are
used. A low rating would indicate that each equivalence class for each object should
be represented in some test case. A high rating indicates that each equivalence class
for each object should be used in combination with every other equivalence class from
the other objects (all permutations).
What is the best tester to developer ratio?
Reported tester:developer ratios range from 10:1 to 1:10.
Jermy L. Mordkoff writes:
There's no simple answer. It depends on so many things: amount of reused code,
number and type of interfaces, platform, quality goals, etc.
It also can depend on the development model. The more specs, the fewer testers. The
role can play a big part also.
Boris Beizer adds:
These figures can all vary very widely depending on how you define "tester" and
"developer". In some organizations, a tester is anyone who happens to be testing
software at the time, such as their own. In other organizations, a tester is only a
member of an independent test group.
It is far, far better to ask about the test labor content than it is to ask about the
tester/developer ratio. The test labor content, across most applications, is
generally accepted as 50% when people do honest accounting. For life critical
software, this can go up to 80%.
4.12 SOFTWARE MAINTENANCE
The term software maintenance usually refers to changes that must be made to software
after it has been delivered to the customer or user. The definition of software
maintenance by IEEE (1993) is as follows:
The modification of a software product after delivery to correct faults, to improve
performance or other attributes, or to adapt the product to a modified environment.
There are four types of software maintenance:
Corrective Maintenance
Adaptive Maintenance
Perfective Maintenance
Preventive Maintenance
Corrective maintenance deals with the repair of faults or defects found. A defect
can result from design errors, logic errors and coding errors. Design errors occur
when, for example, changes made to the software are incorrect, incomplete, or wrongly
communicated, or the change request is misunderstood.
Logic errors result from invalid tests and conclusions, incorrect implementation of
design specifications, faulty logic flow or incomplete testing of data. Coding errors
are caused by incorrect implementation of detailed logic design and incorrect use of
source code logic design. Defects are also caused by data processing errors and
system performance errors. All these errors, sometimes called residual errors or
bugs, prevent the software from conforming to its agreed specifications.
The need for corrective maintenance is usually initiated by bug reports drawn up by
the end users.
Adaptive maintenance consists of adapting software to changes in the environment,
such as the hardware or the operating system. The term environment in this context
refers to the totality of all conditions and influences which act from outside upon
the system, for example business rules, government policies, work patterns, and
software and hardware platforms. The need for adaptive maintenance can only be
recognized by monitoring the environment.
Consider a case study on the adaptive maintenance of an Internet application, B4U
Call. B4U Call is an Internet application that helps compare mobile phone packages
offered by different service providers; adding or removing a complete new service
provider requires adaptive maintenance on the system.
Perfective maintenance concerns functional enhancements to the system, and activities
to increase the system's performance or to enhance its user interface. A successful
piece of software tends to be subjected to a succession of changes, resulting in an
increase in the number of requirements. This is based on the premise that as the
software becomes useful, the users tend to experiment with new cases beyond the scope
for which it was initially developed. Examples of perfective maintenance include
adding a new report in the sales analysis system, improving a terminal dialogue to
make it more user friendly, and adding an online HELP command.
Preventive maintenance concerns activities aimed at increasing the system's
maintainability, such as updating documentation, adding comments and improving the
modular structure of the system.
The long term effect of corrective, adaptive and perfective changes is to increase
the system's complexity. As a large program is continuously changed, its complexity,
which reflects deteriorating structure, increases unless work is done to maintain or
reduce it. This work is known as preventive change. The change is usually initiated
from within the maintenance organization with the intention of making the program
easier to understand and hence facilitating future maintenance work.
Examples of preventive change include restructuring and optimizing code and updating
documentation.
Among these four types of maintenance only corrective maintenance is traditional
maintenance. The other types can be considered software evolution.
Software evolution is now widely used in the software maintenance community.
In order to increase the maintainability of software, we need to know what
characteristics of a product affect its maintainability. The factors that affect
maintenance include system size, system age, number of input/output data items,
application type, programming language and the degree of structure.
Larger systems require more maintenance effort than smaller systems, because there
is a greater learning curve associated with larger systems, and larger systems are
more complex in terms of the variety of functions they perform.
For example, a 10% change in a module of 200 lines of code is more expensive than a
20% change in a module of 100 lines of code. The factors that decrease maintenance
effort are
1. Use of structured techniques
2. Use of automated tools
3. Use of data-base techniques
4. Good data administration
5. Experienced maintenance personnel
Maintenance tasks
Maintenance tasks can be grouped into five categories
- Analysis /Isolation
- Design
- Implementation
- Testing
- Documentation
Analysis/isolation tasks consist of impact analysis, cost benefit analysis and
isolation. Impact analysis and cost benefit analysis consist of analyzing different
implementation alternatives and comparing their effect on schedule, cost and
operation. Isolation refers to the time spent trying to understand the problem or the
proposed enhancements to the system.
Design consists of redesigning the system based on the understanding of the necessary
changes. Implementation entails coding and unit testing, and other software tests
required after incorporating changes.
Documentation consists of system, user and other documentation. System documentation
refers to the time spent writing or revising the system description documents. User
documentation entails writing or revising the user's guide and other formal
documentation, excluding system documentation.
Documentation is very important since the future changes will rely on the documentation
of the previous changes/modifications.
The use of software maintenance tools simplifies the tasks and increases efficiency
and productivity. There are several criteria for selecting the right tool for the
task. These criteria are capability, features, cost/benefit, platform, programming
languages, ease of
use, openness of architecture, stability of vendor and organizational culture. The
chosen tool should support program understanding and reverse engineering, testing,
configuration management and documentation.
The tools mainly consists of visualization tools which assist the programmer in drawing
a model of the system.
Examples of program understanding and reverse engineering tools include the
propulsive, static analyzer, dynamic analyzer cross reference and depending analyzer.
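The essence of a cross-reference tool can be sketched in a few lines: map each identifier in a source file to the line numbers on which it appears. This is a hypothetical helper for illustration, not one of the commercial tools mentioned above (it treats keywords as identifiers too).

```python
# Minimal cross-reference sketch: identifier -> list of line numbers.
# Note: the regex also matches language keywords; a real tool would filter them.
import re
from collections import defaultdict

def cross_reference(source: str) -> dict:
    xref = defaultdict(list)
    for lineno, line in enumerate(source.splitlines(), start=1):
        for ident in re.findall(r"[A-Za-z_][A-Za-z_0-9]*", line):
            xref[ident].append(lineno)
    return dict(xref)

code = "total = 0\nfor i in range(10):\n    total = total + i\n"
assert cross_reference(code)["total"] == [1, 3, 3]
```

A maintainer can use such a listing to find every place a variable is read or written before attempting a change.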
Conclusion
Maintenance clearly plays an important role in the life cycle of a software product.
Maintenance is heavily impacted by the methods used to develop a product. Thus different
development methods result in different maintenance procedures. Iterative development
results in the creation of a working product after each iteration. Therefore maintenance
tasks are carried out on each working product created. This serves to ensure that problems
will not go undiagnosed and unfixed for long.
As the cost of maintenance has been estimated at 50% of total life cycle costs, cost
savings in this area can have a large impact on the overall life of a software project.
Sample Questions: Unit IV
1) Define the following terms: error, fault and failure.
2) What are the main objectives of software testing?
3) What is the difference between black box testing and white box testing?
4) What is the difference between a system test and an acceptance test?
5) What is regression testing?
6) What is a test plan? What are its contents?
7) What is requirements testing?
8) Explain the difference between verification and validation.
9) What are the different coverage-based testing techniques?
10) What are the fault-based testing techniques?
11) Are there any differences between conventional testing techniques and object
oriented testing techniques?
12) What is alpha testing?
13) What is beta testing?
14) What is meant by basis path testing?
15) What is a test adequacy criterion?
16) How is McCabe's cyclomatic complexity applied to testing?
17) What is meant by extended branch coverage?
18) Give the sample contents of a verification and validation plan.
19) What are the important characteristics of test case design?
20) What are test stubs? Why do you make use of them in unit testing?
UNIT V
SCM AND QUALITY ASSURANCE
5.1 INTRODUCTION
In all the preceding units, we have discussed various facets of software engineering
such as process paradigms, software design, testing and maintenance, and so on. We have
been emphasizing that the ultimate goal of any software development is to deliver good
quality software to the clients/users. Worldwide, all software development organizations
are becoming more concerned with the process of developing high quality software. We
need to focus on quality process methodologies, tools, scientific principles of project
management and other quality assessing strategies. CMM, ISO 9000 and IEEE 1074
have become popular standards that suggest ways and means to achieve quality. They turn
out to be strategic instruments for many development organizations. Even in well managed
organizations, the software frequently goes out of control, simply because the organization
fails to understand how to control the creative processes that are part of the software
development activities. In view of this, the mantra that is followed is Total Quality
Management (TQM). Software TQM should include the meticulous use of well drafted
plans, analysis and control of the software, and goals that cause quality software to
happen. In other words, TQM should include software quality assurance plans and the
process of planning itself.
Why should we worry so much about quality management? We need to focus mainly
because of one word: change. Change is part and parcel of any software development.
We have already discussed this issue in Unit II. Requirements are subject to changes.
These changes bring a lot of confusion and burden to the managers of the projects. They
need to adjust their plans, resources and estimates of cost and schedule. In view of this,
the changes are to be evaluated, monitored and implemented. Since many organizations
deal with families of software products, they require a separate team to keep track of the
changes, the implementation details and the modifications that have been made on several
modules; such an activity is called software configuration management (SCM).
The primary objective of SCM is to control critical processes in the development and
maintenance activities, thereby increasing productivity and at the same time improving
quality.
The basic objective of this unit is to highlight some of the quality assurance strategies,
planning activities and standards to be followed to ensure quality.
SQA and Configuration Management Monitoring: SQA assures that Software
Configuration Management (CM) activities are performed in accordance with CM plans,
standards and procedures. SQA reviews the CM plans for compliance with software CM
policies and requirements and provides follow-up for nonconformances. SQA audits the
CM functions for adherence to standards and procedures and prepares reports of its
findings.
The CM activities monitored and audited by SQA include
- Baseline Control
- Configuration identification
- Configuration control
- Configuration status accounting
- Configuration authentication
All these concepts are discussed in detail in later sections to follow.
LEARNING OBJECTIVES
1. To bring awareness about the importance of software quality
2. To critically assess various quality assurance plans
3. To be aware of international standards on software quality
4. To explain the importance of standards/tools to improve quality
5. To know about the current practices of configuration management.
5.2 SOFTWARE CONFIGURATION MANAGEMENT
Software configuration management is an umbrella activity that comprises the
following:
- Identify and evaluate changes
- Control changes
- Implement changes
- Communicate the modifications made to others.
From this it is very clear that software configuration management work starts when a
software development project begins and terminates only when the software is taken out
of operation. SCM helps to improve the ease with which changes can be incorporated
and reduces the effort required to modify the software in view of these changes. There is
a need for clear procedures on how proposed changes will be handled. Changes that are
entered via the back door lead to badly structured code, insufficient documentation and
cost and time overruns. Since changes may lead to different versions of both documentation
and code, the procedures to be followed in dealing with such changes are often handled in
the context of configuration management plans. The configuration management is often
supported by tools.
The key tasks of configuration management are discussed in section 5.2.1 below.
5.2.1 Key Tasks of Configuration Management
Configuration management deals with the management of all artifacts developed during
the course of a software development project. Even though configuration management
also plays a role during the operational phase of the system, what we discuss now is
confined to the role of configuration management during system development. As the
software engineering process progresses, a number of software configuration items (SCIs)
are generated, including the software project plan and the software requirements
specification. In view of the strong relationship among these items, any change anywhere
causes disturbance in certain activities throughout the life cycle. Let me quote the first law
of System Engineering, which states that no matter where you are in the system life cycle,
the system will change, and the desire to change it will persist throughout the life cycle.
Thus software configuration management is a set of activities that focus on the management
of changes throughout the software life cycle.
Before we discuss further, we shall assume that there is one official version of the
complete set of documents related to the project at any point in time. This is called the
baseline.
Definition of baseline
A baseline is a specification or product that has been developed with all the
specifications agreed upon and that thereafter serves as a basis for future development.
The baseline can be changed only through a formal change control procedure. Thus
the baseline is the shared project database containing all approved items. The items
contained in the baseline are the configuration items (CIs).
According to IEEE Std 610.12-1990, a configuration item is defined as follows.
Definition
A configuration item is an aggregate of hardware, software or both that is designated
for configuration management and treated as a single entity in the configuration management
process.
Some typical configuration items include:
- Requirement Specifications Document
- Design Document
- Test Cases
DMC 1703
NOTES
152 ANNA UNIVERSITY CHENNAI
- Test Plans
- Source Code
- Object code
- User Manual
A major task of configuration management is to maintain the integrity of this set of
artifacts.
In some organizations, even specific versions of software tools such as compilers,
editors and other CASE tools are considered configuration items.
These configuration items may be organized to form configuration objects that are
catalogued in the project database with a unique ID.
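The idea of cataloguing configuration items under unique IDs can be sketched as follows; the class name and fields are illustrative, not mandated by any standard.

```python
# Sketch: a catalog of configuration items keyed by a unique, auto-assigned ID.
from dataclasses import dataclass, field
import itertools

_next_id = itertools.count(1)  # simple unique-ID generator

@dataclass
class ConfigurationItem:
    name: str                  # e.g. "Requirements Specification Document"
    version: str = "1.0"
    ci_id: int = field(default_factory=lambda: next(_next_id))

catalog = {}
for name in ["Requirement Specification", "Design Document", "Source Code"]:
    ci = ConfigurationItem(name)
    catalog[ci.ci_id] = ci

assert len(catalog) == 3
assert catalog[1].name == "Requirement Specification"
```

Looking up any artifact by its ID is then trivial, which is what lets the later auditing and version-control tasks refer to items unambiguously.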
5.2.2 Software Configuration Management Process
Five important tasks are identified for the software configuration management (SCM)
process:
- Identification
- Version Control
- Change Control
- Configuration Auditing
- Reporting
The SCM process helps organizations to identify and manage existing versions of
programs and relevant documents and to implement changes very efficiently. Organizations
must ensure that the changes are verified and valid. They should be able to prioritize these
changes, and valid changes need to be incorporated into the configuration items. Any
proposed change to the baseline is called a change request.
Figure 5.1 Workflow of a change request scheme and CCB functions
(The figure shows a change request (CR) entering the scheme; the CCB analyzes the CR,
gathers more information about the change where needed, prioritizes the activities, and
either approves or defers the change; an approved change is prepared and scheduled for
incorporation, the change is implemented, the CI is updated, and the CR owner is notified.)
Adding an item to this database or changing an item is subject to a formal approval
scheme. For large projects, there is a separate division called the configuration control
board (CCB). The CCB ensures that any change in a CI is properly authorized and
implemented. The change request process and the CCB's role are given in figure 5.1; the
rectangular boxes represent the roles of the CCB. The process model given in figure 5.1
explains how the workflow of change requests can be managed.
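The change-request workflow can be sketched as a small state machine; the state names below are paraphrased from figure 5.1, and the class itself is purely illustrative.

```python
# Sketch: change-request lifecycle as a state machine with allowed transitions.
ALLOWED = {
    "submitted":   {"analyzed"},
    "analyzed":    {"approved", "deferred", "more-info"},
    "more-info":   {"analyzed"},
    "approved":    {"scheduled"},
    "scheduled":   {"implemented"},
    "implemented": {"ci-updated"},
    "ci-updated":  {"owner-notified"},
}

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.state = "submitted"

    def advance(self, new_state: str) -> None:
        # Reject any transition the workflow does not allow.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

cr = ChangeRequest("Fix report layout")
for s in ["analyzed", "approved", "scheduled",
          "implemented", "ci-updated", "owner-notified"]:
    cr.advance(s)
assert cr.state == "owner-notified"
```

Encoding the transitions explicitly is what prevents "back door" changes: a request cannot reach the implemented state without first passing through CCB approval.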
New items should not be added to the baseline until they have been thoroughly reviewed
and tested. Items from the shared database may be used freely and frequently by the
other team members. If a particular item has to be changed, the team member responsible
for implementing the change gets a copy of that item, and the item is temporarily locked so
that others cannot simultaneously update the same item. The person implementing the
change is free to tinker with the copy. After the change has been thoroughly reviewed and
tested, it is submitted back to the CCB. Once the CCB has approved it, the revised item is
included in the database with appropriate documentation, and the item is unlocked again.
When an item is changed, the old version is retained, since the old version may be used by
others. Sometimes it is necessary to trace all the changes made on older versions. That is
how we will have different versions of one and the same item. We must clearly distinguish
each one of them, so an adequate numbering system needs to be followed. Usually the
numbering system followed is X i.j, where i refers to major changes and j to minor changes
of the component X.
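The i.j numbering scheme described above can be sketched with a small helper (the function name is illustrative): i counts major changes and j counts minor changes.

```python
# Sketch of the i.j version numbering scheme: a major change bumps i and
# resets j; a minor change bumps j only.
def bump(version: str, major: bool = False) -> str:
    i, j = (int(part) for part in version.split("."))
    return f"{i + 1}.0" if major else f"{i}.{j + 1}"

v = "1.0"
v = bump(v)              # minor change -> "1.1"
v = bump(v)              # minor change -> "1.2"
v = bump(v, major=True)  # major change -> "2.0"
assert v == "2.0"
```

Keeping the rule in one place means every component X is numbered consistently, which is exactly what makes old versions traceable.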
Nowadays many configuration management tools are available to keep track of
version changes and at the same time record these changes. These configuration
management tools also help us to build an executable version of the system by retrieving
the items from the database and linking them.
5.2.3 Configuration Audit
In spite of all the tools and techniques available for identification, version control
and change control, we will not be able to get satisfactory results unless we validate them
properly. Formal technical reviews and software configuration audits are necessary to
ensure that configuration management is moving in the right direction.
The formal technical review deals with the technical correctness of the configuration
object that has been modified. The technical reviewers assess the software configuration
items to determine consistency with other items, omissions or potential side effects.
A software configuration audit is a complementary approach to the formal technical
review; it examines all records, starting from the initial change request to subsequent
changes, and verifies all documents against a checklist consisting of the following questions.
1. Has the change approved by the CCB been made? Have any additional modifications
been incorporated?
2. Has a formal technical review been conducted?
3. Have the software engineering standards been followed?
4. Have the changes been documented?
5. Have the persons responsible for these changes been identified?
6. Has all the information regarding change requests and modifications been properly
recorded and updated?
Such formal configuration audits also ensure that the correct SCIs have been made
and documented properly.
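The audit questions above lend themselves to a simple checklist structure; the wording of the items and the helper below are illustrative, condensed from the list.

```python
# Sketch: the configuration-audit questions as a checklist, with a helper
# that returns every item not answered "yes".
CHECKLIST = [
    "Change approved by the CCB has been made, with no extra modifications",
    "A formal technical review has been conducted",
    "Software engineering standards have been followed",
    "Changes have been documented",
    "Persons responsible for the changes are identified",
    "Change-request records are recorded and updated",
]

def audit(answers: dict) -> list:
    """Return the checklist items that failed the audit."""
    return [item for item in CHECKLIST if not answers.get(item, False)]

answers = {item: True for item in CHECKLIST}
answers["Changes have been documented"] = False
assert audit(answers) == ["Changes have been documented"]
```

An empty result from `audit` corresponds to a passed configuration audit; any non-empty result is the list of findings to report.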
5.3 CONFIGURATION MANAGEMENT PLAN
The Configuration Management Plan (CMP) comprises the procedures laid down for
configuration management. The CMP focuses on SCM management and SCM activities.
The management aspect deals with how the project is organized, what factors affect
configuration management, and the procedures to be adopted for change requests. The
activities describe how a configuration item will be identified and controlled and how its
status is accounted for and documented.
IEEE Standard 828-1990 specifies the contents of the configuration management plan,
which is given in table 5.1.
Table 5.1 Sample structure of Configuration Management Plan
(Source IEEE std 828-1990)
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions and acronyms
1.4 References
2. SCM Management
2.1 Organization
2.2 SCM responsibilities
2.3 Applicable Policies, Directions and Procedures
3. SCM Activities
3.1 Configuration Identification
3.2 Configuration Control
3.3 Configuration Status accounting
3.4 Configuration audit and reviews
3.5 Interface Control
3.6 Sub Contractor/ Vendor Control
4. SCM Schedules
5. SCM Resources
6. SCM Plan Maintenance
Sample questions for students:
1. What are the main tasks of configuration management?
2. Define a configuration item.
3. What is a baseline?
4. Explain the importance of the baseline.
5. What is the IEEE 828-1990 standard?
6. What are the main contents of a configuration management plan?
7. What type of tool support is available for SCM?
8. What is TQM?
9. What is Software Quality Assurance?
10. What are the important tasks in the software configuration management process?
5.4 SOFTWARE QUALITY ASSURANCE
Definition: Software Quality Assurance (SQA) is defined as a planned and systematic
approach to the evaluation of the quality of, and adherence to, software product standards,
processes and procedures. SQA includes the processes of assuring that standards and
procedures are established and are followed throughout the software life cycle. A major
activity is auditing. SQA involves monitoring and improving the entire software development
process, making sure that any agreed upon standards and procedures are followed, and
ensuring that problems are found and dealt with.
To be unbiased, quality assurance needs to have organizational freedom and authority
from persons directly responsible for developing the software product or executing the
process within the project. Quality assurance may be internal or external depending upon
whether evidence of product or process quality is demonstrated to the management of the
supplier or the acquirer. Quality assurance may make use of the results of other supporting
processes such as verification, validation, joint reviews, audits and problem resolution.
For SQA to be effective, top level management support is very much needed so that
the suggestions made by the SQA division can be enforced. The SQA division should be
independent and staffed with technically competent and judicious people. They need to
co-operate with the development team; if the SQA team conflicts with the development
team, SQA won't be effective.
The review and audit activities and the standards and procedures that must be followed
are described in the software quality assurance plan.
IEEE Standard 730 offers a framework for the contents of a quality assurance plan for
software development. Table 5.2 gives the contents of the software quality assurance
document.
Table 5.2 Main Contents of IEEE std 730
The software quality assurance plan describes how the quality of the software is to be
assessed, and it provides the necessary framework to plan the systematic actions necessary
to provide adequate confidence that the item or product conforms to established technical
requirements. In general, we call this software auditing. The software audit process
improves the availability and reliability of the software and the product supported by the
software. The concept of quality auditing should be based on the use of the same standards.
The minimal set of standards for the software development and maintenance process should
consist of the following:
- Planning and procedures
- Analysis of productivity and quality data
- Reviews, audits and inspections
- Configuration management
- Software testing
- Specifications and documentation.
No software quality assurance plan is complete without these issues being properly
addressed. Further, an SQA plan should have an appropriate metrics programme associated
with it. Life cycle metrics are really needed, and they provide the practitioner with a clear
path towards the kind of information to be gathered and stored in a quality/productivity
database.
1. Purpose
2. Reference Documents
3. Management
4. Documentation
5. Standards, Practices and Metrics
6. Reviews and Audits
7. Test
8. Problem Reporting and Corrective Action
9. Tools, Techniques and Methodologies
10. Code Control
11. Media Control
12. Supplier Control
13. Records Collection and Maintenance
14. Training
15. Risk Management
Quality Goals
The quality goals are set by the organization/top level management. The set of quality
goals serves as a basis for the complete activity of achieving quality. Practically, it is
difficult to set meaningful quality goals which can be agreed upon and established. The
quality goals, once established, serve as a baseline for the subsequent activities given
below:
- Guidance and control of system development
- Delivery
- Conversion of the system
- Assessment of the system for its compliance to the quality requirements
- Control of long-term maintenance
If it is not possible to establish well defined quality goals, we can have some
intermediate quality goals for management control to achieve these targets. In general,
the quality goals depend on several factors such as changes in laws, regulations,
organizational structures and objective system characteristics. Some quality goals are
mutually exclusive while others are mutually supportive. A judicious mix has to be taken
to ensure a reasonable set of alternative goals.
It is to be noted that the establishment of software quality goals is not a one-time
activity during the system development life cycle but an on-going activity performed
throughout the life cycle of the product. That is the reason why the quality goals set in the
initial software quality assurance plan need to be updated periodically as the system matures.
The quality goals established during planning will become the quality attributes
afterwards. First of all, we have to assess whether the quality goals established are suitable
for the product and, if they are suitable, whether we can retain these quality attributes
during the maintenance phase. In other words, suitability and maintainability are the two
important criteria to analyze these quality attributes. Sometimes there could be conflict
among quality attributes. Some analysis needs to be done to remove this inconsistency in
quality goals.
Simple steps to set up goals
- Compile the quality goals from all the stakeholders and come to some consensus
on the quality goals selected, with mutual consent.
- A quality/reliability engineering analysis must be done to ensure that the quality
attributes are essential for the product and to remove conflicting quality goals.
- Create a prioritized list of the desired and agreed quality goals.
- When the quality goals for the software have been agreed upon by all interested
parties, one can proceed to the identification of the software measures which
relate to these goals. Appropriate metrics/measures can be identified for the quality
attributes.
Some of the quality attributes and their components are given in table 5.3.
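The first three steps above can be sketched as a small function: pool the stakeholders' wish-lists, rank goals by how many stakeholders asked for them, and drop any goal that conflicts with a higher-ranked one. The conflict pairs and goal names below are invented for illustration.

```python
# Sketch: compile stakeholder goals, rank by votes, drop conflicting goals
# (keeping the higher-voted member of each conflicting pair).
from collections import Counter

def prioritize(wishlists: list, conflicts: set) -> list:
    votes = Counter(goal for wl in wishlists for goal in wl)
    ranked = [goal for goal, _count in votes.most_common()]
    kept = []
    for goal in ranked:
        if all(frozenset({goal, k}) not in conflicts for k in kept):
            kept.append(goal)
    return kept

wishlists = [["reliability", "efficiency"], ["portability"], ["reliability"]]
conflicts = {frozenset({"efficiency", "portability"})}  # assumed conflict
assert prioritize(wishlists, conflicts) == ["reliability", "efficiency"]
```

Here portability is dropped because it conflicts with the higher-voted efficiency goal, which mirrors the "judicious mix" the text asks for.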
Table 5.3 List of quality attributes and their components
1. Usability: operability, training, conciseness
2. Efficiency: execution, size
3. Reliability: completeness, accuracy, consistency, error tolerance, size
4. Integrity: fail-soft behaviour, auditability, security, size
5. Appropriateness: penetrability, requirements auditability, understandability, readability
6. Correctness: design auditability, product auditability, design understandability
7. Portability: design auditability, generality, modularity, hardware and software
independence, self-documentation, fault rate, fault density
8. Testability: size, auditability, complexity, high-level design, computational complexity,
control, interface, coupling, fault rate, self-documentation, modularity
While establishing quality goals, the quality planning must weigh several considerations.
The quality goals for a system depend on system characteristics. Some of the system
characteristics are given below:
- Functionality
- Performance
- Constraints
- Technological innovations
- Technological and managerial risks
While establishing quality goals, some trade-offs need to be made. The relation
between any two quality attributes may be different. For example, efficiency is generally
supportive of reliability but conflicts with the ability to port the software to different platforms.
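Such pairwise relations can be recorded in a small lookup table; the two entries below echo the example in the text (efficiency supports reliability but conflicts with portability), and anything not listed is treated as neutral.

```python
# Sketch: pairwise relations between quality attributes, order-independent
# thanks to frozenset keys. Unlisted pairs default to "neutral".
RELATION = {
    frozenset({"efficiency", "reliability"}): "supportive",
    frozenset({"efficiency", "portability"}): "conflicting",
}

def relation(a: str, b: str) -> str:
    return RELATION.get(frozenset({a, b}), "neutral")

assert relation("portability", "efficiency") == "conflicting"
assert relation("reliability", "usability") == "neutral"
```

A quality planner can consult such a table when weighing trade-offs, rather than rediscovering each conflict during the project.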
The software quality assurance plan (SQAP) needs to be a formal document which
describes the activities to be carried out by the SQA team. The team writing the plan
begins in chapter 1 with the details of the SQAP and provides a look at the process (or
processes) used by quality assurance to guarantee that the desired level of quality of the
software product, or a subset of the product, will be achieved. The SQAP must also
specify the purpose and scope of what is planned. That is, the names of all the software
products covered by the SQAP must be listed. The second chapter deals with all the
documents that are referred to by the plan.
Chapter 3 of the standard is concerned with the structure of the organization with
reference to quality; in other words, the hierarchical structure of the organization and the
division responsible for achieving and maintaining quality. A senior level manager should
be identified in the SQAP who has direct responsibility for quality. This is absolutely
necessary for ISO 9000 compatibility.
An organizational chart depicting the hierarchical structure, who is performing software
quality activities and their position in the project management system must be clearly
specified. A sample structure is given in figure 5.2.
Figure 5.2 A Sample Organizational Structure
(Project Management at the top, with the Software Configuration Management, Design
and Analysis, System, Application Support and Testing teams under it.)
Further, the roles of each individual and the different quality tasks to be performed by
each individual should also be specified. Once the resources needed to implement a software
quality programme have been ascertained, a schedule for implementation needs to be
established. The quality schedule must be established in association with the development
schedule.
For each activity and task, the activity initiation, ending time and dependence on other
activities must be clearly specified. Besides these, development milestones such as formal
reviews and audits need to be specified. In a similar manner, a schedule should be
established for the acquisition of QA support tools.
All tasks associated with the life cycle and the sequence in which these tasks are to be
completed should also be clearly documented.
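A schedule with initiation times, ending times and dependencies, as described above, can be checked mechanically: every activity must start after all of its dependencies have ended. The activity names and dates below are invented for illustration.

```python
# Sketch: each activity maps to (start, end, dependencies); schedule_ok
# verifies that no activity starts before its dependencies finish.
from datetime import date

activities = {
    "requirements review": (date(2024, 1, 10), date(2024, 1, 12), []),
    "design review":       (date(2024, 2, 1),  date(2024, 2, 3),  ["requirements review"]),
    "code audit":          (date(2024, 3, 1),  date(2024, 3, 5),  ["design review"]),
}

def schedule_ok(acts: dict) -> bool:
    for _name, (start, _end, deps) in acts.items():
        for dep in deps:
            if acts[dep][1] > start:  # dependency must end before we start
                return False
    return True

assert schedule_ok(activities)
```

Running such a check whenever the development schedule shifts keeps the quality schedule consistent with it.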
In spite of the meticulous manner in which the SQAP may be drawn up, there are
some factors that affect the amount of quality assurance needed. These are:
- Size of the system, especially in terms of effort
- Criticality of the system in terms of tasks
- Cost of correcting errors
- Type of release (old or new)
- Relationship with the user
Chapter 4 of the SQAP describes the documentation to be created by the project,
which is minimal and at the same time provides what the SQA practitioner requires.
According to IEEE Std 730, the minimum set of documents is:
- Software Requirements Specification (SRS)
- Software Design Description (SDD)
- Software Verification and Validation Plan (SVVP)
- Software Verification and Validation report (SVVR)
- User Manual
- Software Configuration Management Plan
- Software Test Plan
- Software Test Report
Other recommended documents are
- Software Development Plan (SDP)
- Standards and Procedures manual
- Software Project Management Plan
- Software Maintenance Manual
Chapter 5 of the standard is essential to the control of the development and maintenance
processes. The concepts and tools are to be clearly defined and explained.
This chapter identifies the standards and procedures to be followed for:
- Documentation
- Naming convention of the modules
- Logic structure standards
- Coding standards
- Software inspection procedures
- Software Quality metrics
The purpose of reviews and audits is to ensure that all the client needs and requirements
are met.
The main objective is threefold:
- To give guidelines for reviews for the project, taking into consideration the size,
criticality and complexity of the project
- What deliverables each review should provide
- What the results of the review should be
In general, any review should have some template to be followed, and these reviews
should be compatible with the Capability Maturity Model as far as possible.
The review process covers the following aspects:
- Management Review Process
- Technical Review Process
- Software Requirements Review
- Software Design Review
- Software Test Review
- Software Inspection Process
- Walkthrough Process
Another aspect that has been covered in the standard is the test plans. Testing is
essential for discovering errors and, to some extent, measuring performance.
Testing can only indicate the presence of bugs, not their absence, as pointed out by
Dijkstra.
The prerequisites for establishing an effective software testing program are a clear
requirements definition and a well documented SRS.
Test plans and test cases have already been discussed in detail in Unit III.
If the tests are not done properly, the product's level of risk increases. Testing is
always a critical development activity.
Problem reporting and corrective action is dealt with in chapter 8 of the SQAP standard.
Perhaps this is the most critical one for the maintenance of future software products, as
well as for the current product. We made it clear in Unit I that 40% of the software
development effort goes towards maintenance as per the 40-20-40 rule. Large amounts
of money are also spent during maintenance. Experience has shown that only about 20%
of the time spent on maintenance is to correct coding errors. Maintenance further consists
of the change management process. Since changes are inevitable, and some changes are
required for usability, scalability, etc., change control procedures must not be delayed to a
later time. It is strongly recommended that this procedure be activated as early as possible.
Software configuration control is a mandatory task in the management of any project.
The degree of control is of extreme importance. Further, the SQAP deals with problem
reporting and corrective actions. Software problems are frequently discovered by the end
user or by others. Generally, any valid user is allowed to report software problems; that is
why there is a need for user validation. All software problems should be reported via a
software problem report (SPR). Once the SPR is submitted, it should be verified by
ascertaining that the problem which has been reported is reproducible. Sometimes the
problem may be irreproducible and at the same time legitimate. After ascertaining that the
SPR is a genuine one, it becomes a valid change request; implementation of an SPR is
always done via the procedures for the implementation of a change request. The problem
report processing and the data flows in problem report processing are given in figure 5.3
and table 5.4.
Figure 5.3 Software problem report processing
Table 5.4 Data flows in Problem Report Processing
A. Problem documentation
B. Invalid SPR
C. SPR
E. Nonimplementable change
F. Change authorization
G. Change / test results
H. Problems
I. Change standards
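The SPR handling described above (validate the reporter, verify the problem, then raise a change request) can be sketched as a single decision function; the outcome strings are illustrative.

```python
# Sketch of SPR processing: only a valid reporter's SPR is considered, and a
# verified SPR (reproducible, or irreproducible but legitimate) becomes a
# change request.
def process_spr(reporter_is_valid: bool, reproducible: bool, legitimate: bool) -> str:
    if not reporter_is_valid:
        return "rejected: unknown reporter"
    if reproducible or legitimate:  # irreproducible-but-legitimate is kept
        return "change request raised"
    return "rejected: invalid SPR"

assert process_spr(True, False, True) == "change request raised"
assert process_spr(False, True, True).startswith("rejected")
```

Once an SPR becomes a change request, it follows the normal change-request procedures described in section 5.2.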
Figure 5.4 A state diagram of corrective actions
Every problem requires corrective action. There are two dimensions to corrective
action throughout the life cycle of the software product. Figure 5.4 depicts a state diagram
for corrective action. A corrective action may result from the need to correct a problem
or the desire to implement a new feature. Every corrective action starts in the open state
and passes through several states, as given in figure 5.4.
Tools, techniques and methodologies are very important for the successful
implementation of the SQAP. This is chapter 9 of the SQAP as per the standard. Generally,
tools aid in the evaluation and improvement of system quality. They may enhance the
productivity levels of team members. All software and hardware tools required for the
implementation of SQA are to be identified and listed. SQA techniques may cover both
technical as well as managerial procedures that aid in improving the quality of the product.
Methodologies are nothing but integrated sets of tools and techniques.
Chapter 10 of the standard deals with code control. Actually this is a special case of
configuration management. The tools and techniques required to ensure the validity of the
complete code are discussed. To what extent this validity is protected is also vital and is
discussed in this chapter.
Practical data indicates that the time spent on coding is less than the time spent on
code analysis.
Chapter 11 of the SQAP discusses mundane activities such as backup
frequencies and procedures and special facilities (such as fireproof or off-site storage and
backup sites). It further discusses delivering virus-free media containing the software to
the user. It is very important that the interfaces between the corporate configuration
management function and the media distribution function be defined for the project. Backup
is generally project-critical and needs to be taken care of properly: the project master library
should be backed up periodically. There are security limitations on the media, so appropriate
mechanisms should be introduced to prevent unauthorized access to it.
As per IEEE Std 730, supplier control is also important from the SQAP perspective. For any
software development project we depend on several suppliers who supply off-the-shelf packages.
Whenever we procure off-the-shelf products and vendor-supplied software, some
methods, techniques, and tools are required to support and control these products, which
become parts of the product developed by the project. If the software is to be made to
order, the absolute minimum demand on the supplier should be compliance with the IEEE 730 standard.
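As an illustration of the media-control activities above, a periodic master-library backup could be sketched as follows; the directory layout and function name are assumptions for the example, not part of the standard.

```python
import shutil
import time
from pathlib import Path

def backup_master_library(library: Path, backup_root: Path) -> Path:
    """Copy the project master library into a timestamped backup directory."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = backup_root / f"{library.name}-{stamp}"
    # copytree refuses to overwrite, so every run produces a fresh snapshot.
    shutil.copytree(library, target)
    return target
```

In practice such a script would be driven by a scheduler at the backup frequency chosen in the SQAP, and the `backup_root` would point at the off-site or fireproof facility.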
One of the major functions of quality assurance is the collection and accurate reporting
of results from the several activities carried out during the life cycle of the product. Chapter
12 of the SQAP basically deals with collecting data and drawing valid conclusions
from it. If the inferences are not as expected, there must be a feedback loop.
There are no shortcuts to achieving quality: you have to genuinely assess whether the
required quality has been achieved or not. Studies indicate that 50% of software costs are
directly attributable to error correction. Quality always begins with management; unless
management shows real commitment, the quality program may fail.
Organizational responsibility for retention and storage of the database, and for data
recording, analysis, and reporting, must rest with the quality assurance department.
What type of data to collect is most important from the point of view of analysis and of
drawing valid conclusions. Among other reports, the following must be included:
- Change requests
- Deviations from standards
- Document errors and updates
- Test reports
- Software problem reports
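Once records such as these are collected, drawing conclusions starts with simple tallies per report category. A minimal sketch (the record fields are hypothetical):

```python
from collections import Counter

# Hypothetical stream of collected QA records, each tagged with a category.
records = [
    {"id": 1, "category": "change request"},
    {"id": 2, "category": "software problem report"},
    {"id": 3, "category": "change request"},
    {"id": 4, "category": "test report"},
]

def summarize(records):
    """Count records per report category so trends can be reviewed."""
    return Counter(r["category"] for r in records)
```

A rising count of problem reports relative to change requests, for example, is exactly the kind of inference that should feed back into the quality program.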
Training is a must for any corporate sector. The SQAP standard mandates adequate
training in order to accomplish QA tasks. If you want SQA to be successful, the
training must be good; hence there is a need for a meticulous training plan and clear
training goals. The training requirements for accomplishing SQA
tasks must be incorporated in the training plan.
The last important aspect of the standard is risk management, which was discussed in
Unit I. Each type of risk (technical, business, project management) should be identified
with adequate participation, and risk assessment should strive to quantify the magnitude of
every identified risk.
5.5 SOFTWARE QUALITY ASSURANCE ACTIVITIES
- Product Evaluation and Process Monitoring
Product evaluation and process monitoring are the SQA activities that assure that the
software development and control processes described in the project's management plan
are correctly carried out and that the project's procedures and standards are followed.
Products are monitored for conformance to standards, and processes are monitored for
conformance to procedures. Audits are a key technique used to perform product evaluation
and process monitoring. Review of the Management Plan should ensure that appropriate
SQA approval points are built into these processes.
Product evaluation is an SQA activity that assures that standards are being followed.
Ideally, the first products monitored by SQA should be the project's standards and
procedures; SQA assures that clear and achievable standards exist. Product evaluation assures
that the software product reflects the requirements of the applicable standard(s) as identified
in the Management Plan. Process monitoring is an SQA activity that ensures that appropriate
steps to carry out the process are being followed. SQA monitors processes by comparing
the actual steps carried out with those in the documented procedures. The Assurance
section of the Management Plan specifies the methods to be used by the SQA process
monitoring activity.
- SQA Audit
A fundamental SQA technique is the audit, which looks at a process and/or a product
in depth, comparing them to established procedures and standards. Audits are used to
review management, technical, and assurance processes to provide an indication of the
quality and status of the software product.
The purpose of an SQA audit is to assure that proper control procedures are being followed,
that required documentation is maintained, and that the developer's status reports accurately
reflect the status of the activity. The SQA product is an audit report to management
consisting of findings and recommendations to bring the development into conformance
with standards and/or procedures.
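The core comparison an audit performs, documented procedure versus actual steps carried out, can be sketched as a pair of set differences (the step names below are hypothetical):

```python
def audit_steps(documented: list[str], actual: list[str]) -> dict:
    """Compare actual process steps against the documented procedure."""
    doc, act = set(documented), set(actual)
    return {
        "missing": sorted(doc - act),       # documented but not performed
        "undocumented": sorted(act - doc),  # performed but not documented
    }
```

Both halves of the result matter to an auditor: missing steps signal nonconformance, while undocumented steps signal that the procedure itself needs updating.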
- Formal Test Monitoring
SQA assures that formal software testing, such as acceptance testing, is done in
accordance with plans and procedures. SQA reviews testing documentation for
completeness and adherence to standards. The documentation review includes test plans,
test specifications, test procedures, and test reports. SQA monitors testing and provides
follow-up on nonconformances. By test monitoring, SQA assures software completeness
and readiness for delivery.
Software testing verifies that the software meets its requirements. The quality of testing is
assured by verifying that project requirements are satisfied and that the testing process is in
accordance with the test plans and procedures.
Software Quality Assurance During the Software Acquisition Life Cycle
In addition to the general activities described above, there are phase-specific SQA
activities that should be conducted during the Software Acquisition Life Cycle. At the
conclusion of each phase, SQA concurrence is a key element in the management decision
to initiate the following life cycle phase.
- Software Concept and Initiation Phase
SQA should be involved in both writing and reviewing the management plan in order
to assure that the processes, procedures, and standards identified in the plan are appropriate,
clear, specific, and suitable. During this phase, SQA also provides the QA section of the
Management Plan.
- Software Requirements Phase
During the software requirements phase, SQA assures that software requirements
are complete, testable, and properly expressed as functional, performance, and interface
requirements.
- Software Architectural (Preliminary) Design Phase
SQA activities during the architectural (preliminary) design phase include:
- Assuring adherence to approved design standards as designated in the Management Plan.
- Assuring all software requirements are allocated to software components.
- Assuring that a testing verification matrix exists and is kept up to date.
- Assuring the Interface Control Documents are in agreement with the standard in form and content.
- Reviewing PDR documentation and assuring that all action items are resolved.
- Assuring the approved design is placed under configuration management.
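The allocation and verification-matrix checks above amount to simple set comparisons. A minimal sketch, with hypothetical requirement, component, and test-case names:

```python
def unallocated_requirements(requirements, allocation):
    """Requirements with no software component allocated to them."""
    return sorted(set(requirements) - set(allocation))

def untested_requirements(requirements, test_matrix):
    """Requirements with no entry (or an empty entry) in the testing
    verification matrix."""
    return sorted(r for r in requirements if not test_matrix.get(r))
```

Keeping the matrix machine-checkable like this is what makes "kept up to date" verifiable rather than a matter of opinion.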
- Software Detailed Design Phase
SQA activities during the detailed design phase include:
- Assuring that approved design standards are followed.
- Assuring that allocated modules are included in the detailed design.
- Assuring that results of design inspections are included in the design.
- Reviewing CDR documentation and assuring that all action items are resolved.
- Software Implementation Phase
SQA activities during the implementation phase include the audit of:
- Results of coding and design activities, including the schedule contained in the software development plan.
- Status of all deliverable items.
- Configuration management activities and the software development library.
- The nonconformance reporting and corrective action system.
- Software Integration and Test Phase
SQA activities during the integration and test phase include:
- Assuring readiness for testing of all deliverable items.
- Assuring that all tests are run according to test plans and procedures and that any nonconformances are reported and resolved.
- Assuring that test reports are complete and correct.
- Certifying that testing is complete and the software and documentation are ready for delivery.
- Participating in the Test Readiness Review and assuring all action items are completed.
- Software Acceptance and Delivery Phase
As a minimum, SQA activities during the software acceptance and delivery phase
include assuring the performance of a final configuration audit to demonstrate that all
deliverable items are ready for delivery.
- Software Sustaining Engineering and Operations Phase
During this phase, there will be mini-development cycles to enhance or correct the
software. During these development cycles, SQA conducts the appropriate phase-specific
activities described above.
- Techniques and Tools
SQA should evaluate its needs for assurance tools versus those available off-the-
shelf for applicability to the specific project, and must develop the others it requires. Useful
tools might include audit and inspection checklists and automatic code standards analyzers.
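An automatic code standards analyzer of the kind mentioned above can start out very simple. A sketch with a few illustrative rules (real rules would come from the project's documented standards):

```python
# A few illustrative coding-standard rules; each pairs a message with a
# predicate applied to a single source line.
RULES = [
    ("line exceeds 80 characters", lambda line: len(line) > 80),
    ("trailing whitespace", lambda line: line != line.rstrip()),
    ("TODO left in code", lambda line: "TODO" in line),
]

def analyze(source: str):
    """Return (line number, rule message) pairs for every violation found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for message, check in RULES:
            if check(line):
                findings.append((lineno, message))
    return findings
```

The audit and inspection checklists mentioned in the text can be encoded the same way, so that running them becomes repeatable rather than dependent on an individual reviewer.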
The Software V&V Plan (SVVP)
The purpose of the SVVP is to provide the highest-level description of verification and
validation efforts. The following topics must be addressed:
- Project Identification
- Plan Goals
- Summary of Verification and Validation efforts
- Responsibilities conveyed with the plan
- Software to be verified and validated
- Identification of waivers and changes to organization standards
- SVVP assumptions
Software test, or verification and validation, processes are used to determine whether developed
software products conform to their requirements and whether the software products fulfil
the intended use and user expectations. This includes analysis, evaluation, review, inspection,
assessment, and testing of the software products and of the processes that produced
them. The software testing, verification, and validation processes also apply when
integrating purchased or customer-supplied software products into the developed
product.
The V&V plan is the instrument for explaining the requirements and management of
V&V and the role of each technique in satisfying the objectives of V&V. An understanding
of the different purposes of each verification and validation activity will help in carefully
planning the techniques and resources needed to achieve those purposes. IEEE Standard
1012, Section 7, specifies what ordinarily goes into a V&V plan.
Verification & Validation Monitoring
SQA assures verification and validation (V&V) activities by monitoring technical
reviews, inspections, and walkthroughs. The SQA role in reviews, inspections, and
walkthroughs is to observe, participate as needed, and verify that they were properly
conducted and documented. SQA also ensures that any actions required are assigned,
documented, scheduled, and updated.
Formal software reviews should be conducted at the end of each phase of the life
cycle to identify problems and determine whether the interim product meets all applicable
requirements. Examples of formal reviews are the Preliminary Design Review (PDR),
Critical Design Review (CDR), and Test Readiness Review (TRR). A review looks at the
overall picture of the product being developed to see if it satisfies its requirements. Reviews
are part of the development process, designed to provide a ready/not-ready decision to
begin the next phase. In formal reviews, actual work done is compared with established
standards. SQA's main objective in reviews is to assure that the Management and
Development Plans have been followed and that the product is ready to proceed with the
next phase of development. Although the decision to proceed is a management decision,
SQA is responsible for advising management and participating in the decision.
An inspection or walkthrough is a detailed examination of a product on a step-by-
step or line-of-code by line-of-code basis to find errors.
Sample Questions Unit V
1. What is software configuration management?
2. What are the steps in SCM process?
3. What are the important quality characteristics?
4. What is quality assurance?
5. What is the role played by reviews in quality assessment?
6. What are the important SQA activities?
7. What is meant by version control?
8. Define software reliability.
9. What is meant by configuration audit?
10. How does the management monitor the changes in the existing version of the
programs?
11. Who has the responsibility for approving and ranking changes?
12. What are the contents of a Quality Assurance Plan?
13. What is the role of the SCM repository?
14. Briefly explain the features of SCM.
15. What is the importance of a formal technical review?
16. What are the procedures for noting, recording, and reporting a change?
17. What are the steps in change management?
18. What are the quality standards? Briefly explain.
19. Briefly describe the steps in the change control process.
20. Briefly explain the different IEEE standards for quality assurance.