
Where is quality assessed within the task set?

Introduction
Quality attributes of software are assessed within a task set once the design has been examined. Computer
systems are used in many critical applications where failure can have serious consequences, so engineers need
systematic, objective ways to judge the quality of a design before it is built. The scope of this assignment is
to evaluate the quality assessment procedure in a given task set.
Computer systems and software are used in many critical jobs, and failure of the software system can cause serious
damage. Critical applications of software systems have the following characteristics:
1. The applications must have a long service life and must receive systematic updates.
2. The applications must work continuously without stopping.
3. There should be proper interaction between the hardware and software systems.
4. The applications must have optimum quality attributes such as timeliness, reliability, safety,
interoperability, etc.

Developing systematic ways to relate the software quality attributes of a system to the system's architecture
provides a sound basis for making objective decisions about design trade-offs and enables engineers to make
reasonably accurate predictions about a system's attributes, free from bias and hidden assumptions. The ultimate
goal is the ability to quantitatively examine and trade off multiple software quality attributes to arrive at a
better overall system.
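
As a concrete illustration of such a quantitative trade-off, consider the minimal Python sketch below, which
combines per-attribute scores into a single weighted figure of merit for two candidate designs. The attribute
names, weights, and scores are hypothetical values chosen only for illustration, not data from this assignment.

    # A minimal sketch of quantitative attribute trade-off, assuming each
    # candidate design has already been scored (0-10) against each attribute.
    def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Combine per-attribute scores into one weighted figure of merit."""
        return sum(scores[attr] * weights[attr] for attr in weights) / sum(weights.values())

    # Hypothetical weighting for an application that stresses security.
    weights = {"security": 0.4, "reliability": 0.3, "performance": 0.2, "usability": 0.1}
    design_a = {"security": 9, "reliability": 7, "performance": 5, "usability": 6}
    design_b = {"security": 6, "reliability": 6, "performance": 9, "usability": 8}

    for name, scores in (("A", design_a), ("B", design_b)):
        print(f"Design {name}: {weighted_score(scores, weights):.2f}")

Under this hypothetical security-heavy weighting, design A scores higher (7.30 versus 6.80) even though design B
is faster, which is exactly the kind of objective, assumption-free comparison described above.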
Software Quality Attributes
Much of a software architect's life is spent designing software systems to meet a set of quality attribute
requirements. General software quality attributes include security, performance, scalability, and reliability;
these are often informally called an application's "-ilities".
Throughout the design process, the quality of the evolving design is assessed through a series of technical
reviews. Three characteristics serve as a guide for the evaluation of a good design:
1. The design must implement all of the explicit requirements contained in the requirements model,
and it must accommodate all of the implicit requirements desired by stakeholders.
2. The design must be a readable, understandable guide for those who generate code and for
those who test and subsequently support the software.
3. The design must provide a complete picture of the software, addressing the data, functional, and
behavioural domains from an implementation perspective.
Each of these characteristics is, in a true sense, a goal of the design process.
Hewlett-Packard developed a set of software quality attributes that has been given the acronym FURPS:
functionality, usability, reliability, performance, and supportability. The FURPS quality attributes represent a
target for all software design:
Functionality is assessed by evaluating the feature set and capabilities of the program, the generality of the
functions that are delivered, and the security of the overall system.
Usability is assessed by considering overall aesthetics, human factors, consistency, and documentation.
Reliability is assessed by considering the accuracy of output results, the frequency and severity of failure, the
mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the program.
Performance is measured by evaluating response time, resource consumption, processing speed, throughput,
and efficiency.
Supportability combines the ability to extend the program (extensibility), adaptability, and serviceability
(these three attributes represent a more common term, maintainability) and, in addition, testability,
compatibility, configurability (the ability to organize and control elements of the software configuration), the
ease and efficiency with which the system can be properly installed, and the ease with which problems can be
localized.
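
Several of these attributes can be quantified directly. As a minimal sketch, reliability's mean-time-to-failure
can be estimated as total failure-free operating time divided by the number of observed failures; the Python
example below uses hypothetical sample data chosen only for illustration.

    # Estimating reliability as mean-time-to-failure (MTTF).
    # The operating intervals below are hypothetical sample data.
    def mean_time_to_failure(uptimes_hours: list[float]) -> float:
        """MTTF = total failure-free operating time / number of observed failures."""
        if not uptimes_hours:
            raise ValueError("no failures observed; cannot estimate MTTF this way")
        return sum(uptimes_hours) / len(uptimes_hours)

    uptimes = [312.0, 198.5, 450.0, 287.25]  # hours of operation before each failure
    print(f"Estimated MTTF: {mean_time_to_failure(uptimes):.1f} hours")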

Software Quality Guidelines
In order to assess the quality of a design representation, the quality assessment leader and the other members of
the software team must establish technical criteria for good design. The design concepts themselves also serve as
software quality criteria. The following guidelines should be considered when assessing the quality of software:
1. A design should exhibit an architecture that (1) has been created using recognizable architectural styles or
patterns, (2) is composed of components that exhibit good design characteristics, and (3) can be implemented in an
evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into elements or subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and that are
drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with the
external environment (a small sketch illustrating this follows the list).
7. A design should be derived using a repeatable method that is driven by information obtained during software
requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.
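
As an illustration of guideline 6, the minimal Python sketch below hides a storage component behind a narrow
interface, so that callers depend on one small contract rather than on the component's internals. The names
(Storage, FileStorage, ReportGenerator) are hypothetical and chosen only for this example.

    # Guideline 6 in miniature: a narrow interface between components.
    from abc import ABC, abstractmethod

    class Storage(ABC):
        """The only contract callers see; implementation details stay hidden."""
        @abstractmethod
        def save(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def load(self, key: str) -> bytes: ...

    class FileStorage(Storage):
        """A concrete component; it can change freely without touching callers."""
        def __init__(self, root: str) -> None:
            self.root = root
        def save(self, key: str, data: bytes) -> None:
            with open(f"{self.root}/{key}", "wb") as f:
                f.write(data)
        def load(self, key: str) -> bytes:
            with open(f"{self.root}/{key}", "rb") as f:
                return f.read()

    class ReportGenerator:
        """Depends only on the Storage interface, never on FileStorage itself."""
        def __init__(self, storage: Storage) -> None:
            self.storage = storage
        def publish(self, name: str, text: str) -> None:
            self.storage.save(name, text.encode("utf-8"))

Because ReportGenerator touches only the two-method Storage contract, the connection between the components stays
simple, which is exactly what the guideline asks for.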
Assessing Design Quality: A General Overview
Design is important because it allows a software team to assess the quality of the software before it is
implemented, at a time when errors, omissions, or inconsistencies are easy and inexpensive to correct. This raises
the question of how quality can be assessed during design: the software cannot be tested, because there is no
executable software to test.
During design, quality is assessed by conducting a series of technical reviews (TRs). A technical review is a
meeting conducted by members of the software team. Usually two, three, or four people participate, depending on
the scope of the design information to be reviewed. Each person plays a role: the review leader plans the meeting,
sets an agenda, and runs the meeting; the recorder takes notes so that nothing is missed; and the producer is the
person whose work product (e.g., the design of a software component) is being reviewed. Prior to the meeting, each
person on the review team is given a copy of the design work product and is asked to read it, looking for errors,
omissions, or ambiguity. When the meeting commences, the intent is to note all problems with the work product so
that they can be corrected before implementation begins. The TR typically lasts between 90 minutes and two hours.
At the conclusion of the TR, the review team determines whether further actions are required on the part of the
producer before the design work product can be approved as part of the final design model.
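
To make these mechanics concrete, the minimal Python sketch below models a review record with the roles and
outcome just described. The field names, the participants, and the simple decision rule are all hypothetical
illustrations, not a standard review schema.

    # A hypothetical model of the technical-review record described above.
    from dataclasses import dataclass, field
    from enum import Enum

    class Outcome(Enum):
        APPROVED = "approved"              # accepted as part of the design model
        APPROVED_WITH_CHANGES = "changes"  # minor corrections, no second review
        NOT_APPROVED = "rework"            # producer revises and re-submits

    @dataclass
    class TechnicalReview:
        work_product: str    # e.g., the design of a software component
        leader: str          # plans the meeting, sets the agenda, runs it
        recorder: str        # notes every problem so nothing is missed
        producer: str        # author of the work product under review
        findings: list[str] = field(default_factory=list)

        def outcome(self) -> Outcome:
            """A hypothetical decision rule based on the recorder's findings."""
            if not self.findings:
                return Outcome.APPROVED
            return Outcome.NOT_APPROVED if len(self.findings) > 5 else Outcome.APPROVED_WITH_CHANGES

    tr = TechnicalReview("component design", "Asha", "Ben", "Chen",
                         findings=["ambiguous error handling", "missing data model"])
    print(tr.outcome())  # Outcome.APPROVED_WITH_CHANGES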
Conclusion
To make a proper quality assessment, an architect must carefully consider every quality attribute of the software.
Not every software quality attribute is weighted equally as the software design is developed: one application may
stress functionality with a special emphasis on security, another may demand performance with particular emphasis
on processing speed, and a third might focus on reliability. Regardless of the weighting, it is important that
these quality attributes be considered as design commences, not after the design is complete and construction has
begun.
References
1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 7th edition, McGraw-Hill.
2. L. Chung, B. Nixon, E. Yu, and J. Mylopoulos (Editors), Non-Functional Requirements in Software
Engineering, The Kluwer International Series in Software Engineering, Vol. 5, Kluwer
Academic Publishers, 1999.
3. I. Gorton and L. Zhu, "Tool Support for Just-in-Time Architecture Reconstruction and
Evaluation: An Experience Report," International Conference on Software Engineering
(ICSE) 2005, St. Louis, USA, ACM Press.
4. J. Ramachandran, Designing Security Architecture Solutions, Wiley & Sons, 2002.
