SYSTEM DEVELOPMENT LIFE CYCLE
========================================================
1. System Development Life Cycle
The basic idea of the software development life cycle (SDLC) is that there is a
well-defined process by which an application is conceived, developed and
implemented. The phases in the SDLC provide a basis for management and
control because they define segments of the workflow that can be identified
for managerial purposes and specify the documents or other deliverables to
be produced in each phase.
System development revolves around a life cycle that begins with the
recognition of user needs. In order to develop good software, it has to go
through different phases. There are various phases of the system development
life cycle for the project, and different models for software development
depict these phases. We decided to use the waterfall model, the oldest and
most widely used paradigm for software engineering. The relevant stages of
the system life cycle of this application tool are depicted in the following
flow diagram.
SYSTEM ANALYSIS
SYSTEM DESIGN
CODING
SYSTEM TESTING
SYSTEM IMPLEMENTATION
SYSTEM MAINTENANCE
1. System Analysis
2. System Design
System Design is actually a multistep process that focuses on four distinct
attributes of a program: data structures, software architecture, interface
representations, and procedural (algorithmic) detail. System design is
concerned with identifying the software components (Functions, data streams,
and data stores), specifying relationships among components, specifying
software structure, maintaining a record of design decisions and providing a
blueprint for the implementation phase.
3. Coding
The coding step performs the translation of the design representations into an
artificial language, resulting in instructions that can be executed by the
computer. It thus involves developing computer programs that meet the
system specifications of the design stage.
4. System Testing
The system testing process focuses on the logical internals of the software,
ensuring that all statements have been tested, and on the functional externals,
that is, conducting tests with various test data to uncover errors and to verify
that defined input produces actual results that agree with the required results.
5. System Implementation
System implementation is a process that includes all the activities that take
place to convert an old system to a new one. The new system may be a totally
new system replacing the existing system, or it may be a major modification
to the existing system. System implementation involves the translation of the
design specifications into source code, followed by debugging, documentation
and unit testing of the source code.
6. System Maintenance
Maintenance is the modification of a software product after delivery to correct
faults, to improve performance or to adapt the product to a new operating
environment. Software maintenance cannot be avoided due to the wear and
tear caused by use. Some of the reasons for maintaining the software are:
1. Over a period of time, the software's original requirements may change.
2. Errors undetected during software development may be found during
use and require correction.
3. With time, new technologies such as hardware and operating systems are
introduced. The software must therefore be modified to adapt to the new
operating environment.
========================================================
Corrective Maintenance: This type of maintenance is also called bug
fixing; it corrects errors reported while the system is in use.
========================================================
SYSTEM ANALYSIS
1.1.1 Problem Definition
========================================================
from his account as per usage, similar to what we see in the case of a prepaid
mobile system.
File Delete Notification System: The system will write notification
events in case a file is deleted from the system, along with all attributes
like time, date etc. This module is very useful in the case of a college lab, as
we can track when a file was deleted from a system.
User Activity Monitoring Module: This module will enable the
administrator to keep track of the activities occurring on a user system
on a real-time basis. This system will work in the manner that after every
few minutes the screen of the user desktop will be captured as an image
and transferred to the administrator, so that the administrator can keep
track of what is happening on the user system.
Remote Desktop Login: This module enables the administrator to
remotely log on to a particular system on the network to view what is
happening on that remote computer.
========================================================
The cyber café is automated, so there is no need to keep watching the clock or
to maintain the records of the cyber café manually.
The system prevents and notifies about the deletion of any important files, so
the culprit who deleted the files can be tracked.
The administrator can track users' work by keeping watch on their activities,
thus preventing further misuse.
Problem Recognition
The aim of the project was understood and thorough research was done on the
internet to get a deep insight into how the proposed system would work. We
went to different travel related sites and understood their working. We recorded
what features would be required when we build our website, for example the
need to keep a database of destinations and to let travel agents and hotels
register and post their data online. All these features were noted down so that
they could be incorporated in our application.
The feasibility study is carried out to test whether the proposed system is worth
being implemented. Given unlimited resources and infinite time, all projects are
feasible. Unfortunately, such resources and time are not available in real-life
situations. Hence it becomes both necessary and prudent to evaluate the
feasibility of the project at the earliest possible time in order to avoid
unnecessary wastage of time, effort and professional embarrassment over an
ill-conceived system. A feasibility study is a test of the proposed system
regarding its workability, its impact on the organization, its ability to meet user
needs and its effective use of resources.
The main objective of the feasibility study is to test the technical, operational
and economic feasibility of developing the computer application.
The following feasibility studies were carried out for the proposed system:
========================================================
Schedule Feasibility: “Evaluates the time taken in the development of
the project”. The proposed system was found to be schedule feasible.
========================================================
1.2.1 DESIGN CONCEPTS
The design of an information system produces the details that state how a
system will meet the requirements identified during system analysis. System
specialists often refer to this stage as logical design, in contrast to the process
of developing the program software, which is referred to as physical design.
Designers begin the process by identifying the reports and the other outputs
the system will produce; then the specifics of each are pinpointed. Usually,
designers sketch the form or display as they expect it to appear when the
system is complete. This may be done on paper or on a computer display,
using one of the automated system tools available. The system design also
describes the data to be input, calculated or stored. Individual data items and
calculation procedures are written in detail. The procedures tell how to
process the data and produce the output.
The following goals were kept in mind while designing the system:
• To reduce the manual work required in the existing system.
• To avoid errors inherent in the manual working and hence make the
outputs consistent and correct.
• To improve the management of the permanent information of the
computer centre by keeping it in properly structured tables and to
provide facilities to update this information as efficiently as possible.
• To make the system completely menu-driven and hence user friendly;
this was necessary so that even non-programmers could use the
system efficiently.
• To make the system completely compatible, i.e., it should fit into the
total integrated system.
• To design the system in such a way that it reduces future maintenance
and enhancement time and effort.
• To make the system reliable, understandable and cost effective.
1.2.3 DESIGN MODULES
Cyber Cafe Management: This feature allows the cyber café
administrator to maintain the system chart, view which user is sitting
at which computer and maintain the accounting for that particular user.
The user account can be topped up by the cyber café administrator with a
particular amount. When the user sits at a particular system he is not able
to log on until he provides his username and password; as soon as he logs
in, the system deducts money from his account as per usage, similar to
what we see in the case of a prepaid mobile system.
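As a rough illustration of this per-usage deduction, the sketch below shows the idea in VB.NET; the rate, names and rounding rules are assumptions for illustration, not the project's actual code (in the real module the balance would live in the database rather than being passed in).

Module BillingSketch
    ' Assumed tariff: one currency unit per minute of usage.
    Private Const RatePerMinute As Decimal = 1D

    ' Returns the prepaid balance that remains after a session.
    Public Function DeductUsage(balance As Decimal, sessionStart As Date, sessionEnd As Date) As Decimal
        Dim minutesUsed As Decimal = CDec((sessionEnd - sessionStart).TotalMinutes)
        Dim charge As Decimal = minutesUsed * RatePerMinute
        If charge > balance Then charge = balance   ' the account never goes negative
        Return balance - charge
    End Function
End Module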
File Delete Notification System: The system will write notification
events in case a file is deleted from the system, along with all attributes
like time, date etc. This module is very useful in the case of a college lab,
as we can track when a file was deleted from a system.
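A minimal sketch of how such delete notifications can be raised with the .NET Framework's FileSystemWatcher class is shown below; the watched folder and the logging destination are assumptions for illustration, not the project's actual implementation.

Imports System.IO

Module DeleteWatcherSketch
    Private watcher As FileSystemWatcher

    Public Sub StartWatching(folderToWatch As String)
        watcher = New FileSystemWatcher(folderToWatch)
        watcher.IncludeSubdirectories = True
        ' The Deleted event fires whenever a file is removed from the folder.
        AddHandler watcher.Deleted, AddressOf OnFileDeleted
        watcher.EnableRaisingEvents = True
    End Sub

    Private Sub OnFileDeleted(sender As Object, e As FileSystemEventArgs)
        ' Record the full path along with the date and time of the deletion.
        Console.WriteLine("Deleted: {0} at {1}", e.FullPath, Date.Now)
    End Sub
End Module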
User Activity Monitoring Module: This module will enable the
administrator to keep track of the activities occurring on a user system
on a real-time basis. The system will work in the manner that after every
few minutes the screen of the user desktop will be captured as an image
and transferred to the administrator, so that the administrator can keep
track of what is happening on the user system.
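The periodic capture can be done with the standard System.Drawing classes, roughly as in the sketch below; the output file name and image format are assumptions, and the actual module would additionally transfer the saved image to the administrator's machine.

Imports System.Drawing
Imports System.Windows.Forms

Module ScreenCaptureSketch
    ' Captures the whole primary screen and saves it as a JPEG file.
    Public Sub CaptureScreen(outputFile As String)
        Dim bounds As Rectangle = Screen.PrimaryScreen.Bounds
        Using shot As New Bitmap(bounds.Width, bounds.Height)
            Using g As Graphics = Graphics.FromImage(shot)
                g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size)
            End Using
            shot.Save(outputFile, Imaging.ImageFormat.Jpeg)
        End Using
    End Sub
End Module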
Remote Desktop Login: This module enables the administrator to
remotely log on to a particular system on the network to view what is
happening on that remote computer.
SYSTEM DESIGN
The design stage takes the final specification of the system from the analysis
stage and finds the best way of fulfilling it, given the technical environment
and previous decisions on the required level of automation.
The high-level design maps the given system to a logical data structure.
Architectural design involves identifying the software components, decoupling
and decomposing the system into processing modules and conceptual data
structures, and specifying the interconnections among components. Good
notation can clarify the interrelationships and interactions of interest, while
poor notation can complicate and interfere with good design practice. A data
flow-oriented approach was used to design the project. This includes the
Entity Relationship Diagram (ERD) and Data Flow Diagrams (DFD).
One of the best design approaches is the entity relationship method. This
design approach is widely followed in designing projects and is normally
known as the “Entity Relationship Diagram (ERD)”.
The ERD helps in capturing the business rules governing the data relationships
of the system and is a conventional aid for communicating with the end users
in the conceptual design phase. An ERD consists of:
Entity – the term used to describe any object, place, person, concept or
activity that the enterprise recognizes in the area under investigation and
about which it wishes to collect and store data. It is diagrammatically
represented as a box.
Attribute – the data elements that are used to describe the properties that
distinguish the entities.
Relationship – the association between entities. A relationship may be of
the following types:
One-to-One (1:1)
It is an association between two entities. For example, each student
can have only one Roll No.
One-to-Many (1:M)
It describes entities that may have one or more entities related to it.
For example, a father may have one or many children.
Many-to-Many (M:M)
It describes entities that may have relationships in both directions.
This relationship can be explained by considering items sold by
vendors: a vendor can sell many items, and each item can be sold by
many vendors.
========================================================
Entity Relationship Diagram
========================================================
1.2.4.2 Context Analysis Diagram
The Context Analysis Diagram (CAD) is the top-level data flow diagram, which
depicts an overview of the entire system. The major external entities, a single
process and the output data stores constitute the CAD. Though this diagram
does not depict the system in detail, it presents the overall inputs, processes
and outputs of the entire system at a very high level. The Context Analysis
Diagram of the project is given ahead.
Context Level Data Flow Diagram: Computer Users and the Security
Administrator interacting with the Cyber Security System.
A Data Flow Diagram (DFD) is a graphical tool used to describe and analyze
the movement of data through a system – manual or automated including the
processes, stores of data and delays in the system. They are central tools and
the basis from which other components are developed. It depicts the
========================================================
transformation of data from input to output through processes and the
interaction between processes.
========================================================
4. Data stores are the physical areas in the computer’s hard disk
where a group of related data is stored in the form of files. They
are depicted as an open-ended rectangle. The Data store is used
either for storing data into the files or for reference purpose.
DFD – 1
DFD – 2
========================================================
DFD – 3
DFD - 4
The low-level design maps the logical model of the system to a physical
database design. Tables were created for the system: entities and attributes
were mapped into physical tables. The name of the entity is taken as the table
name.
During the detailed design phase, the database (if any) and the programming
modules are designed, and detailed user procedures are documented. The
interfaces between the system users and the computers are also defined.
========================================================
1.2.5.1 APPLICATION DESIGN
After the detailed problem definition and system analysis of the problem, the
design of the application was taken up. Simplicity is hard to design: it is
difficult to design something that is technically sophisticated but appears
simple to use. Any software product must be efficient, fast and functional, but
more importantly it must be user friendly and easy to learn and use. For
designing a good interface we should use the following principles.
MODULES
========================================================
on a real-time basis. The system will work in the manner that after every
few minutes the screen of the user desktop will be captured as an image
and transferred to the administrator, so that the administrator can keep
track of what is happening on the user system.
Remote Desktop Login: This module enables the administrator to
remotely log on to a particular system on the network to view what is
happening on that remote computer.
========================================================
WORKING ENVIRONMENT
========================================================
2.1 Technical Specifications
HARDWARE ENVIRONMENT
HARD DISK - 80 GB
SOFTWARE ENVIRONMENT
Frontend - VB.NET
========================================================
Technology Used: VB.NET
VB.NET
.NET Architecture
The .NET Framework consists of three parts: the Common Language
Runtime,
the Framework classes, and ASP.NET, which are covered in the
following sections.
The components of .NET tend to cause some confusion.
========================================================
ASP.NET
One major headache that Visual Basic developers have had in the
past is trying to
reconcile the differences between compiled VB applications and
applications built
in the lightweight interpreted subset of VB known as VBScript.
Unfortunately,
when Active Server Pages were introduced, the language supported for
server-side scripting was VBScript, not VB. (Technically, other languages could
be used
for server side scripting, but VBScript has been the most commonly
used.) Now, with ASP.NET, developers have a choice. Files with the
ASP extension
are now supported for backwards compatibility, but ASPX files have
been introduced
as well. ASPX files are compiled when first run, and they use the
same syntax that is used in stand-alone VB.NET applications.
Previously, many developers
have gone through the extra step of writing a simple ASP page that
simply
executed a compiled method, but now it is possible to run compiled
code
directly from an Active Server Page.
Framework Classes
Ironically, one of the reasons that VB.NET is now so much more
powerful is
because it does so much less. Up through VB 6.0, the Visual Basic
compiler had
to do much more work than a comparable compiler for a language
like C++.
This is because much of the functionality that was built into VB was
provided in
C++ through external classes. This made it much easier to update
and add features
to the language and to increase compatibility among applications
that shared
the same libraries.
Now, in VB.NET, the compiler adopts this model. Many features that
were
formerly in Visual Basic directly are now implemented through
Framework
classes. For example, if you want to take a square root, instead of
using the VB
operator, you use a method in the System.Math class. This approach
makes the
language much more lightweight and scalable.
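For instance, where VB 6.0 offered the built-in Sqr function, VB.NET calls the Framework class directly:

Dim root As Double = System.Math.Sqrt(2.0)   ' roughly 1.4142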
.NET Servers
========================================================
We mention this here only to distinguish .NET servers from the .NET
Framework.
These servers support Web communication but are not necessarily
themselves
written in the .NET Framework.
Common Language Runtime
CLR provides the interface between your code and the operating
system, providing
such features as Memory Management, a Common Type System,
and
Garbage Collection. It reflects Microsoft’s efforts to provide a unified
and safe
framework for all Microsoft-generated code, regardless of the
language used to
create it.
========================================================
for this programmer will be dramatically reduced by .NET’s
Multilanguage capabilities.
========================================================
data needed within the application for the CLR to operate. This
ensures that any dependencies your application might have are
always met and never broken.
When you set your compiler to generate the .NET code, it runs
through the CTS and inserts the appropriate data within the
application for the CLR to read. Once the CLR finds the data, it
proceeds to run through it and lay out everything it needs within
memory, declaring any objects when they are called (but not
before). Any application interaction, such as passing values from
classes, is also mapped within the special data and handled by the
CLR.
Using .NET-Compliant
Programming Languages
.NET isn’t just a single, solitary programming language taking
advantage of a multiplatform system. A runtime that allows
portability, but requires you to use a single programming model
would not truly be delivering on its perceived value. If this were the
case, your reliance on that language would become a liability when
the language does not meet the requirements for a particular task.
All of a sudden, portability takes a back seat to necessity—for
something to be truly “portable,” you require not only a portable
runtime but also the ability to code in what you need, when you
need it. .NET solves that problem by allowing any .NET compliant
programming language to run. Can’t get that bug in your class
worked out in VB, but you know that you can work around it in C?
Use C# to create a class that can be easily used with your VB
application. Third-party programming language users don’t need to
fret for long, either; several companies plan to create .NET-
compliant versions of their languages.
Currently, the only .NET-compliant languages are Microsoft's own; for
more information, check these out at http://msdn.microsoft.com/net:
• C#
• VB.NET
• JScript.NET
========================================================
Origin of .Net Technology
1. OLE Technology
Object Linking and Embedding (OLE) technology was developed by
Microsoft in the early 1990s to enable easy interprocess communication
and to embed documents from one application into another. This enabled
users to develop applications which required interoperability between
various products such as MS Word and MS Excel.
2. COM Technology
Microsoft then introduced the component-based model for developing
software programs. In the component-based approach, a program is broken
into a number of independent components, each of which offers a
particular service. This reduces the overall complexity of software, enables
distributed development across multiple organizations or departments, and
enhances software maintainability.
========================================================
environment that you use to execute Visual Basic .NET
applications and the services you can use within those
applications. One of the main goals of this framework is to
make it easier to develop applications that run over the
Internet. However, this framework can also be used to
develop traditional business applications that run on the
Windows desktop. To develop a Visual Basic .NET
application, you use a product called Visual Studio .NET
(pronounced “Visual Studio dot net”). This is actually a
suite of products that includes three programming languages,
described below. One of them is Visual Basic .NET, which is
designed for rapid application development. Visual Studio also
includes several other components that make it an
outstanding development product. One of these is the
Microsoft Development Environment, which you’ll be
introduced to in a moment. Another is the Microsoft SQL
Server 2000 Desktop Engine (or MSDE). MSDE is a
database engine that runs on your own PC so you can use
Visual Studio for developing database applications that are
compatible with Microsoft SQL Server. SQL Server in turn
is a database management system that can be used to
provide the data for large networks of users or for
Internet applications.
The two other languages that come with Visual Studio
.NET are C# and C++. C# .NET (pronounced “C sharp dot
net”) is a new language that has been developed by
Microsoft especially for the .NET Framework. Visual C++ .NET is
Microsoft’s version of the C++ language, which is used on many
platforms besides Windows PCs.
========================================================
Two other components of Visual Studio .NET
Description
• The .NET Framework defines the environment that you use for
executing Visual Basic .NET applications.
• Visual Studio .NET is a suite of products that includes all three of
the programming languages listed above. These languages run
within the .NET Framework.
• You can develop business applications using either Visual Basic
.NET or Visual C# .NET.
Both are integrated with the design environment, so the
development techniques are similar although the language details
vary.
• Besides the programming languages listed above, third-party
vendors can develop languages for the .NET Framework. However,
programs written in these languages can’t be developed from within
Visual Studio .NET.
========================================================
The components of the .NET Framework
========================================================
Description
========================================================
The Common Language Runtime
Visual Basic has always used a runtime, so it may seem strange to
say that the biggest change to VB that comes with .NET is the
change to a Common Language Runtime (CLR) shared by all .NET
languages. The reason is that while on the surface the CLR is a
runtime library just like the C Runtime library, MSVCRTXX.DLL, or
the VB Runtime library, MSVBVMXX.DLL, it is much larger and has
greater functionality. Because of its richness, writing programs that
take full advantage of
the CLR often seems like you are writing for a whole new operating
system API. Since all languages that are .NET-compliant use the
same CLR, there is no need for a language-specific runtime. What is
more, code that is CLR-compliant can be written in any language and still be
used equally well by all .NET CLR-compliant languages.
Your VB code can be used by C# programmers and vice versa with
no extra work. Next, there is a common file format for .NET
executable code, called Microsoft Intermediate Language (MSIL, or
just IL). MSIL is a semi compiled language that gets compiled into
native code by the .NET runtime at execution time. This is a vast
extension of what existed in all versions of VB prior to version 5. VB
apps used to be compiled to p-code (or pseudo code, a machine
language for a hypothetical machine), which was an intermediate
representation of the final executable code.
The various VB runtime engines interpreted the p-code when a
user ran the program. People always complained that VB was too
slow because of this, and therefore, constantly begged Microsoft to
add native compilation to VB. This happened starting in version 5,
when you had a choice of p-code (small) or native code (bigger but
presumably faster). The key point is that .NET languages combine
the best features of a p-code language with the best features of
compiled
languages. By having all languages write to MSIL, a kind of p-code,
and then compile the resulting MSIL to native code, it makes it
relatively easy to have cross-language compatibility. But by
ultimately generating native code you still get good performance.
========================================================
Another problem was the lack of true inheritance. Inheritance is a
form of code reuse where you use certain objects that are really
more specialized versions of existing objects. Inheritance is thus the
perfect tool when building something like a better textbox based on
an existing textbox. In VB5 and 6 you did not have inheritance, so
you had to rely on a fairly cumbersome wizard to help make the
process of building a better textbox tolerable.
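With true inheritance in VB.NET, the "better textbox" can now be written directly; a minimal sketch follows (the added behaviour, a fixed length limit, is only illustrative).

Imports System.Windows.Forms

' A TextBox that simply refuses to accept more than 100 characters.
Public Class BetterTextBox
    Inherits TextBox

    Public Sub New()
        Me.MaxLength = 100
    End Sub
End Class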
True Multithreading
========================================================
contain Visual Basic statements. Most simple projects consist of just
one source file, but more complicated projects can have more than
one source file. A project may also contain other types of files, such
as sound files, image files, or simple text files. As the figure shows,
a solution is a container for projects, which you’ll learn more about
in a moment. You use the Visual Basic compiler, which is built into
Visual Studio, to compile your Visual Basic source code into
Microsoft Intermediate Language (or MSIL). For short, this can be
referred to as Intermediate Language (or IL). At this point, the
Intermediate Language is stored on disk in a file that’s called an
assembly. In addition to the IL, the assembly includes references to
the classes that the application requires. The assembly can then be
run on any PC that has the Common Language Runtime installed on
it. When the assembly is run, the CLR converts the Intermediate
Language to native code that can be run by the Windows operating
system. Although the CLR is only available for Windows systems
right now, it is possible that the CLR will eventually be available for
other operating systems as well. In other words, the Common
Language Runtime makes platform independence possible. If, for
example, a CLR is developed for the Unix and Linux operating
systems, Visual Basic applications will be able to run on those
operating systems as well as Windows operating systems.
========================================================
Description
========================================================
VB .NET is the first fully object-oriented version of
VB
Introduction to OOP
OOP is a vast extension of the event-driven, control-based model of
programming used in early versions of VB. With VB .NET, your
entire program will be made up of self-contained objects that
interact. These objects are stamped out from factories called
classes. These objects will:
• Have certain properties and certain operations they can perform.
• Not interact with each other in ways not provided by your code's
public interface.
• Only change their current state over time, and only in response to
a specific request. (In VB .NET this request is made through a
property change or a method call.)
The point is as long as the objects satisfy their specifications as to
what they can do (their public interface) and thus how they respond
to outside stimuli, the user does not have to be interested in how
that functionality is implemented. In OOP-speak, you only care
about what objects expose.
========================================================
(hidden) Social Security Number As String - instead has functions
that validate and return and change the Social Security number
(hidden) Address as String - instead has functions that validate and
return and change the address and also return it in a useful form
End Employee Info as CLASS
Abstraction
Abstraction is a fancy term for building a model of an object in
code. In other words, it is the process of taking concrete day-to-day
objects and producing a model of the object in code that simulates
how the object interacts in the real world. For example, the first
object-oriented language was called Simula, because it was
invented to make simulations easier. Of course, the more modern
ideas of virtual reality carry abstraction to an extreme.
Abstraction is necessary because:
• You cannot use OOP successfully if you cannot step back and
abstract the key issues from your problem.
Always ask yourself: What properties and methods will I need to
mirror in the object’s code so that my code will model the situation
well enough to solve the problem?
Encapsulation
Encapsulation is the formal term for what we used to call data
hiding. It means hide data, but define properties and methods that
let people access it. Remember that OOP succeeds only if you
manipulate data inside objects, only sending requests to the object.
The data in an object is stored in its instance fields. Other terms
you will see for the variables that store the data are member
variables and instance variables. All three terms are used
interchangeably, and which you choose is a matter of taste; we
usually use instance fields. The current values of these instance
========================================================
fields for a specific object define the object’s current state. Keep in
mind that you should:
• Never ever give anyone direct access to the instance fields.
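In VB.NET terms this usually means a Private instance field exposed only through a Property procedure; a minimal sketch is shown below (the class and validation rule are invented for illustration).

Public Class EmployeeInfo
    ' The instance field itself is hidden from callers.
    Private mName As String

    Public Property Name() As String
        Get
            Return mName
        End Get
        Set(value As String)
            ' Callers can only change the field through this validated setter.
            If value Is Nothing OrElse value.Trim().Length = 0 Then
                Throw New ArgumentException("Name cannot be blank.")
            End If
            mName = value
        End Set
    End Property
End Class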
Inheritance
As an example of inheritance, imagine specializing the Employee
class to get a Programmer class, a Manager class, and so on.
Classes such as Manager would inherit from the Employee class.
The Employee class is called the base (or parent) class, and the
Manager class is called the child class. Child classes:
• Are always more specialized than their base (parent) classes.
• Have at least as many members as their parent classes (although
the behavior of an individual member may be very different).
Polymorphism
Traditionally, polymorphism (from the Greek “many forms”) means
that inherited objects know what methods they should use,
depending on where they are in the inheritance chain. For example,
as we noted before, an Employee parent class and, therefore, the
inherited Manager class both have a method for changing the salary
of their object instances. However, the RaiseSalary method
probably works differently for individual Manager objects than for
plain old Employee objects. The way polymorphism works in the
classic situation where a Manager class inherits from an Employee
class is that an Employee object would know if it were a plain old
employee or really a manager. When it got the word to use the
RaiseSalary method, then:
• If it were a Manager object, it would call the RaiseSalary method
in the Manager class rather than the one in the Employee class.
• Otherwise, it would use the usual RaiseSalary method.
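In VB.NET this behaviour is expressed with the Overridable and Overrides keywords; a small sketch of the Employee/Manager example discussed above follows (the raise percentages are made up).

Public Class Employee
    Public Salary As Decimal

    ' Marked Overridable so that derived classes may replace it.
    Public Overridable Sub RaiseSalary()
        Salary = Salary * 1.05D        ' ordinary employees get 5 percent
    End Sub
End Class

Public Class Manager
    Inherits Employee

    Public Overrides Sub RaiseSalary()
        Salary = Salary * 1.1D         ' managers get 10 percent
    End Sub
End Class

' An Employee variable that actually holds a Manager object calls
' the Manager version of RaiseSalary at run time.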
Advantages to OOP
At first glance, the OOP approach that leads to classes and their
associated methods and properties is much like the structured
approach that leads to modules. But the key difference is that:
• Classes are factories for making objects whose states can diverge
over time.
Sound too abstract? Sound as though it has nothing to do with VB
programming?
Well, this is exactly what the Toolbox is! Each control on the
Toolbox in earlier versions of VB was a little factory for making
objects that are instances of that control’s class. Suppose the
Toolbox was not a bunch of little class factories waiting to churn out
new textboxes and command buttons in response to your requests.
Can you imagine how convoluted your VB code would have been if you
needed a separate code module for each textbox? After all, the same
code module cannot be
linked into your code twice, so you would have to do some fairly
complicated coding to build a form with two identical textboxes
whose states can diverge over time.
========================================================
3. Resize it by dragging one of the small square sizing boxes that
the cursor points to.
MDI Forms
In earlier versions of VB, Multiple Document Interface (MDI)
applications required you to decide which form was the MDI parent
form at design time. In .NET, you need only set the IsMdiContainer
property of the form to True. You create the child forms at design
time or at run time via code, and then set their MdiParent
properties to reference a form whose IsMdiContainer property is
true. This lets you do something that was essentially impossible in
earlier versions of VB: change a MDI parent/child relationship at run
time. It also allows an application to contain multiple MDI parent
forms.
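A minimal sketch of wiring this up at run time is shown below (the form names are assumptions).

' Inside the form acting as the MDI container:
Me.IsMdiContainer = True

Dim child As New ChildForm()     ' ChildForm is any ordinary Form in the project
child.MdiParent = Me             ' attach it to this MDI parent
child.Show()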
========================================================
COM/Interop facilities of .NET with the attendant scalability
problems that classic ADO always had.)
Because data is usually disconnected, a typical .NET database
application has to reconnect to the database for each query it
executes. At first, this seems like a big step backward, but it really
is not. The old way of maintaining a connection is not really
practical for a distributed world: if your application opens a
connection to a database and then leaves it open, the server has to
maintain that connection until the client closes it. With heavily
loaded servers pushing huge volumes of data, maintaining all those
per-client connections is very costly in terms of bandwidth.
System.Data.SqlClient
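A typical disconnected query through the System.Data.SqlClient provider looks roughly like the sketch below; the connection string, table and column names are assumptions for illustration only.

Imports System.Data
Imports System.Data.SqlClient

Module DataAccessSketch
    Public Function LoadUsers() As DataTable
        Dim table As New DataTable()
        ' The adapter opens the connection just long enough to run the query
        ' and then closes it, leaving a disconnected DataTable behind.
        Using cn As New SqlConnection("Data Source=.;Initial Catalog=CyberCafe;Integrated Security=True")
            Dim da As New SqlDataAdapter("SELECT UserName, Balance FROM Users", cn)
            da.Fill(table)
        End Using
        Return table
    End Function
End Module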
========================================================
Software Engineering process
========================================================
engineering activities and tasks to be performed, the
information that is collected and created, and the methods
used to produce a high quality product must all be
adapted to the people doing the work, the project timeline
and constraints, and the problem to be solved. Before
we define a process framework for web engineering, we
must recognize that:
========================================================
Software Model of the
Project
========================================================
is repeated following delivery of each increment,
until the complete product is produced.
As opposed to prototyping, incremental models focus on
the delivery of an operational product after every iteration.
Advantages:
• Particularly useful when staffing is inadequate for a
complete implementation by the business deadline.
• Early increments can be implemented with fewer people.
If the core product is well received, additional staff can
be added to implement the next increment.
• Increments can be planned to manage technical risks.
For example, the system may require the availability of
some hardware that is under development. It may be
possible to plan early increments without the use of this
hardware, thus enabling partial functionality and
avoiding unnecessary delay.
The Incremental Model: each increment passes through
system/information engineering, analysis, design, code and
test, and the 1st, 2nd, 3rd and 4th increments are delivered
over calendar time.
========================================================
Time Scheduling
Scheduling of a software project does not differ
greatly from scheduling of any multitask
development effort. Therefore, generalized project
scheduling tools and techniques can be applied to
software with little modification.
The program evaluation and review technique
(PERT) and the critical path method (CPM) are two
project scheduling methods that can be applied to
software development. Both techniques use a task
network description of a project, that is, a pictorial
or tabular representation of the tasks that must be
accomplished from the beginning to the end of the project. The
network is defined by developing a list of all tasks,
sometimes called the project work breakdown
structure (WBS), associated with a specific project
and list of orderings (sometimes called a restriction
list) that indicates in what order tasks must be
accomplished.
Both PERT and CPM provide quantitative tools that
allow the software planner to:
i) Determine the critical path- the chain of tasks
that determines the duration of the project
ii) Establish most likely time estimates for
individual tasks by applying statistical models
iii) Calculate boundary times that define a time
“window” for a particular task.
Boundary time calculations can be very useful in
software project scheduling. Riggs describes
important boundary times that may be discerned
from a PERT or CPM network:
• Earliest time that a task can begin when all
preceding tasks are completed in the shortest
possible time
• The latest time for task initiation before the
minimum project completion time is delayed
• The earliest finish-the sum of the earliest start-
and the task duration
• The latest finish-the latest start time added to
task duration
========================================================
• The total float - the amount of surplus time or
leeway allowed in scheduling tasks so that the
network critical path is maintained on schedule.
Boundary time calculations lead to a determination of
the critical path and provide the manager with a
quantitative method for evaluating progress as tasks
are completed. The planner must recognize that effort
expended on software does not terminate at the end
of development.
Maintenance effort, although not easy to schedule
at this stage, will ultimately become the largest
cost factor. A primary goal of software
engineering is to help reduce this cost.
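As a small worked illustration of the boundary times listed above (the numbers are invented for this example): suppose a task has an earliest start of day 10, a duration of 4 days and a latest finish of day 18. Its earliest finish is then day 10 + 4 = day 14, its latest start is day 18 - 4 = day 14, and its total float is 18 - 14 = 4 days. A task whose total float is zero lies on the critical path, since any delay in it delays the whole project.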
========================================================
TESTING PROCESSES
========================================================
• Is thoroughly tested. Untested code adds an
unknown element to the product and increases the
risk of product failure.
• Meets product requirements. To meet customer
needs, the product must provide the features and
behavior described in the product specification. For
this reason, product specifications should be clearly
written and well understood.
• Does not contain defects. Features must work within
established quality standards.
Having a test plan helps you avoid ad hoc testing, the
kind of testing that relies on the uncoordinated efforts
of developers or testers to ensure that code works. The
results of ad hoc testing are usually uneven and always
unpredictable. A good test plan answers the following
questions:
The test plan specifies the different types of tests that will
be performed to ensure that the product meets customer
requirements and does not contain defects.
Types of Tests
Test type: Ensures that
Unit test: Each independent piece of code works correctly.
Integration test: All units work together without errors.
Regression test: Newly added features do not introduce errors
to other features that are already working.
Load test (also called stress test): The product continues to
work under extreme usage.
Platform test: The product works on all of the target hardware
and software platforms.
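As a concrete illustration of the smallest of these, a unit test can be as simple as calling one routine with a known input and checking the output; the routine below is a made-up stand-in, not actual project code.

Module UnitTestSketch
    ' Hypothetical routine under test: converts minutes of usage into a charge.
    Public Function ChargeFor(minutes As Integer) As Decimal
        Return minutes * 1D            ' assumed rate: one unit per minute
    End Function

    Public Sub RunTest()
        ' Known input, expected output.
        If ChargeFor(30) = 30D Then
            Console.WriteLine("Unit test passed.")
        Else
            Console.WriteLine("Unit test FAILED.")
        End If
    End Sub
End Module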
========================================================
The testing cycle
========================================================
unit testing, although the focus is now whether the units
work together. At this point, it is possible to wind up with
two components that need to work together through a
third component that has not been written yet. To test
these two components, you create a driver. Drivers are
simply test components that make sure two or more
components work together. Later in the project, testing
performed by the driver can be performed by the actual
component.
========================================================
verifies major control or decision points early in the test
process. In a well-factored program structure, decision-
making occurs at higher levels in the hierarchy and is
thus encountered first.
Bottom-up testing
Bottom-up integration testing, as the name implies,
begins construction and testing with atomic modules (i.e.
components at the lowest levels in the program
structure). Because components are integrated from the
bottom up, processing required for components
subordinate to a given level is always available and the
need for stubs is eliminated.
A bottom-up integration strategy may be implemented
using the following steps:
1. Low-level components are combined into clusters that
perform a specific sub-function.
2. The cluster is tested. Drivers are removed and
clusters are combined moving upward in the program.
An overall plan for integration of the software and a
description of specific tests are documented in a test
specification. This document contains a test plan and a
test procedure. It is a work product of the software
process, and becomes a part of the software
configuration.
========================================================
========================================================
PLATFORM TESTING
========================================================
5.4 Validation testing
Software validation is achieved through a series of black
box tests that demonstrate conformity with
requirements. A test plan outlines the classes of tests to
be conducted and a test procedure defines specific test
cases that will be used to demonstrate conformity with
requirements.
After each validation test case has been conducted, one
of two possible conditions exists:
i) The function or performance
characteristics conform to the specifications and are
accepted.
ii) A deviation from specifications is
discovered and a deficiency list is created.
An important element of validation testing is
configuration review. The intent of the review is to
ensure that all elements of the software configuration
have been properly developed and are well documented.
The configuration review is sometimes called an audit.
If software is developed for the use of many customers,
it is impractical to perform formal acceptance tests with
each one. Many software product builders use a process
called alpha and beta testing to uncover errors that only
the end-user is able to find.
1. The alpha test is conducted at the developer’s site by
the customer. The software is used in a natural setting
with the developer recording errors and usage problems.
Alpha tests are performed in a controlled environment.
========================================================
2. Beta tests are conducted at one or more customer
sites by the end-users of the software. Unlike alpha
testing, the developer is generally not present. The beta
test is thus a live application of the software in an
environment that cannot be controlled by the developer.
The customer records all the errors and reports these to
the developer at regular intervals.
Recovery testing
Many computer-based systems must recover from faults
and resume processing within a pre-specified time. In
some cases, a system must be fault-tolerant, i.e.
processing faults must not cause overall system function
to cease. In other cases, a system failure must be
corrected within a specified period of time or severe
economic damage will occur. Recovery testing is a
system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed. If
========================================================
recovery is automatic, re-initialization, check-pointing
mechanisms, data recovery and restart are evaluated for
correctness. If recovery requires human intervention, the
mean-time-to-repair (MTTR) is evaluated to determine
whether it is within acceptable limits.
Security testing
Any computer-based system that manages sensitive
information or causes actions that can harm individuals
is a target for improper or illegal penetration. Security
testing attempts to verify that protection mechanisms
built into a system will, in fact, protect it from improper
penetration. During security testing, the tester plays the
role of the hacker who desires to penetrate the system.
Given enough time and resources, good security testing
will ultimately penetrate a system. The role of the
system designer is to make penetration cost more than
the value of the information that will be obtained.
Stress testing
During earlier testing steps, white box and black box
techniques result in a thorough evaluation of normal
program functions and performance. Stress tests are
designed to confront programs with abnormal situations.
Stress testing executes a system in a manner that
demands resources in abnormal quantity, frequency or
volume. Essentially, the tester attempts to break the
program.
========================================================
A variation of stress testing is a technique called
sensitivity testing. In some situations, a very small range
of data contained within the bounds of valid data for a
program may cause extreme and even erroneous
processing or performance degradation. Sensitivity
testing attempts to uncover data combinations within
valid input classes that may cause instability or improper
processing.
Performance testing
Software that performs the required functions but does
not conform to performance requirements is
unacceptable. Performance testing is designed to test
run-time performance of software within the context of
an integrated system. Performance testing occurs
through all the steps in the testing process. However, it
is not until all system elements are fully integrated that
the true performance of a system can be ascertained.
6. MAINTENANCE FEATURES
========================================================
Not all jobs run successfully. Sometimes an
unexpected boundary condition or an overload
causes an error. Sometimes the output fails to pass
controls. Sometimes program bugs may appear.
No matter what the problem, a previously working
system that ceases to function, requires
emergency maintenance. Isolating operational
problems is not always an easy task, particularly
when combinations of circumstances are
responsible. The ease with which a problem can
be corrected is directly related to how well a
system has been designed and documented.
Changes in environment may lead to maintenance
requirement. For example, new reports may need
to be generated, competitors may alter market
conditions, a new manager may have a different
style of decision-making, organization policies
may change, etc. The information system should be able to
accommodate changing needs. The design should
be flexible to allow new features to be added with
ease.
Although software does not wear out like
hardware, integrity of the program, test data and
documentation degenerate as a result of
modifications. Hence, the system will need
maintenance.
========================================================
Maintenance covers a wide range of activities
such as correcting code, design errors, updating
documentation and upgrading user support.
Software maintenance can be classified into four
types:
Corrective maintenance
It means repairing processing or performance
failures, or making changes because of previously
uncorrected problems or false assumptions. It
involves changing the software to correct defects.
1. Debugging and correcting errors or failures and
emergency fixes.
2. Fixing errors due to incomplete specifications,
which may result in erroneous assumptions, such
as assuming an employee code is 5 numeric digits
instead of 5 characters.
Adaptive maintenance
Over time the environment for which the software
was developed is likely to change. Adaptive
maintenance results in modifications to the
software to accommodate changes in the external
environment.
For example:
1. Report formats may have been changed.
========================================================
2. New hardware may have been
installed (changing from 16-bit to 32-bit
environment)
========================================================
3. Changing from a single-user to a multi-
user environment.
In general software maintenance can be reduced
by keeping the following points in mind:
• A system should be planned keeping
the future in mind.
• User specs should be accurate.
• The system design should be
modular.
• Documentation should be complete.
• Proper steps must be followed during
the development cycle.
• Testing should be thorough.
========================================================
Gather change requirements
========================================================