
EMPLOYEE INFORMATION SYSTEM

1. Abstract
2. Organization profile
3. Project Introduction
3.1 Problem Formulation
3.2 Description of Project
3.3 Background
3.4 Motivation
3.5 Scope
4. Analysis and Design
4.1 Existing System
4.2 Proposed System
4.3 Defining a system
4.4 System Analysis
4.5 Feasibility Study
4.5.1 Technical Feasibility
4.5.2 Operational Feasibility
4.5.3 Economic Feasibility
4.5.4 Legal Feasibility
5. Specific Requirements
5.1 Hardware Interface
5.2 Software Interface
6. Software Development Approach
6.1 OOP Concepts
6.2 Visual Studio 2010
6.3 SQL Server 2008
6.4 JavaScript
6.5 UML
6.6 Use Case Diagrams
6.7 Class Diagrams
6.8 Interaction Diagrams
6.9 Collaboration Diagrams
7. Implementation
8. Testing
9. Screens
10. Conclusion
11. Bibliography
1. ABSTRACT

A well planned, systematically executed industrial training helps a great deal in inculcating a
good work culture. It provides a linkage between students and the industry in order to develop
awareness of the industrial approach to problem solving based on broad understanding of
operations of the industrial organizations.

This project, entitled “Employee Information System”, has been designed to improve the
management of employee information in the field offices of DGLW. It maintains seniority,
leave records, transfer & posting details and other basic information about employees. Various MIS
reports can be generated through it, viz. Leave Order, Seniority List, Vacancy Position, CR
Status, etc. It is a web-based application and will be accessed simultaneously from many field
offices of DGLW. Field offices enter the information of employees of their region.
This information is compiled and used at Headquarters level for MIS purposes.

The project has been an enriching experience for me in the field of programming and enterprise
application development. The project has been developed to fulfill the requirements of the
employees of the Labour Ministry.

The tools and technologies used for developing the software are ASP for the application
code and SQL Server as the back-end database tool, on the
Microsoft Windows 2000 platform.
2. Organization Profile

At Bhaskara Info Services we offer application and product development, re-engineering,
technology consulting, data warehousing, testing and maintenance services to companies in the travel, tour &
hospitality, cargo & logistics, and oil & gas domains.
Bhaskara Info Services Pvt. Ltd. is a pioneering IT service organization offering countless
services to a wide range of businesses. It is committed to providing quality enterprise-wide solutions
using a wide variety of technologies and platforms, with a special focus on knowledge
management solutions. We offer software development, web development and specialization in
the above-mentioned areas. We build robust, scalable and complete solutions based on your
unique organizational needs, satisfying real business needs in real business environments. We are
committed to providing powerful software application development and services using internet
technologies to improve and transform key enterprise processes. Offering a comprehensive
range of quality knowledge services at minimal cost, the group has been a hub for the
success of aspiring IT segments; the organization delivers high-end service lines spanning
the extremities of IT services while managing the IT assets of our satisfied clients.
With a vision of reaching great heights in the IT industry, serving both domestic and
international sectors, Bhaskara Info Services Pvt. Ltd. brings a fresh and innovative approach to
IT services. We help customers achieve their business objectives by providing innovative,
best-in-class IT solutions and services. In pursuit of our goal, we are driven by a set of closely
held values and business principles.
Quality Objectives
To enhance the company's ability to consistently meet our customer's needs, by improving
organizational and team effectiveness.
To realize our quality policy, daily improvements will be coupled with individual and team
innovations in the following areas:

 Continuous improvement and learning: improvement and learning are a regular part of
daily work, so that each employee seeks to eliminate problems at the source and
identifies opportunities for improvement.
 Customer-driven quality: quality is judged by the customer. The quality process must
lead to services that contribute value and lead to customer delight.
 Timely support to the customer.

We meet customer expectations by:

 Understanding the needs of customers


 Anticipating and working towards the future needs of customers
 Constantly improving and preventing errors from occurring
 Attracting, training and retaining qualified staff

Our Vision
To pioneer in developing the most progressive technology with the most secured systems,
relentless in the pursuit of client and employee excellence.
Our Mission
Our mission is to provide high-quality, extremely good value solutions through strong
relationship with our customers.
Our Philosophy
Always open to our client's needs and always willing to change our ways to suit their style.

How BIS works

BIS brings together simple and coherent programs targeted at specific career paths.
Choose a suitable career path for yourself and follow the specialization courses to get there.

Why BIS
 The BIS Value Pack is the latest and most advanced educational program.
 BIS is a tailor-made, customized program to help students get the right career start.
 The Value Pack is empowered with the right balance of theory and hands-on sessions.
 Available on leading IT tracks, namely- e-Business Administration, Software Testing,
Information Management, Performance Management, Managing Technology & Service
Oriented Architecture.
3. Project Introduction

This section presents the background and motivation of the system that has been designed and
prototyped in this project. The section also reviews the objective of the project.

3.1 Problem Formulation

Problem formulation, or stating the problem, is the starting point of the software development
activity. The objective of this statement is to answer: exactly what must the system do? A
software project is initiated by the client’s need. In the beginning, these needs exist in the minds
of various people in the client’s organization. The analyst has to identify the requirements by
talking to these people and understanding their needs. It goes without saying that an accurate
and thorough understanding of the software requirements is essential to the success of a software
development effort. All further development, such as system analysis, system design and coding,
depends on how accurate and well understood the requirements are. Poorly analyzed and specified
software will disappoint the user and bring grief to the developer, no matter how well
designed and well coded the software is. Capturing software requirements appears to be a relatively simple
task, but appearances are often deceiving: the chances of misinterpretation are high, ambiguity
is probable, and the communication gap between customer and developer is bound to cause
confusion. Requirement understanding begins with a clear and concise heading stating, in a
sentence, the task to be performed. The requirements are then described in a technical manner, in
precise statements.

3.2 Detailed Description Of The Project

The Labour Information Systems Division of NIC is actively involved in the development of the
application for the Directorate General of Labour Welfare. In all, the application systems were to be
developed and implemented at the CLC Division in the Labour Ministry. The system maintains
information about employee records: it maintains each and every record about an
employee regarding posting, leave, vacancy position, etc., and it maintains the details of all
employees located at the various field offices. The application software takes care of the database and
day-to-day operations. DGLW (Directorate General of Labour Welfare) has its Headquarters in
Delhi, and its field offices are spread across the country. This project will help in capturing
information regarding human resources through the field offices, so that manpower can be
monitored at Headquarters. For the ease of the user and the public, the existing systems were
migrated to web-based applications.

3.3 Background

By way of background, the Employee Information System was developed for DGLW, the
Directorate General of Labour Welfare. The Labour Information Systems Division (LISD) of NIC is
actively involved in the development of the application for DGLW.
DGLW has its Headquarters in Delhi, and its field offices are spread across the country.
The system maintains information regarding human resources through the field offices, so that
manpower can be monitored at Headquarters.

The application software takes care of the database and day-to-day operations. For the ease of the
user, the web-based application was developed using ASP with SQL Server at the back end. The
different modules were added to the system as per the DGLW Desk requirements and are being
integrated into this web-based application.

3.4 Motivation

Given the considerable development effort and investment of time involved, there was a clear
need for uniform, more user-friendly application software. With usability in mind,
supporting the existing business processes of DGLW through a web-based application appeared to be a
fruitful concept for adding more value, thereby increasing the quality of the services offered.

3.5 Scope

The “Employee Information System for DGLW” is a big and ambitious project, and I am thankful for
being given this great opportunity to work on it. As already mentioned, this project has gone
through extensive research work, on the basis of which we have successfully
designed and implemented the Employee Information System. The system is based upon a 3-tier
client-server architecture; the tools used for development are described in the following sections.
4. Analysis and Design

4.1 Existing System


In the existing system, employee details are handled manually. This is
not user-friendly.

4.2 Proposed System

It will be able to manage information about employees in a more user-friendly way. The system
will manage employees' information at the various field offices. A user ID and password have been
given to each field office so that it can enter its employees' information into the central
database; each office's access to the central database is restricted to its own information. Various
reports based on the data entered at the field offices are generated at Headquarters.
These reports are helpful in manpower management decisions.

4.3 Defining A System

Collections of components, which are interconnected, and work together to realize some
objective, form a system. There are three major components in every system, namely input,
processing and output.

[Figure: Input → Processing → Output]
Systems Life Cycle

The sequencing of the various activities required for developing and maintaining systems in an
ordered form is referred to as the systems life cycle. It helps in establishing a system project plan, as it
gives the overall list of processes and sub-processes required for developing any system. Here, the
systems life cycle will be discussed with reference to the development of the Employee Management
System.

Broadly, following are the different activities to be considered while defining the systems
development cycle for the said project:

 Problem definition
 Systems analysis
   Study of the existing system
   Drawbacks of the existing system
   Proposed system
   Systems requirement study
   Data flow analysis
   Feasibility study
 Systems design
   Input design (database & forms)
   Updation
   Query/report design
   Administration
 Testing
 Implementation
 Maintenance
4.4 System Analysis

System analysis is a logical process; the objective of this phase is not actually to solve the
problem but to determine what must be done to solve it. The basic objective of the
analysis stage is to develop a logical model of the system using tools such as data flow
diagrams and elementary data descriptions of the elementary algorithms. The logical model is
subject to review by both the management and the user, who agree that the model does in fact
reflect what should be done to solve the problem.

System analysis is not a precise science. It is in fact more of an art, aided by a scientific approach
to defining and recording data. Gathering data from traditional structures is only one part of
system analysis; the next steps are to examine the data, assess the situation and look at the
alternatives.

Analysis and development of the actual solution

A complete understanding of the requirements for the new system is very important for the
successful development of a software product. The requirement specification is the foundation of the
software development process; all further development, such as system analysis, design and
coding, depends on how accurate and well documented the requirement specification is.

Requirement specification appears to be a relatively simple task, but appearances are often
deceiving. There is always a chance of wrong specification because of the communication gap
between the user and the developer. Requirement specification begins with a clear statement of
the problem and the task to be performed. The requirements are then described in a technical
manner, in precise statements. After the initial specification reports are received, they are
analyzed and refined through customer interaction.

Product perspective

It will be able to manage information about employees in a more user-friendly way. The system
will manage employees' information at the various field offices. A user ID and password have been
given to each field office so that it can enter its employees' information into the central
database; each office's access to the central database is restricted to its own information. Various
reports based on the data entered at the field offices are generated at Headquarters.
These reports are helpful in manpower management decisions.

User Interface

 The system will be having user privileges based menu.

 User will have to select the options from the given menu.

 The system will be entering the information into the database to generate reports.

 The forms will be designed to enter the data.

 Buttons will be used to insert, retrieve or modify the data.

 Links will be provided to shift from one form to another.

Hardware – Software Interface

An internet web server running IIS, in this case on Windows 2000 Advanced Server, is used to host
the application. The application software, Employee Management, is developed in ASP,
JavaScript and HTML. The back-end database is MS SQL Server 2000. Client systems with an
internet connection and a web browser will be able to access the system.

Memory Constraints

No memory constraints are applicable. A normal memory configuration is more than sufficient.

Product Function

It is advisable to have weekly data backups. The system administrator will do the data recovery.
Selection of a panel is a user-initiated operation, while indent handling is client-initiated.
Constraints

General Constraints

This system will not take care of any virus problem, which might occur either on the client or the
server system. Avoiding the use of pirated software and ensuring that floppies and other
removable media are scanned for viruses before use could minimize the possibility of viral
infection.

Recovery of data after a system crash will be possible only if backups are taken at regular
intervals.

Manual interfaces cannot be fully avoided. Documented proofs, such as dates, will have to be
verified by the concerned staff before being entered into the computerized system.

Hardware Constraints

Constraints of the Internet & Intranet will be applicable to the system. The performance of the
system will be dependent on the network conditions like network congestion, bandwidth etc. The
primary memory (RAM) and the secondary memory (Hard Disk Space) requirement of the
system at the client end will be the same as that required by the web browser and the operating
system. At the server end memory requirements will be that of the server software (Operating
system, Database Software, etc) and the space required to store the data. The space required to
store the data would increase as more and more records are added to the system.

Security Constraints

Users will be authenticated by means of usernames and passwords. This does not provide
complete security, and the system could be hacked into. Use of the Secure Sockets Layer (SSL) is
recommended: SSL prevents unauthorized access because all communications are
encrypted. Valid digital certificates are required at the server end, and the client web
browser should have support for SSL.
Assumptions and Dependencies

1. It is assumed that the user is familiar with the basic computer fundamentals.
2. Timely backup of data should be taken to avoid data loss in case of system crash.
3. The use of pirated software should be avoided as it may lead to data loss and system
crashes due to viral infections.
4. Floppies and other removable media should be scanned for viruses before use.
5. Proper configuration of the client, database server and network is necessary for the
system to function as intended.
6. It is assumed that the maintenance of the database will be assigned to the authorized
person only.
7. Only authorized persons will be allowed inside the server room.

4.5 Feasibility Study

The main objective of the feasibility study is to assess the technical, operational, economic and
legal feasibility of developing the computerized system. All systems are feasible, given
unlimited resources and infinite time. It is both necessary and prudent to evaluate the feasibility
of the project at the system study phase itself. The feasibility study conducted for this project
involves:

1. Technical Feasibility

2. Operational Feasibility

3. Economic Feasibility

4. Legal Feasibility
4.5.1 Technical Feasibility

Technical feasibility covers risk, resource availability and the technologies involved. The management
provides the latest hardware and software facilities for the successful completion of the project.
With this hardware and software support the system will perform extremely well. The
system is available through the internet.

4.5.2 Operational Feasibility

In the existing manual system it is very difficult to maintain and update a huge amount of
information. The development of this system was started because of the requirements put forward
by the management of the concerned department. This system will handle requests in a
better way and make the process easier; thus it is certain that the system developed is operationally
feasible.

4.5.3 Economic Feasibility

In the economic feasibility study, the development cost of the system is evaluated against
the ultimate benefit derived from the new system. It was found that the benefit from the new
system would be more than the cost and time involved in its development.

4.5.4 Legal Feasibility

In the legal feasibility study it is necessary to check that the software we are going to develop is legally
correct, which means checking whether the ideas we have taken for the proposed system can be legally
implemented or not. So it is also an important step in the feasibility study.
5. Specific Requirements

5.1 Hardware Requirements

An Intel Pentium processor at 500 MHz or faster, a minimum of 364 MB of available disk space for
installation (including the IBM SDK), a minimum of 256 MB of memory (512 MB recommended), and a
CD-ROM drive.

5.2 Software Interface

An internet web server running IIS, in this case on Windows 2000 Advanced Server, is used to host
the application. The application software, Employee Management, is developed in ASP,
JavaScript and HTML. The back-end database is MS SQL Server 2000. Client systems with an
internet connection and a web browser will be able to access the system.
6. Software Development Approach

6.1 Object-Oriented Programming is a method of implementation in which programs are
organized as cooperative collections of objects, each of which represents an instance of a class,
and whose classes are all members of a hierarchy of classes united via inheritance relationships.

OOP Concepts

Four principles of Object Oriented Programming are

 Abstraction
 Encapsulation
 Inheritance
 Polymorphism

Abstraction

Abstraction denotes the essential characteristics of an object that distinguish it from all
other kinds of objects and thus provide crisply defined conceptual boundaries, relative to the
perspective of the viewer.

Encapsulation

Encapsulation is the process of compartmentalizing the elements of an abstraction that
constitute its structure and behavior. Encapsulation serves to separate the contractual interface of
an abstraction from its implementation.

Encapsulation:

 Hides the implementation details of a class.
 Forces the user to use an interface to access data.
 Makes the code more maintainable.
Inheritance

Inheritance is the process by which one object acquires the properties of another object.

Polymorphism

Polymorphism is the existence of classes or methods in different forms, or a single name
denoting different implementations.
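The four principles can be illustrated with a short C# sketch. The Employee/Manager classes here are hypothetical teaching examples, not part of the project's actual code:

```csharp
using System;

// Abstraction + encapsulation: the abstract class exposes only the essential
// interface; the salary field is hidden behind a property.
abstract class Employee
{
    private decimal salary;                 // encapsulated state

    public decimal Salary                   // controlled access via a property
    {
        get { return salary; }
        set { salary = value < 0 ? 0 : value; }   // setter enforces an invariant
    }

    public abstract string Describe();      // abstraction: what, not how
}

// Inheritance: Manager acquires the properties of Employee.
class Manager : Employee
{
    // Polymorphism: one method name, a different implementation per class.
    public override string Describe() { return "Manager"; }
}

class Clerk : Employee
{
    public override string Describe() { return "Clerk"; }
}

class Program
{
    static void Main()
    {
        Employee e = new Manager();         // a Manager "is an" Employee
        e.Salary = -100;                    // rejected by the encapsulated setter
        Console.WriteLine(e.Describe());    // prints "Manager"
        Console.WriteLine(e.Salary);        // prints 0
    }
}
```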

ARCHITECTURE OF ASP.NET


[Figure: ASP.NET request flow — an HTTP request for an .aspx page arrives at the web server (inetinfo.exe), is passed to aspnet_isapi.dll, and is processed by the ASP.NET runtime environment (aspnet_wp.exe): machine.config and web.config are consulted, an AppDomain is created, HTTP handlers process the request, and the HTTP response is returned to the client. Requests for classic .asp pages are routed to asp.dll instead.]

inetinfo.exe  identifies the request and submits it to aspnet_isapi.dll.
aspnet_isapi.dll  a script engine which processes the .aspx page.
The script engine then submits the request to the ASP.NET runtime environment.
After all the security settings of both machine.config and web.config are verified, an
AppDomain is created for the request; after the request has been processed, the response
is returned to the client as an HTTP response.
Machine.config  used to maintain the complete configuration details of all the
web applications registered on the ASP.NET web server.
Web.config  used to maintain the configuration details of a single web application,
where the configuration details include security, database connectivity, state
management, trace details of the web application, authentication and authorization of the
application, and globalization.
AppDomain: All Windows applications run inside a process, and these processes own resources
such as memory and kernel objects; threads execute code loaded into a
process. Processes are protected from each other by the OS. Applications run in high-isolation
mode to work safely, but the disadvantage of this is that memory resources are
blocked. Running all applications in a single process avoids this, which works to an extent,
but the drawback is that if one application crashes, all the others are affected. In .NET,
the code verification feature ensures that code is safe to run,
so each ASP.NET application runs in its own application domain and is therefore protected
from other ASP.NET applications on the same machine; ASP.NET thus ignores the process isolation
specified in IIS.
HTTP Handlers: ASP.NET builds upon an extensible architecture known as the HTTP
runtime, which is responsible for handling requests and sending responses. It is up to
an individual handler, such as ASP.NET or a web service, to implement the work done on a
request. IIS supports a low-level API known as ISAPI; ASP.NET implements a similar
concept with HTTP handlers. When a request is assigned to ASP.NET from IIS, ASP.NET
examines the entries in the <httpHandlers> section, based on the extension of the request, to
determine which handler the request should be sent to.
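As an illustrative sketch, an <httpHandlers> entry in web.config maps an extension to a handler type. The handler type and extension below are made-up examples:

```xml
<configuration>
  <system.web>
    <httpHandlers>
      <!-- route all requests ending in .rss to a custom handler
           (MyApp.RssHandler is a hypothetical type in assembly MyApp) -->
      <add verb="*" path="*.rss" type="MyApp.RssHandler, MyApp" />
    </httpHandlers>
  </system.web>
</configuration>
```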

Features of ASP.NET

ASPX and ASP
Upgrading from ASP to ASPX is not required; ASP.NET supports side-by-side execution,
and hence a request can be passed from ASP to ASPX and vice versa.
Simplified programming model
ASP.NET is a technology which can be implemented using any .NET language,
such as VB.NET or C#, and hence HTML, JavaScript or VBScript is not
required to implement ASP.NET.
Simplified deployment
ASP.NET supports setup and deployment, and hence a web application can be packaged
as a web setup project which can easily be deployed onto the web server,
whereas ASP applications are deployed manually, e.g. uploaded using CuteFTP.
Better performance
As ASPX pages are compiler-based, the performance of the web application
will be faster than with ASP pages (which are interpreter-based).
Caching
Caching is the process of maintaining the result or output of a web page temporarily for
some period of time. ASP supports client-side caching, whereas ASP.NET
supports both client-side and server-side caching.
Security
In ASP, security is handled by IIS or by writing code manually, whereas ASP.NET
has built-in security features such as:
 Windows authentication
 Forms authentication
 Passport authentication
 Custom authentication
More powerful data access
ASP.NET supports both ADO and ADO.NET as its database connectivity models, which
are implemented using powerful object-oriented languages like VB.NET and
C#, and hence database access from ASPX pages is very powerful.
Web services
A web service is code published on the web which can be used by any
application, written in any language, on any platform or device.
Better session management
Session state in ASP.NET can be maintained in a database, and
cookieless sessions are also supported. ASP.NET also supports enabling and disabling
of session information within a web application.
Simplified form validation
ASP.NET provides validation controls with which any type of client-side
validation can be performed without writing any code.
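As a sketch of the validation controls mentioned above, a required field plus a range check can be declared with no hand-written validation code. The control IDs and the age limits here are illustrative:

```aspx
<asp:TextBox id="txtAge" runat="server" />
<asp:RequiredFieldValidator id="rfvAge" runat="server"
    ControlToValidate="txtAge"
    ErrorMessage="Age is required" />
<asp:RangeValidator id="rvAge" runat="server"
    ControlToValidate="txtAge" Type="Integer"
    MinimumValue="18" MaximumValue="60"
    ErrorMessage="Age must be between 18 and 60" />
```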

A web page has two parts:

1) The design part (HTML content, Flash, Dreamweaver output, etc.)
2) The logic part (sub-programs and event procedures, including database
interaction)
ASP.NET supports two techniques for creating a web page:
1) In-page technique
When the design part code and the logic part code are placed within a single ASPX
file, this is called the in-page technique.
2) Code-behind technique
When the design part code is kept in the ASPX file and the logic part code is
compiled into a DLL file, this is called the code-behind technique.
ASP supports only the in-page technique.
A DLL file is not human-readable, so the code-behind technique is more secure.
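A minimal sketch of the code-behind technique follows; the file names, namespace and class name are illustrative, not from the project source:

```aspx
<%-- Login.aspx: design part only; logic lives in the compiled class --%>
<%@ Page Language="C#" Inherits="MyApp.LoginPage" %>
<html><body>
  <form runat="server">
    <asp:Button id="btnLogin" runat="server" Text="Login" OnClick="BtnLogin_Click" />
  </form>
</body></html>
```

```csharp
// Login.aspx.cs: logic part, compiled into a DLL (not readable by site visitors)
namespace MyApp
{
    public class LoginPage : System.Web.UI.Page
    {
        protected void BtnLogin_Click(object sender, System.EventArgs e)
        {
            // authentication logic would go here
        }
    }
}
```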

Difference Between VB 6.0 & VB.NET

VB 6.0:
1) It is an object-based programming language.
2) Variable and member declarations are not mandatory.
3) Uses unstructured methods for handling exceptions.
4) Uses the DAO, RDO and ADO object models for database connectivity.
5) Uses Data Projects as its default reporting tool.
6) Uses COM for language interoperability.
7) Does not support multithreading.
8) Uses DCOM to support distributed technology.
9) Supports web technology; client-side or server-side applications can be designed using VB.

VB.NET:
1) It is an object-oriented programming language.
2) Declarations are mandatory.
3) Uses unstructured/structured methods for handling exceptions.
4) Supports the ADO and ADO.NET models.
5) Uses Crystal Reports.
6) Uses .NET assemblies for language interoperability.
7) Supports multithreading.
8) Uses .NET Remoting to support distributed technology.
9) Does not itself support web technology. Note: VB.NET cannot be used to design client-side/server-side
applications directly, but it can be used as an implementing language for ASP.NET.

Differences between C#.NET & VB.NET

Data types: C#.NET has unsigned data types and is a strongly typed language;
VB.NET has no unsigned data types and is not strongly typed.
OOP concepts: C# has more concepts (interfaces, abstraction, indexers);
VB.NET has no indexers and has limitations with respect to interfaces.
Memory management: C# has a garbage collector with automatic releasing of
resources, which boosts performance; VB.NET has a garbage collector,
destructors and Dispose, but automatic releasing of resources is not
available, so the Dispose method has to be called explicitly.
Operator overloading: available in C#, not available in VB.NET.
Pointers: available in C#, not available in VB.NET.
Auto XML documentation: available in C#, not available in VB.NET.

Page Life Cycle Events

Page_Init
This is fired when the page is initialized.
Page_Load
This is fired when the page is loaded.
The difference between Page_Init and Page_Load is that the controls are guaranteed to be
fully loaded in Page_Load. The controls are accessible in the Page_Init event, but the
ViewState is not yet loaded, so controls will have their default values rather than any values
set during the postback.
Control_Event
This is fired if a control (such as a button) triggered the page to be reloaded.
Page_Unload
This is fired when the page is unloaded from memory.
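In code-behind, these events are wired up as methods with the standard two-argument signature. This is a sketch of the pattern only (the class name is made up, and the code assumes the ASP.NET runtime):

```csharp
using System;
using System.Web.UI;

public class SamplePage : Page
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Controls exist here, but ViewState is not yet loaded:
        // on a postback they still hold their default values.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Controls are fully loaded; ViewState (postback values) is available.
        if (!IsPostBack)
        {
            // one-time initialization, e.g. binding a list control
        }
    }

    protected void Page_Unload(object sender, EventArgs e)
    {
        // The page is being removed from memory; release resources here.
    }
}
```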

Types of Controls in ASP.Net

HTML server controls
Syntax: <INPUT TYPE=TEXT RUNAT=SERVER>

Web server controls
 Standard controls: Label, TextBox, Button, LinkButton, ImageButton, Calendar,
AdRotator, Panel, PlaceHolder, Table, Literal Control, RadioButton, CheckBox, XML
 List controls: RadioButtonList, CheckBoxList, DropDownList, ListBox
 Validation controls: RequiredFieldValidator, RangeValidator, CompareValidator,
RegularExpressionValidator, CustomValidator, ValidationSummary
 Data-bound controls: DataGrid, DataList, Repeater
 Misc controls: Crystal Report Viewer control

Common syntax for any web server control

<asp:controltype id=“name of the control” runat=“server”
----------------
----------------
//additional properties
></asp:controltype>
The tag can also be self-closed with “/”.

In order to set or get the value of any standard control, the Text property should be used.
Eg:
<asp:label id=“lb1” runat=“server” text=“user name”></asp:label>
<asp:button id=“bt1” runat=“server” text=“Login” />
Calendar Control
Usage: it is used to place a calendar on the web form.
– Note: place the calendar control, right-click on it and select AutoFormat to
give the control a better look and feel.
– A calendar control can be considered a collection of table cells,
– where every table cell maintains the information about a day as a calendar day, in
the format of a link button control.
– Whenever the calendar days have to be customized based on the requirements of the user,
the DayRender event should be used.
– Every event handler in .NET accepts two arguments, the first being the object (sender)
and the second the event arguments,
– i.e. DayRender(object, eventArguments).
– The event arguments of the DayRender event provide:
– e.Cell -> to refer to the table cell
– e.Day -> to refer to the calendar day
– In order to add a string value as a control to any other control, a LiteralControl should be
used.
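A DayRender handler following the two-argument convention above might look like this sketch (the highlighted date and page class are hypothetical; the code assumes the ASP.NET runtime):

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class CalendarPage : Page
{
    // DayRender(object, DayRenderEventArgs): e.Cell is the table cell,
    // e.Day is the calendar day currently being rendered.
    protected void Calendar1_DayRender(object sender, DayRenderEventArgs e)
    {
        if (e.Day.Date.Month == 1 && e.Day.Date.Day == 26)   // hypothetical holiday
        {
            e.Cell.BackColor = System.Drawing.Color.LightGreen;
            // adding a string value as a control via a LiteralControl
            e.Cell.Controls.Add(new LiteralControl("<br/>Holiday"));
        }
    }
}
```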

ADO.NET
 Connection-oriented model
 Disconnected model
Connection-oriented model
 Whenever an application uses the connection-oriented model to interact with the database,
the connectivity between the application and the database has to be maintained at all times.
 Whenever the user executes any statement other than a SELECT, the command object can be
bound directly to the application.
 If the user executes a SELECT statement, a DataReader is used to bind the result to the
application.
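The connection-oriented steps above can be sketched as follows. The connection string, table and column names are assumptions for illustration; the code needs a reachable SQL Server database to actually run:

```csharp
using System;
using System.Data.SqlClient;

class ConnectedDemo
{
    static void Main()
    {
        // the connection must stay open while the DataReader is in use
        string connStr = "Server=.;Database=EIS;Integrated Security=true"; // assumed
        using (SqlConnection con = new SqlConnection(connStr))
        {
            con.Open();

            // non-SELECT statement: bind the command object directly
            SqlCommand upd = new SqlCommand(
                "UPDATE Employee SET LeaveBalance = LeaveBalance - 1 WHERE EmpId = 101", con);
            upd.ExecuteNonQuery();

            // SELECT statement: use a DataReader to bind the result
            SqlCommand sel = new SqlCommand("SELECT EmpId, Name FROM Employee", con);
            using (SqlDataReader rdr = sel.ExecuteReader())
            {
                while (rdr.Read())
                    Console.WriteLine("{0} {1}", rdr[0], rdr[1]);
            }
        } // connection closed here
    }
}
```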
Disconnected model
 When the user interacts with the database using this model, connectivity between the
application and the database is not required while performing manipulations or
navigation on the data.
Note: whenever the data is updated back to the database, connectivity is required even in
the disconnected model.
[Figure: Disconnected model — the application works with a DataView over a DataSet, which is available on the client system; a DataAdapter (one of the data providers) acts as the bridge between the DataSet and the database, using a Connection.]

Disconnected Model
 Connection  it is used to establish the physical
connection between the application and the database
 DataAdapter it is a collection of commands which acts
like a bridge between the datastore and the dataset.
 Commands in DataAdapter  the DataAdapter is the collection of all of these commands:
– SelectCommand  executed by the Fill(DataSetName[,DataMember]) method; it also
carries the table mappings.
– InsertCommand
– UpdateCommand
– DeleteCommand  the insert, update and delete commands are executed by the
Update(DataSetName[,DataMember]) method.
DataAdapter
 A DataAdapter can always be bound to a single table at a time.
 Whenever the DataAdapter is used, implicit opening and closing of the Connection
object will take place.
 If the DataAdapter is defined using a tool or a control, all the commands for the
adapter will be defined implicitly, provided the base table has a primary key.
 If the base table is not defined with a primary key, the UpdateCommand and the
DeleteCommand will not be defined.
Fill Method
 It is used to fill the DataSet with the data retrieved by the SelectCommand of the DataAdapter.
Update Method
 It is used to update the database with the data present in the DataMember of the
DataSet.
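The Fill/Update cycle of the disconnected model can be sketched as below (VB.NET; the emp table and connection string are assumed for illustration). Note that the connection is opened implicitly only during Fill and Update:

```vb
Imports System.Data
Imports System.Data.SqlClient

Dim cn As New SqlConnection("Data Source=serverName;Database=empdb;User Id=sa;Password=;")
Dim da As New SqlDataAdapter("SELECT empno, ename, sal FROM emp", cn)
Dim cb As New SqlCommandBuilder(da)     ' builds Insert/Update/Delete commands (base table needs a primary key)
Dim ds As New DataSet()

da.Fill(ds, "emp")                      ' connection opened and closed implicitly
ds.Tables("emp").Rows(0)("sal") = 5000  ' manipulation happens offline, not yet reflected in the db
da.Update(ds, "emp")                    ' connection re-opened only for this call
```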
DataSet
 It is an in-memory representation of the data, in XML format, at the client system.
 Points to remember about DataSet:
– It can contain any number of DataTables, which may belong to the same or to
different databases.
– If any manipulations are performed on the DataSet, they will not be reflected on
the database (until the Update method is called).
– The DataSet is also considered a collection of DataTables, where a DataTable can
be considered a DataMember.
– The DataSet is not aware of where its data comes from or where the data will be
passed.
– The DataSet supports establishing relationships between the DataTables present in
it, even when those DataTables belong to different databases.
 DataSet is of 2 types 
– Typed DataSet  whenever the DataSet is defined with the support of an XML
schema definition, it is said to be a typed DataSet.
– Untyped DataSet  if the DataSet is defined without an XML schema definition,
it is said to be an untyped DataSet.
DataView
 It is a logical representation of the data present in a DataMember of the DataSet.
 Usage  it is used to sort the data, filter the data, or project the data
page-wise.
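A minimal DataView sketch, assuming a DataSet ds already filled with an emp DataMember and a hypothetical DataGrid1 on the webform:

```vb
Dim dv As New DataView(ds.Tables("emp"))
dv.Sort = "ename ASC"          ' sort the data
dv.RowFilter = "sal > 2000"    ' filter the data
DataGrid1.DataSource = dv      ' DataGrid1 is a hypothetical grid control
DataGrid1.DataBind()
```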
Command
 It is used to specify the command to be executed, i.e. it provides the source for
executing the statement.
Data Reader
 It is a forward-only, read-only recordset which maintains the data retrieved by the select
statement.
ADO.NET
[Diagram: in the disconnected model the flow is Connection → DataAdapter → DataSet →
DataView → UI, where the DataView is used if the data has to be filtered, sorted or projected
page-wise; in the connection oriented model the flow is Connection → Command → DataReader
→ UI, where the DataReader is used only if the statement is a select statement.]

ADO.NET
Data providers and their classes:
– SQL Server provider  System.Data.SqlClient: SqlConnection, SqlCommand, SqlDataReader, SqlDataAdapter
– Oracle provider  System.Data.OracleClient: OracleConnection, OracleCommand, OracleDataReader, OracleDataAdapter
– OleDb provider  System.Data.OleDb: OleDbConnection, OleDbCommand, OleDbDataReader, OleDbDataAdapter
– ODBC provider  System.Data.Odbc: OdbcConnection, OdbcCommand, OdbcDataReader, OdbcDataAdapter

 Syntax to define the Object


– Dim objectName as new xxxConnection("ProviderInfo")  where xxx can be
either Sql, Oracle, OleDb or Odbc
 Provider Info
– To connect to MS-Access 2000 and above versions 
• Provider=Microsoft.Jet.OLEDB.4.0;Data Source=databaseName.mdb
– To connect to a SQL Server database 
• Provider=SQLOLEDB.1;User Id=sa;Password=;Database=databaseName;Data Source=serverName
• Note: if SqlConnection is used then Provider=providerName is not
required.
– To connect to Oracle 
• Provider=OraOLEDB.Oracle;User Id=scott;Password=tiger;Data Source=serverName
• OR
• Provider=MSDAORA.1;…….
• Note: if OracleConnection is used then Provider=providerName is not
required.
 To define Command Object 
– Dim objectName as new xxxCommand([SQL Statement,connection
object/Connection String])
 To define DataReader 
– Dim objectName as xxxDataReader
 To define DataAdapter 
– Dim objectName as new xxxDataAdapter(SelectStatement, <Connection Object /
Connection String>)
– Whenever the DataAdapter is defined using the above syntax, only the
SelectCommand will be defined; in order to use the other commands
(insert, update, delete), they have to be built explicitly.
 To define DataSet 
– Dim objectName as new DataSet()
 To define DataView 
– Dim objectName as new DataView(datasetName.DataMemberName)
Security in ASP.NET
 Asp.net provides various authentication methods to achieve security.
 They are: 
– Forms Authentication
– Windows Authentication
– Passport Authentication
– Custom Authentication
FORMS Authentication
 It is used to authenticate user credentials for Internet and intranet applications.
 It is used to specify the authentication mode to be used by the ASP.NET web application,
to specify the login page information, and to specify the format of the password to be used
for additional security; it also acts like a database which maintains the user
credentials information.
 Syntax to set the authentication
<authentication mode="Forms">
  <forms loginUrl="login.aspx">
    <credentials passwordFormat="SHA1/MD5/Clear">
      <user name="_____" password="____" />
      _____________  any number of user entries
    </credentials>
  </forms>
</authentication>
Authorization
 It is used to allow or deny users from accessing the webforms present in the web
application.
 <authorization>
 <allow users="__,__,__ / *" />
 <deny users="__,__,__ / *" />
 </authorization>
 Note: the tags and the attributes present in web.config are case-sensitive.
 In order to support Forms authentication, the .NET Framework provides a
base class called "System.Web.Security.FormsAuthentication".

Methods to support Forms Authentication


 Authenticate :It is used to authenticate if the provided information belongs to a valid
user credentials or not.It returns True if user info is valid else returns false.
 Syntax  authenticate(username,password)
 RedirectFromLoginPage  It is used to redirect to the requested webform from the login
page if the provided user credentials belongs to a valid user.
 Syntax :- redirectFromLoginPage(username,booleanvalue)
 If specified TRUE then the user info will be maintained as a permanent HTTP Cookie at
the client system and if FALSE is specified then user info will be maintained temporarily
till the browser is closed.
 HashPasswordForStoringInConfigFileit is used to encrypt the data using either SHA1
or md5 hash algorithms.
 Syntax  HashPasswordForStoringInConfigFile
(original Text,”md5/sha1”)
 SignOut  It is used to clear the session of the user which has been set the application
 User.identity.name  returns the name of the user who has currently logged in.
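A hypothetical login-button handler tying the above methods together (the control names btnLogin, txtUser, txtPwd, chkRemember and lblMsg are assumptions for illustration):

```vb
Imports System.Web.Security

Private Sub btnLogin_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnLogin.Click
    If FormsAuthentication.Authenticate(txtUser.Text, txtPwd.Text) Then
        ' True -> persistent cookie; False -> cookie lives only till the browser closes
        FormsAuthentication.RedirectFromLoginPage(txtUser.Text, chkRemember.Checked)
    Else
        lblMsg.Text = "Invalid user credentials"
    End If
End Sub
```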

Windows Authentication
 It is used to authenticate user information based on the users registered on the
network.
 Note: it is used to validate users in an intranet environment.
 In web.config file 
– <authentication mode=“windows” />
– <authorization>
<allow users/role =“DomainName/UserName,---” / roleName />
<deny users/role = “DomainName/UserName,---” / roleName />
– </authorization>
– Whenever the currently logged-in user is present in the allow-users list, all the
webforms present in the web application can be accessed directly. Otherwise the
web server will implicitly project a dialog box to provide the user credentials, and will
allow the user to access the webforms provided the information belongs to a valid user.
Types of Windows Authentication
 Basic authentication  if used as the authentication type, the user credentials will be
passed across the network in clear-text format.
 Digest authentication  it is a special authentication type used to authenticate
domain server users.
– Note: if the OS is not a domain server then the Digest authentication type will be
disabled on that system.
 NTLM authentication  it is the default authentication type used by Windows
authentication, where NTLM stands for NT LAN Manager; it is also known as
Integrated Windows Authentication.

Steps to set the authentication Type


 Start > RUN > inetmgr
 Right click on default web site and select properties
 Click on Directory Security tab
 Click on the Edit button present in the anonymous access and authentication control
 Check on the different authentication types to be used
 To know the host name of the system
– [ in the command prompt ]
• C:\>hostname
– This gives the host name of the machine.

Passport Authentication
 If the same user credentials have to be maintained across multiple websites, passport
authentication can be used.
 To achieve this 
– Install the Microsoft Passport SDK
– In the web.config file
• <authentication mode="Passport">
    <passport redirectUrl="internal URL" />
  </authentication>
Custom Authentication
 It is used to Validate the user credentials as per the requirement of the application.
STATE MANAGEMENT IN ASP.NET
• It is used to maintain the state of the user across multiple pages.
{ OR } The web server maintaining client information without any connectivity is called
state management. This can be implemented in various ways:
1.View State [ Hidden field ]
2. Page Submission
3.Cookies
4.Session
5.Query String
6.Application
7. Cache

View State
• It is the concept of persisting control properties between requests under postback
implementation.
• View state is implemented based on a hidden field.
• The main advantages of view state are 2 things:
• No programming is required from the developer, so there is less burden on the developer.
• Memory is allocated neither in the client system nor in the web server system; it
is maintained as part of the web page itself.
• The problem with view state is that there will be a larger amount of data transferred
between the client and the web server.
• The view state can be controlled at the 3 levels 
1} Control level 
<asp:ControlName … EnableViewState="true/false" />
Note: when it comes to sensitive data it is not recommended to implement view state;
sensitive data can be a password, a credit card number, etc.
• With a password-type textbox, view state is implicitly not applicable.
• 2} Page level 
<%@ Page … EnableViewState="true/false" %>
• 3} Application level 
It requires web.config.
It will be applicable to all the web pages.

COOKIES
• It is used to maintain server-side information at the client system. { OR } A cookie
can be defined as a small amount of memory used by the web server on the client system.
Usage: the main purpose of cookies is to store personal information of the
client; this can be a username, password, number of visits, session id.
• Cookies can be of 2 types:-
• Client-side cookies  if the cookie information is set using JavaScript / VBScript within
an HTML page, it is said to be a client-side cookie.
• Server-side cookies  if the cookie information is set using a server-side technology,
it is said to be a server-side cookie. They are of 2 types:
1] Persistent cookies ( permanent cookies )
2] Non-persistent cookies ( temporary cookies )
• 1] Persistent cookies ( permanent cookies )
• When the cookie is stored in hard disk memory, it is called a
persistent cookie.
• When you provide Expires, the cookie will be considered persistent.
• 2] Non-persistent cookies ( temporary cookies )
• When the cookie is stored within the process memory of the browser, it is
called a temporary cookie.
Syntax
• To set the cookie information
Response.Cookies("cookie name").Value = value
• To get or read the value from a cookie
variable = Request.Cookies("cookie name").Value
Points to remember about cookies
• Cookie information will always be stored at the client system only.
• Cookie information is browser dependent, i.e. the values of cookies set by one
browser can't be read by another browser.
• If the cookie information is set by IE, that information will be maintained in the memory of
the browser itself.
• If the cookie information is set by Netscape Navigator, the information will be
maintained in a "Cookies.txt" file.
• There is no security for cookie information.
• Browsers have the capability to disable the usage of cookies.
• Note  if the browser disables cookies, then a request for a web form which
uses cookies will not function properly.
• The user can change the cookie content, or delete the text file.
• A browser supports 20 cookies per website; if a 21st cookie is added,
the first cookie is automatically deleted.
• A cookie can represent maximum of 4kb of data.
• To bind the cookie information to a specific domain 
response.cookies(“cookie name”).Domain = DomainName
• To allow the different paths of the same domain to access the cookie information 
response.cookies(“cookie name”).path = “/path….”
• note the default expiration time for the cookies is 30 min.
• To set the expiration time for the cookie info 
response.cookies(“cookie name”).expires = dateTime
• To secure the cookie information 
response.cookies(“cookie Name”).secure = booleanValue
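Putting the cookie syntax together, a small sketch (the "visits" cookie name is an assumption):

```vb
' Set a cookie and make it persistent by giving it an expiry
Response.Cookies("visits").Value = "1"
Response.Cookies("visits").Expires = DateTime.Now.AddDays(30)

' Read it back on a later request (guard against the cookie being absent or disabled)
If Request.Cookies("visits") IsNot Nothing Then
    Dim visits As String = Request.Cookies("visits").Value
End If
```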

Session
When a client makes its first request to the application, the ASP.NET runtime will create a block
of memory for the client on the web server machine. This block of memory is called session
memory. This memory is unique to the client, with a timeout of 20 minutes by default. Here the
timeout counts from the last client request, not from the creation of the session. A cookie can
represent only plain text, not an object, but session memory can hold objects.
Differences between Session & Cookies
Session:
– It is maintained in the web server system, so it is called server-side state management.
– A session can represent objects.
– More security for data.
– Accessing will be slow.
Cookies:
– They are maintained in the client system, so they are called client-side state management.
– A cookie can represent only plain text.
– Less security for data.
– Accessing will be faster.

Limitations of sessions in ASP


In ASP, client info is maintained by the server using its SessionID irrespective of session
object usage.
Sessions in ASP are always cookie based.
Enabling and disabling of sessions is not supported in ASP.
Sessions in ASP.Net
Sessions in ASP.net can be
Cookies Based ( Default )
Cookieless
It can be stored in database also (SQL Server)
Syntax
 To set session info
Session("variable") = value
 To read / get the value
variable = Session("variable")
Note:
If the value assigned to the session variable is character data, the info will be
maintained in the Contents collection of the session object.
If the value assigned to the session variable is an object, that information will be
maintained in the StaticObjects collection of the session object.
By default the session state for the application is enabled, and hence the contents of the
session object can be used.
In order to disable session object usage in a web form, the EnableSessionState
attribute of the page directive should be set to false,
i.e. go to the HTML view and in the page directive at the start of the
page set EnableSessionState to false.
Syntax 
<% @ page language =“vb” enablesessionstate=“false”…….%>
Session Object
Session Object  this object can be used to access session memory from asp.net web
page.
The following are the methods 
1. Add(key, value)  where key is a String and value is an Object
2. Remove(key)
3. Abandon()  to close the session
4. SessionID
5. TimeOut

Points to remember about Session


The default session timeout is 20 minutes.
To set the session timeout:
Session.Timeout = minutes
{OR}
In web.config there is a tag available for session state:
<sessionState mode="Inproc" cookieless="false" timeout="minutes" />
Note: by default the session state uses cookies for maintaining the info.
To define a session as cookieless, in web.config:
<sessionState mode="Inproc" cookieless="true" timeout="20" />
Note: once the session state is set to cookieless, the session ID
will be appended to the URL of each and every webform present in the web application.
In order to retrieve the session ID of the client,
Session.SessionID should be used.
In order to maintain the session info on a SQL Server database, in web.config:
<sessionState mode="SQLServer" stateConnectionString="tcpip=127.0.0.1:42424"
sqlConnectionString="______" cookieless="false" timeout="20" />
In order to clear a session variable present in the session object's Contents collection,
Session.Contents.Remove("sessionVariable") should be used.
In order to clear all the items present in the Contents collection,
Session.Contents.RemoveAll() should be used.
In order to kill the session of the user, the Session.Abandon() method should be used.
To disable session information for a specific webform, EnableSessionState="false"
should be set for that page.

Application
 It is used to maintain the state of all the users accessing the web application.
 When the first request of the first client comes to the application, the web server will
allocate a block of memory; this is called application memory.
 The application memory will not have any life time.
 Application object can be used to access application memory from asp.net web page
 Application object consists the following methods 
1} Add (key,value) {or} Application(“var”) = value
2} Remove(key)
3} lock()
4} unLock()
note  the lock and unlock are not available in session,but available in application .
 To set:
Application (“variable”) = value
 To read:
variable = application(“variable”)
 ProblemIf the application object is not maintained properly then it will result in Data
Inconsistency.
 When ever the application variables are used in the webform then it is mandatory to Lock
the application contents.
 To do: Application.Lock()
 If application.lock() method is encountered while processing the webform then all the
other requests which uses the application contents will not be blocked till the webform
processing is completed.
 Lock is used to allow only one client at a particular time.
 Each client requests to the webserver is considered as thread.webserver will allocate
equal processor time to all the threads.In this aspect more then one thread can manipulate
application memory data,this may lead to improper result to avoid this it is recommended
for synchronisation of threads.
 Synchronisation is nothing but allowing user one at a particular time.
 The synchronisation of threads can be implemented using lock and unlock methods.
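A sketch of synchronised use of an application variable (the "hits" counter name is hypothetical):

```vb
' Increment a site-wide hit counter; Lock/UnLock serialise the two statements
' so that concurrent requests cannot interleave the read and the write.
Application.Lock()
Application("hits") = CInt(Application("hits")) + 1
Application.UnLock()
```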

Global.asax
 It’s a collection of events where the code written in those events will be executed
implicitly whenever the relevant event takes place.
 In order to work with the application and the session objects and to handle the events in a
proper manner “global.asax” file should be used.
 Application_Start  the code written in this event will be executed only once whenever
the application has been encountered with the first request
 Session_Start  the code written in this event will be executed when ever a new session
for the user starts.
 Application_BeginRequest  the code written in this event will be fired whenever any
webform present in the web application is loaded.
 Application_AuthenticateRequest  the code written in this event will be executed whenever
authentication takes place.
 Application_Error  the code written in this event will be executed whenever any error
or exception occurs in the webforms present in the web application.
Note  in order to get the last error which was generated on the webform,
Server.GetLastError() should be used.
 Session_End  the code written in this event will be executed whenever the session of
the user ends
 Application_End  the code written in this event will be executed whenever the web
application is closed.
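For example, a Global.asax sketch using two of these events (the "hits" application variable is an assumption):

```vb
Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs)
    Application("hits") = 0        ' runs once, on the first request to the application
End Sub

Sub Session_Start(ByVal sender As Object, ByVal e As EventArgs)
    Application.Lock()             ' count every new user session
    Application("hits") = CInt(Application("hits")) + 1
    Application.UnLock()
End Sub
```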

Caching
• It is used to maintain the result of the webform temporarily for a specific period of time.
• ASP supports only client-side caching, whereas ASP.NET supports both client-side
caching and server-side caching.

Client Side Caching


• If the cache page is maintained at the client side, it is said to be client-side caching.
[Diagram: clients C1, C2, C3 connect through a modem/ISP and a gateway/proxy server,
where the cache page is kept, to the web server.]
• To set this:
Response.CacheControl = "Public"
• Advantage: only the people who are connected in the network will get the
page faster.

Server Side Caching


• If the cache page is maintained at the web server, it is said to be server-side caching.
• Points to remember
• Caching should be used if and only if the following properties are satisfied:
1} The contents of the webform should not be modified, at least for a specific period of
time.
2} The number of requests for the webform present in the web application should be high.

Types – Server side caching


• 1  Page Output Cache
• 2  Page Fragmentation (Partial) Cache
• 3  Data Cache

Page – Output cache


Whenever the complete result (output) of the webform is maintained as a cache page at the
web server, it is said to be page-output cache.
• To set
<%@ OutputCache Duration="seconds"
VaryByParam="none/controlName/variableName" %>
• VaryByParam  it is used to keep an individual cache page for every distinct value
of the control or variable assigned to VaryByParam.
{example 1}
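For instance, the following directive (the deptno parameter name is an assumption) keeps one cached copy of the page per distinct deptno query-string value for 60 seconds:

```aspx
<%@ OutputCache Duration="60" VaryByParam="deptno" %>
```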
• Page fragmentation cache  it is used to maintain only partial page contents as
cache contents on the web server.
• To achieve page fragmentation 
 Define a web user control
 Set the cache for the user control
 Use the web user control on the web form.

Web User Control


• Web user control  it is used to design a reusable control which can be used by any
webform of ASP.NET.
• To design  Project  Add Web User Control
• To use the web user control on the web form 
• 1st method 
select the name of the web user control file in the Solution Explorer and then drag-drop
that file onto the web form.
• 2nd method 
1} register the web user control with a tag prefix in the webform, for e.g.:
<%@ Register TagPrefix="UC1" TagName="WebUserControl2"
Src="WebUserControl2.ascx" %>
2} place the web user control as a normal control on the webform:
<UC1:WebUserControl2 id="wuc2" runat="server" />

Data Cache
• It is used to maintain the data present in an object as cache information, where the
object can be a DataSet, DataView or DataReader.
• Note: once the data has been set as cache, if the data is modified or manipulated at
the database level there won't be any reflection in the data present in the cache.

Tracing
 It is used to trace the flow of the application.
 It is of 2 types 
 Application-level tracing  if this is used, the trace details will be provided for all the
webforms present in the web application.
 Page-level tracing  if used, the trace details will be set only for the specific web form.
 Note  if both application-level and page-level tracing are set, preference
is given to the page-level setting only.
 To set application-level tracing 
in web.config  <trace enabled="true" requestLimit="10" pageOutput="true" … />
Methods to support tracing
 Trace.Write  it is used to write data into the trace information.
 Trace.Warn  it is used to write data into the trace information using red as its fore
color, so that the information is highlighted in the trace info section.
 To set page-level trace info, in the page directive tag:
<%@ Page Language="vb" Trace="true" %>

Introduction to C#

The Common Language Infrastructure


The Common Language Infrastructure (CLI) is a specification that allows several different
programming languages to be used together on a given platform

 Parts of the Common Language Infrastructure:


o Common Intermediate Language (CIL), including a Common Type System (CTS)
o Common Language Specification (CLS) - shared by all languages
o Virtual Execution System (VES)

Metadata about types, dependent libraries, attributes, and more

MONO and .NET are both implementations of the Common Language Infrastructure

The C# language and the Common Language Infrastructure are standardized by ECMA and ISO

CLI Overview
C# Compilation and Execution
The Common Language Infrastructure supports a two-step compilation process

 Compilation
o The C# compiler: Translation of C# source to CIL
o Produces .dll and .exe files
o Just in time compilation: Translation of CIL to machine code
 Execution
o With interleaved Just in Time compilation
o On Mono: Explicit activation of the interpreter

On Windows: Transparent activation of the interpreter

.dll and .exe files are - with some limitations - portable in between different platforms
What is Common Language Runtime?

The Common Language Runtime is the engine that compiles the source code into an
intermediate language. This intermediate language is called the Microsoft Intermediate
Language (MSIL).

During the execution of the program this MSIL is converted to the native code or the machine
code. This conversion is possible through the Just-In-Time compiler. During compilation the end
result is a Portable Executable file (PE).

This portable executable file contains the MSIL and additional information called the metadata.
This metadata describes the assembly that is created. Class names, methods, signature and other
dependency information are available in the metadata. Since the CLR compiles the source code
to an intermediate language, it is possible to write the code in any language of your choice. This
is a major advantage of using the .Net framework.

The other advantage is that the programmers need not worry about managing the memory
themselves in the code. Instead the CLR will take care of that through a process called Garbage
collection. This frees the programmer to concentrate on the logic of the application instead of
worrying about memory handling.

Common Language Runtime Overview

.NET Framework 1.1



Compilers and tools expose the runtime's functionality and enable you to write code that benefits
from this managed execution environment. Code that you develop with a language compiler that
targets the runtime is called managed code; it benefits from features such as cross-language
integration, cross-language exception handling, enhanced security, versioning and deployment
support, a simplified model for component interaction, and debugging and profiling services.

To enable the runtime to provide services to managed code, language compilers must emit
metadata that describes the types, members, and references in your code. Metadata is stored with
the code; every loadable common language runtime portable executable (PE) file contains
metadata. The runtime uses metadata to locate and load classes, lay out instances in memory,
resolve method invocations, generate native code, enforce security, and set run-time context
boundaries.

The runtime automatically handles object layout and manages references to objects, releasing
them when they are no longer being used. Objects whose lifetimes are managed in this way are
called managed data. Garbage collection eliminates memory leaks as well as some other
common programming errors. If your code is managed, you can use managed data, unmanaged
data, or both managed and unmanaged data in your .NET Framework application. Because
language compilers supply their own types, such as primitive types, you might not always know
(or need to know) whether your data is being managed.

The common language runtime makes it easy to design components and applications whose
objects interact across languages. Objects written in different languages can communicate with
each other, and their behaviors can be tightly integrated. For example, you can define a class and
then use a different language to derive a class from your original class or call a method on the
original class. You can also pass an instance of a class to a method of a class written in a
different language. This cross-language integration is possible because language compilers and
tools that target the runtime use a common type system defined by the runtime, and they follow
the runtime's rules for defining new types, as well as for creating, using, persisting, and binding
to types.

As part of their metadata, all managed components carry information about the components and
resources they were built against. The runtime uses this information to ensure that your
component or application has the specified versions of everything it needs, which makes your
code less likely to break because of some unmet dependency. Registration information and state
data are no longer stored in the registry where they can be difficult to establish and maintain.
Rather, information about the types you define (and their dependencies) is stored with the code
as metadata, making the tasks of component replication and removal much less complicated.
Language compilers and tools expose the runtime's functionality in ways that are intended to be
useful and intuitive to developers. This means that some features of the runtime might be more
noticeable in one environment than in another. How you experience the runtime depends on
which language compilers or tools you use. For example, if you are a Visual Basic developer,
you might notice that with the common language runtime, the Visual Basic language has more
object-oriented features than before. Following are some benefits of the runtime:

 Performance improvements.
 The ability to easily use components developed in other languages.
 Extensible types provided by a class library.
 New language features such as inheritance, interfaces, and overloading for object-
oriented programming; support for explicit free threading that allows creation of
multithreaded, scalable applications; support for structured exception handling and
custom attributes.

If you use Microsoft® Visual C++® .NET, you can write managed code using the Managed
Extensions for C++, which provide the benefits of a managed execution environment as well as
access to powerful capabilities and expressive data types that you are familiar with. Additional
runtime features include:

 Cross-language integration, especially cross-language inheritance.


 Garbage collection, which manages object lifetime so that reference counting is
unnecessary.
 Self-describing objects, which make using Interface Definition Language (IDL)
unnecessary.
 The ability to compile once and run on any CPU and operating system that supports the
runtime.

You can also write managed code using the C# language, which provides the following benefits:

 Complete object-oriented design.


 Very strong type safety.
 A good blend of Visual Basic simplicity and C++ power.
 Garbage collection.
 Syntax and keywords similar to C and C++.
 Use of delegates rather than function pointers for increased type safety and security.
Function pointers are available through the use of the unsafe C# keyword and
the /unsafe option of the C# compiler (Csc.exe) for unmanaged code and data.

Managed Execution Process


The managed execution process includes the following steps:

Choosing a compiler:

To obtain the benefits provided by the common language runtime, you must use one or more
language compilers that target the runtime, such as Visual Basic, C#, Visual C++, JScript, or one
of many third-party compilers such as an Eiffel, Perl, or COBOL compiler.

Because it is a multilanguage execution environment, the runtime supports a wide variety of data
types and language features. The language compiler you use determines which runtime features
are available and you design your code using those features. Your compiler, not the runtime,
establishes the syntax your code must use. If your component must be completely usable by
components written in other languages, your component's exported types must expose only
language features that are included in the Common Language Specification (CLS).

Compiling your code to Microsoft intermediate language (MSIL):

When compiling to managed code, the compiler translates your source code into Microsoft
intermediate language (MSIL), which is a CPU-independent set of instructions that can be
efficiently converted to native code. MSIL includes instructions for loading, storing, initializing,
and calling methods on objects, as well as instructions for arithmetic and logical operations,
control flow, direct memory access, exception handling, and other operations. Before code can
be run, MSIL must be converted to CPU-specific code, usually by a just-in-time (JIT) compiler.
Because the common language runtime supplies one or more JIT compilers for each computer
architecture it supports, the same set of MSIL can be JIT-compiled and run on any supported
architecture.
When a compiler produces MSIL, it also produces metadata. Metadata describes the types in
your code, including the definition of each type, the signatures of each type's members, the
members that your code references, and other data that the runtime uses at execution time. The
MSIL and metadata are contained in a portable executable (PE) file that is based on and extends
the published Microsoft PE and common object file format (COFF) used historically for
executable content. This file format, which accommodates MSIL or native code as well as
metadata, enables the operating system to recognize common language runtime images. The
presence of metadata in the file along with the MSIL enables your code to describe itself, which
means that there is no need for type libraries or Interface Definition Language (IDL). The
runtime locates and extracts the metadata from the file as needed during execution.
For detailed descriptions of MSIL instructions, see the Tool Developers Guide directory of
the .NET Framework SDK.

Compiling translates your source code into MSIL and generates the required metadata.
Compiling MSIL to Native Code

Before you can run Microsoft intermediate language (MSIL), it must be converted by a .NET
Framework just-in-time (JIT) compiler to native code, which is CPU-specific code that runs on
the same computer architecture as the JIT compiler. Because the common language runtime
supplies a JIT compiler for each supported CPU architecture, developers can write a set of MSIL
that can be JIT-compiled and run on computers with different architectures. However, your
managed code will run only on a specific operating system if it calls platform-specific native
APIs, or a platform-specific class library.

JIT compilation takes into account the fact that some code might never get called during
execution. Rather than using time and memory to convert all the MSIL in a portable executable
(PE) file to native code, it converts the MSIL as needed during execution and stores the resulting
native code so that it is accessible for subsequent calls. The loader creates and attaches a stub to
each of a type's methods when the type is loaded. On the initial call to the method, the stub
passes control to the JIT compiler, which converts the MSIL for that method into native code and
modifies the stub to direct execution to the location of the native code. Subsequent calls of the
JIT-compiled method proceed directly to the native code that was previously generated, reducing
the time it takes to JIT-compile and run the code.
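The stub-replacement scheme described above can be sketched as a self-rewriting function. The following is purely an illustrative analogy in JavaScript (makeJitSlot and compile are invented names, not CLR APIs), not how the runtime is actually implemented: the method slot starts as a stub, the first call "compiles" the body, and the slot is rewritten so later calls go straight to the compiled code.

```javascript
// Sketch of JIT stub replacement (illustrative analogy, not the actual CLR):
// each method slot starts as a stub; the first call "compiles" the method,
// then replaces the slot so later calls skip the compiler entirely.
function makeJitSlot(msilBody) {
  let compileCount = 0;
  const slot = {
    invoke: function (...args) {
      compileCount++;                    // "JIT compilation" happens here, once
      const native = compile(msilBody);  // hypothetical compile step
      slot.invoke = native;              // rewrite the stub -> "native" code
      return native(...args);
    },
    get compiles() { return compileCount; },
  };
  return slot;
}

// Hypothetical "compiler": wraps the body as directly callable code.
function compile(body) {
  return (...args) => body(...args);
}

const add = makeJitSlot((a, b) => a + b);
console.log(add.invoke(1, 2)); // first call triggers compilation -> 3
console.log(add.invoke(3, 4)); // subsequent calls go straight through -> 7
console.log(add.compiles);     // compiled exactly once -> 1
```

The point of the pattern is the same as in the runtime: the cost of compilation is paid once per method, on first use, rather than up front for every method in the file.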

The runtime supplies another mode of compilation called install-time code generation. The
install-time code generation mode converts MSIL to native code just as the regular JIT compiler
does, but it converts larger units of code at a time, storing the resulting native code for use when
the assembly is subsequently loaded and run. When using install-time code generation, the entire
assembly that is being installed is converted into native code, taking into account what is known
about other assemblies that are already installed. The resulting file loads and starts more quickly
than it would have if it were being converted to native code by the standard JIT option.

As part of compiling MSIL to native code, code must pass a verification process unless an
administrator has established a security policy that allows code to bypass verification.
Verification examines MSIL and metadata to find out whether the code is type safe, which
means that it only accesses the memory locations it is authorized to access. Type safety helps
isolate objects from each other and therefore helps protect them from inadvertent or malicious
corruption. It also provides assurance that security restrictions on code can be reliably enforced.

The runtime relies on the fact that the following statements are true for code that is verifiably
type safe:

 A reference to a type is strictly compatible with the type being referenced.


 Only appropriately defined operations are invoked on an object.
 Identities are what they claim to be.

During the verification process, MSIL code is examined in an attempt to confirm that the code
can access memory locations and call methods only through properly defined types. For
example, code cannot allow an object's fields to be accessed in a manner that allows memory
locations to be overrun. Additionally, verification inspects code to determine whether the MSIL
has been correctly generated, because incorrect MSIL can lead to a violation of the type safety
rules. The verification process passes a well-defined set of type-safe code, and it passes only
code that is type safe. However, some type-safe code might not pass verification because of
limitations of the verification process, and some languages, by design, do not produce verifiably
type-safe code. If type-safe code is required by security policy and the code does not pass
verification, an exception is thrown when the code is run.

At execution time, a just-in-time (JIT) compiler translates the MSIL into native code. During this compilation, the code must pass a verification process that examines the MSIL and metadata to determine whether the code is type safe.

Executing your code:

The common language runtime provides the infrastructure that enables execution to take
place as well as a variety of services that can be used during execution.
6.3 SQL Server 2008
CLR architecture
SearchSQLServer.com


The .NET Framework CLR is very tightly integrated with the SQL Server 2005 database engine.
In fact, the SQL Server database engine hosts the CLR. This tight level of integration gives SQL
Server 2005 several distinct advantages over the .NET integration that's provided by DB2 and
Oracle. You can see an overview of the SQL Server 2005 database engine and CLR integration
in Figure 3-1.

As you can see in Figure 3-1, the CLR is hosted within the SQL Server database engine. A SQL
Server database uses a special API or hosting layer to communicate with the CLR and interface
the CLR with the Windows operating system. Hosting the CLR within the SQL Server database
gives the SQL Server database engine the ability to control several important aspects of the CLR,
including

 Memory management 
 Threading 
 Garbage collection

The DB2 and Oracle implementations both use the CLR as an external process, which means that
the CLR and the database engine compete for system resources. SQL Server 2005's in-process
hosting of the CLR provides several important advantages over the external implementation
used by Oracle or DB2. First, in-process hosting enables SQL Server to control
the execution of the CLR, putting essential functions such as memory management, garbage
collection, and threading under the control of the SQL Server database engine. In an external
implementation the CLR will manage these things independently. The database engine has a
better view of the system requirements as a whole and can manage memory and threads better
than the CLR can do on its own. In the end, hosting the CLR in-process will provide better
performance and scalability.

Figure 3-1: The SQL Server CLR database architecture


Enabling CLR support

By default, the CLR support in the SQL Server database engine is turned off. This ensures that
update installations of SQL Server do not unintentionally introduce new functionality without the
explicit involvement of the administrator. To enable SQL Server's CLR support, you need to use
the advanced options of SQL Server's sp_configure system stored procedure, as shown in the
following listing:

sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

CLR Database object components

To create .NET database objects, you start by writing managed code in any one of the .NET
languages, such as VB, C#, or Managed C++, and compile it into a .NET DLL (dynamic link
library). The most common way to do this would be to use Visual Studio 2005 to create a new
SQL Server project and then build that project, which creates the DLL. Alternatively, you can
create the .NET code using your editor of choice and then compile it into a .NET DLL using
the .NET Framework SDK. ADO.NET is the middleware that connects the CLR DLL to the SQL
Server database. Once the .NET DLL has been created, you need to register that DLL with SQL
Server, creating a new SQL Server database object called an assembly. The assembly essentially
encapsulates the .NET DLL. You then create a new database object such as a stored procedure or
a trigger that points to the SQL Server assembly. You can see an overview of the process to
create a CLR database object in Figure 3-2.

Figure 3-2: Creating CLR database objects


CLR assemblies in SQL Server 2005

SQL Server .NET data provider

If you're familiar with ADO.NET, you may wonder exactly how CLR database objects connect
to the database. After all, ADO.NET makes its database connection using client-based .NET data
providers such as the .NET Framework Data Provider for SQL Server, which connects using
networked libraries. While that's great for a client application, going through the system's
networking support for a database call isn't the most efficient mode for code that's running
directly on the server. To address this issue, Microsoft created the new SQL Server .NET Data
Provider. The SQL Server .NET Data Provider establishes an in-memory connection to the SQL
Server database.

Assemblies

After the coding for the CLR object has been completed, you can use that code to create a SQL
Server assembly. If you're using Visual Studio 2005, then you can simply select the Deploy
option, which will take care of both creating the SQL Server assembly as well as creating the
target database object.

If you're not using Visual Studio 2005 or you want to perform the deployment process manually,
then you need to copy the .NET DLL to a common storage location of your choice. Then, using
SQL Server Management Studio, you can execute a T-SQL CREATE ASSEMBLY statement
that references the location of the .NET DLL, as you can see in the following listing:

CREATE ASSEMBLY MyCLRDLL
FROM '\\SERVERNAME\CodeLibrary\MyCLRDLL.dll'
The CREATE ASSEMBLY command takes a parameter that contains the path to the DLL that
will be loaded into SQL Server. This can be a local path, but more often it will be a path to a
networked file share. When the CREATE ASSEMBLY is executed, the DLL is copied into the
master database.

If an assembly is updated or becomes deprecated, then you can remove the assembly using the
DROP ASSEMBLY command as follows:

DROP ASSEMBLY MyCLRDLL

Because assemblies are stored in the database, when the source code for that assembly is
modified and the assembly is recompiled, the assembly must first be dropped from the database
using the DROP ASSEMBLY command and then reloaded using the CREATE ASSEMBLY
command before the updates will be reflected in the SQL Server database objects.

You can use the sys.assemblies view to view the assemblies that have been added to SQL Server
2005 as shown here:

SELECT * FROM sys.assemblies

Since assemblies are created using external files, you may also want to view the files that were
used to create those assemblies. You can do that using the sys.assembly_files view as shown
here:

SELECT * FROM sys.assembly_files

Creating CLR database objects



After the SQL Server assembly is created, you can then use SQL Server Management Studio to
execute a T-SQL CREATE PROCEDURE, CREATE TRIGGER, CREATE FUNCTION,
CREATE TYPE, or CREATE AGGREGATE statement that uses the EXTERNAL NAME
clause to point to the assembly that you created earlier.

When the assembly is created, the DLL is copied into the target SQL Server database and the
assembly is registered. The following code illustrates creating the MyCLRProc stored procedure
that uses the MyCLRDLL assembly:

CREATE PROCEDURE MyCLRProc

AS EXTERNAL NAME

MyCLRDLL.StoredProcedures.MyCLRProc

The EXTERNAL NAME clause is new to SQL Server 2005. Here the EXTERNAL NAME
clause specifies that the stored procedure MyCLRProc will be created using a SQL Server
assembly. The DLL that is encapsulated in the SQL Server assembly can contain multiple classes
and methods; the EXTERNAL NAME clause uses the following syntax to identify the correct
class and method to use from the assembly:

AssemblyName.ClassName.MethodName

In the case of the preceding example, the registered assembly is named MyCLRDLL. The class
within the assembly is StoredProcedures, and the method within that class that will be executed
is MyCLRProc.

Specific examples showing how you actually go about creating a new managed code project with
Visual Studio 2005 are presented in the next section.

Creating CLR database objects

The preceding section presented an overview of the process along with some example manual
CLR database object creation steps to help you better understand the creation and deployment
process for CLR database objects. However, while it's possible to create CLR database objects
manually, that's definitely not the most productive method. The Visual Studio 2005 Professional,
Enterprise, and Team System Editions all have tools that help create CLR database objects as
well as deploy and debug them. In the next part of this chapter you'll see how to create each of
the new CLR database objects using Visual Studio 2005.

NOTE: The creation of SQL Server projects is supported in Visual Studio 2005 Professional
Edition and higher. It is not present in Visual Studio Standard Edition or the earlier releases of
Visual Studio.

6.4 Java Script


JavaScript is a script-based programming language that was developed by Netscape
Communications Corporation. JavaScript was originally called LiveScript and was renamed
JavaScript to indicate its relationship with Java. JavaScript supports the development of both
client and server components of Web-based applications. On the client side, it can be used to
write programs that are executed by a Web browser within the context of a Web page. On the
server side, it can be used to write Web server programs that can process information submitted
by a Web browser and then update the browser's display accordingly.
Even though JavaScript supports both client and server Web programming, we prefer JavaScript
for client-side programming, since most browsers support it. JavaScript is almost as easy to
learn as HTML, and JavaScript statements can be included in HTML documents by enclosing the
statements between a pair of scripting tags:

<SCRIPT LANGUAGE = "JavaScript">
JavaScript statements
</SCRIPT>
Here are a few things we can do with JavaScript:
 Validate the contents of a form and make calculations.
 Add scrolling or changing messages to the Browser’s status line.
 Animate images or rotate images that change when we move the mouse over them.

JavaScript is an easy-to-use programming language that can be embedded in the header of your
web pages. It can enhance the dynamics and interactive features of your page by allowing you to
perform calculations, check forms, write interactive games, add special effects, customize graphics
selections, create security passwords and more.
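As a sketch of the form-validation and calculation uses mentioned above, the following client-side functions validate a leave entry and compute a duration. The function and field names (validateLeaveForm, empId, and so on) are illustrative assumptions, not fields taken from the actual system:

```javascript
// Illustrative client-side validation, as might be wired to a form's
// onsubmit handler; the field names here are hypothetical.
function validateLeaveForm(form) {
  const errors = [];
  if (!form.empId || !/^\d+$/.test(form.empId)) {
    errors.push("Employee ID must be numeric.");
  }
  const from = new Date(form.from);
  const to = new Date(form.to);
  if (!(from <= to)) {
    errors.push("'From' date must not be after 'To' date.");
  }
  return errors;
}

// A simple calculation: number of leave days, inclusive of both ends.
function leaveDays(from, to) {
  const ms = new Date(to) - new Date(from);
  return ms / (24 * 60 * 60 * 1000) + 1;
}

console.log(validateLeaveForm({ empId: "123", from: "2010-01-01", to: "2010-01-05" })); // []
console.log(leaveDays("2010-01-01", "2010-01-05")); // 5
```

Running validation in the browser gives the user immediate feedback before the form is ever submitted to the server.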

Benefits of JavaScript


Following are the benefits of JavaScript:

 Associative arrays
 Loosely typed variables
 Regular expressions
 Objects and classes
 Highly evolved date, math, and string libraries
 W3C DOM support
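Several of the features listed above can be seen in one short sketch. Note that the class syntax is modern JavaScript, later than the era this report describes; the names used (Employee, seniority) are illustrative:

```javascript
// Associative arrays: plain objects keyed by string.
const seniority = {};
seniority["E101"] = 3;
seniority["E102"] = 7;

// Loosely typed variables: the same variable may hold different types.
let value = 42;
value = "forty-two";

// Regular expressions: a simple employee-ID pattern (illustrative).
const idPattern = /^E\d{3}$/;
console.log(idPattern.test("E101")); // true

// Objects and classes, plus the built-in Date/Math/String libraries.
class Employee {
  constructor(id, name) { this.id = id; this.name = name; }
  initials() { return this.name.split(" ").map(w => w[0]).join(""); }
}
const e = new Employee("E101", "Alok Roy");
console.log(e.initials());                                   // "AR"
console.log(Math.max(seniority["E101"], seniority["E102"])); // 7
```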

Disadvantages of JavaScript

 The developer depends on browser support for JavaScript.
 There is no way to hide the JavaScript code in the case of a commercial application.

6.5 UML Diagrams


UML stands for Unified Modeling Language
"The Unified Modeling Language (UML) is a graphical language for visualizing,
specifying, constructing, and documenting the artifacts of a software-intensive system.
The UML offers a standard way to write a system's blueprints, including conceptual
things such as business processes and system functions as well as concrete things such
as programming language statements, database schemas, and reusable software
components."
UML is unique in that it has a standard data representation, called the meta-model. The meta-model
is a description of UML in UML. It describes the objects, attributes, and relationships
necessary to represent the concepts of UML within a software application.
The UML notation is rich and full-bodied. It comprises two major subdivisions: a notation for
modeling the static elements of a design, such as classes, attributes, and relationships, and a
notation for modeling the dynamic elements of a design, such as objects, messages, and finite
state machines. The Unified Modeling Language allows the software engineer to express an
analysis model using a modeling notation governed by a set of syntactic, semantic, and
pragmatic rules.
A UML system is represented using five different views that describe the system from distinctly
different perspectives.
User Model View
This view represents the system from the user's perspective. The analysis representation
describes a usage scenario from the end-user's perspective.
Structural model view
In this view the data and functionality are viewed from inside the system. This model view
models the static structures.
Behavioral Model View
This view represents the dynamic or behavioral aspects of the system, depicting the interactions
or collaborations between the various structural elements described in the user model and
structural model views.
Implementation Model View
In this view the structural and behavioral aspects of the system are represented as they are to be built.
Environmental Model View
In this view the structural and behavioral aspects of the environment in which the system is to be
implemented are represented.
UML is specifically constructed through two different domains:
 UML analysis modeling, which focuses on the user model and structural model views of the
system.
 UML design modeling, which focuses on the behavioral model, implementation model, and
environmental model views.
Relationships in UML
Generalization relationship

In UML modeling, a generalization relationship is a relationship in which one model element
(the child) is based on another model element (the parent). Generalization relationships are used
in class, component, deployment, and use case diagrams.

To comply with UML semantics, the model elements in a generalization relationship must be
the same type. For example, a generalization relationship can be used between actors or between
use cases; however, it cannot be used between an actor and a use case.

The parent model element can have one or more children, and any child model element can have
one or more parents, although it is more common for a single parent to have multiple children.

Generalization relationships do not have names; a generalization relationship simply indicates
that a specialized (child) model element is based on a general (parent) model element.

Association relationship
An association relationship is a structural relationship between two model elements that shows
that objects of one classifier (actor, use case, class, interface, node, or component) connect and
can navigate to objects of another classifier. Even in bidirectional relationships, an association
connects two classifiers, the primary (supplier) and secondary (client).
In UML models, an association is a relationship between two classifiers, such as classes or use
cases, that describes the reasons for the relationship and the rules that govern it. An association
appears as a solid line between the two classifiers.

Aggregation relationship
In UML models, an aggregation relationship shows a classifier as a part of or subordinate to
another classifier. An aggregation is a special type of association in which objects are assembled
or configured together to create a more complex object. Aggregation protects the integrity of an
assembly of objects by defining a single point of control, called the aggregate, in the object that
represents the assembly. Aggregation also uses the control object to decide how the assembled
objects respond to changes or instructions that might affect the collection.

An aggregation association appears as a solid line with an unfilled diamond at the association
end, which is connected to the classifier that represents the aggregate. Aggregation relationships
do not have to be unidirectional.

Composition Relationship

A composition relationship represents a whole–part relationship and is a type of aggregation. A
composition relationship specifies that the lifetime of the part classifier is dependent on the
lifetime of the whole classifier. For example, each instance of type Circle contains an instance of
type Point; this whole–part relationship is known as composition.

The black diamond represents composition. It is placed on the Circle class because it is the
Circle that is composed of a Point. The arrowhead on the other end of the relationship denotes
that the relationship is navigable in only one direction: Point does not know about Circle.
In UML, relationships are presumed to be bidirectional unless an arrowhead is present to restrict
them.
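The Circle/Point composition described above can be sketched in code. JavaScript is used here only for illustration; the point is that the Point is created and owned by the Circle, so its lifetime is tied to the Circle's:

```javascript
// Composition: a Circle owns its Point. The Point's lifetime is tied
// to the Circle's, and Point knows nothing about Circle.
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}

class Circle {
  constructor(x, y, radius) {
    this.center = new Point(x, y); // created and owned by the Circle
    this.radius = radius;
  }
  area() { return Math.PI * this.radius ** 2; }
}

const c = new Circle(0, 0, 2);
console.log(c.center.x, c.center.y); // 0 0
console.log(c.area().toFixed(2));    // "12.57"
```

Contrast this with aggregation, where the part would be created outside and merely passed in, so it could outlive the whole.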

Inheritance Relationship

The inheritance relationship in UML is depicted by a peculiar triangular arrowhead. This
arrowhead, which looks rather like a slice of pizza, points to the base class. One or more lines
proceed from the base of the arrowhead, connecting it to the derived classes.

Dependency relationships

In UML modeling, a dependency relationship is a relationship in which changes to one model
element (the supplier) impact another model element (the client). Dependency relationships can
also be used to represent precedence, where one model element must precede another.
Dependency relationships usually do not have names.

A dependency is displayed as a dashed line with an open arrow that points from the client model
element to the supplier model element.
6.7 USECASE DIAGRAM

A use case is a set of scenarios describing an interaction between a user and a system. A use
case diagram displays the relationships among actors and use cases. The two main components of
a use case diagram are use cases and actors.

An actor represents a user or another system that will interact with the system you
are modeling. A use case is an external view of the system that represents some action the user
might perform in order to complete a task.

6.8 CLASS DIAGRAM

Class diagrams are widely used to describe the types of objects in a system and their
relationships. Class diagrams model class structure and contents using design elements such as
classes, packages, and objects. Class diagrams describe three different perspectives when
designing a system: conceptual, specification, and implementation. These perspectives become
evident as the diagram is created and help solidify the design.

Classes are composed of three things: a name, attributes, and operations. Below is an example
of a class.
6.9 INTERACTION DIAGRAMS
Interaction diagrams model the behavior of use cases by describing the way groups of
objects interact to complete a task. The two kinds of interaction diagrams are
sequence and collaboration diagrams.
Sequence diagrams demonstrate the behavior of objects in a use case by describing the objects
and the messages they pass. The diagrams are read left to right and descending. The example
below shows an object of class 1 starting the behavior by sending a message to an object of class 2.
Messages pass between the different objects until the object of class 1 receives the final message.
6.10 Collaboration diagrams

Collaboration diagrams are also relatively easy to draw.  They show the relationship between
objects and the order of messages passed between them.  The objects are listed as icons and
arrows indicate the messages being passed between them. The numbers next to the messages are
called sequence numbers.  As the name suggests, they show the sequence of the messages as they
are passed between the objects.  There are many acceptable sequence numbering schemes in
UML.  A simple 1, 2, 3... format can be used, as the example below shows.

Information Gathering

We have taken a careful approach to gathering information, with due sensitivity and precaution.

Information about project:

During the analysis, we collected information from "Mr. Alok Roy", Scientist 'D', NIC, and
from staff members of the DGLW, Labour Ministry, New Delhi.
Information Sources:

We have collected the information about the current system from:

1. Reports

2. Personal staff

3. System Documentation

4. Trainees

5. Existing System

DFD

[Diagram: DFD of Login form — the login form passes the username and password to a "Verifying user" process backed by the user table; the user level drives an "Allocating particular menu" process that reads menu order, menu group, menu title, menu link, and item order from the menu table.]

[Diagram: DFD of Leave form — processes for finding a particular entry, updating an entry, listing leave details, and adding a new entry; data flows carry Emp_name, Emp_ID, Type, From, To, Reason, Remarks, and Date between the User and the Employee data store.]

[Diagram: DFD of Transfer — processes for selecting a region, finding a particular entry, updating an entry, and adding a new entry; data flows carry Employee_ID, Region, prev_Region, and prev_statn between the User and the Employee and Posting data stores.]

[Diagram: DFD of form Employee — processes for finding a particular entry, updating an entry, listing designations, and adding a new entry; data flows carry Region and Designation between the User and the GnrlCode and Employee data stores.]

[Diagram: DFD of VacancyPosition — processes for finding particular entries, updating an entry, listing stations, and adding a new entry; data flows carry Region, Station, and Designation between the User and the VacancyPosition data store.]

[Diagram: DFD of EmployeeCR — processes for finding, updating, and adding EmployeeCR entries and listing employees; data flows carry Region, Station, Emp_name, Emp_ID, CR year, Received, and Remarks between the User and the EmployeeCR and Employee data stores.]
7. Implementation
8. Testing

Software Testing is the process used to help identify the correctness, completeness, security, and
quality of developed computer software. Testing is a process of technical investigation, performed
on behalf of stakeholders, that is intended to reveal quality-related information about the product
with respect to the context in which it is intended to operate. This includes, but is not limited to, the
process of executing a program or application with the intent of finding errors. Quality is not an
absolute; it is value to some person. With that in mind, testing can never completely establish the
correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares
the state and behavior of the product against a specification. An important point is that software
testing should be distinguished from the separate discipline of Software Quality Assurance (SQA),
which encompasses all business process areas, not just testing.
There are many approaches to software testing, but effective testing of complex products is
essentially a process of investigation, not merely a matter of creating and following routine
procedure. One definition of testing is "the process of questioning a product in order to evaluate it",
where the "questions" are operations the tester attempts to execute with the product, and the product
answers with its behavior in reaction to the probing of the tester. Although most of the
intellectual processes of testing are nearly identical to those of review or inspection, the word
testing connotes the dynamic analysis of the product by putting the product through its
paces. Some of the common quality attributes include capability, reliability, efficiency,
portability, maintainability, compatibility, and usability. A good test is sometimes described as one
which reveals an error; however, more recent thinking suggests that a good test is one which reveals
information of interest to someone who matters within the project community.
Introduction

In general, software engineers distinguish software faults from software failures. In case of a
failure, the software does not do what the user expects. A fault is a programming error that may or
may not actually manifest as a failure. A fault can also be described as an error in the correctness of
the semantic of a computer program. A fault will become a failure if the exact computation
conditions are met, one of them being that the faulty portion of computer software executes on the
CPU. A fault can also turn into a failure when the software is ported to a different hardware platform
or a different compiler, or when the software gets extended. Software testing is the technical
investigation of the product under test to provide stakeholders with quality-related information.
Software testing may be viewed as a sub-field of Software Quality Assurance but typically exists
independently. In SQA, software process specialists and auditors take a broader view of software and
its development. They examine and change the software engineering process itself to reduce the
number of faults that end up in the code, or to deliver faster.
Regardless of the methods used or level of formality involved the desired result of testing is a
level of confidence in the software so that the organization is confident that the software has an
acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the
software. An arcade video game designed to simulate flying an airplane would presumably have a
much higher tolerance for defects than software used to control an actual airliner.
A problem with software testing is that the number of defects in a software product can be very
large, and the number of configurations of the product larger still. Bugs that occur infrequently are
difficult to find in testing. A rule of thumb is that a system that is expected to function without faults
for a certain length of time must have already been tested for at least that length of time. This has
severe consequences for projects to write long-lived reliable software.
A common practice of software testing is that it is performed by an independent group of testers
after the functionality is developed but before it is shipped to the customer. This practice often
results in the testing phase being used as a project buffer to compensate for project delays. Another
practice is to start software testing at the same moment the project starts and to continue it as an
ongoing process until the project finishes.
Another common practice is for test suites to be developed during technical-support escalation
procedures. Such tests are then maintained in regression-testing suites to ensure that future updates to
the software do not repeat any of the known mistakes. It is commonly believed that the earlier a
defect is found, the cheaper it is to fix.

In counterpoint, some emerging software disciplines, such as extreme programming and the agile
software development movement, adhere to a "test-driven development" model. In this
process unit tests are written first, by the programmers (often with pair programming in the extreme
programming methodology). These tests fail initially, as they are expected to. Then, as code
is written, it passes incrementally larger portions of the test suites. The test suites are continuously
updated as new failure conditions and corner cases are discovered, and they are integrated with any
regression tests that are developed.
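The test-first loop described above can be sketched with a hypothetical leave-balance helper, written in the client-side JavaScript this project already uses (the function name and rules are illustrative, not taken from the DGLW code base):

```javascript
// Test-driven sketch: the test is written BEFORE the function it exercises.
function testRemainingLeave() {
  // Expected behaviour, stated first as assertions:
  console.assert(remainingLeave(30, 12) === 18, "12 of 30 days used leaves 18");
  console.assert(remainingLeave(30, 30) === 0, "fully used leave leaves 0");
  console.assert(remainingLeave(30, 35) === 0, "overdrawn leave is clamped to 0");
}

// Implementation written afterwards, and refined until the assertions pass.
function remainingLeave(entitled, taken) {
  return Math.max(entitled - taken, 0);
}

testRemainingLeave();
```

As new corner cases are discovered (here, the overdrawn-leave case), they are added to the test function first, so the suite grows alongside the code.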

Unit tests are maintained along with the rest of the software source code and generally integrated
into the build process (with inherently interactive tests being relegated to a partially manual build
acceptance process). The software, tools, samples of data input and output, and configurations are all
referred to collectively as a test harness.

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in his
1979 book "The Art of Software Testing". Although his attention was on breakage testing, it
illustrated the desire of the software engineering community to separate fundamental development
activities, such as debugging, from that of verification. In 1988, Drs. Dave Gelperin and William C. Hetzel
classified the phases and goals of software testing as follows: until 1956 was the
debugging-oriented period, in which testing was often equated with debugging and there was no clear
difference between the two. From 1957 to 1978 came the demonstration-oriented
period, in which debugging and testing were distinguished and testing aimed to show that the software
satisfies its requirements. The period 1979-1982 is described as the destruction-oriented
period, in which the goal was to find errors. 1983-1987 is classified as the evaluation-oriented period,
in which product evaluation and quality measurement were provided throughout the software
lifecycle. From 1988 on is seen as the prevention-oriented period, in which tests were meant to demonstrate that
software satisfies its specification, to detect faults, and to prevent faults. Dr. Gelperin chaired the committee for
IEEE 829-1988 (the Test Documentation Standard), while Dr. Hetzel wrote the book "The Complete
Guide to Software Testing". Both works were pivotal to today's testing culture and remain a
consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High
Impact Inspection Technology, which builds upon traditional inspections but utilizes a test-driven
additive.
Objectives of Testing:

This section introduces the concept of testing and how important it is for the successful
implementation of the project. The different phases of testing are described, along with the levels of
testing incorporated in this particular project.

Testing is vital to the success of any system. Testing is done at different stages within the development phase.
System testing makes the logical assumption that if all parts of the system are correct, the goals
will be achieved successfully. Inadequate testing, or no testing at all, leads to errors that may surface
long after delivery, when correction is extremely difficult. Another objective of testing is its utility
as a user-oriented vehicle before implementation. The testing of this system was done on both
artificial and live data.

Testing involves operation of a system or application under controlled conditions and evaluating
the results (e.g., “if the user is in interface A of the application while using hardware B and does
C, then D should not happen”). The controlled conditions should include both normal and
abnormal conditions.

Typically, the project team includes a mix of testers and developers who work closely together,
with the overall QA processes being monitored by the project managers.

Types of Testing

Black Box Testing

Also known as functional testing, this is a software testing technique in which the tester does not
know the internal workings of the item being tested. Black-box test design treats the system as a
"black box", so it does not explicitly use knowledge of the internal structure. Black-box test
design is usually described as focusing on testing functional requirements. Synonyms for black-
box include: behavioral, functional, opaque-box and closed-box.
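As an illustration of this style, the following sketch tests a hypothetical seniority-ranking function purely against its stated behaviour ("rank is by joining date, earliest first"), with no knowledge of its internals; the function name and data shape are assumptions for illustration:

```javascript
// Black-box sketch: the tester sees only inputs and expected outputs.
// Stated behaviour: seniority rank is by joining date, earliest first.
function rankBySeniority(employees) {
  // ISO dates (YYYY-MM-DD) sort chronologically as plain strings.
  return [...employees].sort((a, b) =>
    a.joiningDate.localeCompare(b.joiningDate));
}

const input = [
  { name: "B", joiningDate: "2001-07-01" },
  { name: "A", joiningDate: "1998-03-15" },
];

// The test checks only the observable result, never the sort algorithm used:
console.assert(rankBySeniority(input)[0].name === "A", "earliest joiner ranks first");
```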
White Box Testing

White box test design allows one to peek inside the “box”, and it focuses specifically on using
internal knowledge of the software to guide the selection of test data. Synonyms for white-box
include: structural, glass-box and clear-box.

Condition Testing

An improvement over plain white-box testing, the process of condition testing ensures that a
controlling expression has been adequately exercised whilst the software is under test, by
constructing a constraint set for every expression and then ensuring that every member of the
constraint set is included in the values which are presented to the expression.
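A minimal sketch of this idea, assuming a hypothetical eligibility check whose controlling expression combines two atomic conditions:

```javascript
// Condition-testing sketch for the controlling expression
//   eligible = (age >= 18) && hasContract
// Each atomic condition must be driven both true and false, so a typical
// constraint set for && coverage is {(T,T), (T,F), (F,T)}.
function isEligible(age, hasContract) {
  return age >= 18 && hasContract;
}

const constraintSet = [
  { age: 25, hasContract: true,  expected: true  },  // (T, T)
  { age: 25, hasContract: false, expected: false },  // (T, F)
  { age: 16, hasContract: true,  expected: false },  // (F, T)
];

// Ensure every member of the constraint set is presented to the expression:
for (const c of constraintSet) {
  console.assert(isEligible(c.age, c.hasContract) === c.expected);
}
```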

Data Life-Cycle Testing

It is based on the observation that, in software code, a variable is at some stage created,
may subsequently have its value changed or used in a controlling expression several times,
and is eventually destroyed. If only locally declared Boolean variables used in control conditions are
considered, then an examination of the source code will indicate the place in the source code
where the variable is created, the places where it is given a value or used as part of a control
expression, and the place where it is destroyed.

This approach to testing requires all possible feasible lifecycles of the variable to be covered
whilst the module is under test.
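The lifecycle of such a variable can be sketched with a hypothetical leave-approval routine (names and rules are illustrative, not from the DGLW code):

```javascript
// Two feasible lifecycles of the locally declared Boolean 'approved'.
function approveLeave(daysRequested, balance) {
  let approved = false;              // created, with an initial value
  if (daysRequested <= balance) {
    approved = true;                 // value changed (only on one lifecycle)
  }
  if (approved) {                    // used in a controlling expression
    return balance - daysRequested;
  }
  return balance;
}                                    // 'approved' destroyed on return

// Data life-cycle testing requires covering both feasible lifecycles:
console.assert(approveLeave(2, 10) === 8);   // create -> change -> use
console.assert(approveLeave(12, 10) === 10); // create -> use (unchanged)
```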

Unit Testing

The purpose of this phase is to test the individual units of the developing software component.
This phase is recursive and is repeated as many times as there are levels of testing. In the
DGLW project, each individual form has been tested, chiefly via client-side testing using
JavaScript.

Each individual form has been validated so that the user can enter only valid data at every point.
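The kind of client-side check described above might look like the following sketch; the field names and rules are assumptions, not taken from the actual DGLW forms:

```javascript
// Hypothetical client-side validation for an employee-entry form.
// Returns an array of error messages; an empty array means the form may be submitted.
function validateEmployeeForm(form) {
  const errors = [];
  if (!form.name || form.name.trim() === "") {
    errors.push("Name is required");
  }
  if (!/^\d{4}-\d{2}-\d{2}$/.test(form.joiningDate || "")) {
    errors.push("Joining date must be in YYYY-MM-DD format");
  }
  return errors;
}

// Valid input produces no errors; invalid input is reported field by field.
console.assert(validateEmployeeForm({ name: "Anil", joiningDate: "2004-01-05" }).length === 0);
console.assert(validateEmployeeForm({ name: "", joiningDate: "bad" }).length === 2);
```

In the browser, such a function would be wired to the form's submit handler so that invalid data never reaches the server.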
Functional Testing:

This is done for each module / sub-module of the system. Functional testing serves as a means of
validating whether the functionality of the system conforms to the original user requirements, i.e. does
the module do what it was supposed to do? Separate schedules were made for functional testing.
It involves preparation of the test data, writing of test cases, testing for conformance to the test cases,
and preparation of a bug listing for non-conformities.

System Testing:

System testing is done when the entire system has been fully integrated. The purpose of the
system testing is to test how the different modules interact with each other and whether the entire
system provides the functionality that was expected.

System testing consists of the following steps:

a) Program Testing
b) String Testing
c) System Testing
d) System Documentation
e) User Acceptance Testing

Various Levels Of Testing


Before implementation the system is tested at two levels:

Level 1

Level 2

Level 1 Testing (Alpha Testing)

At this level a test data is prepared for testing. Project leaders test the system on this test data
keeping the following points into consideration:

● Proper error handling

● Exit points in code

● Exception handling

● Input / Output format

● Glass box testing

● Black box testing

If the system is through with testing phase at LEVEL 1 then it is passed on to LEVEL 2.

Level 2 Testing (Beta Testing)

Here the testing is done on the live database. If errors are detected, the system is sent back to LEVEL
1 for modification; otherwise it moves into production.

This is the level at which the system actually becomes live and is implemented for the use of END
USERS.

We have also checked the proposed system for :

Recovery & Security

A forced system failure is induced to test a backup recovery procedure for file integrity.
Inaccurate data are entered to see how the system responds in terms of error detection and
protection. Related to file integrity is a test to demonstrate that data and programs are secure
from unauthorized access.

Usability, Documentation & Procedures:

The usability test verifies the user-friendly nature of the system. This relates to normal operating
and error-handling procedures.

Quality Assurance

Proper documentation is a must for the maintenance of any software. Apart from in-line
documentation while coding, help files corresponding to each program were prepared so as to
tackle the person-dependency of the existing system.

System Implementation

During the implementation stage the system is physically created. The necessary programs are
coded, debugged and documented, and any new hardware is selected, ordered and installed.

System Specification

Every computer system consists of three major elements.

1. The Hardware

2. Application software, such as Visual Studio

3. The operating system

For successful operation of the package, the following must be kept in mind:

Too many packages should not be used, as very few systems may have all those packages
installed, due to memory constraints. The compatibility of the developed system would thereby be
reduced.
Installation

The application installation scripts have to be generated from the current server, where the
application source code is saved, and installed on the main server from which the application is to
be run. This was done using a special script, which generates all the SQL statements to insert
preliminary data (like menu entries, codes in code directories, etc.) on the server; the operational
modules of the application were then made available to the end users successfully.
9.Screens
10.Conclusion

Today we are at the crossroads of innovation. The right direction to take will only evolve with
time, but the effort has to be taken seriously by everyone involved in education: the
school/university administration, faculty, students and parents.

By designing the "Directorate General of Labour Welfare" application with ASP server-side technology,
we are able to provide the basic functionality related to the submission activities with great ease.
The use of ASP technology has made it easier to design and develop the n-tiered architecture of
this application. We used the Microsoft software development platform for the
development of this project, which gave a complete, tight and integrated approach to the
process of its design and development.

Hence we may conclude that the application system being developed helps a great deal in
computerizing the working of DGLW.

Future Scope of Improvement

The “Employee Management System for DGLW” is a big and ambitious project. I am thankful
for being provided this great opportunity to work on it. As already mentioned, this project has
gone through extensive research work. On the basis of the research work, we have successfully
designed and implemented Employee Management System. This system is based upon 3-tier
client server architecture. The tools used for development were as follows.

Front-end-----ASP, JavaScript, VBScript, HTML, DHTML

Back-end----MS SQL Server 2008

Query Language----PL/SQL
11. Bibliography

1. NIC standard templates

http://intranic.nic.in

2. Ministry of labour website

http://labour.nic.in

3. "Software Engineering: A Practitioner's Approach"

Roger S. Pressman

4. "Active Server Pages 3.0 in 21 Days" (ASP in 21 days)

Scott Mitchell and James
