CONTENTS
1) Abstract
2) Introduction
i) Project Overview
ii) Software & Hardware Specification
3) System Analysis
i) Proposed System
ii) Requirement Analysis & Specification
iii) Feasibility Study
4) System Design
i) Data Flow Diagrams
ii) Data Dictionary
iii) ER Diagrams
iv) Software & Hardware Requirements
5) Forms
6) Reports
7) System Testing
i) Unit Testing
ii) Integration Testing
iii) Performance Testing
8) Software Tools
9) Technical Notes
10) Conclusion
11) Bibliography
ABSTRACT
eZee Rewards and Loyalty is a software product for the hospitality industry.
Through this software the hotel administration can manage their loyalty programs in an effective
way. Hotel loyalty programs can be used to entice guests into becoming regular guests at
the hotel. These programs are especially beneficial to hotel chains, where the benefits of the
program can span the entire chain. Personalized service and rewards deliver
exceptional experiences to guests, and make them come back for more.
PROJECT OVERVIEW:
This section involves the scope of the project and the scope of its users. The scope of
the project defines only those functionalities which are provided by the eZee Rewards and
Loyalty Software. On the other hand the scope of the users involved in the Rewards and Loyalty
Program is defined as the roles of each user in the system and their accessibilities to the different
elements and prospects within the system.
The eZee Rewards and Loyalty System will include three main users who take part
in the working of the system as a complete functionality. Considering the complete functionality
and interactions within the eZee Rewards and Loyalty System we will define the roles of each
user along with their access permissions towards the various elements of the system.
The various users participating in the system are as follows: (i) The Hotel
Administrator (ii) The Hotel Front Desk Office (iii) The Member (the customer of the hotel
who has subscribed to the Hotel's Rewards and Loyalty Program) (iv) The System (or the eZee
Rewards and Loyalty Software).
iii) Creation Of The Subscription Form For The Rewards And Loyalty Program
The hotel administrator can design the subscription form and make it available to all
the customers who are new to the hotel. Having made their first reservation, these
customers are provided with the subscription form.
The hotel administrator can also generate a list of the members of the Rewards and
Loyalty Program along with their complete details. This list may include members who
have lost their membership cards as well as new members of the Rewards and Loyalty
Program.
c) The Member
Software:
Server Side:
Microsoft Windows XP
Java 6.0
Eclipse 3.4 (IDE)
JBoss 4.2.3.GA (AS)
Client Side:
Hardware:
The scope of the project defines only those functionalities which are provided by the
eZee Rewards and Loyalty Software.
The scope of the users involved in the Rewards and Loyalty Program is defined as the
roles of each user in the system and their accessibilities to the different elements and
prospects within the system.
The scope of this system includes planning and designing. The contents are as follows:
User Module Tasks:
TECHNICAL FEASIBILITY:
Evaluating the technical feasibility is the trickiest part of a feasibility study. This is
because, at this point in time, not much detailed design of the system has been done, making it
difficult to assess issues like performance and costs (on account of the kind of technology to be
deployed). A number of issues have to be considered while doing a technical analysis.
ii) Find out whether the organization currently possesses the required technologies:
o Is the required technology available with the organization?
o If so is the capacity sufficient?
For instance –
“Will the current printer be able to handle the new reports and forms required for the new
system?”
OPERATIONAL FEASIBILITY:
Proposed projects are beneficial only if they can be turned into information
systems that will meet the organizations operating requirements. Simply stated, this test of
feasibility asks if the system will work when it is developed and installed. Are there major
barriers to Implementation? Here are questions that will help test the operational feasibility of a
project:
Is there sufficient support for the project from management and from users? If the current
system is well liked and used to the extent that people will not see reasons for change,
there may be resistance.
Are the current business methods acceptable to the users? If they are not, users may
welcome a change that will bring about a more operational and useful system.
Have the users been involved in the planning and development of the project?
Early involvement reduces the chances of resistance to the system in general
and increases the likelihood of a successful project.
Since the proposed system was to help reduce the hardships encountered in the existing
manual system, the new system was considered operationally feasible.
ECONOMIC FEASIBILITY:
A simple economic analysis which gives the actual comparison of costs and
benefits is much more meaningful in this case. In addition, it proves to be a useful point of
reference to compare actual costs as the project progresses. There could be various types of
intangible benefits on account of automation. These could include increased customer
satisfaction, improvement in product quality, better decision making, timeliness of information,
expediting of activities, improved accuracy of operations, better documentation and record
keeping, faster retrieval of information, and better employee morale.
SYSTEM DESIGN
DATA FLOW DIAGRAMS
Data flows are data structures in motion, while data stores are data structures at
rest. Data flows are paths or 'pipelines' along which data structures travel, whereas data stores
are places where data structures are kept until needed. Hence it is possible that a data flow and
a data store would be made up of the same data structure.
The data flow diagram is a very handy tool for the system analyst because it gives
the analyst an overall picture of the system; it is a diagrammatic approach.
A DFD is a pictorial representation of the path which data takes from its initial
interaction with the existing system until it completes any interaction. The diagram describes
the logical data flows rather than the movement of any physical items. The DFD also gives
insight into the data that is used in the system, i.e., who actually uses it and where it is
temporarily stored.
A DFD does not show a sequence of steps. A DFD only shows what the different
processes in a system are and what data flows between them.
External Entities
Data Flows
LEVELS OF DFD:
The complexity of business systems means that it is not possible to represent
the operations of any system with a single data flow diagram. At the top level, an overview of the
different systems in an organization is shown by way of a context analysis diagram. This is then
exploded into lower-level DFDs. They are represented by:
• LEVEL-0 : SYSTEM INPUT/OUTPUT
• LEVEL-1 : SUBSYSTEM LEVEL DATA FLOW (FUNCTIONAL)
• LEVEL-2 : FILE LEVEL DETAIL DATA FLOW
The input and output data shown should be consistent from one level to the next.
• A UML system is represented using five different views that describe the
system from distinctly different perspectives. Each view is defined by a set
of diagrams, as follows.
In the structural model view, the data and functionality are viewed from inside the
system.
In the environmental model view, the structural and behavioral aspects of the
environment in which the system is to be implemented are represented.
Use case diagrams represent the functionality of the system from a user's point of
view. Use cases are used during requirements elicitation and analysis to represent the
functionality of the system. Use cases focus on the behavior of the system from an external
point of view.
Actors are external entities that interact with the system. Examples of actors
include users like an administrator or a bank customer, or another system like a central
database.
Use Case Diagram (Admin): eZee Loyalty and Rewards
Actor: Admin
Use cases: Login, Password, Configure, Search, Add/View, Reports
Use Case Diagram (User): eZee Loyalty and Rewards
Actor: User
Use cases: Registration, Login, Forgot Password, View Details, Get Rooms, Log Out
Sequence Diagram - eZee Loyalty and Rewards (Admin Login)
1 : Login()
2 : Invalid Data()
3 : Request to DB()
5 : Get Home()
User Login Sequence
Participants: User, Login, Home, Database
1 : Login()
2 : Invalid Data()
3 : Request to DB()
5 : Get Home()
User Activities
Participants: User, Password, View, Modify, Database
1 : Change()
2 : View own Details()
3 : Modify()
4 : Save()
5 : Save()
6 : Save()
SOFTWARE & HARDWARE REQUIREMENTS
Client Server
Overview:
Of the varied topics in existence in the field of computers, Client Server is one
which has generated more heat than light, and also more hype than reality. This technology has
acquired a certain critical mass of attention with its dedicated conferences and magazines. Major
computer vendors such as IBM and DEC have declared that Client Server is their main future
market. A survey of DBMS magazine revealed that 76% of its readers were actively looking at the
client server solution. The client server development tools market grew from $200 million in
1992 to more than $1.2 billion in 1996.
Client server implementations are complex but the underlying concept is simple
and powerful. A client is an application running with local resources but able to request
database and related services from a separate remote server. The software mediating this client
server interaction is often referred to as MIDDLEWARE.
The typical client is either a PC or a workstation connected through a network to a
more powerful PC, workstation, midrange or mainframe server, usually capable of handling
requests from more than one client. However, in some configurations a server may also act as a
client. A server may need to access other servers in order to process the original client request.
The key client server idea is that the client, as user, is essentially insulated from the
physical location and formats of the data needed for their application. With the proper
middleware, a client input form or report can transparently access and manipulate both local
databases on the client machine and remote databases on one or more servers. An added bonus is
that client server opens the door to multi-vendor database access, including heterogeneous table
joins.
Two prominent systems in existence are client server and file server systems, and it
is essential to distinguish between them. Both provide shared network access to data, but the
comparison ends there! The file server simply provides a remote disk drive that can be accessed
by LAN applications on a file-by-file basis. The client server offers full relational database
services such as SQL access, record modification, insert and delete with full relational integrity,
backup/restore, and performance for high volumes of transactions. The client server middleware
provides a flexible interface between client and server: who does what, when and to whom.
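The request/response shape described above can be sketched in plain Java. This is only an illustration under assumed names (the class, method and request key are inventions for this example); real client server middleware offers far richer services such as SQL access, relational integrity and backup, none of which is modeled here.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch only: a client sends a request over the network and a
// server answers it, keeping the client insulated from where the data lives.
public class MiniClientServer {

    // The "database service": maps a request key to a canned record.
    static String handle(String request) {
        return "RECORD for " + request;
    }

    public static void main(String[] args) throws Exception {
        final ServerSocket server = new ServerSocket(0);   // any free port
        Thread serverThread = new Thread(new Runnable() {
            public void run() {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(handle(in.readLine()));    // serve one request
                } catch (IOException ignored) { }
            }
        });
        serverThread.start();

        // The client: requests a record without knowing how it is stored.
        try (Socket s = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println("guest-42");
            System.out.println(in.readLine());
        }
        serverThread.join();
        server.close();
    }
}
```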
Client server has evolved to solve a problem that has been around since the earliest
days of computing: how best to distribute your computing, data generation and data storage
resources in order to obtain efficient, cost-effective departmental and enterprise-wide data
processing. During the mainframe era choices were quite limited. A central machine housed both
the CPU and the data (cards, tapes, drums and later disks). Access to these resources was initially
confined to batched runs that produced departmental reports at the appropriate intervals. A strong
central information service department ruled the corporation. The role of the rest of the
corporation was limited to requesting new or more frequent reports and to providing handwritten
forms from which the central data banks were created and updated. The earliest client server
solutions therefore could best be characterized as "SLAVE-MASTER".
Time-sharing changed the picture. Remote terminals could view and even change the
central data, subject to access permissions. And, as the central data banks evolved into
sophisticated relational databases with non-programmer query languages, online users could
formulate ad hoc queries and produce local reports without adding to the MIS applications
software backlog. However, remote access was through dumb terminals, and the client
remained the subordinate partner in the slave/master relationship.
The standards of three-tier architecture are given major attention, to keep
high cohesion and limited coupling for effectiveness of the operations.
About JAVA
Introduction
The Java programming language and environment is designed to solve a number of
problems in modern programming practice. Java started as a part of a larger project to develop
advanced software for consumer electronics. These devices are small, reliable, portable,
distributed, real-time embedded systems. When we started the project we intended to use C++,
but encountered a number of problems. Initially these were just compiler technology problems,
but as time passed more problems emerged that were best solved by changing the language.
Java
A simple, object-oriented, network-savvy, interpreted, robust, secure, architecture
neutral, portable, high-performance, multithreaded, dynamic language.
One way to characterize a system is with a set of buzzwords. We use a standard set of them in
describing Java. Here's an explanation of what we mean by those buzzwords and the problems
we were trying to solve.
Archimedes Inc. is a fictitious software company that produces software to teach
about basic physics. This software is designed to interact with the user, providing not only text
and illustrations in the manner of a traditional textbook, but also a set of software lab benches on
which experiments can be set up and their behavior simulated. The most basic experiment allows
students to put together levers and pulleys and see how they act. The italicized narrative of the
trials and tribulations of the Archimedes' designers is used here to provide examples of Java
language concepts.
Simple
We wanted to build a system that could be programmed easily without a lot of
esoteric training and which leveraged today's standard practice. Most programmers working
these days use C, and most programmers doing object-oriented programming use C++. So even
though we found that C++ was unsuitable, we designed Java as closely to C++ as possible in
order to make the system more comprehensible.
Java omits many rarely used, poorly understood, confusing features of C++ that in
our experience bring more grief than benefit. These omitted features primarily consist of operator
overloading (although the Java language does have method overloading), multiple inheritance,
and extensive automatic coercions.
The folks at Archimedes wanted to spend their time thinking about levers and
pulleys, but instead spent a lot of time on mundane programming tasks. Their central expertise
was teaching, not programming. One of the most complicated of these programming tasks was
figuring out where memory was being wasted across their 20K lines of code.
Another aspect of being simple is being small. One of the goals of Java is to enable the
construction of software that can run stand-alone in small machines. The Java interpreter and
standard libraries have a small footprint. A small size is important for use in embedded systems
and so Java can be easily downloaded over the net.
Object-Oriented
This is, unfortunately, one of the most overused buzzwords in the industry. But
object-oriented design is very powerful because it facilitates the clean definition of interfaces and
makes it possible to provide reusable "software ICs."
Simply stated, object-oriented design is a technique that focuses design on the data
(=objects) and on the interfaces to it. To make an analogy with carpentry, an "object-oriented"
carpenter would be mostly concerned with the chair he was building, and secondarily with the
tools used to make it; a "non-object-oriented" carpenter would think primarily of his tools.
Object-oriented design is also the mechanism for defining how modules "plug and play."
The object-oriented facilities of Java are essentially those of C++, with extensions
from Objective C for more dynamic method resolution.
The folks at Archimedes had lots of things in their simulation, among them ropes
and elastic bands. In their initial C version of the product, they ended up with a pretty big system
because they had to write separate software for describing ropes versus elastic bands. When they
rewrote their application in an object-oriented style, they found they could define one basic
object that represented the common aspects of ropes and elastic bands, and then ropes and elastic
bands were defined as variations (subclasses) of the basic type. When it came time to add chains,
it was a snap because they could build on what had been written before, rather than writing a
whole new object simulation.
Multithreaded
There are many things going on at the same time in the world around us.
Multithreading is a way of building applications with multiple threads of control. Unfortunately,
writing programs that deal with many things happening at once can be much more difficult
than writing in the conventional single-threaded C and C++ style.
Java has a sophisticated set of synchronization primitives that are based on the
widely used monitor and condition variable paradigm introduced by C.A.R. Hoare. By integrating
these concepts into the language (rather than only in classes) they become much easier to use and
are more robust. Much of the style of this integration came from Xerox's Cedar/Mesa system.
Other benefits of multithreading are better interactive responsiveness and real-time behavior.
This is limited, however, by the underlying platform: stand-alone Java runtime environments
have good real-time behavior. Running on top of other systems like Unix, Windows, the
Macintosh, or Windows NT limits the real-time responsiveness to that of the underlying system.
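As a minimal illustration of the monitor idea (not code from the eZee system; the class and method names are assumptions), Java's `synchronized` keyword turns a method into a critical section guarded by the object's monitor:

```java
// Minimal illustration of Java's built-in monitors: 'synchronized' ensures
// that only one thread at a time executes the guarded methods, so the two
// threads below cannot lose updates to the shared counter.
class SharedCounter {
    private int count = 0;

    synchronized void increment() { count++; }   // critical section
    synchronized int value()      { return count; }
}

public class MonitorDemo {
    public static void main(String[] args) throws InterruptedException {
        final SharedCounter counter = new SharedCounter();
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) counter.increment();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(counter.value());   // always 20000 with synchronized
    }
}
```

Without `synchronized`, the two threads could interleave their read-increment-write steps and the final count would be unpredictable.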
Lots of things were going on at once in their simulations. Ropes were being pulled,
wheels were turning, levers were rocking, and input from the user was being tracked. Because
they had to write all this in a single threaded form, all the things that happen at the same time,
even though they had nothing to do with each other, had to be manually intermixed. Using an
"event loop" made things a little cleaner, but it was still a mess. The system became fragile and
hard to understand. They were pulling in data from all over the net. But originally they were
doing it one chunk at a time. This serialized network communication was very slow. When they
converted to a multithreaded style, it was trivial to overlap all of their network communication.
• Objects
• Classes
• Inheritance
• Data Abstraction
• Data Encapsulation
• Polymorphism
• Overloading
• Reusability
In order to understand the basic concepts of object-oriented programming, the
programmer must have a command of its basic terminology. Below is a brief outline of
the concepts of object-oriented programming languages:
Objects:
Object is the basic unit of object-oriented programming. Objects are identified by its
unique name. An object represents a particular instance of a class. There can be more than one
instance of an object. Each instance of an object can hold its own relevant data.
Classes:
Classes are data types based on which objects are created. Objects with similar
properties and methods are grouped together to form a class. Thus a class represents a set of
individual objects. Characteristics of an object are represented in a class as properties. The
actions that can be performed by objects become functions of the class and are referred to as
methods.
For example, consider a class of Cars under which Santro Xing, Alto and
WagonR represent individual objects. In this context each Car object will have its own Model,
Year of Manufacture, Colour, Top Speed, Engine Power etc., which form properties of the Car
class, and the associated actions, i.e., object functions like Start, Move and Stop, form the
methods of the Car class.
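The Car example above can be sketched in Java as follows; the exact fields and method bodies are illustrative assumptions, not part of the eZee system.

```java
// A sketch of the Car class: properties hold each object's own data,
// methods are the actions a Car object can perform.
class Car {
    // Properties (characteristics of each Car object)
    private final String model;
    private final int topSpeed;      // km/h
    private boolean moving = false;

    Car(String model, int topSpeed) {
        this.model = model;
        this.topSpeed = topSpeed;
    }

    // Methods (actions a Car object can perform)
    void move() { moving = true; }
    void stop() { moving = false; }

    String getModel()  { return model; }
    int getTopSpeed()  { return topSpeed; }
    boolean isMoving() { return moving; }
}

public class CarDemo {
    public static void main(String[] args) {
        // Santro Xing and Alto are two independent objects (instances)
        // of the same Car class, each holding its own relevant data.
        Car santro = new Car("Santro Xing", 150);
        Car alto   = new Car("Alto", 140);
        santro.move();
        System.out.println(santro.getModel() + " moving: " + santro.isMoving());
        System.out.println(alto.getModel() + " moving: " + alto.isMoving());
    }
}
```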
Inheritance:
Inheritance is the process of forming a new class from an existing class or base class.
The base class is also known as the parent class or super class. The new class that is formed is
called the derived class; it is also known as a child class or sub class. Inheritance helps
in reducing the overall code size of the program, which is an important concept in
object-oriented programming.
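A small sketch of this, reusing the ropes-and-elastic-bands idea from the Java discussion earlier; the class names and the stretch model are assumptions made for this example.

```java
// Base class: the common aspects of ropes and elastic bands.
class Connector {
    protected double restLength;
    Connector(double restLength) { this.restLength = restLength; }
    double currentLength() { return restLength; }
}

// Derived class reusing the base behavior unchanged: a rope does not stretch.
class Rope extends Connector {
    Rope(double restLength) { super(restLength); }
}

// Derived class refining inherited behavior: an elastic band stretches.
class ElasticBand extends Connector {
    private double stretch = 0;
    ElasticBand(double restLength) { super(restLength); }
    void stretchBy(double amount) { stretch += amount; }
    @Override
    double currentLength() { return restLength + stretch; }
}
```

Adding a Chain later would only require one more subclass of Connector, which is exactly the code-size saving the text describes.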
Data Abstraction:
Data Encapsulation:
Data Encapsulation combines data and functions into a single unit called a class.
When using data encapsulation, data is not accessed directly; it is only accessible through the
functions present inside the class. Data encapsulation makes the important concept of data
hiding possible.
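As a minimal sketch (the Account class and its rules are assumptions for illustration), the hidden data can only be read or changed through the class's own methods:

```java
// Sketch of data encapsulation: 'balance' is private, so it can only be
// reached through the methods the class chooses to expose, and the class
// can enforce its own rules on every change.
class Account {
    private double balance = 0;          // hidden data

    void deposit(double amount) {
        if (amount > 0) balance += amount;
    }

    boolean withdraw(double amount) {
        if (amount <= 0 || amount > balance) return false;
        balance -= amount;
        return true;
    }

    double getBalance() { return balance; }
}
```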
Polymorphism:
Overloading:
Reusability:
This term refers to the ability for multiple programmers to use the same written
and debugged existing class. This is a time-saving device and adds code efficiency to the
language. Additionally, the programmer can incorporate new features into the existing class,
further developing the application and allowing users to achieve increased performance. This
time-saving feature optimizes code, helps in building secure applications and facilitates easier
maintenance of the application.
ABOUT JBOSS SEAM
Introduction
Yet another Web Application Framework! This time it is from JBoss Community.
JBoss provides a new Web Application Framework called "JBoss Seam" which combines the
advantages from the two rapidly growing technologies Enterprise Java Beans 3.0 and Java
Server Faces. JBoss Seam, by sitting on top of J2EE provides a nice way of integration between
JSF and EJB Components with other great functionalities. This article is an introductory article
only and it covers the idea that gave birth to JBoss Seam, its advantages, the various modules
involved along with a Sample Application. This article assumes the readers to have some bit of
knowledge and programming in areas like Java Server Faces and Enterprise Java Beans 3.0. For
more information about these technologies, visit http://jsf.javabeat.net/index.php and
http://www.javabeat.net/javabeat/ejb3/index.php.
Let us exactly define what JBoss seam is. JBoss Seam provides a Light-weight
Container for J2EE standards and it addresses the long-standing issues in any Typical Web
Application like State Management and Better Browser Navigation. It also neatly provides an
integration between the two popular technologies, Java Server Faces (in the UI tier) and
Enterprise Java Beans (EJB 3 on the server side). Before getting into the various details about
JBoss Seam, let us see the common set of problems that are faced in the development and
use of a typical Web Application using Java Server Faces and Enterprise Java Beans.
For this, let us assume an imaginary application. Let us keep the requirements of
the imaginary Web Application we are going to consider very small. The Web Application is a
simple Registration Application, which provides the user with a View which contains username
and password text-fields. User can click the submit button after filling both the fields. If the
username password information given by the user is not found in the database, the user is
assumed to be a new user and he is greeted with a welcome message; else an error page is
displayed with appropriate message.
Let us analyse the roles of JSF and EJB 3.0 in this Web Application. More
specifically, we will analyse the various components in both the client and the server tiers
along with their responsibilities.
On the client side, for designing and presenting the form with the input controls
(text-field and button) to the user, we may have written a userinput.jsp page with the set of JSF
core tag libraries like <f:view> and <h:form>. And then, a JSF Managed Bean called UserBean
encapsulating the properties (username and password) may have been coded, which serves as a
Model. The UI component values within the JSP page would have got bound to the properties of
the Managed Bean with the help of the JSF Expression Language. Since the logic is to query
the database for the existence of the username and the password, a Stateless Session Facade Bean
would have been written whose sole purpose is to persist the client information to the
database. For persisting the information, the Session Bean may depend on the EntityManager
API for querying and persisting entities.
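The division of responsibility just described can be sketched with the framework plumbing replaced by plain Java. Here RegistrationService stands in for the Stateless Session Facade Bean (with an in-memory set replacing the EntityManager) and UserBean for the JSF Managed Bean; every name below is an assumption for the sketch, not eZee or Seam code.

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for the Stateless Session Facade Bean: checks for the user and
// "persists" new ones (an in-memory set replaces the database here).
class RegistrationService {
    private final Set<String> knownUsers = new HashSet<String>();

    boolean register(String username, String password) {
        if (knownUsers.contains(username)) return false; // already registered
        knownUsers.add(username);                        // persist new user
        return true;
    }
}

// Stand-in for the JSF Managed Bean: holds the form fields and acts as the
// listener for the submit button, returning a navigation outcome string.
class UserBean {
    String username;   // bound to the username text-field
    String password;   // bound to the password text-field
    private final RegistrationService service;

    UserBean(RegistrationService service) { this.service = service; }

    String submit() {  // action method invoked on the button click
        return service.register(username, password) ? "welcome" : "error";
    }
}
```

Note how the Managed Bean does nothing but shuttle data to the service; this is exactly the intermediary role that Seam sets out to eliminate.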
The data traversal logic from JSF to the EJB Session Bean would have to be taken
care of by the Managed Bean only. The Managed Bean, apart from representing a Model, may
also act as a Listener in handling the button click events. Say, after clicking the register button,
one of the action methods within the Managed Bean would have been called by the framework,
and here the Bean might do a JNDI look-up to get an instance of the Session Bean for persisting
or querying the user information. If we look carefully, the JSF Managed Bean is serving as an
intermediary in the transfer of data from the client to the server. Within this Managed
Bean is the code for getting a reference to the Session Bean and for doing various other
functionalities. Wouldn't a direct communication between JSF UI Components and the
Enterprise Bean Components be nice? There is no purpose for the intermediate Managed Bean in
this case. JBoss Seam provides a very good solution for this. Not only this: many of the
outstanding problems that are faced in a Web Application are also addressed and given solutions
in this Framework.
Following are the major advantages that a Web Application may enjoy if it uses
JBoss Seam Framework. They are
• Integration of JSF and EJB
• Stateful Web Applications
• Dependency Bijection Support
Let us look into the various advantages of JBoss Seam in the subsequent sections.
Today the Web Application world sees more and more mature technologies that
are focused on establishing easy-to-use development by reducing lots and lots of boiler-plate
code, along with some other added functionalities in their own domains. Let us consider the JSF
and EJB technologies to extend further discussion regarding this.
EJB 3.0, which is a specification given by Sun, has gained much popularity
because of its simplified yet robust programming model. Much of the middleware-related
services like security, transactions, connection pooling etc. are delegated to the container itself.
Compared to its predecessors, EJB 3.0 offers a POJO programming model: there is no need for
your beans to extend or implement EJB-specific classes or interfaces. Also, along with the new
Java Persistence API (JPA) specification
(http://www.javabeat.net/javabeat/ejb3/articles/2007/04/introduction_to_java_persistence_api_jpa_ejb_3_0_1.php),
a unified programming model to access the underlying database along with a
rich set of features is now possible, thereby completely eliminating the heavyweight entity
beans.
Before getting into Dependency Bijection, it is wise to look at its two halves, namely
Dependency Injection and Dependency Outjection. These two are popular patterns, and
modern frameworks and containers make use of them abundantly. Let us see these two
techniques.
This model is used when a component or a service which is running inside some
framework or container is well known in the early stages, so that the framework/container
can create instances of it, thereby taking the burden from the clients. This type of model is
used heavily in most of the J2EE components, to name a few: EJB, Servlets, JMS etc.
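As a framework-free illustration of the injection half of this pattern, the dependency is created outside the component and handed to it, rather than looked up by it (Seam does this declaratively with annotations). All names below are assumptions made for the example:

```java
// The dependency the component needs.
interface MailService {
    String send(String to);
}

class SmtpMailService implements MailService {
    public String send(String to) { return "sent to " + to; }
}

// The component never looks the service up itself; whoever constructs the
// component (the framework/container, in Seam's case) injects it.
class WelcomeComponent {
    private final MailService mail;

    WelcomeComponent(MailService mail) { this.mail = mail; }

    String welcome(String user) { return mail.send(user); }
}
```

Because the component depends only on the interface, the container can swap in a different MailService (for testing, say) without touching the component's code.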
Since the underlying protocol used in a Web Application is HTTP, all Web
Applications are stateless in nature. Precisely, it means that all the requests that are coming from
the client browser are treated as individual requests only. The server shows no partiality towards
client requests. It is up to the application or the framework to identify whether requests are
coming from the same client or not. Session management in a Web Application is a time-
consuming job, and typically Servlets/JSP provide various ways to manage sessions. Even in this
case, application developers still have to depend on classes like HttpSession for creating session
objects and storing/retrieving objects from the session.
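The bookkeeping described above can be sketched as a map of per-client state keyed by a session id, which is essentially what HttpSession maintains under the covers; Seam's contexts take this burden off the application. The class and method names here are assumptions for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of session management: each client gets its own bag of
// state, keyed by a session id carried with every request.
class SessionStore {
    private final Map<String, Map<String, Object>> sessions =
            new HashMap<String, Map<String, Object>>();

    Map<String, Object> sessionFor(String sessionId) {
        Map<String, Object> state = sessions.get(sessionId);
        if (state == null) {
            state = new HashMap<String, Object>();
            sessions.put(sessionId, state);   // first request from this client
        }
        return state;
    }
}
```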
JBoss Seam provides an excellent way to manage state across multiple client
requests. The state management facility is tightly integrated with Seam components in the
form of various contexts. Following are the most commonly used contexts in Seam.
Introduction
RichFaces is an open source framework that adds Ajax capability into existing JSF
applications without resorting to JavaScript.
• Intensify the whole set of JSF benefits while working with Ajax. RichFaces is fully
integrated into the JSF lifecycle. While other frameworks only give you access to the managed
bean facility, RichFaces takes advantage of the action and value change listeners, as well as
invoking server-side validators and converters during the Ajax request-response cycle.
• Add Ajax capability to existing JSF applications. The framework provides two
component libraries (Core Ajax and UI). The Core library adds Ajax functionality into existing
pages, so there is no need to write any JavaScript code or to replace existing components with
new Ajax ones. RichFaces enables page-wide Ajax support instead of the traditional component-
wide support, giving you the opportunity to define events on the page. An event invokes an
Ajax request, and areas of the page become synchronized with the JSF Component Tree after
the data on the server is changed by the Ajax request, in accordance with the events fired on the
client.
• Quickly create complex views based on out-of-the-box components. The RichFaces UI
library contains components for adding rich user interface features to JSF applications. It extends
the RichFaces framework to include a large (and growing) set of powerful, rich, Ajax-enabled
components that come with extensive skins support. In addition, RichFaces components are
designed to be used seamlessly with other third-party component libraries on the same page, so
you have more options for developing your applications.
• Write your own custom rich components with built-in Ajax support. We're always
working on improving the Component Development Kit (CDK) that was used to create the
RichFaces UI library. The CDK includes a code-generation facility and a templating facility
using a JSP-like syntax. These capabilities help to avoid the routine parts of component creation.
The component factory works like a well-oiled machine, allowing the creation of first-class rich
components with built-in Ajax functionality even more easily than the creation of simpler
components by means of the traditional coding approach.
• Package resources with application Java classes. In addition to its core Ajax functionality,
RichFaces provides advanced support for managing different kinds of resources: pictures,
JavaScript code, and CSS stylesheets. The resource framework makes it possible to package
these resources easily into JAR files along with the code of your custom components.
• Easily generate binary resources on the fly. The resource framework can generate images,
sounds, Excel spreadsheets, and so on at run time, so it becomes possible, for example, to create
images using the familiar Java Graphics2D API.
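The snippet below is not RichFaces-specific; it simply shows, with the standard Java Graphics2D API, the kind of in-memory image generation that such a resource framework can serve. The badge text and dimensions are invented for the example.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class OnTheFlyImage {
    // Render a small badge image in memory and return it as PNG bytes.
    public static byte[] renderBadge(String text) throws IOException {
        BufferedImage img = new BufferedImage(120, 40, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 120, 40);       // background
        g.setColor(Color.BLUE);
        g.drawString(text, 10, 25);      // the dynamic part of the image
        g.dispose();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "png", out);  // encode to PNG entirely in memory
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] png = renderBadge("Gold Member");
        System.out.println("Generated PNG of " + png.length + " bytes");
    }
}
```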
• Create a modern rich user-interface look and feel with skin-based technology.
RichFaces provides a skinnability feature that lets you easily define and manage different color
schemes and other UI parameters with the help of named skin parameters. The skin parameters
can be accessed from both JSP code and Java code (for example, to adjust images generated
on the fly to match the text parts of the UI). RichFaces comes with a number of
predefined skins to get you started, but you can also easily create your own custom skins.
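A skin is typically selected with a context parameter in web.xml; blueSky is one of the predefined RichFaces skins:

```xml
<!-- web.xml fragment: select one of the predefined RichFaces skins -->
<context-param>
    <param-name>org.richfaces.SKIN</param-name>
    <param-value>blueSky</param-value>
</context-param>
```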
• Test and create the components, actions, listeners, and pages at the same time. An
automated testing facility is on the RichFaces roadmap for the near future. This facility will
generate test cases for a component as soon as it is developed. The testing framework will test
not just the components, but also any other server-side or client-side functionality, including
JavaScript code. What is more, it will do all of this without deploying the test application into
a Servlet container.
RichFaces UI components come ready to use out of the box, so developers save time and
immediately gain the features mentioned above when building Web applications. As a result, a
faster and richer user experience is easily obtained.
Hibernate
Hibernate 3.0, the latest open source persistence technology at the heart of J2EE
EJB 3.0, is available for download from hibernate.org. The Hibernate 3.0 core is 68,549 lines of
Java code together with 27,948 lines of unit tests, all freely available under the LGPL, and has
been in development for well over a year. Hibernate maps Java classes to database tables.
It also provides data query and retrieval facilities that significantly reduce development
time. Hibernate is not the best solution for data-centric applications that use only stored
procedures to implement the business logic in the database; it is most useful with object-oriented
domain models and business logic in a Java-based middle tier. Hibernate allows transparent
persistence, which enables applications to switch between databases. Hibernate can be used in
Java Swing applications, Java Servlet-based applications, or J2EE applications using EJB
session beans.
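As an illustration of class-to-table mapping, a Hibernate mapping file for a hypothetical Guest entity might look like this (the class, table, and column names are assumptions for the example, not part of the project code):

```xml
<!-- Guest.hbm.xml: maps the hypothetical Guest class to the GUESTS table -->
<hibernate-mapping>
    <class name="com.ezee.loyalty.Guest" table="GUESTS">
        <id name="id" column="GUEST_ID">
            <generator class="native"/> <!-- automatic primary key generation -->
        </id>
        <property name="name" column="GUEST_NAME"/>
        <property name="rewardPoints" column="REWARD_POINTS"/>
    </class>
</hibernate-mapping>
```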
Features of Hibernate
• Hibernate 3.0 provides three full-featured query facilities: Hibernate Query Language,
the newly enhanced Hibernate Criteria Query API, and enhanced support for queries
expressed in the native SQL dialect of the database.
• Enhanced Criteria query API: with full support for projection/aggregation and subselects.
• Runtime performance monitoring: via JMX or local Java API, including a second-level
cache browser.
• Eclipse support: a suite of Eclipse plug-ins for working with Hibernate 3.0,
including a mapping editor, interactive query prototyping, and a schema reverse-engineering tool.
• Hibernate is Free under LGPL: Hibernate can be used to develop/package and distribute
the applications for free.
• Hibernate is Scalable: Hibernate performs very well and, due to its dual-layer cache
architecture, can be used in clustered environments.
• Automatic Key Generation: Hibernate supports the automatic generation of primary keys
for your tables.
• JDK 1.5 Enhancements: The new JDK was released as a preview earlier this year, and a
slow migration to the new 1.5 platform is expected throughout 2004. While Hibernate 3 still
runs perfectly with JDK 1.2, Hibernate 3 will make use of some new JDK features. JSR 175
annotations, for example, are a perfect fit for Hibernate metadata and will be embraced
aggressively. Java generics will also be supported, which basically boils down to allowing type-
safe collections.
• EJB3-style persistence operations: EJB3 defines the create() and merge() operations,
which are slightly different from Hibernate's saveOrUpdate() and saveOrUpdateCopy() operations.
Hibernate 3 will support all four operations as methods of the Session interface.
• Support for the EJB3 draft specification's POJO persistence and annotations.
FORMS
REPORTS
SYSTEM TESTING
Testing
Testing is the process of detecting errors. It plays a critical role in quality
assurance and in ensuring the reliability of software. The results of testing are also used later,
during maintenance.
Psychology of Testing
Testing is often begun with the aim of demonstrating that a program works, i.e. that it
has no errors. The basic purpose of the testing phase, however, is to detect the errors that may
be present in the program. Hence one should not start testing with the intent of showing that a
program works; the intent should be to show that a program does not work. Testing is the
process of executing a program with the intent of finding errors.
Testing Objectives
The main objective of testing is to uncover a host of errors, systematically and
with minimum effort and time. Stated formally: testing is a process of executing a program
with the intent of finding an error; a good test case is one that has a high probability of finding
an as-yet-undiscovered error; a successful test is one that uncovers an as-yet-undiscovered error.
Levels of Testing
In order to uncover the errors introduced in different phases, testing is carried out at
several levels. Each level of testing checks the output of a corresponding development phase:
• Unit Testing: validates the Code
• Integration Testing: validates the Design
• System Testing: validates the Requirements
• Acceptance Testing: validates the Client Needs
System Testing
The philosophy behind testing is to find errors. Test cases are devised with this in
mind. A strategy employed for system testing is code testing.
Code Testing:
This strategy examines the logic of the program. To follow this method we
developed test data that resulted in executing every instruction in the program and every
module, i.e. every path is tested. Systems are not designed as monolithic wholes, nor are they
tested as single systems. To ensure that the coding is correct, two types of testing are performed
on all systems.
Types Of Testing
Unit Testing
Link Testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software, i.e. the
module. Using the detailed design and the process specifications, testing is done to uncover
errors within the boundary of the module. All modules must pass the unit test before
integration testing begins.
In this project each service can be thought of as a module. There are several modules,
such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been
tested by giving it different sets of inputs, both while it was being developed and after its
development was finished, so that each module works without error. Inputs are validated when
they are accepted from the user.
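For instance, the input validation of a Login-style module can be unit-tested in isolation. The validator below is a simplified, hypothetical stand-in for the project's actual code; the length limit and rules are invented for the example.

```java
public class LoginValidator {
    // Accept only non-empty, non-blank user names of a reasonable length.
    public static boolean isValidUserName(String name) {
        return name != null && !name.trim().isEmpty() && name.length() <= 20;
    }

    public static void main(String[] args) {
        // Unit test: exercise the module with different sets of inputs.
        System.out.println(isValidUserName("guest01"));  // valid input
        System.out.println(isValidUserName(""));         // empty input rejected
        System.out.println(isValidUserName(null));       // missing input rejected
    }
}
```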
Link Testing
Link testing does not test the software itself but rather the integration of the
modules in the system. The primary concern is the compatibility of the individual modules.
The programmer tests the points where modules are designed with different parameters,
lengths, types, etc.
Integration Testing
After unit testing we perform integration testing. The goal here is to see
whether the modules can be integrated properly, the emphasis being on testing the interfaces
between modules. This testing activity can be considered as testing the design, and hence the
emphasis is on testing module interactions.
In this project, integrating all the modules forms the main system. When integrating
the modules, I checked whether the integration affects the working of any of the services by
giving different combinations of inputs with which the individual services ran perfectly before
integration.
System Testing
Here the entire software system is tested. The reference document for this process
is the requirements document, and the goal is to see whether the software meets its requirements.
Here the entire eZee Rewards and Loyalty system has been tested against the
requirements of the project, and it has been checked whether all the requirements have been
satisfied.
Acceptance Testing
Acceptance testing is performed with realistic data of the client to demonstrate that
the software is working satisfactorily. Testing here is focused on the external behavior of the
system; the internal logic of the program is not emphasized.
White Box Testing
This is a unit-testing method in which one unit is taken at a time and tested
thoroughly at the statement level to find the maximum possible number of errors. I tested every
piece of code step by step, taking care that every statement in the code is executed at least once.
White box testing is also called glass box testing.
I generated a list of test cases with sample data, which is used to check all possible
combinations of execution paths through the code at every module level.
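As a simplified illustration of statement-level coverage, the method below has two branches, so at least two inputs are needed to execute every statement. The reward-tier rule is invented for the example and is not taken from the project code.

```java
public class TierCalculator {
    // Hypothetical tier rule: 1000 or more points earns "Gold", otherwise "Silver".
    public static String tierFor(int points) {
        if (points >= 1000) {
            return "Gold";    // path 1: exercised by an input such as 1500
        }
        return "Silver";      // path 2: exercised by an input such as 200
    }

    public static void main(String[] args) {
        // Two test cases together execute every statement at least once.
        System.out.println(tierFor(1500)); // takes path 1
        System.out.println(tierFor(200));  // takes path 2
    }
}
```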
Black Box Testing
This testing method considers a module as a single unit and checks the unit at its
interface and in its communication with other modules, rather than getting into details at the
statement level. Here the module is treated as a black box that takes some input and generates
output. The output for a given set of input combinations is forwarded to other modules.
Test cases are designed so that they (1) reduce, by a count that is greater than one,
the number of additional test cases that must be designed to achieve reasonable testing, and
(2) tell us something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.
SOFTWARE TOOLS
Methodology
The method used in developing the system is the System Development Life
Cycle (SDLC). The SDLC process includes project identification and selection, project initiation
and planning, analysis, design, implementation, and maintenance.
ANALYSIS
LOGICAL DESIGN
PHYSICAL DESIGN
IMPLEMENTATION
MAINTENANCE
In this phase the project's information-system needs are identified and analyzed:
the title of the project, eZee Rewards and Loyalty, and its scope and objectives were
established.
During this phase a Gantt chart was developed as a timeline for determining the
tasks involved in developing the system.
In the analysis phase, the existing system is studied by collecting information through
the Internet, and the information is analyzed to obtain alternatives for the proposed system
and to determine what the system should do.
Logical design is the fourth phase in the SDLC methodology. The functional features
chosen for the proposed system in the analysis phase are described here. Part of the logical
design of an information system is devising the user interface. The interface connects the user
with the system and is thus extremely important.
TECHNICAL NOTES
• MySQL-5.0.51b-win32
CONCLUSION
The entire project has been developed and deployed as per the requirements
stated by the user, and it is found to be bug-free as per the testing standards that were applied.
Any untraced errors will be addressed in the coming versions, which are planned to be
developed in the near future.