
ADVANCED WEB TECHNOLOGIES

World Wide Web


From Wikipedia, the free encyclopedia




The Web's historic logo designed by Robert Cailliau

The World Wide Web is a system of interlinked hypertext documents accessed via the
Internet. With a web browser, one can view Web pages that may contain text, images,
videos, and other multimedia and navigate between them using hyperlinks. Using
concepts from earlier hypertext systems, English physicist Tim Berners-Lee, now the
Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what
would eventually become the World Wide Web.[1] He was later joined by Belgian
computer scientist Robert Cailliau while both were working at CERN in Geneva,
Switzerland. In 1990, they proposed using "HyperText [...] to link and access information
of various kinds as a web of nodes in which the user can browse at will",[2] and released
that web in December.[3]

Connected by the existing Internet, other websites were created around the world, adding
international standards for domain names and HTML. Since then, Berners-Lee has
played an active role in guiding the development of Web standards (such as the markup
languages in which Web pages are composed), and in recent years has advocated his
vision of a Semantic Web. The World Wide Web enabled the spread of information over
the Internet through an easy-to-use and flexible format. It thus played an important role in
popularizing use of the Internet.[4] Although the two terms are sometimes conflated in
popular use, World Wide Web is not synonymous with Internet.[5] The Web is an
application built on top of the Internet.
Function

The terms Internet and World Wide Web are often used in everyday speech without
much distinction. However, the Internet and the World Wide Web are not one and the
same. The Internet is a global system of interconnected computer networks. In contrast,
the Web is one of the services that runs on the Internet. It is a collection of interconnected
documents and other resources, linked by hyperlinks and URLs. In short, the Web is an
application running on the Internet.[16] Viewing a Web page on the World Wide Web
normally begins either by typing the URL of the page into a web browser, or by
following a hyperlink to that page or resource. The web browser then initiates a series of
communication messages, behind the scenes, in order to fetch and display it.

First, the server-name portion of the URL is resolved into an IP address using the global,
distributed Internet database known as the domain name system, or DNS. This IP address
is necessary to contact the Web server. The browser then requests the resource by
sending an HTTP request to the Web server at that particular address. In the case of a
typical Web page, the HTML text of the page is requested first and parsed immediately
by the web browser, which then makes additional requests for images and any other files
that form parts of the page. Statistics measuring a website's popularity are usually based
either on the number of 'page views' or associated server 'hits' (file requests) that take
place.
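The first steps of this sequence can be sketched in Java; the URL and host name below are illustrative examples, and a real browser does far more (parsing, caching, issuing parallel requests):

```java
import java.net.InetAddress;
import java.net.URI;

// A minimal sketch of the first steps a browser performs when fetching a
// page; the URL below is an illustrative example, not a specific real site.
public class FetchSteps {

    // Extract the server-name portion of a URL (the part that DNS resolves).
    public static String hostOf(String url) {
        return URI.create(url).getHost();
    }

    public static void main(String[] args) throws Exception {
        String url = "http://www.example.com/index.html";

        // Step 1: resolve the host name to an IP address via DNS.
        String host = hostOf(url);
        InetAddress address = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + address.getHostAddress());

        // Step 2: an HTTP request for the resource would then be sent to
        // that address; a real browser parses the returned HTML and issues
        // further requests for images and other files that form the page.
    }
}
```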

While receiving these files from the Web server, browsers may progressively render the
page onto the screen as specified by its HTML, CSS, and other Web languages. Any
images and other resources are incorporated to produce the on-screen Web page that the
user sees. Most Web pages will themselves contain hyperlinks to other related pages and
perhaps to downloads, source documents, definitions and other Web resources. Such a
collection of useful, related resources, interconnected via hypertext links, is what was
dubbed a "web" of information. Making it available on the Internet created what Tim
Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was
subsequently discarded) in November 1990.[2]

Linking

Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks

Over time, many Web resources pointed to by hyperlinks disappear, relocate, or are
replaced with different content. This phenomenon is referred to in some circles as "link
rot" and the hyperlinks affected by it are often called "dead links". The ephemeral nature
of the Web has prompted many efforts to archive Web sites. The Internet Archive is one
of the best-known efforts; it has been active since 1996.

Caching
If a user revisits a Web page after only a short interval, the page data may not need to be
re-obtained from the source Web server. Almost all web browsers cache recently
obtained data, usually on the local hard drive. HTTP requests sent by a browser will
usually only ask for data that has changed since the last download. If the locally cached
data are still current, they will be reused.
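The caching decision can be illustrated with a small sketch; the class and method names are invented for illustration, and real browsers follow the much richer HTTP caching rules:

```java
import java.util.HashMap;
import java.util.Map;

// A toy illustration of browser-style caching: a conditional request
// ("If-Modified-Since") only transfers the page body when the server's
// copy is newer than the one already cached locally.
public class PageCache {
    // Maps a URL to the Last-Modified timestamp of the cached copy.
    private final Map<String, Long> lastModified = new HashMap<>();

    public void store(String url, long modifiedTime) {
        lastModified.put(url, modifiedTime);
    }

    // Refetch only if there is no cached copy, or the server's copy is newer.
    public boolean needsRefetch(String url, long serverModifiedTime) {
        Long cached = lastModified.get(url);
        return cached == null || serverModifiedTime > cached;
    }
}
```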

Speed issues

Frustration over congestion issues in the Internet infrastructure and the high latency that
results in slow browsing has led to an alternative, pejorative name for the World Wide
Web: the World Wide Wait.

Accessibility

Main article: Web accessibility

The Web should be accessible to everyone regardless of disability, including visual,
auditory, physical, speech, cognitive, and neurological disabilities. Accessibility features
also help people with temporary disabilities, such as a broken arm, and an aging
population as their abilities change.[50]
The Web is used for receiving information as well as providing information and
interacting with society, making it essential that the Web be accessible in order to provide
equal access and equal opportunity to people with disabilities.

Standards

Main article: Web standards

Many formal standards and other technical specifications define the operation of different
aspects of the World Wide Web, the Internet, and computer information exchange. Many
of the documents are the work of the World Wide Web Consortium (W3C), headed by
Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and
other organizations.

Usually, when Web standards are discussed, the following publications are seen as
foundational:

• Recommendations for markup languages, especially HTML and XHTML, from
the W3C. These define the structure and interpretation of hypertext documents.
• Recommendations for stylesheets, especially CSS, from the W3C.
• Standards for ECMAScript (usually in the form of JavaScript), from Ecma
International.
• Recommendations for the Document Object Model, from W3C.

Security
The Web has become criminals' preferred pathway for spreading malware. Cybercrime
carried out on the Web can include identity theft, fraud, espionage and intelligence
gathering.

WWW prefix

Many Web addresses begin with www because of the long-standing practice of naming
Internet hosts (servers) according to the services they provide. Thus, the host name for a
web server is often www, just as it is ftp for an FTP server and news or nntp for a
USENET news server. These host names then appear as DNS subdomain names, as in
"www.example.com". The use of such subdomain names is not required by any technical
or policy standard; indeed, the first ever web server was called "nxoc01.cern.ch",[19] and
many web sites exist without a www subdomain prefix, or with some other prefix such as
"www2", "secure" etc. These subdomain prefixes have no consequence; they are simply
chosen names. Many web servers are set up such that both the domain by itself (e.g.,
example.com) and the www subdomain (e.g., www.example.com) refer to the same site;
others require one form or the other, or the two may map to different web sites.


Web search engine



A web search engine is a tool designed to search for information on the World Wide
Web. The search results are usually presented in a list and are commonly called hits. The
results may consist of web pages, images, and other types of files. Some
search engines also mine data available in databases or open directories. Unlike Web
directories, which are maintained by human editors, search engines operate
algorithmically or are a mixture of algorithmic and human input.
Around 2000, the Google search engine rose to prominence. The company
achieved better results for many searches with an innovation called PageRank. This
iterative algorithm ranks web pages based on the number and PageRank of other web
sites and pages that link to them, on the premise that good or desirable pages are linked to
more than others. Google also maintained a minimalist interface to its search engine. In
contrast, many of its competitors embedded a search engine in a web portal.
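The iterative idea behind PageRank can be sketched as a short computation; this is a toy version under simplifying assumptions (the damping factor of 0.85 is a conventional choice, and real implementations handle dangling pages and graphs with billions of nodes):

```java
import java.util.Arrays;

// A toy sketch of iterative PageRank: each page's score depends on the
// scores of the pages that link to it, so scores are refined repeatedly
// until they settle.
public class PageRankSketch {

    // links[i] lists the pages that page i links to.
    public static double[] rank(int[][] links, int iterations) {
        int n = links.length;
        double damping = 0.85; // conventional damping factor
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n); // start with equal scores

        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1 - damping) / n);
            for (int page = 0; page < n; page++) {
                // Each page shares its current rank equally among its out-links.
                for (int target : links[page]) {
                    next[target] += damping * rank[page] / links[page].length;
                }
            }
            rank = next;
        }
        return rank;
    }
}
```

On a tiny graph where two pages link to a third, the third page ends up with the highest score, matching the premise that heavily linked-to pages are more desirable.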

By 2000, Yahoo was providing search services based on Inktomi's search engine. Yahoo!
acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in
2003. Yahoo! used Google's search engine until 2004, when it launched its own
search engine based on the combined technologies of its acquisitions.

Microsoft first launched MSN Search in the fall of 1998 using search results from
Inktomi. In early 1999 the site began to display listings from Looksmart blended with
results from Inktomi except for a short time in 1999 when results from AltaVista were
used instead. In 2004, Microsoft began a transition to its own search technology, powered
by its own web crawler (called msnbot).

A search engine operates in the following order:

1. Web crawling
2. Indexing
3. Searching

Web search engines work by storing information about many web pages, which they
retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes
also known as a spider) — an automated Web browser which follows every link it sees.
Exclusions can be made by the use of robots.txt. The contents of each page are then
analyzed to determine how it should be indexed (for example, words are extracted from
the titles, headings, or special fields called meta tags). Data about web pages are stored in
an index database for use in later queries. Some search engines, such as Google, store all
or part of the source page (referred to as a cache) as well as information about the web
pages, whereas others, such as AltaVista, store every word of every page they find. This
cached page always holds the actual search text since it is the one that was actually
indexed, so it can be very useful when the content of the current page has been updated
and the search terms are no longer in it. This problem might be considered a mild
form of linkrot, and Google's handling of it increases usability by satisfying the
principle of least astonishment: users normally expect the search terms to appear on
the returned pages. Increased search relevance makes these cached pages very useful,
even beyond the fact that they may contain data that may no longer be available
elsewhere.

When a user enters a query into a search engine (typically by using key words), the
engine examines its index and provides a listing of best-matching web pages according to
its criteria, usually with a short summary containing the document's title and sometimes
parts of the text. Most search engines support the use of the boolean operators AND, OR
and NOT to further specify the search query. Some search engines provide an advanced
feature called proximity search which allows users to define the distance between
keywords.
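The indexing-and-querying process described above can be sketched with a toy inverted index that supports boolean AND queries; the class is invented for illustration, and real engines add ranking, stemming, stop-word handling, and phrase search:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A toy inverted index: words extracted from each page map to the set of
// pages containing them, and an AND query intersects those sets.
public class TinyIndex {
    private final Map<String, Set<String>> index = new HashMap<>();

    // Split a page's text into words and record the page under each word.
    public void indexPage(String pageId, String text) {
        for (String word : text.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                index.computeIfAbsent(word, k -> new HashSet<>()).add(pageId);
            }
        }
    }

    // Boolean AND: return the pages containing every keyword.
    public Set<String> search(String... keywords) {
        Set<String> result = null;
        for (String word : keywords) {
            Set<String> pages = index.getOrDefault(word.toLowerCase(), Set.of());
            if (result == null) result = new HashSet<>(pages);
            else result.retainAll(pages);
        }
        return result == null ? Set.of() : result;
    }
}
```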

The usefulness of a search engine depends on the relevance of the result set it gives
back. While there may be millions of web pages that include a particular word or phrase,
some pages may be more relevant, popular, or authoritative than others. Most search
engines employ methods to rank the results to provide the "best" results first. How a
search engine decides which pages are the best matches, and what order the results
should be shown in, varies widely from one engine to another. The methods also change
over time as Internet usage changes and new techniques evolve.

Most Web search engines are commercial ventures supported by advertising revenue and,
as a result, some employ the practice of allowing advertisers to pay money to have their
listings ranked higher in search results. Those search engines which do not accept money
for their search engine results make money by running search related ads alongside the
regular search engine results. The search engines make money every time someone clicks
on one of these ads.

Search engine optimization



A typical search engine results page

Search engine optimization (SEO) is the process of improving the volume or quality of
traffic to a web site from search engines via "natural" or un-paid ("organic" or
"algorithmic") search results, as opposed to search engine marketing (SEM), which deals
with paid inclusion. Typically, the earlier (or higher) a site appears in the search results
list, the more visitors it will receive from the search engine. SEO may target different
kinds of search, including image search, local search, video search and industry-specific
vertical search engines. This gives a web site web presence.

As an Internet marketing strategy, SEO considers how search engines work and what
people search for. Optimizing a website primarily involves editing its content and HTML
and associated coding to both increase its relevance to specific keywords and to remove
barriers to the indexing activities of search engines.

The acronym "SEO" can also refer to "search engine optimizers," a term adopted by an
industry of consultants who carry out optimization projects on behalf of clients, and by
employees who perform SEO services in-house. Search engine optimizers may offer SEO
as a stand-alone service or as a part of a broader marketing campaign. Because effective
SEO may require changes to the HTML source code of a site, SEO tactics may be
incorporated into web site development and design. The term "search engine friendly"
may be used to describe web site designs, menus, content management systems, images,
videos, shopping carts, and other elements that have been optimized for the purpose of
search engine exposure.

Another class of techniques, known as black hat SEO or spamdexing, uses methods such
as link farms, keyword stuffing and article spinning that degrade both the relevance of
search results and the user-experience of search engines. Search engines look for sites
that employ these techniques in order to remove them from their indices.

Methods
Main article: search engine optimization methods

Getting indexed

The leading search engines, such as Google and Yahoo!, use crawlers to find pages for
their algorithmic search results. Pages that are linked from other search engine indexed
pages do not need to be submitted because they are found automatically. Some search
engines, notably Yahoo!, operate a paid submission service that guarantees crawling for
either a set fee or cost per click.[26] Such programs usually guarantee inclusion in the
database, but do not guarantee specific ranking within the search results.[27] Two major
directories, the Yahoo Directory and the Open Directory Project both require manual
submission and human editorial review.[28] Google offers Google Webmaster Tools, for
which an XML Sitemap feed can be created and submitted for free to ensure that all
pages are found, especially pages that aren't discoverable by automatically following
links.[29]

Search engine crawlers may look at a number of different factors when crawling a site.
Not every page is indexed by the search engines. Distance of pages from the root
directory of a site may also be a factor in whether or not pages get crawled.[30]
Preventing crawling

Main article: Robots Exclusion Standard

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to
crawl certain files or directories through the standard robots.txt file in the root directory
of the domain. Additionally, a page can be explicitly excluded from a search engine's
database by using a meta tag specific to robots. When a search engine visits a site, the
robots.txt located in the root directory is the first file crawled. The robots.txt file is then
parsed, and will instruct the robot as to which pages are not to be crawled. As a search
engine crawler may keep a cached copy of this file, it may on occasion crawl pages a
webmaster does not wish crawled. Pages typically prevented from being crawled include
login specific pages such as shopping carts and user-specific content such as search
results from internal searches. In March 2007, Google warned webmasters that they
should prevent indexing of internal search results because those pages are considered
search spam.[31]
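A robots.txt file implementing this advice might look like the following; the paths shown are hypothetical examples:

```
# Hypothetical robots.txt placed at the root of the domain.
User-agent: *          # applies to all crawlers
Disallow: /cart/       # keep shopping-cart pages out of the index
Disallow: /search      # internal search results, per Google's 2007 advice
```

An individual page can also opt out of a search engine's database with a robots meta tag, such as <meta name="robots" content="noindex">, in its HTML head.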

Increasing prominence

A variety of other methods are employed to get a webpage to show up in the search
results. These include:

• Cross linking between pages of the same website, giving more links to the main
pages of the website to increase the PageRank used by search engines.[32] Linking
from other websites, including link farming and comment spam.
• Writing content that includes frequently searched keyword phrases, so as to be
relevant to a wide variety of search queries.[33] Adding relevant keywords to a web
page's meta tags, including keyword stuffing.
• URL normalization of web pages accessible via multiple URLs, using the
"canonical" link element.[34]
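The canonical technique in the last item can be illustrated with a small HTML fragment; the URLs are hypothetical:

```
<!-- Hypothetical example: http://example.com/product and
     http://www.example.com/product serve the same page, so each copy
     declares one canonical URL for search engines to index. -->
<link rel="canonical" href="http://www.example.com/product">
```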

White hat versus black hat

SEO techniques can be classified into two broad categories: techniques that search
engines recommend as part of good design, and those techniques of which search engines
do not approve. The search engines attempt to minimize the effect of the latter, among
them spamdexing. Some industry commentators have classified these methods, and the
practitioners who employ them, as either white hat SEO, or black hat SEO.[35] White hats
tend to produce results that last a long time, whereas black hats anticipate that their sites
may eventually be banned either temporarily or permanently once the search engines
discover what they are doing.[36]

An SEO technique is considered white hat if it conforms to the search engines' guidelines
and involves no deception. As the search engine guidelines[21][22][23][37] are not written as a
series of rules or commandments, this is an important distinction to note. White hat SEO
is not just about following guidelines, but is about ensuring that the content a search
engine indexes and subsequently ranks is the same content a user will see. White hat
advice is generally summed up as creating content for users, not for search engines, and
then making that content easily accessible to the spiders, rather than attempting to trick
the algorithm from its intended purpose. White hat SEO is in many ways similar to web
development that promotes accessibility,[38] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the
search engines, or involve deception. One black hat technique uses text that is hidden,
either as text colored similar to the background, in an invisible div, or positioned off
screen. Another method gives a different page depending on whether the page is being
requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by
reducing their rankings or eliminating their listings from their databases altogether. Such
penalties can be applied either automatically by the search engines' algorithms, or by a
manual site review. Infamous examples are the February 2006 Google removal of both
BMW Germany and Ricoh Germany for the use of deceptive practices,[39] and the April
2006 removal of the PPC agency BigMouthMedia.[40] All three companies, however,
quickly apologized, fixed the offending pages, and were restored to Google's list.[41]

Many Web applications employ back-end systems that dynamically modify page content
(both visible and meta-data) and are designed to increase page relevance to search
engines based upon how past visitors reached the original page. This dynamic search
engine optimization and tuning process can be (and has been) abused by criminals:
Web applications that dynamically alter themselves in this way can be poisoned.[42]

Semantic Web

W3C's Semantic Web logo

The Semantic Web is an evolving development of the World Wide Web in which the
meaning (semantics) of information and services on the web is defined, making it
possible for the web to understand and satisfy the requests of people and machines to use
the web content.[1][2] It derives from World Wide Web Consortium director Sir Tim
Berners-Lee's vision of the Web as a universal medium for data, information, and
knowledge exchange.[3]
At its core, the semantic web comprises a set of design principles,[4] collaborative
working groups, and a variety of enabling technologies. Some elements of the semantic
web are expressed as prospective future possibilities that are yet to be implemented or
realized.[2] Other elements of the semantic web are expressed in formal specifications.[5]
Some of these include Resource Description Framework (RDF), a variety of data
interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF
Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to
provide a formal description of concepts, terms, and relationships within a given
knowledge domain.

Purpose

Humans are capable of using the Web to carry out tasks such as finding the Finnish word
for "monkey", reserving a library book, and searching for a low price for a DVD.
However, a computer cannot accomplish the same tasks without human direction because
web pages are designed to be read by people, not machines. The semantic web is a vision
of information that is understandable by computers, so that they can perform more of the
tedious work involved in finding, sharing, and combining information on the web.

Limitations of HTML

Many files on a typical computer can be loosely divided into documents and data.
Documents, like mail messages, reports, and brochures, are read by humans. Data, like
calendars, address books, playlists, and spreadsheets, are presented using an application
program which lets them be viewed, searched and combined in many ways.

Currently, the World Wide Web is based mainly on documents written in Hypertext
Markup Language (HTML), a markup convention that is used for coding a body of text
interspersed with multimedia objects such as images and interactive forms.

Semantic Web solutions

The Semantic Web takes the solution further. It involves publishing in languages
specifically designed for data: Resource Description Framework (RDF), Web Ontology
Language (OWL), and Extensible Markup Language (XML). HTML describes
documents and the links between them. RDF, OWL, and XML, by contrast, can describe
arbitrary things such as people, meetings, or airplane parts. Tim Berners-Lee calls the
resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based
World Wide Web.

These technologies are combined in order to provide descriptions that supplement or
replace the content of Web documents. Thus, content may manifest itself as descriptive
data stored in Web-accessible databases [8], or as markup within documents (particularly,
in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML,
with layout or rendering cues stored separately). The machine-readable descriptions
enable content managers to add meaning to the content, i.e., to describe the structure of
the knowledge we have about that content. In this way, a machine can process knowledge
itself, instead of text, using processes similar to human deductive reasoning and
inference, thereby obtaining more meaningful results and helping computers to perform
automated information gathering and research.

An example of a tag that would be used in a non-semantic web page:

<item>cat</item>

Encoding similar information in a semantic web page might look like this:

<item rdf:about="http://dbpedia.org/resource/Cat">Cat</item>
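A fuller RDF/XML description of the same resource might look like the following sketch; the ex: vocabulary is invented for illustration, and real data would use an established ontology:

```
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms#">
  <rdf:Description rdf:about="http://dbpedia.org/resource/Cat">
    <ex:name>Cat</ex:name>
    <ex:classification rdf:resource="http://dbpedia.org/resource/Mammal"/>
  </rdf:Description>
</rdf:RDF>
```

Because the item is identified by a URI rather than the bare word "cat", a machine can connect it to other statements about the same resource.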

Java Servlet

Life of a JSP file.


Servlets are Java programming language objects that dynamically process requests and
construct responses. The Java Servlet API allows a software developer to add dynamic
content to a Web server using the Java platform. The generated content is commonly
HTML, but may be other data such as XML. Servlets are the Java counterpart to non-
Java dynamic Web content technologies such as PHP, CGI and ASP.NET, and as such
some find it easier to think of them as 'Java scripts'. Servlets can maintain state across
many server transactions by using HTTP cookies, session variables or URL rewriting.

The servlet API, contained in the Java package hierarchy javax.servlet, defines the
expected interactions of a Web container and a servlet. A Web container is essentially the
component of a Web server that interacts with the servlets. The Web container is
responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet
and ensuring that the URL requester has the correct access rights.

A Servlet is an object that receives a request and generates a response based on that
request. The basic servlet package defines Java objects to represent servlet requests and
responses, as well as objects to reflect the servlet's configuration parameters and
execution environment. The package javax.servlet.http defines HTTP-specific
subclasses of the generic servlet elements, including session management objects that
track multiple requests and responses between the Web server and a client. Servlets may
be packaged in a WAR file as a Web application.

Servlets can be generated automatically by the JavaServer Pages (JSP) compiler, or
alternatively by template engines such as WebMacro. Often servlets are used in
conjunction with JSPs in a pattern called "Model 2", which is a flavor of the model-view-
controller pattern.

Lifecycle of a servlet

The servlet lifecycle consists of the following steps:

1. The servlet class is loaded by the container during start-up.
2. The container calls the init() method. This method initializes the servlet and
must be called before the servlet can service any requests. In the entire life of a
servlet, the init() method is called only once.
3. After initialization, the servlet can service client requests. Each request is serviced
in its own separate thread. The container calls the service() method of the
servlet for every request. The service() method determines the kind of request
being made and dispatches it to an appropriate method to handle the request. The
developer of the servlet must provide an implementation for these methods. If a
request for a method that is not implemented by the servlet is made, the method of
the parent class is called, typically resulting in an error being returned to the
requester.
4. Finally, the container calls the destroy() method that takes the servlet out of
service. The destroy() method like init() is called only once in the lifecycle of
a servlet.

Here is a simple servlet that just generates HTML. Note that HttpServlet is a subclass of
GenericServlet, an implementation of the Servlet interface. The service() method
dispatches requests to methods doGet(), doPost(), doPut(), doDelete(), etc.,
according to the HTTP request.

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloWorld extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Tell the client what kind of content is being returned.
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 " +
                    "Transitional//EN\">\n" +
                    "<html>\n" +
                    "<head><title>Hello WWW</title></head>\n" +
                    "<body>\n" +
                    "<h1>Hello WWW</h1>\n" +
                    "</body></html>");
    }
}

ServletConfig and ServletContext


There is only one ServletContext in every application. This object can be used by all the
servlets to obtain application level information or container details. Every servlet, on the
other hand, gets its own ServletConfig object. This object provides initialization
parameters for a servlet. A developer can obtain the reference to ServletContext using the
ServletConfig object.

The ServletContext object is created by the container when the web application starts
and destroyed just before the application is taken out of service; that is, the scope of the
ServletContext object is the scope of the web application.

There is only one ServletContext object for the entire web application: the container
creates it once, and thereafter hands out references to the existing object.
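Putting the two objects together, a servlet might read its own initialization parameters and application-wide settings as in the sketch below; this requires the servlet API on the classpath, and the parameter names are illustrative:

```java
// Sketch: reading a per-servlet parameter from the ServletConfig and an
// application-wide parameter from the single shared ServletContext.
public class ConfigDemo extends javax.servlet.http.HttpServlet {
    @Override
    public void init() throws javax.servlet.ServletException {
        // Per-servlet initialization parameter, from this servlet's ServletConfig.
        String greeting = getServletConfig().getInitParameter("greeting");

        // Application-level information, reached through the ServletConfig.
        javax.servlet.ServletContext context = getServletConfig().getServletContext();
        String appName = context.getInitParameter("appName");

        // The context can also hold attributes visible to all servlets.
        context.setAttribute("startupGreeting", greeting + " from " + appName);
    }
}
```

Both parameters would be declared in the web application's deployment descriptor (web.xml): init-param inside the servlet element, and context-param at the application level.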

A Single Java Virtual Machine

Most servlet engines want to execute all servlets in a single JVM. Where that JVM itself
executes can differ depending on the server, though. With a server written in Java, such
as the Java Web Server, the server itself can execute inside a JVM right alongside its
servlets.

With a single-process, multithreaded web server written in another language, the JVM
can often be embedded inside the server process. Having the JVM be part of the server
process maximizes performance because a servlet becomes, in a sense, just another low-
level server API extension. Such a server can invoke a servlet with a lightweight context
switch and can provide information about requests through direct method invocations.

A multiprocess web server (which runs several processes to handle requests) doesn't
really have the choice to embed a JVM directly in its process because there is no one
process. This kind of server usually runs an external JVM that its processes can share.
With this approach, each servlet access involves a heavyweight context switch
reminiscent of FastCGI. All the servlets, however, still share the same external process.

Fortunately, from the perspective of the servlet (and thus from your perspective, as a
servlet author), the server's implementation doesn't really matter because the server
always behaves the same way.

Instance Persistence

We said above that servlets persist between requests as object instances. In other words,
at the time the code for a servlet is loaded, the server creates a single class instance. That
single instance handles every request made of the servlet. This improves performance in
three ways:

• It keeps the memory footprint small.


• It eliminates the object creation overhead that would otherwise be necessary to
create a new servlet object. A servlet can be already loaded in a virtual machine
when a request comes in, letting it begin executing right away.
• It enables persistence. A servlet can have already loaded anything it's likely to
need during the handling of a request. For example, a database connection can be
opened once and used repeatedly thereafter. It can even be used by a group of
servlets. Another example is a shopping cart servlet that loads in memory the
price list along with information about its recently connected clients. Yet another
servlet may choose to cache entire pages of output to save time if it receives the
same request again.

Not only do servlets persist between requests, but so do any threads created by servlets.
This perhaps isn't useful for the run-of-the-mill servlet, but it opens up some interesting
possibilities. Consider the situation where one background thread performs some
calculation while other threads display the latest results. It's quite similar to an animation
applet where one thread changes the picture and another one paints the display.

A Simple Counter

To demonstrate the servlet life cycle, we'll begin with a simple example. Example 3-1
shows a servlet that counts and displays the number of times it has been accessed. For
simplicity's sake, it outputs plain text.

Example 3-1. A simple counter


import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class SimpleCounter extends HttpServlet {

    int count = 0;

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/plain");
        PrintWriter out = res.getWriter();
        count++;
        out.println("Since loading, this servlet has been accessed " +
                    count + " times.");
    }
}

The code is simple--it just prints and increments the instance variable named count--but
it shows the power of persistence. When the server loads this servlet, the server creates a
single instance to handle every request made of the servlet. That's why this code can be
so simple: the same instance variables exist between invocations and for all invocations.
There is a subtle problem, however. Because that one instance serves all clients, two
threads may execute doGet() at the same time. Imagine that one thread increments the
count and, just afterward, before the first thread prints the count, the second thread also
increments the count. Each thread will print the same count value, after effectively
increasing its value by 2. The order of execution goes something like this:

count++ // Thread 1
count++ // Thread 2
out.println // Thread 1
out.println // Thread 2
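The race can be demonstrated outside a servlet container with two plain threads
hammering a shared counter, a stand-alone sketch rather than the servlet itself. The
bare int field can lose updates exactly as described above, while an AtomicInteger
(or, equivalently, a synchronized block) never does:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {

    static int unsafeCount;                  // shared, like the servlet's instance variable
    static final AtomicInteger safeCount = new AtomicInteger();

    // Run two threads that each bump both counters 100,000 times,
    // then return { unsafe total, safe total }.
    static int[] run() throws InterruptedException {
        unsafeCount = 0;
        safeCount.set(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;               // read-modify-write: not atomic, updates can be lost
                safeCount.incrementAndGet(); // atomic: never loses an update
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        return new int[] { unsafeCount, safeCount.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] totals = run();
        System.out.println("unsafe count: " + totals[0]);  // often less than 200000
        System.out.println("safe count:   " + totals[1]);  // always 200000
    }
}
```

The safe total is always 200,000; the unsafe total is frequently lower because two
threads can read the same value before either writes its increment back.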

MODULE 3 JSP

JSP DEVELOPMENT MODEL

The early JSP specifications advocated two philosophical approaches for building
applications using JSP technology. These approaches, termed the JSP Model 1 and Model
2 architectures, differ essentially in the location at which the bulk of the request
processing was performed. In the Model 1 architecture, shown in Figure 1, the JSP page
alone is responsible for processing the incoming request and replying back to the client.
There is still separation of presentation from content, because all data access is performed
using beans. Although the Model 1 architecture should be perfectly suitable for simple
applications, it may not be desirable for complex implementations. Indiscriminate usage
of this architecture usually leads to a significant amount of scriptlets or Java code
embedded within the JSP page, especially if there is a significant amount of request
processing to be performed. While this may not seem to be much of a problem for Java
developers, it is certainly an issue if your JSP pages are created and maintained by
designers -- which is usually the norm on large projects. Ultimately, it may even lead to
an unclear definition of roles and allocation of responsibilities, causing easily avoidable
project-management headaches.
Figure 1: JSP Model 1 architecture

The Model 2 architecture, shown in Figure 2, is a hybrid approach for serving dynamic
content, since it combines the use of both servlets and JSP. It takes advantage of the
predominant strengths of both technologies, using JSP to generate the presentation layer
and servlets to perform process-intensive tasks. Here, the servlet acts as the controller
and is in charge of the request processing and the creation of any beans or objects used by
the JSP, as well as deciding, depending on the user's actions, which JSP page to forward
the request to. Note particularly that there is no processing logic within the JSP page
itself; it is simply responsible for retrieving any objects or beans that may have been
previously created by the servlet, and extracting the dynamic content from that servlet for
insertion within static templates. In my opinion, this approach typically results in the
cleanest separation of presentation from content, leading to clear delineation of the roles
and responsibilities of the developers and page designers on your programming team. In
fact, the more complex your application, the greater the benefits of using the Model 2
architecture should be.

Figure 2: JSP Model 2 architecture
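The controller role described above can be sketched as a servlet. All names here are
hypothetical (this is not the actual Music Without Borders controller): ControllerServlet,
CatalogBean, and the page names are invented for illustration only.

import java.io.IOException;

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ControllerServlet extends HttpServlet {

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String action = request.getParameter("action"); // e.g. "browse" or "checkout"

        // The controller performs the processing and creates the beans...
        CatalogBean catalog = new CatalogBean();
        request.setAttribute("catalog", catalog);

        // ...then decides, based on the user's action, which JSP renders the
        // result. The JSP only presents; it holds no processing logic.
        String view = "checkout".equals(action) ? "/Checkout.jsp" : "/EShop.jsp";
        RequestDispatcher dispatcher = request.getRequestDispatcher(view);
        dispatcher.forward(request, response);
    }
}

class CatalogBean { /* hypothetical model bean holding catalog data */ }

The JSP pages then retrieve the "catalog" request attribute and insert its data into
their static templates, which is exactly the division of labor Model 2 prescribes.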

In order to clarify the concepts behind the Model 2 architecture, let's walk through a
detailed implementation of it: a sample online music store called Music Without Borders.
Understanding Music Without Borders

The main view, or presentation, for our Music Without Borders online store is facilitated
by the JSP page EShop.jsp (shown in Listing 1). You will notice that the page deals
almost exclusively with presenting the main user interface of the application to the client,
and performs no processing whatsoever -- an optimal JSP scenario.

REQUEST DISPATCHER

A RequestDispatcher object can forward a client's request to a resource or include
the resource itself in the response back to the client. A resource can be another
servlet, or an HTML file, or a JSP file, etc.

You can also think of a RequestDispatcher object as a wrapper for the resource
located at a given path that is supplied as an argument to the
getRequestDispatcher method.

For constructing a RequestDispatcher object, you can use either the
ServletRequest.getRequestDispatcher() method or the
ServletContext.getRequestDispatcher() method. They both do the same thing,
but impose slightly different constraints on the argument path. For the former, the
container looks for the resource in the same webapp to which the invoking servlet
belongs, and the pathname specified can be relative to the invoking servlet. For the
latter, the pathname must begin with '/' and is interpreted relative to the root of the
webapp.

To illustrate, suppose you want Servlet_A to invoke Servlet_B. If they are both in
the same directory, you could accomplish this by incorporating the following code
fragment in either the service method or the doGet method of Servlet_A:

RequestDispatcher dispatcher = request.getRequestDispatcher("Servlet_B");
dispatcher.forward(request, response);

where request, of type HttpServletRequest, is the first parameter of the enclosing
service method (or the doGet method) and response, of type
HttpServletResponse, is the second. You could accomplish the same with:

RequestDispatcher dispatcher =
    getServletContext().getRequestDispatcher("/servlet/Servlet_B");
dispatcher.forward(request, response);

MODULE 4 INTRODUCTION TO WEB SERVICES


Web service
From Wikipedia, the free encyclopedia

Jump to: navigation, search

Web services architecture.

Web services in a service-oriented architecture.

A Web service (also Webservice) is defined by the W3C as "a software system designed
to support interoperable machine-to-machine interaction over a network. It has an
interface described in a machine-processable format (specifically Web Services
Description Language WSDL). Other systems interact with the Web service in a manner
prescribed by its description using SOAP-messages, typically conveyed using HTTP with
an XML serialization in conjunction with other Web-related standards." [1] Web services
are frequently just Internet Application Programming Interfaces (API) that can be
accessed over a network, such as the Internet, and executed on a remote system hosting
the requested services. Other approaches with nearly the same functionality as web
services are Object Management Group's (OMG) Common Object Request Broker
Architecture (CORBA), Microsoft's Distributed Component Object Model (DCOM) or
Sun Microsystems's Java/Remote Method Invocation (RMI).

In common usage the term refers to clients and servers that communicate over the
Hypertext Transfer Protocol (HTTP) protocol used on the Web. Such services tend to fall
into one of two camps: Big Web Services and RESTful Web Services. Such
services are also referred to as web APIs.

"Big Web Services" use Extensible Markup Language (XML) messages that follow the
Simple Object Access Protocol (SOAP) standard and have been popular with traditional
enterprises. In such systems, there is often a machine-readable description of the
operations offered by the service written in the Web Services Description Language
(WSDL). The latter is not a requirement of a SOAP endpoint, but it is a prerequisite for
automated client-side code generation in many Java and .NET SOAP frameworks
(frameworks such as Spring, Apache Axis2 and Apache CXF being notable exceptions).
Some industry organizations, such as the WS-I, mandate both SOAP and WSDL in their
definition of a Web service.

More recently, REpresentational State Transfer (RESTful) Web services have been
regaining popularity, particularly with Internet companies. By using the PUT, GET and
DELETE HTTP methods, alongside POST, these are often better integrated with HTTP
and web browsers than SOAP-based services. They do not require XML messages or
WSDL service-API definitions.
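The verb mapping can be made concrete with a small stand-alone sketch using the
JDK's java.net.http API. The album URI is hypothetical, and the requests are only
built, never sent, so the point is purely that the URI names the resource while the
HTTP method names the operation:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestVerbsDemo {

    // Build (but do not send) a request against a hypothetical resource.
    static HttpRequest request(String method, String url) {
        HttpRequest.Builder b = HttpRequest.newBuilder(URI.create(url));
        switch (method) {
            case "GET":    return b.GET().build();
            case "DELETE": return b.DELETE().build();
            case "PUT":    return b.PUT(HttpRequest.BodyPublishers.ofString("{}")).build();
            default:       return b.POST(HttpRequest.BodyPublishers.ofString("{}")).build();
        }
    }

    public static void main(String[] args) {
        // The same URI identifies the resource; the verb selects the operation.
        String album = "http://example.com/albums/42";
        System.out.println(request("GET", album).method() + " reads the album");
        System.out.println(request("PUT", album).method() + " replaces it");
        System.out.println(request("DELETE", album).method() + " removes it");
    }
}
```

No XML envelope or WSDL description is involved: the operation is carried entirely
by the HTTP method and URI, which is what lets such services integrate directly with
browsers, caches, and other plain-HTTP infrastructure.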

A highly dynamic and loosely coupled environment increases not only the probability of
deviation situations occurring during the execution of composite services, but also the
complexity of exception handling. Because of the distributed nature of SOA and the
loose coupling of web services, monitoring and exception handling for web services in
an SOA context remain open research issues.

When running composite web services, each sub-service can be considered autonomous.
The user has no control over these services, and the web services themselves are not
reliable; a service provider may remove, change, or update its services without giving
notice to users. Reliability and fault tolerance are not well supported; faults may
happen during execution. Exception handling in the context of web services is still an
open research issue.

Software as a service
Software as a Service (SaaS) is a software distribution model in which
applications are hosted by a vendor or service provider and made available to customers
over a network, typically the Internet.

SaaS is becoming an increasingly prevalent delivery model as underlying technologies


that support Web services and service-oriented architecture (SOA) mature and new
developmental approaches, such as Ajax, become popular. Meanwhile, broadband service
has become increasingly available to support user access from more areas around the
world.

SaaS is closely related to the ASP (application service provider) and On Demand
Computing software delivery models. IDC identifies two slightly different delivery
models for SaaS. The hosted application management (hosted AM) model is similar to
ASP: a provider hosts commercially available software for customers and delivers it over
the Web. In the software on demand model, the provider gives customers network-based
access to a single copy of an application created specifically for SaaS distribution. IDC
predicts that SaaS will make up 30 percent of the software market by 2007 and will be
worth $10.7 billion by 2009.

Benefits of the SaaS model include:

• easier administration
• automatic updates and patch management
• compatibility: All users will have the same version of software.
• easier collaboration, for the same reason
• global accessibility.

The traditional model of software distribution, in which software is purchased for and
installed on personal computers, is sometimes referred to as software as a product.

Aims & objectives

The sharing of end-user licenses and on-demand use may also reduce investment in
server hardware, or shift server use to SaaS suppliers of application file services.

History

The concept of "software as a service" started to circulate before 1999.[1] In December


2000, Bennett et al. noted the term as "beginning to gain acceptance in the marketplace".
[2]

Whilst the phrase "software as a service" passed into common usage, the TitleCase
acronym "SaaS" was allegedly not coined until circa 2000 to 2001 in a white paper called
"Strategic Backgrounder: Software as a Service", which was published in February 2001
by the Software & Information Industry's (SIIA) eBusiness Division, but act

Key characteristics

Characteristics of SaaS software include:[5]

• network-based access to, and management of, commercially available software


• activities managed from central locations rather than at each customer's site,
enabling customers to access applications remotely via the Web
• application delivery typically closer to a one-to-many model (single instance,
multi-tenant architecture) than to a one-to-one model, including architecture,
pricing, partnering, and management characteristics
• centralized feature updating, which obviates the need for end-users to download
patches and upgrades.
• frequent integration into a larger network of communicating software - either as
part of a mashup or as a plugin to a platform as a service. (Service oriented
architecture is naturally more complex than traditional models of software
deployment.)
Providers of SaaS generally price applications on a per-user basis, sometimes with a
relatively small minimum number of users and often with additional fees for extra
bandwidth and storage. SaaS revenue streams to the vendor are therefore lower initially
than traditional software license fees, but are also recurring, and therefore viewed as
more predictable, much like maintenance fees for licensed software.

In addition to the characteristics mentioned above, SaaS software turns the tragedy of the
commons on its head and frequently has these additional benefits:

• More feature requests from users since there is frequently no marginal cost for
requesting new features;
• Faster releases of new features since the entire community of users benefits from
new functionality; and
• The embodiment of recognized best practices — since the community of users
drives the software publisher to support best practice.

WEB SERVICES ARCHITECTURE

1 Introduction

1.1 Purpose of the Web Service Architecture

Web services provide a standard means of interoperating between different software


applications, running on a variety of platforms and/or frameworks. This document
(WSA) is intended to provide a common definition of a Web service, and define its place
within a larger Web services framework to guide the community. The WSA provides a
conceptual model and a context for understanding Web services and the relationships
between the components of this model.

The architecture does not attempt to specify how Web services are implemented, and
imposes no restriction on how Web services might be combined. The WSA describes
both the minimal characteristics that are common to all Web services, and a number of
characteristics that are needed by many, but not all, Web services.

The Web services architecture is an interoperability architecture: it identifies those
global elements of the Web services network that are required in order to ensure
interoperability between Web services.

A new architectural approach

Traditional systems architectures incorporate relatively brittle coupling between various


components in the system. The bulk of IT systems, including Web-oriented systems, can
be characterized as tightly coupled applications and subsystems. IBM CICS transactions,
databases, reports, and so on are built with tight coupling, using data structures (database
records, flat files).

Monolithic systems like these are sensitive to change. A change in the output of one of
the subsystems will often cause the whole system to break. A switch to a new
implementation of a subsystem will also often cause old, statically bound collaborations
(which unintentionally relied on the side effects of the old implementation) to break
down. This situation is manageable to a certain extent through skills and numbers of
people. As scale, demand, volume, and rate of business change increase, this brittleness
becomes exposed. Any significant change in any one of these aspects will cause the
brittleness of the systems to become a crisis: unavailable or unresponsive Web sites, lack
of speed to market with new products and services, inability to rapidly shift to new
business opportunities, or competitive threats. IT organizations will not be able to cope
with changes because of the coupling; the dynamics of the Web makes management of
these brittle architectures untenable.

We need to replace the current models of application design with a more flexible
architecture, yielding systems that are more amenable to change.

1.2 The Need for an Architecture

The generalized term "Web services" does not currently describe a coherent or
necessarily consistent set of technologies, architectures, or even visions. The community
of Web services evangelists, architects, developers, and vendors represents a merging of
at least three major sources of inspiration, with various ideas taken from other sources as
well. Several streams of thought and practice have converged to produce an amalgam of
what we think of as "Web services", including:

• "Distributed Objects" or "Application Integration" -- exchange of programming


objects or invocation of software functions over a network.
• EDI / B2B - the exchange of electronic business documents over a network.
• The World Wide Web itself - accessing human readable documents and posting
requests for information, products, or services via the HTTP protocol.

The excitement over Web services is based largely on the potential for a combination of
XML, the Web, the SOAP and WSDL specifications, and to-be-defined protocol stacks
to address many of the problems these technologies have encountered. For example,
distributed object systems such as Microsoft's COM family and the OMG CORBA
standard did not interoperate, each presented numerous security and administration
challenges when deployed over the internet, and neither quite met the scalability
expectations created by the Web. Various XML-based B2B systems showed much
potential, but created incompatible protocols on top of the internet standards, which led
to interoperability problems. The Web has proven enormously popular, scalable, and
interoperable, but it too presents many challenges -- reliability, security, database-level
transactions, details of how to map platform-neutral data, URIs and HTTP operations to
back-end application systems, and many others -- that must be handled by Web
applications rather than some standardized infrastructure.

SOA

Service-oriented architecture (SOA) definition
from Douglas K. Barry

A service-oriented architecture is essentially a collection of services. These services


communicate with each other. The communication can involve either simple data passing
or it could involve two or more services coordinating some activity. Some means of
connecting services to each other is needed.

Service-oriented architectures are not a new thing. The first service-oriented architecture
for many people in the past came with the use of DCOM or Object Request Brokers
(ORBs) based on the CORBA specification.

Services

If a service-oriented architecture is to be effective, we need a clear understanding of the


term service. A service is a function that is well-defined, self-contained, and does not
depend on the context or state of other services.
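That definition can be made concrete with a small sketch (all names here are
hypothetical): the service is described by an interface that is well-defined and
self-contained, so a consumer depends only on the contract, never on any particular
implementation or its state:

```java
// The contract: well-defined, self-contained, with no dependence on the
// state of other services. QuoteService is a hypothetical example name.
interface QuoteService {
    double quote(String symbol);
}

// One possible provider; a consumer never sees this class directly.
class FixedQuoteService implements QuoteService {
    public double quote(String symbol) {
        return "ACME".equals(symbol) ? 101.5 : 42.0; // canned data for the sketch
    }
}

public class SoaSketch {
    public static void main(String[] args) {
        // The consumer is wired to the contract only; swapping in a
        // different provider requires no change on this side.
        QuoteService service = new FixedQuoteService();
        System.out.println("ACME quote: " + service.quote("ACME"));
    }
}
```

In a real SOA the provider would sit behind a network connection (for example, a web
service described in WSDL) rather than a local class, but the consumer's view, a
contract with no shared state, is the same.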

Connections

The technology of Web services is the most likely connection technology of
service-oriented architectures. Web services essentially use XML to create a
robust connection.

The following figure illustrates a basic service-oriented architecture. It shows a service


consumer at the right sending a service request message to a service provider at the left.
The service provider returns a response message to the service consumer. The request and
subsequent response connections are defined in some way that is understandable to both
the service consumer and service provider. How those connections are defined is
explained in Web Services Explained. A service provider can also be a service
consumer.
MODULE 5 INTRODUCTION TO .NET FRAMEWORK

EVOLUTION OF .NET

The Internet revolution of the late 1990s represented a dramatic shift in the way
individuals and organizations communicate with each other. Traditional applications,
such as word processors and accounting packages, are modeled as stand-alone
applications: they offer users the capability to perform tasks using data stored on the
system the application resides and executes on. Most new software, in contrast, is
modeled based on a distributed computing model where applications collaborate to
provide services and expose functionality to each other. As a result, the primary role of
most new software is changing into supporting information exchange (through Web
servers and browsers), collaboration (through e-mail and instant messaging), and
individual expression (through Web logs, also known as Blogs, and e-zines — Web
based magazines). Essentially, the basic role of software is changing from providing
discrete functionality to providing services.
The .NET Framework represents a unified, object-oriented set of services and libraries
that embrace the changing role of new network-centric and network-aware software. In
fact, the .NET Framework is the first platform designed from the ground up with the
Internet in mind.
This chapter introduces the .NET Framework in terms of the benefits it provides. I
present some sample code in Visual C# .NET, Visual Basic .NET, Visual Basic 6.0, and
Visual C++; don't worry if you're not familiar with these languages, since I describe in the
discussion what each sample does.
Benefits of the .NET Framework
The .NET Framework offers a number of benefits to developers:
A consistent programming model
Direct support for security
Simplified development efforts
Easy application deployment and maintenance

Simply put, ASP.NET 3.5 is an amazing technology to use to build your Web solutions! When
ASP.NET
1.0 was introduced in 2000, many considered it a revolutionary leap forward in the area of
Web application
development. ASP.NET 2.0 was just as exciting and revolutionary, and ASP.NET 3.5 Service
Pack 1
(SP1) is continuing a forward march in providing the best framework today in building
applications for
the Web. ASP.NET 3.5 SP1 continues to build on the foundation laid by the release of ASP.NET
1.0 by
focusing on the area of developer productivity.
This book covers the whole of ASP.NET. It not only introduces new topics, it also shows you
examples
of these new technologies in action. So sit back, pull up that keyboard, and enjoy!

A Little Bit of History


Before organizations were even thinking about developing applications for the Internet, much
of the
application development focused on thick desktop applications. These thick-client applications
were
used for everything from home computing and gaming to office productivity and more. No end
was in
sight for the popularity of this application model.
During that time, Microsoft developers developed thick-client applications using mainly Visual
Basic
(VB).
Visual Basic was not only a programming language — it was tied to an IDE that allowed for
easy
thick-client application development. In the Visual Basic model, developers could drop controls
onto
a form, set properties for these controls, and provide code behind them to manipulate the
events of the
control. For example, when an end user clicked a button on one of the Visual Basic forms, the
code behind
the form handled the event.
Then, in the mid-1990s, the Internet arrived on the scene. Microsoft was unable to move the
Visual Basic
model to the development of Internet-based applications. The Internet definitely had a lot of
power,
and right away, the problems facing the thick-client application model were revealed. Internet-
based
applications created a single instance of the application that everyone could access. Having
one instance
of an application meant that when the application was upgraded or patched, the changes
made to this
single instance were immediately available to each and every user visiting the application
through a
browser.
To participate in the Web application world, Microsoft developed Active Server Pages (ASP).
ASP was
a quick and easy way to develop Web pages. ASP pages consisted of a single page that
contained a
mix of markup and languages. The power of ASP was that you could include VBScript or JScript
code
instructions in the page executed on the Web server before the page was sent to the end
user’s Web
browser. This was an easy way to create dynamic Web pages customized based on instructions
dictated
by the developer.
ASP used script between brackets and percentage signs — <% %> — to control server-side
behaviors. A
developer could then build an ASP page by starting with a set of static HTML. Any
dynamic element was then supplied by script code placed within the <% %> delimiters.

Microsoft .NET vs. J2EE: How Do They Stack Up?
by Jim Farley
08/01/2000

Even if you don't write code dedicated to Microsoft platforms, you have probably heard
by now about Microsoft .NET, Microsoft's latest volley in their campaign against all
things non-Windows. If you've read the media spin from Microsoft, or browsed through
the scant technical material available on the MSDN site, or even if you attended the
Microsoft Professional Developers' Conference (where the .NET platform was officially
"launched"), you're probably still left with at least two big questions:

• What exactly is the .NET platform?


• How does the .NET architecture measure up against J2EE?

And, if you think more long-term, you might have a third question rattling around your
head:

• What can we learn from the .NET architecture about pushing the envelope of
enterprise software development?

The .NET framework is at a very early stage in its lifecycle, and deep details are still
being eked out by the Microsoft .NET team. But we can, nevertheless, get fairly decent
answers to these questions from the information that's already out there.

What is it?

Current ruminations about .NET in various forums are reminiscent of the fable of the
three blind men attempting to identify an elephant: It's perceived as very different things,
depending on your perspective. Some see .NET as Microsoft's next-generation Visual
Studio development environment. Some see it as yet another new programming language
(C#). Some see it as a new data-exchange and messaging framework, based on XML and
SOAP. In reality, .NET wants to be all of these things, and a bit more.

First, let's get some concrete details. Here's one cut at an itemized list of the technical
components making up the .NET platform:

• C#, a "new" language for writing classes and components, that integrates
elements of C, C++, and Java, and adds additional features, like metadata tags,
related to component development.
• A "common language runtime", which runs bytecodes in an Internal Language
(IL) format. Code and objects written in one language can, ostensibly, be
compiled into the IL runtime, once an IL compiler is developed for the language.
• A set of base components, accessible from the common language runtime, that
provide various functions (networking, containers, etc.).
• ASP+, a new version of ASP that supports compilation of ASPs into the common
language runtime (and therefore writing ASP scripts using any language with an
IL binding).
• Win Forms and Web Forms, new UI component frameworks accessible from
Visual Studio.
• ADO+, a new generation of ADO data access components that use XML and
SOAP for data interchange.

How do .NET and J2EE compare?

As you can see, the .NET platform has an array of technologies under its umbrella.
Microsoft is ostensibly presenting these as alternatives to other existing platforms, like
J2EE and CORBA, in order to attract developers to the Windows platform. But how do
the comparisons play out item-by-item? One way to lay out the alternatives between
.NET and J2EE is shown in the following table:
Microsoft .NET: C# programming language
J2EE: Java programming language
Key differentiators: C# and Java both derive from C and C++. Most significant
features (e.g., garbage collection, hierarchical namespaces) are present in both. C#
borrows some of the component concepts from JavaBeans (properties/attributes,
events, etc.), adds some of its own (like metadata tags), but incorporates these
features into the syntax differently. Java runs on any platform with a Java VM; C#
only runs in Windows for the foreseeable future. C# is implicitly tied into the IL
common language runtime (see below), and is run as just-in-time (JIT) compiled
bytecodes or compiled entirely into native code. Java code runs as Java Virtual
Machine (JVM) bytecodes that are either interpreted in the VM or JIT compiled, or
can be compiled entirely into native code.

Microsoft .NET: .NET common components (aka the ".NET Framework SDK")
J2EE: Java core API
Key differentiators: High-level .NET components will include support for distributed
access using XML and SOAP (see ADO+ below).

Microsoft .NET: Active Server Pages+ (ASP+)
J2EE: JavaServer Pages (JSP)
Key differentiators: ASP+ will use Visual Basic, C#, and possibly other languages for
code snippets. All get compiled into native code through the common language
runtime (as opposed to being interpreted each time, like ASPs). JSPs use Java code
(snippets, or JavaBean references), compiled into Java bytecodes (either on-demand
or batch-compiled, depending on the JSP implementation).

Microsoft .NET: IL Common Language Runtime
J2EE: Java Virtual Machine, and CORBA IDL and ORB
Key differentiators: The .NET common language runtime allows code in multiple
languages to use a shared set of components, on Windows, and underlies nearly all of
the .NET framework (common components, ASP+, etc.). Java's Virtual Machine spec
allows Java bytecodes to run on any platform with a compliant JVM. CORBA allows
code in multiple languages to use a shared set of objects, on any platform with an
ORB available, but is not nearly as tightly integrated into the J2EE framework.

Microsoft .NET: Win Forms and Web Forms
J2EE: Java Swing
Key differentiators: Win Forms and Web Forms RAD development is supported
through the MS Visual Studio IDE; no other IDE support announced at this writing.
Swing support is available in many Java IDEs and tools. Similar web components
(e.g., based on JSP) are not available in the Java standard platform, though some
proprietary components are available through Java IDEs, etc.

Microsoft .NET: ADO+ and SOAP-based Web Services
J2EE: JDBC, EJB, JMS, and Java XML Libraries (XML4J, JAXP)
Key differentiators: ADO+ is built on the premise of XML data interchange (between
remote data objects and layers of multi-tier apps) on top of HTTP (AKA, SOAP).
.NET's web services in general assume SOAP messaging models. EJB, JDBC, etc.
leave the data interchange protocol at the developer's discretion, and operate on top
of either HTTP, RMI/JRMP or IIOP.

The comparisons in this table only scratch the surface. Here's an executive summary
of .NET vs. J2EE:

Features: .NET and J2EE offer pretty much the same laundry list of features, albeit in
different ways.

Portability: The .NET core works on Windows only but theoretically supports
development in many languages (once sub-/supersets of these languages have been
defined and IL compilers have been created for them). Also, .NET's SOAP capabilities will
allow components on other platforms to exchange data messages with .NET components.
While a few of the elements in .NET, such as SOAP and its discovery and lookup
protocols, are provided as public specifications, the core components of the framework
(IL runtime environment, ASP+ internals, Win Forms and Web Forms component
"contracts", etc.) are kept by Microsoft, and Microsoft will be the only provider of
complete .NET development and runtime environments. There has already been some
pressure by the development community for Microsoft to open up these specifications,
but this would be counter to Microsoft's standard practices.

Read more on the .NET platform in this in-depth interview by O'Reilly Windows editor
John Osborn:

Deep Inside C#: An Interview with Microsoft Chief Architect Anders Hejlsberg, in which
John gets to the bottom of not only Microsoft's detailed plans for the C# programming
language but also the .NET framework.

J2EE, on the other hand, works on any platform with a compliant Java VM and a
compliant set of required platform services (EJB container, JMS service, etc.). All of
the specifications that define the J2EE platform are published and reviewed publicly, and
numerous vendors offer compliant products and development environments. But J2EE is
a single-language platform. Calls from/to objects in other languages are possible through
CORBA, but CORBA support is not a ubiquitous part of the platform.

The Bigger Picture

These last points highlight some of the key differentiators between .NET and J2EE, and
point towards Microsoft's real play here. Microsoft is doing two very notable things
with .NET: It is opening up a channel to developers in other programming languages, and
it is opening up a channel to non-.NET components by integrating XML and SOAP into
their messaging scheme.

By allowing cross-language component interactions, .NET is enfranchising Perl, Eiffel,
Cobol, and other programmers by allowing them to play in the Microsoft sandbox.
Devotees of these languages are particularly amenable to gestures like this, since for the
most part they have felt somewhat disenfranchised and marginalized in the
Microsoft/Sun/Open Source wars. And by using XML and SOAP in their component
messaging layer, Microsoft is bolstering their diplomatic face and adding an element of
openness to their platform, providing ammunition against claims of proprietary behavior.

What's the correct response?

For Microsoft developers:

.NET is a good thing for those of you committed to Microsoft architectures. ASP+ is
better than ASP, ADO+ is better, but different, than ADO and DCOM, C# is better than
C and C++. The initial version of .NET won't be real until sometime in 2001, so you have
some time to prepare, but this will undoubtedly become the default development
environment for Microsoft platforms. And if you're developing within the Microsoft
development framework now, you will undoubtedly benefit from adopting elements of
the .NET framework into your architectures.

However, several of the goals of the .NET platform are fairly lofty and not at all
guaranteed to fly, at least not in the short term. The IL common language runtime, for
example, has some fairly significant hurdles to overcome before it has any real payoff for
developers. Each language that wants to integrate with the component runtime has to
define a subset/superset of the language that maps cleanly into and out of the IL runtime,
and has to define constructs that provide the component metadata that IL requires. Then
compilers (x-to-IL and IL-to-x) will have to be developed to both compile language
structures (objects, components, etc.) into IL component bytecodes, and also generate
language-specific interfaces to existing IL components.
There is some historical precedent here. Numerous bridges from non-Java languages to
the Java VM have been developed, such as JPython, PERCobol, the Tcl/Java project,
and interestingly enough, Bertrand Meyer and some other Eiffel folks put together an
Eiffel-to-JavaVM system a few years back. With the possible exception of JPython,
these tools have not been widely adopted, even within their respective language
communities, even though they seem to offer a way to write code for the Java
environment (albeit not the entire J2EE framework) using your favorite language. Why
this lack of enthusiasm? I believe it's because people are hesitant to take on the headaches
of adding yet another translation step from their development language to the target
framework. If the Java environment is the goal, people will generally choose to learn
Java. I predict that the same will be true of .NET: People will generally choose to learn
C# and write .NET components in that language.

Another caution: Beware of performance issues with .NET's SOAP-based distributed
communications. SOAP essentially means XML over HTTP. HTTP is not a
high-performance data protocol, and XML implies an XML parsing layer, which implies more
compute overhead. The combination of both could significantly reduce transaction rates
relative to alternative messaging/communications channels. XML is a very rich, robust
metalanguage for messaging, and HTTP is very portable and avoids many firewall issues.
But if transaction rates are a priority for you, keep your options open.

For the Java and Open Source communities:

It would be easy to dismiss .NET as more Microsoft marketing-ware and continue on
your merry way. But don't. .NET is a sign of a subtle but significant shift in Microsoft's
strategy to evangelize their platforms. They have been fighting alternative frameworks
and platforms at the management level pretty well, touting the usual questionable
"statistics" about cost of ownership and seamless integration. Now they are fighting Java
and open source initiatives on their own terms, putting their own spin on "open" and
attempting to directly address the needs of developers, two things that they have been
faulted for not doing very well in the past. If you consider yourself an evangelist for Java
or open source platforms, then the nature of the war is changing. Be prepared.

Also, Microsoft's IL runtime has at least one notable, if improbable, goal: eliminate the
programming language as a barrier to entry to the framework. Java eliminates the
platform barrier (within limits, of course: You can't make up for missing hardware
resources with software, for example), but in order to work in J2EE, you have to work in
Java. .NET wants to let you use the language of your choice to build .NET applications.
This is admirable, though there are big questions as to whether and when the IL approach
in .NET will actually become broadly useful (see above). Regardless, this points to a
weakness in the single-language J2EE approach. The importance of this weakness is
questionable, but it exists nonetheless, and deserves some consideration by the Java
community. If this is really desired by developers, then maybe the efforts in Java
bytecode generators for non-Java languages should be organized and consolidated.

Focusing on J2EE, there are a few issues that should be addressed immediately in order
to bolster the advantages of that platform compared to what .NET is shooting for. First,
XML support needs to be integrated seamlessly into the framework. I'm not talking about
bolting an XML SAX/DOM parser to the set of standard services, or extending the use of
XML in configuration files. XML messaging and manipulation need to be there, ready to
use. Admittedly, you can use XML payloads on top of JMS messaging, but the platform
doesn't facilitate this at all. The XML space is a cluttered mess of standards, de facto
standards, APIs and DTDs, which is to be expected when you're dealing with a
metalanguage.

But Microsoft has put a stake in the ground with SOAP, and they're pushing hard to put
something understandable and useful in the hands of developers. J2EE proponents need
to do the same with their platform. One possibility that comes to mind is to add an XML
messaging "provider" layer on top of JMS, along the lines of the pattern followed by
Java Naming and Directory Interface, or JNDI, with LDAP, NIS, COS Naming, etc.
This in combination with a standard SOAP/BizTalk provider, an ebXML provider, etc.
would be an impressive statement.

ARCHITECTURE OF THE .NET FRAMEWORK

The .NET Framework represents a unified, object-oriented set of services and libraries
that embrace the changing role of new network-centric and network-aware software. In
fact, the .NET Framework is the first platform designed from the ground up with the
Internet in mind.
This chapter introduces the .NET Framework in terms of the benefits it provides. I
present some sample code in Visual C# .NET, Visual Basic .NET, Visual Basic 6.0, and
Visual C++; don't worry if you're not familiar with these languages, since I describe in the
discussion what each sample does.
Benefits of the .NET Framework
The .NET Framework offers a number of benefits to developers:
• A consistent programming model
• Direct support for security
• Simplified development efforts
• Easy application deployment and maintenance
Consistent programming model
Different programming languages offer different models for doing the same thing. For
example, opening a file and writing a one-line message to it requires completely different
code in Visual Basic 6.0 than it does in Visual C++.
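In the .NET Framework, by contrast, every language performs that same task through the
same class library. A minimal C# sketch (an illustration of the unified model, not the
chapter's original sample; the file name is arbitrary):

```csharp
// Opening a file and writing a one-line message with the .NET class library.
// The same System.IO classes are used from C#, Visual Basic .NET, or managed C++.
using System.IO;

class FileDemo
{
    static void Main()
    {
        // StreamWriter creates (or overwrites) the file and buffers the output;
        // the using block closes and flushes it automatically.
        using (StreamWriter writer = new StreamWriter("message.txt"))
        {
            writer.WriteLine("Hello from the .NET Framework");
        }
    }
}
```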
Elements of the .NET Framework
The .NET Framework consists of three key elements (as shown in Figure 1-1):

• Common Language Runtime
• .NET Class Library
• Unifying components

Figure 1-1: Components of the .NET Framework
Common Language Runtime
The Common Language Runtime (CLR) is a layer between an application and the
operating system it executes on. The CLR simplifies an application's design and reduces
the amount of code developers need to write because it provides a variety of execution
services that include memory management, thread management, component lifetime
management, and default error handling. The key benefit of the CLR is that it
transparently provides these execution services to all applications, regardless of what
programming language they're written in and without any additional effort on the part of
the developer.
The CLR is also responsible for compiling code just before it executes. Instead of
producing a binary representation of your code, as traditional compilers do, .NET
compilers produce a representation of your code in a language common to the .NET
Framework: Microsoft Intermediate Language (MSIL), often referred to as IL. When your
code executes for the first time, the CLR invokes a special compiler called a Just In Time
(JIT) compiler, which transforms the IL into executable instructions that are specific to
the type and model of your system's processor. Because all .NET languages have the
same compiled representation, they all have similar performance characteristics. This
means that a program written in Visual Basic .NET can perform as well as the same
program written in Visual C++ .NET. (C++ is the language of choice for developers who
need the best possible performance a system can deliver.)
Common Type System
The Common Type System (CTS) is a component of the CLR that provides a common
set of data types, each having a common set of behaviors. For example, System.Int32 is
the same type whether it is written as int in C# or Integer in Visual Basic .NET.
.NET Class Library
In an earlier section, "Consistent programming model,"
the .NET Class Library was described as containing hundreds of classes that model the
system and services it provides. To make the .NET Class Library easier to work with and
understand, it's divided into namespaces. The root namespace of the .NET Class Library
is called System, and it contains core classes and data types, such as Int32, Object,
Array, and Console. Secondary namespaces reside within the System namespace.
Examples of nested namespaces include the following:
System.Diagnostics: Contains classes for working with the Event Log
System.Data: Makes it easy to work with data from multiple data
sources (System.Data.OleDb resides within this namespace and
contains the ADO.NET classes)
System.IO: Contains classes for working with files and data streams
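A short sketch of how these namespaces appear in code (C#; the class names are taken
from the list above, the file name is arbitrary):

```csharp
using System;                // core types: Int32, Object, Array, Console
using System.IO;             // classes for files and data streams
using System.Diagnostics;    // classes for working with the Event Log

class NamespaceDemo
{
    static void Main()
    {
        Int32 answer = 42;                        // a core type from System
        File.WriteAllText("log.txt", "started");  // a file helper from System.IO
        Console.WriteLine(answer);                // console output from System
    }
}
```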
Figure 1-2 illustrates the relationship between some of the major namespaces in the
.NET Class Library.
Figure 1-2: Organization of the .NET Class Library
The benefits of using the .NET Class Library include a consistent set of services
available to all .NET languages and simplified deployment, because the .NET Class
Library is available on all implementations of the .NET Framework.
Unifying components
Until this point, this chapter has covered the low-level components of the .NET
Framework. The unifying components, listed next, are the means by which you can
access the services the .NET Framework provides:
ASP.NET

METADATA

.NET metadata, in the Microsoft .NET framework, refers to certain data structures
embedded within the Common Intermediate Language code that describes the high-level
structure of the code. Metadata describes all classes and class members that are defined in
the assembly, and the classes and class members that the current assembly will call from
another assembly. The metadata for a method contains the complete description of the
method, including the class (and the assembly that contains the class), the return type and
all of the method parameters.

A .NET language compiler will generate the metadata and store this in the assembly
containing the CIL. When the CLR executes CIL it will check to make sure that the
metadata of the called method is the same as the metadata that is stored in the calling
method. This ensures that a method can only be called with exactly the right number of
parameters and exactly the right parameter types.


Attributes

Developers can add metadata to their code through attributes. There are two types of
attributes, custom and pseudo custom attributes, and to the developer these have the same
syntax. Attributes in code are messages to the compiler to generate metadata. In CIL,
metadata such as inheritance modifiers, scope modifiers, and almost anything that isn't
either opcodes or streams, are also referred to as attributes.

A custom attribute is a regular class that inherits from the Attribute class. A custom
attribute can be used on any method, property, class or entire assembly with the syntax:
[AttributeName(optional parameter, optional name=value pairs)] as in:

[Custom]
[Custom(1)]
[Custom(1, Comment="yes")]

Custom attributes are used by the .NET Framework extensively. Windows
Communication Foundation uses attributes to define service contracts, ASP.NET uses
these to expose methods as web services, LINQ to SQL uses them to define the mapping
of classes to the underlying relational schema, and Visual Studio uses them to group
together properties of an object; the class developer indicates the category for the object's
class by applying the [Category] custom attribute. Custom attributes are interpreted by
application code and not the CLR. When the compiler sees a custom attribute it will
generate custom metadata that is not recognised by the CLR. The developer has to
provide code to read the metadata and act on it. As an example, the attribute shown in the
example can be handled by the code:

class CustomAttribute : Attribute
{
    private int paramNumber = 0;
    private string comment = "";

    public CustomAttribute() { }
    public CustomAttribute(int num) { paramNumber = num; }

    public String Comment
    {
        set { comment = value; }
    }
}
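Reading the attribute back at run time is left to application code, as noted above. A
minimal sketch using reflection (the Target class here is hypothetical, added only to
carry the attribute):

```csharp
using System;

[Custom(1, Comment = "yes")]
class Target { }

class Reader
{
    static void Main()
    {
        // retrieve every CustomAttribute instance attached to Target
        object[] attrs = typeof(Target).GetCustomAttributes(
            typeof(CustomAttribute), false);

        // application code can now inspect the instances and act on them
        Console.WriteLine(attrs.Length);
    }
}
```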

The name of the class is mapped to the attribute name. The Visual C# compiler
automatically adds the string "Attribute" at the end of any attribute name.
Consequently every attribute class name should end with this string, but it is legal to
define an attribute without the Attribute-suffix. When affixing an attribute to an item,
the compiler will look for both the literal name and the name with Attribute added to
the end, i.e. if you were to write [Custom] the compiler would look for both Custom and
CustomAttribute. If both exist, the compiler fails. The attribute can be prefixed with "@"
if you don't want to risk ambiguity, so writing [@Custom] will not match
CustomAttribute. Using the attribute invokes the constructor of the class. Overloaded
constructors are supported. Name-value pairs are mapped to properties: the name denotes
the property to set, and the value supplied is assigned through that property.
Sometimes there is ambiguity concerning what you are affixing the attribute to. Consider
the following code:

[Orange]
public int ExampleMethod(string input)
{
//method body goes here
}

Here [Orange] could describe either the method or its return value; C# lets you state the
target explicitly, as in [method: Orange] or [return: Orange].

ASSEMBLIES

What is an assembly?

• An Assembly is a logical unit of code
• Assemblies physically exist as DLLs or EXEs
• One assembly can contain one or more files
• The constituent files can include any file types like image files, text files etc.
along with DLLs or EXEs
• When you compile your source code by default the exe/dll generated is actually
an assembly
• Unless your code is bundled as an assembly it cannot be used in any other
application
• When you talk about version of a component you are actually talking about
version of the assembly to which the component belongs.
• Every assembly file contains information about itself. This information is called
the Assembly Manifest.

What is assembly manifest?

• Assembly manifest is a data structure which stores information about an assembly
• This information is stored within the assembly file (DLL/EXE) itself
• The information includes version information, list of constituent files etc.

What is private and shared assembly?

The assembly which is used only by a single application is called a private assembly.
Suppose you created a DLL which encapsulates your business logic. This DLL will be
used by your client application only and not by any other application. In order to run the
application properly your DLL must reside in the same folder in which the client
application is installed. Thus the assembly is private to your application.

Suppose that you are creating a general purpose DLL which provides functionality which
will be used by a variety of applications. Now, instead of each client application having its
own copy of the DLL, you can place the DLL in the 'global assembly cache'. Such
assemblies are called shared assemblies.

What is Global Assembly Cache?

The Global Assembly Cache is a special disk folder where all the shared assemblies are
kept. It is located in the <drive>:\WinNT\Assembly folder.

How assemblies avoid DLL Hell?

As stated earlier, most assemblies are private. Hence each client application refers to
assemblies in its own installation folder. So, even though there are multiple versions of
the same assembly, they will not conflict with each other. Consider the following example:

• You created assembly Assembly1
• You also created a client application which uses Assembly1, say Client1
• You installed the client in C:\MyApp1 and also placed Assembly1 in this folder
• After some days you changed Assembly1
• You now created another application Client2 which uses this changed Assembly1
• You installed Client2 in C:\MyApp2 and also placed changed Assembly1 in this
folder
• Since both the clients are referring to their own versions of Assembly1 everything
goes on smoothly

Now consider the case when the assembly you develop is a shared one. In this case it is
important to know how assemblies are versioned. All assemblies have a version number in
the form:

major.minor.build.revision

If you change the original assembly the changed version will be considered compatible
with existing one if the major and minor versions of both the assemblies match.

When the client application requests an assembly, the requested version number is
matched against the available versions, and the version matching the major and minor
version numbers and having the latest build and revision numbers is supplied.
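In source code, this version number is declared with an assembly-level attribute; for
example, in C# (the version value is arbitrary):

```csharp
using System.Reflection;

// major.minor.build.revision
[assembly: AssemblyVersion("1.2.3.4")]
```

Under the matching rule above, a client built against version 1.2.x.x would be handed the
1.2 build with the latest build and revision numbers available.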

How do I create shared assemblies?

Following steps are involved in creating shared assemblies:

• Create your DLL/EXE source code
• Generate a unique assembly name using the SN utility
• Sign your DLL/EXE with the private key by modifying the AssemblyInfo file
• Compile your DLL/EXE
• Place the resultant DLL/EXE in the global assembly cache using the AL utility

How do I create a unique assembly name?

Microsoft now uses a public-private key pair to uniquely identify an assembly. These
keys are generated using a utility called SN.exe (SN stands for strong name). The most
common syntax is:

sn -k mykeyfile.key

where -k indicates that we want to generate a key pair, and the file name that follows is
the file in which the keys will be stored.
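Put together, the steps above might look like this at the command line (a sketch only;
the compiler switches are assumptions, and gacutil, the SDK's cache-install tool, is shown
as a common alternative to the AL approach mentioned above):

```shell
# generate the public-private key pair (strong name)
sn -k mykeyfile.key

# compile the DLL; AssemblyInfo references mykeyfile.key for signing
csc /target:library /out:MyLib.dll MyLib.cs

# install the signed assembly into the global assembly cache
gacutil /i MyLib.dll
```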

An assembly in ASP.NET is a collection of one or more files. An assembly
that has more than one file contains either a dynamic link library (DLL) or an EXE
file. The assembly also contains metadata that is known as assembly manifest. The
assembly manifest contains data about the versioning requirements of the assembly,
author name of the assembly, the security requirements that the assembly requires
to run, and the various files that form part of the assembly.

The biggest advantage of using ASP.NET Assemblies is that developers can create
applications without interfering with other applications on the system. When the
developer creates an application
that requires an assembly that assembly will not affect other applications. The
assembly used for one application is not applied to another application. However, one
assembly can be shared with other applications. In this case a copy of the assembly has to
be placed in the bin directory of each application that uses it.

This is in contrast to DLLs in the past. Earlier, developers used to share libraries of
code through DLLs. To use a DLL developed by another developer for
another application, you had to register that DLL on your machine. In ASP.NET, the
assembly is created by default whenever you build a DLL. You can check the details
of the manifest of the assembly by using classes located in the System.Reflection
namespace.

Thus you can create two types of ASP.NET Assemblies in ASP.NET: private ASP.NET
Assemblies and shared assemblies. Private ASP.NET Assemblies are created when
you build component files like DLLs that can be applied to one application. Shared
ASP.NET Assemblies are created when you want to share the component files across
multiple applications. Shared ASP.NET Assemblies must have a unique name and
must be placed in Global Assembly Cache (GAC). The GAC is located in the Assembly
directory in WinNT. You can view both the manifest and the IL using ILDisassembler
(ildasm.exe).

APPLICATION DOMAINS
.NET Application Domains
Overview

Before the .NET Framework, the only way to isolate applications running on the same
machine was by means of process boundaries. Each application ran within a process,
and each process has its own boundaries and memory addresses relative to it; this is how
isolation from other processes was performed.

The .NET Framework introduces an additional boundary called the application domain.
Each application runs within its main process boundaries and its application domain
boundaries. So, you can think of the application domain as an extra shell that isolates the
application, making it more secure and robust.

The above is not the main advantage of application domains. The main advantage is the
ability to run several application domains in a single process or application. All of this is
performed while maintaining the same level and quality of isolation that would exist in
separate processes, without the need for making cross-process calls or switching between
processes.

Advantages

You may ask, why should I create more than one application domain within my
application?

The following advantages of application domains answer this question.

• In terms of isolation, code running in one application domain cannot access code
or resources running in another application domain.
• In terms of security, you can run more than one set of web controls in a single
browser process. Each set of them is running in a separate application domain, so
each one cannot access the data or resources of the other sets. You can control
the permissions granted to a given piece of code by controlling the application
domain inside which the code is running.
• In terms of robustness, a fault in code running in one application domain cannot
affect other applications, although they all are running inside the same process.
An individual application domain can be stopped without stopping the entire
process; you can simply unload the code running in a single application domain.

So, from the above advantages, you can observe that by using application domains you
can create rather robust .NET applications. It increases isolation, stability, and security of
your application.
Relation Between Application Domains and Assemblies

Most development and runtime environments have a definition for the building blocks of
an application. Assemblies are the building blocks of .NET framework applications. They
are the fundamental unit of deployment. An assembly consists of types and resources
working together to form a logical unit of the functionality of your application. You can
divide your .NET application into assemblies. The assembly file can have an .EXE or
a .DLL extension.

As we mentioned previously, you can run more than one application domain within your
application. Each application domain will run a given piece of code. An assembly is
simply the piece of code we mean here. So, each application domain can run an assembly
within the entire application. This is the relation between application domains and
assemblies.

Creating an Application Domain

"System.AppDomain" is the main class you can use to deal with application domains. To
create an application domain, use one of the overloaded "CreateDomain" methods in this
class.

The following piece of code creates an application domain and assigns a name to it.

Dim NDomain As AppDomain
NDomain = AppDomain.CreateDomain("Domain1")

The above code declares an "NDomain" variable of type "AppDomain", then calls the
"CreateDomain" method, giving it a string that represents the name of the newly created
application domain.

You can configure the newly created application domain by using the
"AppDomainSetup" class with its most important property "ApplicationBase" which
defines the root directory for this application domain. You can also use this class to
control many settings for the newly created application domain like application name,
cache path, configuration file, license file, and others.

You can use this class as shown in the following code.

Dim NDomainInfo As New AppDomainSetup
NDomainInfo.ApplicationBase = "C:\AppDomains\Ex2"

Dim NDomain As AppDomain
NDomain = AppDomain.CreateDomain("Domain1", Nothing, NDomainInfo)
MsgBox(NDomain.BaseDirectory())

When you run this code, you will get the following message box showing the base
directory of the newly created application domain.

Figure 1 - The message box showing the base directory of the created application domain

You can obtain setup information of a newly created application domain by using the
newly created instance of the "AppDomain" class. You can obtain information like the
friendly name of the application domain, the base directory of the application domain, the
relative search path of the application domain, and others.

The following code shows how we can obtain this information.

Dim NDomainInfo As New AppDomainSetup
NDomainInfo.ApplicationBase = "C:\AppDomains\Example2"

Dim NDomain As AppDomain
NDomain = AppDomain.CreateDomain("Domain1", Nothing, NDomainInfo)

MsgBox(NDomain.FriendlyName)
MsgBox(NDomain.BaseDirectory())
MsgBox(NDomain.RelativeSearchPath)

Loading Assemblies

As we mentioned above, each application domain can run an assembly. The running
assembly can be shared between application domains by creating a private copy of that
assembly for each domain. These copies are created by the runtime host (.NET) so it is
not your responsibility anyway.

You can load an assembly into an application domain by using more than one method.
Each method uses a different class and technique.

You can use one of the many overloaded "Load" methods provided by the
"System.AppDomain" class. These methods are mainly used for COM interoperability
but they can be used successfully to load an assembly in this instance of application
domain. You can also use the "CreateInstance" method of the "System.AppDomain" class
for the same reason.

You can also use the two static methods "Load" and "LoadFrom" of the
"System.Reflection.Assembly" to load an assembly into the caller domain.
To use the "Load" method of the "System.AppDomain" class, you have to define an
assembly object first. You will pass to that object the file path of your assembly EXE or
DLL file. After that you will need to get the display name of this assembly and to pass it
to the "Load" function.
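The steps just described can be sketched in C# (the path is the document's Example2.exe
example; error handling is omitted, and the assembly must be resolvable from the new
domain's application base for the Load call to succeed):

```csharp
using System;
using System.Reflection;

class LoadDemo
{
    static void Main()
    {
        AppDomain domain = AppDomain.CreateDomain("Domain1");

        // read the assembly's display name from its file...
        AssemblyName name = AssemblyName.GetAssemblyName(
            @"C:\AppDomains\Example2.exe");

        // ...and pass that display name to Load
        domain.Load(name.FullName);

        AppDomain.Unload(domain);
    }
}
```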

You can also use "System.AppDomain.ExecuteAssembly" method to execute an


assembly given its file name path as shown in the following line of code.

NDomain.ExecuteAssembly("C:\AppDomains\Example2.exe")

When you run the above code, the EXE file located at "C:\AppDomains\" will be executed
and will display its form (a hello message) as shown in the next figure.

Figure 2 - Example2.exe running from Example1.exe in a separate application domain

To use the "LoadFrom" method of the "System.Reflection.Assembly" class, see the
following code.

Dim Assm As System.Reflection.Assembly
Assm = System.Reflection.Assembly.LoadFrom( _
    "C:\AppDomains\Example2.exe")

This will load the assembly "Example2.exe" in the currently running application domain.
You can use the "Assm" instance to access information and properties about the
"Example2.exe" assembly, like its name, version, modules, and functions. You can also
invoke a specified method that is located inside the "Example2.exe" assembly.

The following code displays a set of message boxes: the first displays the location of
the assembly, the second displays the name of the assembly, and the third displays the
version of the loaded assembly.

MsgBox(Assm.Location)
MsgBox(Assm.GetName.Name)
MsgBox(Assm.GetName.Version.ToString)

Figure 3 - Message box showing the name of the loaded assembly


Figure 4 - Message box showing the version of the loaded assembly

Unloading

When you have finished using a newly created application domain, you should unload it
using the "System.AppDomain.Unload" method to shut down the application domain and
free all its resources. To unload an assembly from an application domain, you simply
unload the application domain itself by using the "Unload" method.
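A minimal C# sketch of the whole create/execute/unload cycle, assuming the same example path used earlier in this tutorial:

```csharp
using System;

class UnloadExample
{
    static void Main()
    {
        // Create a new application domain and run an assembly inside it.
        AppDomain domain = AppDomain.CreateDomain("NDomain");
        domain.ExecuteAssembly(@"C:\AppDomains\Example2.exe");

        // Unload the whole domain when finished; this frees its resources
        // and unloads every assembly that was loaded into it.
        AppDomain.Unload(domain);
    }
}
```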


FEATURES OF .NET

ASP.NET
ASP.NET introduces two major features: Web Forms and Web Services.
Web Forms
Developers not familiar with Web development can spend a great deal of time, for
example, figuring out how to validate the e-mail address on a form. You can validate the
information on a form by using a client-side script or a server-side script. Deciding which
kind of script to use is complicated by the fact that each approach has its benefits and
drawbacks, some of which aren't apparent unless you've done substantial design work.
If you validate the form on the client by using client-side JScript code, you need to take
into consideration the browser that your users may use to access the form. Not all
browsers expose exactly the same representation of the document to programmatic
interfaces. If you validate the form on the server, you need to be aware of the load that
users might place on the server. The server has to validate the data and send the result
back to the client. Web Forms simplify Web development to the point that it becomes as
easy as dragging and dropping controls onto a designer (the surface that you use to edit
a page) to design interactive Web applications that span from client to server.
Web Services
A Web service is an application that exposes a programmatic interface through standard
access methods. Web Services are designed to be used by other applications and
components and are not intended to be useful directly to human end users. Web
Services make it easy to build applications that integrate features from remote sources.
For example, you can write a Web Service that provides weather information for
subscribers of your service instead of having subscribers link to a page or parse through
a file they download from your site. Clients can simply call a method on your Web
Service as if they are calling a method on a component installed on their system — and
have the weather information available in an easy-to-use format that they can integrate
into their own applications or Web sites with no trouble.
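As an illustration of such a service, a classic ASMX-style Web Service in C# might look like the sketch below. The class name, namespace URL, and method are hypothetical, not part of any real service:

```csharp
using System.Web.Services;

// A hypothetical weather service; a real one would look the forecast up
// from live data instead of returning a fixed string.
[WebService(Namespace = "http://example.com/weather")]
public class WeatherService : WebService
{
    [WebMethod]
    public string GetForecast(string city)
    {
        return "Sunny in " + city;
    }
}
```

Clients generated from this service's WSDL can then call GetForecast as if it were a local method.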

The new features are described below.


Master Pages

ASP.NET didn't have a built-in way to apply a consistent look and feel across a whole web
site.

Master pages in ASP.NET 2.0 solve this problem.

A master page is a template for other pages, with shared layout and functionality. The
master page defines placeholders for content pages. The result page is a combination
(merge) of the master page and the content page.


Themes

Themes are another feature of ASP.NET 2.0. Themes, or skins, allow developers to create
a customized look for web applications.

Design goals for ASP.NET 2.0 themes:

• Make it simple to customize the appearance of a site


• Allow themes to be applied to controls, pages, and entire sites
• Allow all visual elements to be customized

Web Parts

ASP.NET 2.0 Web Parts can provide a consistent look for a site, while still allowing user
customization of style and content.

New controls:

• Zone controls - areas on a page where the content is consistent


• Web part controls - content areas for each zone

Navigation

ASP.NET 2.0 has built-in navigation controls like

• Site Maps
• Dynamic HTML menus
• Tree Views
Security

Security is very important for protecting confidential and personal information.

In ASP.NET 2.0 the following controls have been added:

• A Login control, which provides login functionality


• A LoginStatus control, to control the login status
• A LoginName control to display the current user name
• A LoginView control, to provide different views depending on login status
• A CreateUser wizard, to allow creation of user accounts
• A PasswordRecovery control, to provide the "I forgot my password" functionality

Roles and Personalization

Internet communities are becoming very popular.

ASP.NET 2.0 has personalization features for storing user details. This provides an easy
way to customize user (and user group) properties.

Internationalization

Reaching people with different languages is important if you want to reach a larger
audience.

ASP.NET 2.0 has improved support for multiple languages.

Data Access

Many web sites are data driven, using databases or XML files as data sources.

With ASP.NET this involved code, and often the same code had to be used over and over
in different web pages.

A key goal of ASP.NET 2.0 was to ease the use of data sources.

ASP.NET 2.0 has new data controls, removing much of the need for programming and
in-depth knowledge of data connections.
Mobility Support

The problem with mobile devices is screen size and display capabilities.

In ASP.NET, the Microsoft Mobile Internet Toolkit (MMIT) provided this support.

In ASP.NET 2.0, MMIT is no longer needed because mobile support is built into all
controls.

Images

ASP.NET 2.0 has new controls for handling images:

• The ImageMap control - image map support


• The DynamicImage control - image support for different browsers

These controls are important for better image display on mobile devices, like hand-held
computers and cell phones.

Automatic Compilation

ASP.NET 2.0 provides automatic compilation. All files within a directory will be
compiled on the first run, including support for WSDL and XSD files.

Compiled Deployment (Installation) and Source Protection

ASP.NET 2.0 also provides pre-compilation. An entire web site can be pre-compiled.
This provides an easy way to deploy (upload to a server) compiled applications, and
because only compiled files are deployed, the source code is protected.

Site Management

ASP.NET 2.0 has three new features for web site configuration and management:

• New local management console


• New programmable management functions (API)
• New web-based management tool
Development Tools

With ASP.NET, Visual Studio .NET was released with project and design features
targeted at corporate developers.

With ASP.NET 2.0, Visual Studio 2005 was released.

Key design features for Visual Studio 2005 include:

• Support for the features described above


• Upload files from anywhere (FTP, File System, Front Page....)
• No project files, allowing code to be manipulated outside Visual Studio
• Integrated Web Site Administration Tool
• No "build" step - ability to compile on first run

Visual Web Developer is a new, free ASP.NET development tool.

ADVANTAGES AND APPLICATION OF .NET

.NET Framework Advantages

The .NET Framework offers a number of advantages to developers. The following
paragraphs describe them in detail.

Consistent Programming Model

Different programming languages have different approaches for doing a task. For
example, accessing data with a VB 6.0 application and a VC++ application is totally
different. When different programming languages are used to do a task, a disparity
exists in the approaches developers use to perform it. The difference in techniques
comes from how different languages interact with the underlying system that
applications rely on.

With .NET, for example, accessing data with a VB .NET application and a C# application
looks very similar apart from slight syntactical differences. Both the programs need to import the
System.Data namespace, both the programs establish a connection with the database and
both the programs run a query and display the data on a data grid. The VB 6.0 and VC++
example mentioned in the first paragraph explains that there is more than one way to do a
particular task within the same language. The .NET example explains that there's a
unified means of accomplishing the same task by using the .NET Class Library, a key
component of the .NET Framework.
The functionality that the .NET Class Library provides is available to all .NET languages
resulting in a consistent object model regardless of the programming language the
developer uses.

Direct Support for Security

Developing an application that resides on a local machine and uses local resources is
easy. In this scenario, security isn't an issue as all the resources are available and accessed
locally. Consider an application that accesses data on a remote machine or has to perform
a privileged task on behalf of a nonprivileged user. In this scenario security is much more
important as the application is accessing data from a remote machine.

With .NET, the Framework enables the developer and the system administrator to specify
method level security. It uses industry-standard protocols such as TCP/IP, XML, SOAP
and HTTP to facilitate distributed application communications. This makes distributed
computing more secure because .NET developers cooperate with network security
devices instead of working around their security limitations.

Simplified Development Efforts

Let's take a look at this with Web applications. With classic ASP, when a developer
needed to present data from a database in a Web page, he was required to write the
application logic (code) and presentation logic (design) in the same file. He had
to mix the ASP code with the HTML code to get the desired result.

ASP.NET and the .NET Framework simplify development by separating the application
logic and presentation logic making it easier to maintain the code. You write the design
code (presentation logic) and the actual code (application logic) separately eliminating
the need to mix HTML code with ASP code. ASP.NET can also handle the details of
maintaining the state of the controls, such as contents in a textbox, between calls to the
same ASP.NET page.

Another advantage of creating applications is debugging. Visual Studio .NET and other
third party providers provide several debugging tools that simplify application
development. The .NET Framework simplifies debugging with support for Runtime
diagnostics. Runtime diagnostics helps you to track down bugs and also helps you to
determine how well an application performs. The .NET Framework provides three types
of Runtime diagnostics: Event Logging, Performance Counters and Tracing.

Easy Application Deployment and Maintenance

The .NET Framework makes it easy to deploy applications. In the most common form, to
install an application, all you need to do is copy the application along with the
components it requires into a directory on the target computer. The .NET Framework
handles the details of locating and loading the components an application needs, even if
several versions of the same application exist on the target computer. The .NET
Framework ensures that all the components the application depends on are available on
the computer before the application begins to execute.

The .NET Framework (DotNet) is a Microsoft initiative aimed at reshaping the
computing world. More specifically, it is a large set of development tools, servers,
software, and services. Its main advantages for the user are the creation of an integrated
information space connecting him or her with computers and programs, as well as
connecting software applications together. For developers, the value of DotNet lies in
interoperability and the seamless connectivity of multiple systems and sources of data.
This empowers them to create the required products quickly and easily.

The IT department manager of every company has a dream -- an enterprise that performs
all business transactions with partners exclusively over the Internet, with no headaches
about the business processes. For this to happen, the processes must be well designed,
stable, and easily customized and controlled, both from the local network and from any
computer on the Internet. All of a company's employees should have uniform access to work
information, e-mail, and personal documents, whether they use a mobile phone, Pocket
PC, notebook, or high-end workstation.

Nowadays, in an age of rapid e-commerce development, the existing tools for creating
digital marketplaces do not always meet business needs. Among the new means developed
for this field, a major breakthrough belongs to XML Web services. For a long time,
software engineering relied on services provided by external software. When it became
clear that it is easier to create a universal information storage facility once and
integrate it into different programs than to invent a new one each time, the first
Database Management Systems appeared. The next step was the creation of
messaging and collaboration systems, e.g. Lotus Notes and Exchange, which
simultaneously served as development platforms. Then came products
providing message delivery (Message-Oriented Middleware), such as IBM MQSeries and
MSMQ. They made it possible to organize message exchange in distributed systems with
manifold (and often unreliable) communication links. They differed from mail servers in
that they were oriented toward information exchange not between people but between
various parts of program systems. Finally, one of the latest tendencies has been
Application Servers and Enterprise Application Integration Servers. The former allow
developers to build scalable solutions out of simple software components, giving them
ready-made support for distributed transactions, control of access to shared resources
(particularly database connections), and so on. An Enterprise Application Integration
Server acts as a glue, serving as the intermediary among existing program systems and
helping them process data and exchange requests. Web services enhance and extend the
value of these existing technologies. They allow an object's methods to be called over
the Internet via HTTP. As a result, programs written in any language, and running on any
operating system, can access .NET applications implemented as web services. By
introducing common, well-known standards for interaction between software, Web service
technology allows for the creation of intercorporate information systems without
protracted coordination of proprietary interfaces. In addition, the use of HTTP as the
transport mechanism allows remote calls to these services to pass through corporate
firewalls without compromising security. Web services existed before .NET was
introduced, but the .NET Framework makes the creation of web services far easier than
it otherwise would be.

Breaking down the distinctions between the Internet, standalone applications, and
computing devices of every kind, Web services provide businesses with the opportunity to
collaborate and to offer an unprecedented range of integrated and customized solutions -
solutions that enable their customers to act on information any time, any place and on any
device.

DotNet technology offers other far-reaching benefits for IT professionals. It enables
programmers to develop powerful information systems using all the capabilities of modern
computers and networks without implementing helper functions themselves -- almost
all of these functions are subsumed into the platform. This lets them concentrate solely
on the business logic of the product. Thus developers will be able to quickly create
high-quality (and easy!) programs with a multitude of integrated Internet capabilities
while reducing costs.

Built on XML Web service standards, Microsoft .NET-connected software enables both
new and existing applications to connect with software and services across platforms,
applications, and programming languages. DotNet is already shifting the focus from
individual Web sites or devices connected to the Internet to constellations of computers,
devices, and services that work together to deliver more comprehensive programs.


MODULE 6 C#

BASICS OF OOPS

Fundamentals of Object-Oriented
Programming
The goals of this tutorial are to guide you through the terminology of object-oriented
programming (OOP) and to give you an understanding of the importance of
object-oriented concepts to programming. Many languages, such as C++ and Microsoft Visual
Basic, are said to "support objects," but few languages actually support all the principles
that constitute object-oriented programming. C# is one of these languages: it was
designed from the ground up to be a truly object-oriented, component-based language.
So, to get the absolute maximum out of this book, you need to have a strong grasp of the
concepts presented here.

I know that conceptual tutorials like this are often skipped over by readers who want to
dive into the code right away, but unless you consider yourself an "object guru," I
encourage you to read this tutorial. For those of you only somewhat familiar with
object-oriented programming, you should reap benefits from doing so. Also, keep in mind that
the tutorials that follow this one will refer back to the terminology and concepts discussed
here.

As I've said, many languages claim to be object-oriented or object-based, but few truly
are. C++ isn't, because of the undeniable, uncircumventable fact that its roots are deeply
embedded in the C language. Too many OOP ideals had to be sacrificed in C++ to
support legacy C code. Even the Java language, as good as it is, has some limitations as
an object-oriented language. Specifically, I'm referring to the fact that in Java you have
primitive types and object types that are treated and behave very differently. However,
the focus of this tutorial is not on comparing the faithfulness of different languages to
OOP principles. Rather, this tutorial will present an objective and language-agnostic
tutorial on OOP principles themselves.

Before we get started, I'd like to add that object-oriented programming is much more than
a marketing phrase (although it has become that for some people), a new syntax, or a new
application programming interface (API). Object-oriented programming is an entire set of
concepts and ideas. It's a way of thinking about the problem being addressed by a
computer program and tackling the problem in a more intuitive and therefore more
productive manner.

My first job involved using the Pascal language to program the box-office reporting and
itinerary applications for Holiday on Ice. As I moved on to other jobs and other
applications, I programmed in PL/I and RPG III (and RPG/400). After a few more years,
I started programming applications in the C language. In each of these instances, I was
easily able to apply knowledge I had learned from prior experience. The learning curve
for each successive language was shorter regardless of the complexity of the language I
was learning. This is because until I started programming in C++, all the languages I had
used were procedural languages that mainly differed only in syntax.

However, if you are new to object-oriented programming, be forewarned: prior
experience with other non-object-oriented languages will not help you here!
Object-oriented programming is a different way of thinking about how to design and program
solutions to problems. In fact, studies have shown that people who are new to
programming learn object-oriented languages much more quickly than those of us who
started out in procedural languages such as BASIC, COBOL, and C. These individuals do
not have to "unlearn" any procedural habits that can hamper their understanding of OOP.
They are starting with a clean slate. If you've been programming in procedural languages
for many years and C# is your first object-oriented language, the best advice I can give
you is to keep an open mind and implement the ideas I'm presenting here before you
throw up your hands and say, "I can fake this in [insert your procedural language of
choice]." Anyone who's come from a procedural background to object-oriented
programming has gone through this learning curve, and it's well worth it. The benefits of
programming with an object-oriented language are incalculable, both in terms of writing
code more efficiently and having a system that can be easily modified and extended once
written. It just might not seem that way at first. However, almost 20 years of developing
software (including the past 8 with object-oriented languages) have shown me that OOP
concepts, when applied correctly, do in fact live up to their promise. Without further ado,
let's roll up our sleeves and see what all the fuss is about.

BASIC DATA TYPES

The ability to work with any programming language requires a good understanding of the
data types it offers in order to comprehend the language's possibilities and limitations. In
this article, I look at the characteristics and specifics of C# data types as a way for
developers to have a better grasp of what the language has to offer.

C# allows you to define two types of variables: value types and reference types. The
value types hold actual values, while reference types hold references to values stored
somewhere in memory. Value types are allocated on the stack and are available in most
programming languages. Reference types are allocated on the heap and typically
represent class instances. C# also allows you to define your own value and reference
types in your code. All value and reference types are derived from a base type called
object. C# also lets you convert from one type to another through either implicit
conversions (which don't result in the loss of data) or explicit conversions (which may
result in loss of data or precision).
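The two conversion kinds can be shown in a short sketch:

```csharp
using System;

class ConversionDemo
{
    static void Main()
    {
        int i = 123;
        long l = i;             // implicit: widening int -> long cannot lose data

        double d = 123.75;
        int truncated = (int)d; // explicit cast required: the fraction is lost

        Console.WriteLine(l);         // 123
        Console.WriteLine(truncated); // 123
    }
}
```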

Predefined C# value types

• sbyte: Holds 8-bit signed integers. The s in sbyte stands for signed, meaning that
the variable's value can be either positive or negative. The smallest possible value
for an sbyte variable is -128; the largest possible value is 127.
• byte: Holds 8-bit unsigned integers. Unlike sbyte variables, byte variables are not
signed and can only hold positive numbers. The smallest possible value for a byte
variable is 0; the largest possible value is 255.
• short: Holds 16-bit signed integers. The smallest possible value for a short
variable is -32,768; the largest possible value is 32,767.
• ushort: Holds 16-bit unsigned integers. The u in ushort stands for unsigned. The
smallest possible value of a ushort variable is 0; the largest possible value is
65,535.
• int: Holds 32-bit signed integers. The smallest possible value of an int variable is
-2,147,483,648; the largest possible value is 2,147,483,647.
• uint: Holds 32-bit unsigned integers. The u in uint stands for unsigned. The
smallest possible value of a uint variable is 0; the largest possible value is
4,294,967,295.
• long: Holds 64-bit signed integers. The smallest possible value of a long variable
is -9,223,372,036,854,775,808; the largest possible value is
9,223,372,036,854,775,807.
• ulong: Holds 64-bit unsigned integers. The u in ulong stands for unsigned. The
smallest possible value of a ulong variable is 0; the largest possible value is
18,446,744,073,709,551,615.
• char: Holds 16-bit Unicode characters. The smallest possible value of a char
variable is the Unicode character whose value is 0; the largest possible value is
the Unicode character whose value is 65,535.
• float: Holds a 32-bit signed floating-point value. The smallest possible positive
value of a float type is approximately 1.5 times 10 to the negative 45th power; the
largest possible value is approximately 3.4 times 10 to the 38th power.
• double: Holds a 64-bit signed floating-point value. The smallest possible positive
value of a double is approximately 5 times 10 to the negative 324th power; the largest
possible value is approximately 1.7 times 10 to the 308th power.
• decimal: Holds a 128-bit signed floating-point value. Variables of type decimal
are good for financial calculations. The smallest possible positive value of a decimal
type is approximately 1 times 10 to the negative 28th power; the largest possible value
is approximately 7.9 times 10 to the 28th power.
• bool: Holds one of two possible values, true or false. The use of the bool type is
one of the areas in which C# breaks from its C and C++ heritage. In C and C++,
the integer value 0 was synonymous with false, and any nonzero value was
synonymous with true. In C#, however, the types are not synonymous. You
cannot convert an integer variable into an equivalent bool value. If you want to
work with a variable that needs to represent a true or false condition, use a bool
variable and not an int variable.

Predefined C# reference types

• string: Represents a string of Unicode characters. It allows easy manipulation and
assignment of strings. Strings are immutable, meaning that once a string is created it
can't be modified. So when you try to modify a string, such as concatenating it
with another string, a new string object is actually created to hold the resulting
string.
• object: Represents a general purpose type. In C#, all predefined and user-defined
types inherit from the object type or System.Object class.
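String immutability can be observed directly: concatenation produces a new string object rather than changing the old one. A minimal sketch:

```csharp
using System;

class StringDemo
{
    static void Main()
    {
        string s = "Hello";
        string original = s;

        s += ", world"; // concatenation builds a brand-new string object

        // The original string is untouched; s now refers to a different object.
        Console.WriteLine(original);                            // Hello
        Console.WriteLine(object.ReferenceEquals(s, original)); // False
    }
}
```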

Summary

Proper utilization of correct data types allows developers to make the most of the
language, but may take some time for those who have used different programming
languages prior to switching to C#.

BUILDING BLOCKS
C# - An Introduction - C# Building-
Blocks

Whenever you write a C# statement, it will be part of a block, skillfully called a “block
of code”. It’s much the same as paragraphs in English, which consist of a number of
sentences.

It is called a block because it consists of related C# statements that will do the job
it was created for. C# blocks may contain zero, one or many statements. These blocks must
be delimited with curly braces “{ }” like the following example:

{
int memory = 2 + 5;
Console.WriteLine(memory);
Console.WriteLine("so it's 7, right?");
}

This is a block of code that contains three statements. There are a number of important
points that you must understand here:

1. int is a keyword and VS.Net will color it blue so that you can differentiate it
2. There is a semicolon at the end of each of the three statements
3. Each line contains only one statement

You can have two statements on the same line, simply because the C# compiler knows
that you end the statement with the “;”. So you can write the last building block like so:

{
int memory = 2 + 5;Console.WriteLine(memory);Console.WriteLine("so it's 7,
right?");
}

It should be apparent that, by convention, you write one statement per line so that
your code is readable. You can also nest blocks, meaning you can write one block inside
another. Look at the following block:

{
int memory = 2 + 5;
if(memory == 7)
{
Console.WriteLine("hi, I'm Michael");
}
}

Again don’t look at the code itself; look only at the blocks. Here we have two nested
blocks of code. For now, know that we can create blocks inside other blocks. Later, we
will discuss how they can be useful.

Something important to note: when you type the left curly brace “{“, write your
code, and then close it with the right curly brace “}”, you will notice that VS.Net
momentarily bolds the entire block. This is nothing more than VS.Net’s way of showing
you the contents of that block.

There are three more sections that will discuss code organization.

CONTROL STRUCTURES

Control structures in C#

These structures are as follows:

if (expression)
{
statement1...[n]
}
else if (expression)
{
statement1...[n]
}
.
.
else
{
statement1...[n]
}
Pay attention to the logical operators && and || when using them in an if
expression because, as we explained earlier, they won't evaluate the second operand if
the first one already determines the result.
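This short-circuit behavior can be demonstrated with a helper that logs each operand as it is evaluated; a small sketch:

```csharp
using System;

class ShortCircuitDemo
{
    static bool Check(string label, bool value)
    {
        Console.WriteLine("evaluating " + label);
        return value;
    }

    static void Main()
    {
        // With &&, the right operand is skipped when the left is false;
        // with ||, it is skipped when the left is true.
        if (Check("left", false) && Check("right", true))
        {
            Console.WriteLine("both true");
        }
        // Only "evaluating left" is printed above.
    }
}
```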
while (expression)
{
statement1...[n]
}
do
{
statement1...[n]
} while (expression);
Compared with while, a do while loop will execute its statements at least
once.
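That difference shows up when the condition is false from the start; a minimal sketch:

```csharp
using System;

class DoWhileDemo
{
    static void Main()
    {
        int i = 10;

        while (i < 10)
        {
            Console.WriteLine("while body");    // never runs: the test fails first
        }

        do
        {
            Console.WriteLine("do-while body"); // runs once before the test
        } while (i < 10);
    }
}
```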
To explain the for structure, we can use an example to clarify it.
for (int i = 1; i < 10;i++ )
{
Console.WriteLine(i);
}
Another control structure is foreach. But before we go through it, we should know what
arrays are, because foreach is used only with a collection of entities of a single
data type.
The structure of foreach is as below.
foreach (collectionDataType varName in collectionName)
{
statement1…[n];
}
It means that each item in the collection will be assigned to varName and then the code
in the block will be run for varName.
Array
Arrays are among those kinds of variables that are placed in heap memory (see the first
lesson, memory management).
An array is a collection of a specific data type. Let's have an example to understand
both the foreach control structure and arrays.
int[] nums = new int [3];
nums[0] = 10;
nums[1] = 15;
nums[2] = 20;
int sum = 0;
foreach (int n in nums )
{
sum = sum + n ;
}
Console.WriteLine("The sum is: {0}", sum);
In the above code each item in nums is assigned to n and then the code in the block is
run for each item.
Break and Continue
It's time now to talk about break and continue. Remember that these two statements are
used in loop structures such as for, while, and do while. Let's see their usage in
an example.
If you run the following code snippet you will see that when i = 5 the code will stop.
That is, although i is still less than 100, the rest of the loop won't execute.
for (int i = 1; i < 100; i++)
{
if (i%5 == 0)
{
Console.WriteLine("Hope!");
break ;
}
Console.WriteLine(i);
}
The result will be as follows:
Figure 4
But if you change the code like this:
for (int i = 1; i < 100; i++)
{
if (i%5 == 0)
{
Console.WriteLine("Hope!");
continue ;
}
Console.WriteLine(i);
}
then instead of each number that is divisible by 5, the word "Hope!" will be printed.
The result of running the above code snippet is shown in figure 5.

Figure 5
Note: You are not allowed to change the value of the iteration variable in the foreach
control structure. It is a read-only value; that means you cannot change the value of n
in the code above.
GoTo:
Probably no other construct in programming history is as maligned as the goto statement.
The problem with goto is its use in inappropriate places.
The structure of goto is
goto labelName;
labelName:
Statement1…[n];
Using goto is rarely recommended because it makes the flow of the application harder to
follow, except in a few cases where it is genuinely useful. Using goto in
nested loops, where break alone can't get you out, can be very handy.
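That nested-loop case can be sketched as follows (the label name and messages are illustrative):

```csharp
using System;

class GotoDemo
{
    static void Main()
    {
        // break would only leave the inner loop; goto exits both loops at once.
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                if (i == 1 && j == 1)
                {
                    goto done;
                }
                Console.WriteLine("{0},{1}", i, j);
            }
        }
    done:
        Console.WriteLine("out of both loops");
    }
}
```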
Switch case
Another control structure is switch, with the following structure:
switch (expression)
{
case constant-expression:
statement
jump-statement
case constant-expressionN:
statementN
jump-statement
[default]
}
In this structure, as you can see, if you write even one statement in a case you must
also end it with a jump-statement, which can be break or goto; otherwise you as a
programmer will face a compile-time error indicating that a jump-statement is missing.
Don't forget to define break in the default case as well. Let's explain the structure
with sample code.
Console.WriteLine ("Enter 10,20 or 30");
int a = Convert.ToInt32 ( Console.ReadLine());
switch (a)
{
case 10:
case 20:
Console.WriteLine("I have more than {0} books", a);
break;
case 30:
Console.WriteLine("I have {0} books", a);
break;
}
In this case you won't get a compiler error for not having break in the first case
statement. The reason is that you don't write any code for that specific case statement;
the compiler will execute statements until it arrives at a break or goto. If a user
enters 10 or 20 as the value, he will receive similar outputs.
But what if you want the first and second statements (those in case 10 and case 20) to
execute when the entered value is 10, and only the statement in the second case to
execute when the entered value is 20? You can simply modify the code as below using goto.
Console.WriteLine ("Enter 10,20 or 30");
int a = Convert.ToInt32 ( Console.ReadLine());
switch (a)
{
case 10:
Console.WriteLine("I have more than {0} books", a);
goto case 20;
case 20:
Console.WriteLine("I have {0} books", a);
break;
case 30:
Console.WriteLine("I have {0} books", a);
break;
}
Please pay attention to the output with different entered values.

[Figure 6 - User entered 10]
OPERATORS AND EXPRESSIONS

C# Operators and Expressions


From Techotopia


In C# Variables and Constants we looked at using variables and constants in C# and also
described the different variable and constant types. Being able to create constants and
variables is only part of the story however. The next step is to learn how to use these
variables and constants in C# code. The primary method for working with the data stored
in constants and variables is in the form of expressions. In this chapter we will look in
detail at C# expressions and operators.
Contents

• 1 What is an Expression?
• 2 The Basic Assignment Operator
• 3 C# Arithmetic Operators
• 4 C# Operator Precedence
• 5 Compound Assignment Operators
• 6 Increment and Decrement Operators
• 7 Comparison Operators
• 8 Boolean Logical Operators
• 9 The Ternary Operator
What is an Expression?

The most basic expression consists of an operator, two operands and an assignment. The
following is an example of an expression:

int theResult = 1 + 2;

In the above example the (+) operator is used to add two operands (1 and 2) together. The
assignment operator (=) subsequently assigns the result of the addition to an integer
variable named theResult. The operands could just as easily have been variables or
constants (or a mixture of each) instead of the literal numerical values used in the example.

In the remainder of this chapter we will look at the various types of operators available in
C#.

The Basic Assignment Operator

We have already looked at the most basic of assignment operators, the = operator. This
assignment operator simply assigns the result of an expression to a variable. In essence
the = assignment operator takes two operands. The left hand operand is the variable to
which a value is to be assigned and the right hand operand is the value to be assigned.
The right hand operand is, more often than not, an expression which performs some type
of arithmetic or logical evaluation. The following examples are all valid uses of the
assignment operator:

x = 10;    // Assigns the value 10 to a variable named x

x = y + z; // Assigns the result of y added to z to variable x

x = y;     // Assigns the value of variable y to variable x

Assignment operators may also be chained to assign the same value to multiple variables.
For example, the following code example assigns the value 20 to the x, y and z variables:

int x, y, z;

x = y = z = 20;

C# Arithmetic Operators

C# provides a range of operators for the purpose of creating mathematical expressions.


These operators primarily fall into the category of binary operators in that they take two
operands. The exception is the unary negative operator (-) which serves to indicate that a
value is negative rather than positive. This contrasts with the subtraction operator (-)
which takes two operands (i.e. one value to be subtracted from another). For example:

int x = -10; // Unary - operator used to assign -10 to a variable named x

x = y - z;   // Subtraction operator. Subtracts z from y

The following table lists the primary C# arithmetic operators:

Operator Description
-(unary) Negates the value of a variable or expression
* Multiplication
/ Division
+ Addition
- Subtraction
% Modulo

Note that multiple operators may be used in a single expression.

For example:

x = y * 10 + z - 5 / 4;

Whilst the above code is perfectly valid it is important to be aware that C# does not
evaluate the expression from left to right or right to left, but rather in an order specified
by the precedence of the various operators. Operator precedence is an important topic to
understand since it impacts the result of a calculation and will be covered in detail the
next section.

C# Operator Precedence

When humans evaluate expressions, they usually do so starting at the left of the
expression and working towards the right. For example, working from left to right we get
a result of 300 from the following expression:

10 + 20 * 10 = 300

This is because we, as humans, add 10 to 20, resulting in 30 and then multiply that by 10
to arrive at 300. Ask C# to perform the same calculation and you get a very different
answer:

int x;

x = 10 + 20 * 10;

System.Console.WriteLine (x);

The above code, when compiled and executed, will output the result 210.

This is a direct result of operator precedence. C# has a set of rules that tell it in which
order operators should be evaluated in an expression. Clearly, C# considers the
multiplication operator (*) to be of a higher precedence than the addition (+) operator.

Fortunately the precedence built into C# can be overridden by surrounding the lower
priority section of an expression with parentheses. For example:

int x;

x = (10 + 20) * 10;

System.Console.WriteLine (x);

In the above example, the expression fragment enclosed in parentheses is evaluated


before the higher precedence multiplication resulting in a value of 300.

The following table outlines the C# operator precedence order from highest precedence to
lowest:

Precedence Operators
Highest    + - ! ~ ++x --x (T)x (unary)
           * / %
           + -
           << >>
           < > <= >= is as
           == !=
           &
           ^
           |
           &&
           ||
           ?:
Lowest     = *= /= %= += -= <<= >>= &= ^= |=

It should come as no surprise that the assignment operators have the lowest precedence
since you would not want to assign the result of an expression until that expression had
been fully evaluated. Don't worry about memorizing the above table. Most programmers
simply use parentheses to ensure that their expressions are evaluated in the desired order.

Compound Assignment Operators

C# provides a number of operators designed to combine an assignment with a
mathematical or logical operation. These are primarily of use when performing an
evaluation where the result is to be stored in one of the operands. For example, one might
write an expression as follows:

x = x + y;

The above expression adds the value contained in variable x to the value contained in
variable y and stores the result in variable x. This can be simplified using the addition
compound assignment operator:

x += y

The above expression performs exactly the same task as x = x + y but saves the
programmer some typing. This is yet another feature that C# has inherited from the C
programming language.

Numerous compound assignment operators are available in C#. The most frequently used
are outlined in the following table:
Operator Description
x += y Add y to x and place result in x
x -= y Subtract y from x and place result in x
x *= y Multiply x by y and place result in x
x /= y Divide x by y and place result in x
x %= y Perform Modulo on x and y and place result in x
x &= y Assign to x the result of logical AND operation on x and y
x |= y Assign to x the result of logical OR operation on x and y
x ^= y Assign to x the result of logical Exclusive OR on x and y
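As a small illustration (the starting value 10 is arbitrary), several compound operators from the table can be applied in sequence to one variable:

```csharp
using System;

class CompoundOps
{
    static void Main()
    {
        int x = 10;  // illustrative starting value
        x += 5;      // same as x = x + 5  -> 15
        x *= 2;      // same as x = x * 2  -> 30
        x %= 7;      // same as x = x % 7  -> 2
        Console.WriteLine(x);
    }
}
```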

Increment and Decrement Operators

Another useful shortcut can be achieved using the C# increment and decrement operators.
As with the compound assignment operators described in the previous section, consider
the following C# code fragment:

x = x + 1; // Increase value of variable x by 1

x = x - 1; // Decrease value of variable x by 1

These expressions increment and decrement the value of x by 1. Instead of using this
approach it is quicker to use the ++ and -- operators. The following examples perform
exactly the same tasks as the examples above:

x++; // Increment x by 1

x--; // Decrement x by 1

These operators can be placed either before or after the variable name. If the operator is
placed before the variable name the increment or decrement is performed before any
other operations are performed on the variable. For example, in the following example, x
is incremented before it is assigned to y, leaving y with a value of 10:

int x = 9;
int y;

y = ++x;

In the following example, the value of x (9) is assigned to variable y before the
decrement is performed. After the expression is evaluated the value of y will be 9 and the
value of x will be 8.
int x = 9;
int y;

y = x--;

Comparison Operators

In addition to the mathematical and assignment operators, C# also includes a set of
operators useful for performing comparisons. These operators all return a Boolean (bool)
true or false result depending on the outcome of the comparison. They are binary
operators in that they work with two operands.

Comparison operators are most frequently used in constructing program flow control. For
example an if statement may be constructed based on whether one value matches another:

if (x == y)
System.Console.WriteLine ("x is equal to y");

The result of a comparison may also be stored in a bool variable. For example, the
following code will result in a true value being stored in the variable result:

bool result;
int x = 10;
int y = 20;

result = x < y;

Clearly 10 is less than 20, resulting in a true evaluation of the x < y expression. The
following table lists the full set of C# comparison operators:

Operator Description
x == y Returns true if x is equal to y
x>y Returns true if x is greater than y
x >= y Returns true if x is greater than or equal to y
x<y Returns true if x is less than y
x <= y Returns true if x is less than or equal to y
x != y Returns true if x is not equal to y

Boolean Logical Operators

Another set of operators which return boolean true and false values are the C# boolean
logical operators. These operators both return boolean results and take boolean values as
operands. The key operators are NOT (!), AND (&&), OR (||) and XOR (^).
The NOT (!) operator simply inverts the current value of a boolean variable, or the result
of an expression. For example, if a variable named flag is currently true, prefixing the
variable with a '!' character will invert the value to be false:

bool flag = true; //variable is true


bool secondFlag;

secondFlag = !flag; // secondFlag set to false

The OR (||) operator returns true if at least one of its two operands evaluates to true;
otherwise it returns false. For example, the following expression evaluates to true because
at least one of the expressions on either side of the OR operator is true:

if ((10 < 20) || (20 < 10))


System.Console.WriteLine("Expression is true");

The AND (&&) operator returns true only if both operands evaluate to be true. The
following example will return false because only one of the two operand expressions
evaluates to true:

if ((10 < 20) && (20 < 10))


System.Console.WriteLine("Expression is true");

The XOR (^) operator returns true if one and only one of the two operands evaluates to
true. For example, the following expression will return true since only one of the operand
expressions evaluates to true:

if ((10 < 20) ^ (20 < 10))


System.Console.WriteLine("Expression is true");

If both operands evaluated to true, or both were false, the expression would return false.
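Though not covered above, it is standard C# behavior that && and || short-circuit: the right-hand operand is not evaluated when the left-hand one already determines the result. A small sketch (the Noisy helper is made up for illustration):

```csharp
using System;

class ShortCircuit
{
    // Helper that reports when it runs, so we can see which operands are evaluated.
    static bool Noisy(string label, bool result)
    {
        Console.WriteLine("evaluated " + label);
        return result;
    }

    static void Main()
    {
        // The right side is skipped: false && anything is already false.
        bool a = Noisy("left", false) && Noisy("right", true);
        Console.WriteLine(a);
    }
}
```

Only "evaluated left" is printed; the right-hand call never runs.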

The Ternary Operator

C# uses something called a ternary operator to provide a shortcut way of making
decisions. The syntax of the ternary operator is as follows:

[condition] ? [true expression] : [false expression]

The way this works is that [condition] is replaced with an expression that will return
either true or false. If the result is true, the [true expression] is evaluated.
Conversely, if the result is false, the [false expression] is evaluated. Let's see this
in action:

int x = 10;
int y = 20;

System.Console.WriteLine( x > y ? x : y );
The above code example will evaluate whether x is greater than y. Clearly this will
evaluate to false resulting in y being returned to the WriteLine method for display to the
user.

VARIABLES

C# Variables
by Wrox Books

Variables

Variables represent storage locations. Every variable has a type that determines the
values that can be stored in the variable. C# is a type-safe language, and the C# compiler
guarantees that values stored in variables are always of the appropriate type. The value of
a variable can be changed through assignment or through use of the ++ and -- operators.

Variables are values that can change as much as needed during the execution of a
program. One reason you need variables in a program is to hold the results of a
calculation. Hence, variables are locations in memory in which values can be stored.

Variable Declarations

In C# all variables must be declared before they are used. In the declaration you must
specify a data type and a name for the variable so that memory can be set aside for the
variable. An optional initial value can be included with the declaration if you know what
the value should be to start.

Syntax for variable declaration

[scope] datatype variableName [= initial value];

Description

The scope determines the accessibility of the variable (public, local, private); the
default is private. The data type specifies what kind of data the variable will hold.
Optionally the variable can be initialised to a specific value. All variables must be
initialised (given a value) before they are used in any calculation; if an uninitialised
variable is used, compilation of the program will fail.

Value Type Variables

Value type variables are also known as stack variables because they are stored on the
stack. Value type variables can be directly declared and referenced. As the variables go
out of scope, they are removed from the stack, ensuring the proper destruction of the
variables. As the variables are created on the stack, they are not initialised; that is the
responsibility of the program. The use of an uninitialised variable will result in a
compiler error.

Example Value variables

int n;           // declared but not initialised
long l = 327;    // initialised long
float f = 3.13F; // float initialised from a single-precision literal

Reference Type Variables

Reference type variables are made up of two parts: the reference on the stack and the
object on the heap. The creation of the object and the reference to the object is commonly
known as the instantiation of the object.

Example Reference variable

To declare a reference-type variable, the syntax is:

string strMimico;
City objToronto = null;
object objGeneric;

Example Object variable

To create an object on the heap, we go through two steps, as follows:

1. Declare the variable reference.


2. Instantiate the object on the heap by calling the new operator.

City objMimico;         // declare objMimico to be a reference to an
                        // object of City type
objMimico = new City(); // call the constructor of the City class to return
                        // a reference that is stored in the objMimico reference

The two lines can be combined into one:

City objMimico = new City();

ARRAYS

Create a C# Array
Step 1
Start to create an array in C# by typing the data type of your array, followed by an
opening square bracket, then a closing square bracket, in the newly created space. An
example, using the "int" data type, looks like this:
int[]

Step 2

Type a blank space on the same line, and then type the name you will use for your new
array. For example:
int[] myIntArray

Step 3

Continue on the same line by typing a blank space, an "equals" sign and another blank
space. Type the "new" keyword followed again by the data type and an opening and
closing square bracket. So far, your line should look like this:
int[] myIntArray = new int[]

Step 4

Initialize your array with some data. Use a comma-separated list enclosed in curly braces
followed by a semicolon. Here is an example using the integers 1 to 5:
{1, 2, 3, 4, 5}

Step 5

Verify that your C# array declaration is complete. It should look like this:
int[] myIntArray = new int[] { 1, 2, 3, 4, 5 };

Brief Description

Programming C# is a new self-taught series of articles in which I will teach you the C#
language in a tutorial format. This article concentrates on arrays in .NET and how
you can work with arrays using the C# language. The article also covers the Array class
and its methods, which can be used to sort, search, get, and set array items.

Introduction

In C#, an array index starts at zero: the first item of an array is stored at position 0,
and the position of the last item is the total number of items - 1.

In C#, arrays can be declared with a fixed length or left unsized. A fixed-length
array stores a predefined number of items; an unsized declaration is given its length
later, when the array is instantiated with the new operator. For example, the
following line declares an array of integers without yet allocating it.

int [] intArray;
The following code declares an array, which can store 5 items starting from index 0
to 4.

int [] intArray;
intArray = new int[5];

The following code declares an array that can store 100 items starting from index 0
to 99.

int [] intArray;
intArray = new int[100];

Single Dimension Arrays

Arrays can be divided into four categories. These categories are single-dimensional
arrays, multidimensional arrays or rectangular arrays, jagged arrays, and mixed
arrays.

Single-dimensional arrays are the simplest form of arrays. These types of arrays are
used to store a number of items of a predefined type. All items in a single-dimension
array are stored in a row, starting from 0 up to the size of the array - 1.

In C# arrays are objects. That means declaring an array doesn't create an array.
After declaring an array, you need to instantiate it by using the "new" operator.

The following code declares an integer array that can store 3 items. As you can see
from the code, first I declare the array using the [] brackets and after that I
instantiate the array by calling the new operator.

int [] intArray;
intArray = new int[3];

Array initialization in C# is pretty simple: you put the array items in curly braces
({}). If an array is not initialized at the time it is declared, its items are
automatically set to the default value for the array type.
The following code declares and initializes an array of three items of integer type.

int [] intArray;
intArray = new int[3] {0, 1, 2};

The following code declares and initializes an array of 5 string items.

string[] strArray = new string[5] {"Ronnie", "Jack", "Lori", "Max", "Tricky"};

You can even direct assign these values without using the new operator.

string[] strArray = {"Ronnie", "Jack", "Lori", "Max", "Tricky"};

Multi Dimension Arrays


A multidimensional array is an array with more than one dimension. A multidimensional
array is declared as follows:

string[,] strArray;

After declaring an array, you can specify the size of the array's dimensions if you
want a fixed-size array, or leave them unsized. For example, the following two
examples create multidimensional arrays with matrices of 3x2 and 2x2. The first array
can store 6 items and the second array 4 items.

int[,] numbers = new int[3, 2] { {1, 2}, {3, 4}, {5, 6} };


string[,] names = new string[2, 2] { {"Rosy","Amy"}, {"Peter","Albert"} };

If you don't want to specify the size of arrays, just don't define a number when you
call new operator. For example,

int[,] numbers = new int[,] { {1, 2}, {3, 4}, {5, 6} };


string[,] names = new string[,] { {"Rosy","Amy"}, {"Peter","Albert"} };

You can also omit the new operator, as we did with single-dimension arrays, and
assign the values directly. For example:

int[,] numbers = { {1, 2}, {3, 4}, {5, 6} };


string[,] siblings = { {"Rosy", "Amy"}, {"Peter", "Albert"} };

Jagged Arrays

Jagged arrays are often called array of arrays. An element of a jagged array itself is
an array. For example, you can define an array of names of students of a class
where a name itself can be an array of three strings - first name, middle name and
last name. Another example of jagged arrays is an array of integers containing
another array of integers. For example,

int[][] numArray = new int[][] { new int[] {1,3,5}, new int[] {2,4,6,8,10} };

Again, you can specify the size when you call the new operator.
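To make the indexing concrete, here is a short sketch reusing the numArray declaration above; jagged elements are accessed with two separate bracket pairs, and each inner array keeps its own length:

```csharp
using System;

class JaggedAccess
{
    static void Main()
    {
        int[][] numArray = new int[][] { new int[] {1, 3, 5}, new int[] {2, 4, 6, 8, 10} };

        Console.WriteLine(numArray[0][2]);      // third item of the first inner array: 5
        Console.WriteLine(numArray[1].Length);  // inner arrays can differ in length: 5
    }
}
```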

Mixed Arrays

Mixed arrays are a combination of multidimensional arrays and jagged arrays.
Multidimensional arrays are also called rectangular arrays.

Accessing Arrays using foreach Loop

The foreach control statement (loop) of C# will be new to C++ and other developers.
This control statement is used to iterate through the elements of a collection such as
an array. For example, the following code uses a foreach loop to read all items of
numArray.

int[] numArray = {1, 3, 5, 7, 9, 11, 13};


foreach (int num in numArray)
{
System.Console.WriteLine(num.ToString());
}

A Simple Example

The sample code in Listing 1 shows you how to use arrays. You can access array
items using a for loop, but a foreach loop is easier to use and less error-prone.

Listing 1. Using arrays in C#.

using System;
namespace ArraysSamp
{
class Class1
{
static void Main(string[] args)
{
int[] intArray = new int[3];
intArray[0] = 3;
intArray[1] = 6;
intArray[2] = 9;
Console.WriteLine("================");
foreach (int i in intArray)
{
Console.WriteLine(i.ToString() );
}
string[] strArray = new string[5]
{"Ronnie", "Jack", "Lori", "Max", "Tricky"};
Console.WriteLine("================");
foreach( string str in strArray)
{
Console.WriteLine(str);
}
Console.WriteLine("================");
string[,] names = new string[,]
{
{"Rosy","Amy"},
{"Peter","Albert"}
};
foreach( string str in names)
{
Console.WriteLine(str);
}
Console.ReadLine();
}
}
}

The output of Listing 1 looks like Figure 1.
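The Array class helpers mentioned in the introduction (sorting and searching) are not used in Listing 1; as a brief sketch using standard .NET methods:

```csharp
using System;

class ArrayClassDemo
{
    static void Main()
    {
        int[] data = { 9, 3, 6 };

        Array.Sort(data);                        // sorts in place -> 3, 6, 9
        Console.WriteLine(string.Join(",", data));

        int pos = Array.IndexOf(data, 6);        // linear search for the value 6
        Console.WriteLine(pos);

        int where = Array.BinarySearch(data, 9); // binary search (array must be sorted)
        Console.WriteLine(where);
    }
}
```

Array.Sort and Array.BinarySearch work together: binary search only gives correct results on an array that has already been sorted.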


OBJECTS AND CLASSES

In Object-Oriented Programming, programmers write independent parts of a program called classes. Each
class represents a part of the program's functionality, and these classes can be assembled to form a program.
When you need to change some of the program's functionality, all you have to do is replace the target class
that contains the behaviour that needs to change. So OOP applications are created by the use of classes,
and these applications can contain any number of classes. This brings us to the Class and Object
concept.

Classes and objects

You may find the class and object story a little difficult to understand at first, but I
will do my best to explain it. The class and object concepts are closely related;
beginners who skip over them without understanding them clearly tend to have a hard time
learning C#.

Object-Oriented concepts take most of their functionality from the real-life concepts. For example, I will
discuss the concept of Classes and Objects of the world first and then you will understand the computer's
Classes and Objects before I even write anything about it.

In our world we have classes and objects for those classes. Everything in our world is considered to be an
object. For example, people are objects, animals are objects too, minerals are objects; everything in the
world is an object. Easy, right? But what about classes?

In our world we have to differentiate between objects that we are living with. So we must understand that
there are classifications (this is how they get the name and the concepts of the Class) for all of those
objects. For example, I'm an object, David is object too, Maria is another object. So we are from a people
class (or type). I have a dog called Ricky so it's an object. My friend's dog, Doby, is also an object so they
are from a Dogs class (or type).

A third example: I have a Pentium 3; this is an object. My friend has a Pentium 4, so this is another object
and they are from a Computers class (or type). Now I think you understand the concept of the Class and
Object, but let me crystallize it for you. In our world we have classifications for objects and every object
must be from some classification. So, a Class is a way for describing some properties and functionalities or
behaviors of a group of objects. In other words, the class is considered to be a template for some objects. So
maybe I will create a class called person which is a template of the functionality and the properties of
persons. A C# Class is considered to be the primary building block of the language. What I mean by the
primary building block is that every time you work with C# you will create classes to form a program. We
use classes as a template to put the properties and functionalities or behaviors in one building block for a
group of objects and after that we use the template to create the objects we need.

For example, we need to have persons objects in our program so the first thing to do here is to create a class
called Person that contains all the functionalities or behaviors and properties of any person and after that we
will use that class (or template) to create as many objects as we need. An object created from a specific
class type is called "an instance of the class". Don't worry if you didn't grasp it 100% and don't worry if you don't
know what the class and object's properties and functionalities or behaviors are because we are still in the
beginning. Until now I haven’t provided any code examples. So let's take a brief of what is a class and
what is an object:

The class: A building block that contains the properties and functionalities that describe some group of
objects. We can create a class Person that contains:

1. The properties of any normal person on earth, like hair color, age, height, weight, eye color.
2. The functionalities or behaviors of any normal person on earth, like drinking water, eating, going to
work.

Later we will see how we can implement the functionalities or behaviors and properties.

There are 2 kinds of classes: the built-in classes that come with the .NET Framework, called the Framework
Class Library, and the programmer-defined classes which we create ourselves.

The class contains data (in the form of variables and properties) and behaviors (in the form of methods to
process these data). We will understand this concept later on in the article.

When we declare a variable in a class we call it a member variable or instance variable. The name instance
comes from the fact that when we create an object we instantiate a class to create that object. So an instance of
a class means an object of that class, and an instance variable means a variable that exists in that class.

The object: It's an object of some classification (or class, or type) and when you create the object you can
specify the properties of that object. What I mean here is: I, as an object, can have different properties (hair
color, age, height, weight) than you as another object. For example, I have brown eyes and you have green
eyes. When I create 2 objects I will specify a brown color for my object's eye color property and I will
specify a green color for your object's eye color property.

So to complete my introduction to classes we must discuss properties and variables.

Variables declared in a class store the data for each instance. What does this mean? It means that when you
instantiate this class (that is, when you create an object of this class) the object will allocate memory
locations to store the data of its variables. Let's take an example to understand it well.

class Person
{
public int Age;
public string HairColor;
}

This is our simple class, which contains 2 variables. Don't worry about the public keyword now; we will
talk about it later. Now we will instantiate this class (that is, create an object of this class).

static void Main(string[] args)


{
Person Michael = new Person();
Person Mary = new Person();

// Specify some values for the instance variables


Michael.Age = 20;
Michael.HairColor = "Brown";
Mary.Age = 25;
Mary.HairColor = "Black";
// print some of the variables' values to the console
Console.WriteLine("Michael's age = {0}, and Mary's age = {1}", Michael.Age, Mary.Age);
Console.ReadLine();
}

So we begin our Main method by creating 2 objects of type Person. After creating the 2 objects we
initialize the instance variables for object Michael and then for object Mary. Finally we print some values
to the console. Here, when you create the Michael object, the C# compiler allocates a memory location for
the 2 instance variables to put the values there. Also, the same thing with the Mary object; the compiler will
create 2 variables in memory for the Mary object. So each object now contains different data. Note that we
directly accessed the variables and put in any values we wanted; there was no control over what got
assigned. The solution to this problem is properties.

Next: Properties >>


Properties are a way to access the variables of the class in a secure manner. Let's see the same example
using properties.

class Person
{
private int age;
private string hairColor;
public int Age
{
get
{
return age;
}
set
{
if(value <= 65 && value >= 18)
{
age = value;
}
else
age = 18;
}
}
public string HairColor
{
get
{
return hairColor;
}
set
{
hairColor = value;
}
}
}

I made some modifications, but focus on the 2 new properties that I created. A property consists of 2
accessors: the get accessor, responsible for retrieving the variable's value, and the set accessor, responsible
for modifying it. The get accessor code is very simple; we just use the keyword return
with the variable name to return its value. So the following code:

get
{
return hairColor;
}

returns the value stored in hairColor.

[Note]

The keyword value is a reserved keyword in C# (reserved keywords are owned by the language and you
can't use them for any other purpose; for example, you can't create a variable called value. If you did, the
C# compiler would generate an error. To make things easier, Visual Studio .NET colors reserved keywords
blue.)

[/Note]

Let's put this code to work and then discuss the set accessor.

static void Main(string[] args)


{
Person Michael = new Person();
Person Mary = new Person();

// Specify some values for the instance variables


Michael.Age = 20;
Michael.HairColor = "Brown";
Mary.Age = 25;
Mary.HairColor = "Black";

// print some of the variables' values to the console

Console.WriteLine("Michael's age = {0}, and Mary's age = {1}", Michael.Age, Mary.Age);
Console.ReadLine();
}

Here I created the same objects as in the last example, except that I used properties to access the
variables instead of accessing them directly. Look at the following line of code:

Michael.Age = 20;

When you assign a value to a property like this, C# calls the set accessor. The great thing about the set
accessor is that we can inspect and test the assigned value, and perhaps change it in some cases. Inside
the set accessor, the assigned value is available through the reserved keyword value, exactly as in the
example. Let's see it again here.

set
{
if(value <= 65 && value >= 18)
{
age = value;
}
else
age = 18;
}

Here in the code I used an if statement to test the assigned value, because I want any object of
type Person to have an age between 18 and 65. I test the value and, if it is in the range, simply
store it in the variable age. If it is not in the range, I store 18 in age instead.
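The clamping behaviour of the set accessor can be exercised as below; the block repeats a minimal copy of the Person class (Age property only) so it is self-contained:

```csharp
using System;

// Minimal copy of the Person class from the text, keeping only the Age property.
class Person
{
    private int age;
    public int Age
    {
        get { return age; }
        set
        {
            if (value <= 65 && value >= 18)
                age = value;
            else
                age = 18;  // out-of-range values fall back to 18
        }
    }
}

class PropertyClampDemo
{
    static void Main()
    {
        Person p = new Person();

        p.Age = 30;               // in range, stored as-is
        Console.WriteLine(p.Age);

        p.Age = 70;               // out of range: set accessor stores 18 instead
        Console.WriteLine(p.Age);
    }
}
```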

We create a class by defining it using the keyword class followed by the class name:

class Person

Then we open a left brace "{" and write our methods and properties. We then close it with a right brace
"}". That's how we create a class. Let's see how we create an instance of that class.

In the same way as we declare a variable of type int we create an object variable of Person type with some
modifications:

int age;
Person Michael = new Person();

In the first line of code we declared an integer variable called age. In the second line we
first specified the type of object we want to create, followed by the object's name, followed
by a reserved operator called new. We end by typing the class name again, followed by
parentheses "()".

Let's understand it step by step. Specifying the class name at the beginning tells the C#
compiler to allocate a memory location for that type (the C# compiler knows all the variables,
properties, and methods of the class, so it allocates the right amount of memory). We then
follow the class name with the name we want the object variable to go by. The rest of the
code, "= new Person();", calls the object's constructor. We will talk about constructors later,
but for now understand that the constructor is a way to initialize your object's variables
while you are creating the object. For example, the Michael object we created in the last
section can be written as follows:
Person Michael = new Person(20, "Brown");

Here I specified the variable's values in the parameter list so I initialized the variables while creating the
object. But for this code to work we will need to specify the constructor in the Person class -- I will not do
that yet as constructors will come in a later article.
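As a preview of what that constructor could look like (the article defers the details to a later section; the parameter names here are my own):

```csharp
using System;

class Person
{
    private int age;
    private string hairColor;

    // Constructor: runs when "new Person(...)" executes and
    // initializes the fields from the argument list
    public Person(int initialAge, string initialHairColor)
    {
        age = initialAge;
        hairColor = initialHairColor;
    }

    public int Age
    {
        get { return age; }
        set { age = value; }
    }

    public string HairColor
    {
        get { return hairColor; }
        set { hairColor = value; }
    }
}
```

With this in place, Person Michael = new Person(20, "Brown"); creates the object and sets both values in one step.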

EXCEPTION HANDLING

Exception handling is a built-in mechanism in the .NET Framework for detecting and
handling run-time errors. The .NET Framework contains many standard exceptions.
Exceptions are anomalies that occur during the execution of a program; they can be
caused by user, logic, or system errors. If a programmer does not provide a
mechanism to handle these anomalies, the .NET runtime environment provides a
default mechanism, which terminates the program execution.

C# provides three keywords, try, catch and finally, to do exception handling. The try
block encloses the statements that might throw an exception, the catch block handles an
exception if one exists, and the finally block can be used for any clean-up processing.

The general form of try-catch-finally in C# is shown below:

try
{
// Statement which can cause an exception.
}
catch(Type x)
{
// Statements for handling the exception
}
finally
{
//Any cleanup code
}

If any exception occurs inside the try block, the control transfers to the appropriate
catch block and later to the finally block.

But in C#, both catch and finally blocks are optional. The try block can exist either
with one or more catch blocks or a finally block or with both catch and finally blocks.

If no exception occurs inside the try block, control transfers directly to the
finally block. We can say that the statements inside the finally block are
always executed. Note that it is an error to transfer control out of a finally block by using
break, continue, return or goto.

In C#, exceptions are nothing but objects of type Exception. Exception is the
ultimate base class for all exceptions in C#. C# itself provides a number of
standard exceptions, and users can even create their own exception classes,
provided they inherit from either the Exception class or one of the standard
derived classes of the Exception class, like DivideByZeroException or ArgumentException.

Uncaught Exceptions

The following program will compile but will show an error during execution. Division
by zero is a runtime anomaly, and the program terminates with an error
message. Any uncaught exception in the current context propagates to a higher
context, which looks for an appropriate catch block to handle it. If no
suitable catch block is found, the default mechanism of the .NET runtime terminates the
execution of the entire program.

//C#: Exception Handling
//Author: rajeshvs@msn.com
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 100/x;
Console.WriteLine(div);
}
}

The modified form of the above program, with an exception handling mechanism, is as
follows. Here we use an object of the standard exception class
DivideByZeroException to handle the exception caused by division by zero.

//C#: Exception Handling
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 0;
try
{
div = 100/x;
Console.WriteLine("This line is not executed");
}
catch(DivideByZeroException de)
{
Console.WriteLine("Exception occurred");
}
Console.WriteLine("Result is {0}",div);
}
}

In the above case the program does not terminate unexpectedly. Instead, program
control passes from the point where the exception occurred inside the try block to the
catch blocks. If it finds a suitable catch block, it executes the statements inside that
catch and continues with the normal execution of the program statements.
If a finally block is present, the code inside the finally block will also be
executed.

//C#: Exception Handling
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 0;
try
{
div = 100/x;
Console.WriteLine("Not executed line");
}
catch(DivideByZeroException de)
{
Console.WriteLine("Exception occurred");
}
finally
{
Console.WriteLine("Finally Block");
}
Console.WriteLine("Result is {0}",div);
}
}

Remember that in C#, the catch block is optional. The following program is perfectly
legal in C#.

//C#: Exception Handling
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 0;
try
{
div = 100/x;
Console.WriteLine("Not executed line");
}
finally
{
Console.WriteLine("Finally Block");
}
Console.WriteLine("Result is {0}",div);
}
}

But in this case, since there is no catch block to handle the exception, execution will
be terminated. Before the program terminates, however, the statements inside the
finally block will be executed. In C#, a try block must be followed by either a catch
block or a finally block.

Multiple Catch Blocks

A try block can throw multiple exceptions, which can be handled by using multiple catch
blocks. Remember that a more specialized catch block should come before a
generalized one; otherwise the compiler will show a compilation error.

//C#: Exception Handling: Multiple catch
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 0;
try
{
div = 100/x;
Console.WriteLine("Not executed line");
}
catch(DivideByZeroException de)
{
Console.WriteLine("DivideByZeroException" );
}
catch(Exception ee)
{
Console.WriteLine("Exception" );
}
finally
{
Console.WriteLine("Finally Block");
}
Console.WriteLine("Result is {0}",div);
}
}

Catching all Exceptions

By providing a catch block without brackets or arguments, we can catch all
exceptions that occur inside a try block. We can even use a catch block with an
Exception-type parameter to catch all exceptions that happen inside the try block, since
in C# all exceptions are directly or indirectly inherited from the Exception class.

//C#: Exception Handling: Handling all exceptions
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 0;
try
{
div = 100/x;
Console.WriteLine("Not executed line");
}
catch
{
Console.WriteLine("Exception");
}
Console.WriteLine("Result is {0}",div);
}
}

The following program handles all exceptions with an Exception object.

//C#: Exception Handling: Handling all exceptions
using System;
class MyClient
{
public static void Main()
{
int x = 0;
int div = 0;
try
{
div = 100/x;
Console.WriteLine("Not executed line");
}
catch(Exception e)
{
Console.WriteLine("Exception");
}
Console.WriteLine("Result is {0}",div);
}
}

Throwing an Exception

In C#, it is possible to throw an exception programmatically. The 'throw' keyword is
used for this purpose. The general form of throwing an exception is as follows.

throw exception_obj;

For example, the following statement throws an ArgumentException explicitly.

throw new ArgumentException("Exception");

//C#: Exception Handling:
using System;
class MyClient
{
public static void Main()
{
try
{
throw new DivideByZeroException("Invalid Division");
}
catch(DivideByZeroException e)
{
Console.WriteLine("Exception" );
}
Console.WriteLine("LAST STATEMENT");
}
}

Re-throwing an Exception

An exception caught inside a catch block can be re-thrown to a higher
context by using the keyword throw inside the catch block. The following program
shows how to do this.

//C#: Exception Handling: Re-throwing an exception
using System;
class MyClass
{
public void Method()
{
try
{
int x = 0;
int sum = 100/x;
}
catch(DivideByZeroException e)
{
throw;
}
}
}
class MyClient
{
public static void Main()
{
MyClass mc = new MyClass();
try
{
mc.Method();
}
catch(Exception e)
{
Console.WriteLine("Exception caught here" );
}
Console.WriteLine("LAST STATEMENT");
}
}

Standard Exceptions

There are two types of exceptions: exceptions generated by an executing program
and exceptions generated by the common language runtime. System.Exception is
the base class for all exceptions in C#. Several exception classes inherit from this
class, including ApplicationException and SystemException. These two classes form
the basis for most other runtime exceptions. Other exceptions that derive directly
from System.Exception include IOException, WebException etc.

The common language runtime throws SystemExceptions. ApplicationException is
thrown by a user program rather than by the runtime. SystemException includes
ExecutionEngineException, StackOverflowException etc. It is not recommended
that we catch SystemExceptions, nor is it good programming practice to throw
SystemExceptions in our applications.

• System.OutOfMemoryException
• System.NullReferenceException
• System.InvalidCastException
• System.ArrayTypeMismatchException
• System.IndexOutOfRangeException
• System.ArithmeticException
• System.DivideByZeroException
• System.OverflowException

User-defined Exceptions

In C#, it is possible to create our own exception classes. But Exception must be the
ultimate base class for all exceptions in C#, so user-defined exception classes
must inherit from either the Exception class or one of its standard derived classes.

//C#: Exception Handling: User defined exceptions


using System;
class MyException : Exception
{
public MyException(string str) : base(str) // pass the message to the base Exception
{
Console.WriteLine("User defined exception");
}
}
class MyClient
{
public static void Main()
{
try
{
throw new MyException("RAJESH");
}
catch(Exception e)
{
Console.WriteLine("Exception caught here" + e.ToString());
}
Console.WriteLine("LAST STATEMENT");
}
}

Design Guidelines

Exceptions should be used to communicate exceptional conditions. Don't use them to
communicate events that are expected, such as reaching the end of a file. If there's
a good predefined exception in the System namespace that describes the exception
condition, one that will make sense to the users of the class, use that one rather than
defining a new exception class, and put specific information in the message. Finally,
if code catches an exception that it isn't going to handle, consider whether it should
wrap that exception with additional information before re-throwing it.
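As a sketch of that last guideline, an exception can be wrapped by passing it as the inner exception of a more descriptive one. The ParsePort method and its messages below are my own illustration, not code from the article:

```csharp
using System;

class ConfigLoader
{
    public static int ParsePort(string text)
    {
        try
        {
            return int.Parse(text);
        }
        catch (FormatException e)
        {
            // Wrap the low-level exception with context before re-throwing;
            // the original exception is preserved in InnerException
            throw new ArgumentException("Invalid port value: '" + text + "'", e);
        }
    }
}
```

A caller that catches the ArgumentException can still inspect its InnerException property to see the original FormatException.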

C# GENERICS

What Are C# Generics?

By Bradley L. Jones, June 30, 2003

Generics are one of the new features that Microsoft has proposed be added to the C#
language. While not a part of the current C# specifications as defined by ECMA or ISO,
they could be in the future.

Generics are used to help make the code in your software components much more
reusable. They are a type of data structure that contains code that remains the same;
however, the data type of the parameters can change with each use. Additionally, the
usage within the data structure adapts to the different data type of the passed variables. In
summary, a generic is a code template that can be applied to use the same code
repeatedly. Each time the generic is used, it can be customized for different data types
without needing to rewrite any of the internal code.

While generics would be new, the functionality that is provided by them can be obtained
in C# today. This functionality is done by using type casts and polymorphism. With
generics, however, you can avoid the messy and intensive conversions from reference
types to native types. Additionally, you can create routines that are much more type-safe.

A generic is defined using a slightly different notation. The following is the basic code
for a generic named Compare that can compare two items of the same type and return the
larger or smaller value, depending on which method is called:


public class Compare<ItemType>
{
public ItemType Larger(ItemType data, ItemType data2)
{
// logic...
}

public ItemType Smaller(ItemType data, ItemType data2)


{
// logic...
}
}
This generic could be used with any data type, ranging from basic data types such as
integers to complex classes and structures. When you use the generic, you identify what
data type you are using with it. For example, to use an integer with the previous Compare
generic, you would enter code similar to the following:

Compare<int> compare = new Compare<int>();
int MyInt = compare.Larger(3, 5);

You could use the type with other types as well. One thing to be aware of is that a
declared generic, such as Compare in the previous example, is strongly typed. This means
that, if you pass a data type other than an integer to compare.Larger, the compiler will
display an error. If you wanted to use a different data type, you would need to declare
another instance of the generic:

Compare<float> f_compare = new Compare<float>();
float MyFloat = f_compare.Larger(1.23f, 4.32f);

Because you can use this with different types, you don't need to change the original
generic code.

The example here is a simplification of what can be done with generics. You will find
that, to truly create a generic type that can be used with any data type as a parameter, you
will need to ensure that a number of requirements are met. One way to do this, the
appropriate way, is with a constraint. A constraint is a class or interface that must be
included as a part of the type used for the parameter. For example, in the previous
Compare class, to make sure that any data type will work as a parameter when declaring
the generic, you can force the data types to have implemented the IComparable interface
from the .NET Framework.

You can add a constraint by including it after the generic class declaration. You indicate a
constraint by using the proposed new C# keyword where:

public class Compare<ItemType>
where ItemType : IComparable
{
public ItemType Larger(ItemType data, ItemType data2)
{
// logic...
}

public ItemType Smaller(ItemType data, ItemType data2)


{
// logic...
}
}
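To make this concrete, the `// logic...` bodies can be filled in using the CompareTo method that the IComparable constraint guarantees. This is one possible implementation, not code from the article:

```csharp
using System;

public class Compare<ItemType>
    where ItemType : IComparable
{
    public ItemType Larger(ItemType data, ItemType data2)
    {
        // CompareTo returns a positive number when data sorts after data2
        return data.CompareTo(data2) > 0 ? data : data2;
    }

    public ItemType Smaller(ItemType data, ItemType data2)
    {
        return data.CompareTo(data2) < 0 ? data : data2;
    }
}
```

new Compare<int>().Larger(3, 5) returns 5, and because int, string, DateTime, and many other types implement IComparable, the same class works for all of them.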

C# FILE HANDLING
Introduction

Text files provide a common-denominator format that both people and programs can
read and understand. The .NET Framework includes convenience classes that make
reading and writing text files very easy. The following sequence outlines the basic steps
necessary to work with text files:

1. Open the file
2. Read/Write to the file
3. Close the file

It's that simple. Listing 1 shows how to write text data to a file.

Writing to a Text File


Listing 1: Writing Text Data to a File: TextFileWriter.cs

using System;
using System.IO;

namespace csharp_station.howto
{
class TextFileWriter
{
static void Main(string[] args)
{
// create a writer and open the file
TextWriter tw = new StreamWriter("date.txt");

// write a line of text to the file
tw.WriteLine(DateTime.Now);

// close the stream
tw.Close();
}
}
}

This program creates a text file when it runs. In the directory where the executable
program is located, you'll find a file named date.txt. If you view the contents of this file,
you'll see the following textual representation of the date and time when the program last
ran:

2/15/2002 8:54:51 PM

The first task in Listing 1 is to open the file. This happens by instantiating a
StreamWriter, which is assigned to a variable of type TextWriter. The result
could also have been assigned to a StreamWriter instance. The StreamWriter
was called with a single parameter, indicating the name of the file to open. If
this file doesn't exist, the StreamWriter will create it. The StreamWriter also
has six other constructor overloads that permit you to specify the file in
different ways, buffer info, and text encoding. Here's the line that opens the
date.txt file:

TextWriter tw = new StreamWriter("date.txt");

Using the TextWriter instance, tw, you can write text info to the file. The
example writes the text for the current date and time, using the static Now
property of the DateTime class. Here's the line from the code:

tw.WriteLine(DateTime.Now);

When you're done writing to the file, be sure to close it as follows:

tw.Close();

Reading From a Text File

Listing 2 shows how to read from a text file:

Listing 2: Reading Text Data from a File: TextFileReader.cs

using System;
using System.IO;

namespace csharp_station.howto
{
class TextFileReader
{
static void Main(string[] args)
{
// create reader & open file
TextReader tr = new StreamReader("date.txt");

// read a line of text
Console.WriteLine(tr.ReadLine());

// close the stream
tr.Close();
}
}
}

In Listing 2, the text file is opened in a manner similar to the method used in Listing 1,
except it uses a StreamReader class constructor to create an instance of a TextReader. The
StreamReader class includes additional overloads that allow you to specify the file in
different ways, text format encoding, and buffer info. This program opens the date.txt
file, which should be in the same directory as the executable file:
TextReader tr = new StreamReader("date.txt");

Within a Console.WriteLine statement, the program reads a line of text from the file,
using the ReadLine() method of the TextReader instance. The TextReader class also
includes methods that allow you to invoke the Read() method to read one or more
characters, or use the Peek() method to see what the next character is without pulling it
from the stream. Here's the code that reads an entire line from the text file:

Console.WriteLine(tr.ReadLine());

When done reading, you should close the file as follows:

tr.Close();
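A refinement worth knowing (not used in the listings above) is the using statement, which closes the stream automatically even if an exception is thrown. The helper method below is my own packaging of the same write-then-read sequence:

```csharp
using System;
using System.IO;

class TextFileDemo
{
    // Writes the current date to the file, then reads the line back
    public static string WriteAndRead(string path)
    {
        // The using statement calls Dispose() (which closes the file)
        // when the block exits, even if an exception occurs inside it
        using (TextWriter tw = new StreamWriter(path))
        {
            tw.WriteLine(DateTime.Now);
        }

        using (TextReader tr = new StreamReader(path))
        {
            return tr.ReadLine();
        }
    }
}
```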

The following table describes some commonly used classes in the System.IO
namespace.

Class Name      Description

FileStream      It is used to read from and write to any location within a file
BinaryReader    It is used to read primitive data types from a binary stream
BinaryWriter    It is used to write primitive data types in binary format
StreamReader    It is used to read characters from a byte stream
StreamWriter    It is used to write characters to a stream
StringReader    It is used to read from a string buffer
StringWriter    It is used to write into a string buffer
DirectoryInfo   It is used to perform operations on directories
FileInfo        It is used to perform operations on files
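As a brief sketch of the binary classes from the table, BinaryWriter and BinaryReader can round-trip primitive values through a file; note that values must be read back in the order they were written. The file name and values here are illustrative:

```csharp
using System.IO;

class BinaryDemo
{
    // Writes an int and a double, then reads them back in the same order
    public static bool RoundTrip(string path)
    {
        using (BinaryWriter bw = new BinaryWriter(File.Open(path, FileMode.Create)))
        {
            bw.Write(42);    // int: stored as 4 bytes
            bw.Write(3.14);  // double: stored as 8 bytes
        }

        using (BinaryReader br = new BinaryReader(File.Open(path, FileMode.Open)))
        {
            int i = br.ReadInt32();      // read in the order written
            double d = br.ReadDouble();
            return i == 42 && d == 3.14;
        }
    }
}
```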

Reading and writing in the text file

StreamWriter Class

The StreamWriter class is inherited from the abstract class TextWriter. The
TextWriter class represents a writer, which can write a series of characters.

The following table describes some of the methods used by the StreamWriter class.

Methods     Description

Close       Closes the current StreamWriter object and the underlying stream
Flush       Clears all buffers for the current writer and causes any buffered
            data to be written to the underlying stream
Write       Writes to the stream
WriteLine   Writes data specified by the overloaded parameters, followed by
            an end-of-line marker

INHERITANCE AND POLYMORPHISM

Inheritance in C#

This article discusses inheritance concepts in the context of C#. Before we understand
inheritance in C#, it is important to understand the key players involved, viz. objects,
classes and structs.

Classes and structs are 'blueprints' or templates from which we instantiate (create)
objects. For example, a car may be created based on its blueprint: the car is the object
and the blueprint is the class (or template).

What are types?

An object can be of one of the following types: class or struct. There are many
differences between the two types. The main difference between the two is the way in
which they are stored in memory and the way they are accessed. Classes are also called
reference types; structs are known as value types. Classes are stored in a memory space
called the 'heap' and structs are stored in a memory space known as the 'stack'.

Constructors:

In C# (like other object-oriented languages), a constructor is a method having the same
name as the class. The constructor is called when the object is being created. It can have
one or more parameters.
Interfaces:

In the context of C#, an interface provides a contract. A class that is derived from this
interface will implement the functions specified by the interface.
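A minimal sketch of that contract idea (the IShape interface and Rectangle class are my own illustration):

```csharp
// An interface declares signatures only; it carries no implementation
interface IShape
{
    double Area();
}

// A class deriving from the interface must implement every member it declares
class Rectangle : IShape
{
    private double width, height;

    public Rectangle(double w, double h)
    {
        width = w;
        height = h;
    }

    public double Area()
    {
        return width * height;
    }
}
```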

Inheritance:

C# supports two types of inheritance mechanisms:

1) Implementation Inheritance
2) Interface Inheritance

What is Implementation Inheritance?

- When a class (type) is derived from another class (type) such that it inherits all the
members of the base type, it is Implementation Inheritance.

What is Interface Inheritance?

- When a type (class or struct) inherits only the signatures of the functions from another
type, it is Interface Inheritance.

In general, classes can be derived from another class, and hence support implementation
inheritance. At the same time, classes can also be derived from one or more interfaces,
and hence support interface inheritance. Structs can derive from one or more interfaces,
and hence support interface inheritance. Structs cannot be derived from another class;
they are always derived from System.ValueType.

Multiple Inheritance:

C# does not support multiple implementation inheritance: a class cannot be derived from
more than one class. However, a class can be derived from multiple interfaces.

Inheritance Usage Example:

Here is a syntax example for using implementation inheritance:

class derivedClass : baseClass
{
}

derivedClass is derived from baseClass.

Interface Inheritance example:

class derivedClass : baseClass, InterfaceX, InterfaceY
{
}

derivedClass is now derived from the interfaces InterfaceX and InterfaceY.

Similarly, a struct can be derived from any number of interfaces:

struct childStruct : InterfaceX, InterfaceY
{
}

Virtual Methods:
If a function or a property in the base class is declared as virtual, it can be overridden in
any derived class.

Usage Example:

class baseClass
{
public virtual int fnCount()
{
return 10;
}
}
class derivedClass :baseClass
{
public override int fnCount()
{
return 100;
}
}

This is useful because the compiler verifies that the 'override' function has the same
signature as the virtual function.

Hiding Methods:
In a scenario similar to the one above, if methods are declared in a derived class and a
base class with the same signature but without the keywords virtual and override, the
derived class method hides the base class method.
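A sketch of that hiding scenario: without virtual and override, the derived method hides the base method, and C# expects the new modifier to make the hiding explicit (class names reuse the example above):

```csharp
class baseClass
{
    public int fnCount()
    {
        return 10;
    }
}

class derivedClass : baseClass
{
    // "new" hides baseClass.fnCount rather than overriding it
    public new int fnCount()
    {
        return 100;
    }
}
```

Unlike the virtual case, calling fnCount() through a baseClass reference still returns 10, even when the object is really a derivedClass.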

Lesson 9: Polymorphism

This lesson teaches about Polymorphism in C#. Our objectives are as follows:

• Learn What Polymorphism Is.
• Implement a Virtual Method.
• Override a Virtual Method.
• Use Polymorphism in a Program.

Another primary concept of object-oriented programming is Polymorphism. It allows you
to invoke derived class methods through a base class reference during run-time. This is
handy when you need to assign a group of objects to an array and then invoke each of
their methods. They won't necessarily have to be the same object type. However, if
they're related by inheritance, you can add them to the array as the inherited type. Then if
they all share the same method name, that method of each object can be invoked. This
lesson will show you how to accomplish this.

Listing 9-1. A Base Class With a Virtual Method: DrawingObject.cs

using System;

public class DrawingObject
{
public virtual void Draw()
{
Console.WriteLine("I'm just a generic drawing object.");
}
}
Listing 9-1 shows the DrawingObject class. This will be the base class for other objects
to inherit from. It has a single method named Draw(). The Draw() method has a virtual
modifier. The virtual modifier indicates to derived classes that they can override this
method. The Draw() method of the DrawingObject class performs a single action of
printing the statement, "I'm just a generic drawing object.", to the console.

Listing 9-2. Derived Classes With Override Methods: Line.cs, Circle.cs, and
Square.cs

using System;

public class Line : DrawingObject
{
public override void Draw()
{
Console.WriteLine("I'm a Line.");
}
}

public class Circle : DrawingObject
{
public override void Draw()
{
Console.WriteLine("I'm a Circle.");
}
}

public class Square : DrawingObject
{
public override void Draw()
{
Console.WriteLine("I'm a Square.");
}
}

Listing 9-2 shows three classes. These classes inherit the DrawingObject class. Each
class has a Draw() method, and each Draw() method has an override modifier. The
override modifier allows a method to override the virtual method of its base class at
run-time. The override will happen only if the class is referenced through a base class
reference. Overriding methods must have the same signature (name and parameters) as
the virtual base class method they override.

Listing 9-3. Program Implementing Polymorphism: DrawDemo.cs

using System;

public class DrawDemo
{
public static int Main( )
{
DrawingObject[] dObj = new DrawingObject[4];
dObj[0] = new Line();
dObj[1] = new Circle();
dObj[2] = new Square();
dObj[3] = new DrawingObject();

foreach (DrawingObject drawObj in dObj)
{
drawObj.Draw();
}

return 0;
}
}

Listing 9-3 shows a program that uses the classes defined in Listing 9-1 and Listing 9-2.
This program implements polymorphism. In the Main() method of the DrawDemo class,
there is an array being created. The type of object in this array is the DrawingObject
class. The array is named dObj and is being initialized to hold four objects of type
DrawingObject.

Next the dObj array is initialized. Because of their inheritance relationship with the
DrawingObject class, the Line, Circle, and Square classes can be assigned to the dObj
array. Without this capability, you would have to create an array for each type.
Inheritance allows derived objects to act like their base class, which saves work.

After the array is initialized, there is a foreach loop that looks at each element of the
array. Within the foreach loop the Draw() method is invoked on each element of the
dObj array. Because of polymorphism, the run-time type of each object is invoked. The
type of the reference object from the dObj array is a DrawingObject. However, that
doesn't matter because the derived classes override the virtual Draw() method of the
DrawingObject class. This makes the overridden Draw() methods of the derived classes
execute when the Draw() method is called using the DrawingObject base class reference
from the dObj array. Here's what the output looks like:

Output:

I'm a Line.
I'm a Circle.
I'm a Square.
I'm just a generic drawing object.

The override Draw() method of each derived class executes as shown in the DrawDemo
program. The last line is from the virtual Draw() method of the DrawingObject class.
This is because the actual run-time type of the fourth array element was a DrawingObject
object.

The code in this lesson can be compiled with the following command line:

csc DrawDemo.cs DrawingObject.cs Circle.cs Line.cs Square.cs
It will create the file DrawDemo.exe, which defaulted to the name of the first file on the
command line.

Summary

You should now have a basic understanding of polymorphism. You know how to define a
virtual method. You can implement a derived class method that overrides a virtual
method. This relationship between virtual methods and the derived class methods that
override them enables polymorphism. This lesson showed how to use this relationship
between classes to implement polymorphism in a program.

DATABASE PROGRAMMING

Database Programming in C# with MySQL: Using OleDB

Persisting the data processed by an application has become the norm. The data can be
stored either in a file system using normal files or in databases. The functionality
provided by database packages makes them a more attractive proposition. With the advent
of open source database products such as MySQL, the use of databases for data
persistence has become more or less ubiquitous. Hence, no language or platform can
ignore the need to provide libraries to access databases, especially MySQL, and .Net as a
platform and C# as a language are no exceptions.

There are three main Data Providers, as the database access APIs are known in .Net: the
SQL Data Provider, the OleDB Data Provider and the ODBC Data Provider. Of these I
will be focusing on the OleDB Data Provider and using it to work with a MySQL
database. The first and second sections of this article will provide insight into the various
APIs that form the OleDB. The third section will detail the steps required to access
MySQL using OleDB. In the last section, I will develop a real-world application that
implements the theory provided in the first three sections. That's the outline for this
discussion.

OleDB: What is it?

OleDB is one of the three Data Providers supported by .Net. It is part of the System.Data
namespace; specifically, all the classes of OleDB come under the System.Data.OleDb
namespace. OleDB had been around before .Net came into the picture. The OleDB
Provider provides a mechanism for accessing the OleDB data source (databases that
could be connected through OleDB) in the managed space of .Net. In essence, the OleDB
Data Provider sits between a .Net-based application and OleDB. The main classes that
form the OleDB Data Provider are:

1. OleDbConnection
2. OleDbCommand
3. OleDbDataAdapter
4. OleDbDataReader

Most of the classes are arranged in a hierarchical manner, that is, one provides an
instance of the other. For example, OleDbCommand provides an instance of
OleDbDataReader.

OleDbConnection represents a connection with a data source such as a database server.
Each connection represented by OleDbConnection's instance is unique. When an instance
of OleDbConnection is created, all its attributes are given or set to their default values. If
the underlying OleDB Provider doesn't support certain properties or methods, the
corresponding properties and methods of OleDbConnection will be disabled. To create an
instance of OleDbConnection, its constructor has to be called with a connection string.
The connection string specifies the parameters needed to connect with the data source.
The following statement shows an example of this:

OleDbConnection conn = new OleDbConnection(
"Provider=MySqlProv;" +
"Data Source=localhost;" +
"User id=UserName;" +
"Password=Secret;"
);

The above example provides a connection to a MySQL server on the local machine.

OleDbCommand represents a command to be executed against a data source connected
through an OleDbConnection instance. In the context of databases the command can be a
SQL statement or a stored procedure. To get an instance of OleDbCommand, its
constructor has to be called with an instance of the OleDbConnection class and the string
containing the SQL query to be executed. For example, the following statement creates
an instance of an OleDbCommand named command:

string queryString = "SELECT OrderID, CustomerID FROM Orders";
OleDbCommand command = new OleDbCommand(queryString, conn);

OleDbDataAdapter represents a set of commands and a connection that is used to fill a
DataSet. In other words it is a bridge between a DataSet and the data source to retrieve
and update the data. The constructor of the OleDbDataAdapter needs to be called with a
SQL select statement and an OleDbConnection instance. To cite an example,
the following creates an instance of an OleDbDataAdapter named adapter:

OleDbDataAdapter adapter = new OleDbDataAdapter(queryString, conn);

OleDbDataReader provides a mechanism for reading a forward-only stream of rows
and columns from the data source. To obtain an instance of OleDbDataReader, the
ExecuteReader() method of OleDbCommand has to be called. The following statement
does so:

OleDbDataReader reader = command.ExecuteReader();

Keep in mind that, while OleDbDataReader is being used, the corresponding connection
will be busy, as it uses a stream to communicate with the data source.

Since the main classes have been discussed, the next step involves understanding how
MySQL and the OleDB Data Provider link with each other. The OleDB Data
Provider calls the underlying OleDB Provider. So it is the OleDB Provider that
communicates with the data source. For each database system, the OleDB Provider has to
be provided by the vendor of the database.

In this case the vendor is MySQL. Hence, unless MySQL provides the OleDB Provider,
the OleDB Data Provider won't be able to communicate with the database server. The
Provider supplied by MySQL has to be registered with .NET so that the OleDB Data
Provider can call it. Another term for the OleDB Provider is database driver; in the
case of MySQL it is also known as the MySQL Connector. Next I will discuss the steps
required to access MySQL.
Accessing MySQL, Step by Step

As the Data Provider in this discussion is OleDB, the steps required to access MySQL
aren't any different from those for any other database. The point of difference comes in
the connection string. Let's look at the steps:

1. Creating the Connection object
2. Instantiating the Command object
3. Obtaining the DataReader object
4. Retrieving the records

The connection string comes into the picture in the first step. It is the connection string
that decides which underlying Driver has to be called.

Creating the Connection object

Creating a connection object really means obtaining an instance of the OleDbConnection
class. The constructor takes the connection string as a parameter. The connection string is
composed of the following:

1. Provider: specifies the vendor of the driver. In the case of MySQL, the
value would be MySqlProv.
2. Data Source: the name of the machine on which the server resides.
3. User Id: the user name with which to connect to the database.
4. Password: the password with which to connect to the database.

The connection string is a collection of name-value pairs separated by semicolons. For
example, to connect to a database at localhost with the user name root and no
password, the connection string would be:

string strConnect = "Provider=MySqlProv;" +
    "Data Source=localhost;" +
    "User id=root;" +
    "Password=;";

An OleDbConnection instance can then be obtained by using the connection string:

OleDbConnection conn = new OleDbConnection(strConnect);
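As an aside, rather than concatenating the pairs by hand, the connection string can also be composed with the OleDbConnectionStringBuilder class, which keeps the name-value pairs well formed. A minimal sketch, reusing the MySqlProv provider name and root/no-password values assumed in the example above:

```csharp
using System;
using System.Data.OleDb;

class ConnectionStringDemo
{
    static void Main()
    {
        // Compose the same name-value pairs shown above,
        // letting the builder handle the semicolon separators.
        var builder = new OleDbConnectionStringBuilder();
        builder.Provider = "MySqlProv";
        builder.DataSource = "localhost";
        builder["User id"] = "root";
        builder["Password"] = "";

        Console.WriteLine(builder.ConnectionString);
    }
}
```

The resulting string can be passed to the OleDbConnection constructor exactly like the hand-built one.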

Instantiating the Command Object

The next step in accessing MySQL is creating an instance of the OleDbCommand class
so that a SQL statement can be executed against the database. To obtain an instance of
OleDbCommand, its constructor needs the SQL statement to be executed and the
connection through which the database can be reached. For example, the following
statements create an instance of OleDbCommand named command:

string strSQL = "SELECT * FROM user_master";
OleDbCommand command = new OleDbCommand(strSQL, conn);

Obtaining the Data Reader Object

The next step is to retrieve the result. For that, a stream is required that fetches data from
the database. This requirement is met by obtaining an object of OleDbDataReader.
As discussed in the first section, it is a forward-only stream through which the rows and
columns returned by the executed command can be read. To get an instance of
OleDbDataReader, we use the ExecuteReader() method of the OleDbCommand instance.
Accordingly, to get an instance named reader, the statement would be:

OleDbDataReader reader = command.ExecuteReader();

Retrieving the records

The records can be retrieved using the Read() method of OleDbDataReader. It returns true
if more records are available, and false otherwise. To access a specific column,
use the GetString() method of OleDbDataReader, which takes the column number as its
argument. For example, the following code block reads the value of the second column of
each row (columns are zero-indexed):

while( reader.Read())
Console.WriteLine(reader.GetString(1));

For extracting data from columns having a type other than varchar, the OleDB Data
Provider supplies typed accessor methods (such as GetInt32() and GetDateTime()) that
map SQL types to .NET types.

That brings us to the end of this section. In the next section, I will develop a small
application that will use the MySQL OleDB Data Provider to access a MySQL database
server.
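Putting the four steps together, the whole flow, connect, command, reader, read, can be sketched as below. This is a sketch only: the user_master table and the root/no-password credentials are assumptions carried over from the examples above, it requires the MySQL OleDB Provider (MySQL Connector) to be installed, and note that the connection must be opened explicitly before ExecuteReader() is called, a step the text above leaves implicit.

```csharp
using System;
using System.Data.OleDb;

class UserMasterReader
{
    static void Main()
    {
        string strConnect = "Provider=MySqlProv;" +
                            "Data Source=localhost;" +
                            "User id=root;" +
                            "Password=;";

        // Step 1: create the connection object and open it.
        using (OleDbConnection conn = new OleDbConnection(strConnect))
        {
            conn.Open();

            // Step 2: instantiate the command for the SQL statement.
            OleDbCommand command =
                new OleDbCommand("SELECT * FROM user_master", conn);

            // Step 3: obtain the forward-only data reader.
            using (OleDbDataReader reader = command.ExecuteReader())
            {
                // Step 4: retrieve the records row by row.
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(1));
                }
            }
        } // the using block closes the connection automatically
    }
}
```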
MODULE 7 WEB APPLICATIONS IN ASP.NET

In the Internet world, Web servers serve resources that have been put on the Internet and
provide other services like security, logging, etc.

At the beginning of the Internet era, clients' needs were very limited; .htm files were
often satisfactory. As time passed, however, clients' requirements extended beyond the
functionality contained in .htm files or static files.

Developers needed a way to extend or complement the functionality of Web servers. Web
server vendors devised different solutions, but they all followed a common theme: "Plug
some component into the Web server". All Web server complement technologies allowed
developers to create and plug in components for enhancing Web server functionality.
Microsoft came up with ISAPI (Internet Server API); Netscape came up with NSAPI
(Netscape Server API), etc.

ISAPI is an important technology that allows us to enhance the capabilities of an ISAPI-
compliant Web server (IIS is an ISAPI-compliant Web server). The following
components serve this purpose:

• ISAPI Extensions
• ISAPI Filters

ISAPI extensions are implemented by using Win32 DLLs. You can think of an ISAPI
extension as a normal application. ISAPI extensions are the target of http requests. That
means you must call them to activate them. For example, the following URL calls the
store.dll ISAPI extension and passes two values to it:

http://www.myownwebsite.com/Store.dll?sitename=15seconds&location=USA

Think of an ISAPI filter as just that: a filter. It sits between your Web server and the
client. Every time a client makes a request to the server, it passes through the filter.

Clients do not specifically target the filter in their requests; rather, clients simply send
requests to the Web server, and the Web server then passes the request to the interested
filters.

Filters can then modify the request, perform some logging operations, etc.

It was very difficult to implement these components because of the complexities
involved. One had to use C/C++ to develop them, and for many, development in
C/C++ is a pain.
So what does ASP.NET offer to harness this functionality? ASP.NET offers
HttpHandlers and HttpModules.

Before going into the details of these components, it is worth looking at the flow of an http
request as it passes through the HTTP modules and HTTP handlers.

Setting up the Sample Applications

I have created the following C# projects, which demonstrate different components of the
application.

• NewHandler (HTTP handler)
• Webapp (demonstrates the HTTP handler)
• SecurityModules (HTTP module)
• Webapp2 (demonstrates the HTTP module)

To install the applications:

• Extract all the code from the attached zip file.
• Create two virtual directories named webapp and webapp2; point these directories
to the actual physical folders for the Webapp and Webapp2 web applications.
• Copy the NewHandler.dll file from the NewHandler project into the bin directory
of the webapp web application.
• Copy the SecurityModules.dll file from the SecurityModules project into the bin
directory of the webapp2 web application.

ASP.NET Request Processing

ASP.NET request processing is based on a pipeline model in which ASP.NET passes http
requests to all the modules in the pipeline. Each module receives the http request and has
full control over it. The module can play with the request in any way it sees fit. Once the
request passes through all of the HTTP modules, it is eventually served by an HTTP
handler. The HTTP handler performs some processing on it, and the result again passes
through the HTTP modules in the pipeline.

The following figure describes this flow.


Notice that during the processing of an http request, only one HTTP handler will be
called, whereas more than one HTTP module can be called.
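For contrast with modules, a minimal HTTP handler, the component that ultimately serves the request, looks like the sketch below. The class name and response text are illustrative and are not part of the sample projects listed above.

```csharp
using System.Web;

// A minimal HTTP handler: exactly one handler serves each request.
public class HelloHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Served by HelloHandler");
    }

    // True tells ASP.NET it may reuse this instance across requests.
    public bool IsReusable
    {
        get { return true; }
    }
}
```

Once compiled and mapped to a URL or extension in configuration, requests to that URL are routed to ProcessRequest after passing through the module pipeline.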

Task 1: Developing a Module using .NET

In this task, we examine the development of an authentication module that supports the
HTTP Basic authentication scheme. This module was developed using the standard
ASP.NET module pattern available since ASP.NET v1.0. The same pattern is used to
build ASP.NET modules that extend the IIS 7.0 server. In fact, existing ASP.NET
modules written for previous versions of IIS can be used on IIS 7.0, and take advantage
of better ASP.NET integration to provide more power to the web applications which
use them.

Note: The full code for the module is provided in Appendix A.

A managed module is a .NET class that implements the System.Web.IHttpModule
interface. The primary function of this class is to register for one or more events that
occur within the IIS 7.0 request-processing pipeline, and then perform some useful work
when IIS 7.0 invokes the module's event handlers for those events.

Let's create a new source file named "BasicAuthenticationModule.cs", and create the
module class (the complete source code is provided in Appendix A). Note that the
interface methods must be declared public:

public class BasicAuthenticationModule : System.Web.IHttpModule
{
    public void Init(HttpApplication context)
    {
    }

    public void Dispose()
    {
    }
}

The primary function of the Init method is wiring the module's event handler methods to
the appropriate request pipeline events. The module's class provides the event handler
methods, which implement the desired functionality provided by the module. This is
discussed in further detail below.

The Dispose method is used to clean up any module state when the module instance is
discarded. It is typically left empty unless the module holds specific resources that
need to be released.

Init()

After creating the class, the next step is to implement the Init method. The only
requirement is to register the module for one or more request pipeline events. Wire up
module methods, which follow the System.EventHandler delegate signature, to the
desired pipeline events exposed on the provided System.Web.HttpApplication instance:

public void Init(HttpApplication context)
{
    //
    // Subscribe to the authenticate event to perform the
    // authentication.
    //
    context.AuthenticateRequest +=
        new EventHandler(this.AuthenticateUser);

    //
    // Subscribe to the EndRequest event to issue the
    // challenge if necessary.
    //
    context.EndRequest +=
        new EventHandler(this.IssueAuthenticationChallenge);
}

The AuthenticateUser method is invoked on every request during the
AuthenticateRequest event. We use it to authenticate the user based on the credential
information present in the request.

The IssueAuthenticationChallenge method is invoked on every request during the
EndRequest event. It is responsible for issuing a basic authentication challenge back to
the client whenever the authorization module rejects a request and authentication is
needed.
AuthenticateUser()

Implement the AuthenticateUser method. This method does the following:

• Extracts the basic credentials, if present, from the incoming request headers. To see
the implementation of this step, see the ExtractBasicAuthenticationCredentials
utility method.
• Attempts to validate the provided credentials via Membership (using the default
membership provider configured). To see the implementation of this step, see the
ValidateCredentials utility method.
• Creates a user principal identifying the user if authentication is successful, and
associates it with the request.

At the end of this processing, if the module was successfully able to obtain and validate
the user credentials, it will produce an authenticated user principal that other modules and
application code later use in access control decisions. For example, the URL
authorization module examines the user in the next pipeline event in order to enforce the
authorization rules configured by the application.
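The credential-extraction step can be sketched as a small pure helper: it takes the value of the Authorization header, checks for the Basic scheme, base64-decodes the payload, and splits it into user name and password at the first colon. This is an illustrative version of the ExtractBasicAuthenticationCredentials utility referenced above, not the sample's exact code.

```csharp
using System;
using System.Text;

public static class BasicAuthHelper
{
    // Parses an Authorization header value of the form
    // "Basic base64(username:password)". Returns false if the
    // header is absent or not a well-formed Basic credential.
    public static bool TryExtractBasicCredentials(
        string authorizationHeader,
        out string username,
        out string password)
    {
        username = null;
        password = null;

        if (string.IsNullOrEmpty(authorizationHeader) ||
            !authorizationHeader.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
        {
            return false;
        }

        string encoded = authorizationHeader.Substring("Basic ".Length).Trim();
        string decoded;
        try
        {
            decoded = Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
        }
        catch (FormatException)
        {
            return false; // not valid base64
        }

        // Split at the first colon only: passwords may contain colons.
        int separator = decoded.IndexOf(':');
        if (separator < 0)
        {
            return false;
        }

        username = decoded.Substring(0, separator);
        password = decoded.Substring(separator + 1);
        return true;
    }
}
```

For example, the header value "Basic dGVzdDp0ZXN0" decodes to the user name "test" and the password "test".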

IssueAuthenticationChallenge()

Implement the IssueAuthenticationChallenge method. This method does the following:

• Checks the response status code to determine whether this request was rejected.
• If so, issues a basic authentication challenge header on the response to trigger the
client to authenticate.
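A sketch of that logic follows. The realm string is a hypothetical field (a real module would typically read it from configuration), and the 401 status check reflects the standard convention that authorization failures set HTTP 401.

```csharp
using System;
using System.Web;

public class ChallengeSketch
{
    // Hypothetical realm value; assumed for illustration only.
    private string _realm = "DemoSite";

    // Invoked during EndRequest: if an earlier module rejected the
    // request with 401, attach the Basic challenge header so the
    // client prompts for credentials and retries.
    public void IssueAuthenticationChallenge(object source, EventArgs e)
    {
        HttpApplication app = (HttpApplication)source;
        HttpResponse response = app.Context.Response;

        if (response.StatusCode == 401)
        {
            response.AppendHeader(
                "WWW-Authenticate",
                String.Format("Basic realm=\"{0}\"", _realm));
        }
    }
}
```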

Utility Methods

Implement the utility methods that the module uses, including:

• ExtractBasicAuthenticationCredentials. This method extracts the basic
authentication credentials from the Authorization request header, as specified in the
basic authentication scheme.
• ValidateCredentials. This method attempts to validate user credentials by using
Membership. The Membership API abstracts the underlying credential store, and
allows the credential store implementations to be configured by adding or
removing Membership providers through configuration.

Note: In this sample, the Membership validation is commented out, and instead the
module simply checks whether the username and password are both equal to the string
"test". This is done for clarity, and is not intended for production deployments. You are
invited to enable Membership-based credential validation by simply un-commenting the
Membership code inside ValidateCredentials, and configuring a Membership provider for
your application. See Appendix C for more information.
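As described in the note, the sample's ValidateCredentials reduces to a hard-coded check, with the Membership call left commented out. A sketch of that shape:

```csharp
public static class CredentialValidator
{
    // Sample-only validation: accepts the single test/test pair.
    // In a real deployment, un-comment the Membership call and
    // delete the hard-coded comparison.
    public static bool ValidateCredentials(string username, string password)
    {
        // return System.Web.Security.Membership.ValidateUser(username, password);
        return username == "test" && password == "test";
    }
}
```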
Task 2: Deploy the module to the application

After creating the module in the first task, we next add it to the application.

Deploy to Application

First, deploy the module to the application. Here, you have several options:

• Copy the source file containing the module into the /App_Code directory of the
application. This does not require compiling the module - ASP.NET automatically
compiles and loads the module type when the application starts up. Simply save
this source code as BasicAuthenticationModule.cs inside the /App_Code directory
of your application. Do this if you do not feel comfortable with the other steps.
• Compile the module into an assembly, and drop this assembly in the /BIN
directory of the application. This is the most typical option if you only want this
module to be available to this application, and you do not want to ship the source
of the module with your application. Compile the module source file by running
the following from a command line prompt:

<PATH_TO_FX_SDK>csc.exe /out:BasicAuthenticationModule.dll
    /target:library BasicAuthenticationModule.cs

Where <PATH_TO_FX_SDK> is the path to the .NET Framework SDK that
contains the CSC.EXE compiler.

• Compile the module into a strongly named assembly, and register this assembly in
the GAC. This is a good option if you want multiple applications on your machine
to use this module. To learn more, see the MSDN documentation on building
strongly named assemblies.

Before making configuration changes in the application's web.config file, we must unlock
some of the configuration sections that are locked at the server level by default. Run the
following from an Elevated command prompt (Start > Right click on Cmd.exe and
choose "Run as Administrator"):

%windir%\system32\inetsrv\APPCMD.EXE unlock config /section:windowsAuthentication
%windir%\system32\inetsrv\APPCMD.EXE unlock config /section:anonymousAuthentication

After running these commands, you will be able to define these configuration sections in
your application's web.config file.

Configure your module to run in the application. Create a new web.config file, which
will contain the configuration necessary to enable and use the new module. Add the text
below and save it to the root of your application
(%systemdrive%\inetpub\wwwroot\web.config if using the root application in the
Default Web Site).
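The configuration listing referred to above is not reproduced in this text. A plausible minimal web.config for the IIS 7.0 integrated pipeline would register the module and adjust the authentication sections that were just unlocked; treat the exact element layout below as an assumption to be checked against the IIS 7.0 configuration reference for your setup.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <!-- Register the managed module in the IIS 7.0 integrated pipeline. -->
    <modules>
      <add name="BasicAuthenticationModule"
           type="BasicAuthenticationModule" />
    </modules>
    <security>
      <authentication>
        <!-- Turn off the built-in Windows scheme so the custom
             module is the authentication mechanism in effect. -->
        <windowsAuthentication enabled="false" />
        <anonymousAuthentication enabled="true" />
      </authentication>
    </security>
  </system.webServer>
</configuration>
```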

ASP.NET PAGE DIRECTIVES

Asp.Net 2.0 Page Directives


ASP.NET page directives are part of every ASP.NET page. Page
directives are instructions, inserted at the top of an ASP.NET page, that control the
behavior of the page. They are a set of mixed settings related to how a page should be
rendered and processed.

Here's an example of a page directive:


<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Sample.aspx.cs"
Inherits="Sample" Title="Sample Page Title" %>

In total there are 11 types of page directives in ASP.NET 2.0. Some directives are very
important, without which we cannot develop any web applications in ASP.NET. Others
are used occasionally according to necessity. When used, directives can be located
anywhere in an .aspx or .ascx file, though standard practice is to include them at the
beginning of the file. Each directive can contain one or more attributes (paired with
values) that are specific to that directive.

The ASP.NET Web Forms page framework supports the following directives:

1. @Page
2. @Master
3. @Control
4. @Register
5. @Reference
6. @PreviousPageType
7. @OutputCache
8. @Import
9. @Implements
10. @Assembly
11. @MasterType

@Page Directive

The @Page directive enables you to specify attributes and values for an ASP.NET page to
be used when the page is parsed and compiled. Every .aspx file should include this
@Page directive in order to execute. There are many attributes belonging to this
directive; we shall discuss some of the important ones here.

a. AspCompat: When set to true, this allows the page to be executed on a single-
threaded apartment. If you want to use a component developed in VB 6.0, you can set
this value to true. However, setting this attribute to true can cause your page's
performance to degrade.

b. Language: This attribute tells the compiler about the language being used in the code-
behind. Values can represent any .NET-supported language, including Visual Basic, C#,
or JScript .NET.

c. AutoEventWireup: For every page there is an automatic way to bind the events to
methods in the same .aspx file or in code behind. The default value is true.

d. CodeFile: Specifies the code-behind file with which the page is associated.

e. Title: To set the page title other than what is specified in the master page.

f. Culture: Specifies the culture setting of the page. If set to auto, the page
automatically detects the culture required for the page.

g. UICulture: Specifies the UI culture setting to use for the page. Supports any valid UI
culture value.

h. ValidateRequest: Indicates whether request validation should occur. If set to true,


request validation checks all input data against a hard-coded list of potentially dangerous
values. If a match occurs, an HttpRequestValidationException Class is thrown. The
default is true. This feature is enabled in the machine configuration file (Machine.config).
You can disable it in your application configuration file (Web.config) or on the page by
setting this attribute to false.

i. Theme: To specify the theme for the page. This is a new feature available in Asp.Net
2.0.

j. SmartNavigation: Indicates the smart navigation feature of the page. When set to
true, this returns the postback to the current position of the page. The default value is false.

k. MasterPageFile: Specifies the location of the master page file to be used with the
current ASP.NET page.

l. EnableViewState: Indicates whether view state is maintained across page requests.
true if view state is maintained; otherwise, false. The default is true.

m. ErrorPage: Specifies a target URL for redirection if an unhandled page exception
occurs.

n. Inherits: Specifies a code-behind class for the page to inherit. This can be any class
derived from the Page class.

There are also other attributes which are seldom used, such as Buffer, CodePage,
ClassName, EnableSessionState, Debug, Description, EnableTheming,
EnableViewStateMac, TraceMode, WarningLevel, etc. Here is an example of how a
@Page directive looks:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Sample.aspx.cs"
Inherits="Sample" Title="Sample Page Title" %>
@Master Directive

The @Master directive is quite similar to the @Page directive. The @Master directive
belongs to master pages, that is, .master files. A master page is used in conjunction
with any number of content pages, so the content pages inherit the attributes of the
master page. Even though the @Page and @Master directives are similar, the
@Master directive has fewer attributes, as follows:
a. Language: This attribute tells the compiler about the language being used in the code-
behind. Values can represent any .NET-supported language, including Visual Basic, C#,
or JScript .NET.

b. AutoEventWireup: For every page there is an automatic way to bind the events to
methods in the same master file or in code behind. The default value is True.

c. CodeFile: Specifies the code-behind file with which the master page is associated.

d. Title: Sets the master page title.

e. MasterPageFile: Specifies the location of the master page file to be used with the
current master page. This is known as a nested master page.

f. EnableViewState: Indicates whether view state is maintained across page requests.
true if view state is maintained; otherwise, false. The default is true.

g. Inherits: Specifies a code-behind class for the master page to inherit. This can be
any class derived from the MasterPage class.

Here is an example of how a @Master directive looks:

<%@ Master Language="C#" AutoEventWireup="true"
    CodeFile="WebMaster.master.cs" Inherits="WebMaster" %>
@Control Directive

The @Control directive is used when we build ASP.NET user controls. The @Control
directive helps us define the properties to be inherited by the user control. These values
are assigned to the user control as the page is parsed and compiled. The attributes of the
@Control directive are:

a. Language: This attribute tells the compiler about the language being used in the code-
behind. Values can represent any .NET-supported language, including Visual Basic, C#,
or JScript .NET.

b. AutoEventWireup: For every page there is an automatic way to bind the events to
methods in the same .ascx file or in code behind. The default value is true.

c. CodeFile: Specifies the code-behind file with which the user control is associated.

d. EnableViewState: Indicates whether view state is maintained across page requests.
true if view state is maintained; otherwise, false. The default is true.

e. Inherits: Specifies a code-behind class for the user control to inherit. This can be
any class derived from the UserControl class.
f. Debug: Indicates whether the page should be compiled with debug symbols.

g. Src: Points to the source file of the class used for the code behind of the user control.

The other attributes, which are very rarely used, are ClassName, CompilerOptions,
CompileWith, Description, EnableTheming, Explicit, LinePragmas, Strict and
WarningLevel.

Here is an example of how a @Control directive looks:

<%@ Control Language="C#" AutoEventWireup="true"
    CodeFile="MyControl.ascx.cs" Inherits="MyControl" %>

@Register Directive

The @Register directive associates aliases with namespaces and class names for notation
in custom server control syntax. When you drag and drop a user control onto your .aspx
page, Visual Studio 2005 automatically creates an @Register directive at the top of
the page. This registers the user control on the page so that the control can be accessed on
the .aspx page by a specific name.

The main attributes of the @Register directive are:

a. Assembly: The assembly you are associating with the TagPrefix.

b. Namespace: The namespace to relate to the TagPrefix.

c. Src: The location of the user control.

d. TagName: The alias to relate to the class name.

e. TagPrefix: The alias to relate to the namespace.

Here is an example of how a @Register directive looks:

<%@ Register Src="Yourusercontrol.ascx" TagName="Yourusercontrol"
    TagPrefix="uc1" %>

@Reference Directive

The @Reference directive declares that another ASP.NET page or user control should be
compiled along with the current page or user control. The three attributes for the
@Reference directive are:

a. Control: User control that ASP.NET should dynamically compile and link to the
current page at run time.
b. Page: The Web Forms page that ASP.NET should dynamically compile and link to the
current page at run time.

c. VirtualPath: Specifies the location of the page or user control from which the active
page will be referenced.

Here is an example of how a @Reference directive looks:

<%@ Reference VirtualPath="YourReferencePage.ascx" %>

@PreviousPageType Directive

The @PreviousPageType directive is new in ASP.NET 2.0. The concept of cross-page
posting between ASP.NET pages is achieved with this directive, which is used to specify
the page from which the cross-page posting initiates. This simple directive contains only
two attributes:

a. TypeName: Sets the name of the type from which strongly typed references to the
previous page will be obtained.

b. VirtualPath: Sets the location of the posting page from which the postback will occur.

Here is an example of @PreviousPageType directive

<%@ PreviousPageType VirtualPath="~/YourPreviousPageName.aspx" %>
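With the directive above in the target page, the PreviousPage property is strongly typed to the source page's class, so its public members can be read directly on postback. In the sketch below, the SearchText property and the Label1 control are hypothetical names assumed for illustration; this code compiles only inside a page that carries the directive.

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Code-behind sketch for the target page of a cross-page post.
public partial class TargetPage : Page
{
    protected Label Label1; // assumed control declared on the page

    protected void Page_Load(object sender, EventArgs e)
    {
        // PreviousPage is typed by @PreviousPageType, so public
        // members of ~/YourPreviousPageName.aspx are visible here.
        if (PreviousPage != null && PreviousPage.IsCrossPagePostBack)
        {
            Label1.Text = PreviousPage.SearchText; // hypothetical property
        }
    }
}
```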

@OutputCache Directive

The @OutputCache directive controls the output caching policies of an ASP.NET page or
user control. You can also control caching programmatically through code by using
Visual Basic .NET or Visual C# .NET. The most important attributes of the @OutputCache
directive are as follows:

Duration: The duration of time in seconds that the page or user control is cached.

Location: Specifies where to store the output cache. To store the output cache on the
browser client where the request originated, set the value to Client. To store the output
cache on any HTTP 1.1 cache-capable device, including proxy servers and the client
that made the request, specify the location as Downstream. To store the output cache
on the Web server, set the location to Server.

VaryByParam: List of strings used to vary the output cache, separated by semicolons.

VaryByControl: List of strings used to vary the output cache of a user control,
separated by semicolons.

VaryByCustom: String of values specifying the custom output caching requirements.

VaryByHeader: List of HTTP headers used to vary the output cache, separated by
semicolons.

The other attributes, which are rarely used, are CacheProfile, DiskCacheable, NoStore,
SqlDependency, etc.

<%@ OutputCache Duration="60" Location="Server" VaryByParam="None" %>

To turn off the output cache for an ASP.NET Web page at the client location and at the
proxy location, set the Location attribute value to None, and set the VaryByParam
value to None in the @OutputCache directive. The following code sample turns off
client and proxy caching.

<%@ OutputCache Location="None" VaryByParam="None" %>

@Import Directive

The @Import directive allows you to specify namespaces to be imported into the
ASP.NET page or user control. By importing, all the classes and interfaces of the
namespace are made available to the page or user control. An example of the @Import
directive:

<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>

@Implements Directive

The @Implements directive makes the ASP.NET page implement a specified .NET
Framework interface. Its single attribute, Interface, specifies the .NET Framework
interface. When the ASP.NET page or user control implements an interface, it has
direct access to all of the interface's events, methods and properties.

<%@ Implements Interface="System.Web.UI.IValidator" %>

@Assembly Directive

The @Assembly directive is used to make your ASP.NET page aware of external
components. This directive supports two attributes:

a. Name: Enables you to specify the name of an assembly you want to attach to the page.
Here you should mention the file name without the extension.

b. Src: Represents the name of a source code file.

<%@ Assembly Name="YourAssemblyName" %>

@MasterType Directive

To access members of a specific master page from a content page, you can create a
strongly typed reference to the master page with the @MasterType directive. This
directive supports two attributes, TypeName and VirtualPath.

a. TypeName: Sets the name of the derived class from which to get strongly typed
references or members.

b. VirtualPath: Sets the location of the master page from which the strongly typed
references and members will be retrieved.

If you have public properties defined in a master page that you'd like to access in a
strongly-typed manner, you can add the @MasterType directive to a page as shown next:

<%@ MasterType VirtualPath="MasterPage.master" %>

this.Master.HeaderText = "Label updated using MasterType directive with
VirtualPath attribute";

Page directives configure the runtime environment that will execute the page. The
complete list of directives is as follows:

@ Assembly - Links an assembly to the current page or user control declaratively.

@ Control - Defines control-specific attributes used by the ASP.NET page parser and
compiler and can be included only in .ascx files (user controls).

@ Implements - Indicates that a page or user control implements a specified .NET
Framework interface declaratively.

@ Import - Imports a namespace into a page or user control explicitly.

@ Master - Identifies a page as a master page and defines attributes used by the
ASP.NET page parser and compiler and can be included only in .master files.

@ MasterType - Defines the class or virtual path used to type the Master property of a
page.

@ OutputCache - Controls the output caching policies of a page or user control
declaratively.

@ Page - Defines page-specific attributes used by the ASP.NET page parser and
compiler and can be included only in .aspx files.

@ PreviousPageType - Creates a strongly typed reference to the source page from the
target of a cross-page posting.

@ Reference - Links a page, user control, or COM control to the current page or user
control declaratively.

@ Register - Associates aliases with namespaces and classes, which allows user controls
and custom server controls to be used in a requesting page or user control.

PAGE EVENTS AND PAGE LIFE CYCLE

General Page Life-cycle Stages


Page request: The page request occurs before the page life cycle begins. When the page is
requested by a user, ASP.NET determines whether the page needs to be parsed and
compiled or whether a cached version of the page can be sent in response without
running the page.

Start: In the start step, page properties such as Request and Response are set. At this
stage, the page also determines whether the request is a postback or a new request and
sets the IsPostBack property. Additionally, during the start step, the page's UICulture
property is set.

Page initialization: During page initialization, controls on the page are available and each
control's UniqueID property is set. Any themes are also applied to the page. If the
current request is a postback, the postback data has not yet been loaded and control
property values have not been restored to the values from view state.

Load: During load, if the current request is a postback, control properties are loaded with
information recovered from view state and control state.

Validation: During validation, the Validate method of all validator controls is called,
which sets the IsValid property of individual validator controls and of the page.

Postback event handling: If the request is a postback, any event handlers are called.

Rendering: Before rendering, view state is saved for the page and all controls. During the
rendering phase, the page calls the Render method for each control, providing a text
writer that writes its output to the OutputStream of the page's Response property.

Unload: Unload is called after the page has been fully rendered, sent to the client, and is
ready to be discarded. At this point, page properties such as Response and Request are
unloaded and any cleanup is performed.

Data Binding Events for Data-Bound Controls


DataBinding: This event is raised by data-bound controls before the PreRender event of the
containing control (or of the Page object) and marks the beginning of binding the control
to the data.

RowCreated (GridView); ItemCreated (DataList, DetailsView, SiteMapPath, DataGrid,
FormView, Repeater): Use this event to manipulate content that is not dependent on data
binding. For example, at run time, you might programmatically add formatting to a header
or footer row in a GridView control.

RowDataBound (GridView); ItemDataBound (DataList, SiteMapPath, DataGrid, Repeater): When
this event occurs, data is available in the row or item, so you can format data or set the
FilterExpression property on child data source controls for displaying related data within
the row or item.

DataBound: This event marks the end of data-binding operations in a data-bound control. In
a GridView control, data binding is complete for all rows and any child controls. Use this
event to format data-bound content or to initiate data binding in other controls that
depend on values from the current control's content.
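As a sketch of the RowDataBound event in use, the handler below highlights rows once their data is available; the control name (GridView1) and field name ("UnitsInStock") are hypothetical placeholders:

```csharp
// Hypothetical GridView handler; names are illustrative assumptions.
protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
{
    // Bound data is available only for data rows, not the header or footer.
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        int units = (int)DataBinder.Eval(e.Row.DataItem, "UnitsInStock");
        if (units == 0)
        {
            // Format the row based on the data bound to it.
            e.Row.CssClass = "out-of-stock";
        }
    }
}
```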
More .NET Cheat Sheets available at http://john-sheehan.com/blog/
More info at http://msdn2.microsoft.com/en-us/library/7949d756-1a79-464e-891f-904b1cfc7991.aspx

Common Life-cycle Events


PreInit: Use this event for the following:
• Check the IsPostBack property to determine whether this is the first time the page is
being processed.
• Create or re-create dynamic controls.
• Set a master page dynamically.
• Set the Theme property dynamically.
• Read or set profile property values.
Note: If the request is a postback, the values of the controls have not yet been restored
from view state. If you set a control property at this stage, its value might be
overwritten in the next event.

Init: Raised after all controls have been initialized and any skin settings have been
applied. Use this event to read or initialize control properties.

InitComplete: Raised by the Page object. Use this event for processing tasks that require
all initialization to be complete.

PreLoad: Use this event if you need to perform processing on your page or control before
the Load event. After the Page raises this event, it loads view state for itself and all
controls, and then processes any postback data included with the Request instance.

Load: The Page calls the OnLoad event method on the Page, then recursively does the same
for each child control, which does the same for each of its child controls until the page
and all controls are loaded.

Control events: Use these events to handle specific control events, such as a Button
control's Click event or a TextBox control's TextChanged event. In a postback request, if
the page contains validator controls, check the IsValid property of the Page and of
individual validation controls before performing any processing.

LoadComplete: Use this event for tasks that require that all other controls on the page be
loaded.

PreRender: Before this event occurs:
• The Page object calls EnsureChildControls for each control and for the page.
• Each data-bound control whose DataSourceID property is set calls its DataBind method.
The PreRender event occurs for each control on the page. Use the event to make final
changes to the contents of the page or its controls.

SaveStateComplete: Before this event occurs, ViewState has been saved for the page and for
all controls. Any changes to the page or controls at this point will be ignored. Use this
event to perform tasks that require view state to be saved, but that do not make any
changes to controls.

Render: This is not an event; instead, at this stage of processing, the Page object calls
this method on each control. All ASP.NET Web server controls have a Render method that
writes out the control's markup that is sent to the browser. If you create a custom
control, you typically override this method to output the control's markup. However, if
your custom control incorporates only standard ASP.NET Web server controls and no custom
markup, you do not need to override the Render method. A user control (an .ascx file)
automatically incorporates rendering, so you do not need to explicitly render the control
in code.

Unload: This event occurs for each control and then for the page. In controls, use this
event to do final cleanup for specific controls, such as closing control-specific database
connections. For the page itself, use this event to do final cleanup work, such as closing
open files and database connections, or finishing up logging or other request-specific
tasks. Note: During the unload stage, the page and its controls have been rendered, so you
cannot make further changes to the response stream. If you attempt to call a method such
as Response.Write, the page will throw an exception.
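To make the ordering above concrete, a minimal code-behind sketch (the page class name is a placeholder) might handle several of these stages; with AutoEventWireup these handlers are bound by name:

```csharp
// Sketch of a page touching several life-cycle events, in firing order.
public partial class LifeCycleDemo : System.Web.UI.Page
{
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Safe place to set Theme or MasterPageFile dynamically.
        // On a postback, view state has not been restored yet.
    }

    protected void Page_Init(object sender, EventArgs e)
    {
        // Controls exist and skins are applied; initialize properties here.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // First request: populate controls; on postbacks their
            // values come back from view state instead.
        }
    }

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Last chance to change content before view state is saved.
    }
}
```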

CROSS-PAGE POSTBACK

Cross-Page Postback in ASP.NET 2.0


In ASP.NET 1.x, an ASPX page could post back only to itself; it could not be posted to a
different ASPX page. ASP.NET 2.0 removed this limitation with the cross-page postback
feature: we can now post an ASPX page to a different page with minimal effort.
It is enabled through a new property of the button control:

btnGeturAge.PostBackUrl = "Target.aspx";

Refer to the figure below, which shows the button's PostBackUrl property in Visual Studio.

How do we access the controls of the source page from the target page?

The ASP.NET 2.0 Page object now has a property called PreviousPage, which gives access to
the posted page.

Page P = Page.PreviousPage;

The Page object also has a method called FindControl(), which takes a control ID as a
parameter and returns a reference to that control.

TextBox text = (TextBox)P.FindControl("txtDOB");

or

TextBox text = (TextBox)PreviousPage.FindControl("txtDOB");

The code for the source and target pages will be:

Source page:

<asp:TextBox ID="txtDOB" runat="server"></asp:TextBox>

<asp:Button ID="btnGeturAge" runat="server"
    PostBackUrl="~/Target.aspx" Text="Calculate Age" />

Target page:

protected void Page_Load(object sender, EventArgs e)
{
    Page P = Page.PreviousPage;
    TextBox text = (TextBox)P.FindControl("txtDOB");
}

<%@ PreviousPageType %> Directive:

In the above example, to access a control declared in the source page we have to look it
up by ID in the target page and cast it. ASP.NET 2.0 provides the PreviousPageType
directive, which declares the type of the source page in the target page so that
PreviousPage is strongly typed. We also have to declare a public property in the source
page for every control we want to expose to the target page.

<%@ PreviousPageType VirtualPath="~/Default.aspx" %>

So the code will look like this:

Source page:

<asp:TextBox ID="txtDOB" runat="server"></asp:TextBox>

<asp:Button ID="btnGeturAge" runat="server"
    PostBackUrl="~/Age.aspx" Text="Calculate Age" />

public DateTime DOB
{
    get
    {
        return Convert.ToDateTime(txtDOB.Text);
    }
}

Target page:

<%@ PreviousPageType VirtualPath="~/SourcePage.aspx" %>

protected void Page_Load(object sender, EventArgs e)
{
    DateTime dt = PreviousPage.DOB;
}

When the source page is posted to the target page, the target page is executed for the
first time, so the target page's IsPostBack is false. The source page is loaded into
memory and all of its events except the Render event are executed. To confirm that the
page is executing as part of a cross-page postback, the PreviousPage object exposes a
property called IsCrossPagePostBack, which is true for a cross-page postback.

PreviousPage.IsCrossPagePostBack

So we can guard the above code like this:

if (PreviousPage.IsCrossPagePostBack)
{
    DateTime dt = PreviousPage.DOB;
}
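One caveat worth sketching: PreviousPage is null when the target page is requested directly rather than via a cross-page postback, so a defensive Page_Load should check for that first. The sketch below assumes the DOB property from the example above and a hypothetical Label named lblResult:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // PreviousPage is null on a direct request to the target page,
    // so test it before touching IsCrossPagePostBack or DOB.
    if (PreviousPage != null && PreviousPage.IsCrossPagePostBack)
    {
        DateTime dob = PreviousPage.DOB;
        int age = DateTime.Now.Year - dob.Year;  // rough age calculation
        lblResult.Text = "Age: " + age;          // lblResult is hypothetical
    }
}
```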

.NET COMPILATION MODEL

CHAPTER 3

ASP.NET Compilation Models

IN THIS CHAPTER:
• ASP.NET Compilation
• Automating aspnet_compiler.exe in Visual Web Developer Express Edition
ASP.NET Compilation
In the previous chapter, I covered the basics of ASP.NET code models. In this chapter,
we'll discuss the details of how ASP.NET applications are compiled. This information is
not vital to your success as an ASP.NET developer, but having an understanding of the
architecture of your development environment always makes you a better developer.

ASP.NET is nothing like the legacy ASP with which many developers are familiar. You
develop ASP pages by using VBScript or JScript, and they are interpreted, meaning that
they are executed just as they are written, directly from the page. ASP.NET is entirely
different in that ASP.NET pages are compiled before they are executed.

When you write ASP.NET code, you do so in human-readable text. Before ASP.NET can run
your code, it has to convert it into something that the computer can understand and
execute. The process of converting code from what a programmer types into what a computer
can actually execute is called compilation.

Exactly how compilation takes place in ASP.NET depends on the compilation model that you
use. Several different compilation models are available to you in ASP.NET 3.5.
The Web Application Compilation Model

The web application compilation model is the same model provided in ASP.NET 1.0 and 1.1.
When you use this model, you use the Build menu in Visual Web Developer to compile your
application into a single DLL file that is copied to a bin folder in the root of your
application. When the first request comes into your application, the DLL from the bin
folder is copied to the Temporary ASP.NET Files folder, where it is then recompiled into
code that the operating system can execute in a process known as just-in-time (JIT)
compilation. The JIT compilation causes a delay of several seconds on the first request
of the application.

NOTE: The Temporary ASP.NET Files folder is located at Windows\Microsoft.NET\
Framework\v2.0.50727\Temporary ASP.NET Files by default.

To create a new ASP.NET web application using the web application compilation model,
select File, New Project, and then choose the ASP.NET Web Application template as shown
in Figure 3.1.

NOTE: The web application model is available only in Visual Studio 2008. Visual Web
Developer 2008 does not enable you to create ASP.NET applications using the web
application model.
The Website Compilation Model

The website compilation model is the model that developers using Visual Web Developer
Express Edition will use because it's the only model available. In this model, all files
in the application are copied to the remote web server and are then compiled on the fly
by ASP.NET at browse time.

FIGURE 3.1: Choose the New Project option on the File menu to create a new ASP.NET
application that uses the web application compilation model.

NOTE: You can use the website compilation model whether you are using inline ASP.NET
code or code-behind code.

➔ For more information on inline code and code-behind code models, see "Server-Side
Code Models," p. 24.
When this compilation model is used, ASP.NET compiles the application into one or more
DLLs in the Temporary ASP.NET Files folder when the first page is requested. The DLLs are
in a subfolder with a name derived from a special naming convention that allows for
dynamic and random directory names. Therefore, a website called MyWebSite might execute
from a folder on the server that looks similar to this:

C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET
Files\MyWebSite\650b10f9\e47ff097

NOTE: The website compilation model was the only compilation model available when Visual
Studio 2005 (the previous version of Visual Studio) was released. Microsoft added the web
application model later as an add-on to Visual Studio 2005, and then incorporated it into
Visual Studio 2008.
The website compilation model is convenient because developers can open a code file or an
ASPX page and make modifications to it on the live server. When those changes are saved,
they go into effect immediately. However, using this method requires you to copy all the
source code for your application to the live server, and this may be a concern to some
developers.

NOTE: The compilation of the App_Web DLLs takes place without any explicit action on your
part. It's all automatic.

CAUTION: ASP.NET explicitly forbids the download of code files from a website, so no one
will be able to download your source code and access it. However, anyone with direct
access to the web server can access your source code when using the website compilation
model.
The Precompilation Model

The precompilation model allows you to compile your ASP.NET application into one or more
DLLs that can then be copied to the web server in place of any code.

Select Build, Publish Web Site to precompile your website using the Publish Web Site
dialog shown in Figure 3.2.

FIGURE 3.2: The Publish Web Site dialog makes precompiling a website simple.

Inside that folder will be the actual DLLs that contain the compiled code. The naming
convention of the DLLs is App_Web_<random_name>.dll.

NOTE: The Publish Web Site menu option is available only in the full Visual Studio
version. It is not available in Visual Web Developer Express Edition, but you can add the
capability using the steps provided in the "Automating aspnet_compiler.exe in Visual Web
Developer Express Edition" section later in this chapter.
If you'd like the option of updating any of your ASPX pages on the live server (for
example, making a change to the HTML code), you should check the Allow This Precompiled
Site to Be Updatable check box. If your site is precompiled with this check box checked,
you'll then be able to make modifications to the ASPX pages on the live server if
necessary. If it is unchecked, you'll still need to copy the ASPX files to the server,
but the precompilation process will remove all the code from them, so you won't be able
to change any of them on the live server. We'll talk about that in greater detail a
little later in this chapter.

If you are using Visual Web Developer Express Edition, you won't have the option to
precompile your website within the user interface, but it can still be accomplished if
you use the aspnet_compiler.exe utility that ships with the .NET Framework.

NOTE: The aspnet_compiler.exe utility is located in the Windows\Microsoft.NET\
Framework\v2.0.50727 directory.

The aspnet_compiler.exe utility runs from a command line. If you have the .NET Framework
SDK v2.0 installed, you can select Start, All Programs, Microsoft .NET Framework SDK v2.0
and then click SDK Command Prompt to open a command prompt. This command prompt
automatically sets the necessary environment variables to enable you to run
aspnet_compiler.exe without changing into the v2.0.50727 directory.

NOTE: You can download the .NET Framework SDK v2.0 from www.microsoft.com/
downloads/details.aspx?familyid=fe6f2099-b7b4-4f47-a244-c96d69c35dec&displaylang=en. If
you'd prefer not to type in that long URL, search on .NET Framework SDK 2.0 and you'll
find it.

If you don't have the .NET Framework SDK v2.0 installed, you can still run
aspnet_compiler.exe from a regular command line, but you need to change into the
v2.0.50727 directory first. You can do that by running the following command from a
command prompt:

cd \windows\microsoft.net\framework\v2.0.50727

➔ For information on how to configure a menu item in Visual Web Developer Express
Edition that will precompile your web application, see "Automating aspnet_compiler.exe
in Visual Web Developer Express Edition," later in this chapter.

Numerous command-line parameters can be used with aspnet_compiler.exe. Table 3.1 lists a
few that you are likely to use often.
Table 3.1 Frequently Used Parameters for aspnet_compiler.exe

-? Prints a description of all parameters.
-v Specifies that the path that follows is a virtual path.
-p The physical path of the application to compile.
-u Specifies that the compiled application can be updated.
-c Causes the compiled application to be fully rebuilt, overwriting any existing files.
-d Creates debug output, making the application easier to debug if a problem arises after
it's copied.
Even though there are a lot of parameters for aspnet_compiler.exe, the command to
precompile your ASP.NET application is less complex than you might think. For example, if
you have an ASP.NET application located at c:\myApp and you want to precompile it and
save the result to c:\compiledApp, you would run the following command:

aspnet_compiler.exe -p "c:\myApp" -v / "c:\compiledApp"

The -p parameter is used to point to the location of the application (c:\myApp in this
case), and the -v parameter points to the virtual location of the application. In a
file-based ASP.NET application, the virtual location is always /. That is followed by the
path where the compiled files should be written.
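Building on the parameters in Table 3.1, a command that precompiles the same hypothetical c:\myApp site as an updatable application with a forced full rebuild might look like the following (paths are placeholders):

```bat
rem Precompile c:\myApp to c:\compiledApp, allowing later edits to the
rem .aspx markup on the server (-u) and forcing a full rebuild (-c).
aspnet_compiler.exe -p "c:\myApp" -v / -u -c "c:\compiledApp"
```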
If you look in the c:\compiledApp directory after this command runs, you can see what
looks like the ASPX files for your application, but in fact, these are simply marker
files. If you open one of them, you'll see a single line of text in it that says This is
a marker file generated by the precompilation tool, and should not be deleted! ASP.NET
creates this file so that, when the application is browsed on the live server, users
won't get an error saying that the file wasn't found.

You'll also see a file called preCompiledApp.config. This file contains the version
number of the precompilation tool (for ASP.NET 3.5, it is version 2) and specifies
whether the site is capable of being updated. If your website has any other configuration
files (such as a web.config file), it is also in the directory containing the precompiled
application, along with any other supporting files and folders such as images and so on.

➔ For more information on web.config files, see Chapter 6, "ASP.NET Configuration and
Performance."

All the code for your application is compiled into a bin directory located at the root of
the precompiled site. If you open that directory, you'll see one or more DLLs with names
such as App_Web_ekxytkat.dll. All these DLLs start with App_Web_ and then contain a
random group of characters. These DLLs are ASP.NET assemblies, and when your website runs
on the live server, it runs from these DLLs.

To publish a precompiled web application to a live web server, simply copy all the files
and folders in the directory containing the precompiled website to the live server. When
you do, ASP.NET automatically begins using the new files.

Automating aspnet_compiler.exe in Visual Web Developer Express Edition
As I pointed out previously, there is a menu option in Visual Studio 2008 that automates
the use of aspnet_compiler.exe so that you can easily precompile your applications. That
menu option does not exist in Visual Web Developer Express Edition, but you can easily
add it to the menu by configuring aspnet_compiler.exe as an external tool in Visual Web
Developer Express Edition.

To configure menu options for precompiling your web application in Visual Web Developer
Express Edition, follow these steps:

1. Launch Visual Web Developer Express Edition.
2. Select Tools, External Tools to access the External Tools dialog.
3. Type Pre-&Compile (non-updatable) in the Title text box.

TIP: The ampersand in the title means that the character immediately after it will be
defined as a hotkey for the menu item.

4. Click the browse button next to the Command text box and browse to
aspnet_compiler.exe in the Windows\Microsoft.NET\Framework\v2.0.50727 directory.
5. Click Open to select the aspnet_compiler.exe application.
6. Type -p " in the Arguments text box.
7. Click the right-facing arrow button on the right edge of the Arguments text box and
select Project Directory, as shown in Figure 3.3.
FIGURE 3.3: You configure the arguments for aspnet_compiler.exe by using the menu to the
right of the Arguments text box.
8. Enter a closing quote at the end of the arguments you've entered so far. At this
point, the Arguments text box should contain the following text:

-p "$(ProjectDir)"

9. Press the spacebar to add a space at the end of the arguments you've entered so far,
and then type -v / ".
10. Click the right-facing arrow button at the right edge of the Arguments text box again
and select Project Directory.
11. Type \..\Compiled" after the existing arguments. At this point, the Arguments text
box should contain the following text:

-p "$(ProjectDir)" -v / "$(ProjectDir)\..\Compiled"

12. Check the Close on Exit check box.
13. Click OK to add the new external tool to the list.

You now have a new menu option in Visual Web Developer Express Edition (see Figure 3.4)
that enables you to precompile your web application as a non-updatable application.
FIGURE 3.4: Adding a menu option in Visual Web Developer Express Edition to automate the
aspnet_compiler.exe utility is simple and makes it much more convenient to precompile
your applications.
Now let's add a second external tool that precompiles a website and makes it updatable.

1. Select Tools, External Tools to access the External Tools dialog.
2. Click Add to add a new external tool.
3. Enter Pre-Co&mpile (updatable) in the Title text box.
4. Configure all other settings as you did before, but this time, add the -u argument to
the Arguments text box. The Arguments text box should contain the following text:

-p "$(ProjectDir)" -v / "$(ProjectDir)\..\Compiled" -u

When you select one of the new menu items, your ASP.NET application is precompiled into a
directory called Compiled. That directory is one level above your ASP.NET application.
Therefore, if your ASP.NET application is located at c:\mysites\myWebApp, the precompiled
website is saved to c:\mysites\Compiled.

ASP.NET SERVER CONTROLS


Server controls are tags that are understood by the server.

Limitations in Classic ASP

The listing below was copied from the previous chapter:

<html>
<body bgcolor="yellow">
<center>
<h2>Hello W3Schools!</h2>
<p><%Response.Write(now())%></p>
</center>
</body>
</html>

The code above illustrates a limitation in Classic ASP: The code block has to be placed
where you want the output to appear.

With Classic ASP it is impossible to separate executable code from the HTML itself.
This makes the page difficult to read, and difficult to maintain.

ASP.NET - Server Controls

ASP.NET has solved the "spaghetti-code" problem described above with server controls.

Server controls are tags that are understood by the server.

There are three kinds of server controls:

• HTML Server Controls - Traditional HTML tags


• Web Server Controls - New ASP.NET tags
• Validation Server Controls - For input validation

ASP.NET - HTML Server Controls

HTML server controls are HTML tags understood by the server.

HTML elements in ASP.NET files are, by default, treated as text. To make these
elements programmable, add a runat="server" attribute to the HTML element. This
attribute indicates that the element should be treated as a server control. The id attribute is
added to identify the server control. The id reference can be used to manipulate the server
control at run time.

Note: All HTML server controls must be within a <form> tag with the runat="server"
attribute. The runat="server" attribute indicates that the form should be processed on the
server. It also indicates that the enclosed controls can be accessed by server scripts.

In the following example we declare an HtmlAnchor server control in an .aspx file. Then
we manipulate the HRef attribute of the HtmlAnchor control in an event handler (an
event handler is a subroutine that executes code for a given event). The Page_Load event
is one of many events that ASP.NET understands:

<script runat="server">
Sub Page_Load
link1.HRef="http://www.w3schools.com"
End Sub
</script>

<html>
<body>

<form runat="server">
<a id="link1" runat="server">Visit W3Schools!</a>
</form>

</body>
</html>

The executable code itself has been moved outside the HTML.

ASP.NET - Web Server Controls

Web server controls are special ASP.NET tags understood by the server.

Like HTML server controls, Web server controls are also created on the server and they
require a runat="server" attribute to work. However, Web server controls do not
necessarily map to any existing HTML elements and they may represent more complex
elements.

The syntax for creating a Web server control is:

<asp:control_name id="some_id" runat="server" />

In the following example we declare a Button server control in an .aspx file. Then we
create an event handler for the Click event which changes the text on the button:
<script runat="server">
Sub submit(Source As Object, e As EventArgs)
button1.Text="You clicked me!"
End Sub
</script>

<html>
<body>

<form runat="server">
<asp:Button id="button1" Text="Click me!"
runat="server" OnClick="submit"/>
</form>

</body>
</html>

ASP.NET - Validation Server Controls

Validation server controls are used to validate user input. If the user input does not
pass validation, an error message is displayed to the user.

Each validation control performs a specific type of validation (like validating against a
specific value or a range of values).

By default, page validation is performed when a Button, ImageButton, or LinkButton


control is clicked. You can prevent validation when a button control is clicked by setting
the CausesValidation property to false.

The syntax for creating a Validation server control is:

<asp:control_name id="some_id" runat="server" />

In the following example we declare one TextBox control, one Button control, and one
RangeValidator control in an .aspx file. If validation fails, the text "The value must be
from 1 to 100!" will be displayed in the RangeValidator control:
Example
<html>
<body>

<form runat="server">
<p>Enter a number from 1 to 100:
<asp:TextBox id="tbox1" runat="server" />
<br /><br />
<asp:Button Text="Submit" runat="server" />
</p>

<p>
<asp:RangeValidator
ControlToValidate="tbox1"
MinimumValue="1"
MaximumValue="100"
Type="Integer"
Text="The value must be from 1 to 100!"
runat="server" />
</p>
</form>

</body>
</html>

BUILDING DATABASES

#1 | Creating a Data Access Layer

#2 | Creating a Business Logic Layer

#3 | Master Pages and Site Navigation

Introduction

As web developers, our lives revolve around working with data. We create databases to
store the data, code to retrieve and modify it, and web pages to collect and summarize it.
This is the first tutorial in a lengthy series that will explore techniques for implementing
these common patterns in ASP.NET 2.0. We'll start with creating a software architecture
composed of a Data Access Layer (DAL) using Typed DataSets, a Business Logic Layer
(BLL) that enforces custom business rules, and a presentation layer composed of
ASP.NET pages that share a common page layout. Once this backend groundwork has
been laid, we'll move into reporting, showing how to display, summarize, collect, and
validate data from a web application. These tutorials are geared to be concise and provide
step-by-step instructions with plenty of screen shots to walk you through the process
visually. Each tutorial is available in C# and Visual Basic versions and includes a
download of the complete code used. (This first tutorial is quite lengthy, but the rest are
presented in much more digestible chunks.)

For these tutorials we'll be using a Microsoft SQL Server 2005 Express Edition version
of the Northwind database placed in the App_Data directory. In addition to the database
file, the App_Data folder also contains the SQL scripts for creating the database, in case
you want to use a different database version. These scripts can also be downloaded
directly from Microsoft, if you'd prefer. If you use a different SQL Server version of the
Northwind database, you will need to update the NORTHWNDConnectionString setting in
the application's Web.config file. The web application was built using Visual Studio
2005 Professional Edition as a file system-based Web site project. However, all of the
tutorials will work equally well with the free version of Visual Studio 2005, Visual Web
Developer.
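For reference, the NORTHWNDConnectionString setting mentioned above lives in the <connectionStrings> section of Web.config. A typical entry for the App_Data copy of the database might look like the following; the exact attribute values depend on your environment and are shown here as an illustrative assumption:

```xml
<connectionStrings>
  <!-- Attaches NORTHWND.MDF from App_Data via SQL Server 2005 Express. -->
  <add name="NORTHWNDConnectionString"
       connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\NORTHWND.MDF;Integrated Security=True;User Instance=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```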

In this tutorial we'll start from the very beginning and create the Data Access Layer
(DAL), followed by creating the Business Logic Layer (BLL) in the second tutorial, and
working on page layout and navigation in the third. The tutorials after the third one will
build upon the foundation laid in the first three. We've got a lot to cover in this first
tutorial, so fire up Visual Studio and let's get started!

Step 1: Creating a Web Project and Connecting to the Database

Before we can create our Data Access Layer (DAL), we first need to create a web site
and set up our database. Start by creating a new file system-based ASP.NET web site. To
accomplish this, go to the File menu and choose New Web Site, displaying the New Web
Site dialog box. Choose the ASP.NET Web Site template, set the Location drop-down list
to File System, choose a folder to place the web site, and set the language to Visual
Basic.

Figure 1: Create a New File System-Based Web Site (Click to view full-size image)

This will create a new web site with a Default.aspx ASP.NET page, an App_Data
folder, and a Web.config file.

With the web site created, the next step is to add a reference to the database in Visual
Studio's Server Explorer. By adding a database to the Server Explorer you can add tables,
stored procedures, views, and so on all from within Visual Studio. You can also view
table data or create your own queries either by hand or graphically via the Query Builder.
Furthermore, when we build the Typed DataSets for the DAL we'll need to point Visual
Studio to the database from which the Typed DataSets should be constructed. While we
can provide this connection information at that point in time, Visual Studio automatically
populates a drop-down list of the databases already registered in the Server Explorer.

The steps for adding the Northwind database to the Server Explorer depend on whether
you want to use the SQL Server 2005 Express Edition database in the App_Data folder or
if you have a Microsoft SQL Server 2000 or 2005 database server setup that you want to
use instead.

Using a Database in the App_Data Folder

If you do not have a SQL Server 2000 or 2005 database server to connect to, or you
simply want to avoid having to add the database to a database server, you can use the
SQL Server 2005 Express Edition version of the Northwind database that is located in the
downloaded website's App_Data folder (NORTHWND.MDF).
A database placed in the App_Data folder is automatically added to the Server Explorer.
Assuming you have SQL Server 2005 Express Edition installed on your machine you
should see a node named NORTHWND.MDF in the Server Explorer, which you can
expand to explore its tables, views, stored procedures, and so on (see Figure 2).

The App_Data folder can also hold Microsoft Access .mdb files, which, like their SQL
Server counterparts, are automatically added to the Server Explorer. If you don't want to
use any of the SQL Server options, you can always download a Microsoft Access version
of the Northwind database file and drop it into the App_Data directory. Keep in mind,
however, that Access databases aren't as feature-rich as SQL Server, and aren't designed
to be used in web site scenarios. Furthermore, a couple of the 35+ tutorials will utilize
certain database-level features that aren't supported by Access.

Connecting to the Database in a Microsoft SQL Server 2000 or 2005 Database Server

Alternatively, you may connect to a Northwind database installed on a database server. If
the database server does not already have the Northwind database installed, you first must
add it to the database server by running the installation script included in this tutorial's
download or by downloading the SQL Server 2000 version of Northwind and its
installation script directly from Microsoft's web site.

Once you have the database installed, go to the Server Explorer in Visual Studio, right-
click on the Data Connections node, and choose Add Connection. If you don't see the
Server Explorer, go to View / Server Explorer or hit Ctrl+Alt+S. This will bring up
the Add Connection dialog box, where you can specify the server to connect to, the
authentication information, and the database name. Once you have successfully
configured the database connection information and clicked the OK button, the database
will be added as a node underneath the Data Connections node. You can expand the
database node to explore its tables, views, stored procedures, and so on.
Figure 2: Add a Connection to Your Database Server's Northwind Database

Step 2: Creating the Data Access Layer

When working with data one option is to embed the data-specific logic directly into the
presentation layer (in a web application, the ASP.NET pages make up the presentation
layer). This may take the form of writing ADO.NET code in the ASP.NET page's code
portion or using the SqlDataSource control from the markup portion. In either case, this
approach tightly couples the data access logic with the presentation layer. The
recommended approach, however, is to separate the data access logic from the
presentation layer. This separate layer is referred to as the Data Access Layer, DAL for
short, and is typically implemented as a separate Class Library project. The benefits of
this layered architecture are well documented (see the "Further Readings" section at the
end of this tutorial for information on these advantages), and it is the approach we will
take in this series.

All code that is specific to the underlying data source – such as creating a connection to
the database, issuing SELECT, INSERT, UPDATE, and DELETE commands, and so on –
should be located in the DAL. The presentation layer should not contain any references to
such data access code, but should instead make calls into the DAL for any and all data
requests. Data Access Layers typically contain methods for accessing the underlying
database data. The Northwind database, for example, has Products and Categories
tables that record the products for sale and the categories to which they belong. In our
DAL we will have methods like:

• GetCategories(), which will return information about all of the categories
• GetProducts(), which will return information about all of the products
• GetProductsByCategoryID(categoryID), which will return all products that
belong to a specified category
• GetProductByProductID(productID), which will return information about a
particular product
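The tutorials implement these methods with Typed DataSets in Visual Basic. As a language-neutral sketch of the same layering idea, the following Python example keeps all SQL inside a DAL class whose methods mirror the ones listed above. It uses the standard sqlite3 module and a tiny in-memory stand-in for Northwind; the table and column names are only an approximation of the real schema.

```python
import sqlite3

class NorthwindDAL:
    """Minimal Data Access Layer sketch: all SQL lives here, and the
    presentation layer only ever calls these methods."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.row_factory = sqlite3.Row  # rows accessible by column name

    def get_categories(self):
        return self.conn.execute(
            "SELECT CategoryID, CategoryName FROM Categories").fetchall()

    def get_products(self):
        return self.conn.execute(
            "SELECT ProductID, ProductName, CategoryID FROM Products").fetchall()

    def get_products_by_category_id(self, category_id):
        return self.conn.execute(
            "SELECT ProductID, ProductName, CategoryID FROM Products "
            "WHERE CategoryID = ?", (category_id,)).fetchall()

    def get_product_by_product_id(self, product_id):
        return self.conn.execute(
            "SELECT ProductID, ProductName, CategoryID FROM Products "
            "WHERE ProductID = ?", (product_id,)).fetchone()

# Demo: an in-memory database standing in for Northwind
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Categories (CategoryID INTEGER PRIMARY KEY, CategoryName TEXT);
    CREATE TABLE Products  (ProductID INTEGER PRIMARY KEY, ProductName TEXT,
                            CategoryID INTEGER);
    INSERT INTO Categories VALUES (1, 'Beverages'), (2, 'Condiments');
    INSERT INTO Products  VALUES (1, 'Chai', 1), (2, 'Chang', 1),
                                 (3, 'Aniseed Syrup', 2);
""")
dal = NorthwindDAL(conn)
print([row["ProductName"] for row in dal.get_products_by_category_id(1)])
```

The presentation layer never sees a SQL string; if the schema changes, only the DAL class needs to be edited.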

These methods, when invoked, will connect to the database, issue the appropriate query,
and return the results. How we return these results is important. These methods could
simply return a DataSet or DataReader populated by the database query, but ideally these
results should be returned using strongly-typed objects. A strongly-typed object is one
whose schema is rigidly defined at compile time, whereas the opposite, a loosely-typed
object, is one whose schema is not known until runtime.

For example, the DataReader and the DataSet (by default) are loosely-typed objects since
their schema is defined by the columns returned by the database query used to populate
them. To access a particular column from a loosely-typed DataTable we need to use
syntax like: DataTable.Rows(index)("columnName"). The DataTable's loose typing in
this example is exhibited by the fact that we need to access the column name using a
string or ordinal index. A strongly-typed DataTable, on the other hand, will have each of
its columns implemented as properties, resulting in code that looks like:
DataTable.Rows(index).columnName.
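The snippets above use Visual Basic syntax, but the contrast is not specific to ADO.NET. As a sketch in Python (the Product names here are purely illustrative), a plain dictionary is loosely typed because its schema lives in string keys checked only at runtime, while a class with declared fields is strongly typed because the schema is stated up front:

```python
from dataclasses import dataclass

# Loosely typed: the schema is implicit in string keys, so a typo
# like row["ProductNane"] only fails when that line actually runs.
row = {"ProductID": 1, "ProductName": "Chai"}
name = row["ProductName"]

# Strongly typed: the schema is declared as named fields, so tools
# (and readers) can check attribute access before runtime.
@dataclass
class ProductRow:
    ProductID: int
    ProductName: str

typed_row = ProductRow(ProductID=1, ProductName="Chai")
name2 = typed_row.ProductName
```

The Typed DataSet generator plays the role of writing the `ProductRow`-style class for you, one property per database column.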

To return strongly-typed objects, developers can either create their own custom business
objects or use Typed DataSets. A business object is implemented by the developer as a
class whose properties typically reflect the columns of the underlying database table the
business object represents. A Typed DataSet is a class generated for you by Visual Studio
based on a database schema and whose members are strongly-typed according to this
schema. The Typed DataSet itself consists of classes that extend the ADO.NET DataSet,
DataTable, and DataRow classes. In addition to strongly-typed DataTables, Typed
DataSets now also include TableAdapters, which are classes with methods for populating
the DataSet's DataTables and propagating modifications within the DataTables back to
the database.

Note: For more information on the advantages and disadvantages of using Typed
DataSets versus custom business objects, refer to Designing Data Tier Components and
Passing Data Through Tiers.

We'll use strongly-typed DataSets for these tutorials' architecture. Figure 3 illustrates the
workflow between the different layers of an application that uses Typed DataSets.
Figure 3: All Data Access Code is Relegated to the DAL (Click to view full-size image)

Creating a Typed DataSet and Table Adapter

To begin creating our DAL, we start by adding a Typed DataSet to our project. To
accomplish this, right-click on the project node in the Solution Explorer and choose Add
New Item. Select the DataSet option from the list of templates and name it
Northwind.xsd.

Figure 4: Choose to Add a New DataSet to Your Project (Click to view full-size image)

After clicking Add, when prompted to add the DataSet to the App_Code folder, choose
Yes. The Designer for the Typed DataSet will then be displayed, and the TableAdapter
Configuration Wizard will start, allowing you to add your first TableAdapter to the
Typed DataSet.

A Typed DataSet serves as a strongly-typed collection of data; it is composed of
strongly-typed DataTable instances, each of which is in turn composed of strongly-typed DataRow
instances. We will create a strongly-typed DataTable for each of the underlying database
tables that we need to work with in this tutorials series. Let's start with creating a
DataTable for the Products table.

Keep in mind that strongly-typed DataTables do not include any information on how to
access data from their underlying database table. In order to retrieve the data to populate
the DataTable, we use a TableAdapter class, which functions as our Data Access Layer.
For our Products DataTable, the TableAdapter will contain the methods –
GetProducts(), GetProductsByCategoryID(categoryID), and so on – that we'll invoke
from the presentation layer. The DataTable's role is to serve as the strongly-typed objects
used to pass data between the layers.

The TableAdapter Configuration Wizard begins by prompting you to select which
database to work with. The drop-down list shows those databases in the Server Explorer.
If you did not add the Northwind database to the Server Explorer, you can click the New
Connection button at this time to do so.

Figure 5: Choose the Northwind Database from the Drop-Down List (Click to view full-
size image)

After selecting the database and clicking Next, you'll be asked if you want to save the
connection string in the Web.config file. By saving the connection string you'll avoid
having it hard coded in the TableAdapter classes, which simplifies things if the
connection string information changes in the future. If you opt to save the connection
string in the configuration file, it's placed in the <connectionStrings> section, which
can optionally be encrypted for improved security or modified later through the new
ASP.NET 2.0 Property Page within the IIS GUI Admin Tool, which is convenient for
administrators.
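For reference, the saved connection string ends up in a Web.config section shaped like the following. The entry name and the connection-string values below are illustrative placeholders; the wizard generates its own based on your choices (this example assumes the App_Data NORTHWND.MDF scenario with SQL Server 2005 Express Edition):

```xml
<configuration>
  <connectionStrings>
    <!-- Name and values are illustrative; the wizard writes its own entry -->
    <add name="NORTHWNDConnectionString"
         connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\NORTHWND.MDF;Integrated Security=True;User Instance=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

Because the TableAdapters read the string by name from this section, changing the database server later means editing one line of configuration rather than recompiling code.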

Figure 6: Save the Connection String to Web.config (Click to view full-size image)

Next, we need to define the schema for the first strongly-typed DataTable and provide the
first method for our TableAdapter to use when populating the strongly-typed DataSet.
These two steps are accomplished simultaneously by creating a query that returns the
columns from the table that we want reflected in our DataTable. At the end of the wizard
we'll give a method name to this query. Once that's been accomplished, this method can
be invoked from our presentation layer. The method will execute the defined query and
populate a strongly-typed DataTable.

To get started defining the SQL query we must first indicate how we want the
TableAdapter to issue the query. We can use an ad-hoc SQL statement, create a new
stored procedure, or use an existing stored procedure. For these tutorials we'll use ad-hoc
SQL statements. Refer to Brian Noyes's article, Build a Data Access Layer with the
Visual Studio 2005 DataSet Designer for an example of using stored procedures.
Figure 7: Query the Data Using an Ad-Hoc SQL Statement (Click to view full-size
image)

At this point we can type in the SQL query by hand. When creating the first method in
the TableAdapter you typically want to have the query return those columns that need to
be expressed in the corresponding DataTable. We can accomplish this by creating a query
that returns all columns and all rows from the Products table:
Figure 8: Enter the SQL Query Into the Textbox (Click to view full-size image)

Alternatively, use the Query Builder and graphically construct the query, as shown in
Figure 9.

Figure 9: Create the Query Graphically, through the Query Editor (Click to view full-size
image)

After creating the query, but before moving onto the next screen, click the Advanced
Options button. In Web Site Projects, "Generate Insert, Update, and Delete statements" is
the only advanced option selected by default; if you run this wizard from a Class Library
or a Windows Project the "Use optimistic concurrency" option will also be selected.
Leave the "Use optimistic concurrency" option unchecked for now. We'll examine
optimistic concurrency in future tutorials.
Figure 10: Select Only the "Generate Insert, Update, and Delete statements"
Option (Click to view full-size image)

After verifying the advanced options, click Next to proceed to the final screen. Here we
are asked to select which methods to add to the TableAdapter. There are two patterns for
populating data:

• Fill a DataTable – with this approach a method is created that takes in a
DataTable as a parameter and populates it based on the results of the query. The
ADO.NET DataAdapter class, for example, implements this pattern with its
Fill() method.
• Return a DataTable – with this approach the method creates and fills the
DataTable for you and returns it as the method's return value.

You can have the TableAdapter implement one or both of these patterns. You can also
rename the methods provided here. Let's leave both checkboxes checked, even though
we'll only be using the latter pattern throughout these tutorials. Also, let's rename the
rather generic GetData method to GetProducts.

If checked, the final checkbox, "GenerateDBDirectMethods," creates Insert(),
Update(), and Delete() methods for the TableAdapter. If you leave this option
unchecked, all updates will need to be done through the TableAdapter's sole Update()
method, which takes in the Typed DataSet, a DataTable, a single DataRow, or an array of
DataRows. (If you've unchecked the "Generate Insert, Update, and Delete statements"
option from the advanced properties in Figure 10, this checkbox's setting will have no
effect.) Let's leave this checkbox selected.

MODULE 8 XML
XML Syntax Rules

The syntax rules of XML are very simple and logical. The rules are easy to learn, and
easy to use.

All XML Elements Must Have a Closing Tag

In HTML, you will often see elements that don't have a closing tag:

<p>This is a paragraph
<p>This is another paragraph

In XML, it is illegal to omit the closing tag. All elements must have a closing tag:

<p>This is a paragraph</p>
<p>This is another paragraph</p>

Note: You might have noticed from the previous example that the XML declaration did
not have a closing tag. This is not an error. The declaration is not a part of the XML
document itself, and it has no closing tag.

XML Tags are Case Sensitive

XML elements are defined using XML tags.

XML tags are case sensitive. With XML, the tag <Letter> is different from the tag
<letter>.

Opening and closing tags must be written with the same case:

<Message>This is incorrect</message>
<message>This is correct</message>

Note: "Opening and closing tags" are often referred to as "Start and end tags". Use
whatever you prefer. It is exactly the same thing.

XML Elements Must be Properly Nested

In HTML, you might see improperly nested elements:


<b><i>This text is bold and italic</b></i>

In XML, all elements must be properly nested within each other:

<b><i>This text is bold and italic</i></b>

In the example above, "Properly nested" simply means that since the <i> element is
opened inside the <b> element, it must be closed inside the <b> element.
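An XML parser enforces this rule mechanically. For example, Python's standard xml.etree module (just one parser among many) rejects the improperly nested version and accepts the corrected one:

```python
import xml.etree.ElementTree as ET

bad = "<b><i>This text is bold and italic</b></i>"
good = "<b><i>This text is bold and italic</i></b>"

try:
    ET.fromstring(bad)       # improperly nested: the parser raises an error
    nested_ok = True
except ET.ParseError:
    nested_ok = False

root = ET.fromstring(good)   # properly nested: parses fine
print(nested_ok, root.tag)
```

Any conforming XML parser is required to report the mismatched tags as a fatal error rather than guess at the author's intent, which is exactly what browsers historically did for HTML.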

XML Documents Must Have a Root Element

XML documents must contain one element that is the parent of all other elements. This
element is called the root element.

<root>
<child>
<subchild>.....</subchild>
</child>
</root>

XML Attribute Values Must be Quoted

XML elements can have attributes in name/value pairs just like in HTML.

In XML the attribute value must always be quoted. Study the two XML documents
below. The first one is incorrect, the second is correct:

<note date=12/11/2007>
<to>Tove</to>
<from>Jani</from>
</note>

<note date="12/11/2007">
<to>Tove</to>
<from>Jani</from>
</note>

The error in the first document is that the date attribute in the note element is not quoted.

Entity References

Some characters have a special meaning in XML.


If you place a character like "<" inside an XML element, it will generate an error because
the parser interprets it as the start of a new element.

This will generate an XML error:

<message>if salary < 1000 then</message>

To avoid this error, replace the "<" character with an entity reference:

<message>if salary &lt; 1000 then</message>

There are 5 predefined entity references in XML:

&lt;   <   less than
&gt;   >   greater than
&amp;  &   ampersand
&apos; '   apostrophe
&quot; "   quotation mark

Note: Only the characters "<" and "&" are strictly illegal in XML. The greater than
character is legal, but it is a good habit to replace it.
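Escaping need not be done by hand. For instance, Python's standard xml.sax.saxutils helper replaces the problematic characters with their entity references, and can reverse the substitution:

```python
from xml.sax.saxutils import escape, unescape

raw = "if salary < 1000 then"
escaped = escape(raw)     # replaces &, < and > with entity references
print(escaped)            # if salary &lt; 1000 then

restored = unescape(escaped)  # round-trips back to the original text
print(restored == raw)
```

Most XML libraries perform this escaping automatically when serializing element text, so manual escaping is mainly needed when building XML by string concatenation (which is best avoided anyway).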

Comments in XML

The syntax for writing comments in XML is similar to that of HTML.

<!-- This is a comment -->

White-space is Preserved in XML

HTML truncates multiple white-space characters to one single white-space:

HTML: Hello           my name is Tove

Output: Hello my name is Tove.

With XML, the white-space in a document is not truncated.

XML Stores New Line as LF

In Windows applications, a new line is normally stored as a pair of characters: carriage
return (CR) and line feed (LF). The character pair bears some resemblance to the
typewriter actions of setting a new line. In Unix applications, a new line is normally
stored as a LF character. Macintosh applications use only a CR character to store a new
line.

DTDs and XML SCHEMA

Introduction to DTD
by Jan Egil Refsnes

The purpose of a DTD is to define the legal building blocks of an XML document. It
defines the document structure with a list of legal elements. A DTD can be declared
inline in your XML document, or as an external reference.

Internal DTD

This is an XML document with a Document Type Definition: (Open it in IE5, and select
view source)

<?xml version="1.0"?>
<!DOCTYPE note [
<!ELEMENT note (to,from,heading,body)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT from (#PCDATA)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT body (#PCDATA)>
]>
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>

The DTD is interpreted like this:


!ELEMENT note (in line 2) defines the element "note" as having four elements:
"to,from,heading,body".
!ELEMENT to (in line 3) defines the "to" element to be of the type "#PCDATA".
!ELEMENT from (in line 4) defines the "from" element to be of the type "#PCDATA",
and so on.....

External DTD

This is the same XML document with an external DTD: (Open it in IE5, and select view
source)

<?xml version="1.0"?>
<!DOCTYPE note SYSTEM "note.dtd">
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>

This is a copy of the file "note.dtd" containing the Document Type Definition:

<?xml version="1.0"?>
<!ELEMENT note (to,from,heading,body)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT from (#PCDATA)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT body (#PCDATA)>

Why use a DTD?

XML provides an application independent way of sharing data. With a DTD, independent
groups of people can agree to use a common DTD for interchanging data. Your
application can use a standard DTD to verify that data that you receive from the outside
world is valid. You can also use a DTD to verify your own data.

XML with correct syntax is "Well Formed" XML.

XML validated against a DTD is "Valid" XML.

Well Formed XML Documents

A "Well Formed" XML document has correct XML syntax.

The syntax rules were described in the previous chapters:

• XML documents must have a root element
• XML elements must have a closing tag
• XML tags are case sensitive
• XML elements must be properly nested
• XML attribute values must be quoted

A well-formed example:

<?xml version="1.0" encoding="ISO-8859-1"?>
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>

Valid XML Documents

A "Valid" XML document is a "Well Formed" XML document, which also conforms to
the rules of a Document Type Definition (DTD):

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE note SYSTEM "Note.dtd">
<note>
<to>Tove</to>
<from>Jani</from>
<heading>Reminder</heading>
<body>Don't forget me this weekend!</body>
</note>

The DOCTYPE declaration in the example above, is a reference to an external DTD file.
The content of the file is shown in the paragraph below.

XML DTD

The purpose of a DTD is to define the structure of an XML document. It defines the
structure with a list of legal elements:

<!DOCTYPE note
[
<!ELEMENT note (to,from,heading,body)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT from (#PCDATA)>
<!ELEMENT heading (#PCDATA)>
<!ELEMENT body (#PCDATA)>
]>

If you want to study DTD, you will find our DTD tutorial on our homepage.

XML Schema

W3C supports an XML-based alternative to DTD, called XML Schema:

<xs:element name="note">

<xs:complexType>
<xs:sequence>
<xs:element name="to" type="xs:string"/>
<xs:element name="from" type="xs:string"/>
<xs:element name="heading" type="xs:string"/>
<xs:element name="body" type="xs:string"/>
</xs:sequence>
</xs:complexType>

</xs:element>

If you want to study XML Schema, you will find our Schema tutorial on our homepage.

A General XML Validator

To help you check the syntax of your XML files, we have created an XML validator to
syntax-check your XML.

Please see the next chapter.

XPATH

XPath Tutorial
XPath is used to navigate through elements and attributes in an XML
document.

XPath is a major element in W3C's XSLT standard - and XQuery and XPointer are both
built on XPath expressions.

What is XPath?
• XPath is a syntax for defining parts of an XML document
• XPath uses path expressions to navigate in XML documents
• XPath contains a library of standard functions
• XPath is a major element in XSLT
• XPath is a W3C recommendation

XPath Path Expressions

XPath uses path expressions to select nodes or node-sets in an XML document. These
path expressions look very much like the expressions you see when you work with a
traditional computer file system.

XPath Standard Functions

XPath includes over 100 built-in functions. There are functions for string values, numeric
values, date and time comparison, node and QName manipulation, sequence
manipulation, Boolean values, and more.

XPath is Used in XSLT

XPath is a major element in the XSLT standard. Without XPath knowledge you will not
be able to create XSLT documents.

You can read more about XSLT in our XSLT tutorial.

XQuery and XPointer are both built on XPath expressions. XQuery 1.0 and XPath 2.0
share the same data model and support the same functions and operators.

You can read more about XQuery in our XQuery tutorial.


XPath is a W3C Recommendation

XPath became a W3C Recommendation on 16 November 1999.

XPath was designed to be used by XSLT, XPointer and other XML parsing software.

To read more about the XPath activities at W3C, please read our W3C tutorial.

XPath Syntax

XPath uses path expressions to select nodes or node-sets in an XML document. The node
is selected by following a path or steps.

The XML Example Document

We will use the following XML document in the examples below.

<?xml version="1.0" encoding="ISO-8859-1"?>

<bookstore>

<book>
<title lang="eng">Harry Potter</title>
<price>29.99</price>
</book>

<book>
<title lang="eng">Learning XML</title>
<price>39.95</price>
</book>

</bookstore>

Selecting Nodes

XPath uses path expressions to select nodes in an XML document. The node is selected
by following a path or steps. The most useful path expressions are listed below:

Expression   Description
nodename     Selects all child nodes of the named node
/            Selects from the root node
//           Selects nodes in the document from the current node that match the
             selection, no matter where they are
.            Selects the current node
..           Selects the parent of the current node
@            Selects attributes
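These expressions can be tried out directly. Python's standard xml.etree module supports a limited subset of XPath (enough for the forms in the table above); applied to the bookstore document from this chapter:

```python
import xml.etree.ElementTree as ET

doc = """<bookstore>
  <book>
    <title lang="eng">Harry Potter</title>
    <price>29.99</price>
  </book>
  <book>
    <title lang="eng">Learning XML</title>
    <price>39.95</price>
  </book>
</bookstore>"""

root = ET.fromstring(doc)                    # root is the bookstore node

# nodename steps: select title children of book children of the root
titles = [t.text for t in root.findall("./book/title")]
print(titles)

# '//' form: match price elements at any depth below the current node
prices = [p.text for p in root.findall(".//price")]
print(prices)

# '@' in a predicate: titles whose lang attribute equals 'eng'
eng_titles = root.findall(".//title[@lang='eng']")

# attribute access on a selected node
first_lang = root.find("book/title").get("lang")
print(first_lang)
```

Full XPath implementations (in XSLT processors, or libraries such as lxml) additionally support axes like parent (`..` beyond one step), functions, and numeric comparisons, which xml.etree does not.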

XSLT

XSLT Introduction

XSLT is a language for transforming XML documents into XHTML documents or to
other XML documents.

XPath is a language for navigating in XML documents.

What is XSLT?

• XSLT stands for XSL Transformations
• XSLT is the most important part of XSL
• XSLT transforms an XML document into another XML document
• XSLT uses XPath to navigate in XML documents
• XSLT is a W3C Recommendation

XSLT = XSL Transformations

XSLT is the most important part of XSL.

XSLT is used to transform an XML document into another XML document, or another
type of document that is recognized by a browser, like HTML and XHTML. Normally
XSLT does this by transforming each XML element into an (X)HTML element.

With XSLT you can add/remove elements and attributes to or from the output file. You
can also rearrange and sort elements, perform tests and make decisions about which
elements to hide and display, and a lot more.
A common way to describe the transformation process is to say that XSLT transforms
an XML source-tree into an XML result-tree.

XSLT Uses XPath

XSLT uses XPath to find information in an XML document. XPath is used to navigate
through elements and attributes in XML documents.

If you want to study XPath first, please read our XPath Tutorial.

How Does it Work?

In the transformation process, XSLT uses XPath to define parts of the source document
that should match one or more predefined templates. When a match is found, XSLT will
transform the matching part of the source document into the result document.
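As an illustration of this template matching, a minimal XSLT 1.0 stylesheet for the bookstore document used in the XPath chapter might look like the following (the element names assume that document; the HTML produced is just one possible result):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- This template matches the document root; the select attributes
       are XPath expressions evaluated against the source tree -->
  <xsl:template match="/">
    <html>
      <body>
        <h2>Bookstore</h2>
        <xsl:for-each select="bookstore/book">
          <p>
            <xsl:value-of select="title"/> -
            <xsl:value-of select="price"/>
          </p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

When a processor applies this stylesheet to the bookstore XML, the root-matching template fires once, and each book element contributes one paragraph to the HTML result tree.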

XSLT is a W3C Recommendation

XSLT became a W3C Recommendation on 16 November 1999.

All major browsers have support for XML and XSLT.

Mozilla Firefox

Firefox supports XML, XSLT, and XPath from version 3.

Internet Explorer

Internet Explorer supports XML, XSLT, and XPath from version 6.

Internet Explorer 5 is NOT compatible with the official W3C XSL Recommendation.

Google Chrome

Chrome supports XML, XSLT, and XPath from version 1.


Opera

Opera supports XML, XSLT, and XPath from version 9. Opera 8 supports only XML +
CSS.

Apple Safari

Safari supports XML and XSLT from version 3.

SAX AND DOM

Simple API for XML


From Wikipedia, the free encyclopedia


SAX (Simple API for XML) is a serial access parser API for XML. SAX provides a
mechanism for reading data from an XML document. It is a popular alternative to the
Document Object Model (DOM).

XML Processing with SAX

A parser which implements SAX (i.e., a SAX parser) functions as a stream parser, with an
event-driven API. The user defines a number of callback methods that will be called
when events occur during parsing. The SAX events include:

• XML Text nodes
• XML Element nodes
• XML Processing Instructions
• XML Comments

Events are fired when each of these XML features are encountered, and again when the
end of them is encountered. XML attributes are provided as part of the data passed to
element events.
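Python's standard xml.sax module follows exactly this model: you subclass ContentHandler and override only the callbacks you care about. A minimal handler that records element events as they stream past:

```python
import xml.sax

class EventRecorder(xml.sax.ContentHandler):
    """Records start/end element events in the order they are fired."""

    def __init__(self):
        super().__init__()
        self.events = []

    def startElement(self, name, attrs):
        # attributes arrive as part of the start-element event
        self.events.append(("start", name, dict(attrs.items())))

    def endElement(self, name):
        self.events.append(("end", name))

handler = EventRecorder()
xml.sax.parseString(b'<note date="12/11/2007"><to>Tove</to></note>', handler)
print(handler.events)
```

Because only the callbacks run, the whole document is never held in memory; the handler sees each element exactly once, in document order.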

SAX parsing is unidirectional; previously parsed data cannot be re-read without starting
the parsing operation again.

Benefits

SAX parsers have certain benefits over DOM-style parsers. The quantity of memory that
a SAX parser must use in order to function is typically much smaller than that of a DOM
parser. DOM parsers must have the entire tree in memory before any processing can
begin, so the amount of memory used by a DOM parser depends entirely on the size of
the input data. The memory footprint of a SAX parser, by contrast, is based only on the
maximum depth of the XML file (the maximum depth of the XML tree) and the
maximum data stored in XML attributes on a single XML element. Both of these are
always smaller than the size of the parsed tree itself.

Because of the event-driven nature of SAX, processing documents can often be faster
than DOM-style parsers. Memory allocation takes time, so the larger memory footprint of
the DOM is also a performance issue.

Due to the nature of DOM, streamed reading from disk is impossible. Processing XML
documents larger than main memory is also impossible with DOM parsers but can be
done with SAX parsers. However, DOM parsers may make use of disk space as memory
to side step this limitation.

The Document Object Model (DOM) is a cross-platform and language-independent
convention for representing and interacting with objects in HTML, XHTML and XML
documents. Aspects of the DOM (such as its "Elements") may be addressed and
manipulated within the syntax of the programming language in use. The public interface
of a DOM is specified in its Application Programming Interface (API).

Applications

DOM is likely to be best suited for applications where the document must be accessed
repeatedly or out of sequence order. If the application is strictly sequential and one-pass,
the SAX model is likely to be faster and use less memory. In addition, non-extractive
XML parsing models, such as VTD-XML, provide a new memory-efficient option.

Web browsers

A web browser is not obliged to use DOM in order to render an HTML document.
However, the DOM is required by JavaScript scripts that wish to inspect or modify a web
page dynamically. In other words, the Document Object Model is the way JavaScript sees
its containing HTML page and browser state.

Because DOM supports navigation in any direction (e.g., parent and previous sibling) and
allows for arbitrary modifications, an implementation must at least buffer the document
that has been read so far (or some parsed form of it).

Why they were both built

SAX (Simple API for XML) and DOM (Document Object Model) were both designed to allow
programmers to access their information without having to write a parser in their programming language of
choice. By keeping the information in XML 1.0 format, and by using either SAX or DOM APIs your
program is free to use whatever parser it wishes. This can happen because parser writers must implement
the SAX and DOM APIs using their favorite programming language. SAX and DOM APIs are both
available for multiple languages (Java, C++, Perl, Python, etc.).

So both SAX and DOM were created to serve the same purpose, which is giving you access to the
information stored in XML documents using any programming language (and a parser for that language).
However, both of them take very different approaches to giving you access to your information. You can
learn more about DOM and SAX in the Java and XML book.

What is DOM?

DOM gives you access to the information stored in your XML document as a hierarchical object model.
DOM creates a tree of nodes (based on the structure and information in your XML document) and you can
access your information by interacting with this tree of nodes. The textual information in your XML
document gets turned into a bunch of tree nodes. Figure 1 illustrates this.

Regardless of the kind of information in your XML document (whether it is tabular data, or a list of items,
or just a document), DOM creates a tree of nodes when you create a Document object given the XML
document. Thus DOM forces you to use a tree model (just like a Swing TreeModel) to access the
information in your XML document. This works out really well because XML is hierarchical in nature.
This is why DOM can put all your information in a tree (even if the information is actually tabular or a
simple list).

Figure 1 is overly simplistic, because in DOM, each element node actually contains a list of other nodes as
its children. These children nodes might contain text values or they might be other element nodes. At first
glance, it might seem unnecessary to access the value of an element node (e.g.: in “<name> Nazmul
</name>”, Nazmul is the value) by looking through a list of children nodes inside of it. If each element
only had one value then this would truly be unnecessary. However, elements may contain text data and
other elements; this is why you have to do extra work in DOM just to get the value of an element node.
Usually when pure data is contained in your XML document, it might be appropriate to “lump” all your
data in one String and have DOM return that String as the value of a given element node. This does not
work so well if the data stored in your XML document is a document (like a Word or Framemaker
document). In documents, the sequence of elements is very important. For pure data (like a database table)
the sequence of elements does not matter. So DOM preserves the sequence of the elements that it reads
from XML documents, because it treats everything as if it were a document. Hence the name DOCUMENT
object model.
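The "extra work" described above is easy to see in Python's standard xml.dom.minidom module: an element's value is not a simple property but a separate child text node, so real code usually grows a small helper to collect it:

```python
from xml.dom.minidom import parseString

doc = parseString("<name>Nazmul</name>")

elem = doc.documentElement       # the <name> element node
text_node = elem.firstChild      # its value lives in a child text node
print(elem.tagName, repr(text_node.data))

def element_text(element):
    """Concatenate the text-node children of an element, the helper
    most DOM code ends up writing for mixed-content documents."""
    return "".join(child.data for child in element.childNodes
                   if child.nodeType == child.TEXT_NODE)

print(element_text(elem))
```

For an element containing both text and child elements, `element_text` would return only the text pieces, which is why the DOM keeps them as distinct nodes in the first place.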

If you plan to use DOM as the Java object model for the information stored in your XML document then
you really don’t need to worry about SAX. However, if you find that DOM is not a good object model to
use for the information stored in your XML document then you might want to take a look at SAX. It is very
natural to use SAX in cases where you have to create your own CUSTOM object models. To make matters
a little more confusing, you can also create your object model(s) on top of DOM. OOP is a wonderful
thing.

What is SAX?

SAX gives you access to the information in your XML document not as a tree of
nodes, but as a sequence of events. You ask, how is this useful? The answer is that SAX does not
create a default Java object model on top of your XML document (as DOM does). This makes SAX
faster, but it also requires the following:

• creation of your own custom object model
• creation of a class that listens to SAX events and properly creates your object model.

Note that these steps are not necessary with DOM, because DOM already creates an object model for you
(which represents your information as a tree of nodes).
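As a sketch of that DOM path (assuming the standard JAXP `DocumentBuilder` API), a single parse call does all the work and hands back the ready-made tree; the class name is ours:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class DomParseDemo {

    // One call does almost everything: the parser reads the XML document,
    // builds the node tree, and returns a reference to it as a Document.
    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }

    public static void main(String[] args) throws Exception {
        Document doc = parse("<addressbook><person><name>Nazmul</name></person></addressbook>");
        // The whole tree is already in memory; just navigate it.
        System.out.println(doc.getDocumentElement().getTagName());  // prints: addressbook
    }
}
```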

In the case of DOM, the parser does almost everything: it reads the XML document in, creates a Java object
model on top of it, and then gives you a reference to this object model (a Document object) so that you can
manipulate it. SAX is not called the Simple API for XML for nothing; it really is simple. SAX doesn’t
expect the parser to do much: all SAX requires is that the parser read in the XML document and
fire a bunch of events depending on what tags it encounters. You are responsible for
interpreting these events by writing an XML document handler class, which is responsible for making
sense of all the tag events and creating objects in your own object model. So you have to write:

• your custom object model, to “hold” all the information in your XML document
• a document handler that listens to SAX events (which the SAX parser generates as it reads
your XML document) and interprets these events to create objects in your custom
object model.

SAX can be really fast at runtime if your object model is simple. In that case, it is faster than DOM,
because it bypasses the creation of a tree-based object model of your information. On the other hand, you
do have to write a SAX document handler to interpret all the SAX events (which can be a lot of work).

What kinds of SAX events are fired by the SAX parser? These events are really very simple: SAX fires
an event for every open tag and every close tag. It also fires events for #PCDATA and CDATA sections.
Your document handler (which is a listener for these events) has to interpret these events in some
meaningful way and build your custom object model from them; the sequence in which the events are
fired is very important. SAX also fires events for processing instructions, DTDs, comments, etc., but the
idea is still the same: your handler has to interpret these events (and the sequence of the events) and make
sense out of them.
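The event stream can be made visible with a small handler that simply records each callback. This is a sketch assuming the standard JAXP SAX API; the class name and event labels are ours:

```java
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import javax.xml.parsers.SAXParserFactory;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

public class SaxEventDemo extends DefaultHandler {

    // Records each callback in order, showing the raw event sequence.
    final List<String> events = new ArrayList<>();

    @Override public void startElement(String uri, String local, String qName, Attributes atts) {
        events.add("start:" + qName);   // fired for every open tag
    }

    @Override public void endElement(String uri, String local, String qName) {
        events.add("end:" + qName);     // fired for every close tag
    }

    @Override public void characters(char[] ch, int start, int len) {
        // Fired for #PCDATA; may be called more than once per text run.
        String text = new String(ch, start, len).trim();
        if (!text.isEmpty()) events.add("text:" + text);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<person><name>Nazmul</name></person>";
        SaxEventDemo handler = new SaxEventDemo();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        System.out.println(handler.events);
    }
}
```

The order of entries in the list is exactly the sequence the handler must interpret: open tags, text, and close tags, nested as in the document.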

When to use DOM

If your XML documents contain document data (e.g., Framemaker documents stored in XML format), then
DOM is a completely natural fit for your solution. If you are creating some sort of document information
management system, then you will probably have to deal with a lot of document data. An example of this is
the Datachannel RIO product, which can index and organize information that comes from all kinds of
document sources (like Word and Excel files). In this case, DOM is well suited to allow programs access to
information stored in these documents.

However, if you are dealing mostly with structured data (the equivalent of serialized Java objects in XML),
DOM is not the best choice. That is when SAX might be a better fit.

When to use SAX

If the information stored in your XML documents is machine-readable (and machine-generated) data, then SAX is the
right API for giving your programs access to it. Machine-readable and generated data includes
things like:

• Java object properties stored in XML format
• queries that are formulated in some kind of text-based query language (SQL, XQL, OQL)
• result sets that are generated from such queries (this might include data from relational database tables
encoded into XML).

So machine-generated data is information that you would normally have to create data structures and classes for in
Java. A simple example is the address book containing information about persons, as shown in Figure
1. This address book XML file is not like a word-processor document; rather, it is a document that contains
pure data, which has been encoded into text using XML.

When your data is of this kind, you have to create your own data structures and classes (object models)
anyway in order to manage, manipulate, and persist it. SAX allows you to quickly create a handler
class which creates instances of your object model from the data stored in your XML documents.
An example is a SAX document handler that reads an XML document containing my address book and
creates an AddressBook class that can be used to access this information. The first SAX tutorial shows you
how to do this. The address book XML document contains person elements, which in turn contain name and email
elements. My AddressBook object model contains the following classes:

• an AddressBook class, which is a container for Person objects
• a Person class, which is a container for name and email String objects.

So my “SAX address book document handler” is responsible for turning person elements into Person
objects and then storing them all in an AddressBook object. This document handler turns the name and
email elements into String objects.
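A minimal sketch of such a handler, assuming the standard JAXP SAX API; the Person and AddressBook classes here are simplified stand-ins for the tutorial's actual object model, and the sample data is invented:

```java
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import javax.xml.parsers.SAXParserFactory;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

public class AddressBookHandler extends DefaultHandler {

    // Simplified stand-ins for the custom object model.
    static class Person { String name, email; }
    static class AddressBook { final List<Person> people = new ArrayList<>(); }

    final AddressBook book = new AddressBook();
    private Person current;                            // Person being built from the open <person>
    private final StringBuilder text = new StringBuilder();  // accumulates #PCDATA

    @Override public void startElement(String uri, String local, String qName, Attributes atts) {
        if (qName.equals("person")) current = new Person();
        text.setLength(0);                             // reset the text buffer at every open tag
    }

    @Override public void characters(char[] ch, int start, int len) {
        text.append(ch, start, len);                   // may be called several times per element
    }

    @Override public void endElement(String uri, String local, String qName) {
        if (qName.equals("name"))   current.name  = text.toString().trim();
        if (qName.equals("email"))  current.email = text.toString().trim();
        if (qName.equals("person")) { book.people.add(current); current = null; }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical address book data in the person/name/email shape described above.
        String xml = "<addressbook>"
                   + "<person><name>Nazmul</name><email>nazmul@example.com</email></person>"
                   + "</addressbook>";
        AddressBookHandler h = new AddressBookHandler();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), h);
        System.out.println(h.book.people.get(0).name);  // prints: Nazmul
    }
}
```

Note how the handler relies on the event sequence: the name and email text is captured between the open and close tags, and the finished Person is only added to the AddressBook when the closing person tag fires.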

Conclusion

The SAX document handler you write performs element-to-object mapping. If your information is structured in
a way that makes it easy to create this mapping, you should use the SAX API. On the other hand, if your
data is much better represented as a tree, then you should use DOM.