
ASP.NET Internals: IIS and the Process Model
Published: 02 May 2007
By: Simone Busoli

In this series of articles I'm going to tackle and describe the life cycle of a web request, from the early stages of its life, when it is accepted by the web server, through its processing inside the ASP.NET pipeline, up to the generation of a response by the endpoints of the pipeline.

Introduction

Microsoft Active Server Pages, also known as ASP, has provided web developers with a rich and complex framework for building web applications since its first release in late 1996. As the years passed, its infrastructure evolved and improved so much that what is now known as ASP.NET no longer resembles its predecessor. ASP.NET is a framework for building web applications, that is, applications that run over the web, where the client-server paradigm is mostly represented by a browser forwarding requests for resources of different kinds to a web server. Before the advent of dynamic server-side resource generation techniques such as CGI, PHP, JSP and ASP, all web servers had to do was accept clients' requests for static resources and forward them to the requestor. When dynamic technologies started to grow up, web servers were invested with greater responsibility, since they had to find a way to generate those dynamic resources on their side and return the result to the client, a task they were not originally built for.

From a bird's eye view, the interaction between client and server is very simple. Communications over the web occur via HTTP (HyperText Transfer Protocol), an application-level protocol which relies on TCP and IP to transmit data between two nodes connected to the heterogeneous network known as the World Wide Web.
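The plain-text exchange described above can be sketched in a few lines. This is an illustration, not production code; the host name and path are invented examples, and the response is canned rather than fetched over a real connection.

```python
# A minimal sketch of the HTTP exchange described above: the client opens a
# TCP connection and sends a plain-text request; the server answers with a
# status line, headers and a body. Host and path are placeholder examples.

def build_request(host: str, path: str) -> bytes:
    """Assemble a bare-bones HTTP/1.1 GET request."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def parse_status(response: bytes) -> int:
    """Extract the numeric status code from the response's status line."""
    status_line = response.split(b"\r\n", 1)[0]   # e.g. b"HTTP/1.1 200 OK"
    return int(status_line.split(b" ")[1])

request = build_request("www.example.com", "/default.aspx")
print(request.decode("ascii").splitlines()[0])   # GET /default.aspx HTTP/1.1
print(parse_status(b"HTTP/1.1 200 OK\r\n\r\n<html></html>"))  # 200
```

Everything the rest of this article discusses happens between these two moments: after the request bytes reach the server and before the response bytes leave it.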

Each dynamic server-side technology essentially leans on a particular web server implementation, and ASP.NET is tightly coupled with Microsoft's Internet Information Server, aka IIS.

Different servers chose different ways to generate and serve dynamic resources, and what we're going to examine is how IIS does that, together with the path a request follows once on the server and back to the client.

IIS and ISAPI Extensions

As mentioned, static resources needn't be processed by the server; once a request for such a resource arrives, the server just retrieves its contents from the file system and sends them back to the client as a stream of bytes flowing over the HTTP protocol. Static resources can be images, JavaScript files, CSS style sheets or plain old HTML pages. It's clear that the server needs to know how to distinguish between static and dynamic resources, since the latter need to be processed somehow and not just sent back to the client. That's where ISAPI extensions come in, where ISAPI stands for Internet Server Application Programming Interface. ISAPI extensions are modules implemented as plain old Win32 .dlls, on which IIS relies to process specific resources. Mappings between ISAPI extensions and files are configured via the IIS snap-in and stored in the IIS metabase, where each file extension can be associated with a particular ISAPI extension; that is, when a request for such a file arrives, IIS hands it over to the corresponding ISAPI extension, confident that it will be able to handle it.
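The metabase's script mappings can be pictured as a lookup table keyed by file extension. The following sketch is purely illustrative (it is not how IIS is implemented, and the mapped DLL names beyond the well-known asp.dll and aspnet_isapi.dll are just the ones discussed in this article): anything without a mapping falls through to static file serving.

```python
# Illustrative sketch, not real IIS code: script mappings as a dictionary
# from file extension to the ISAPI extension DLL IIS hands the request to;
# any unmapped extension is served as a static file straight off disk.
import os

SCRIPT_MAP = {
    ".asp":  "asp.dll",
    ".aspx": "aspnet_isapi.dll",
    ".asmx": "aspnet_isapi.dll",
    ".ashx": "aspnet_isapi.dll",
}

def dispatch(url_path: str) -> str:
    """Return which component would handle the requested resource."""
    _, ext = os.path.splitext(url_path)
    isapi = SCRIPT_MAP.get(ext.lower())
    if isapi is None:
        return "static file handler"     # images, CSS, plain HTML, ...
    return isapi

print(dispatch("/orders/list.aspx"))   # aspnet_isapi.dll
print(dispatch("/img/logo.gif"))       # static file handler
```

The point of the table is exactly this indirection: IIS itself never needs to know what "processing an .aspx page" means, only who to hand the request to.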

Figure 1: IIS ISAPI extension mappings

ISAPI extensions obviously need to respect a common interface through which they can be called by IIS and provided with the necessary data to elaborate the request and generate a response.

As Figure 1 illustrates, the .asp extension is mapped to the asp.dll ISAPI extension; at the time of ASP this component was in charge of performing all the tasks required to generate a response, that is, collecting information about the request (made available inside the ASP page via the Request, Response and other common ASP intrinsic objects), parsing and executing the ASP page, and returning the resulting HTML.

Actually, that was a big improvement over a technology like CGI, but ASP.NET takes this approach much further and introduces abstractions which totally shield developers from having to care about what happens at this stage.

When installed, ASP.NET configures IIS to redirect requests for ASP.NET-specific files to a new ISAPI extension called aspnet_isapi.dll. What this extension does is somewhat different from the former asp.dll extension, which was essentially responsible just for parsing and executing the requested ASP page. Since the steps taken by a generic ISAPI module to process a request are totally hidden from IIS, different ISAPI extensions may follow different paradigms in order to process requests.

Table 1: File extensions mapped to the aspnet_isapi.dll ISAPI extension

Extension  Resource type

.asax ASP.NET application files. Usually global.asax.


.ascx ASP.NET user control files.
.ashx HTTP handlers, the managed counterpart of ISAPI extensions.
.asmx ASP.NET web services.
.aspx ASP.NET web pages.
.axd ASP.NET internal HTTP handlers.

As well as the file extensions listed in Table 1, the ASP.NET ISAPI extension manages other file extensions which are usually not served to web browsers, such as Visual Studio project files, source code files and config files.
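That last behaviour is worth spelling out: some extensions are mapped to aspnet_isapi.dll precisely so their contents are never disclosed. The sketch below is a hedged illustration of that idea (the extension sets and status codes are chosen for the example; the real runtime routes these files to internal handlers rather than returning codes from a function like this):

```python
# Illustrative sketch: extensions that are mapped to aspnet_isapi.dll but
# must never be served to a browser (source code, project and config files)
# get an error response instead of their contents.
SERVABLE  = {".aspx", ".asmx", ".ashx", ".axd"}
PROTECTED = {".cs", ".vb", ".config", ".csproj", ".asax", ".ascx"}

def respond(ext: str) -> int:
    """Return the HTTP status this sketch would produce for an extension."""
    if ext in PROTECTED:
        return 403            # forbidden: never disclose sources or config
    if ext in SERVABLE:
        return 200            # processed by the ASP.NET pipeline
    return 404                # unknown resource

print(respond(".config"))  # 403
print(respond(".aspx"))    # 200
```

Mapping these extensions to the ASP.NET ISAPI extension, rather than leaving them unmapped, is what prevents IIS from serving them as plain static files.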


The ASP.NET Process Model



So far we've seen that when a request for an ASP.NET file is picked up by IIS, it is passed to aspnet_isapi.dll, which is the main entry point for ASP.NET-related processing. Actually, what the ISAPI extension does depends significantly on the version of IIS available on the system, and thus the process model, which is the sequence of operations performed by the ASP.NET runtime to process the request and generate a response, may vary quite a bit.

When running under IIS 5.x, all ASP.NET-related requests are dispatched by the ISAPI extension to an external worker process called aspnet_wp.exe. The ASP.NET ISAPI extension, hosted in the inetinfo.exe process, passes control to aspnet_wp.exe, along with all the information concerning the incoming request. The communication between the two is performed via named pipes, a well-known mechanism for IPC (Inter-Process Communication). The ASP.NET worker process performs a considerable number of tasks, together with the ISAPI extension; they are the main authors of everything that happens under the hood of an ASP.NET request. To introduce a topic which will be discussed later, take note of the fact that each web application, corresponding to a different virtual directory hosted on IIS, is executed in the context of the same process, the ASP.NET worker process. To provide isolation and abstraction from the execution context, the ASP.NET model introduces the concept of Application Domains, in brief AppDomains. They can be considered as lightweight processes. More on this later.
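The hand-off between the ISAPI extension and the worker process can be modelled with two processes and a pipe. This is a rough analogue only: Python's multiprocessing Pipe stands in for the Windows named pipe, and the process roles, URL and payload shape are invented for illustration.

```python
# A rough analogue of the IIS 5.x hand-off: one process (playing the ISAPI
# extension inside inetinfo.exe) pushes request data through a pipe to a
# separate worker process (playing aspnet_wp.exe), which sends the generated
# response back the same way.
from multiprocessing import Pipe, Process

def worker(conn):
    """The 'aspnet_wp.exe' side: receive one request, send back a response."""
    request = conn.recv()                 # blocks until the ISAPI side sends
    conn.send(f"<html>processed {request['url']}</html>")
    conn.close()

if __name__ == "__main__":
    isapi_end, worker_end = Pipe()        # stands in for the named pipe
    p = Process(target=worker, args=(worker_end,))
    p.start()
    isapi_end.send({"url": "/default.aspx", "method": "GET"})
    print(isapi_end.recv())               # <html>processed /default.aspx</html>
    p.join()
```

Every request pays the cost of this round trip across a process boundary, which is exactly the overhead the IIS 6 model, discussed below, removes.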

When running under IIS 6, on the other hand, the aspnet_wp.exe process is not used, in favour of another process called w3wp.exe. Furthermore, inetinfo.exe is no longer used to forward HTTP requests to ISAPI extensions, although it keeps running to serve requests over other protocols. A lot of other details change compared to the process model used by previous versions of IIS, although IIS 6 is capable of running in compatibility mode and emulating the behavior of its predecessor. A big step forward, compared to the process model used when running on top of IIS 5, is that incoming requests are now handled at a lower (kernel) level and then forwarded to the correct ISAPI extension, thereby avoiding inter-process communication techniques, which can be expensive from a performance and resource consumption point of view. We'll delve deeper into this topic in the following paragraphs.

The IIS 5 Process Model

This is the default process model available on Windows 2000 and XP machines. As mentioned, it consists of the inetinfo.exe process listening by default on TCP port 80 for incoming HTTP requests and queuing them into a single queue, waiting to be processed. If the request is specific to ASP.NET, the processing is delegated to the ASP.NET ISAPI extension, aspnet_isapi.dll. This, in turn, communicates with the ASP.NET worker process, aspnet_wp.exe, via named pipes, and finally it is the worker process which takes care of delivering the request to the ASP.NET HTTP runtime environment. This process is graphically represented in Figure 2.

Figure 2: The IIS 5 process model

Figure 2 shows an additional element we haven't mentioned yet, the ASP.NET HTTP Runtime Environment. It's not the topic of this article and will eventually be explained in a follow-up article, but for the sake of this discussion the HTTP Runtime Environment can be considered as a black box where all the ASP.NET-specific processing takes place, where all the managed code lives and where developers can actually put their hands on things, from the HttpRuntime straight to the HttpHandler which will finally process the request and generate the response. This is also referred to as the ASP.NET Pipeline or HTTP Runtime pipeline.

One of the interesting points of this process model is that all the requests, once handled by the ISAPI extension, are passed to the ASP.NET worker process. Only one instance of this process is active at a time, with one exception, discussed later. Therefore all the ASP.NET web applications hosted on IIS are actually hosted inside the worker process, too. However, this doesn't mean that all the applications run under the same context and share all their data. As mentioned, ASP.NET introduces the concept of the AppDomain, which is essentially a sort of managed lightweight process providing isolation and security boundaries. Each IIS virtual directory is executed in a single AppDomain, which is loaded automatically into the worker process whenever a resource belonging to that application is requested for the first time. Once the AppDomain is loaded (that is, all the assemblies required to satisfy that request are loaded into the AppDomain) control is passed to the ASP.NET pipeline for the actual processing. Multiple AppDomains can thus run under the same process, while requests for the same AppDomain can be served by multiple threads. However, a thread doesn't belong to an AppDomain and can serve requests for different AppDomains, but at a given time a thread belongs to a single AppDomain.
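The lazy, per-virtual-directory loading just described can be sketched schematically. In this toy model an "AppDomain" is just a dictionary created on first use and cached inside the single worker process; the virtual directory names are invented, and the real runtime of course loads assemblies rather than dict entries.

```python
# A schematic sketch of lazy AppDomain loading: each virtual directory gets
# its own isolated "domain" (here just a dict), created inside the single
# worker process the first time one of its resources is requested.
app_domains: dict[str, dict] = {}

def get_app_domain(virtual_dir: str) -> dict:
    """Load the AppDomain for a virtual directory on first use, then reuse it."""
    if virtual_dir not in app_domains:
        # in the real runtime, this is where the application's assemblies load
        app_domains[virtual_dir] = {"vdir": virtual_dir, "assemblies": ["App.dll"]}
    return app_domains[virtual_dir]

first  = get_app_domain("/shop")
second = get_app_domain("/shop")     # same domain, not reloaded
third  = get_app_domain("/blog")     # a separate, isolated domain

print(first is second)   # True
print(first is third)    # False
print(len(app_domains))  # 2
```

The two properties the sketch captures are the ones that matter later: the first request to an application pays the loading cost, and distinct applications never share a domain even though they share the process.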

For performance purposes the worker process can be recycled according to criteria which can be specified declaratively in the machine.config file placed in the directory C:\windows\microsoft.net\Framework\[framework version]\CONFIG. These criteria are the age of the process, the number of requests served and queued, the time spent idle and the memory consumed. Once one of these thresholds is reached, the ISAPI extension creates a new instance of the worker process, which will be used from then on to serve requests. This is the only time when multiple copies of the process can be running concurrently. In fact, the old instance of the process isn't killed, but is allowed to finish serving its pending requests.
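The recycling decision reduces to checking the process's statistics against the configured thresholds. The sketch below illustrates that check only; the numeric limits are invented examples, not the machine.config defaults, and the stat names are made up for the illustration.

```python
# Illustrative sketch of the recycling criteria listed above: once any
# threshold is crossed, a new worker process instance takes over while the
# old one finishes serving its pending requests. Limits are invented examples.
LIMITS = {"requests": 5000, "idle_minutes": 20, "memory_mb": 500, "age_minutes": 720}

def should_recycle(stats: dict) -> bool:
    """True when any configured threshold has been reached."""
    return any(stats[key] >= limit for key, limit in LIMITS.items())

healthy = {"requests": 1200, "idle_minutes": 3, "memory_mb": 180, "age_minutes": 60}
tired   = {"requests": 5200, "idle_minutes": 3, "memory_mb": 180, "age_minutes": 60}

print(should_recycle(healthy))  # False
print(should_recycle(tired))    # True
```

Note that a single exceeded criterion is enough; the thresholds are independent, not combined.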

The IIS 6 Process Model

The IIS 6 process model is the default model on machines running the Windows Server 2003 operating system. It introduces several changes and improvements over the IIS 5 process model. One of the biggest changes is the concept of application pools. On IIS 5.x all web applications, that is, all AppDomains, were hosted by the ASP.NET worker process. To achieve finer granularity over security boundaries and customization, the IIS 6 process model allows applications to run inside different copies of a new worker process, w3wp.exe. Each application pool can contain multiple AppDomains and is hosted in a single copy of the worker process. In other words, the shift is from a single process hosting all applications to multiple processes, each hosting an application pool. This model is also called the worker process isolation mode.

Another big change from the previous model is the way IIS listens for incoming requests. In the IIS 5 model, it was the IIS process, inetinfo.exe, which listened on a specific TCP port for HTTP requests. In the IIS 6 architecture, incoming requests are handled and queued at kernel level instead of user mode, via a kernel driver called http.sys; this approach has several advantages over the old model and is called kernel-level request queuing.

Figure 3: The IIS 6 process model


Figure 3 illustrates the principal components taking part in request processing under the IIS 6 model. Once a request arrives, the kernel-level device driver http.sys routes it to the right application pool queue. Each queue belongs to a specific application pool, and thus to a specific copy of the worker process, which next receives the request from the queue. This approach greatly reduces the overhead introduced by the named pipes used in the IIS 5 model, since no inter-process communication takes place; the requests are delivered to the worker process directly from the kernel-level driver. This has many advantages concerning reliability, too. Since request dispatching runs in kernel mode, it isn't influenced by crashes and malfunctions happening at user level, that is, in the worker processes. Thereby, even if a worker process crashes, the system is still capable of accepting incoming requests, and eventually restarts the crashed process.
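That reliability property can be modelled with a queue per application pool that outlives the worker serving it. This is a toy model: the pool names, URL-prefix bindings and longest-prefix routing are invented for the illustration, and the real http.sys works on the configured namespace, not on a Python dict.

```python
# A toy model of kernel-level request queuing: http.sys keeps a queue per
# application pool, so requests accepted while a pool's worker process is
# down are not lost; the worker drains them once it is (re)started.
from collections import deque

pool_queues = {"DefaultAppPool": deque(), "ShopPool": deque()}
bindings = {"/shop": "ShopPool", "/": "DefaultAppPool"}   # invented bindings

def enqueue(url: str) -> None:
    """Kernel side (http.sys): route the request to its pool's queue."""
    prefix = max((p for p in bindings if url.startswith(p)), key=len)
    pool_queues[bindings[prefix]].append(url)

def drain(pool: str) -> list:
    """Worker side (w3wp.exe): pull all queued requests once (re)started."""
    queue = pool_queues[pool]
    served = list(queue)
    queue.clear()
    return served

enqueue("/shop/cart.aspx")
enqueue("/shop/pay.aspx")      # queued even if ShopPool's worker has crashed
print(drain("ShopPool"))       # ['/shop/cart.aspx', '/shop/pay.aspx']
```

Because enqueueing and draining are decoupled, a crash between the two calls loses nothing that was already queued, which is the behaviour described above.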

It's the worker process that is in charge of loading the ASP.NET ISAPI extension, which, in turn, loads the CLR and delegates all the work to the HTTP Runtime.

The w3wp.exe worker process, differently from the aspnet_wp.exe process used in the IIS 5 model, isn't ASP.NET specific, and is used to handle any kind of request. Each worker process then decides which ISAPI modules to load according to the type of resources it needs to serve.

A detail not shown in Figure 3, for simplicity's sake, is that incoming requests are forwarded from the application pool queue to the right worker process via a module hosted in IIS 6 called the Web Administration Service (WAS). This module is responsible for reading the bindings between worker processes and web applications from the IIS metabase and forwarding the request to the right worker process.
