User Controls:
In ASP.NET: A user-authored server control that enables an ASP.NET page to be re-
used as a server control. An ASP.NET user control is authored declaratively and
persisted as a text file with an .ascx extension. The ASP.NET page framework
compiles a user control on the fly to a class that derives from the
System.Web.UI.UserControl class.
Where does the Web page belong in the .NET Framework class hierarchy?
System.Web.UI.Page
Fragment Caching: Caches the portion of the page generated by the request.
Sometimes it is not practical to cache the entire page; in such cases we can cache a
portion of the page:
<%@ OutputCache Duration="120" VaryByParam="CategoryID;SelectedID" %>
Data Caching: Caches objects programmatically. For data caching, ASP.NET provides
the Cache object, e.g. Cache["States"] = dsStates;
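For illustration, a minimal sketch of data caching in a page's code-behind; the DataSet name dsStates is taken from the example above, and the 10-minute expiration is an arbitrary assumption:

```csharp
// Fragment from an ASP.NET page's code-behind (not standalone).
// "dsStates" is the DataSet from the example above; the 10-minute
// absolute expiration is an arbitrary choice for illustration.
Cache["States"] = dsStates;                 // simplest form: store by key

Cache.Insert("States", dsStates,
    null,                                   // no cache dependency
    DateTime.Now.AddMinutes(10),            // absolute expiration
    System.Web.Caching.Cache.NoSlidingExpiration);

DataSet ds = (DataSet)Cache["States"];      // null if expired or evicted
```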
MailMessage and SmtpMail are classes defined in the .NET Framework Class
Library's System.Web.Mail namespace. Due to a security change made to ASP.NET
just before it shipped, you need to set SmtpMail's SmtpServer property to
"localhost" even though "localhost" is the default. In addition, you must use the IIS
configuration applet to enable localhost (127.0.0.1) to relay messages through the
local SMTP service.
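A hedged sketch of sending mail with these classes; the addresses are placeholders, and a working local SMTP service must be available for delivery:

```csharp
// Sketch only: requires the .NET Framework's System.Web.Mail and a
// working local SMTP service. All addresses are placeholder values.
using System.Web.Mail;

public class MailExample
{
    public static void SendTestMessage()
    {
        MailMessage msg = new MailMessage();
        msg.From = "webmaster@example.com";
        msg.To = "user@example.com";
        msg.Subject = "Test message";
        msg.Body = "Sent via System.Web.Mail";

        // Must be set explicitly, even though "localhost" is the default:
        SmtpMail.SmtpServer = "localhost";
        SmtpMail.Send(msg);
    }
}
```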
VSDISCO files are DISCO files that support dynamic discovery of Web services. If
you place the following VSDISCO file in a directory on your Web server, for example,
it returns references to all ASMX and DISCO files in the host directory and any
subdirectories not noted in <exclude> elements:
<?xml version="1.0" ?>
<dynamicDiscovery
xmlns="urn:schemas-dynamicdiscovery:disco.2000-03-17">
<exclude path="_vti_cnf" />
<exclude path="_vti_pvt" />
<exclude path="_vti_log" />
<exclude path="_vti_script" />
<exclude path="_vti_txt" />
</dynamicDiscovery>
Setting AspCompat to true does two things. First, it makes intrinsic ASP objects
available to the COM components by placing unmanaged wrappers around the
equivalent ASP.NET objects. Second, it improves the performance of calls that the
page places to apartment-threaded COM objects by ensuring that the page (actually,
the thread that processes the request for the page) and the COM objects it creates
share an apartment. AspCompat="true" forces ASP.NET request threads into single-
threaded apartments (STAs). If those threads create COM objects marked
ThreadingModel=Apartment, then the objects are created in the same STAs as the
threads that created them. Without AspCompat="true," request threads run in a
multithreaded apartment (MTA) and each call to an STA-based COM object incurs a
performance hit when it's marshaled across apartment boundaries.
Do not set AspCompat to true if your page uses no COM objects or if it uses COM
objects that don't access ASP intrinsic objects and that are registered
ThreadingModel=Free or ThreadingModel=Both.
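The setting itself is applied per page through the @ Page directive, for example:

```aspx
<%@ Page Language="C#" AspCompat="true" %>
```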
Should validation (did the user enter a real date) occur server-side or client-
side? Why?
Client-side validation, because it avoids a needless round trip: the date can be
checked on the client machine before the form is ever submitted to the server.
What are ASP.NET Web Forms? How is this technology different than what
is available though ASP?
Web Forms are the heart and soul of ASP.NET. Web Forms are the User Interface (UI)
elements that give your Web applications their look and feel. Web Forms are similar
to Windows Forms in that they provide properties, methods, and events for the
controls that are placed onto them. However, these UI elements render themselves
in the appropriate markup language required by the request, e.g. HTML. If you use
Microsoft Visual Studio .NET, you will also get the familiar drag-and-drop interface
used to create your UI for your Web application.
How do you turn off cookies for one page in your site?
Since no page-level directive exists for this, it cannot be done.
Which method do you use to redirect the user to another page without
performing a round trip to the client?
Server.Transfer and Server.Execute
What property do you have to set to tell the grid which page to go to when
using the Pager object?
CurrentPageIndex
Should validation (did the user enter a real date) occur server-side or client-
side? Why?
It should occur both client-side and server-side. A validator with a regular
expression can only check whether the specified date is in the correct format.
Checking whether the date is a real, valid date (for example, that it falls within the
expected range of system dates) should be done on the server.
Can you give an example of what might be best suited to place in the
Application_Start and Session_Start subroutines?
The Application_Start event is guaranteed to occur only once throughout the lifetime
of the application. It's a good place to initialize global variables. For example, you
might want to retrieve a list of products from a database table and place the list in
application state or the Cache object. SessionStateModule exposes both
Session_Start and Session_End events.
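A minimal Global.asax sketch of both handlers; LoadProductsFromDatabase is a hypothetical helper standing in for whatever data access your application uses:

```csharp
// Global.asax fragment (not standalone). "LoadProductsFromDatabase"
// is a hypothetical helper; replace it with your own data access code.
void Application_Start(object sender, EventArgs e)
{
    // Runs exactly once, when the application starts:
    // a good place for global, shared lookup data.
    Application["Products"] = LoadProductsFromDatabase();
}

void Session_Start(object sender, EventArgs e)
{
    // Runs once per user session: initialize per-user state.
    Session["Cart"] = new ArrayList();
}
```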
The main advantages of ViewState are:
1. Simplicity. There is no need to write possibly complex code to store form data
between page submissions.
2. Flexibility. It is possible to enable, configure, and disable ViewState on a control-
by-control basis, choosing to persist the values of some fields but not others.
There are, however, a few disadvantages that are worth pointing out:
1. It does not track across pages. ViewState information does not automatically
transfer from page to page. With the session approach, values can be stored in the
session and accessed from other pages. This is not possible with ViewState, so
storing data into the session must be done explicitly.
2. ViewState is not suitable for transferring data to back-end systems. That is, data
still has to be transferred to the back end using some form of data object.
Describe session handling in a webfarm, how does it work and what are the
limits?
ASP.NET supports storing session data in three ways: (i) in-process (in the same
memory that ASP.NET uses); (ii) out-of-process, using a Windows service (in
separate memory from ASP.NET); or (iii) in SQL Server (persistent storage). Both the
Windows service and the SQL Server solutions support a webfarm scenario in which
all the web servers can be configured to share a common session state store.
1. Windows Service:
We can start this service via Start | Control Panel | Administrative Tools | Services;
the service is named ASP.NET State Service. It can be started or stopped manually,
or configured to start automatically. Then we have to configure our web.config file:
<configuration>
  <system.web>
    <sessionState
      mode="StateServer"
      stateConnectionString="tcpip=127.0.0.1:42424"
      stateNetworkTimeout="10"
      sqlConnectionString="data source=127.0.0.1;uid=sa;pwd="
      cookieless="false"
      timeout="20" />
  </system.web>
</configuration>
Here the ASP.NET session is directed to use the Windows service for state
management on the local server (127.0.0.1 is the TCP/IP loop-back address). The
default port is 42424. We can configure any port we like, but to do so we have to
edit the registry manually.
Follow these simple steps
- In a webfarm make sure you have the same config file in all your web servers.
- Also make sure your objects are serializable.
- For session state to be maintained across different web servers in the webfarm, the
application path of the web-site in the IIS Metabase should be identical in all the
web-servers in the webfarm.
What property must you set, and what method must you call in your code, in
order to bind the data from some data source to the Repeater control?
Set the DataMember property to the name of the table to bind to. (If this property is
not set, by default the first table in the dataset is used.)
DataBind method, use this method to bind data from a source to a server control.
This method is commonly used after retrieving a data set through a database query.
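Putting the two together, a hedged sketch; dsProducts and the "Products" table name are assumptions:

```csharp
// Fragment from a page's code-behind; "dsProducts" is a DataSet
// assumed to have been filled earlier, and "Products" is assumed
// to be one of its tables.
Repeater1.DataSource = dsProducts;
Repeater1.DataMember = "Products";  // optional; defaults to the first table
Repeater1.DataBind();               // must be called explicitly
```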
ASP.NET automatically deletes a user's Session object, dumping its contents, after it
has been idle for a configurable timeout interval. This interval, in minutes, is set in
the <sessionState> section of the web.config file. The default is 20 minutes.
How do you turn off cookies for one page in your site?
Use the Cookie.Discard property, which gets or sets the discard flag set by the
server. When true, it instructs the client application not to save the cookie on the
user's hard disk when the session ends.
What tags do you need to add within the asp:datagrid tags to bind columns
manually?
<asp:DataGrid id="dgCart" AutoGenerateColumns="False" CellPadding="4"
    Width="448px" runat="server">
  <Columns>
    <asp:ButtonColumn HeaderText="SELECT" Text="SELECT"
        CommandName="select"></asp:ButtonColumn>
    <asp:BoundColumn DataField="ProductId" HeaderText="Product ID"></asp:BoundColumn>
    <asp:BoundColumn DataField="ProductName" HeaderText="Product Name"></asp:BoundColumn>
    <asp:BoundColumn DataField="UnitPrice" HeaderText="UnitPrice"></asp:BoundColumn>
  </Columns>
</asp:DataGrid>
Which method do you use to redirect the user to another page without
performing a round trip to the client?
Server.Transfer
What is the transport protocol you use to call a Web service with SOAP?
HTTP. SOAP messages are typically carried over the HTTP protocol.
What tags do you need to add within the asp:datagrid tags to bind columns
manually?
Set the AutoGenerateColumns property to False on the DataGrid tag, and then use a
Columns tag containing asp:BoundColumn tags.
Which control would you use if you needed to make sure the values in two
different controls matched?
CompareValidator is used to ensure that two fields are identical.
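A typical use is confirming a password; a sketch, with illustrative control names:

```aspx
<asp:TextBox id="txtPassword" runat="server" TextMode="Password" />
<asp:TextBox id="txtConfirm" runat="server" TextMode="Password" />
<asp:CompareValidator id="cvPassword" runat="server"
    ControlToValidate="txtConfirm"
    ControlToCompare="txtPassword"
    ErrorMessage="The two passwords must match." />
```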
What are the various ways of securing a web site that could prevent from
hacking etc .. ?
1) Authentication/Authorization
2) Encryption/Decryption
3) Maintaining web servers outside the corporate firewall. etc.,
On Windows 2000 or XP, and on Windows 2003 running IIS in IIS 5.0 isolation
mode, ASP.NET runs inside the ASP.NET worker process, aspnet_wp.exe.
When multiple versions of the .NET Framework are executing side-by-side on a single
computer, the ASP.NET ISAPI version mapped to an ASP.NET application determines
which version of the common language runtime is used for the application.
The tool can be launched with a set of optional parameters. The "i" option installs
the version of ASP.NET associated with Aspnet_regiis.exe and updates the script
maps at the IIS metabase root and below. Note that only applications that are
currently mapped to an earlier version of ASP.NET are affected.
What is a PostBack?
The process in which a Web page sends data back to the same page on the server.
What is ViewState? How is it encoded? Is it encrypted? Who uses
ViewState?
ViewState is the mechanism ASP.NET uses to keep track of server control state
values that don't otherwise post back as part of the HTTP form; it maintains the UI
state of a page.
ViewState is base64-encoded.
It is not encrypted by default, but it can be protected by setting
EnableViewStateMAC="true" (tamper detection) and setting the machineKey
validation type to 3DES (encryption). If you do not want to maintain ViewState,
include the directive < %@ Page EnableViewState="false" % > at the top of an .aspx
page, or add the attribute EnableViewState="false" to any control.
What is the < machinekey > element and what two ASP.NET technologies is
it used for?
Configures keys to use for encryption and decryption of forms authentication cookie
data and view state data, and for verification of out-of-process session state
identification. Therefore, the two ASP.NET technologies it is used for are forms
authentication and ViewState.
What three Session State providers are available in ASP.NET 1.1? What are
the pros and cons of each?
ASP.NET provides three distinct ways to store session data for your application: in-
process session state, out-of-process session state using a Windows service, and
out-of-process session state in a SQL Server database. Each has its advantages.
1. In-process mode (the default) stores session state in the memory of the ASP.NET
worker process. It is the fastest option, but session data is lost if the process is
recycled, and it cannot be shared across a webfarm.
2. The State Server simply stores session state in memory when in out-of-proc
mode. In this mode the worker process talks directly to the State Server.
3. In SQL mode, session state is stored in a SQL Server database and the worker
process talks directly to SQL Server. The ASP.NET worker processes are then able to
take advantage of this simple storage service by serializing and saving (using .NET
serialization services) all objects within a client's Session collection at the end of
each Web request.
Both these out-of-process solutions are useful primarily if you scale your application
across multiple processors or multiple computers, or where data cannot be lost if a
server or process is restarted.
Name and describe some HTTP Status Codes and what they express to the
requesting client.
When users try to access content on a server that is running Internet Information
Services (IIS) through HTTP or File Transfer Protocol (FTP), IIS returns a numeric
code that indicates the status of the request. This status code is recorded in the IIS
log, and it may also be displayed in the Web browser or FTP client. The status code
can indicate whether a particular request is successful or unsuccessful and can also
reveal the exact reason why a request is unsuccessful. HTTP status codes fall into
five groups, ranging from 1xx to 5xx.
101 - Switching protocols.
200 - OK. The client request has succeeded
302 - Object moved.
400 - Bad request.
500.13 - Web server is too busy.
The Repeater class is not derived from the WebControl class, like the DataGrid and
DataList. Therefore, the Repeater lacks the stylistic properties common to both the
DataGrid and DataList. What this boils down to is that if you want to format the data
displayed in the Repeater, you must do so in the HTML markup.
The Repeater control provides the maximum amount of flexibility over the HTML
produced. Whereas the DataGrid wraps the DataSource contents in an HTML < table
>, and the DataList wraps the contents in either an HTML < table > or < span > tags
(depending on the DataList's RepeatLayout property), the Repeater adds absolutely
no HTML content other than what you explicitly specify in the templates.
While using Repeater control, If we wanted to display the employee names in a bold
font we'd have to alter the "ItemTemplate" to include an HTML bold tag, Whereas
with the DataGrid or DataList, we could have made the text appear in a bold font by
setting the control's ItemStyle-Font-Bold property to True.
The Repeater's lack of stylistic properties can drastically add to the development time
metric. For example, imagine that you decide to use the Repeater to display data
that needs to be bold, centered, and displayed in a particular font-face with a
particular background color. While all this can be specified using a few HTML tags,
these tags will quickly clutter the Repeater's templates. Such clutter makes it much
harder to change the look at a later date. Along with its increased development time,
the Repeater also lacks any built-in functionality to assist in supporting paging or
editing of data. Due to this lack of feature support, the Repeater scores poorly on
the usability scale.
However, the Repeater's performance is slightly better than the DataList's, and
more noticeably better than the DataGrid's: in benchmark measurements it handles
more requests per second than either control.
Can we handle the error and redirect to some pages using web.config?
Yes, we can do this, but to handle errors we must know the error codes; only then
can we take the user to a proper error message page, otherwise it may confuse the
user.
CustomErrors Configuration section in web.config file:
The default configuration is:
<customErrors mode="RemoteOnly" defaultRedirect="Customerror.aspx">
  <error statusCode="404" redirect="Notfound.aspx" />
</customErrors>
If mode is set to Off, custom error messages will be disabled. Users will receive
detailed exception error messages.
If mode is set to On, custom error messages will be enabled.
If mode is set to RemoteOnly, then users will receive custom errors, but users
accessing the site locally will receive detailed error messages.
Add an <error> tag for each error you want to handle. The error tag above redirects
the user to the Notfound.aspx page when the site returns the 404 (Page not found)
error.
The DataGrid provides the means to display a group of records from the data source
(for example, the first 10), and then navigate to the "page" containing the next 10
records, and so on through the data.
Using ADO.NET we can exercise explicit control over the number of records returned
from the data source, as well as how much data is to be cached locally in the
DataSet.
1. When calling DataAdapter.Fill, supply a value for the 'MaxRecords' parameter.
(Note: don't rely on this for efficiency, because the query still returns all records;
the DataSet is merely filled based on the 'MaxRecords' value.)
2. For a SQL Server database, combine a WHERE clause and an ORDER BY clause
with the TOP predicate.
3. If the data does not change often, just cache records locally in a DataSet and
take some records from the DataSet to display.
Server.Transfer(): the client address shown is still that of the requesting page, but
all the content is from the requested page. Data can persist across the pages using
the Context.Items collection, which is one of the best ways to transfer data from one
page to another while keeping the page state alive.
Response.Redirect(): the client knows the physical location (page name and query
string as well). Context.Items loses its persistence when navigating to the
destination page. In
earlier versions of IIS, if we wanted to send a user to a new Web page, the only
option we had was Response.Redirect. While this method does accomplish our goal,
it has several important drawbacks. The biggest problem is that this method causes
each page to be treated as a separate transaction. Besides making it difficult to
maintain your transactional integrity, Response.Redirect introduces some additional
headaches. First, it prevents good encapsulation of code. Second, you lose access to
all of the properties in the Request object. Sure, there are workarounds, but they're
difficult. Finally, Response.Redirect necessitates a round trip to the client, which, on
high-volume sites, causes scalability problems. As you might suspect, Server.Transfer
fixes all of these problems. It does this by performing the transfer on the server
without requiring a roundtrip to the client.
Response.Redirect sends a response to the client browser instructing it to request the
second page. This requires a round-trip to the client, and the client initiates the
Request for the second page. Server.Transfer transfers the process to the second
page without making a round-trip to the client. It also transfers the HttpContext to
the second page, enabling the second page access to all the values in the
HttpContext of the first page.
Yes, we can create a user app domain by calling one of the following overloaded
static methods of the System.AppDomain class:
1. public static AppDomain CreateDomain(string friendlyName)
2. public static AppDomain CreateDomain(string friendlyName, Evidence
securityInfo)
3. public static AppDomain CreateDomain(string friendlyName, Evidence
securityInfo, AppDomainSetup info)
4. public static AppDomain CreateDomain(string friendlyName, Evidence
securityInfo, string appBasePath, string appRelativeSearchPath, bool
shadowCopyFiles)
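A minimal sketch of the first overload. Note that this requires the .NET Framework; .NET Core and later support only a single AppDomain, so CreateDomain throws there:

```csharp
using System;

class AppDomainExample
{
    static void Main()
    {
        // Create a second application domain in the current process.
        AppDomain domain = AppDomain.CreateDomain("WorkerDomain");
        Console.WriteLine(domain.FriendlyName); // "WorkerDomain"

        // Domains can be unloaded independently of the process.
        AppDomain.Unload(domain);
    }
}
```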
What are the various security methods which IIS Provides apart from .NET ?
a) Authentication Modes
b) IP Address and Domain Name Restriction
c) DNS Lookups
d) The Network ID and Subnet Mask
e) SSL
Two attributes in the <processModel> section affect the Web garden model:
webGarden and cpuMask. The webGarden attribute takes a Boolean value that
indicates whether or not multiple worker processes (one per affinitized CPU) have to
be used. The attribute is set to false by default. The cpuMask attribute stores a
DWORD value whose binary representation provides a bit mask for the CPUs that
are eligible to run the ASP.NET worker process. The default value is -1
(0xFFFFFFFF), which means that all available CPUs can be used. The contents of the
cpuMask attribute are ignored when the webGarden attribute is false. The cpuMask
attribute also sets an upper bound to the number of copies of aspnet_wp.exe that
are running.
Web gardening enables multiple worker processes to run at the same time. However,
you should note that all processes will have their own copy of application state, in-
process session state, ASP.NET cache, static data, and all that is needed to run
applications. When the Web garden mode is enabled, the ASP.NET ISAPI launches as
many worker processes as there are CPUs, each a full clone of the next (and each
affinitized with the corresponding CPU). To balance the workload, incoming requests
are partitioned among running processes in a round-robin manner. Worker processes
get recycled as in the single processor case. Note that ASP.NET inherits any CPU
usage restriction from the operating system and doesn't include any custom
semantics for doing this.
All in all, the Web garden model is not necessarily a big win for all applications. The
more stateful applications are, the more they risk paying in terms of real
performance. Working data is stored in blocks of shared memory so that any
changes entered by a process are immediately visible to others. However, for the
time it takes to service a request, working data is copied in the context of the
process. Each worker process, therefore, will handle its own copy of working data,
and the more stateful the application, the higher the cost in performance. In this
context, careful and savvy application benchmarking is an absolute must.
Changes made to the <processModel> section of the configuration file are effective
only after IIS is
restarted. In IIS 6, Web gardening parameters are stored in the IIS metabase; the
webGarden and cpuMask attributes are ignored.
When was .NET announced?
Bill Gates delivered a keynote at Forum 2000, held June 22, 2000, outlining the .NET
'vision'. The July 2000 PDC had a number of sessions on .NET technology, and
delegates were given CDs containing a pre-release version of the .NET
framework/SDK and Visual Studio.NET.
What is IL?
IL = Intermediate Language. Also known as MSIL (Microsoft Intermediate Language)
or CIL (Common Intermediate Language). All .NET source code (of any language) is
compiled to IL. The IL is then converted to machine code at the point where the
software is installed, or at run-time by a Just-In-Time (JIT) compiler.
What is reflection?
All .NET compilers produce metadata about the types defined in the modules they
produce. This metadata is packaged along with the module (modules in turn are
packaged together in assemblies), and can be accessed by a mechanism called
reflection. The System.Reflection namespace contains classes that can be used to
interrogate the types for a module/assembly.
Using reflection to access .NET metadata is very similar to using ITypeLib/ITypeInfo
to access type library data in COM, and it is used for similar purposes - e.g.
determining data type sizes for marshaling data across context/process/machine
boundaries.
Reflection can also be used to dynamically invoke methods (see
System.Type.InvokeMember ) , or even create types dynamically at run-time (see
System.Reflection.Emit.TypeBuilder).
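A small self-contained sketch of both uses, interrogating a type's metadata and invoking a method by name via System.Type.InvokeMember:

```csharp
using System;
using System.Reflection;

class ReflectionExample
{
    static void Main()
    {
        // Interrogate type metadata at run time.
        Type t = typeof(string);
        Console.WriteLine(t.FullName); // System.String

        // Dynamically invoke an instance method by name.
        object result = t.InvokeMember(
            "ToUpper",
            BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
            null,            // default binder
            "hello",         // target instance
            new object[0]);  // no arguments
        Console.WriteLine(result); // HELLO
    }
}
```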
What's the difference between the Debug class and Trace class?
Documentation looks the same. Use Debug class for debug builds, use Trace class
for both debug and release builds.
What is serialization?
Serialization is the process of converting an object into a stream of bytes.
Deserialization is the opposite process of creating an object from a stream of bytes.
Serialization / Deserialization is mostly used to transport objects (e.g. during
remoting), or to persist
objects (e.g. to a file or database).
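A self-contained sketch using XmlSerializer (one of several serializers in the Framework); the Person type is invented for illustration:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Invented sample type; XmlSerializer requires it to be public
// with a parameterless constructor.
public class Person
{
    public string Name;
    public int Age;
}

class SerializationExample
{
    static void Main()
    {
        Person original = new Person { Name = "Alice", Age = 30 };

        // Serialization: object -> stream of bytes (XML in this case).
        XmlSerializer serializer = new XmlSerializer(typeof(Person));
        MemoryStream stream = new MemoryStream();
        serializer.Serialize(stream, original);

        // Deserialization: bytes -> a new, equivalent object.
        stream.Position = 0;
        Person copy = (Person)serializer.Deserialize(stream);
        Console.WriteLine(copy.Name + ", " + copy.Age); // Alice, 30
    }
}
```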
Note that the numeric label (1.3.1) is just a caspol invention to make the code
groups easy to manipulate from the command-line. The underlying runtime never
sees it.
How do I change the permission set for a code group?
Use caspol. If you are the machine administrator, you can operate at the 'machine'
level - which means not only that the changes you make become the default for the
machine, but also that users cannot change the permissions to be more permissive.
If you are a normal (non-admin) user you can still modify the permissions, but only
to make them more restrictive. For example, to allow intranet code to do what it
likes you might do this:
caspol -cg 1.2 FullTrust
Note that because this is more permissive than the default policy (on a standard
system), you should only do this at the machine level - doing it at the user level will
have no effect.
I can't be bothered with all this CAS stuff. Can I turn it off?
Yes, as long as you are an administrator. Just run: caspol -s off
Heap:
A portion of memory reserved for a program to use for the temporary storage of data
structures whose existence or size cannot be determined until the program is
running.
Un-Managed Code:
Code that is created without regard for the conventions and requirements of the
common language runtime. Unmanaged code executes in the common language
runtime environment with minimal services (for example, no garbage collection,
limited debugging, and so on).
A managed executable file contains MSIL or native code as well as metadata, which
enables the operating system to recognize common language runtime images. The
presence of metadata in the file along with the MSIL enables your code to describe
itself, which means that there is no need for type libraries or Interface Definition
Language (IDL). The runtime locates and extracts the metadata from the file as
needed during execution.
Value Type:
Value types are allocated on the stack, just like primitive types in VBScript, VB6,
and C/C++. Value types are not instantiated using new and go out of scope when
the function they are defined within returns.
Value types in the CLR are defined as types that derive from System.ValueType.
A data type that fully describes a value by specifying the sequence of bits that
constitutes the value's representation. Type information for a value type instance is
not stored with the instance at run time, but it is available in metadata. Value type
instances can be treated as objects using boxing.
Un-Boxing:
The conversion of an object instance to a value type.
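Boxing and un-boxing in one short sketch:

```csharp
using System;

class BoxingExample
{
    static void Main()
    {
        int i = 42;           // value type
        object boxed = i;     // boxing: the value is copied into a heap object
        int j = (int)boxed;   // un-boxing: explicit cast back to the value type
        Console.WriteLine(j); // 42
    }
}
```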
What is JIT and how does it work?
JIT is an acronym for "just-in-time," a phrase that describes an action that is taken
only when it becomes necessary, such as just-in-time compilation or just-in-time
object activation. In .NET, the JIT compiler converts a method's MSIL to native
machine code the first time the method is called, and the native code is then reused
for subsequent calls.
What is namespace used for loading assemblies at run time and name the
methods?
System.Reflection
Explain encapsulation ?
The implementation is hidden, the interface is exposed.
What data type should you use if you want an 8-bit value that's signed?
sbyte.
What happens when you encounter a continue statement inside the for
loop?
The code for the rest of the loop is ignored, the control is transferred back to the
beginning of the loop.
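A short sketch; the loop below skips the even values:

```csharp
using System;

class ContinueExample
{
    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            if (i % 2 == 0)
                continue;           // skip the rest of this iteration
            Console.Write(i + " "); // executed only for odd values: 1 3
        }
    }
}
```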
How can you sort the elements of the array in descending order?
By calling Sort() and then Reverse() methods.
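For example:

```csharp
using System;

class SortDescendingExample
{
    static void Main()
    {
        int[] values = { 3, 1, 4, 1, 5 };
        Array.Sort(values);    // ascending: 1 1 3 4 5
        Array.Reverse(values); // descending: 5 4 3 1 1
        Console.WriteLine(string.Join(" ", values)); // 5 4 3 1 1
    }
}
```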
What's the .NET datatype that allows the retrieval of data by a unique key?
Hashtable.
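A minimal sketch of retrieval by unique key (the sample keys are invented):

```csharp
using System;
using System.Collections;

class HashtableExample
{
    static void Main()
    {
        Hashtable states = new Hashtable();
        states["CA"] = "California"; // store by unique key
        states["NY"] = "New York";
        Console.WriteLine(states["CA"]); // California
    }
}
```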
Will finally block get executed if the exception had not occurred?
Yes.
What are three test cases you should go through in unit testing?
Positive test cases (correct data, correct output), negative test cases (broken or
missing data, proper handling), exception test
cases (exceptions are thrown and caught properly).
Can you declare the override method static while the original method is
non-static?
No, you can't, the signature of the virtual method must remain the same, only the
keyword virtual is changed to keyword override.
Can you prevent your class from being inherited and becoming a base class
for some other classes?
Yes, that's what keyword sealed in the class definition is for. The developer trying to
derive from your class will get a message: cannot inherit from Sealed class
WhateverBaseClassName. It's the same concept as final class in Java.
Can you allow class to be inherited, but prevent the method from being
over-ridden?
Yes. Simply declare the method without the virtual keyword, or mark an overriding
method as sealed override so that it cannot be overridden any further.
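A sketch of the sealed override form (the class names are invented); the compiler rejects any further override:

```csharp
using System;

class BaseControl
{
    public virtual void Render() { Console.WriteLine("Base"); }
}

class MiddleControl : BaseControl
{
    // "sealed" on an override stops any further overriding below this class.
    public sealed override void Render() { Console.WriteLine("Middle"); }
}

class LeafControl : MiddleControl
{
    // public override void Render() { }  // compile error: Render is sealed
}

class SealedExample
{
    static void Main()
    {
        new LeafControl().Render(); // Middle
    }
}
```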
Why can't you specify the accessibility modifier for methods inside the
interface?
They all must be public. Therefore, to prevent you from getting the false impression
that you have any freedom of choice, you are not allowed to specify any accessibility,
it's public by default.
Can you inherit multiple interfaces?
Yes, why not.
What's an interface?
It's an abstract class with public abstract methods all of which must be implemented
in the inherited classes.
What is a formatter?
A formatter is an object that is responsible for encoding and serializing data into
messages on one end, and deserializing and decoding messages into data on the
other end.
Vendor Neutrality
The .NET platform is not vendor neutral; it is tied to the Microsoft operating
systems. But neither is any of the J2EE implementations.
Many companies buy into J2EE believing that it will give them vendor neutrality. And,
in fact, this is a stated goal of Sun's vision:
A wide variety of J2EE product configurations and implementations, all of which meet
the requirements of this specification, are possible. A portable J2EE application will
function correctly when successfully deployed in any of these products. (ref : Java 2
Platform Enterprise Edition Specification, v1.3, page 2-7 available at
http://java.sun.com/j2ee/)
Overall Maturity
Given that the .NET platform has a three-year lead over J2EE, it should be no
surprise to learn that the .NET platform is far more mature than the J2EE platform.
We already have high-volume, highly reliable web sites using .NET technologies
(NASDAQ and Dell being among many examples).
The .NET platform eCollaboration model is, as I have discussed at length, based on
the UDDI and SOAP standards. These standards are widely supported by more than
100 companies; Microsoft, along with IBM and Ariba, is a leader in this area. Sun is
a member of the UDDI consortium and recognizes the importance of the UDDI
standards. In a recent press release, Sun's George Paolini, Vice President for Java
Community Development, said:
"Sun has always worked to help establish and support open, standards-based
technologies that facilitate the growth of network-based applications, and we see
UDDI as an important project to establish a registry framework for business-to-
business e-commerce."
But while Sun publicly says it believes in the UDDI standards, in reality, Sun has
done nothing whatsoever to incorporate any of the UDDI standards into J2EE.
Scalability
A typical comparison of J2EE and .NET systems with respect to cost appeared here
as a table.
Framework Support
The .NET platform includes such an eCommerce framework called Commerce Server.
At this point, there is no equivalent vendor-neutral framework in the J2EE space.
With J2EE, you should assume that you will be building your new eCommerce
solution from scratch. Moreover, no matter what J2EE vendor you choose, if you
expect a component framework that will allow you to quickly field complete
e-business applications, you are in for a frustrating experience.
Language
In the language arena, the choice is about as simple as it gets. J2EE supports Java,
and only Java. It will not support any other language in the foreseeable future. The
.NET platform supports every language except Java (although it does support a
language that is syntactically and functionally equivalent to Java, C#). In fact, given
the importance of the .NET platform as a language independent vehicle, it is likely
that any language that comes out in the near future will include support for the .NET
platform.
Some companies are under the impression that J2EE supports other languages.
Although both IBM's WebSphere and BEA's WebLogic support other languages,
neither does so through its J2EE technology. There are only two official ways in the
J2EE platform to access other languages: one through the Java Native Interface and
the other through CORBA interoperability. Sun recommends the latter approach, as
Sun's Distinguished Scientist and Java Architect Rick Cattell said in a recent
interview.
Portability
The reason that operating system portability is a possibility with J2EE is not so much
because of any inherent portability of J2EE, as it is that most of the J2EE vendors
support multiple operating systems. Therefore as long as one sticks with a given
J2EE vendor and a given database vendor, moving from one operating system to
another should be possible. This is probably the single most important benefit in
favor of J2EE over the .NET platform, which is limited to the Windows operating
system. It is worth noting, however, that Microsoft has submitted the specifications
for C# and a subset of the .NET Framework (called the Common Language
Infrastructure) to ECMA, the group that standardizes JavaScript.
J2EE offers an acceptable solution to ISVs when the product must be marketed to
non-Windows customers, particularly when the J2EE platform itself can be bundled
with the ISV's product as an integrated offering.
If the primary customer base for the ISV is Windows customers, then the .NET
platform should be chosen. It will provide much better performance at a much lower
cost.
The major difference being that with Java, it is the presentation tier programmer that
determines the ultimate HTML that will be delivered to the client, and with .NET, it is
a Visual Studio.NET control.
This Java approach has three problems. First, it requires a lot of code on the
presentation tier, since every possible thin client system requires a different code
path. Second, it is very difficult to test the code with every possible thin client
system. Third, it is very difficult to add new thin clients to an existing application,
since to do so involves searching through, and modifying a tremendous amount of
presentation tier logic.
The .NET Framework approach is to write device-independent code that interacts
with visual controls. It is the control, not the programmer, that is responsible for
determining what HTML to deliver, based on the capabilities of the client device. In
the .NET Framework model, one can forget that such a thing as HTML even exists.
Conclusion
Sun's J2EE vision is based on a family of specifications that can be implemented by
many vendors. It is open in the sense that any company can license and implement
the technology, but closed in the sense that it is controlled by a single vendor, and it
is a self-contained architectural island with very limited ability to interact outside of
itself.
One of J2EE's major disadvantages is that the choice of the platform dictates the use
of a single programming language, and a programming language that is not well
suited for most businesses. One of J2EE's major advantages is that most of the J2EE
vendors do offer operating system portability.
* The ability to scale up is much greater, with the proven ability to support at least
ten times the number of clients any J2EE platform has shown itself able to support.
Assemblies
Defines the concept of assemblies, which are collections of types and resources that
form logical units of functionality. Assemblies are the fundamental units of
deployment, version control, reuse, activation scoping, and security permissions.
Application Domains
Explains how to use application domains to provide isolation between applications.
Runtime Hosts
Describes the runtime hosts supported by the .NET Framework, including ASP.NET,
Internet Explorer, and shell executables.
Cross-Language Interoperability
Explains how managed objects created in different programming languages can
interact with one another.
Because the common language runtime supplies a JIT compiler for each supported
CPU architecture, developers can write a set of MSIL that can be JIT-compiled and
run on computers with different architectures. However, your managed code will run
only on a specific operating system if it calls platform-specific native APIs, or a
platform-specific class library.
JIT compilation takes into account the fact that some code might never get called
during execution. Rather than using time and memory to convert all the MSIL in a
portable executable (PE) file to native code, it converts the MSIL as needed during
execution and stores the resulting native code so that it is accessible for subsequent
calls. The loader creates and attaches a stub to each of a type's methods when the
type is loaded. On the initial call to the method, the stub passes control to the JIT
compiler, which converts the MSIL for that method into native code and modifies the
stub to direct execution to the location of the native code. Subsequent calls of the
JIT-compiled method proceed directly to the native code that was previously
generated, reducing the time it takes to JIT-compile and run the code.
What is meant by assembly, global assembly cache (GAC), and metadata?
Assembly: An assembly is the primary building block of a .NET-based application. It
is a collection of functionality that is built, versioned, and deployed as a single
implementation unit (as one or more files). All managed types and resources are
marked either as accessible only within their implementation unit, or as accessible
by code outside that unit. Assemblies overcome the problem of 'DLL Hell'. The .NET
Framework uses assemblies as the fundamental unit for several purposes:
• Security
• Type Identity
• Reference Scope
• Versioning
• Deployment
Global Assembly Cache: Assemblies can be shared among multiple applications on a
machine by registering them in the global assembly cache (GAC). The GAC is a
machine-wide cache of assemblies maintained by the .NET Framework. We can
register an assembly in the global assembly cache with the gacutil command. You
can also navigate to the GAC directory, C:\winnt\Assembly, in Explorer; in the Tools
menu, select Cache Properties, and in the window displayed you can set the memory
limit in MB used by the GAC.
Metadata: Assemblies have manifests. The manifest contains metadata about the
module/assembly itself, as well as detailed metadata about the other assemblies and
modules it references (exports). It is the assembly manifest that differentiates an
assembly from a module.
GUIDs can be created in a number of ways, but usually they are a combination of a
few unique settings based on specific point in time (e.g., an IP address, network MAC
address, clock date/time, etc.).
Describe the difference between inline and code-behind. Which is best in a
loosely coupled solution?
ASP.NET supports two modes of page development: page logic code that is written
inside <script runat="server"> blocks within an .aspx file and dynamically compiled
the first time the page is requested on the server, and page logic code that is
written in an external class that is compiled prior to deployment on a server and
linked "behind" the .aspx file at run time. Code-behind is the better fit for a loosely
coupled solution, since the page logic is separated from the presentation markup.
What is an assembly?
Assemblies are the building blocks of .NET Framework applications; they form the
fundamental unit of deployment, version control, reuse, activation scoping, and
security permissions. An assembly is a collection of types and resources that are built
to work together and form a logical unit of functionality. An assembly provides the
common language runtime with the information it needs to be aware of type
implementations. To the runtime, a type does not exist outside the context of an
assembly.
What is a manifest?
It is the part of the metadata that describes the assembly.
What is metadata?
Metadata is machine-readable information about a resource, or "data about data."
Such information might include details on content, format, size, or other
characteristics of a data source. In .NET, metadata includes type definitions, version
information, external assembly references, and other standardized information.
Static assemblies
These are the .NET PE files that you create at compile time.
Dynamic assemblies
These are PE-formatted, in-memory assemblies that you dynamically create at
runtime using the classes in the System.Reflection.Emit namespace.
Private assemblies
These are static assemblies used by a specific application.
In .NET, an assembly is the smallest unit to which you can associate a version
number.
Boxing: Boxing is an implicit conversion from a value type to the type object.
Eg: Consider the following declaration of a value-type variable:
int i = 123;
object o = (object) i; // Boxing conversion
UnBoxing: Unboxing is an explicit conversion from the type object to a value type.
Eg:
int i = 123; // A value type
object box = i; // Boxing
int j = (int)box; // Unboxing
Enum: An enum type is a distinct type that declares a set of named constants. Enums
are strongly typed constants: unique types that allow you to give symbolic names to
integral values. Enums are value types, which means they contain their own value,
cannot inherit or be inherited from, and assignment copies the value of one enum to
another.
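A small sketch of these properties (the enum itself is hypothetical):

```csharp
using System;

enum Status
{
    Stopped = 0,   // each named constant maps to an integral value
    Running = 1,
    Paused = 2
}

class Program
{
    static void Main()
    {
        Status a = Status.Running;
        Status b = a;              // value type: the value is copied
        b = Status.Paused;         // changing the copy...

        Console.WriteLine(a);      // Running -- ...leaves the original untouched
        Console.WriteLine((int)b); // 2 -- the underlying integral value
    }
}
```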
What is a namespace?
A namespace is a logical naming scheme for grouping related types. Class types that
logically belong together can be put into a common namespace. Namespaces prevent
naming collisions and provide scoping. They are imported with "using" in C# or
"Imports" in Visual Basic. It seems as if these directives specify a particular
assembly, but they don't: a namespace can span multiple assemblies, and an
assembly can define multiple namespaces. When the compiler needs the definition of
a class type, it tracks through each of the imported namespaces to the type name
and searches each referenced assembly until it is found.
Namespaces can be nested. This is very similar to packages in Java as far as scoping
is concerned.
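A sketch of scoping and the using directive (the namespace and class names are
invented for illustration):

```csharp
using System;

namespace Acme.Billing
{
    public class Invoice
    {
        public decimal Total = 42m;
    }
}

namespace Acme.App
{
    using Acme.Billing; // imports a namespace, not an assembly

    class Program
    {
        static void Main()
        {
            Invoice invoice = new Invoice();  // resolved through the using directive
            Console.WriteLine(invoice.Total); // 42
        }
    }
}
```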
What is MSIL?
When compiling to managed code, the compiler translates your source code into
Microsoft intermediate language (MSIL), which is a CPU-independent set of
instructions that can be efficiently converted to native code. MSIL includes
instructions for loading, storing, initializing, and calling methods on objects, as well
as instructions for arithmetic and logical operations, control flow, direct memory
access, exception handling, and other operations. Before code can be run, MSIL must
be converted to CPU-specific code, usually by a just-in-time (JIT) compiler. Because
the common language runtime supplies one or more JIT compilers for each computer
architecture it supports, the same set of MSIL can be JIT-compiled and run on any
supported architecture.
When a compiler produces MSIL, it also produces metadata. Metadata describes the
types in your code, including the definition of each type, the signatures of each
type's members, the members that your code references, and other data that the
runtime uses at execution time. The MSIL and metadata are contained in a portable
executable (PE) file that is based on and extends the published Microsoft PE and
common object file format (COFF) used historically for executable content. This file
format, which accommodates MSIL or native code as well as metadata, enables the
operating system to recognize common language runtime images. The presence of
metadata in the file along with the MSIL enables your code to describe itself, which
means that there is no need for type libraries or Interface Definition Language (IDL).
The runtime locates and extracts the metadata from the file as needed during
execution.
In .NET we have objects called Trace Listeners. A listener is an object that receives
the trace output and outputs it somewhere; that somewhere could be a window in
your development environment, a file on your hard drive, a Windows Event log, a
SQL Server or Oracle database, or any other customized data store.
All Trace Listeners support the following methods. Each method behaves the same
across listeners; only the target medium for the tracing output differs.
• Fail: Outputs the specified text with the call stack.
• Write: Outputs the specified text.
• WriteLine: Outputs the specified text and a carriage return.
• Flush: Flushes the output buffer to the target medium.
• Close: Closes the output stream so that it no longer receives tracing/debugging
output.
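As a sketch, the same Trace output can be routed to several listeners at once (the
file name is illustrative; build with the TRACE symbol defined, e.g. /d:TRACE, for the
calls to be compiled in):

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Two targets for the same trace output: the console and a file.
        Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
        Trace.Listeners.Add(new TextWriterTraceListener("trace.log"));

        Trace.WriteLine("Application started"); // sent to every registered listener
        Trace.Flush();                          // flush buffered output to the targets
        Trace.Close();                          // release the output streams
    }
}
```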
How do you set the debug mode?
Debug mode for ASP.NET applications: to put an ASP.NET application in debugging
mode, edit the application's web.config and set the "debug" attribute in the
<compilation> section to "true", as shown below:
<configuration>
  <system.web>
    <compilation defaultLanguage="vb" debug="true" />
  </system.web>
</configuration>
Both XmlReader and XmlWriter are abstract base classes, which define the
functionality that all derived classes must support.
Code that runs outside the CLR is referred to as "unmanaged code." COM
components, ActiveX components, and Win32 API functions are examples of
unmanaged code.
What is encapsulation?
Encapsulation is the ability to hide the internal workings of an object's behavior and
its data. For instance, say you have an object named Bike and this object has a
method named start(). When you create an instance of a Bike object and call its
start() method, you are not worried about what happens to accomplish this; you just
want to make sure the state of the bike is changed to 'running' afterwards. This kind
of behavior hiding is encapsulation, and it makes programming much easier.
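The Bike example might be sketched like this (the names follow the paragraph
above; the private 'state' field is an assumed detail):

```csharp
using System;

class Bike
{
    // Hidden internal state: callers cannot touch this directly.
    private string state = "stopped";

    public void Start()
    {
        // Whatever happens in here is invisible to the caller.
        state = "running";
    }

    // A read-only view of the hidden field.
    public string State
    {
        get { return state; }
    }
}

class Program
{
    static void Main()
    {
        Bike bike = new Bike();
        bike.Start();
        Console.WriteLine(bike.State); // running
    }
}
```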
sealed class Planet
{
}
class Moon : Planet
{
// Not allowed: the base class is sealed
}
What is a GUID, why do we need to use it, and in what conditions? How is it
created?
A GUID is a 128-bit integer (16 bytes) that can be used across all computers and
networks wherever a unique identifier is required. Such an identifier has a very low
probability of being duplicated. Visual Studio .NET IDE has a utility under the tools
menu to generate GUIDs.
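GUIDs can also be generated in code, which is a quick way to see the 128-bit
(16-byte) structure:

```csharp
using System;

class Program
{
    static void Main()
    {
        Guid id = Guid.NewGuid(); // a freshly generated unique identifier

        Console.WriteLine(id);                      // e.g. 936da01f-9abd-4d9d-80c7-02af85c822a8
        Console.WriteLine(id.ToByteArray().Length); // 16 -- a GUID is 128 bits
    }
}
```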
Managed code is compiled for the .NET run-time environment. It runs in the
Common Language Runtime (CLR), which is the heart of the .NET Framework. The
CLR provides services such as security,
memory management, and cross-language integration. Managed applications written
to take advantage of the features of the CLR perform more efficiently and safely, and
take better advantage of developers' existing expertise in languages that support the
.NET Framework.
Unmanaged code includes all code written before the .NET Framework was
introduced—this includes code written to use COM, native Win32, and Visual Basic 6.
Because it does not run inside the .NET environment, unmanaged code cannot make
use of any .NET managed facilities.
using System;

namespace SampleMultiCastDelegate
{
    class MultiCast
    {
        public delegate string strMultiCast(string s);
    }

    class Program
    {
        static string Hello(string s) { return "Hello " + s; }
        static string Bye(string s) { return "Bye " + s; }

        static void Main()
        {
            // Combine two methods into one multicast delegate;
            // both are invoked, and the last return value is kept.
            MultiCast.strMultiCast myDelegate =
                new MultiCast.strMultiCast(Hello);
            myDelegate += new MultiCast.strMultiCast(Bye);

            Console.WriteLine(myDelegate("World")); // Bye World
        }
    }
}
The PID (Process ID) is a unique number for each item in the Image Name list on the
Processes tab. How do you get the PID to appear? In Task Manager, select the View
menu, then select Columns and check PID (Process Identifier).
All assemblies that need to be shared across applications must be placed in the
global assembly cache. However, it is not necessary to install assemblies into the
global assembly cache to make them accessible to COM interop or unmanaged code.
There are several ways to deploy an assembly into the global assembly cache:
· Use an installer designed to work with the global assembly cache. This is the
preferred option for installing assemblies into the global assembly cache.
· Use a developer tool called the Global Assembly Cache tool (Gacutil.exe), provided
by the .NET Framework SDK.
· Use Windows Explorer to drag assemblies into the cache.
GAC solves the problem of DLL Hell and DLL versioning. Unlike earlier situations,
GAC can hold two assemblies of the same name but different version. This ensures
that the applications which access a particular assembly continue to access the same
assembly even if another version of that assembly is installed on that machine.
Identifier is the name of the interface, and InterfaceBody refers to the abstract
methods and static final variables that make up the interface. Because it is assumed
that all the methods in an interface are abstract, it isn't necessary to use the
abstract keyword. But what does it mean to implement an interface? The interface
acts as a contract or promise: if a class implements an interface, then it must define
the properties and methods of the interface. This is enforced by the compiler.
What is the difference between XML Web Services using ASMX and .NET
Remoting using SOAP?
ASP.NET Web services and .NET Remoting provide a full suite of design options for
cross-process and cross-platform communication in distributed applications. In
general, ASP.NET Web services provide the highest levels of interoperability, with
full support for WSDL and SOAP over HTTP, while .NET Remoting is designed for
common language runtime type-system fidelity and supports additional data formats
and communication channels. Hence, for cross-platform communication, Web
services are the choice, because .NET Remoting requires the .NET Framework, which
may or may not be present on the other platform.
Security
Since ASP.NET Web services rely on HTTP, they integrate with the standard Internet
security infrastructure. ASP.NET leverages the security features available with IIS to
provide strong support for standard HTTP authentication schemes including Basic,
Digest, digital certificates, and even Microsoft® .NET Passport. (You can also use
Windows Integrated authentication, but only for clients in a trusted domain.) One
advantage of using the available HTTP authentication schemes is that no code
change is required in a Web service; IIS performs authentication before the ASP.NET
Web services are called. ASP.NET also provides support for .NET Passport-based
authentication and other custom authentication schemes. ASP.NET supports access
control based on target URLs, and by integrating with the .NET code access security
(CAS) infrastructure. SSL can be used to ensure private communication over the
wire.
Although these standard transport-level techniques to secure Web services are quite
effective, they only go so far. In complex scenarios involving multiple Web services in
different trust domains, you have to build custom ad hoc solutions. Microsoft and
others are working on a set of security specifications that build on the extensibility of
SOAP messages to offer message-level security capabilities. One of these is the XML
Web Services Security Language (WS-Security), which defines a framework for
message-level credential transfer, message integrity, and message confidentiality.
As noted in the previous section, the .NET Remoting plumbing does not secure cross-
process invocations in the general case. A .NET Remoting endpoint hosted in IIS with
ASP.NET can leverage all the same security features available to ASP.NET Web
services, including support for secure communication over the wire using SSL. If you
are using the TCP channel or the HTTP channel hosted in processes other than
aspnet_wp.exe, you have to implement authentication, authorization and privacy
mechanisms yourself.
One additional security concern is the ability to execute code from a semi-trusted
environment without having to change the default security policy. ASP.NET Web
Services client proxies work in these environments, but .NET Remoting proxies do
not. In order to use a .NET Remoting proxy from a semi-trusted environment, you
need a special serialization permission that is not given to code loaded from your
intranet or the Internet by default. If you want to use a .NET Remoting client from
within a semi-trusted environment, you have to alter the default security policy for
code loaded from those zones. In situations where you are connecting to systems
from clients running in a sandbox—like a downloaded Windows Forms application, for
instance—ASP.NET Web Services are a simpler choice because security policy
changes are not required.
Early binding implies that the class of the called object is known at compile-time;
late-binding implies that the class is not known until run-time, such as a call through
an interface or via Reflection.
Early binding is the preferred method. It is the best performer because your
application binds directly to the address of the function being called and there is no
extra overhead in doing a run-time lookup. In terms of overall execution speed, it is
at least twice as fast as late binding.
Early binding also provides type safety. When you have a reference set to the
component's type library, Visual Basic provides IntelliSense support to help you code
each function correctly. Visual Basic also warns you if the data type of a parameter or
return value is incorrect, saving a lot of time when writing and debugging code.
Late binding is still useful in situations where the exact interface of an object is not
known at design-time. If your application seeks to talk with multiple unknown
servers or needs to invoke functions by name (using the Visual Basic 6.0 CallByName
function for example) then you need to use late binding. Late binding is also useful
to work around compatibility problems between multiple versions of a component
that has improperly modified or adapted its interface between versions.
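A sketch contrasting the two forms of binding (the Greeter class is hypothetical):

```csharp
using System;
using System.Reflection;

class Greeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Program
{
    static void Main()
    {
        // Early binding: the compiler resolves Greet at compile time,
        // binding directly to the method.
        Greeter greeter = new Greeter();
        Console.WriteLine(greeter.Greet("early")); // Hello, early

        // Late binding: the method is looked up by name at run time
        // via reflection, with the extra lookup cost that implies.
        object instance = Activator.CreateInstance(typeof(Greeter));
        MethodInfo method = instance.GetType().GetMethod("Greet");
        object result = method.Invoke(instance, new object[] { "late" });
        Console.WriteLine(result); // Hello, late
    }
}
```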
Strong names are implemented using standard public key cryptography. In general,
the process works as follows: The author of an assembly generates a key pair (or
uses an existing one), signs the file containing the manifest with the private key, and
makes the public key available to callers. When references are made to the
assembly, the caller records the public key corresponding to the private key used to
generate the strong name.
Weak-named assemblies are not suitable to be added to the GAC and shared; an
assembly must be strong-named first. Strong naming prevents tampering and
enables assemblies to be placed in the GAC alongside other assemblies of the same
name.
How does the generational garbage collector in the .NET CLR manage object
lifetime? What is non-deterministic finalization?
The hugely simplistic version is that every time it garbage-collects, it starts by
assuming everything to be garbage, then goes through and builds a list of everything
reachable. Those become not-garbage, everything else doesn't, and gets thrown
away. What makes it generational is that every time an object goes through this
process and survives, it is noted as being a member of an older generation (up to 2,
right now). When the garbage-collector is trying to free memory, it starts with the
lowest generation (0) and only works up to higher ones if it can't free up enough
space, on the grounds that shorter-lived objects are more likely to have been freed
than longer-lived ones.
Non-deterministic finalization implies that the destructor (if any) of an object will not
necessarily be run (nor its memory cleaned up, but that's a relatively minor issue)
immediately upon its going out of scope. Instead, it will wait until first the garbage
collector gets around to finding it, and then the finalisation queue empties down to
it; and if the process ends before this happens, it may not be finalised at all.
(Although the operating system will usually clean up any process-external resources
left open - note the usually there, especially as the exceptions tend to hurt a lot.)
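A minimal sketch of generational promotion, using the public GC API:

```csharp
using System;

class Program
{
    static void Main()
    {
        object survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // 0 -- newly allocated

        GC.Collect(); // survivor is still reachable, so it is promoted (typically to 1)
        Console.WriteLine(GC.GetGeneration(survivor));

        Console.WriteLine(GC.MaxGeneration); // the oldest generation (2 on the desktop CLR)
        GC.KeepAlive(survivor);              // keep the reference live to the end
    }
}
```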
What are PDBs? Where must they be located for debugging to work?
A program database (PDB) file holds debugging and project state information that
allows incremental linking of a debug configuration of your program. There are
several different types of symbolic debugging information; the default type for the
Microsoft compilers is the PDB file. The compiler setting for creating this file is /Zi,
or /ZI for C/C++ (which creates a PDB file with additional information that enables a
feature called "Edit and Continue"), or /debug for a Visual Basic/C#/JScript .NET
program.
A PDB file is a separate file, placed by default in the Debug project subdirectory,
that has the same name as the executable file with the extension .pdb. Note that the
Visual C++ compiler by default creates an additional PDB file, called VC60.pdb for
Visual C++ 6.0 and VC70.pdb for Visual C++ 7.0. The compiler creates this file
during compilation of the source code, when it isn't yet aware of the final name of
the executable. The linker can merge this temporary PDB file into the main one if
you tell it to, but it won't do so by default. The PDB file can be used to display a
detailed stack trace with source files and line numbers.
What is the difference between a Debug and Release build? Is there a significant
speed difference? Why or why not?
The Debug build is the program compiled with full symbolic debug information and
no optimization. The Release build is the program compiled employing optimization
and contains no symbolic debug information. These settings can be changed as
needed from the project configuration properties. The Release build runs faster since
it does not carry debug symbols and is optimized.
Explain the use of virtual, sealed, override, and abstract.
Abstract: The keyword can be applied to a class or a method.
1. Class: Using the abstract keyword on a class makes it an abstract class, which
means it cannot be instantiated. It is not necessary for every method within an
abstract class to be abstract; an abstract class can have concrete methods.
2. Method: If we make a method abstract, we do not provide an implementation of
the method in the class, but the derived class must implement/override it.
Sealed: It can be applied to classes and methods. It stops the type from further
derivation, i.e. no one can derive a class from a sealed class: a sealed class cannot
be inherited, and a sealed class cannot be an abstract class. A compile-time error is
thrown if you try to specify a sealed class as a base class.
When an instance method declaration includes a sealed modifier, that method is said
to be a sealed method. If an instance method declaration includes the sealed
modifier, it must also include the override modifier. Use of the sealed modifier
prevents a derived class from further overriding the method. For example: sealed
override public void Sample() { Console.WriteLine("Sealed Method"); }
Virtual & Override: The virtual and override keywords provide runtime
polymorphism. A base class can mark some of its methods as virtual, which gives a
derived class the chance to override the base class implementation using the
override keyword.
class Shape
{
    public virtual void Display()
    {
        Console.WriteLine("Base");
    }
}
class Rectangle : Shape
{
    public override void Display()
    {
        Console.WriteLine("Derived");
    }
}
PublicKeyToken: Each assembly can have a public key embedded in its manifest that
identifies the developer. This ensures that once the assembly ships, no one can
modify the code or other resources contained in the assembly.
Public: Allows class, methods, fields to be accessible from anywhere i.e. within and
outside an assembly.
Private: When applied to field and method allows to be accessible within a class.
Protected: Similar to private but can be accessed by members of derived class also.
Internal: They are public within the assembly i.e. they can be accessed by anyone
within an assembly but outside assembly they are not visible.
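A short sketch of the four levels on one hypothetical class:

```csharp
using System;

public class Account
{
    public string Owner;          // visible everywhere, inside or outside the assembly
    private decimal balance;      // visible only within Account itself
    protected string branchCode;  // visible in Account and in derived classes
    internal int auditId;         // visible anywhere inside this assembly only

    public decimal Balance
    {
        get { return balance; }   // public read access to the private field
    }
}

public class SavingsAccount : Account
{
    public string Branch()
    {
        return branchCode;        // allowed: protected member, derived class
    }
}

class Program
{
    static void Main()
    {
        Account account = new Account { Owner = "Ada" };
        Console.WriteLine(account.Owner);   // Ada
        Console.WriteLine(account.Balance); // 0
    }
}
```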
1. Combine fragments from different documents without any naming conflicts. (See
example below.)
2. Write reusable code modules that can be invoked for specific elements and
attributes. Universally unique names guarantee that such modules are invoked only
for the correct elements and attributes.
3. Define elements and attributes that can be reused in other schemas or instance
documents without fear of name collisions. For example, you might use XHTML
elements in a parts catalog to provide part descriptions, or you might use the nil
attribute defined in XML Schemas to indicate a missing value.
The manifest is part of the metadata, fully called the "manifest metadata tables". It
contains the details of the references the assembly needs to any other external
assembly or type, whether a custom assembly or a standard System namespace.
For an assembly that can exist independently and be used in the .NET world, both
things (metadata with a manifest) are mandatory, so that it is a fully described
assembly and can be ported anywhere without any system dependency. Essentially,
the .NET Framework can read all assembly-related information from the assembly
itself at runtime.
But .NET modules, which cannot be used independently until they are packaged as
part of an assembly, don't contain a manifest; their complete structure is defined by
their respective metadata. Ultimately, .NET modules use the manifest metadata
tables of the parent assembly that contains them.
a. Check whether there is any free space left in the declared array.
b. If yes, add the new item and increase the count by 1.
c. If not, copy the whole contents to a temporary array of the last maximum size.
d. Create a new array of size (last array size + increase value).
e. Copy the values back from the temporary array and reference this new array as
the original array.
f. Update the count and any related bookkeeping accordingly.
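The steps above describe the grow-on-demand strategy behind list types such as
ArrayList; a sketch (not the actual ArrayList source; names are illustrative) might
look like:

```csharp
using System;

class GrowableList
{
    private object[] items = new object[4]; // initial declared size
    private int count;

    public int Count { get { return count; } }
    public int Capacity { get { return items.Length; } }

    public void Add(object item)
    {
        // a. check whether there is free space in the current array
        if (count == items.Length)
        {
            // c./d. allocate a larger array (here the size is doubled)...
            object[] larger = new object[items.Length * 2];
            Array.Copy(items, larger, count); // ...and copy the old values across
            items = larger;                   // e. reference the new array as the store
        }
        items[count++] = item; // b./f. add the item and increase the count
    }
}

class Program
{
    static void Main()
    {
        GrowableList list = new GrowableList();
        for (int i = 0; i < 5; i++) list.Add(i);

        Console.WriteLine(list.Count);    // 5
        Console.WriteLine(list.Capacity); // 8 -- grown once from 4
    }
}
```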
Disabled: - There is no transaction. COM+ does not provide transaction support for
this component.
Not Supported: - Component does not support transactions. Hence even if the
calling component in the hierarchy is transaction enabled this component will not
participate in the transaction.
Required: - Components with this attribute require a transaction, i.e. either the
calling component must have a transaction in place, or else this component will start
a new transaction.
Requires New: - Components enabled with this transaction type always require a
new transaction. Components with the Requires New transaction type instantiate a
new transaction for themselves every time.
Create a Runtime Callable Wrapper (RCW) out of the COM component, reference the
metadata assembly DLL in the project, and use its methods and properties. The RCW
can be created using the Type Library Importer utility or through VS.NET. Using
VS.NET, add a reference through the COM tab to select the desired DLL. VS.NET
automatically generates a metadata assembly, putting the classes provided by that
component into a namespace with the same name as the COM DLL (XYZRCW.dll).
.NET components can be invoked by unmanaged code through a COM Callable
Wrapper (CCW) in COM/.NET interop. The unmanaged code talks to this proxy, which
translates the calls into the managed environment. We can use COM components in
.NET through COM/.NET interoperability. When unmanaged code calls a managed
component, behind the scenes .NET creates a proxy called the COM Callable Wrapper
(CCW), which accepts commands from a COM client and forwards them to the .NET
component. There are two prerequisites for creating a .NET component to be used in
unmanaged code:
1. The .NET class should implement its functionality through an interface. First
define the interface in code, then have the class implement it. This way, it prevents
breaking of the COM client if/when the .NET component changes.
2. Secondly, a .NET class that is to be visible to COM clients must be declared public.
The tools that create the CCW only define types based
on public classes. The same rule applies to methods, properties, and events that will
be used by COM clients.
Implementation Steps -
1. Generate a type library for the .NET component, using the Type Library Exporter (tlbexp.exe) utility. A type library
is the COM equivalent of the metadata contained within
a .NET assembly. Type libraries are generally contained in files with the extension
.tlb. A type library contains the necessary information to allow a COM client to
determine which classes are located in a particular server, as well as the methods,
properties, and events supported by those classes.
2. Secondly, use the Assembly Registration tool (regasm.exe) with the /tlb option to
create the type library and register it.
3. Lastly install .NET assembly in GAC, so it is available as shared assembly.
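As a sketch of the two prerequisites above, a COM-visible .NET class might look like this. The interface, class name, and GUIDs are hypothetical, chosen only for illustration:

```csharp
using System;
using System.Runtime.InteropServices;

// Prerequisite 1: expose functionality through an interface.
[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555555")] // illustrative GUID
public interface ICalculator
{
    int Add(int x, int y);
}

// Prerequisite 2: the class visible to COM clients must be public.
[ComVisible(true)]
[Guid("66666666-7777-8888-9999-000000000000")] // illustrative GUID
[ClassInterface(ClassInterfaceType.None)]      // clients bind via ICalculator
public class Calculator : ICalculator
{
    public int Add(int x, int y) { return x + y; }
}
```

After building, the implementation steps above apply: tlbexp.exe generates the type library, regasm registers it, and the assembly can be installed into the GAC.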
What benefit do you get from using a Primary Interop Assembly (PIA)?
PIAs are important because they provide unique type identity. The PIA distinguishes
the official type definitions from counterfeit definitions provided by other interop
assemblies. Having a single type identity ensures type compatibility between
applications that share the types defined in the PIA. Because the PIA is signed by its
publisher and labeled with the PrimaryInteropAssembly attribute, it can be
differentiated from other interop assemblies that define the same types.
ADO.NET
When sending and retrieving a DataSet from an XML Web service, the DiffGram
format is implicitly used. Additionally, when loading the contents of a DataSet from
XML using the ReadXml method, or when writing the contents of a DataSet in XML
using the WriteXml method, you can select that the contents be read or written as a
DiffGram.
The DiffGram format is divided into three sections: the current data, the original (or
"before") data, and an errors section, as shown in the following example.
<?xml version="1.0"?>
<diffgr:diffgram
xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<DataInstance>
</DataInstance>
<diffgr:before>
</diffgr:before>
<diffgr:errors>
</diffgr:errors>
</diffgr:diffgram>
<DataInstance>
The name of this element, DataInstance, is used for explanation purposes in this
documentation. A DataInstance element represents a DataSet or a row of a
DataTable. Instead of DataInstance, the element would contain the name of the
DataSet or DataTable. This block of the DiffGram format contains the current data,
whether it has been modified or not. An element, or row, that has been modified is
identified with the diffgr:hasChanges annotation.
<diffgr:before>
This block of the DiffGram format contains the original version of a row. Elements in
this block are matched to elements in the DataInstance block using the diffgr:id
annotation.
<diffgr:errors>
This block of the DiffGram format contains error information for a particular row in
the DataInstance block. Elements in this block are matched to elements in the
DataInstance block using the diffgr:id annotation.
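Producing a DiffGram in code is straightforward; a minimal sketch, using an illustrative Employees table, serializes both the current and the "before" values:

```csharp
using System.Data;
using System.IO;

class DiffGramDemo
{
    public static string ToDiffGram()
    {
        DataSet ds = new DataSet("MyDataSet");
        DataTable t = ds.Tables.Add("Employees");
        t.Columns.Add("LastName", typeof(string));
        t.Rows.Add("Fuller");
        ds.AcceptChanges();                    // rows become Unchanged
        t.Rows[0]["LastName"] = "Fuller III";  // row now has current + original versions

        // Write both the current data and the <diffgr:before> block.
        StringWriter sw = new StringWriter();
        ds.WriteXml(sw, XmlWriteMode.DiffGram);
        return sw.ToString();
    }
}
```

The modified row is emitted with the diffgr:hasChanges annotation, and its original values appear in the diffgr:before section, matched by diffgr:id.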
Which method do you invoke on the DataAdapter control to load your generated
dataset with data?
You have to use the Fill method of the DataAdapter control and pass the dataset
object as an argument to load the generated data.
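A minimal sketch of the Fill call; the connection string and query are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

class FillDemo
{
    static void Main()
    {
        string connStr = "...";  // placeholder connection string
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT EmployeeID, LastName FROM Employees", conn);
            DataSet ds = new DataSet();
            da.Fill(ds, "Employees"); // opens and closes the connection as needed
        }
    }
}
```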
Default:
The row contains the default version for the current DataRowState. For a DataRowState value
of Added, Modified, or Unchanged, the default version is Current. For a DataRowState of
Deleted, the version is Original. For a DataRowState value of Detached, the version is
Proposed.
Original:
The row contains its original values.
Proposed:
The proposed values for the row. This row version exists during an edit operation on
a row, or for a row that is not part of a DataRowCollection
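A row's versions can be read explicitly through the DataRow indexer; a small sketch with an illustrative table:

```csharp
using System.Data;

class RowVersionDemo
{
    public static string OriginalValue()
    {
        DataTable t = new DataTable("Employees");
        t.Columns.Add("LastName", typeof(string));
        t.Rows.Add("Fuller");
        t.AcceptChanges();                    // "Fuller" becomes the Original value
        t.Rows[0]["LastName"] = "Fuller III"; // the Current value changes

        // Ask for the row's Original version explicitly.
        return (string)t.Rows[0]["LastName", DataRowVersion.Original];
    }
}
```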
Atomicity
A transaction is a unit of work in which a series of operations occur between the
BEGIN TRANSACTION and END TRANSACTION statements of an application. A
transaction executes exactly once and is atomic — all the work is done or none of it
is.
Operations associated with a transaction usually share a common intent and are
interdependent. By performing only a subset of these operations, the system could
compromise the overall intent of the transaction. Atomicity eliminates the chance of
processing a subset of operations.
Consistency
A transaction is a unit of integrity because it preserves the consistency of data,
transforming one consistent state of data into another consistent state of data.
Isolation
A transaction is a unit of isolation — allowing concurrent transactions to behave as
though each were the only transaction running in the system.
Transactions attain the highest level of isolation when they are serializable. At this
level, the results obtained from a set of concurrent transactions are identical to the
results obtained by running each transaction serially. Because a high degree of
isolation can limit the number of concurrent transactions, some applications reduce
the isolation level in exchange for better throughput.
Durability
A transaction is also a unit of recovery. If a transaction succeeds, the system
guarantees that its updates will persist, even if the computer crashes immediately
after the commit. Specialized logging allows the system's restart procedure to
complete unfinished operations, making the transaction durable.
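In ADO.NET these properties are exposed through transaction objects such as SqlTransaction; a minimal commit/rollback sketch, where the table and connection string are placeholders:

```csharp
using System.Data.SqlClient;

class TxDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("...")) // placeholder
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                new SqlCommand("UPDATE Accounts SET Balance = Balance - 10",
                               conn, tx).ExecuteNonQuery();
                new SqlCommand("UPDATE Accounts SET Balance = Balance + 10",
                               conn, tx).ExecuteNonQuery();
                tx.Commit();   // both updates persist (atomicity, durability)
            }
            catch
            {
                tx.Rollback(); // neither update persists
            }
        }
    }
}
```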
What is a Dataset?
Datasets are the result of bringing together ADO and XML. A dataset contains one or
more tables of tabular data, known as DataTables; these tables can be treated
separately, or can have relationships defined between them. Indeed, these
relationships give you ADO data shaping without needing to master the SHAPE
language, which many people are not comfortable with.
The dataset is a disconnected, in-memory cache of data. The dataset object model
looks like this:
DataSet
  DataTableCollection
    DataTable
      DataView
      DataRowCollection
        DataRow
      DataColumnCollection
        DataColumn
      ChildRelations
      ParentRelations
      Constraints
      PrimaryKey
  DataRelationCollection
DataView: Just as we have views in a database, we can have DataViews. We can use
these DataViews to sort and filter data.
DataRowCollection: Similar to the DataTableCollection, each row in each table is
represented in a DataRowCollection.
PrimaryKey: The dataset defines the primary key for the table, and primary key
validation takes place without going to the database.
Constraints: We can define various constraints on the tables, and can use
DataSet.EnforceConstraints. This enforces all the constraints whenever we enter
data into a DataTable.
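For example, a hypothetical States table with a unique constraint rejects duplicates without a database round trip:

```csharp
using System.Data;

class ConstraintDemo
{
    // Returns true if the duplicate insert was rejected by the constraint.
    public static bool DuplicateIsRejected()
    {
        DataSet ds = new DataSet();
        DataTable t = ds.Tables.Add("States");
        DataColumn code = t.Columns.Add("Code", typeof(string));
        t.Constraints.Add(new UniqueConstraint(code));
        ds.EnforceConstraints = true; // violations now throw on row changes

        t.Rows.Add("CA");
        try { t.Rows.Add("CA"); return false; } // duplicate should throw
        catch (ConstraintException) { return true; }
    }
}
```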
The .NET Framework includes the .NET Framework Data Provider for SQL Server (for
Microsoft SQL Server version 7.0 or later), the .NET Framework Data Provider for
OLE DB, and the .NET Framework Data Provider for ODBC.
The .NET Framework Data Provider for SQL Server: The .NET Framework Data
Provider for SQL Server uses its own protocol to communicate with SQL Server. It is
lightweight and performs well because it is optimized to access a SQL Server directly
without adding an OLE DB or Open Database Connectivity (ODBC) layer. The
following illustration contrasts the .NET Framework Data Provider for SQL Server with
the .NET Framework Data Provider for OLE DB. The .NET Framework Data Provider
for OLE DB communicates to an OLE DB data source through both the OLE DB
Service component, which provides connection pooling and transaction services, and
the OLE DB Provider for the data source.
The .NET Framework Data Provider for OLE DB: The .NET Framework Data Provider
for OLE DB uses native OLE DB through COM interoperability to enable data access.
The .NET Framework Data Provider for OLE DB supports both local and distributed
transactions. For distributed transactions, the .NET Framework Data Provider for OLE
DB, by default, automatically enlists in a transaction and obtains transaction details
from Windows 2000 Component Services.
The .NET Framework Data Provider for ODBC: The .NET Framework Data Provider for
ODBC uses native ODBC Driver Manager (DM) through COM interoperability to enable
data access. The ODBC data provider supports both local and distributed
transactions. For distributed transactions, the ODBC data provider, by default,
automatically enlists in a transaction and obtains transaction details from Windows
2000 Component Services.
The .NET Framework Data Provider for Oracle: The .NET Framework Data Provider
for Oracle enables data access to Oracle data sources through Oracle client
connectivity software. The data provider supports Oracle client software version
8.1.7 and later. The data provider supports both local and distributed transactions
(the data provider automatically enlists in existing distributed transactions, but does
not currently support the EnlistDistributedTransaction method).
The .NET Framework Data Provider for Oracle requires that Oracle client software
(version 8.1.7 or later) be installed on the system before you can use it to connect to
an Oracle data source.
.NET Framework Data Provider for Oracle classes are located in the
System.Data.OracleClient namespace and are contained in the
System.Data.OracleClient.dll assembly. You will need to reference both the
System.Data.dll and the System.Data.OracleClient.dll when compiling an application
that uses the data provider.
Choosing a .NET Framework Data Provider
.NET Framework Data Provider for SQL Server: Recommended for middle-tier
applications using Microsoft SQL Server 7.0 or later. Recommended for single-tier
applications using Microsoft Data Engine (MSDE) or Microsoft SQL Server 7.0 or later.
Recommended over use of the OLE DB Provider for SQL Server (SQLOLEDB) with the
.NET Framework Data Provider for OLE DB. For Microsoft SQL Server version 6.5 and
earlier, you must use the OLE DB Provider for SQL Server with the .NET Framework
Data Provider for OLE DB.
.NET Framework Data Provider for OLE DB: Recommended for middle-tier
applications using Microsoft SQL Server 6.5 or earlier, or any OLE DB provider. For
Microsoft SQL Server 7.0 or later, the .NET Framework Data Provider for SQL Server
is recommended. Recommended for single-tier applications using Microsoft Access
databases. Use of a Microsoft Access database for a middle-tier application is not
recommended.
.NET Framework Data Provider for ODBC: Recommended for middle-tier applications
using ODBC data sources. Recommended for single-tier applications using ODBC data
sources.
.NET Framework Data Provider for Oracle: Recommended for middle-tier applications
using Oracle data sources. Recommended for single-tier applications using Oracle
data sources. Supports Oracle client software version 8.1.7 and later. The .NET
Framework Data Provider for Oracle classes are located in the
System.Data.OracleClient namespace and are contained in the
System.Data.OracleClient.dll assembly. You need to reference both the
System.Data.dll and the System.Data.OracleClient.dll when compiling an application
that uses the data provider.
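Whichever provider you choose, the usage pattern is the same; for the SQL Server provider it might look like this (connection string and table are placeholders):

```csharp
using System.Data.SqlClient;

class ProviderDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("...")) // placeholder
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Employees", conn);
            int count = (int)cmd.ExecuteScalar();
        }
    }
}
```

Switching to the OLE DB, ODBC, or Oracle provider means swapping in the corresponding classes (OleDbConnection, OdbcConnection, OracleConnection) with the same pattern.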
Can you explain the difference between an ADO.NET Dataset and an ADO Recordset?
Let’s take a look at the differences between ADO Recordset and ADO.Net DataSet:
1. Table Collection: ADO Recordset provides the ability to navigate through a single
table of information. That table would have been formed with a join of multiple tables
and returning columns from multiple tables. ADO.NET DataSet is capable of holding
instances of multiple tables. It has got a Table Collection, which holds multiple tables
in it. If the tables have a relation defined, they can be manipulated through a
parent-child relationship. It has the ability to support multiple tables with keys, constraints and
interconnected relationships. With this ability the DataSet can be considered as a
small, in-memory relational database cache.
2. Connectivity Model: The ADO Recordset was originally designed without the ability
to operate in a disconnected environment. The ADO.NET DataSet is specifically designed
to be a disconnected, in-memory database. The ADO.NET DataSet follows a purely
disconnected connectivity model, and this gives it much more scalability and
versatility in the range of things it can do and how easily it can do them.
3. Marshalling and Serialization: In COM, through marshalling, we can pass data from
one COM component to another component at any time. Marshalling involves copying
and processing data so that a complex type appears to the receiving component
the same as it appeared to the sending component. Marshalling is an expensive
operation. The ADO.NET DataSet and DataTable components support remoting in the
form of XML serialization. Rather than doing expensive marshalling, the data is
serialized as XML and sent across boundaries.
4. Firewalls, DCOM, and Remoting: Those who have worked with DCOM know how
difficult it is to marshal a DCOM component across a router. People generally
came up with workarounds to solve this issue. The ADO.NET DataSet uses remoting,
through which a DataSet / DataTable component can be serialized into XML, sent
across the wire to a new AppDomain, and then deserialized back into a fully functional
DataSet. As the DataSet is completely disconnected and has no dependency, we
lose absolutely nothing by serializing and transferring it through remoting.
One of the key features of the ADO.NET DataSet is that it can be a self-contained
and disconnected data store. It can contain the schema and data from several
rowsets in DataTable objects as well as information about how to relate the
DataTable objects-all in memory. The DataSet neither knows nor cares where the
data came from, nor does it need a link to an underlying data source. Because it is
data source agnostic you can pass the DataSet around networks or even serialize it
to XML and pass it across the Internet without losing any of its features. However, in
a disconnected model, concurrency obviously becomes a much bigger problem than
it is in a connected model.
In this column, I'll explore how ADO.NET is equipped to detect and handle
concurrency violations. I'll begin by discussing scenarios in which concurrency
violations can occur using the ADO.NET disconnected model. Then I will walk through
an ASP.NET application that handles concurrency violations by giving the user the
choice to overwrite the changes or to refresh the out-of-sync data and begin editing
again. Because part of managing an optimistic concurrency model can involve
keeping a timestamp (rowversion) or another type of flag that indicates when a row
was last updated, I will show how to implement this type of flag and how to maintain
its value after each database update.
There are three common techniques for managing what happens when users try to
modify the same data at the same time: pessimistic, optimistic, and last-in wins.
They each handle concurrency issues differently.
The pessimistic approach says: "Nobody can cause a concurrency violation with my
data if I do not let them get at the data while I have it." This tactic prevents
concurrency in the first place but it limits scalability because it prevents all
concurrent access. Pessimistic concurrency generally locks a row from the time it is
retrieved until the time updates are flushed to the database. Since this requires a
connection to remain open during the entire process, pessimistic concurrency cannot
successfully be implemented in a disconnected model like the ADO.NET DataSet,
which opens a connection only long enough to populate the DataSet then releases
and closes, so a database lock cannot be held.
Another technique for dealing with concurrency is the last-in wins approach. This
model is pretty straightforward and easy to implement-whatever data modification
was made last is what gets written to the database. To implement this technique you
only need to put the primary key fields of the row in the UPDATE statement's WHERE
clause. No matter what is changed, the UPDATE statement will overwrite the changes
with its own changes since all it is looking for is the row that matches the primary
key values. Unlike the pessimistic model, the last-in wins approach allows users to
read the data while it is being edited on screen. However, problems can occur when
users try to modify the same data at the same time because users can overwrite
each other's changes without being notified of the collision. The last-in wins approach
does not detect or notify the user of violations because it does not care. However the
optimistic technique does detect violations.
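A last-in wins UPDATE command can be sketched with SqlCommand; the table and column names follow the article's Employees example, and only the primary key appears in the WHERE clause:

```csharp
using System.Data;
using System.Data.SqlClient;

class LastInWinsDemo
{
    static SqlCommand BuildUpdate(SqlConnection conn)
    {
        // Only the primary key is in the WHERE clause, so the row is
        // always overwritten regardless of intervening edits.
        SqlCommand cmd = new SqlCommand(
            "UPDATE Employees SET LastName = @LastName " +
            "WHERE EmployeeID = @EmployeeID", conn);
        cmd.Parameters.Add("@LastName", SqlDbType.NVarChar, 20, "LastName");
        cmd.Parameters.Add("@EmployeeID", SqlDbType.Int, 4, "EmployeeID");
        return cmd;
    }
}
```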
In optimistic concurrency models, a row is only locked during the update to the
database. Therefore the data can be retrieved and updated by other users at any
time other than during the actual row update operation. Optimistic concurrency
allows the data to be read simultaneously by multiple users and blocks other users
less often than its pessimistic counterpart, making it a good choice for ADO.NET. In
optimistic models, it is important to implement some type of concurrency violation
detection that will catch any additional attempt to modify records that have already
been modified but not committed. You can write your code to handle the violation by
always rejecting and canceling the change request or by overwriting the request
based on some business rules. Another way to handle the concurrency violation is to
let the user decide what to do. The sample application that is shown in Figure 1
illustrates some of the options that can be presented to the user in the event of a
concurrency violation.
When users are likely to overwrite each other's changes, control mechanisms should
be put in place. Otherwise, changes could be lost. If the technique you're using is the
last-in wins approach, then these types of overwrites are entirely possible. For
example, imagine Julie wants to edit an employee's last name to correct the spelling.
She navigates to a screen which loads the employee's information into a DataSet and
has it presented to her in a Web page. Meanwhile, Scott is notified that the same
employee's phone extension has changed. While Julie is correcting the employee's
last name, Scott begins to correct the extension. Julie saves her changes first and
then Scott saves his. Assuming that the application uses the last-in wins approach
and updates the row using a SQL WHERE clause containing only the primary key's
value, and assuming a change to one column requires the entire row to be updated,
neither Julie nor Scott may immediately realize the concurrency issue that just
occurred. In this particular situation, Julie's changes were overwritten by Scott's
changes because he saved last, and the last name reverted to the misspelled
version.
So as you can see, even though the users changed different fields, their changes
collided and caused Julie's changes to be lost. Without some sort of concurrency
detection and handling, these types of overwrites can occur and even go
unnoticed. When you run the sample application included in this column's download,
you should open two separate instances of Microsoft® Internet Explorer. When I
generated the conflict, I opened two instances to simulate two users with two
separate sessions so that a concurrency violation would occur in the sample
application. When you do this, be careful not to use Ctrl+N because if you open one
instance and then use the Ctrl+N technique to open another instance, both windows
will share the same session.
Detecting Violations
The concurrency violation reported to the user in Figure 1 demonstrates what can
happen when multiple users edit the same data at the same time. In Figure 1, the
user attempted to modify the first name to "Joe" but since someone else had already
modified the last name to "Fuller III," a concurrency violation was detected and
reported. ADO.NET detects a concurrency violation when a DataSet containing
changed values is passed to a SqlDataAdapter's Update method and no rows are
actually modified. Simply using the primary key (in this case the EmployeeID) in the
UPDATE statement's WHERE clause will not cause a violation to be detected because
it still updates the row (in fact, this technique has the same outcome as the last-in
wins technique). Instead, more conditions must be specified in the WHERE clause in
order for ADO.NET to detect the violation.
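When the Update call affects zero rows, ADO.NET surfaces the violation as a DBConcurrencyException; a hypothetical helper might catch it like this:

```csharp
using System.Data;
using System.Data.SqlClient;

class ConcurrencyDemo
{
    // Hypothetical update helper: a violation surfaces as DBConcurrencyException.
    public static bool TryUpdate(SqlDataAdapter da, DataSet ds)
    {
        try
        {
            da.Update(ds, "Employees"); // zero rows affected -> violation
            return true;
        }
        catch (DBConcurrencyException)
        {
            return false; // let the caller refresh or overwrite
        }
    }
}
```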
The key here is to make the WHERE clause explicit enough so that it not only checks
the primary key but that it also checks for another appropriate condition. One way to
accomplish this is to pass in all modifiable fields to the WHERE clause in addition to
the primary key. For example, the application shown in Figure 1 could have its
UPDATE statement look like the stored procedure that's shown in Figure 2.
Notice that in the code in Figure 2 nullable columns are also checked to see if the
value passed in is NULL. This technique is not only messy but it can be difficult to
maintain by hand and it requires you to test for a significant number of WHERE
conditions just to update a row. This yields the desired result of only updating rows
where none of the values have changed since the last time the user got the data, but
there are other techniques that do not require such a huge WHERE clause.
Another way to make sure that the row is only updated if it has not been modified by
another user since you got the data is to add a timestamp column to the table. The
SQL Server(tm) TIMESTAMP datatype automatically updates itself with a new value
every time a value in its row is modified. This makes it a very simple and convenient
tool to help detect concurrency violations.
A third technique is to use a DATETIME column in which to track changes to its row.
In my sample application I added a column called LastUpdateDateTime to the
Employees table.
The binary TIMESTAMP column is simple to create and use since it automatically
regenerates its value each time its row is modified, but since the DATETIME column
technique is easier to display on screen and demonstrate when the change was
made, I chose it for my sample application. Both of these are solid choices, but I
prefer the TIMESTAMP technique since it does not involve any additional code to
update its value.
I prefer to use the output parameter technique since it is the fastest and incurs the
least overhead. Using the RowUpdated event works well, but it requires me to make
a second call from the application to the database. The following code snippet adds
an output parameter to the SqlCommand object that is used to update the Employee
information:
oUpdCmd.Parameters.Add(new SqlParameter("@NewLastUpdateDateTime",
SqlDbType.DateTime, 8, ParameterDirection.Output, false, 0, 0,
"LastUpdateDateTime", DataRowVersion.Current, null));
oUpdCmd.UpdatedRowSource = UpdateRowSource.OutputParameters;
The output parameter has its sourceColumn and sourceVersion arguments set to
point the output parameter's return value back to the current value of the
LastUpdateDateTime column of the DataSet. This way the updated DATETIME value
is retrieved and can be returned to the user's .aspx page.
Saving Changes
Now that the Employees table has the tracking field (LastUpdateDateTime) and the
stored procedure has been created to use both the primary key and the tracking field
in the WHERE clause of the UPDATE statement, let's take a look at the role of
ADO.NET. In order to trap the event when the user changes the values in the
textboxes, I created an event handler for the TextChanged event for each TextBox
control:
// retrieve the row (alternatively, do a Find)
dsEmployee.EmployeeRow oEmpRow =
(dsEmployee.EmployeeRow)oDsEmployee.Employee.Rows[0];
oEmpRow.LastName = txtLastName.Text;
This event retrieves the row and sets the appropriate field's value from the TextBox.
(Another way of getting the changed values is to grab them when the user clicks the
Save button.) Each TextChanged event executes after the Page_Load event fires on a
postback, so assuming the user changed the first and last names, when the user
clicks the Save button, the events could fire in this order: Page_Load,
txtFirstName_TextChanged, txtLastName_TextChanged, and btnSave_Click.
The Page_Load event grabs the row from the DataSet in the Session object; the
TextChanged events update the DataRow with the new values; and the
btnSave_Click event attempts to save the record to the database. The btnSave_Click
event calls the SaveEmployee method (shown in Figure 3) and passes it a
bLastInWins value of false since we want to attempt a standard save first. If the
SaveEmployee method detects that changes were made to the row (using the
HasChanges method on the DataSet, or alternatively using the RowState property on
the row), it creates an instance of the Employee class and passes the DataSet to its
SaveEmployee method. The Employee class could live in a logical or physical middle
tier. (I wanted to make this a separate class so it would be easy to pull the code out
and separate it from the presentation logic.)
Notice that I did not use the GetChanges method to pull out only the modified rows
and pass them to the Employee object's Save method. I skipped this step here since
there is only one row. However, if there were multiple rows in the DataSet's
DataTable, it would be better to use the GetChanges method to create a DataSet that
contains only the modified rows.
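The GetChanges pattern for a multi-row DataSet can be sketched like this; the helper name is illustrative:

```csharp
using System.Data;

class GetChangesDemo
{
    // Returns the number of rows that would be sent to the middle tier.
    public static int ModifiedRowCount(DataSet ds)
    {
        if (!ds.HasChanges())
            return 0; // nothing to save

        // Only the added/modified/deleted rows travel across the tier boundary.
        DataSet changes = ds.GetChanges();
        return changes.Tables[0].Rows.Count;
    }
}
```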
Reporting Violations
User's Choice
Once the user has been notified of the concurrency issue, you could leave it up to her
to decide how to handle it. Another alternative is to code a specific way to deal with
concurrency, such as always handling the exception to let the user know (but
refreshing the data from the database). In this sample application I let the user
decide what to do next. She can either cancel changes, cancel and reload from the
database, save changes, or save anyway.
The option to cancel changes simply calls the RejectChanges method of the DataSet
and rebinds the DataSet to the controls in the ASP.NET page. The RejectChanges
method reverts the changes that the user made back to its original state by setting
all of the current field values to the original field values. The option to cancel changes
and reload the data from the database also rejects the changes but additionally goes
back to the database via the Employee class in order to get a fresh copy of the data
before rebinding to the control on the ASP.NET page.
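The cancel path relies on RejectChanges reverting current values to the originals; a small standalone sketch with an illustrative table:

```csharp
using System.Data;

class CancelDemo
{
    public static string CancelEdits()
    {
        DataTable t = new DataTable("Employees");
        t.Columns.Add("LastName", typeof(string));
        t.Rows.Add("Fuller");
        t.AcceptChanges();
        t.Rows[0]["LastName"] = "Fuller III"; // the user's pending edit

        t.RejectChanges(); // current values revert to the original values
        return (string)t.Rows[0]["LastName"];
    }
}
```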
The option to save changes attempts to save the changes but will fail if a
concurrency violation is encountered. Finally, I included a "save anyway" option. This
option takes the values the user attempted to save and uses the last-in wins
technique, overwriting whatever is in the database. It does this by calling a different
command object associated with a stored procedure that only uses the primary key
field (EmployeeID) in the WHERE clause of the UPDATE statement. This technique
should be used with caution as it will overwrite the record.
If you want a more automatic way of dealing with the changes, you could get a fresh
copy from the database. Then overwrite just the fields that the current user
modified, such as the Extension field. That way, in the example I used the proper
LastName would not be overwritten. Use this with caution as well, however, because
if the same field was modified by both users, you may want to just back out or ask
the user what to do next. What is obvious here is that there are several ways to deal
with concurrency violations, each of which must be carefully weighed before you
decide on the one you will use in your application.
Wrapping It Up
I have split the topic of concurrency violation management into two parts. Next time
I will focus on what to do when multiple rows could cause concurrency violations. I
will also discuss how the DataViewRowState enumerators can be used to show what
changes have been made to a DataSet.
Can you give an example of when it would be appropriate to use a web
service as opposed to non-serviced .NET component
A web service is one of the main components in a Service-Oriented Architecture. You
could use web services when your clients and servers are running on different
networks and on different platforms; this provides a loosely coupled system. Also, if
the client is behind a firewall, it is easy to use a web service, since it runs on port 80
(by default) instead of requiring something else, as other distributed-application
approaches might.
What is the standard you use to wrap up a call to a Web service
"SOAP.
"
What is the transport protocol you use to call a Web service?
HTTP (with SOAP).
True or False: To test a Web service you must create a windows application
or Web application to consume this service?
False.
2. Synchronous Call
Application has to wait until execution has completed.
What are VSDISCO files?
VSDISCO files are DISCO files that support dynamic discovery of Web services. If
you place the following VSDISCO file in a directory on your Web server, for example,
it returns references to all ASMX and DISCO files in the host directory and any
subdirectories not noted in <exclude> elements:
<?xml version="1.0" ?>
<dynamicDiscovery
xmlns="urn:schemas-dynamicdiscovery:disco.2000-03-17">
<exclude path="_vti_cnf" />
<exclude path="_vti_pvt" />
<exclude path="_vti_log" />
<exclude path="_vti_script" />
<exclude path="_vti_txt" />
</dynamicDiscovery>
Note that VSDISCO files are disabled in the release version of ASP.NET. You can
reenable them by uncommenting the line in the <httpHandlers> section of
Machine.config that maps *.vsdisco to
System.Web.Services.Discovery.DiscoveryRequestHandler and granting the ASPNET
user account permission to read the IIS metabase. However, Microsoft is actively
discouraging the use of VSDISCO files because they could represent a threat to Web
server security.
<%
Response.Cache.SetNoStore ();
Response.Write (DateTime.Now.ToLongTimeString ());
%>
[System.Xml.Serialization.XmlRootAttribute(Namespace="http://tempuri.org/",
IsNullable=false)]
public class AuthToken : SoapHeader { public string Token; }
In this case, when you create an instance of the proxy in your main application file,
you'll also create an instance of the AuthToken class and assign the string:
Service1 objSvc = new Service1();
objSvc.AuthTokenValue = new AuthToken();
objSvc.AuthTokenValue.Token = <actual token value>;
string strResult = objSvc.MyBillableWebMethod();
What is WSDL?
WSDL is the Web Service Description Language, and it is implemented as a specific
XML vocabulary. While it's much more complex than what can be described here,
there are two important aspects of WSDL of which you should be aware.
First, WSDL provides instructions to consumers of Web Services to describe the
layout and contents of the SOAP packets the Web Service intends to issue. It's an
interface description document, of sorts. And second, it isn't intended that you read
and interpret the WSDL. Rather, WSDL should be processed by machine, typically to
generate proxy source code (.NET) or create dynamic proxies on the fly (the SOAP
Toolkit or Web Service Behavior).
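The overall shape of a WSDL document can be sketched as follows. The element names come from the WSDL 1.1 vocabulary; the service and message names here are invented, and the bodies are elided:

```xml
<definitions name="StockService"
             xmlns="http://schemas.xmlsoap.org/wsdl/">
  <!-- XSD schemas for the data types being exchanged -->
  <types>...</types>
  <!-- abstract message (payload) definitions -->
  <message name="GetQuoteIn">...</message>
  <!-- the operations (method signatures) the service exposes -->
  <portType name="StockPortType">...</portType>
  <!-- how the operations map onto a wire protocol, e.g. SOAP -->
  <binding name="StockSoapBinding" type="...">...</binding>
  <!-- the concrete endpoint URL(s) -->
  <service name="StockService">...</service>
</definitions>
```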
What is a Windows Service and how does its lifecycle differ from a
"standard" EXE?
A Windows service is an application that runs in the background; it is the .NET
equivalent of an NT service.
The executable created is not a Windows application, so you cannot just click and
run it: it needs to be installed as a service. VB.NET has a facility where we can add
an installer to our program and then use a utility to install the service, whereas this
is not the case with a standard EXE.
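The lifecycle difference shows up in code: instead of Main running your logic directly, a service derives from ServiceBase and hands the process over to the Service Control Manager, which calls OnStart and OnStop. A minimal C# sketch (the service name is assumed):

```csharp
using System.ServiceProcess;

public class MyService : ServiceBase
{
    public MyService()
    {
        ServiceName = "MyService"; // name registered with the SCM
    }

    // Called by the Service Control Manager, not by double-clicking the EXE;
    // must return promptly, so kick off worker threads or timers here
    protected override void OnStart(string[] args)
    {
    }

    // Called by the SCM on a stop request; shut background work down cleanly
    protected override void OnStop()
    {
    }

    public static void Main()
    {
        // Hands the process over to the SCM instead of running directly
        ServiceBase.Run(new MyService());
    }
}
```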
Note The tlist.exe file is typically located in the following directory: C:\Program
Files\Debugging Tools for Windows
d. At the command prompt, type tlist to list the image names and the process IDs
of all processes that are currently running on your computer.
Note Make a note of the process ID of the process that hosts the service that you
want to debug.
2 At a command prompt, change the directory path to reflect the location of the
windbg.exe file on your computer.
Note If a command prompt is not open, follow steps a and b of Method 1. The
windbg.exe file is typically located in the following directory: C:\Program
Files\Debugging Tools for Windows.
3 At the command prompt, type windbg -p ProcessID to attach the WinDbg
debugger to the process that hosts the service that you want to debug.
Note ProcessID is a placeholder for the process ID of the process that hosts the
service that you want to debug.
Use the image name of the process that hosts the service that you want to debug
You can use this method only if there is exactly one running instance of the process
that hosts the service that you want to run. To do this, follow these steps:
1 Click Start, and then click Run. The Run dialog box appears.
2 In the Open box, type cmd, and then click OK to open a command prompt.
3 At the command prompt, change the directory path to reflect the location of the
windbg.exe file on your computer.
Note The windbg.exe file is typically located in the following directory: C:\Program
Files\Debugging Tools for Windows.
4 At the command prompt, type windbg -pn ImageName to attach the WinDbg
debugger to the process that hosts the service that you want to debug.
Note ImageName is a placeholder for the image name of the process that hosts the
service that you want to debug. The "-pn" command-line option specifies that the
ImageName command-line argument is the image name of a process.
Start the WinDbg debugger and attach to the process that hosts the service that you
want to debug
Note The windbg.exe file is typically located in the following directory: C:\Program
Files\Debugging Tools for Windows
3 Run the windbg.exe file to start the WinDbg debugger.
4 On the File menu, click Attach to a Process to display the Attach to Process dialog
box.
5 Click to select the node that corresponds to the process that hosts the service that
you want to debug, and then click OK.
6 In the dialog box that appears, click Yes to save base workspace information.
Notice that you can now debug the disassembled code of your service.
Configure a service to start with the WinDbg debugger attached
You can use this method to debug services if you want to troubleshoot service-
startup-related problems.
1 Configure the "Image File Execution" options. To do this, use one of the following
methods:
• Method 1: Use the Global Flags Editor (gflags.exe)
a. Start Windows Explorer.
b. Locate the gflags.exe file on your computer.
Note The gflags.exe file is typically located in the following directory: C:\Program
Files\Debugging Tools for Windows.
c. Run the gflags.exe file to start the Global Flags Editor.
d. In the Image File Name text box, type the image name of the process that hosts
the service that you want to debug. For example, if you want to debug a service that
is hosted by a process that has MyService.exe as the image name, type
MyService.exe.
e. Under Destination, click to select the Image File Options option.
f. Under Image Debugger Options, click to select the Debugger check box.
g. In the Debugger text box, type the full path of the debugger that you want to
use. For example, if you want to use the WinDbg debugger to debug a service, you
can type a full path that is similar to the following: C:\Program Files\Debugging Tools
for Windows\windbg.exe
h. Click Apply, and then click OK to quit the Global Flags Editor.
• Method 2: Use Registry Editor
a. Click Start, and then click Run. The Run dialog box appears.
b. In the Open box, type regedit, and then click OK to start Registry Editor.
c. Warning If you use Registry Editor incorrectly, you may cause serious problems
that may require you to reinstall your operating system. Microsoft cannot guarantee
that you can solve problems that result from using Registry Editor incorrectly. Use
Registry Editor at your own risk.
In Registry Editor, locate, and then right-click the following registry subkey:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image
File Execution Options
d. Point to New, and then click Key. In the left pane of Registry Editor, notice that
New Key #1 (the name of a new registry subkey) is selected for editing.
e. Type ImageName to replace New Key #1, and then press ENTER.
Note ImageName is a placeholder for the image name of the process that hosts
the service that you want to debug. For example, if you want to debug a service that
is hosted by a process that has MyService.exe as the image name, type
MyService.exe.
f. Right-click the registry subkey that you created in step e.
g. Point to New, and then click String Value. In the right pane of Registry Editor,
notice that New Value #1, the name of a new registry entry, is selected for editing.
h. Replace New Value #1 with Debugger, and then press ENTER.
i. Right-click the Debugger registry entry that you created in step h, and then click
Modify. The Edit String dialog box appears.
j. In the Value data text box, type DebuggerPath, and then click OK.
Note DebuggerPath is a placeholder for the full path of the debugger that you want
to use. For example, if you want to use the WinDbg debugger to debug a service,
you can type a full path that is similar to the following: C:\Program Files\Debugging
Tools for Windows\windbg.exe
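The registry state that steps d through j produce can be captured in a .reg file. This is a sketch only: MyService.exe stands in for your service's image name, and the debugger path follows the example above.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\MyService.exe]
"Debugger"="C:\\Program Files\\Debugging Tools for Windows\\windbg.exe"
```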
2 For the debugger window to appear on your desktop, and to interact with the
debugger, make your service interactive. If you do not make your service interactive,
the debugger will start but you cannot see it and you cannot issue commands. To
make your service interactive, use one of the following methods:
• Method 1: Use the Services console
a. Click Start, and then point to Programs.
b. On the Programs menu, point to Administrative Tools, and then click Services.
The Services console appears.
c. In the right pane of the Services console, right-click ServiceName, and then click
Properties.
Note ServiceName is a placeholder for the name of the service that you want to
debug.
d. On the Log On tab, click to select the Allow service to interact with desktop check
box under Local System account, and then click OK.
• Method 2: Use Registry Editor
a. In Registry Editor, locate, and then click the following registry subkey:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ServiceName
Note Replace ServiceName with the name of the service that you want to debug.
For example, if you want to debug a service named MyService, locate and then click
the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MyService
b. Under the Name field in the right pane of Registry Editor, right-click Type, and
then click Modify. The Edit DWORD Value dialog box appears.
c. Change the value in the Value data text box to the bitwise OR of the current
value and 0x00000100. The value 0x00000100 corresponds to the
SERVICE_INTERACTIVE_PROCESS constant that is defined in the WinNT.h header
file on your computer. This constant specifies that a service is interactive in nature.
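As a worked example, assuming the service's current Type value is 0x10 (a typical own-process service), ORing in 0x100 yields 0x110. The resulting value, again using MyService as a stand-in for your service name:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MyService]
; 0x10 (own process) OR 0x100 (SERVICE_INTERACTIVE_PROCESS) = 0x110
"Type"=dword:00000110
```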
3 When a service starts, the service communicates to the Service Control Manager
how long the service must have to start (the time-out period for the service). If the
Service Control Manager does not receive a "service started" notice from the service
within this time-out period, the Service Control Manager terminates the process that
hosts the service. This time-out period is typically less than 30 seconds. If you do not
adjust this time-out period, the Service Control Manager ends the process and the
attached debugger while you are trying to debug. To adjust this time-out period,
follow these steps:
a. In Registry Editor, locate, and then right-click the following registry subkey:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control
b. Point to New, and then click DWORD Value. In the right pane of Registry Editor,
notice that New Value #1 (the name of a new registry entry) is selected for editing.
c. Type ServicesPipeTimeout to replace New Value #1, and then press ENTER.
d. Right-click the ServicesPipeTimeout registry entry that you created in step c, and
then click Modify. The Edit DWORD Value dialog box appears.
e. In the Value data text box, type TimeoutPeriod, and then click OK.
Note TimeoutPeriod is a placeholder for the value of the time-out period (in
milliseconds) that you want to set for the service. For example, if you want to set the
time-out period to 24 hours (86400000 milliseconds), type 86400000.
f. Restart the computer. You must restart the computer for Service Control Manager
to apply this change.
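The time-out change in steps a through e can likewise be sketched as a .reg fragment, using the 24-hour example above (86400000 milliseconds is 0x05265C00 in hexadecimal):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
; 86400000 ms (24 hours) = 0x05265C00
"ServicesPipeTimeout"=dword:05265c00
```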
4 Start your Windows service. To do this, follow these steps:
a. Click Start, and then point to Programs.
b. On the Programs menu, point to Administrative Tools, and then click Services.
The Services console appears.
c. In the right pane of the Services console, right-click ServiceName, and then click
Start.
Note ServiceName is a placeholder for the name of the service that you want to
debug.