
Table of Contents

1.0 Abstract
2.0 Introduction (Requirements Analysis)
    2.1 Purpose
    2.2 Scope
    2.3 Goals/Aim
    2.4 Features of the Project (Advantages)
3.0 System Analysis and Description
    3.1 Existing System
    3.2 Proposed System
    3.3 Overall Description
    3.4 Modules Description
    3.5 Feasibility Study
    3.6 SDLC Model
4.0 Software Requirement Specifications
    4.1 Software Interfaces
    4.2 Hardware Interfaces
    4.3 Communications Interfaces
5.0 Languages of Implementation
    ASP.NET, C#.NET, ADO.NET, SQL Server
6.0 Software Design
    6.1 Design Overview
    6.2 UML Design
    6.3 DFD Design
    6.4 DB Design
    6.5 User Interfaces / Output Screens
7.0 Code (Important Functionality)
8.0 Testing
    8.1 Testing Introduction
    8.2 Unit Testing
    8.3 White Box Testing
    8.4 Black Box Testing
    8.5 Integration Testing
    8.6 Test Cases
9.0 Deployment
    9.1 Running the Application
    9.2 Configuring the Database
10.0 Conclusion
11.0 Future Enhancement
12.0 Bibliography

1.0 Abstract

The downloader that appears when a user clicks an attachment in an email or a link on a website is an ordinary one: it merely announces when the download is complete. What is needed today is a downloader that handles not just a single downloadable object but many objects of different formats, all visible in one window, using the technology in a new way through segmented downloads, which is what C# Downloader does. A practical use is downloading many objects of different formats at once, for example from a client's FTP server, where a wide variety of files related to the client's project can be retrieved. C# Downloader is an open-source application written in C# that is almost a complete download manager.

2.0 Introduction
C# Downloader downloads files from the net, whether audio files, video files, or any other kind of file, and stores them in one specified location. The problem with an ordinary download is that each file is saved in a separate location that is difficult to find afterwards. With C# Downloader, the file location is known and the file can be accessed very easily.

2.1 Purpose
The purpose of this document is to give an overview of the project. The main aim of the project is to develop a tool that downloads files from the Internet at high speed. C# Downloader is an open-source application written in C# that is almost a complete download manager. It supports all video file formats and can download videos from YouTube, Google Video, and similar sites. It supports all types of files, so different files can be downloaded from the Internet in less time.

2.2 Scope
The modules and classes developed in this application are highly extensible and easy to modify. The C# Downloader core is implemented entirely separately from the application's front end and acts as a middle tier that can be molded into any shape according to requirements. Depending on an organization's needs, any number of entities and security levels can be added to the application. The application can also convert one video file format to another.

2.3 Aim/Goal
The aim of the application is to provide an easy, user-friendly downloader with which the user can download audio, video, or text files at low bandwidth.

2.4 Features of the Project (Advantages)
1. Provides a user-friendly and quick file downloader.
2. Automatically resumes downloads when the system restarts.
3. Uses lower bandwidth than existing downloaders.
4. Downloads in segments, increasing download speed through multithreading.
5. Offers a range of downloading options.
6. Has a rich GUI.
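The segmented downloading mentioned above can be sketched as follows: the file is split into byte ranges and each range is fetched on its own thread using an HTTP Range request. This is a simplified illustration, not the project's actual source; the URL and file names are placeholders.

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;

class SegmentedDownloader
{
    // Fetch one byte range of the remote file into its own part file.
    static void DownloadSegment(string url, long start, long end, string partPath)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.AddRange(start, end);               // sends "Range: bytes=start-end"
        using (WebResponse response = request.GetResponse())
        using (Stream input = response.GetResponseStream())
        using (FileStream output = File.Create(partPath))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
    }

    static void Main()
    {
        string url = "http://example.com/big.zip";  // placeholder URL
        const int segments = 4;

        // Ask the server for the total size without downloading the body.
        WebRequest probe = WebRequest.Create(url);
        probe.Method = "HEAD";
        long length;
        using (WebResponse reply = probe.GetResponse())
            length = reply.ContentLength;

        long chunk = length / segments;
        Thread[] workers = new Thread[segments];
        for (int i = 0; i < segments; i++)
        {
            long start = i * chunk;
            long end = (i == segments - 1) ? length - 1 : start + chunk - 1;
            string part = "big.zip.part" + i;       // part files are joined afterwards
            workers[i] = new Thread(() => DownloadSegment(url, start, end, part));
            workers[i].Start();
        }
        foreach (Thread t in workers)
            t.Join();
    }
}
```

Once all threads finish, the part files are concatenated in order to produce the final file; this assumes the server supports HTTP Range requests.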

3.0. SYSTEM ANALYSIS AND DESCRIPTION


3.1 Existing System
There are many download managers on the market. Their main disadvantage is that while downloading a file they grab the whole Internet bandwidth. There is no option to download files only at allowed times or to limit the number of simultaneous downloads, and the bandwidth a download manager uses cannot be restricted.

3.2 Proposed System
1. Limits bandwidth at specific times.
2. Can enable "auto-downloads" at start-up, so downloads start automatically when the application starts.
3. Downloads files only at allowed times.
4. Limits the number of simultaneous downloads; when one download ends, another starts automatically.
5. Can download all types of files.
6. Supports converting downloaded videos to MPEG, AVI, and MP3.

3.3 Overall Description
C# Downloader downloads files from the Internet at high speed, and any type of file can be downloaded. Many Internet download manager tools exist, but C# Downloader provides unique features. It allows downloads to be paused and resumed later. It supports downloading all types of video files, supports auto-download, limits bandwidth at specified times, and lets the user schedule downloads for specified times. It integrates easily with Internet Explorer.
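The pause-and-resume behaviour described above can be implemented by noting how many bytes are already on disk and asking the server for only the remainder. The following is a minimal sketch under the assumption that the server honours HTTP Range requests; the class and method names are illustrative, not taken from the project's source.

```csharp
using System.IO;
using System.Net;

class ResumableDownload
{
    // Continue a download from wherever the local file left off.
    static void Resume(string url, string localPath)
    {
        long existing = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        if (existing > 0)
            request.AddRange(existing);             // sends "Range: bytes=existing-"

        using (WebResponse response = request.GetResponse())
        using (Stream input = response.GetResponseStream())
        using (FileStream output = new FileStream(localPath, FileMode.Append))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
    }
}
```

Appending to the existing file works because the server, on receiving the open-ended Range header, responds with only the bytes after the given offset.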

3.4 Modules Description


The project has two modules:
1. Hardware Interaction Module
2. GUI Interface

Hardware Interaction Module: This is the most important module in the project; it downloads files and splits them into segments. The code for storing files in one specified location, and in fact the bulk of the project's code, lives in this module.

GUI Interface: In any project the GUI is very important; if the GUI is not clear, it is difficult for end users to work with the application. The project uses rich GUI controls such as MenuStrip, RichTextBox, and DataList to make the interface easy and flexible for the end user.

3.5. Feasibility Study

TECHNICAL FEASIBILITY: Evaluating technical feasibility is the trickiest part of a feasibility study because, at this point, not much detailed design of the system exists, making it difficult to assess issues like performance and costs (on account of the kind of technology to be deployed). A number of issues have to be considered while doing a technical analysis.
i) Understand the different technologies involved in the proposed system: before commencing the project, we have to be clear about which technologies are required for the development of the new system.
ii) Find out whether the organization currently possesses the required technologies: is the required technology available within the organization, and if so, is the capacity sufficient? For instance, will the current printer be able to handle the new reports and forms required for the new system?

OPERATIONAL FEASIBILITY: Proposed projects are beneficial only if they can be turned into information systems that meet the organization's operating requirements. Simply stated, this test of feasibility asks whether the system will work when it is developed and installed, and whether there are major barriers to implementation. Questions that help test the operational feasibility of a project include: Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people see no reason for change, there may be resistance. Are the current business methods acceptable to the users? If they are not, users may welcome a change that brings about a more operational and useful system. Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system and in general increases the likelihood of a successful project. Since the proposed system was to help reduce the hardships encountered in the existing manual system, the new system was considered operationally feasible.

ECONOMIC FEASIBILITY:

Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This study gives top management the economic justification for the new system. A simple economic analysis giving an actual comparison of costs and benefits is much more meaningful in this case, and it also proves to be a useful point of reference for comparing actual costs as the project progresses. There can be various intangible benefits on account of automation, including increased customer satisfaction, improvement in product quality, better and more timely decision making, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.

3.6 SDLC Model


THE INCREMENTAL, ITERATIVE SOFTWARE ENGINEERING LIFE CYCLE: Defining and constructing a system uncovers many requirements that are difficult to see at the outset; knowledge of the system and its requirements grows as work progresses. The whole software engineering process is designed to uncover details and incompatibilities in the requirements that may not be obvious to the customer at the outset. Software is therefore built and delivered in successive increments, each delivering a new version of the system. Developing the first version from scratch, called green-field development, is a special case of incremental development. The first increment is an especially important activity, since it establishes the architectural base that must last for the entire system's lifetime.

WATERFALL LIFECYCLE MODEL: The waterfall model states that the phases (analysis, design, coding, testing, support) are organized in a linear order, and each phase should be completed entirely before the next phase begins.

In this step-by-step process, the analysis phase is completed first and its output becomes the input to the design phase; from those inputs all the design steps are generated, and in the same way every phase processes its inputs and produces its outputs. At the end of each phase a review determines whether the project is on the right path; if not, the project may be discarded or some other action taken to continue. The waterfall model is the most commonly used model and is also known as the linear sequential lifecycle model.

ADVANTAGES:
1. The model is very easy to use and implement.
2. Phases are completed and processed one at a time.
3. It works better for smaller projects where the requirements are well understood.
4. Each phase has deliverables that must be reviewed.

DISADVANTAGES:

1. If the requirements gathered are inaccurate, the final product is inaccurate, and the error becomes known only in the final phase of the model; errors cannot be detected in earlier phases.
2. It is a poor model for long, object-oriented, complex, or ongoing projects.
3. The model carries high risk.

Fig: Waterfall Lifecycle Model (Source: http://www.freetutes.com/systemanalysis/sa2-waterfall-software-life-cycle.html)

PROTOTYPE MODEL: In this model the requirements are gathered first, and a prototype is built according to them. The prototype is a quick design that goes through coding, design, and testing, though these phases are not done in detail. By seeing the prototype, the client gets the feel of the real system and comes to understand the entire requirements of the system.

ADVANTAGES:
1. The developers are actively engaged during the development process.
2. The prototype gives users a good understanding of the methodology.
3. User involvement is increased and improved.
4. Flaws and faults are identified early.
5. The users' opinion of the product is known early, which leads to an improved system.

DISADVANTAGES:
1. The model focuses on design rather than functionality.
2. The model is implemented first and errors are evaluated later, which becomes a complex process.
3. The model is also known as the throw-away prototype.
4. More time is spent on developing the prototype, which delays the final product.

[Figure: prototyping cycle: Requirements → Quick Design → Build Prototype → Customer Evaluation → Refine Requirements → Design → Implement → Test → Maintain]
Fig: Prototyping Methodology. (Source: http://testingtutor.com/TESTING_ARTICLES/Software_Life_Cycle_Models.html)

4.0. SOFTWARE REQUIREMENT SPECIFICATIONS

a. Document Conventions:
- Page numbers should be centered, 12 pt Times New Roman.
- Page headings: 16 pt bold Times New Roman.
- Side headings: 14 pt Times New Roman.
- Side sub-headings: 12 pt Times New Roman.
- Body text: 11 pt Times New Roman with 1.5 paragraph spacing.

b. Intended Audience and Reading Suggestions:

The intended audience for this document is the internal guides of the organization where the team developed the project. Further modifications and reviews will be done by the organization before a final version is delivered; the final version is reviewed by the internal guides and the Head of Department of the college. For better understanding, read the sections in this order: Purpose, Scope, Operating Requirements, Advantages, Requirements. The rest of the SRS describes the product's benefits, how to use the product, how it was developed, and the major considerations taken into account. The SRS first discusses the importance and functionality of the product, then the software used and how that software is utilized.

4.1 Software interfaces


Environment: .NET Framework, Microsoft Visual Studio .NET, C#, ASP.NET.
Database and web server: Microsoft SQL Server 2000 and Internet Information Services (IIS) 5.0.

4.2 Hardware interfaces

Processor: Pentium III or higher
RAM: 1 GB or higher
Hard disk: 40 GB

4.3 Communications interfaces


To download files, the application requires an Internet connection; it can download using a minimum of the available bandwidth.

5.0. System Development Environment


5.1 Microsoft .NET Framework
The .NET Framework is a computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:

- To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
- To provide a code-execution environment that minimizes software deployment and versioning conflicts.
- To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.
- To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
- To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
- To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services. The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts. For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic. Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft ActiveX controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage. The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.

Features of the Common Language Runtime


The common language runtime manages memory, thread execution, code execution, code safety verification, compilation, and other system services. These features are intrinsic to the managed code that runs on the common language runtime.

With regard to security, managed components are awarded varying degrees of trust, depending on a number of factors that include their origin (such as the Internet, enterprise network, or local computer). This means that a managed component might or might not be able to perform file-access operations, registry-access operations, or other sensitive functions, even if it is being used in the same active application. The runtime enforces code access security. For example, users can trust that an executable embedded in a Web page can play an animation on screen or sing a song, but cannot access their personal data, file system, or network. The security features of the runtime thus enable legitimate Internet-deployed software to be exceptionally feature rich. The runtime also enforces code robustness by implementing a strict type- and code-verification infrastructure called the common type system (CTS). The CTS ensures that all managed code is self-describing. The various Microsoft and third-party language compilers generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type safety. In addition, the managed environment of the runtime eliminates many common software issues. For example, the runtime automatically handles object layout and manages references to objects, releasing them when they are no longer being used. This automatic memory management resolves the two most common application errors, memory leaks and invalid memory references. The runtime also accelerates developer productivity. For example, programmers can write applications in their development language of choice, yet take full advantage of the runtime, the class library, and components written in other languages by other developers. Any compiler vendor who chooses to target the runtime can do so.
Language compilers that target the .NET Framework make the features of the .NET Framework available to existing code written in that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports software of today and yesterday. Interoperability between managed and unmanaged code enables developers to continue to use necessary COM components and DLLs. The runtime is designed to enhance performance. Although the common language runtime provides many standard runtime services, managed code is never interpreted. A feature called just-in-time (JIT) compiling enables all managed code to run in the native machine language of the system on which it is executing. Meanwhile, the memory manager removes the possibilities of fragmented memory and increases memory locality of reference to further increase performance. Finally, the runtime can be hosted by high-performance, server-side applications, such as Microsoft SQL Server and Internet Information Services (IIS). This infrastructure enables you to use managed code to write your business logic, while still enjoying the superior performance of the industry's best enterprise servers that support runtime hosting.

.NET Framework Class Library


The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework. For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes. Your collection classes will blend seamlessly with the classes in the .NET Framework. As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services:

- Console applications.
- Scripted or hosted applications.
- Windows GUI applications (Windows Forms).
- ASP.NET applications.
- XML Web services.
- Windows services.

For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.

Client Application Development


Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers. Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements. In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft Visual Basic. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications. The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs. For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent. Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be safely deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.
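As a small illustration of the kind of GUI code the Windows Forms classes enable, the following sketch creates a form with a single button and an event handler. It is a hypothetical example, not part of any shipped application; the window title and button text are placeholders.

```csharp
using System.Windows.Forms;

class DownloaderWindow : Form
{
    DownloaderWindow()
    {
        Text = "C# Downloader";                 // window title (placeholder)
        Button addButton = new Button();
        addButton.Text = "Add Download";
        addButton.Click += delegate
        {
            MessageBox.Show("New download queued.");
        };
        Controls.Add(addButton);
    }

    [System.STAThread]
    static void Main()
    {
        // Starts the Windows message loop and shows the form.
        Application.Run(new DownloaderWindow());
    }
}
```

Properties such as Text and the Controls collection are the "simple properties" the paragraph refers to: visual attributes are set in code and the framework handles the underlying window management.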

Server Application Development


Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your custom managed code to control the behavior of the server. This model provides you with all the features of the common language runtime and class library while gaining the performance and scalability of the host server. The following illustration shows a basic network schema with managed code running in different server environments. Servers such as IIS and SQL Server can perform standard operations while your application logic executes through the managed code.

Server-side managed code

ASP.NET is the hosting environment that enables developers to use the .NET Framework to target Web-based applications. However, ASP.NET is more than just a runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework. XML Web services, an important evolution in Web-based technology, are distributed, server-side application components similar to common Web sites. However, unlike Web-based applications, XML Web services components have no UI and are not targeted for browsers such as Internet Explorer and Netscape Navigator. Instead, XML Web services consist of reusable software components designed to be consumed by other applications, such as traditional client applications, Web-based applications, or even other XML Web services. As a result, XML Web services technology is rapidly moving application development and deployment into the highly distributed environment of the Internet. If you have used earlier versions of ASP technology, you will immediately notice the improvements that ASP.NET and Web Forms offer. For example, you can develop Web Forms pages in any language that supports the .NET Framework. In addition, your code no longer needs to share the same file with your HTTP text (although it can continue to do so if you prefer). Web Forms pages execute in native machine language because, like any other managed application, they take full advantage of the runtime. In contrast, unmanaged ASP pages are always scripted and interpreted. ASP.NET pages are faster, more functional, and easier to develop than unmanaged ASP pages because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in development and consumption of XML Web services applications. XML Web services are built on standards such as SOAP (a remote procedure-call protocol), XML (an extensible data format), and WSDL (the Web Services Description Language). The .NET Framework is built on these standards to promote interoperability with non-Microsoft solutions. For example, the Web Services Description Language tool included with the .NET Framework SDK can query an XML Web service published on the Web, parse its WSDL description, and produce C# or Visual Basic source code that your application can use to become a client of the XML Web service. The source code can create classes derived from classes in the class library that handle all the underlying communication using SOAP and XML parsing. Although you can use the class library to consume XML Web services directly, the Web Services Description Language tool and the other tools contained in the SDK facilitate your development efforts with the .NET Framework. If you develop and publish your own XML Web service, the .NET Framework provides a set of classes that conform to all the underlying communication standards, such as SOAP, WSDL, and XML. Using those classes enables you to focus on the logic of your service, without concerning yourself with the communications infrastructure required by distributed software development. Finally, like Web Forms pages in the managed environment, your XML Web service will run with the speed of native machine language using the scalable communication of IIS.

Active Server Pages.NET

ASP.NET is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. ASP.NET offers several important advantages over previous Web development models:

- Enhanced Performance. ASP.NET is compiled common language runtime code running on the server. Unlike its interpreted predecessors, ASP.NET can take advantage of early binding, just-in-time compilation, native optimization, and caching services right out of the box. This amounts to dramatically better performance before you ever write a line of code.

- World-Class Tool Support. The ASP.NET framework is complemented by a rich toolbox and designer in the Visual Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.

- Power and Flexibility. Because ASP.NET is based on the common language runtime, the power and flexibility of that entire platform is available to Web application developers. The .NET Framework class library, Messaging, and Data Access solutions are all seamlessly accessible from the Web. ASP.NET is also language-independent, so you can choose the language that best applies to your application or partition your application across many languages. Further, common language runtime interoperability guarantees that your existing investment in COM-based development is preserved when migrating to ASP.NET.

- Simplicity. ASP.NET makes it easy to perform common tasks, from simple form submission and client authentication to deployment and site configuration. For example, the ASP.NET page framework allows you to build user interfaces that cleanly separate application logic from presentation code and to handle events in a simple, Visual Basic-like forms processing model. Additionally, the common language runtime simplifies development, with managed code services such as automatic reference counting and garbage collection.

- Manageability. ASP.NET employs a text-based, hierarchical configuration system, which simplifies applying settings to your server environment and Web applications. Because configuration information is stored as plain text, new settings may be applied without the aid of local administration tools. This "zero local administration" philosophy extends to deploying ASP.NET Framework applications as well. An ASP.NET Framework application is deployed to a server simply by copying the necessary files to the server. No server restart is required, even to deploy or replace running compiled code.

Scalability and Availability. ASP.NET has been designed with scalability in mind, with features specifically tailored to improve performance in clustered and multiprocessor environments. Further, processes are closely monitored and managed by the ASP.NET runtime, so that if one misbehaves (leaks, deadlocks), a new process can be created in its place, which helps keep your application constantly available to handle requests.

Customizability and Extensibility. ASP.NET delivers a well-factored architecture that allows developers to "plug in" their code at the appropriate level. In fact, it is possible to extend or replace any subcomponent of the ASP.NET runtime with your own custom-written component. Implementing custom authentication or state services has never been easier.

Security. With built-in Windows authentication and per-application configuration, you can be assured that your applications are secure.

Language Support

The Microsoft .NET Platform currently offers built-in support for three languages: C#, Visual Basic, and JScript.

What is ASP.NET Web Forms?

The ASP.NET Web Forms page framework is a scalable common language runtime programming model that can be used on the server to dynamically generate Web pages. Intended as a logical evolution of ASP (ASP.NET provides syntax compatibility with existing pages), the ASP.NET Web Forms framework has been specifically designed to address a number of key deficiencies in the previous model. In particular, it provides:

The ability to create and use reusable UI controls that can encapsulate common functionality and thus reduce the amount of code that a page developer has to write.

The ability for developers to cleanly structure their page logic in an orderly fashion (not "spaghetti code").

The ability for development tools to provide strong WYSIWYG design support for pages (existing ASP code is opaque to tools).

ASP.NET Web Forms pages are text files with an .aspx file name extension. They can be deployed throughout an IIS virtual root directory tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and compiles the target file into a .NET Framework class. This class can then be used to dynamically process incoming requests. (Note that the .aspx file is compiled only the first time it is accessed; the compiled type instance is then reused across multiple requests.) An ASP.NET page can be created simply by taking an existing HTML file and changing its file name extension to .aspx; no modification of code is required. For example, a simple HTML page might collect a user's name and category preference and then perform a form postback to the originating page when a button is clicked. ASP.NET provides syntax compatibility with existing ASP pages. This includes support for <% %> code render blocks that can be intermixed with HTML content within an .aspx file. These code blocks execute in a top-down manner at page render time.
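A minimal page of the kind described, collecting a user's name and a category preference and posting back to itself, might look like the sketch below; the field names and category values are illustrative:

```html
<html>
  <body>
    <form action="intro.aspx" method="post">
      Name: <input name="Name" type="text" />
      Category:
      <select name="Category">
        <option>psychology</option>
        <option>business</option>
        <option>popular_comp</option>
      </select>
      <input type="submit" value="Lookup" />
      <!-- <% %> render blocks can be intermixed with the HTML -->
      <% for (int i = 0; i < 3; i++) { %>
        <font size="<%=i%>">Welcome to ASP.NET</font> <br />
      <% } %>
    </form>
  </body>
</html>
```

Renaming such a file with an .aspx extension is enough for the ASP.NET runtime to compile and serve it.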

Code-Behind Web Forms

ASP.NET supports two methods of authoring dynamic pages. The first is the method shown above, where the page code is physically declared within the originating .aspx file. An alternative approach, known as the code-behind method, enables the page code to be more cleanly separated from the HTML content into an entirely separate file.

Introduction to ASP.NET Server Controls

In addition to (or instead of) using <% %> code blocks to program dynamic content, ASP.NET page developers can use ASP.NET server controls to program Web pages. Server controls are declared within an .aspx file using custom tags or intrinsic HTML tags that contain a runat="server" attribute value. Intrinsic HTML tags are handled by one of the controls in the System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of those controls is assigned the type System.Web.UI.HtmlControls.HtmlGenericControl. Server controls automatically maintain any client-entered values between round trips to the server. This control state is not stored on the server (it is instead stored within an <input type="hidden"> form field that is round-tripped between requests). Note also that no client-side script is required. In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages; for example, the <asp:AdRotator> control can be used to dynamically display rotating ads on a page.

In summary:

1. ASP.NET Web Forms provide an easy and powerful way to build dynamic Web UI.
2. ASP.NET Web Forms pages can target any browser client (there are no script library or cookie requirements).
3. ASP.NET Web Forms pages provide syntax compatibility with existing ASP pages.
4. ASP.NET server controls provide an easy way to encapsulate common functionality.
5. ASP.NET ships with 45 built-in server controls. Developers can also use controls built by third parties.
6. ASP.NET server controls can automatically project both uplevel and downlevel HTML.
7. ASP.NET templates provide an easy way to customize the look and feel of list server controls.
8. ASP.NET validation controls provide an easy way to do declarative client or server data validation.

5) ACTIVEX DATA OBJECTS (ADO.NET)

ADO.NET ARCHITECTURE

[Diagram: the ADO.NET architecture. A data provider (Connection, Command, DataAdapter, DataReader) connects a data store (MS SQL Server, Oracle, Access, or XML output) to a DataSet; the DataSet holds DataTables built from DataRows/DataColumns, linked by DataRelations and exposed through DataViews.]

ADO.NET is an extension of the ADO data access model, which consisted of only a connected architecture. Microsoft studied and analyzed the available data access technologies, found ADO the most promising, and refined and extended it into a new technology named ADO.NET: a set of namespaces that operate on data, grouped together into one technology. ADO.NET enhances the features of ADO, which had only a connected architecture, by introducing a disconnected architecture as well. The key feature of the connected architecture is the data provider, which consists of four important objects: Connection, Command, DataReader, and DataAdapter. The Connection object provides the connection to the data store, that is, to the back-end database server. The connection class has both default and parameterized constructors; the parameterized constructor takes the arguments that establish the connection to the back-end server: Data Source, Database, and security.

The first parameter, Data Source, names the server to which the application connects: from the available servers, it selects the particular database server. The second parameter, Database, names the database on that server to which we want to connect. The third parameter specifies the security mode. If the server is running under Windows authentication mode, Integrated Security is set to true: there is no need to specify a user name and password explicitly, because the system uses the credentials already established for the Windows account. If, on the other hand, the back-end server is running under SQL authentication mode, the user name and password set during installation of the server must be supplied. The SqlConnection class also provides Open and Close methods: Open establishes the connection to the database server, and Close disconnects it. A connection opened for use by the application should be closed when the application terminates.

Another object of the data provider is the Command object, with which one writes the queries that manipulate data in the database. Once the connection is open, the SqlCommand class uses it to operate on the database, either through queries or through stored procedures; the programmer chooses between the two by setting the CommandType property (Text for queries, StoredProcedure for stored procedures). The Command object offers three execution methods: ExecuteNonQuery, ExecuteReader, and ExecuteScalar. ExecuteNonQuery returns an integer indicating how many records were inserted, updated, or deleted. ExecuteReader returns the complete set of records affected by the operation,

whereas ExecuteScalar returns only the first row, first column value; the remainder is ignored. The third object in the connected architecture is the DataReader, which reads data in a forward-only mode: it retrieves the data from the database server and forwards it to the application. The final object is the DataAdapter, which acts as an interface, or bridge, between the connected and disconnected architectures. In the disconnected architecture the central feature is the DataSet: a collection of DataTables (made up of DataRows) linked by DataRelations. When the DataSet needs to be filled, it requests the DataAdapter, which populates it through its Fill method.

Features of ADO.NET are as follows:

1. ADO.NET is the next evolution of ADO for the .NET Framework.
2. ADO.NET was created with n-tier applications, statelessness, and XML in the forefront. Two new objects, the DataSet and DataAdapter, are provided for these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships.
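The connected-architecture objects described above can be sketched in C# as follows. This is an illustrative sketch, not code from this project: the server, database, and table names are assumptions, and a reachable SQL Server instance is required to run it.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        // Data Source = server, Initial Catalog = database,
        // Integrated Security = Windows authentication (no explicit user/password).
        string connStr = "Data Source=.;Initial Catalog=DownloadsDb;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();   // open the connection to the database server

            // Command object: a plain text query (CommandType.Text).
            SqlCommand cmd = new SqlCommand("SELECT FileName, FileUrl FROM Downloads", conn);
            cmd.CommandType = CommandType.Text;

            // DataReader: connected, forward-only access to the rows.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} -> {1}", reader["FileName"], reader["FileUrl"]);
            }

            // DataAdapter + DataSet: the disconnected side. Fill() copies
            // the rows into an in-memory DataTable inside the DataSet.
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Downloads", conn);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Downloads");
            Console.WriteLine("Rows cached: " + ds.Tables["Downloads"].Rows.Count);
        }   // Close() is called automatically when the using block ends
    }
}
```

Note how the using blocks guarantee that the connection and reader are closed even if an exception is thrown, matching the requirement above that an open connection be closed when the application finishes with it.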

3) About Microsoft SQL Server

Microsoft SQL Server is a Structured Query Language (SQL) based, client/server relational database. Each of these terms describes a fundamental part of the architecture of SQL Server.

Database

A database is similar to a data file in that it is a storage place for data. Like a data file, a database does not present information directly to a user; the user runs an application that accesses data from the database and presents it to the user in an understandable format. A database typically has two components: the files holding the physical database and the database management system (DBMS) software that applications use to access data. The DBMS is responsible for enforcing the database structure, including:

Maintaining the relationships between data in the database.
Ensuring that data is stored correctly and that the rules defining data relationships are not violated.
Recovering all data to a point of known consistency in case of system failures.

Client/Server

In a client/server system, the server is a relatively large computer in a central location that manages a resource used by many people. When individuals need to use the resource, they connect over the network from their computers, or clients, to the server. In a client/server database architecture, the database files and DBMS software reside on a server. A communications component is provided so applications can run on separate clients and communicate to the database server over a network. The SQL Server communication component also allows communication between an application running on the server and SQL Server. Server applications are usually capable of working with several clients at the same time; SQL Server can work with thousands of client applications simultaneously. The server has features to prevent the logical problems that occur if a user tries to read or modify data currently being used by others. While SQL Server is designed to work as a server in a client/server network, it is also capable of working as a stand-alone database directly on the client. The scalability and ease-of-use features of SQL Server allow it to work efficiently on a client without consuming too many resources.

Structured Query Language (SQL)

To work with data in a database, you must use a set of commands and statements (a language) defined by the DBMS software. Several different languages can be used with relational databases; the most common is SQL. Both the American National Standards Institute (ANSI) and the International Standards Organization (ISO) have defined standards for SQL. Most modern DBMS products support the Entry Level of SQL-92, the latest SQL standard (published in 1992).

SQL Server Features

Microsoft SQL Server supports a set of features that result in the following benefits:

Ease of installation, deployment, and use. SQL Server includes a set of administrative and development tools that improve your ability to install, deploy, manage, and use SQL Server across several sites.

Scalability. The same database engine can be used across platforms ranging from laptop computers running Microsoft Windows 95/98 to large, multiprocessor servers running Microsoft Windows NT, Enterprise Edition.

Data warehousing. SQL Server includes tools for extracting and analyzing summary data for online analytical processing (OLAP). SQL Server also includes tools for visually designing databases and analyzing data using English-based questions.

System integration with other server software. SQL Server integrates with e-mail, the Internet, and Windows.

Databases A database in Microsoft SQL Server consists of a collection of tables that contain data, and other objects, such as views, indexes, stored procedures, and triggers, defined to support activities performed with the data. The data stored in a database is usually related to a particular subject or process, such as inventory information for a manufacturing warehouse.

SQL Server can support many databases, and each database can store either interrelated data or data unrelated to that in the other databases. For example, a server can have one database that stores personnel data and another that stores product-related data. Alternatively, one database can store current customer order data, and another, related database can store historical customer orders that are used for yearly reporting. Before you create a database, it is important to understand the parts of a database and how to design these parts to ensure that the database performs well after it is implemented.

Entity Integrity Constraint

Entity integrity constraints are of two types:

Unique constraints
Primary key constraints

A unique constraint designates a column or a group of columns as a unique key. The constraint allows only unique values to be stored in the column; SQL Server rejects duplicate records when the unique key constraint is used. The primary key constraint is similar to the unique key constraint: like the former, it prevents duplication of values. Its need is best felt when a relationship has to be established between tables, because in addition to preventing duplication it also does not allow null values.

Referential Integrity Constraint: The Referential Integrity Constraint enforces relationship between tables. It designates a column or a combination of columns as a foreign key. The foreign key establishes a relationship with a specified primary or unique key in another table, called the referenced key. In this relationship, the table containing the foreign key is called a child table and the table containing the referenced key is called the parent table.
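As an illustration of the constraint types discussed above, they can be declared in T-SQL as follows. The table and column names are hypothetical, chosen only to fit the downloader theme; they are not this project's schema.

```sql
-- Parent table: the PRIMARY KEY forbids both duplicates and NULLs;
-- the UNIQUE constraint forbids duplicate values in Email.
CREATE TABLE Users (
    UserId   INT          NOT NULL PRIMARY KEY,
    Email    VARCHAR(100) UNIQUE
);

-- Child table: the FOREIGN KEY (referential integrity constraint)
-- ties each download to an existing row in the parent table.
CREATE TABLE Downloads (
    DownloadId INT          NOT NULL PRIMARY KEY,
    FileUrl    VARCHAR(500) NOT NULL,
    UserId     INT          NOT NULL
        REFERENCES Users (UserId)
);
```

Here Downloads is the child table, Users is the parent table, and Users.UserId is the referenced key.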

6.0. SOFTWARE DESIGN

The design phase begins with the requirements specification for the software to be developed. Design is the first step in moving from the problem domain towards the solution domain; it is essentially the bridge between the requirements specification and the final solution that satisfies the requirements, and it is the most critical factor affecting the quality of the software. The design process for a software system has two levels:

1. System Design or Top-Level Design
2. Detailed Design or Logical Design

System Design: In system design the focus is on deciding which modules are needed for the system, the specification of these modules, and how these modules should be interconnected.

Detailed Design: In detailed design, how the modules are interconnected, and how the specifications of the modules can be satisfied, is decided. Some desirable properties for a software system design are:

Verifiability Completeness Consistency Traceability Simplicity / Understandability

1) Application Architecture:

[Diagram: single-tier application architecture — the User Interface and the Business Access Layer (BAL) reside within the same application.]

The application we are developing uses a one-tier, or single-tier, architecture: the business functionality is included within the same tier. The front end is developed as a Windows Forms application; all front-end windows forms, or user interface forms, are built using the Windows application support of the .NET environment. After developing the user interfaces, we write the code-behind that specifies the business logic. In our application this coding is done in C#, where we write all the business logic needed to access the application: the functionality for downloading video/audio/text files and for starting, pausing, and resuming downloads. In the application diagram above, the User Interface block contains all the front-end screens, and the BAL block contains all the business logic, or code, required for the business functionality of our application.

2) Software Architecture:

[Diagram: software architecture — end users interact with the Downloads, Download Operations, Download Cleanup, View Type, and User Management modules, which are backed by a database.]

In this software architecture, the Users block consists of the end users who are going to use our application. We design all the forms necessary for the application so that users can perform functionality such as downloading video/audio/text files; starting, pausing, and resuming downloads; and viewing progress in a segmented, grid, or toolbar view. If the business logic involves any validations, we verify that all of them are satisfied thoroughly; once all validations evaluate to true, we proceed with the data access logic.
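As one common way to implement the pause/resume behaviour mentioned above (a general HTTP technique, sketched here for illustration rather than taken from this project's code), a download can be resumed from the byte offset already written to disk by sending an HTTP Range header:

```csharp
using System;
using System.IO;
using System.Net;

class ResumeSketch
{
    // Resume downloading `url` into `localPath`, continuing from
    // however many bytes the partial file already contains.
    static void ResumeDownload(string url, string localPath)
    {
        long existing = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        if (existing > 0)
            request.AddRange(existing);   // sends "Range: bytes=<existing>-"

        using (WebResponse response = request.GetResponse())
        using (Stream body = response.GetResponseStream())
        using (FileStream file = new FileStream(localPath, FileMode.Append))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                file.Write(buffer, 0, read);
        }
    }
}
```

Segmented downloading applies the same idea in parallel: each segment requests its own byte range and writes to its own offset in the output file.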

UML diagrams Introduction Modeling is an activity that has been carried out over the years in software development. When writing applications by using the simplest languages to the most powerful and complex languages, you still need to model. Modeling can be as straightforward as drawing a flowchart listing the steps carried out by an application. Why do we use modeling? Defining a model makes it easier to break up a complex application or a huge system into simple, discrete pieces that can be individually studied. We can focus more easily on the smaller parts of a system and then understand the "big picture." Hence, the reasons behind modeling can be summed up in two words:

Readability Reusability

Readability brings clarity: ease of understanding. Understanding a system is the first step in either building or enhancing a system. This involves knowing what a system is made up of, how it behaves, and so forth. Modeling a system ensures that it becomes readable and, most importantly, easy to document. Depicting a system to make it readable involves capturing the structure of a system and the behavior of the system. Reusability is the byproduct of making a system readable. After a system has been modeled to make it easy to understand, we tend to identify similarities or redundancy, be they in terms of functionality, features, or structure. The Unified Modeling Language, or UML, as it is popularly known by its TLA (three-letter acronym!), is the language that can be used to model systems and make them readable. This essentially means that UML provides the ability to capture the characteristics of a system by using notations. UML provides a wide array of simple, easy to understand notations for documenting systems based on object-oriented design principles. These notations are called the nine diagrams of UML. Different languages have been used for depicting systems using object-oriented methodology. The prominent among these were the Rumbaugh methodology, the Booch methodology, and the Jacobson methodology. The problem was that, although each methodology had its advantages, they were essentially disparate. Hence, if you had to work on different projects that used any of these methodologies, you had to be well versed with each of these methodologies. A very tall order indeed! The Unified Modeling Language is just that. It "unifies" the design principles of each of these methodologies into a single, standard language that can be easily applied across the board for all object-oriented systems.
But, unlike the different methodologies that tended more to the design and detailed design of systems, UML spans the realm of requirements, analysis, and design and, uniquely, implementation as well. The beauty of UML lies in the fact that any of the nine diagrams of UML can be used on an incremental basis as the need arises. Considering all these reasons, it is no wonder that UML is considered "the" language of choice.

UML does not have any dependencies with respect to any technologies or languages. This implies that you can use UML to model applications and systems based on any of the current technologies, for example J2EE and .NET. Every effort has been made to keep UML a clear and concise modeling language without tying it down to any technology.

INTRODUCTION TO UML:

The Unified Modeling Language (UML) is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software.

Goals of UML

The primary goals in the design of the UML were:

Provide users with a ready-to-use, expressive visual modeling language so they can develop and exchange meaningful models.
Provide extensibility and specialization mechanisms to extend the core concepts.
Be independent of particular programming languages and development processes.
Provide a formal basis for understanding the modeling language.
Encourage the growth of the OO tools market.
Support higher-level development concepts such as collaborations, frameworks, patterns and components.

Integrate best practices.

Why we use UML? As the strategic value of software increases for many companies, the industry looks for techniques to automate the production of software and to improve quality and reduce cost and time-to-market. These techniques include component technology, visual programming, patterns and frameworks. Businesses also seek techniques to manage the complexity of systems as they increase in scope and scale. In particular, they recognize the need to solve recurring architectural problems, such as physical distribution, concurrency, replication, security, load balancing and fault tolerance. Additionally, the development for the World Wide Web, while making some things simpler, has exacerbated these architectural problems. The Unified Modeling Language (UML) was designed to respond to these needs.

UML Diagrams The underlying premise of UML is that no one diagram can capture the different elements of a system in its entirety. Hence, UML is made up of nine diagrams that can be used to model a system at different points of time in the software life cycle of a system. The nine UML diagrams are: Use case diagram:

The use case diagram is used to identify the primary elements and processes that form the system. The primary elements are termed as "actors" and the processes are called "use cases." The use case diagram shows which actors interact with each use case. Class diagram:

The class diagram is used to refine the use case diagram and define a detailed design of the system. The class diagram classifies the actors defined in the use case diagram into a set of interrelated classes. The relationship or association between the classes can be either an "is-a" or "has-a" relationship. Each class in the class diagram may be capable of providing certain functionalities. These functionalities provided by the class are termed "methods" of the class. Apart from this, each class may have certain "attributes" that uniquely identify the class.

Object diagram:

The object diagram is a special kind of class diagram. An object is an instance of a class. This essentially means that an object represents the state of a class at a given point of time while the system is running. The object diagram captures the state of different classes in the system and their relationships or associations at a given point of time. State diagram:

A state diagram, as the name suggests, represents the different states that objects in the system undergo during their life cycle. Objects in the system change states in response to events. In addition to this, a state diagram also captures the transition of the object's state from an initial state to a final state in response to events affecting the system. Activity diagram:

The process flows in the system are captured in the activity diagram. Similar to a state diagram, an activity diagram also consists of activities, actions, transitions, initial and final states, and guard conditions. Sequence diagram:

A sequence diagram represents the interaction between different objects in the system. The important aspect of a sequence diagram is that it is time-ordered. This means that the exact sequence of the interactions between the objects is represented step by step. Different objects in the sequence diagram interact with each other by passing "messages". Collaboration diagram:

A collaboration diagram groups together the interactions between different objects. The interactions are listed as numbered interactions that help to trace the sequence of the interactions. The collaboration diagram helps to identify all the possible interactions that each object has with other objects. Component diagram:

The component diagram represents the high-level parts that make up the system. This diagram depicts, at a high level, what components form part of the system and how they are interrelated. A component diagram depicts the components culled after the system has undergone the development or construction phase. Deployment diagram:

The deployment diagram captures the configuration of the runtime elements of the application. This diagram is by far most useful when a system is built and ready to be deployed. Now that we have an idea of the different UML diagrams, let us see if we can somehow group these diagrams together to further understand how to use them.

UML Diagram Classification: Static, Dynamic, and Implementation

A software system can be said to have two distinct characteristics: a structural, "static" part and a behavioral, "dynamic" part. In addition to these two characteristics, an additional characteristic that a software system possesses is related to implementation. Before we categorize UML diagrams into each of these three characteristics, let us take a quick look at exactly what these characteristics are.

Static: The static characteristic of a system is essentially the structural aspect of the system. The static characteristics define what parts the system is made up of.

Dynamic: The behavioral features of a system; for example, the ways a system behaves in response to certain events or actions are the dynamic characteristics of a system.

Implementation: The implementation characteristic of a system is an entirely new feature that describes the different elements required for deploying a system. The UML diagrams that fall under each of these categories are:

Static
o Use case diagram
o Class diagram

Dynamic
o Object diagram
o State diagram
o Activity diagram
o Sequence diagram
o Collaboration diagram

Implementation
o Component diagram
o Deployment diagram

Finally, let us take a look at the 4+1 view of UML diagrams. Views of UML Diagrams Considering that the UML diagrams can be used in different stages in the life cycle of a system, let us take a look at the "4+1 view" of UML diagrams. The 4+1 view offers a different perspective to classify and apply UML diagrams. The 4+1 view is essentially how a system can be viewed from a software life cycle perspective. Each of these views represents how a system can be modeled. This will enable us to understand where exactly the UML diagrams fit in and their applicability. The different views are:

Design View: The design view of a system is the structural view of the system. This gives an idea of what a given system is made up of. Class diagrams and object diagrams form the design view of the system.

Process View: The dynamic behavior of a system can be seen using the process view. The different diagrams such as the state diagram, activity diagram, sequence diagram, and collaboration diagram are used in this view.

Component View: Component view shows the grouped modules of a given system modeled using the component diagram.

Deployment View: The deployment diagram of UML is used to identify the deployment modules for a given system.

Use case View: Finally, we have the use case view. Use case diagrams of UML are used to view a system from this perspective as a set of discrete activities or transactions.

Use Case Diagram:

[Use case diagram: the Actor interacts with the use cases Video URL, Video Downloads, Video Status, Batch Downloads, URL's Text File, Text Downloads, and Normal File Input.]

Class Diagram:
[Class diagram: the main classes are Download Manager (attribute: varchar filePath; method: Downloads()), View (methods: Segments(), Toolbar(), Grid()), Downloads (attributes: varchar fileName, varchar filePath; methods: VideoDownload(), BatchDownload(), TextDownload()), DownloadCleanup (attribute: varchar fileURL; methods: Remove(), RemoveCompleted()), and DownloadOperations (attribute: varchar fileURL; methods: StartDownload(), PauseDownload(), ResumeDownload(), AutoDownload()).]

Sequence Diagram:

[Sequence diagram: participants are User, Downloads, Download Operations, Download Cleanup, View, and Application. The user requests a video/batch/text file download and sends the URL; the download starts using that URL, and a response reports whether it completed or failed. The user requests start/pause/resume, the URL is forwarded for the download operation, and a response for the start/pause/resume of the file is returned. The user requests removal of a completed file and receives a response confirming the removal. Finally, the user requests the downloader progress view and receives the corresponding response.]

Activity Diagram:

[Activity diagram: from Downloads the flow branches into Download Files (Audio, Video, Text), Download Operations (Start, Pause, Resume, Stop), Download Cleanup (Remove), and View (Segment, Grid, ToolBar).]

Data Flow Diagrams:

Level 0: The user requests audio, video, or text downloads.

Level 1: The user initiates downloads (audio, video, text) and download operations (start, pause, resume), views the downloads (grid, segment, text), and performs cleanup (remove completed files).

6.4) Database Design

Not Applicable

6.5) User Interfaces or Output Screens.

7.0 Code

The two listings below show the application's most important functionality: the main window (MainForm) and the new download dialog (NewDownloadForm).
public partial class MainForm : Form, ISingleInstanceEnforcer
{
    SpeedLimitExtension speedLimit;

    public MainForm()
    {
        InitializeComponent();
        downloadList1.SelectionChange += new EventHandler(downloadList1_SelectionChange);
        downloadList1.UpdateUI();
        speedLimit = (SpeedLimitExtension)App.Instance.GetExtensionByType(typeof(SpeedLimitExtension));
    }

    void downloadList1_SelectionChange(object sender, EventArgs e)
    {
        int cnt = downloadList1.SelectedCount;
        bool isSelected = cnt > 0;
        bool isSelectedOnlyOne = cnt == 1;

        removeToolStripMenuItem.Enabled = isSelected;
        removeCompletedToolStripMenuItem.Enabled = isSelected;
        toolStart.Enabled = isSelected;
        toolPause.Enabled = isSelected;
        toolRemove.Enabled = isSelected;
        copyURLToClipboardToolStripMenuItem1.Enabled = isSelectedOnlyOne;
        toolMoveSelectionsDown.Enabled = isSelected;
        toolMoveSelectionsUp.Enabled = isSelected;
    }

    private void tmrRefresh_Tick(object sender, EventArgs e)
    {
        string strRate;

        if (speedLimit.CurrentEnabled)
        {
            strRate = String.Format("[{0:0.##} kbps] {1:0.##} kbps",
                speedLimit.CurrentMaxRate / 1024.0,
                DownloadManager.Instance.TotalDownloadRate / 1024.0);
        }
        else
        {
            strRate = String.Format("{0:0.##} kbps",
                DownloadManager.Instance.TotalDownloadRate / 1024.0);
        }

        toolStripScheduler.Checked = downloadList1.SchedulerStarted();
        toolStripLblRateTxt.Text = strRate;
        notifyIcon.Text = String.Concat(this.Text, "\n", toolStripLblRate.Text, " ", strRate);
        downloadList1.UpdateList();
    }

    private void MainForm_Load(object sender, EventArgs e)
    {
        LoadViewSettings();
        notifyIcon.Icon = this.Icon;
        notifyIcon.Text = this.Text;
        notifyIcon.Visible = true;
    }

    private void toolNewDownload_Click(object sender, EventArgs e)
    {
        downloadList1.NewFileDownload(null, true);
    }

    private void toolStart_Click(object sender, EventArgs e)
    {
        downloadList1.StartSelections();
    }

    private void toolPause_Click(object sender, EventArgs e)
    {
        downloadList1.Pause();
    }

    private void toolPauseAll_Click(object sender, EventArgs e)
    {
        downloadList1.PauseAll();
    }

    private void toolRemove_Click(object sender, EventArgs e)
    {
        downloadList1.RemoveSelections();
    }

    private void toolRemoveCompleted_Click(object sender, EventArgs e)
    {
        downloadList1.RemoveCompleted();
    }

    private void toolOptions_Click(object sender, EventArgs e)
    {
        using (OptionsForm options = new OptionsForm())
        {
            options.ShowDialog();
        }
    }

    private void toolAbout_Click(object sender, EventArgs e)
    {
        using (AboutForm about = new AboutForm())
        {
            about.ShowDialog();
        }
    }

    #region ISingleInstanceEnforcer Members

    public void OnMessageReceived(MessageEventArgs e)
    {
        string[] args = (string[])e.Message;

        if (args.Length == 2 && args[0] == "/sw")
        {
            this.BeginInvoke((MethodInvoker)delegate
            {
                downloadList1.NewDownloadFromData(args[1]);
            });
        }
        else
        {
            downloadList1.AddDownloadURLs(ResourceLocation.FromURLArray(args), 1, null, 0);
        }
    }

    public void OnNewInstanceCreated(EventArgs e)
    {
        this.Focus();
    }

    #endregion

    private void LoadViewSettings()
    {
        downloadList1.LoadSettingsView();
        toolStripMain.Visible = Settings.Default.ViewToolbar;

        if (toolStripMain.Visible)
        {
            faTabStrip1.Top = menuBarStrip.Height + toolStripMain.Top + 1;
        }
        else
        {
            faTabStrip1.Top = menuBarStrip.Height + 4;
        }

        faTabStrip1.Height = this.ClientSize.Height - statusStrip1.Height - faTabStrip1.Top;

        gridToolStripMenuItem.Checked = Settings.Default.ViewGrid;
        segmentsToolStripMenuItem.Checked = Settings.Default.ViewTransDetails;
        toolbarToolStripMenuItem.Checked = Settings.Default.ViewToolbar;
    }

    private void exitToolStripMenuItem_Click(object sender, EventArgs e)
    {
        Close();
    }

    private void newBatchDownloadToolStripMenuItem_Click(object sender, EventArgs e)
    {
        downloadList1.NewBatchDownload();
    }

    private void viewMenuClickClick(object sender, EventArgs e)
    {
        ToolStripMenuItem menu = (ToolStripMenuItem)sender;
        menu.Checked = !menu.Checked;

        Settings.Default.ViewGrid = gridToolStripMenuItem.Checked;
        Settings.Default.ViewToolbar = toolbarToolStripMenuItem.Checked;
        Settings.Default.ViewTransDetails = segmentsToolStripMenuItem.Checked;

        LoadViewSettings();
    }

    private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
    {
        Settings.Default.Save();
    }

    private void showHideToolStripMenuItem_Click(object sender, EventArgs e)
    {
        ShowHideForm();
    }

    public void ShowHideForm()
    {
        if (this.Visible)
        {
            HideForm();
        }
        else
        {
            ShowForm();
            LoadViewSettings();
        }
    }

    public void ShowForm()
    {
        this.ShowInTaskbar = true;
        this.Visible = true;
        this.WindowState = FormWindowState.Normal;
    }

    public void HideForm()
    {
        this.ShowInTaskbar = false;
        this.Visible = false;
    }

    private void showHideToolStripMenuItem_Click(object sender, MouseEventArgs e)
    {
        if (e.Button == MouseButtons.Left)
        {
            ShowHideForm();
        }
    }

    private void newVideoDownloadToolStripMenuItem_Click(object sender, EventArgs e)
    {
        downloadList1.NewVideoDownload();
    }

    private void toolStripScheduler_Click(object sender, EventArgs e)
    {
        downloadList1.StartScheduler(toolStripScheduler.Checked);
    }

    private void importFromTextFileToolStripMenuItem_Click(object sender, EventArgs e)
    {
        downloadList1.ImportFromTextFile();
    }

    private void toolStripButton2_Click(object sender, EventArgs e)
    {
        downloadList1.MoveSelectionsUp();
    }

    private void toolStripButton3_Click(object sender, EventArgs e)
    {
        downloadList1.MoveSelectionsDown();
    }

    private void setCustomToolStripMenuItem_Click(object sender, EventArgs e)
    {
        ((SpeedLimitUIExtension)speedLimit.UIExtension).ShowSpeedLimitDialog();
    }

    private void enableSpeedLimitToolStripMenuItem_Click(object sender, EventArgs e)
    {
        speedLimit.Parameters.Enabled = enableSpeedLimitToolStripMenuItem.Checked;
    }

    private void cntxMenuDownLimit_Opening(object sender, CancelEventArgs e)
    {
        enableSpeedLimitToolStripMenuItem.Checked = speedLimit.Parameters.Enabled;
    }

    private void selectAllToolStripMenuItem_Click(object sender, EventArgs e)
    {
        downloadList1.SelectAll();
    }

    private void clipboardMonitoringToolStripMenuItem_Click(object sender, EventArgs e)
    {
        downloadList1.ClipboardMonitorEnabled = clipboardMonitoringToolStripMenuItem.Checked;
    }

    private void notifyIconContextMenu_Opening(object sender, CancelEventArgs e)
    {
        clipboardMonitoringToolStripMenuItem.Checked = downloadList1.ClipboardMonitorEnabled;
    }
}

public partial class NewDownloadForm : Form
{
    Thread zipReaderThread;

    public NewDownloadForm()
    {
        InitializeComponent();
        locationMain.UrlChanged += new EventHandler(locationMain_UrlChanged);
        ShowZIPMode(false);
    }

    void locationMain_UrlChanged(object sender, EventArgs e)
    {
        try
        {
            Uri u = new Uri(locationMain.ResourceLocation.URL);
            txtFilename.Text = u.Segments[u.Segments.Length - 1];
        }
        catch
        {
            txtFilename.Text = string.Empty;
        }
    }

    public ResourceLocation DownloadLocation
    {
        get { return locationMain.ResourceLocation; }
        set { locationMain.ResourceLocation = value; }
    }

    public ResourceLocation[] Mirrors
    {
        get
        {
            MyDownloader.Core.ResourceLocation[] mirrors =
                new MyDownloader.Core.ResourceLocation[lvwLocations.Items.Count];

            for (int i = 0; i < lvwLocations.Items.Count; i++)
            {
                ListViewItem item = lvwLocations.Items[i];
                mirrors[i] = MyDownloader.Core.ResourceLocation.FromURL(
                    item.SubItems[0].Text,
                    BoolFormatter.FromString(item.SubItems[1].Text),
                    item.SubItems[2].Text,
                    item.SubItems[3].Text);
            }

            return mirrors;
        }
    }

    public string LocalFile
    {
        get { return PathHelper.GetWithBackslash(folderBrowser1.Folder) + txtFilename.Text; }
    }

    public int Segments
    {
        get { return (int)numSegments.Value; }
    }

    public bool StartNow
    {
        get { return chkStartNow.Checked; }
    }

    private void lvwLocations_ItemSelectionChanged(object sender, ListViewItemSelectionChangedEventArgs e)
    {
        bool hasSelected = lvwLocations.SelectedItems.Count > 0;
        btnRemove.Enabled = hasSelected;

        if (hasSelected)
        {
            ListViewItem item = lvwLocations.SelectedItems[0];
            locationAlternate.ResourceLocation = MyDownloader.Core.ResourceLocation.FromURL(
                item.SubItems[0].Text,
                BoolFormatter.FromString(item.SubItems[1].Text),
                item.SubItems[2].Text,
                item.SubItems[3].Text);
        }
        else
        {
            locationAlternate.ResourceLocation = null;
        }
    }

    private void btnRemove_Click(object sender, EventArgs e)
    {
        for (int i = lvwLocations.Items.Count - 1; i >= 0; i--)
        {
            if (lvwLocations.Items[i].Selected)
            {
                lvwLocations.Items.RemoveAt(i);
            }
        }
    }

    private void btnAdd_Click(object sender, EventArgs e)
    {
        ResourceLocation rl = locationAlternate.ResourceLocation;

        if (lvwLocations.SelectedItems.Count > 0)
        {
            ListViewItem item = lvwLocations.SelectedItems[0];
            item.SubItems[0].Text = rl.URL;
            item.SubItems[1].Text = BoolFormatter.ToString(rl.Authenticate);
            item.SubItems[2].Text = rl.Login;
            item.SubItems[3].Text = rl.Password;
        }
        else
        {
            ListViewItem item = new ListViewItem();
            item.Text = rl.URL;
            item.SubItems.Add(BoolFormatter.ToString(rl.Authenticate));
            item.SubItems.Add(rl.Login);
            item.SubItems.Add(rl.Password);
            lvwLocations.Items.Add(item);
        }
    }

    private void btnOK_Click(object sender, EventArgs e)
    {
        try
        {
            ResourceLocation rl = this.DownloadLocation;
            rl.BindProtocolProviderType();

            if (rl.ProtocolProviderType == null)
            {
                MessageBox.Show("Invalid URL format, please check the location field.",
                    AppManager.Instance.Application.MainForm.Text,
                    MessageBoxButtons.OK, MessageBoxIcon.Error);
                DialogResult = DialogResult.None;
                return;
            }

            ResourceLocation[] mirrors = this.Mirrors;

            if (mirrors != null && mirrors.Length > 0)
            {
                foreach (ResourceLocation mirrorRl in mirrors)
                {
                    mirrorRl.BindProtocolProviderType();

                    if (mirrorRl.ProtocolProviderType == null)
                    {
                        MessageBox.Show("Invalid mirror URL format, please check the mirror URLs.",
                            AppManager.Instance.Application.MainForm.Text,
                            MessageBoxButtons.OK, MessageBoxIcon.Error);
                        DialogResult = DialogResult.None;
                        return;
                    }
                }
            }

            if (chkChooseZIP.Checked)
            {
                AddDownloadsFromZip(checkableTreeView1.Nodes, mirrors);
            }
            else
            {
                Downloader download = DownloadManager.Instance.Add(
                    rl, mirrors, this.LocalFile, this.Segments, this.StartNow);
            }

            Close();
        }
        catch (Exception)
        {
            DialogResult = DialogResult.None;
            MessageBox.Show("Unknown error, please check your input data.",
                AppManager.Instance.Application.MainForm.Text,
                MessageBoxButtons.OK, MessageBoxIcon.Error);
        }
    }

    private void AddDownloadsFromZip(TreeNodeCollection nodes, ResourceLocation[] mirrors)
    {
        for (int i = 0; i < nodes.Count; i++)
        {
            if (nodes[i].Checked)
            {
                if (nodes[i].Nodes.Count > 0)
                {
                    AddDownloadsFromZip(nodes[i].Nodes, mirrors);
                }
                else
                {
                    ResourceLocation newLocation = this.DownloadLocation;
                    newLocation.ProtocolProviderType = typeof(ZipProtocolProvider).AssemblyQualifiedName;

                    string entryName = ((ZipEntry)nodes[i].Tag).Name;

                    Downloader download = DownloadManager.Instance.Add(
                        newLocation, mirrors,
                        this.folderBrowser1.Folder + entryName, 1, false);

                    ZipProtocolProvider.SetZipEntryNameProperty(download, entryName);

                    if (this.StartNow)
                    {
                        download.Start();
                    }
                }
            }
        }
    }

    private void btnCancel_Click(object sender, EventArgs e)
    {
        Close();
    }

    private void chkChooseZIP_CheckedChanged(object sender, EventArgs e)
    {
        ReleaseZIPThread();
        ShowZIPMode(chkChooseZIP.Checked);

        if (chkChooseZIP.Checked)
        {
            LoadZIP();
        }
    }

    private TreeNode GetNodeFromPath(String path, out string displayName)
    {
        string[] subPaths = path.Split('/');

        if (subPaths.Length == 0)
        {
            displayName = null;
            return null;
        }

        TreeNode result = null;
        TreeNodeCollection nodes = checkableTreeView1.Nodes;
        displayName = subPaths[subPaths.Length - 1];

        for (int j = 0; j < subPaths.Length - 1; j++)
        {
            TreeNode parentNode = null;

            for (int i = 0; i < nodes.Count; i++)
            {
                if (String.Equals(nodes[i].Text, subPaths[j], StringComparison.OrdinalIgnoreCase))
                {
                    parentNode = nodes[i];
                    break;
                }
            }

            if (parentNode == null)
            {
                // add the path
                result = new TreeNode(subPaths[j]);
                result.ImageIndex = FileTypeImageList.GetImageIndexFromFolder(false);
                result.SelectedImageIndex = FileTypeImageList.GetImageIndexFromFolder(true);
                nodes.Add(result);
            }
            else
            {
                result = parentNode;
            }

            nodes = result.Nodes;
        }

        return result;
    }

    private void ReleaseZIPThread()
    {
        if (zipReaderThread != null)
        {
            if (zipReaderThread.IsAlive)
            {
                zipReaderThread.Abort();
                zipReaderThread = null;
            }
        }

        waitControl1.Visible = false;
    }

    private void LoadZIP()
    {
        checkableTreeView1.Nodes.Clear();

        ResourceLocation rl = this.DownloadLocation;
        rl.BindProtocolProviderType();

        if (rl.ProtocolProviderType == null)
        {
            chkChooseZIP.Checked = false;
            MessageBox.Show("Invalid URL format, please check the location field.",
                AppManager.Instance.Application.MainForm.Text,
                MessageBoxButtons.OK, MessageBoxIcon.Error);
            return;
        }

        ReleaseZIPThread();

        zipReaderThread = new Thread(
            delegate(object state)
            {
                ZipRemoteFile zipFile = new ZipRemoteFile((ResourceLocation)state);

                try
                {
                    if (zipFile.Load())
                    {
                        this.BeginInvoke((MethodInvoker)delegate()
                        {
                            DisplayZIPOnTree(zipFile);
                            waitControl1.Visible = false;
                        });
                    }
                    else
                    {
                        this.BeginInvoke((MethodInvoker)delegate()
                        {
                            waitControl1.Visible = false;
                            MessageBox.Show("Unable to load ZIP contents.",
                                AppManager.Instance.Application.MainForm.Text,
                                MessageBoxButtons.OK, MessageBoxIcon.Error);
                        });
                    }
                }
                catch (Exception ex)
                {
                    this.BeginInvoke((MethodInvoker)delegate()
                    {
                        waitControl1.Visible = false;
                        MessageBox.Show("Unable to load ZIP contents: " + ex.Message,
                            AppManager.Instance.Application.MainForm.Text,
                            MessageBoxButtons.OK, MessageBoxIcon.Error);
                    });
                }
            });

        waitControl1.Visible = true;
        zipReaderThread.Start(rl);
    }

    private void DisplayZIPOnTree(ZipRemoteFile zipFile)
    {
        checkableTreeView1.ImageList = FileTypeImageList.GetSharedInstance();
        checkableTreeView1.Nodes.Clear();

        foreach (ZipEntry entry in zipFile)
        {
            // skip folders...
            if (entry.Name.EndsWith("/"))
            {
                continue;
            }

            string displayName;
            TreeNode parentNd = GetNodeFromPath(entry.Name, out displayName);

            TreeNode newNd = new TreeNode(displayName);
            newNd.Tag = entry;
            newNd.ImageIndex = FileTypeImageList.GetImageIndexByExtention(Path.GetExtension(entry.Name));
            newNd.SelectedImageIndex = newNd.ImageIndex;

            if (parentNd == null)
            {
                checkableTreeView1.Nodes.Add(newNd);
            }
            else
            {
                parentNd.Nodes.Add(newNd);
            }
        }
    }

    private void ShowZIPMode(bool show)
    {
        // Rebuild the layout: in ZIP mode the tree of ZIP entries replaces
        // the file name panel; the other controls are shared by both modes.
        this.tableLayoutPanel1.Controls.Clear();
        this.tableLayoutPanel1.Controls.Add(this.chkChooseZIP);

        if (show)
        {
            this.tableLayoutPanel1.Controls.Add(this.checkableTreeView1);
        }
        else
        {
            this.tableLayoutPanel1.Controls.Add(this.pnlFileName);
        }

        this.tableLayoutPanel1.Controls.Add(this.folderBrowser1);
        this.tableLayoutPanel1.Controls.Add(this.chkStartNow);
        this.tableLayoutPanel1.Controls.Add(this.pnlSegments);
        this.tableLayoutPanel1.AutoSize = true;
        this.Height = show ? 600 : 374;
    }

    private void NewDownloadForm_FormClosing(object sender, FormClosingEventArgs e)
    {
        this.ReleaseZIPThread();
    }
}

8.0. TESTING
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. The increasing visibility of software as a system element, and the costs associated with software failure, are motivating factors for planned, thorough testing. Testing is the process of executing a program with the intent of finding an error. The design of tests for software and other engineered products can be as challenging as the initial design of the product itself. There are basically two testing approaches. One is black-box testing: knowing the specified functions that a product has been designed to perform, tests are conducted to demonstrate that each function is fully operational. The other is white-box testing: knowing the internal workings of the product, tests are conducted to ensure that the internal operation of the product performs according to specifications and that all internal components have been adequately exercised. Both white-box and black-box testing methods have been used to test this package. All the loop constructs have been tested for their boundary and intermediate conditions, and test data was designed to exercise all the conditions and logical decisions.

Error handling has been taken care of by the use of exception handlers.

Testing Strategies:

Testing is a set of activities that can be planned in advance and conducted systematically. A strategy for software testing must accommodate low-level tests that verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. Software testing is one element of verification and validation. Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. The objective of software testing is to uncover errors. To fulfill this objective, a series of test steps (unit, integration, validation and system tests) are planned and executed. Each test step is accomplished through a series of systematic test techniques that assist in the design of test cases. With each testing step, the level of abstraction with which the software is considered is broadened.

Unit Testing: Unit testing focuses verification effort on the smallest unit of software design: the module. The unit test is always white-box oriented. The tests that occur as part of unit testing are: testing the module interface, examining the local data structures, testing the boundary conditions, executing all the independent paths, and testing error-handling paths.
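As a concrete sketch of unit testing a small unit, consider a hypothetical `UrlHelper.GetFileName` helper (modeled on the file-name extraction done in the new download dialog, but not part of the actual project) together with checks for its normal, boundary, and error-handling paths:

```csharp
using System;

// Hypothetical helper: suggest a file name from a download URL.
public static class UrlHelper
{
    // Returns the last URL segment as the suggested file name,
    // or an empty string if the URL cannot be parsed.
    public static string GetFileName(string url)
    {
        try
        {
            Uri u = new Uri(url);
            return u.Segments[u.Segments.Length - 1];
        }
        catch
        {
            return string.Empty;
        }
    }
}

public static class UrlHelperTests
{
    public static void Main()
    {
        // Normal case: the file name is the final path segment.
        Check(UrlHelper.GetFileName("http://example.com/files/movie.mp4") == "movie.mp4");
        // Boundary case: a URL with no path yields the root segment "/".
        Check(UrlHelper.GetFileName("http://example.com") == "/");
        // Error-handling path: an invalid URL yields an empty string.
        Check(UrlHelper.GetFileName("not a url") == string.Empty);
        Console.WriteLine("All unit tests passed.");
    }

    static void Check(bool condition)
    {
        if (!condition) throw new Exception("Unit test failed");
    }
}
```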

Integration Testing: Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The scope of testing summarizes the specific functional, performance, and internal design characteristics that are to be tested. It employs top-down and bottom-up testing methods.

White Box Testing: The purpose of any security testing method is to ensure the robustness of a system in the face of malicious attacks or regular software failures. White box testing is performed based on knowledge of how the system is implemented. It includes analyzing data flow, control flow, information flow, coding practices, and exception and error handling within the system, to test both intended and unintended software behavior. White box testing can be performed to validate whether the code implementation follows the intended design, to validate implemented security functionality, and to uncover exploitable vulnerabilities. White box testing requires access to the source code. Though it can be performed at any time in the life cycle after the code is developed, it is good practice to perform white box testing during the unit testing phase. White box testing requires knowing what makes software secure or insecure, how to think like an attacker, and how to use different testing tools and techniques. The first step in white box testing is to comprehend and analyze the source code, so knowing what makes software secure is a fundamental requirement. Second, to create tests that exploit the software, a tester must think like an attacker. Third, to perform testing effectively, testers need to know the different tools and techniques available for white box testing. These three requirements do not work in isolation, but together.
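To illustrate the white box idea of exercising every branch, here is a sketch in which the rate-label logic from the main form's refresh timer is factored into a pure function so both of its paths can be tested. `RateFormatter.FormatRate` is a hypothetical helper introduced here for illustration, not part of the actual project:

```csharp
using System;
using System.Globalization;

// Hypothetical, testable refactoring of the rate-label branch logic.
public static class RateFormatter
{
    // Formats the status-bar rate text. When a speed limit is enabled,
    // the cap is shown in brackets before the current rate.
    public static string FormatRate(bool limitEnabled, double maxRateBytes, double rateBytes)
    {
        if (limitEnabled)
        {
            return String.Format(CultureInfo.InvariantCulture,
                "[{0:0.##} kbps] {1:0.##} kbps", maxRateBytes / 1024.0, rateBytes / 1024.0);
        }

        return String.Format(CultureInfo.InvariantCulture,
            "{0:0.##} kbps", rateBytes / 1024.0);
    }

    public static void Main()
    {
        // Branch 1: speed limit enabled, so the cap appears in brackets.
        if (FormatRate(true, 2048, 1024) != "[2 kbps] 1 kbps")
            throw new Exception("limit branch failed");

        // Branch 2: no limit, so only the current rate is shown.
        if (FormatRate(false, 0, 1536) != "1.5 kbps")
            throw new Exception("no-limit branch failed");

        Console.WriteLine("Both branches covered.");
    }
}
```

Because the tester knows there are exactly two paths through the function, two test inputs suffice for full branch coverage; a pure black-box tester would not know this from the specification alone.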

Black Box Testing:

Also known as functional testing, this is a software testing technique whereby the internal workings of the item being tested are not known by the tester. For example, in a black box test on a software design, the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs. The tester never examines the programming code and does not need any knowledge of the program other than its specifications. The advantages of this type of testing include:

- The test is unbiased because the designer and the tester are independent of each other.
- The tester does not need knowledge of any specific programming languages.
- The test is done from the point of view of the user, not the designer.
- Test cases can be designed as soon as the specifications are complete.

System Testing: System testing validates software once it has been incorporated into a larger system. The software is incorporated with other system elements, and a series of system integration and validation tests are conducted. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Once the system has been developed, it has to be tested. In the present system, care should be taken that there are no duplicate entries and that the appropriate data is retrieved in response to the queries.

VALIDATION

The terms verification and validation are often used together, so we describe both. Verification is the process of determining whether or not the products of a given phase of software development fulfill the specifications established in the previous phase; these activities include proving and reviews. Validation is the process of evaluating the software at the end of the software development process to find out how well it satisfies the requirement specifications. The validation process evaluates the developed system at the end to ensure that it satisfies all the necessary requirement specifications. Requirement verification also checks factors such as the completeness, consistency and testability of the requirements.

Testing plays a crucial role in the evaluation of the system: it tells us whether the system we have developed will give the expected output or not. The testing phase comes after the coding phase. Organizations and software development companies use different testing strategies to evaluate the performance of a system, and the results provide clear information about whether the project will give the expected output, that is, whether the system fails or succeeds. There are many types of testing, such as unit testing, integration testing, system testing, black box testing, white box testing, regression testing and so on. In our project, C# Downloader, we use unit testing, integration testing, and system testing. In unit testing, each entity or object in a module is tested; once an entity is evaluated successfully, we move on to the next kind of testing. Once unit testing is done for all modules, integration testing is performed on every module or on groups of two or three modules. Finally, system testing is done, in which all the modules of the system are tested at once, giving the overall performance of the system; from this we can conclude whether the entire system is working as per our requirements and expectations.

The advantage of testing module-wise is that we can reduce effort, cost and time. If we test module by module, we know clearly which module is working and which is not, so the module that is not working can be corrected and re-evaluated, unlike testing the system as a whole, where any error forces the entire system to be tested and evaluated again, which consumes more effort, time and cost.

Test Cases:

FUNCTION    | EXPECTED RESULT                            | ACTUAL RESULT                            | PASSED
Download    | Audio/video/text file should be downloaded | Audio/video/text downloads using the URL | Yes
Start       | Starts file downloading                    | Start result                             | Yes
Restart     | Restarts file downloading                  | Restart result                           | Yes
Stop/Resume | Stops/resumes the download                 | Stop/Resume result                       | Yes
Remove      | Removes completed files                    | Remove files result                      | Yes

9.0 Deployment

9.1 Running Application:

To run the application, the steps we need to follow are listed below:

1) Open the Visual Studio IDE (Integrated Development Environment), i.e. Visual Studio 2008 or another version.

2) Click File -> Open -> browse to the folder containing the project, then select the project's solution file.

3) Click Open. The Solution Explorer will then show all the forms and classes related to the project.

4) Run the application by pressing F5 or the debug button.

9.2 Configuring Data Base:

Not applicable

10.0. CONCLUSION
Our application is a user-friendly downloader with which the user can easily download any audio, video or text files, even at low bandwidth.

11.0 FUTURE ENHANCEMENT

We will include database functionality in our application for storing files as and when required by the end user.

12.0. BIBLIOGRAPHY

The following books were referred to during the analysis and execution phases of the project:

Software Engineering, by Roger S. Pressman
Professional ASP.NET, by Wrox
MSDN 2002, by Microsoft

References

1. Auto-classified business and data flow information: www.cars.com, www.dayton.beepbeep.com
2. Jeff Prosise (2002). Programming Microsoft .NET. Microsoft Press.
3. ASP.NET general information and tutorials: http://www.asp.net
4. Building an ASP.NET Intranet, Wrox publication
