The routing fabric of OSMOSIS is entirely optical and has no buffering capability. It operates in a synchronous, time-slotted fashion with fixed-size packets (cells). The switching function is implemented using fast semiconductor optical amplifiers (SOAs) in a broadcast-and-select (B&S) structure with a combination of eight-way space- and eight-way wavelength-division multiplexing, thus providing bidirectional connectivity for 64 nodes. Electronic buffers store cells at the ingress of the switch, resulting in an input-queued (IQ) architecture.
To prevent head-of-line (HOL) blocking, the input queues are organized as virtual output queues (VOQs). The B&S switch-fabric structure is the optical equivalent of an electronic crossbar switch. To resolve crossbar input and output contention, central scheduling is required, which is also electronic. In addition to a low minimum latency, OSMOSIS must also achieve a high maximum throughput. Therefore, the scheduler must implement an appropriate bipartite-graph matching algorithm able to sustain close to 100% throughput. Using appropriate deep-pipelining techniques, it is possible to obtain a maximal matching even for switches with many ports and short cells.
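As a concrete illustration of the matching step, the sketch below computes a maximal (not necessarily maximum) matching over the VOQ occupancy matrix of an N×N input-queued crossbar with a single greedy pass. The class and method names are our own illustrative assumptions; the OSMOSIS scheduler uses a pipelined iterative matching algorithm, not this simple greedy rule.

```java
// Illustrative sketch only: a greedy maximal matching over the VOQ
// occupancy matrix of an N x N input-queued crossbar.
class GreedyMatcher {
    // voq[i][j] > 0 means input i holds at least one cell for output j.
    // Returns match[i] = output granted to input i, or -1 if unmatched.
    static int[] match(int[][] voq) {
        int n = voq.length;
        int[] match = new int[n];
        boolean[] outputTaken = new boolean[n];
        for (int i = 0; i < n; i++) {
            match[i] = -1;
            for (int j = 0; j < n; j++) {
                if (voq[i][j] > 0 && !outputTaken[j]) {
                    match[i] = j;          // grant output j to input i
                    outputTaken[j] = true; // one cell per output per slot
                    break;
                }
            }
        }
        return match;
    }

    public static void main(String[] args) {
        int[] m = match(new int[][] { { 1, 1 }, { 1, 0 } });
        // Input 0 is granted output 0; input 1 then finds output 0 taken.
        System.out.println(m[0] + " " + m[1]);
    }
}
```

Note that in the example above the greedy result is maximal (no further edge can be added) but not maximum; iterative algorithms of the kind the text refers to exist precisely to converge toward maximal matchings of good size within a few pipelined iterations.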
ATMA 1 ISE-2012

The input adapters receive cells from the incoming links and store them according to their destinations in the VOQs. Upon cell arrival, a request is issued to the scheduler via the control channel (CC), which is operated in a slotted fashion with the same time-slot duration as that of the data path. When the round-trip time (RTT, expressed in time slots) is greater than 1, both the data and the control path must be operated in a pipelined fashion to maintain 100% utilization without increasing the cell size. This implies that multiple cells and requests/grants may be in flight on the data and control paths, respectively. To cope with a long RTT without loss of performance, we employ an incremental VOQ state update protocol that allows deep pipelining of requests and grants without a performance penalty.
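One way to realize the incremental VOQ state update described above is for the scheduler to keep a per-output counter of outstanding requests, incremented on each incoming request and decremented on each grant, so that many request/grant exchanges can be in flight over a multi-slot RTT. The sketch below is a simplifying assumption of ours, not the exact OSMOSIS protocol; class and method names are illustrative.

```java
// Illustrative sketch: per-VOQ pending-request counters at the scheduler,
// allowing requests and grants to be pipelined over a long round-trip time.
class VoqCredit {
    private final int[] pending; // pending[j] = outstanding requests for output j

    VoqCredit(int numOutputs) { pending = new int[numOutputs]; }

    // Called when a request for output j arrives at the scheduler.
    void onRequest(int output) { pending[output]++; }

    // Called when the matcher grants output j; returns false if there
    // was no outstanding request to grant.
    boolean onGrant(int output) {
        if (pending[output] == 0) return false;
        pending[output]--;
        return true;
    }

    int outstanding(int output) { return pending[output]; }

    public static void main(String[] args) {
        VoqCredit sched = new VoqCredit(64);
        // Several requests for output 7 may be in flight before any grant returns.
        sched.onRequest(7); sched.onRequest(7); sched.onRequest(7);
        sched.onGrant(7);
        System.out.println(sched.outstanding(7)); // two requests still outstanding
    }
}
```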
1.2. Control-Path Latency

This classic centrally scheduled, crossbar-based IQ architecture, however, incurs a latency penalty: the minimum latency of a cell in the absence of contention comprises two components, namely, the control-path latency and the data-path latency. The former consists of the latency from the issuance of a request to the receipt of the corresponding grant, whereas the latter consists of the transit latency from the input adapter to the output adapter.
The switch-configuration-path latency represents the latency from the issuance of a configuration command by the scheduler until the SOAs are switched accordingly. These latencies comprise serialization and deserialization (SERDES) delays, propagation delays (time of flight) on the physical medium, and processing delays in the switch and the adapter. The processing delays typically include header parsing delays, routing delays, scheduling delays, pipelining delays, etc. In an output-queued (OQ) switch, on the other hand, the minimum latency comprises only the data-path latency. The difference is that in an IQ switch, a newly arriving cell must first request permission to proceed and then wait for a grant, whereas in an OQ switch, a cell can immediately proceed to its output when there is no contention.
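The IQ-versus-OQ comparison above can be captured in a trivial worked example. The component values below are made up purely for illustration; only the structure (IQ minimum latency = control path + data path, OQ minimum latency = data path only) follows the text.

```java
// Hypothetical latency budget in nanoseconds; the numbers are invented
// for illustration, only the decomposition mirrors the text.
class MinLatency {
    // IQ switch: a cell must first obtain a grant, then traverse the fabric.
    static double iq(double controlPathNs, double dataPathNs) {
        return controlPathNs + dataPathNs;
    }

    // OQ switch: an uncontended cell proceeds to its output immediately.
    static double oq(double dataPathNs) {
        return dataPathNs;
    }

    public static void main(String[] args) {
        double control = 600.0, data = 600.0; // hypothetical budgets
        // The IQ penalty relative to OQ is exactly the control-path latency.
        System.out.println(iq(control, data) - oq(data));
    }
}
```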
The physical implementation and packaging aspects of OSMOSIS (and of high-capacity switches in general) have important consequences: they make the above latencies significant.
In the OSMOSIS demonstrator, we estimate the involved data- and control-path latencies to amount to a minimum cell latency of approx. 1.2 µs, which is much larger than the cell duration (51.2 ns). This already exceeds our latency target of 1 µs without taking into account the latencies of the driver software stack and the network interface card. Parallel ICTNs often operate at low utilization, or are subjected to highly orchestrated (by the programmer or compiler) traffic patterns. Under such conditions, the mean latency is dominated by the intrinsic control- and data-path latencies rather than by queuing delays. Hence, optimizing latency for such cases improves overall system performance. The main contribution of this project is a hybrid crossbar scheduling scheme that combines scheduled and speculative modes of operation, such that at low utilization most cells can proceed speculatively without waiting for a grant, thus achieving a latency reduction of up to 50%. Moreover, the scheduled mode ensures high utilization without excessive collisions of speculative cells in the B&S switch fabric.
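The hybrid decision rule described above can be sketched as follows: transmit a granted cell whenever a grant is available, otherwise speculate on a queued cell rather than waiting. This is a deliberately simplified assumption of the per-slot policy, not the exact OSMOSIS rule (which must also handle retransmission of speculative cells lost to collisions).

```java
// Simplified sketch of the hybrid scheduled/speculative per-slot decision.
class HybridTx {
    enum Action { SCHEDULED, SPECULATIVE, IDLE }

    static Action decide(boolean grantAvailable, boolean cellQueued) {
        if (grantAvailable) return Action.SCHEDULED;   // scheduled mode takes priority
        if (cellQueued)     return Action.SPECULATIVE; // don't wait for a grant
        return Action.IDLE;
    }

    public static void main(String[] args) {
        // At low utilization most busy slots look like this: a cell is
        // queued but no grant has returned yet over the long RTT.
        System.out.println(decide(false, true));
    }
}
```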
1.3.3. Scheduling
An increasing number of high-performance internetworking protocol routers, LAN and asynchronous transfer mode (ATM) switches use a switched backplane based on a crossbar switch. Most often, these systems use input queues to hold packets waiting to traverse the switching fabric. It is well known that if simple first-in first-out (FIFO) input queues are used, head-of-line blocking limits the achievable throughput. A scheduling algorithm is therefore used to configure the crossbar switch, deciding the order in which packets will be served.
CHAPTER 2
PROBLEM SPECIFICATION
2.1 Problem Statement
Low latency is a critical requirement in some switching applications, specifically in parallel computer interconnection networks. The minimum latency in switches with centralized scheduling comprises two components, namely, the control-path latency and the data-path latency, which in a practical high-capacity, distributed switch implementation can be far greater than the cell duration.
Thus, we introduce a speculative transmission scheme to significantly reduce the average latency by allowing cells to proceed without waiting for a grant. It operates in conjunction with any centralized matching algorithm to achieve a high maximum utilization. An analytical model is presented to investigate the efficiency of the speculative transmission scheme employed in a non-blocking N×NR input-queued crossbar switch with R receivers per output. The results demonstrate that the control-path latency can be almost entirely eliminated for loads up to 50%. Our simulations confirm the analytical results.
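As a rough sanity check of the claim above, the Monte Carlo sketch below estimates the fraction of speculative cells that find a free receiver in a switch with R receivers per output. The model is a deliberate simplification of ours (uniform random destinations, no scheduled traffic competing for receivers); parameter names and values are assumptions, not the analytical model of the project.

```java
import java.util.Random;

// Monte Carlo sketch of speculative-cell success in an N x NR crossbar
// with R receivers per output. Simplified model: each input speculates
// with probability `load` toward a uniformly random output each slot.
class SpecSim {
    // Returns the fraction of speculative cells that find a free receiver.
    static double successRate(int n, int r, double load, int slots, long seed) {
        Random rnd = new Random(seed);
        long sent = 0, delivered = 0;
        for (int t = 0; t < slots; t++) {
            int[] claimed = new int[n]; // receivers claimed per output this slot
            for (int in = 0; in < n; in++) {
                if (rnd.nextDouble() < load) {  // this input speculates
                    int out = rnd.nextInt(n);   // uniform destination
                    sent++;
                    if (claimed[out] < r) {     // a receiver is still free
                        claimed[out]++;
                        delivered++;
                    }
                }
            }
        }
        return sent == 0 ? 1.0 : (double) delivered / sent;
    }

    public static void main(String[] args) {
        // Success rate drops as the offered load rises.
        System.out.println(successRate(64, 2, 0.25, 2000, 42L));
        System.out.println(successRate(64, 2, 0.90, 2000, 42L));
    }
}
```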
We present a method and a system for controlling a plurality of queues of an input port in a switching or routing system. The method supports the regular request-grant protocol along with speculative transmission requests in an integrated fashion. Each regular scheduling request or speculative transmission request is stored in request order using references, to minimize memory usage and operation count. Data-packet-arrival and speculation event triggers can be processed concurrently to reduce operation count and latency. The method supports data-packet priorities using a unified linked list for request storage.
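The request-storage idea above can be sketched as a single list holding small references in arrival order, with regular and speculative requests tagged rather than kept in separate structures. Class and field names below are illustrative assumptions, and the sketch omits the priority handling mentioned in the text.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a unified request store: regular and speculative requests
// kept in one list in request order, as lightweight references.
class RequestStore {
    enum Type { REGULAR, SPECULATIVE }

    static final class Request {       // a small reference, not the packet itself
        final int voq;
        final Type type;
        Request(int voq, Type type) { this.voq = voq; this.type = type; }
    }

    private final Deque<Request> list = new ArrayDeque<>();

    void onArrival(int voq)     { list.addLast(new Request(voq, Type.REGULAR)); }
    void onSpeculation(int voq) { list.addLast(new Request(voq, Type.SPECULATIVE)); }

    // Serve requests strictly in request order.
    Request next() { return list.pollFirst(); }
    int size()     { return list.size(); }

    public static void main(String[] args) {
        RequestStore rs = new RequestStore();
        rs.onArrival(3);       // regular request for VOQ 3
        rs.onSpeculation(5);   // speculative request for VOQ 5
        System.out.println(rs.next().voq + " then " + rs.next().type);
    }
}
```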
CHAPTER 3
SYSTEM REQUIREMENTS & ANALYSIS
3.1 Introduction
A software requirements specification (SRS) is the starting point of the software development activity. It is the most difficult and error-prone task. The SRS is the means of translating the ideas in the minds of the clients into a formal document.
3.1.1. Purpose
To develop a system that can prevent Java programs from being decompiled.
3.1.2. Scope
To provide an engine that can read Java byte code in hexadecimal format, and to develop a system that can modify the byte code in the required places.
To develop a user interface that allows the user to select the required class file to be obfuscated.
3.1.3. Objective
To develop a system that can provide protection for Java programs.
3.2. System Requirements
3.2.1. Non-Functional Requirements
1. Usability
The system is designed as a completely automated process; hence there is very little user intervention.
2. Reliability
The system is reliable because the code is built in Java, a robust platform.
3. Performance
The system is developed in a high-level language using advanced front-end and back-end technologies, so it responds to the end user on the client system within a very short time.
3.2.2. Software Requirements

Front End : Java Swing
Tools Used :
Operating System :
Back End :

3.2.3. Hardware Requirements

PROCESSOR : Pentium IV, 2.6 GHz
RAM : 512 MB DDR RAM
MONITOR : 15" Color
HARD DISK : 20 GB
3.3.1 Inputs
Multiple nodes are selected simultaneously, along with the input node that has to be executed first in the least time through the shortest path.
3.3.2 Processing

One or more of the simultaneously selected nodes are selected, along with an applet, and are processed through our system to find the desired shortest path of execution among all the paths available through the later input execution nodes.
3.3.3 Outputs

The outputs are the times taken for the execution of the selected node over all the available paths through the secondary input nodes. By studying these, we obtain the desired path that best reduces the latency, along with the waiting time and the execution time.
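The final selection step above (picking, from the measured times, the path that minimises latency) can be sketched as follows. The representation of the measurements as an array of per-path times is our own assumption.

```java
// Sketch of the output step: given measured execution times over each
// available path, pick the index of the path with the least time.
class PathSelector {
    // times[k] = total measured time over path k; returns the best index.
    static int best(double[] times) {
        int best = 0;
        for (int k = 1; k < times.length; k++) {
            if (times[k] < times[best]) best = k; // strictly smaller wins
        }
        return best;
    }

    public static void main(String[] args) {
        double[] measured = { 3.0, 1.5, 2.2 }; // hypothetical measurements
        System.out.println("best path = " + best(measured));
    }
}
```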
CHAPTER 4
ANALYSIS
4.1. Existing System
The Birkhoff-von Neumann switch eliminates the scheduler. However, it incurs a worst-case latency penalty of N time slots: a cell may have to wait for exactly N time slots for its next transmission opportunity.
The control- and data-path latencies comprise serialization and deserialization delays, propagation delays, and the processing delay between request and response.
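The fixed-schedule behaviour described above can be sketched assuming a common round-robin time-division schedule in which input i connects to output (i + t) mod N in slot t; this particular schedule is our illustrative assumption, not a specific published design. A cell then waits at most N slots for its input-output pair to come around.

```java
// Sketch of a scheduler-less, round-robin crossbar schedule:
// in slot t, input i is connected to output (i + t) mod N.
class RoundRobinFabric {
    static int outputOf(int input, int slot, int n) {
        return (input + slot) % n;
    }

    // Slots to wait, starting at slot t, until input i is next
    // connected to output j (0 means "connected right now").
    static int waitSlots(int i, int j, int t, int n) {
        return Math.floorMod(j - i - t, n);
    }

    public static void main(String[] args) {
        // In slot 3 of an 8-port fabric, input 2 is connected to output 5,
        // so a cell from input 2 to output 5 waits 0 slots.
        System.out.println(outputOf(2, 3, 8) + " " + waitSlots(2, 5, 3, 8));
    }
}
```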
4.1.1. Disadvantage
In the existing system, a packet sent from source to destination may have to wait for exactly N time slots before transmission.
4.2.1. Advantage
Speculative transmission does not have to wait for a grant and therefore achieves low latency, while scheduled transmission achieves a high maximum throughput.
Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people will not be able to see reasons for change, there may be resistance.
Technical analysis begins with an assessment of the technical viability of the technologies employed and the risk involved in their development. The proposed system employs advanced technologies such as OSMOSIS, ICTN, etc., and other technical resources, which are time-tested and available. The technologies are plain and simple on the development side, and profound and robust on the implementation side, and are therefore worth employing. The above discussion dispels any doubts expressed regarding the viability of the system.
Use-case Analysis
To create a use case, the analyst must first identify the different types of people who use the system or product. These actors represent roles that people play as the system operates. Defined somewhat more formally, an actor is anything that communicates with the system or product and that is external to the system itself. Once actors have been identified, use cases can be developed. A use case describes the manner in which an actor interacts with the system.
Use-case diagram
A use-case diagram identifies the functionality provided by the system, identifies the users who interact with the system, and provides associations between users and use cases. It models the behavior of the system with respect to its users, showing the dynamic aspects of the system when a user interacts with it, and can depict all possible interactions of users with the use cases graphically. Thus, the use-case diagram models the use-case view of the system.
The processes, tasks, and functions initiated, participated in, or performed by each actor are identified. Each use case should represent a course of events leading to a clear goal.
The external events that the system must respond to are identified, and these events are related to actors and use cases.
Use-Case Diagrams

A use-case diagram is a graph of actors, a set of use cases enclosed by a system boundary, communication (participation) associations between the actors and the use cases, and generalizations among use cases. The use-case model defines the outside (actors) and inside (use cases) of the system's behavior.
[Use-case diagram: actors User and Centralized Server; use cases Construct Network, Packet Creation, Sends Packet, Performance Calculation]
[Use-case diagram: Compile, Select Sink, Send Packet]