
Communications protocol

A communications protocol is a system of digital message formats and rules for exchanging those messages in or between computing systems and in telecommunications. A protocol may have a formal description. Protocols may include signaling, authentication and error detection and correction capabilities. A protocol definition defines the syntax, semantics, and synchronization of communication; the specified behavior is typically independent of how it is to be implemented. A protocol can therefore be implemented as hardware or software or both. Communications protocols have to be agreed upon by the parties involved.[1] To reach agreement a protocol may be developed into a technical standard. Communicating systems use well-defined formats for exchanging messages. Each message has an exact meaning intended to provoke a defined response of the receiver. A protocol therefore describes the syntax, semantics, and synchronization of communication. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communications what programming languages are to computations.[2]

Communicating systems

The information exchanged between devices on a network or other communications medium is governed by rules or conventions that can be set out in a technical specification called a
communication protocol standard. The nature of the communication, the actual data exchanged and any state-dependent behaviors are defined by the specification. In digital computing systems, the rules can be expressed by algorithms and data structures. Expressing the algorithms in a portable programming language makes the protocol software operating system independent.

Operating systems are usually conceived of as consisting of a set of cooperating processes that manipulate a shared store (on the system itself) to communicate with each other. This communication is governed by well-understood protocols. These protocols can be embedded in the process code itself as small additional code fragments.[3][4] In contrast, communicating systems have to communicate with each other using shared transmission media, because there is no common memory. Transmission is not necessarily reliable and can involve different hardware and operating systems on different systems.

To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system.[5] The best known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, layering was applied to the protocols as well.[6] This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design.[7]

Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol family or protocol suite.[8] Some of the best known protocol suites include: IPX/SPX, X.25, AX.25, AppleTalk and TCP/IP. The protocols can be arranged based on functionality in groups, for instance there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface functions.[9] To transmit a message, a protocol has to be selected from each layer, so some sort of multiplexing/demultiplexing takes place. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer.[10]

Basic requirements of protocols

Messages are sent and received on communicating systems to establish communications. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:[11]

Data formats for data exchange. Digital message bitstrings are exchanged. The bitstrings are divided in fields and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the header area and the data area. The actual message is stored in the data area, so the header area contains the fields with more relevance to the protocol. Bitstrings longer than the maximum transmission unit (MTU) are divided in pieces of appropriate size.[12]

Address formats for data exchange. Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are intended for themselves and should be processed or should be ignored.
A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually some address values have special meanings. An all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an addressing scheme.[13]

Address mapping. Sometimes protocols need to map addresses of one scheme on addresses of another scheme. For instance to translate a logical IP address specified by the application to an Ethernet hardware address. This is referred to as address mapping.[14]

Routing. When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. This way of connecting networks is called internetworking.

Detection of transmission errors is necessary on networks which cannot guarantee error-free operation. In a common approach, CRCs of the data area are added to the end of packets, making it possible for the receiver to detect differences caused by errors. The receiver rejects the packets on CRC differences and arranges somehow for retransmission.[15]

Acknowledgements of correct reception of packets are required for connection-oriented communication. Acknowledgements are sent from receivers back to their respective senders.[16]

Loss of information - timeouts and retries. Packets may be lost on the network or suffer from long delays. To cope with this, under some protocols, a sender may expect an acknowledgement of correct reception from the receiver within a certain amount of time. On timeouts, the sender must assume the packet was not received and retransmit it. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error.[17]

Direction of information flow needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links. This is known as Media Access Control. Arrangements have to be made to accommodate the case when two parties want to gain control at the same time.[18]

Sequence control. We have seen that long bitstrings are divided in pieces, and then sent on the network individually. The pieces may get lost or delayed or take different routes to their destination on some types of networks. As a result pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for the necessary retransmissions and reassemble the original message.[19]

Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.[20]

Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol has to specify rules describing the context. These kinds of rules are said to express the syntax of the communications. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communications.
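To make some of these requirements concrete, the following sketch builds bitstrings with a header area carrying sender and receiver addresses, a sequence number and a CRC of the data area, and divides payloads longer than the maximum transmission unit into pieces. It is illustrative only: the field layout, the field sizes and the fixed MTU value are invented for the example and are not taken from any real protocol.

    import struct
    import zlib

    MTU = 1500                       # assumed maximum transmission unit, in bytes
    HEADER = struct.Struct("!HHII")  # sender, receiver, sequence number, CRC-32 of the data area

    def make_packets(sender, receiver, data):
        """Split data into MTU-sized pieces and prepend a header area to each piece."""
        payload_size = MTU - HEADER.size
        packets = []
        for seq, start in enumerate(range(0, len(data), payload_size)):
            piece = data[start:start + payload_size]
            crc = zlib.crc32(piece)                 # lets the receiver detect transmission errors
            packets.append(HEADER.pack(sender, receiver, seq, crc) + piece)
        return packets

    def accept(packet, my_address):
        """Receiver side: check the destination address and the CRC before accepting a piece."""
        sender, receiver, seq, crc = HEADER.unpack(packet[:HEADER.size])
        data = packet[HEADER.size:]
        if receiver != my_address:
            return None                             # not addressed to us: ignore
        if zlib.crc32(data) != crc:
            return None                             # corrupted: reject and await retransmission
        return seq, data

A real protocol would additionally define an addressing scheme with special values such as broadcast, acknowledgements, and a retransmission policy with a bounded number of retries; those rules are omitted here.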

Formal specification

Formal ways for describing the syntax of the communications are Abstract Syntax Notation One (an ISO standard) and Augmented Backus-Naur form (an IETF standard). Finite state machine models[21][22] and communicating finite-state machines[23] are used to formally describe the possible interactions of the protocol.

Protocols and programming languages

Protocols are to communications what algorithms or programming languages are to computations.[24][2] This analogy has important consequences for both the design and the development of protocols. One has to consider the fact that algorithms, programs and protocols are just different ways of describing expected behaviour of interacting objects. A familiar example of a protocolling language is the HTML language used to describe web pages, which are the actual web protocols.

In programming languages the association of identifiers to a value is termed a definition. Program text is structured using block constructs and definitions can be local to a block. The localized association of an identifier to a value established by a definition is termed a binding and the region of program text in which a binding is effective is known as its scope.[25] The computational state is kept using two components: the environment, used as a record of identifier bindings, and the store, which is used as a record of the effects of assignments.[26]

In communications, message values are transferred using transmission media. By analogy, the equivalent of a store would be a collection of transmission media, instead of a collection of memory locations. A valid assignment in a protocol (as an analog of a programming language) could be Ethernet := 'message', meaning a message is to be broadcast on the local ethernet. On a transmission medium there can be many receivers. For instance a mac-address identifies an ether network card on the transmission medium (the 'ether'). In our imaginary protocol, the assignment ethernet[mac-address] := message value could therefore make sense.[27] By extending the assignment statement of an existing programming language with the semantics described, a protocolling language could easily be imagined.

Operating systems provide reliable communication and synchronization facilities for communicating objects confined to the same system by means of system libraries. A programmer using a general purpose programming language (like C or Ada) can use the routines in the libraries to implement a protocol, instead of using a dedicated protocolling language.

Universal protocols

"The nice thing about standards is that you have so many to choose from." (Andrew S. Tanenbaum in Computer Networks[28])

Despite their numbers, networking protocols show little variety, because all networking protocols use the same underlying principles and concepts, in the same way. So, the use of a general purpose programming language would yield a large number of applications only differing in the details.[29] A suitably defined (dedicated) protocolling language would therefore have little syntax, perhaps just enough to specify some parameters or optional modes of operation, because its virtual machine would have incorporated all possible principles and concepts making the virtual machine itself
a universal protocol. The protocolling language would have some syntax and a lot of semantics describing this universal protocol and would therefore in effect be a protocol, hardly differing from this universal networking protocol. In this (networking) context a protocol is a language.

The notion of a universal networking protocol provides a rationale for standardization of networking protocols; assuming the existence of a universal networking protocol, development of protocol standards using a consensus model (the agreement of a group of experts) might be a viable way to coordinate protocol design efforts. Networking protocols operate in very heterogeneous environments consisting of very different network technologies and a (possibly) very rich set of applications, so a single universal protocol would be very hard to design and implement correctly. Instead, the IETF decided to reduce complexity by assuming a relatively simple network architecture allowing decomposition of the single universal networking protocol into two generic protocols, TCP and IP, and two classes of specific protocols, one dealing with the low-level network details and one dealing with the high-level details of common network applications (remote login, file transfer, email and web browsing). ISO chose a similar but more general path to standardize protocols, allowing other network architectures.

Protocol design

Communicating systems operate in parallel. The programming tools and techniques for dealing with parallel processes are collectively called concurrent programming. Concurrent programming only deals with the synchronization of communication. The syntax and semantics of the communication governed by a low-level protocol usually have modest complexity, so they can be coded with relative ease. High-level protocols with relatively large complexity could however merit the implementation of language interpreters. An example of the latter case is the HTML language.

Concurrent programming has traditionally been a topic in operating systems theory texts.[30] Formal verification seems indispensable, because concurrent programs are notorious for the hidden and sophisticated bugs they contain.[31] A mathematical approach to the study of concurrency and communication is referred to as Communicating Sequential Processes (CSP).[32] Concurrency can also be modelled using finite state machines, like Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems, which we encounter in the form of hardware used in telecommunications or electronic devices in general.[33] This kind of design can be a bit of a challenge to say the least, so it is important to keep things simple. For the Internet protocols, in particular and in retrospect, this meant a basis for protocol design was needed to allow decomposition of protocols into much simpler, cooperating protocols.

Concurrent programming

A concurrent program is an abstraction of cooperating processes suitable for formal treatment and study. The goal of the abstraction is to prove correctness of the program, assuming the existence of some basic synchronization or data exchange mechanisms provided by the operating system (or other software) or hardware. The mechanisms are complex, so more convenient higher level primitives are implemented with these mechanisms. The primitives are used to construct the concurrent program. The basic primitive for synchronization is the semaphore.
All other primitives (locks, reentrant mutexes, semaphores, monitors, message passing, tuple space) can be defined using semaphores. The semaphore is sufficiently elementary to be successfully studied by formal methods.[34]
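As a minimal, self-contained sketch of these ideas (it illustrates the primitive only and is not drawn from any particular protocol implementation), the following uses a binary semaphore from Python's threading module to enforce mutual exclusion of a critical section shared by two concurrent processes, here modelled as threads:

    import threading

    semaphore = threading.Semaphore(1)    # binary semaphore: at most one holder at a time
    shared_counter = 0                     # shared store manipulated inside the critical section

    def worker(iterations):
        global shared_counter
        for _ in range(iterations):
            semaphore.acquire()            # small code fragment at the start of the critical section
            try:
                shared_counter += 1        # critical section: touches the shared resource
            finally:
                semaphore.release()        # small code fragment at the end of the critical section

    threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared_counter)                  # 200000: mutual exclusion preserved the invariant

Removing the acquire/release pair can make the final count unpredictable, which is exactly the kind of hidden, timing-dependent bug that motivates formal treatment of concurrent programs.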

In order to synchronize or exchange data the processes must communicate by means of either a shared memory, used to store data or access-restricted procedures, or the sending/receiving of signals (message passing) using a shared transmission medium. Most third generation operating systems implement separate processes that use special instructions to ensure only one process can execute the restricted procedures. On distributed systems there is no common central memory, so the communications are always by means of message passing. In this case the processes simply have to wait for each other (synchronization by rendezvous) before exchanging data.[3]

Conceptually, the concurrent program consists of several sequential processes whose execution sequences are interleaved. The execution sequences are divided into sections. A section manipulating shared resources is called a critical section. The interleaving scheme makes no timing assumptions other than that no process halts in its critical section and that ready processes are eventually scheduled for execution. For correct operation of the program, the critical sections of the processes need to be properly sequenced and synchronized. This is achieved using small code fragments (protocols) at the start and the end of the critical sections. The code fragments determine whether the critical sections of two communicating processes should execute in parallel (rendezvous of processes) or should be executed sequentially (mutual exclusion of processes).

A concurrent program is correct if it does not violate a safety property such as mutual exclusion or rendezvous of critical sections and does not suffer from a liveness failure such as deadlock or lockout. Correctness of the concurrent program can only be shown using a mathematical argument. Specifications of concurrent programs can be formulated using formal logics (like CSP) which make it possible to prove properties of the programs. Incorrectness can be shown using execution scenarios.[4]

Mutual exclusion is extensively studied in the mutual exclusion problem. The rendezvous is studied in the producer-consumer problem, in which a producer process produces data only if the consumer process is ready to consume the data. Although both problems only involve two processes, their solutions require rather complex algorithms (Dekker's algorithm, Lamport's bakery algorithm). The readers-writers problem is a generalization of the mutual exclusion problem. The dining philosophers problem is a classical problem sufficiently difficult to expose many of the potential pitfalls of newly proposed primitives.[35]

A basis for protocol design

Systems do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol family or protocol suite.[8] To cooperate the protocols have to communicate with each other, so some kind of conceptual framework is needed to make this communication possible. Also note that software is needed to implement both the 'xfer-mechanism' and a protocol (no protocol, no communication). In literature there are numerous references to the analogies between computer communication and programming.
By analogy we could say that the aforementioned 'xfer-mechanism' is comparable to a CPU; a 'xfer-mechanism' performs communications and a CPU performs computations, and the 'framework' introduces something that allows the protocols to be designed independently of one another by providing separate execution environments for the protocols. Furthermore, it is repeatedly stated that protocols are to computer communication what programming languages are to computation.[36][37]

Layering

Figure 2. The TCP/IP model or Internet layering scheme and its relation to some common protocols.

The communications protocols in use on the Internet are designed to function in very complex and diverse settings. To ease design, communications protocols are structured using a layering scheme as a basis. Instead of using a single universal protocol to handle all transmission tasks, a set of cooperating protocols fitting the layering scheme is used.[38] The layering scheme in use on the Internet is called the TCP/IP model. The actual protocols are collectively called the Internet protocol suite. The group responsible for this design is called the Internet Engineering Task Force (IETF).

Typically, a hardware delivery mechanism layer is used to build a connectionless packet delivery system on top of which a reliable transport layer is built, on top of which is the application software. Layers below and above these can be defined, and protocols are very often stacked to give tunnelling; for example, the internet protocol can be tunnelled across an ATM network protocol to provide connectivity by layering the internet protocol on top of the ATM protocol transport layer. The number of layers of a layering scheme and the way the layers are defined can have a drastic impact on the protocols involved. This is where the analogies come into play for the TCP/IP model, because the designers of TCP/IP employed the same techniques used to conquer the complexity of programming language compilers (design by analogy) in the implementation of its protocols and its layering scheme.[39]

Protocol layering

Figure 3. Message flows using a protocol suite.

Protocol layering now forms the basis of protocol design.[7] It allows the decomposition of single, complex protocols into simpler, cooperating protocols, but it is also a functional decomposition, because each protocol belongs to a functional class, called a protocol layer.[38] The protocol layers each solve a distinct class of communications problems. The Internet protocol suite consists of the
following layers: application-, transport-, internet- and network interface-functions.[9] Together, the layers make up a layering scheme or model.

In computations, we have algorithms and data, and in communications, we have protocols and messages, so the analog of a data flow diagram would be some kind of message flow diagram.[24] To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules and data formats specified by protocols. The blue lines therefore mark the boundaries of the (horizontal) protocol layers.

The vertical protocols are not layered, because they do not obey the protocol layering principle, which states that a layered protocol is designed so that layer n at the destination receives exactly the same object sent by layer n at the source. The horizontal protocols are layered protocols and all belong to the protocol suite. Layered protocols allow the protocol designer to concentrate on one layer at a time, without worrying about how other layers perform.[37] The vertical protocols need not be the same protocols on both systems, but they have to satisfy some minimal assumptions to ensure the protocol layering principle holds for the layered protocols. This can be achieved using a technique called encapsulation.[40]

Usually, a message or a stream of data is divided into small pieces, called messages or streams, packets, IP datagrams or network frames depending on the layer in which the pieces are to be transmitted. The pieces contain a header area and a data area. The data in the header area identifies the source and the destination on the network of the packet, the protocol, and other data meaningful to the protocol like CRCs of the data to be sent, data length, and a timestamp.[41][42] The rule enforced by the vertical protocols is that the pieces for transmission are to be encapsulated in the data area of all lower protocols on the sending side and the reverse is to happen on the receiving side. The result is that at the lowest level the piece looks like this: 'Header1,Header2,Header3,data', in the layer directly above it: 'Header2,Header3,data', and in the top layer: 'Header3,data', both on the sending and receiving side. This rule therefore ensures that the protocol layering principle holds and effectively virtualizes all but the lowest transmission lines, so for this reason some message flows are coloured red in figure 3. To ensure both sides use the same protocol, the pieces also carry data identifying the protocol in their header.

The design of the protocol layering and the network (or Internet) architecture are interrelated, so one cannot be designed without the other.[43] Some of the more important features in this respect of the Internet architecture and the network services it provides are described next. The Internet offers universal interconnection, which means that any pair of computers connected to the Internet is allowed to communicate. Each computer is identified by an address on the Internet. All the interconnected physical networks appear to the user as a single large network. This interconnection scheme is called an internetwork or internet.[44]
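A toy sketch of this encapsulation rule, with invented header strings standing in for real protocol headers, shows each lower layer wrapping the piece handed down to it on the sending side, and each layer on the receiving side stripping exactly the header its peer added:

    def encapsulate(data, headers):
        """Sending side: every lower protocol wraps the piece in its own data area."""
        piece = data
        for header in reversed(headers):           # top layer first, lowest layer last
            piece = header + "," + piece
        return piece

    def decapsulate(piece, headers):
        """Receiving side: each layer removes the header added by its peer."""
        for header in headers:                     # lowest layer first
            prefix = header + ","
            assert piece.startswith(prefix), "protocol layering principle violated"
            piece = piece[len(prefix):]
        return piece

    layers = ["Header1", "Header2", "Header3"]     # lowest, middle and top layer headers
    wire = encapsulate("data", layers)             # 'Header1,Header2,Header3,data'
    assert decapsulate(wire, layers) == "data"     # layer n receives what layer n sent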

Conceptually, an Internet address consists of a netid and a hostid. The netid identifies a network and the hostid identifies a host. The term host is misleading in that an individual computer can have multiple network interfaces, each having its own Internet address. An Internet address identifies a connection to the network, not an individual computer.[45] The netid is used by routers to decide where to send a packet.[46] Network technology independence is achieved using the low-level address resolution protocol (ARP), which is used to map Internet addresses to physical addresses. The mapping is called address resolution. This way physical addresses are only used by the protocols of the network interface layer.[47] The TCP/IP protocols can make use of almost any underlying communication technology.[48]
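The netid/hostid split, and the routing decision it supports, can be sketched as follows. The addresses, the prefix lengths and the single attached network are invented for the example; real IP routing tables hold many pairs of network ids and paths.

    import ipaddress

    def netid_hostid(address, prefix_len):
        """Split an IPv4 address into its network part (netid) and host part (hostid)."""
        addr = int(ipaddress.IPv4Address(address))
        host_bits = 32 - prefix_len
        return addr >> host_bits, addr & ((1 << host_bits) - 1)

    # A computer with two interfaces has two Internet addresses, one per connection to a network.
    print(netid_hostid("192.168.1.17", 24))    # netid of 192.168.1.0/24, hostid 17
    print(netid_hostid("10.5.0.17", 8))        # netid of 10.0.0.0/8, hostid 327697

    def next_hop(destination, attached_network, default_router):
        """A router compares netids: deliver directly on an attached network, else forward."""
        if ipaddress.IPv4Address(destination) in attached_network:
            return "direct delivery"
        return default_router

    net = ipaddress.IPv4Network("192.168.1.0/24")
    print(next_hop("192.168.1.99", net, "router 10.0.0.1"))   # direct delivery
    print(next_hop("203.0.113.7", net, "router 10.0.0.1"))    # router 10.0.0.1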

Figure 4. Message flows in the presence of a router

Physical networks are interconnected by routers. Routers forward packets between interconnected networks making it possible for hosts to reach hosts on other physical networks. The message flows between two communicating systems A and B in the presence of a router R are illustrated in figure 4. Datagrams are passed from router to router until a router is reached that can deliver the datagram on a physically attached network (called direct delivery).[49] To decide whether a datagram is to be delivered directly or is to be sent to a router closer to the destination, a table called the IP routing table is consulted. The table consists of pairs of network ids and the paths to be taken to reach known networks. The path can be an indication that the datagram should be delivered directly or it can be the address of a router known to be closer to the destination.[50] A special entry can specify that a default router is chosen when there are no known paths.[51]

All networks are treated equally. A LAN, a WAN or a point-to-point link between two computers are all considered as one network.[52] A connectionless packet delivery (or packet-switched) system (or service) is offered by the Internet, because it adapts well to different hardware, including best-effort delivery mechanisms like the ethernet. Connectionless delivery means that the messages or streams are divided in pieces that are multiplexed separately on the high speed intermachine connections, allowing the connections to be used concurrently. Each piece carries information identifying the destination. The delivery of packets is said to be unreliable, because packets may be lost, duplicated, delayed or delivered out of order without notice to the sender or receiver. Unreliability arises only when resources are exhausted or underlying networks fail.[53] The unreliable connectionless delivery system is defined by the Internet Protocol (IP). The protocol also specifies the routing function, which chooses a path over which data will be sent.[54] It is also possible to use TCP/IP protocols on connection oriented systems. Connection oriented systems build up virtual circuits (paths for exclusive use) between senders and
receivers. Once built up, the IP datagrams are sent as if they were data through the virtual circuits and forwarded (as data) to the IP protocol modules. This technique, called tunneling, can be used on X.25 networks and ATM networks.[55] A reliable stream transport service using the unreliable connectionless packet delivery service is defined by the transmission control protocol (TCP). The services are layered as well and the application programs residing in the layer above it, called the application services, can make use of TCP.[56] Programs wishing to interact with the packet delivery system itself can do so using the user datagram protocol (UDP).[57]

Software layering

Having established the protocol layering and the protocols, the protocol designer can now resume with the software design. The software has a layered organization and its relationship with protocol layering is visualized in figure 5.

Figure 5: Protocol and software layering

The software modules implementing the protocols are represented by cubes. The information flow between the modules is represented by arrows. The (top two horizontal) red arrows are virtual. The blue lines mark the layer boundaries.

To send a message on system A, the top module interacts with the module directly below it and hands over the message to be encapsulated. This module reacts by encapsulating the message in its own data area and filling in its header data in accordance with the protocol it implements, and interacts with the module below it by handing over this newly formed message whenever appropriate. The bottom module directly interacts with the bottom module of system B, so the message is sent across. On the receiving system B the reverse happens, so ultimately (and assuming there were no transmission errors or protocol violations etc.) the message gets delivered in its original form to the top module of system B.[58] On protocol errors, a receiving module discards the piece it has received and reports back the error condition to the original source of the piece on the same layer by handing the error message down or, in the case of the bottom module, by sending it across.[59] The division of the message or stream of data into pieces and the subsequent reassembly are handled in the layer that introduced the division/reassembly. The reassembly is done at the destination (i.e. not on any intermediate routers).[60]

TCP/IP software is organized in four layers.[61]

Application layer. At the highest layer, the services available across a TCP/IP internet are accessed by application programs. The application chooses the style of transport to be used, which can be a
sequence of individual messages or a continuous stream of bytes. The application program passes data to the transport layer for delivery.

Transport layer. The transport layer provides communication from one application to another. The transport layer may regulate flow of information and provide reliable transport, ensuring that data arrives without error and in sequence. To do so, the receiving side sends back acknowledgments and the sending side retransmits lost pieces called packets. The stream of data is divided into packets by the module and each packet is passed along with a destination address to the next layer for transmission. The layer must accept data from many applications concurrently and therefore also includes codes in the packet header to identify the sending and receiving application program.

Internet layer. The Internet layer handles the communication between machines. Packets to be sent are accepted from the transport layer along with an identification of the receiving machine. The packets are encapsulated in IP datagrams and the datagram headers are filled. A routing algorithm is used to determine if the datagram should be delivered directly or sent to a router. The datagram is passed to the appropriate network interface for transmission. Incoming datagrams are checked for validity and the routing algorithm is used to decide whether the datagram should be processed locally or forwarded. If the datagram is addressed to the local machine, the datagram header is deleted and the appropriate transport protocol for the packet is chosen. ICMP error and control messages are handled as well in this layer.

Network interface layer. The network interface layer is responsible for accepting IP datagrams and transmitting them over a specific network. A network interface may consist of a device driver or a complex subsystem that uses its own data link protocol.

Program translation has been divided into four subproblems: compiler, assembler, link editor, and loader. As a result, the translation software is layered as well, allowing the software layers to be designed independently. Noting that the ways to conquer the complexity of program translation could readily be applied to protocols because of the analogy between programming languages and protocols, the designers of the TCP/IP protocol suite were keen on imposing the same layering on the software framework. This can be seen in the TCP/IP layering by considering the translation of a Pascal program (message) that is compiled (function of the application layer) into an assembler program that is assembled (function of the transport layer) to object code (pieces) that is linked (function of the Internet layer) together with library object code (routing table) by the link editor, producing relocatable machine code (datagram) that is passed to the loader which fills in the memory locations (ethernet addresses) to produce executable code (network frame) to be loaded (function of the network interface layer) into physical memory (transmission medium). To show just how closely the analogy fits, the terms between parentheses in the previous sentence denote the relevant analogs and the terms written in italics denote data representations. Program translation forms a linear sequence, because each layer's output is passed as input to the next layer. Furthermore, the translation process involves multiple data representations.
We see the same thing happening in protocol software where multiple protocols define the data representations of the data passed between the software modules.[39] The network interface layer uses physical addresses and all the other layers only use IP addresses. The boundary between network interface layer and Internet layer is called the high-level protocol address boundary.[62] The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between application layer and transport layer is called the operating system boundary.[63]
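As an illustration of crossing the operating system boundary, the sketch below hands a datagram to the transport layer through the socket interface to UDP (using Python's standard socket module) and receives it back over the loopback network; the port number is arbitrary and everything below the transport layer is left entirely to the operating system:

    import socket

    PORT = 50007                                    # arbitrary port chosen for the example

    # Receiver: binds an address, which identifies a connection to the network.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", PORT))

    # Sender: the application passes data to the transport layer (UDP) for delivery;
    # encapsulation into IP datagrams and network frames happens below the OS boundary.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello, layered world", ("127.0.0.1", PORT))

    data, peer = receiver.recvfrom(4096)            # delivered in its original form
    print(data, peer)
    sender.close()
    receiver.close()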

Strict layering

Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking.[64] Strict layering can have a serious impact on the performance of the implementation, so there is at least a trade-off between simplicity and performance.[65] Another, perhaps more important point can be shown by considering the fact that some of the protocols in the Internet Protocol Suite cannot be expressed using the TCP/IP model; in other words, some of the protocols behave in ways not described by the model.[66] To improve on the model, an offending protocol could perhaps be split up into two protocols, at the cost of one or two extra layers, but there is a hidden caveat, because the model is also used to provide a conceptual view on the suite for the intended users. There is a trade-off to be made here between preciseness for the designer and clarity for the intended user.[67]

Protocol development

For communication to take place, protocols have to be agreed upon. Recall that in digital computing systems, the rules can be expressed by algorithms and data structures, raising the opportunity of hardware independence. Expressing the algorithms in a portable programming language makes the protocol software operating system independent. The source code could be considered a protocol specification. This form of specification, however, is not suitable for the parties involved. For one thing, this would force a particular source code on all parties, and for another, proprietary software producers would not accept this. By describing the software interfaces of the modules on paper and agreeing on the interfaces, implementers are free to do it their way. This is referred to as source independence. By specifying the algorithms on paper and detailing hardware dependencies in an unambiguous way, a paper draft is created that, when adhered to and published, ensures interoperability between software and hardware.

Such a paper draft can be developed into a protocol standard by getting the approval of a standards organization. To get the approval the paper draft needs to enter and successfully complete the standardization process. This activity is referred to as protocol development. The members of the standards organization agree to adhere to the standard on a voluntary basis. Often the members are in control of large market shares relevant to the protocol and in many cases, standards are enforced by law or by a government, because they are thought to serve an important public interest, so getting approval can be very important for the protocol. In some cases, however, protocol standards are not sufficient to gain widespread acceptance: sometimes the source code needs to be disclosed, and disclosure may even be enforced by law or by a government in the interest of the public.

The need for protocol standards

The need for protocol standards can be shown by looking at what happened to the bi-sync protocol (BSC) invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to 'enhance' the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol.
One can assume that a standard would have prevented at least some of this from happening.[5]

In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a 'de facto standard' operating system like GNU/Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition. Standardization is therefore not the only solution for open systems interconnection.

Standards organizations

Some of the standards organizations of relevance for communications protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunications engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other.[68]

The standardization process

The standardization process starts off with ISO commissioning a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement on what the standard should provide and whether it can satisfy all needs (usually not). All conflicting views should be taken into account, often by way of compromise, to progress to a draft proposal of the working group.

The draft proposal is discussed by the member countries' standard bodies and other organizations within each country. Comments and suggestions are collated and national views will be formulated, before the members of ISO vote on the proposal. If rejected, the draft proposal has to consider the objections and counter-proposals to create a new draft proposal for another vote. After a lot of feedback, modification, and compromise the proposal reaches the status of a draft international standard, and ultimately an international standard. The process normally takes several years to complete. The original paper draft created by the designer will differ substantially from the standard, and will contain some of the following 'features':

Various optional modes of operation, for example to allow for setup of different packet sizes at startup time, because the parties could not reach consensus on the optimum packet size.
Parameters that are left undefined or allowed to take on values of a defined set at the discretion of the implementor. This often reflects conflicting views of some of the members.

Parameters reserved for future use, reflecting that the members agreed the facility should be provided, but could not reach agreement on how this should be done in the available time.

Various inconsistencies and ambiguities will inevitably be found when implementing the standard. International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject.[69]

Future of standardization (OSI)

A lesson learned from ARPANET (the predecessor of the Internet) is that standardization of protocols is not enough, because protocols also need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers).[70] This gave rise to the ISO Open Systems Interconnection reference model (RM/OSI), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications.[71]

In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic (and unspecified) transmission mechanism. The layers above it are numbered (from one to seven); the nth layer is referred to as (n)-layer. Each layer provides service to the layer above it (or at the top to the application process) using the services of the layer immediately below it. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use an (n)-protocol, which is implemented by using services of the (n-1)-layer. When systems are not directly connected, intermediate peer entities (called relays) are used. An address uniquely identifies a service access point. The address naming domains need not be restricted to one layer, so it is possible to use just one naming domain for all layers.[72] For each layer there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it.

In the original version of RM/OSI, the layers and their functionality are (from highest to lowest layer):

The application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures.[73]

The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g. data compression and data encryption).[74]

The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity.[75]

The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may support the multiplexing of several transport connections on to one network connection or split one transport connection into several network connections.[76]

The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for (network) congestion control.[77]

The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer.[78]

The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections.[79]

In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Using connections to communicate implies some form of session and (virtual) circuits, hence the session layer, which is lacking in the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so development of RM/OSI concentrated on connection-oriented networks, and connectionless networks were only mentioned in an addendum to RM/OSI.[80] At the time, the IETF had to cope with this and the fact that the Internet needed protocols which simply were not there. As a result the IETF developed its own standardization process based on "rough consensus and running code".[81] The standardization process is described by RFC 2026. Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards.

Taxonomies

Classification schemes for protocols usually focus on domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. For an example of function consider a tunneling protocol, which is used to encapsulate packets in a high-level protocol, so the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones proposed by the IETF and by ISO.
Despite the fact that the underlying assumptions of the
layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes.[82] For an example of this practice see: List of network protocols. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The functionality of the layers has been described in the section on software layering and an overview of protocols using this scheme is given in the article on Internet protocols. The layering scheme from ISO is called the OSI model or ISO layering. The functionality of the layers has been described in the section on the future of standardization and an overview of protocols using this scheme is given in the article on OSI protocols.

OSI model

Example protocols by OSI layer:

7. Application layer: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, SMPP, SMTP, SNMP, Telnet, DHCP, Netconf
6. Presentation layer: MIME, XDR
5. Session layer: Named pipe, NetBIOS, SAP, PPTP, RTP, SOCKS, SPDY, TLS/SSL
4. Transport layer: TCP, UDP, SCTP, DCCP, SPX
3. Network layer: IP (IPv4, IPv6), ARP, ICMP, IPsec, IGMP, IPX, AppleTalk
2. Data link layer: ATM, SDLC, HDLC, CSLIP, SLIP, GFP, PLIP, IEEE 802.2 LLC, L2TP, IEEE 802.3, Frame Relay, ITU-T G.hn DLL, PPP, X.25
1. Physical layer: EIA/TIA-232, EIA/TIA-449, ITU-T V-Series, I.430, I.431, PDH, SONET/SDH, PON, OTN, DSL, IEEE 802.3, IEEE 802.11, IEEE 802.15, IEEE 802.16, IEEE 1394, ITU-T G.hn PHY, USB, Bluetooth, RS-232, RS-449

The Open Systems Interconnection (OSI) model (ISO/IEC 7498-1) is a product of the Open Systems Interconnection effort at the International Organization for Standardization. It is a prescription for characterizing and standardizing the functions of a communications system in terms of abstraction layers. Similar communication functions are grouped into logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection on that layer.

Communication in the OSI-Model (example with layers 3 to 5)

History

Work on a layered model of network architecture was started and the International Organization for Standardization (ISO) began to develop its OSI framework architecture. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The concept of a seven-layer model was provided by the work of Charles Bachman, Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, the fledgling Internet, NPLNET, EIN, the CYCLADES network and the work in IFIP WG6.1. The new design was documented in ISO 7498 and its various addenda.

In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it, and provided facilities for use by the layer above it. Protocols enabled an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions abstractly described the functionality provided to an (N)-layer by an (N-1) layer, where N was one of the seven layers of protocols operating in the local host.

The OSI standards documents are available from the ITU-T as the X.200-series of recommendations.[1] Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO, but only some of them without fees.[2]

Description of OSI layers

According to recommendation X.200, there are seven layers, labeled 1 to 7, with layer 1 at the bottom. Each layer is generically known as an N layer. An "N+1 entity" (at layer N+1) requests services from an "N entity" (at layer N).

At each level, two entities (N-entity peers) interact by means of the N protocol by transmitting protocol data units (PDU). A Service Data Unit (SDU) is a specific unit of data that has been passed down from an OSI layer to a lower layer, and which the lower layer has not yet encapsulated into a protocol data unit (PDU). An SDU is a set of data that is sent by a user of the services of a given layer, and is transmitted semantically unchanged to a peer service user. The PDU at layer N is the SDU of layer N-1. In effect the SDU is the 'payload' of a given PDU. That is, the process of changing an SDU to a PDU consists of an encapsulation process, performed by the lower layer. All the data contained in the SDU becomes encapsulated within the PDU. The layer N-1 adds headers or footers, or both, to the SDU, transforming it into a PDU of layer N-1. The added headers or footers are part of the process used to make it possible to get data from a source to a destination.

OSI model (data unit, layer, function):

Host layers
  Data             7. Application    Network process to application
  Data             6. Presentation   Data representation, encryption and decryption; convert machine-dependent data to machine-independent data
  Data             5. Session        Interhost communication, managing sessions between applications
  Segments         4. Transport      End-to-end connections, reliability and flow control

Media layers
  Packet/Datagram  3. Network        Path determination and logical addressing
  Frame            2. Data link      Physical addressing
  Bit              1. Physical       Media, signal and binary transmission
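A minimal sketch of this encapsulation step, with placeholder header and footer bytes standing in for whatever a real (N-1)-protocol would actually add, might look like this:

    def to_pdu(sdu, header=b"H", footer=b"F"):
        """Layer N-1 encapsulates the SDU handed down by layer N into its own PDU."""
        return header + sdu + footer

    def to_sdu(pdu, header=b"H", footer=b"F"):
        """The peer entity at layer N-1 strips its header and footer and hands the SDU,
        semantically unchanged, up to layer N."""
        assert pdu.startswith(header) and pdu.endswith(footer)
        return pdu[len(header):len(pdu) - len(footer)]

    layer_n_pdu = b"payload built by layer N"   # passed down, it becomes the SDU of layer N-1
    wire = to_pdu(layer_n_pdu)                  # b'Hpayload built by layer NF'
    assert to_sdu(wire) == layer_n_pdu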

Some orthogonal aspects, such as management and security, involve every layer. Security services are not related to a specific layer: they can be related to a number of layers, as defined by ITU-T X.800 Recommendation.[3] These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of transmitted data. Actually the availability of communication service is determined by network design and/or network management protocols. Appropriate choices for these are needed to protect against denial of service.

Layer 1: physical layer

The physical layer defines electrical and physical specifications for devices. In particular, it defines the relationship between a device and a transmission medium, such as a copper or fiber optical cable. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing, hubs, repeaters, network adapters, host bus adapters (HBA used in storage area networks) and more. The major functions and services performed by the physical layer are:

Establishment and termination of a connection to a communications medium.

Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.

Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.

Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI protocol is a transport layer protocol that runs over this bus. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data link layer. The same applies to other local-area networks, such as token ring, FDDI, ITU-T G.hn and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.

Layer 2: data link layer

The data link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multi-access media, was developed independently of the ISO work in IEEE Project 802. IEEE work assumed sublayering and management functions not required for WAN use. In modern practice, only error detection, not flow control using sliding window, is present in data link protocols such as Point-to-Point Protocol (PPP), and, on local area networks, the IEEE 802.2 LLC layer is not used for most protocols on the Ethernet, and on other local area networks, its flow control and acknowledgment mechanisms are rarely used. Sliding window flow control and acknowledgment is used at the transport layer by protocols such as TCP, but is still used in niches where X.25 offers performance advantages.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer which provides both error correction and flow control by means of a selective repeat sliding window protocol. Both WAN and LAN services arrange bits from the physical layer into logical sequences called frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not used by the layer.

WAN protocol architecture

Connection-oriented WAN data link protocols, in addition to framing, detect and may correct errors. They are also capable of controlling the rate of transmission. A WAN data link layer might implement a sliding window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for Synchronous Data Link Control (SDLC) and HDLC, and derivatives of HDLC such as LAPB and LAPD.

IEEE 802 LAN architecture

Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium, which is the function of a media access control (MAC) sublayer. Above this MAC sublayer is the media-

independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with addressing and multiplexing on multi-access media. While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN protocol, obsolete MAC layers include Token Ringand FDDI. The MAC sublayer detects but does not correct errors. Layer 3: network layer The network layer provides the functional and procedural means of transferring variable length data sequences from a source host on one network to a destination host on a different network (in contrast to the data link layer which connects hosts within the same network), while maintaining the quality of service requested by the transport layer. The network layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme values are chosen by the network engineer. The addressing scheme is not hierarchical. The network layer may be divided into three sublayers: Subnetwork access that considers protocols that deal with the interface to networks, such as X.25; Subnetwork-dependent convergence when it is necessary to bring the level of a transit network up to the level of networks on either side Subnetwork-independent convergence handles transfer across multiple networks. An example of this latter case is CLNP, or IPv6 ISO 8473. It manages the connectionless transfer of data one hop at a time, from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of erroneous packets so they may be discarded. In this scheme, IPv4 and IPv6 would have to be classed with X.25 as subnet access protocols because they carry interface addresses rather than node addresses. A number of layer-management protocols, a function defined in the Management Annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them. Layer 4: transport layer The transport layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are stateand connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The transport layer also provides the acknowledgement of the successful data transmission and sends the next data if no errors occurred. OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the least features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions,

although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the TP0-TP4 classes are shown in the following table:[4]

Feature                                                         TP0  TP1  TP2  TP3  TP4
Connection oriented network                                     Yes  Yes  Yes  Yes  Yes
Connectionless network                                          No   No   No   No   Yes
Concatenation and separation                                    No   Yes  Yes  Yes  Yes
Segmentation and reassembly                                     Yes  Yes  Yes  Yes  Yes
Error recovery                                                  No   Yes  No   Yes  Yes
Multiplexing and demultiplexing over a single virtual circuit   No   No   Yes  Yes  Yes
Explicit flow control                                           No   No   Yes  Yes  Yes
Retransmission on timeout                                       No   No   No   No   Yes
Reliable transport service                                      No   Yes  No   Yes  Yes
Reinitiate connection (if an excessive number of PDUs are unacknowledged)  No  Yes  No  Yes  No

An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Remember, however, that a post office manages only the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.
Layer 5: session layer
The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls. On this level, inter-process communication happens (SIGHUP, SIGKILL, End Process, etc.).
Layer 6: presentation layer

The presentation layer establishes context between application-layer entities, in which the higher-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units and passed down the stack. This layer provides independence from data representation (e.g., encryption) by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats and encrypts data to be sent across a network. It is sometimes called the syntax layer.[5] The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.
Layer 7: application layer
The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Some examples of application-layer implementations include:
On the OSI stack: FTAM File Transfer and Access Management Protocol, X.400 Mail, Common Management Information Protocol (CMIP).
On the TCP/IP stack: Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Simple Network Management Protocol (SNMP).
Cross-layer functions
There are some functions or services that are not tied to a given layer, but that can affect more than one layer. Examples include the following:
security services (telecommunication),[3] as defined by the ITU-T X.800 Recommendation;
management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities: there is a specific application-layer protocol, the common management information protocol (CMIP), and its corresponding service, the common management information service (CMIS); they need to interact with every layer in order to deal with their instances;
Multiprotocol Label Switching (MPLS), which operates at an OSI-model layer that is generally considered to lie between traditional definitions of layer 2 (data link layer) and layer 3 (network layer), and thus is often referred to as a "layer-2.5" protocol. It was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames;
ARP, which is used to translate IPv4 addresses (OSI layer 3) into Ethernet MAC addresses (OSI layer 2).
Interfaces
Neither the OSI Reference Model nor the OSI protocols specify any programming interfaces, other than deliberately abstract service specifications. Protocol specifications precisely define the interfaces between different computers, but the software interfaces inside computers, known as network sockets, are implementation-specific. For example, Microsoft Windows' Winsock, and Unix's Berkeley sockets and System V Transport Layer Interface, are interfaces between applications (layer 5 and above) and the transport (layer 4). NDIS and ODI are interfaces between the media (layer 2) and the network protocol (layer 3). Interface standards, except for the physical layer to media, are approximate implementations of OSI service specifications.
Examples

The following are examples of protocols, grouped by OSI layer, from the OSI protocol suite, the TCP/IP suite, Signalling System 7,[6] AppleTalk, IPX, SNA, UMTS and other technologies.
Layer 7, application - OSI suite: FTAM, X.400, X.500, DAP, ROSE, RTSE, ACSE,[7] CMIP;[8] TCP/IP suite: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, DHCP, SMPP, SMTP, SNMP, Telnet, RIP, BGP; Signalling System 7: INAP, MAP, TCAP, ISUP, TUP; AppleTalk: AFP, ZIP, RTMP, NBP; other examples: HL7, Modbus.
Layer 6, presentation - OSI suite: ISO/IEC 8823, X.226, ISO/IEC 9576-1, X.236; TCP/IP suite: MIME, SSL, TLS, XDR; AppleTalk: AFP; other examples: TDI, ASCII, EBCDIC, MIDI, MPEG.
Layer 5, session - OSI suite: ISO/IEC 8327, X.225, ISO/IEC 9548-1, X.235; TCP/IP suite: sockets (session establishment in TCP, RTP); AppleTalk: ASP, ADSP, PAP; IPX: NWLink; other examples: named pipes, NetBIOS, SAP, half duplex, full duplex, simplex, RPC, SOCKS, NBF.
Layer 4, transport - OSI suite: ISO/IEC 8073, TP0, TP1, TP2, TP3, TP4 (X.224), ISO/IEC 8602, X.234; TCP/IP suite: TCP, UDP, SCTP, DCCP; AppleTalk: ATP; IPX: SPX; other examples: NBF.
Layer 3, network - OSI suite: ISO/IEC 8208, X.25 (PLP), ISO/IEC 8878, X.223, ISO/IEC 8473-1, CLNP, X.233; TCP/IP suite: IP, IPsec, ICMP, IGMP, OSPF; Signalling System 7: SCCP, MTP; AppleTalk: DDP; IPX: IPX; UMTS: RRC (Radio Resource Control) and BMC (Broadcast/Multicast Control); other examples: NBF, Q.931, NDP, ARP (maps layer 3 to layer 2 addresses), IS-IS.
Layer 2, data link - OSI suite: ISO/IEC 7666, X.25 (LAPB), Token Bus, X.222, ISO/IEC 8802-2 LLC Type 1 and 2;[9] TCP/IP suite: PPP, SLIP, PPTP; Signalling System 7: MTP, Q.710; AppleTalk: LocalTalk, TokenTalk or EtherTalk, AppleTalk Remote Access, PPP; IPX: IEEE 802.3 framing, Ethernet II framing; SNA: SDLC; UMTS: Packet Data Convergence Protocol (PDCP),[10] LLC (Logical Link Control), MAC (Media Access Control); other examples: 802.3 (Ethernet), 802.11a/b/g/n MAC/LLC, 802.1Q (VLAN), ATM, HDP, FDDI, Fibre Channel, Frame Relay, HDLC, ISL, PPP, Q.921, Token Ring, CDP, ITU-T G.hn DLL, CRC, bit stuffing, ARQ, Data Over Cable Service Interface Specification (DOCSIS), interface bonding.
Layer 1, physical - OSI suite: X.25 (X.21bis, EIA/TIA-232, EIA/TIA-449, EIA-530, G.703);[9] Signalling System 7: MTP, Q.710; AppleTalk: RS-232, RS-422, STP, PhoneNet; UMTS: the UMTS physical layer (L1); other examples: RS-232, full duplex, RJ45, V.35, V.34, I.430, I.431, T1, E1, 10BASE-T, 100BASE-TX, 1000BASE-T, POTS, SONET, SDH, DSL, 802.11a/b/g/n PHY, ITU-T G.hn PHY, Controller Area Network, Data Over Cable Service Interface Specification (DOCSIS).

Comparison with TCP/IP model
In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict layers as in the OSI model.[11] RFC 3439 contains a section entitled "Layering considered harmful". However, TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols, namely the scope of the software application, the end-to-end transport connection, the internetworking range, and the scope of the direct links to other nodes on the local network. Even though the concept is different from the OSI model, these layers are nevertheless often compared with the OSI layering scheme in the following way: the Internet application layer includes the OSI application layer, presentation layer, and most of the session layer; its end-to-end transport layer includes the graceful close function of the OSI session layer as well as the OSI transport layer; the internetworking layer (Internet layer) is a subset of the OSI network layer (see above), while the link layer includes the OSI data link and physical layers, as well as parts of OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the network layer document. The presumably strict peer layering of the OSI model as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol may well be a transport or even an application-layer protocol in its own right.

Transport layer


In computer networking, the transport layer or layer 4 provides end-to-end communication services for applications[1] within a layered architecture of network components and protocols. The transport layer provides convenient services such as connection-oriented data stream support, reliability, flow control, and multiplexing. Transport layers are contained in both the TCP/IP model (RFC 1122),[2] which is the foundation of the Internet, and the Open Systems Interconnection (OSI) model of general networking. The definitions of the transport layer are slightly different in these two models. This article primarily refers to the TCP/IP model, in which TCP is largely for a convenient application programming interface to internet hosts, as opposed to the OSI-model definition of the transport layer. The most well-known transport protocol is the Transmission Control Protocol (TCP). It lent its name to the title of the entire Internet Protocol Suite, TCP/IP. It is used for connection-oriented transmissions, whereas the connectionless User Datagram Protocol (UDP) is used for simpler messaging transmissions. TCP is the more complex protocol, due to its stateful design incorporating reliable transmission and data stream services. Other prominent protocols in this group are the Datagram Congestion Control Protocol(DCCP) and the Stream Control Transmission Protocol (SCTP).

Contents [hide] 1 Services 2 Analysis 3 Protocols 4 Comparison of transport-layer protocols 5 Comparison of OSI transport protocols 6 References

Services
There are many services that can be optionally provided by a transport-layer protocol, and different protocols may or may not implement them.
Connection-oriented communication: Interpreting the connection as a data stream can provide many benefits to applications. It is normally easier to deal with than the underlying connectionless models, such as the Transmission Control Protocol's underlying Internet Protocol model of datagrams.
Byte orientation: Rather than processing the messages in the underlying communication system format, it is often easier for an application to process the data stream as a sequence of bytes. This simplification helps applications work with various underlying message formats.
Same order delivery: The network layer doesn't generally guarantee that packets of data will arrive in the same order that they were sent, but often this is a desirable feature. This is usually done through the use of segment numbering, with the receiver passing them to the application in order. This can cause head-of-line blocking.
Reliability: Packets may be lost during transport due to network congestion and errors. By means of an error detection code, such as a checksum, the transport protocol may check that the data is not corrupted, and verify correct receipt by sending an ACK or NACK message to the sender. Automatic repeat request schemes may be used to retransmit lost or corrupted data.
Flow control: The rate of data transmission between two nodes must sometimes be managed to prevent a fast sender from transmitting more data than can be supported by the receiving data buffer, causing a buffer overrun. This can also be used to improve efficiency by reducing buffer underrun.
Congestion avoidance: Congestion control can control traffic entry into a telecommunications network, so as to avoid congestive collapse by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks and by taking resource-reducing steps, such as reducing the rate of sending packets. For example, automatic repeat requests may keep the network in a congested state; this situation can be avoided by adding congestion avoidance to the flow control, including slow-start. This keeps the bandwidth consumption at a low level in the beginning of the transmission, or after packet retransmission.
Multiplexing: Ports can provide multiple endpoints on a single node. For example, the name on a postal address is a kind of multiplexing, and distinguishes between different recipients at the same location. Computer applications will each listen for information on their own ports, which enables the use of more than one network service at the same time, as illustrated in the sketch below. Multiplexing is part of the transport layer in the TCP/IP model, but of the session layer in the OSI model.
Analysis
The transport layer is responsible for delivering data to the appropriate application process on the host computers. This involves statistical multiplexing of data from different application processes, i.e. forming data packets, and adding source and destination port numbers in the header of each transport-layer data packet. Together with the source and destination IP address, the port numbers constitute a network socket, i.e. an identification address of the process-to-process communication. In the OSI model, this function is supported by the session layer.
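The short sketch below illustrates this port-based multiplexing with Python's standard socket module. It is only an illustration on the loopback interface; the port numbers 50001 and 50002 are arbitrary values chosen for the example and are not defined anywhere above.

```python
# Minimal sketch of transport-layer multiplexing via port numbers.
# Ports 50001 and 50002 are arbitrary example values.
import socket

# Two independent "services" on the same host, each bound to its own UDP port.
echo_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo_a.bind(("127.0.0.1", 50001))

echo_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo_b.bind(("127.0.0.1", 50002))

# A client sends to each service; the destination port in the UDP header is
# what steers each datagram to the matching receiving socket.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"for service A", ("127.0.0.1", 50001))
client.sendto(b"for service B", ("127.0.0.1", 50002))

data_a, addr_a = echo_a.recvfrom(1024)   # receives only the first datagram
data_b, addr_b = echo_b.recvfrom(1024)   # receives only the second datagram

# (source IP, source port, destination IP, destination port) identifies the
# process-to-process communication described above.
print(data_a, addr_a)
print(data_b, addr_b)

for s in (echo_a, echo_b, client):
    s.close()
```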

Some transport-layer protocols, for example TCP, but not UDP, support virtual circuits, i.e. provide connection-oriented communication over an underlying packet-oriented datagram network. A byte stream is delivered while hiding the packet-mode communication from the application processes. This involves connection establishment, dividing the data stream into packets called segments, segment numbering and reordering of out-of-order data. Finally, some transport-layer protocols, for example TCP, but not UDP, provide end-to-end reliable communication, i.e. error recovery by means of an error-detecting code and an automatic repeat request (ARQ) protocol. The ARQ protocol also provides flow control, which may be combined with congestion avoidance. UDP is a very simple protocol, and does not provide virtual circuits or reliable communication, delegating these functions to the application program. UDP packets are called datagrams, rather than segments. TCP is used for many protocols, including HTTP web browsing and email transfer. UDP may be used for multicasting and broadcasting, since retransmissions are not feasible for a large number of hosts. UDP typically gives higher throughput and shorter latency, and is therefore often used for real-time multimedia communication where occasional packet loss can be accepted, for example IPTV and IP telephony, and for online computer games.
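The toy sketch below illustrates the retransmission idea behind ARQ: a sender numbers a segment, waits for an acknowledgment, and retransmits on timeout. It is not TCP's actual algorithm (no sliding window, adaptive timers or congestion control), and the port number 50010 and the 0.5-second timeout are invented for the example.

```python
# Toy stop-and-wait ARQ over UDP: send one numbered segment, wait for an ACK,
# retransmit on timeout. Port 50010 and the timeout are arbitrary choices.
import socket

RECEIVER = ("127.0.0.1", 50010)

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(RECEIVER)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.settimeout(0.5)                   # retransmission timeout in seconds

def send_reliably(seq: int, payload: bytes, max_tries: int = 5) -> bool:
    segment = seq.to_bytes(4, "big") + payload
    for _ in range(max_tries):
        sender.sendto(segment, RECEIVER)
        # The "receiver" side: take delivery of the segment and emit an ACK
        # carrying the same sequence number back to the sender's address.
        data, addr = receiver.recvfrom(2048)
        receiver.sendto(b"ACK" + data[:4], addr)
        try:
            ack, _ = sender.recvfrom(64)
            if ack == b"ACK" + seq.to_bytes(4, "big"):
                return True              # acknowledged: move on to the next segment
        except socket.timeout:
            continue                     # ACK lost or late: retransmit
    return False

print(send_reliably(1, b"hello"))
for s in (sender, receiver):
    s.close()
```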


In many non-IP-based networks, for example X.25, Frame Relay and ATM, connection-oriented communication is implemented at the network layer or data link layer rather than the transport layer. In X.25, in telephone network modems and in wireless communication systems, reliable node-to-node communication is implemented at lower protocol layers. The OSI model defines five classes of transport protocols: TP0, providing the least error recovery, to TP4, which is designed for less reliable networks.

Protocols
The exact definition of what qualifies as a transport-layer protocol is not firm. The following is a short list:
ATP, AppleTalk Transaction Protocol
CUDP, Cyclic UDP
DCCP, Datagram Congestion Control Protocol
FCP, Fibre Channel Protocol
IL, IL Protocol
NBF, NetBIOS Frames protocol
RDP, Reliable Datagram Protocol
SCTP, Stream Control Transmission Protocol
SPX, Sequenced Packet Exchange
SST, Structured Stream Transport
TCP, Transmission Control Protocol
UDP, User Datagram Protocol
UDP-Lite
µTP, Micro Transport Protocol

Comparison of transport-layer protocols

Feature                         UDP       UDP-Lite  TCP          SCTP      DCCP            RUDP
Packet header size              8 bytes   8 bytes   20-60 bytes  12 bytes  12 or 16 bytes
Transport-layer packet entity   Datagram  Datagram  Segment      Datagram  Datagram        Datagram
Connection oriented             No        No        Yes          Yes       Yes             Yes
Reliable transport              No        No        Yes          Yes       No              Yes
Unreliable transport            Yes       Yes       No           Yes       Yes             Yes
Preserve message boundary       Yes       Yes       No           Yes       Yes             Unsure
Ordered delivery                No        No        Yes          Yes       No              Yes
Unordered delivery              Yes       Yes       No           Yes       Yes             Unsure
Data checksum                   Optional  Yes       Yes          Yes       Yes             Yes
Checksum size (bits)            16        16        16           32        16              Unsure
Partial checksum                No        Yes       No           No        Yes             Unsure
Path MTU                        No        No        Yes          Yes       Yes             No
Flow control                    No        No        Yes          Yes       No              Unsure
Congestion control              No        No        Yes          Yes       Yes             Yes
ECN support                     No        No        Yes          Yes       Yes             Unsure
Multiple streams                No        No        No           Yes       No              No
Multi-homing support            No        No        No           Yes       No              No
Bundling / Nagle                No        No        Yes          Yes       No              Unsure
NAT friendly[3]                 Yes       Yes       Yes          Yes[4]    Yes             Yes

Comparison of OSI transport protocols
The OSI model defines five classes of connection-mode transport protocols designated class 0 (TP0) to class 4 (TP4). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. All OSI connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the classes are shown in the following table:[5]

Service                                                         TP0  TP1  TP2  TP3  TP4
Connection oriented network                                     Yes  Yes  Yes  Yes  Yes
Connectionless network                                          No   No   No   No   Yes
Concatenation and separation                                    No   Yes  Yes  Yes  Yes
Segmentation and reassembly                                     Yes  Yes  Yes  Yes  Yes
Error recovery                                                  No   Yes  No   Yes  Yes
Multiplexing and demultiplexing over a single virtual circuit   No   No   Yes  Yes  Yes
Explicit flow control                                           No   No   Yes  Yes  Yes
Retransmission on timeout                                       No   No   No   No   Yes
Reliable transport service                                      No   Yes  No   Yes  Yes
Reinitiate connection (if an excessive number of PDUs are unacknowledged)  No  Yes  No  Yes  No

Internet protocol suite


The Internet protocol suite is the set of communications protocols used for the Internet and similar networks, and generally the most popular protocol stack for wide area networks. It is commonly known as TCP/IP, because of its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first networking protocols defined in this standard.

It is occasionally known as the DoD model due to the foundational influence of the ARPANET in the 1970s (operated by DARPA, an agency of the United States Department of Defense). TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination. It has four abstraction layers, each with its own protocols.[1][2] From lowest to highest, the layers are:
The link layer (commonly Ethernet) contains communication technologies for a local network.
The internet layer (IP) connects local networks, thus establishing internetworking.
The transport layer (TCP) handles host-to-host communication.
The application layer (for example HTTP) contains all protocols for specific data communications services on a process-to-process level (for example how a web browser communicates with a web server).
The TCP/IP model and related protocols are maintained by the Internet Engineering Task Force (IETF).
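A small sketch of how those layers look from a program's point of view: an application-layer HTTP request is written into a transport-layer TCP connection, while IP routing and the link layer are handled entirely by the operating system. The host example.com is only an illustrative choice.

```python
# Sketch of the layering in practice: application data (HTTP) over a TCP
# connection; lower layers are invisible to the program. "example.com" is an
# illustrative host, not one referenced elsewhere in this article.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as conn:  # TCP to port 80
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    conn.sendall(request.encode("ascii"))   # application data into the TCP stream
    response = b""
    while chunk := conn.recv(4096):         # read until the server closes
        response += chunk

print(response.split(b"\r\n", 1)[0])        # e.g. b'HTTP/1.1 200 OK'
```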

Contents [hide] 1 History 1.1 Early research 1.2 Specification 1.3 Adoption 2 Key architectural principles 3 Layers in the Internet protocol suite 3.1 Link layer 3.2 Internet layer 3.3 Transport layer 3.4 Application layer 3.5 Layer names and number of layers in the literature 3.6 OSI and TCP/IP layering differences 4 Implementations 5 See also 6 References 7 Further reading 8 External links

History

Diagram of the first internetworked connection
A Stanford Research Institute packet radio van, site of the first three-way internetworked transmission.
Early research

The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The network's design included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using a simple design, it became possible to connect almost any network to the ARPANET, irrespective of its local characteristics, thereby solving Kahn's initial problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string." As a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested. A computer called a router is provided with an interface to each network. It forwards packets back and forth between them.[3] Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.
Specification
From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification.[4] A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms.

Four versions were developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4. The last protocol is still in use today. In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on flag day, January 1, 1983, when the new protocols were permanently activated.[5]
Adoption
In March 1982, the US Department of Defense declared TCP/IP the standard for all military computer networking.[6] In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985 the first Interop conference was held, focusing on network interoperability via further adoption of TCP/IP. It was founded by Dan Lynch, an early Internet activist. From the beginning, it was attended by large corporations, such as IBM and DEC. Interoperability conferences have been held every year since then. Every year from 1985 through 1993, the number of attendees tripled.[citation needed] IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing internal protocols (SNA, XNS, etc.). In IBM, from 1984, Barry Appelman's group did TCP/IP development. (Appelman later moved to AOL to be the head of all its development efforts.) They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies began offering TCP/IP stacks for DOS and MS Windows, such as the company FTP Software, and the Wollongong Group.[7] The first VM/CMS TCP/IP stack came from the University of Wisconsin.[8] Back then, most of these TCP/IP stacks were written single-handedly by a few talented programmers. For example, John Romkey of FTP Software was the author of the MIT PC/IP package.[9] John Romkey's PC/IP implementation was the first IBM PC TCP/IP stack. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively.[10] The spread of TCP/IP was fueled further in June 1989, when AT&T agreed to place into the public domain the TCP/IP code developed for UNIX. Various vendors, including IBM, included this code in their own TCP/IP stacks. Many companies sold TCP/IP stacks for Windows until Microsoft released its own TCP/IP stack in Windows 95. This event was a little late in the evolution of the Internet, but it cemented TCP/IP's dominance over other protocols, which eventually disappeared. These protocols included IBM's SNA, OSI, Microsoft's native NetBIOS, and Xerox's XNS.[citation needed]
Key architectural principles
An early architectural document, RFC 1122, emphasizes architectural principles over layering.[11]
End-to-end principle: This principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.[12]

Robustness Principle: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." [13] "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features." [14] Layers in the Internet protocol suite

Two Internet hosts connected via two routers and the corresponding layers used at each hop. The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. Every other detail of the communication is hidden from each process. The underlying mechanisms that transmit data between the host computers are located in the lower protocol layers.

Encapsulation of application data descending through the layers described in RFC 1122
The Internet protocol suite uses encapsulation to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, the data being further encapsulated at each level. The "layers" of the protocol suite near the top are logically closer to the user application, while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction to isolate upper-layer protocols from the nitty-gritty detail of transmitting bits over, for example, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol. Even when the layers are examined, the assorted architectural documents (there is no single architectural model such as ISO 7498, the Open Systems Interconnection (OSI) model) have fewer and less rigidly defined layers than the OSI model, and thus provide an easier fit for real-world protocols. In point of fact, one frequently referenced document, RFC 1958, does not contain a stack of layers; it only refers to the existence of the "internetworking layer" and generally to "upper layers". The lack of emphasis on layering is a strong difference between the IETF and OSI approaches. RFC 1958 was intended as a 1996 "snapshot" of the architecture: "The Internet and its architecture have grown in evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this process of evolution is one of the main reasons for the technology's success, it nevertheless seems useful to record a snapshot of the current principles of the Internet architecture."
RFC 1122, entitled Host Requirements, is structured in paragraphs referring to layers, but the document refers to many other architectural principles and does not emphasize layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:
Application layer (process-to-process): This is the scope within which applications create user data and communicate this data to other processes or applications on another or the same host. The communications partners are often called peers. This is where the "higher level" protocols such as SMTP, FTP, SSH, HTTP, etc. operate.
Transport layer (host-to-host): The transport layer constitutes the networking regime between two network hosts, either on the local network or on remote networks separated by routers. The transport layer provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections.

This is where flow control, error correction, and connection protocols such as TCP exist. This layer deals with opening and maintaining connections between Internet hosts.
Internet layer (internetworking): The internet layer has the task of exchanging datagrams across network boundaries. It is therefore also referred to as the layer that establishes internetworking; indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next IP router that has the connectivity to a network closer to the final data destination.
Link layer: This layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer describes the protocols used to describe the local network topology and the interfaces needed to effect transmission of Internet layer datagrams to next-neighbor hosts (cf. the OSI data link layer).
The Internet protocol suite and the layered protocol stack design were in use before the OSI model was established. Since then, the TCP/IP model has been compared with the OSI model in books and classrooms, which often results in confusion because the two models use different assumptions, including about the relative importance of strict layering. This abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to, provide. Again, the original OSI model was extended to include connectionless services (OSIRM CL).[15] For example, IP is not designed to be reliable and is a best-effort delivery protocol. This means that all transport-layer implementations must choose whether or not to provide reliability and to what degree. UDP provides data integrity (via a checksum) but does not guarantee delivery; TCP provides both data integrity and delivery guarantee (by retransmitting until the receiver acknowledges the reception of the packet). This model lacks the formalism of the OSI model and associated documents, but the IETF does not use a formal model and does not consider this a limitation, as in the comment by David D. Clark, "We reject: kings, presidents and voting. We believe in: rough consensus and running code." Criticisms of this model, which have been made with respect to the OSI model, often do not consider ISO's later extensions to that model.
For multiaccess links with their own addressing systems (e.g. Ethernet) an address mapping protocol is needed. Such protocols can be considered to be below IP but above the existing link system. While the IETF does not use the terminology, this is a subnetwork dependent convergence facility according to an extension to the OSI model, the internal organization of the network layer (IONL).[16]
ICMP and IGMP operate on top of IP but do not transport data like UDP or TCP. Again, this functionality exists as layer management extensions to the OSI model, in its Management Framework (OSIRM MF).[17]
The SSL/TLS library operates above the transport layer (it uses TCP) but below application protocols. Again, there was no intention, on the part of the designers of these protocols, to comply with OSI architecture.
The link is treated like a black box here. This is fine for discussing IP (since the whole point of IP is that it will run over virtually anything). The IETF explicitly does not intend to discuss transmission systems, which is a less academic but practical alternative to the OSI model.
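The encapsulation idea described above can be sketched in a few lines of Python. The "headers" here are deliberately simplified placeholders, not the real TCP, IP or Ethernet header formats; the point is only that each layer prepends its own control information on the way down and strips it on the way up.

```python
# Sketch of encapsulation as data descends the stack (toy headers only).
import struct

def add_transport(payload: bytes, src_port: int, dst_port: int) -> bytes:
    return struct.pack("!HH", src_port, dst_port) + payload           # toy segment

def add_internet(segment: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    return src_ip + dst_ip + segment                                   # toy packet

def add_link(packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    return dst_mac + src_mac + packet                                  # toy frame

app_data = b"GET / HTTP/1.1\r\n\r\n"                                   # application layer
segment  = add_transport(app_data, 49152, 80)                          # transport layer
packet   = add_internet(segment, bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))  # internet layer
frame    = add_link(packet, b"\xaa" * 6, b"\xbb" * 6)                  # link layer

# The receiver peels the headers off in the opposite order.
recovered = frame[12:][8:][4:]
assert recovered == app_data
print(len(app_data), len(segment), len(packet), len(frame))
```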

The following is a description of each layer in the TCP/IP networking model, starting from the lowest level.
Link layer
The link layer is the networking scope of the local network connection to which a host is attached. This regime is called the link in Internet literature. This is the lowest component layer of the Internet protocols, as TCP/IP is designed to be hardware independent. As a result, TCP/IP can be implemented on top of virtually any hardware networking technology. The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on a given link can be controlled both in the software device driver for the network card, as well as in firmware or specialized chipsets. These perform data link functions such as adding a packet header to prepare it for transmission, then actually transmitting the frame over a physical medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to data link addressing, such as Media Access Control (MAC); however, all other aspects below that level are implicitly assumed to exist in the link layer, but are not explicitly defined. This is also the layer where packets may be selected to be sent over a virtual private network or other networking tunnel. In this scenario, the link layer data may be considered application data which traverses another instantiation of the IP stack for transmission or reception over another IP connection. Such a connection, or virtual link, may be established with a transport protocol or even an application-scope protocol that serves as a tunnel in the link layer of the protocol stack. Thus, the TCP/IP model does not dictate a strict hierarchical encapsulation sequence.
Internet layer
The internet layer has the responsibility of sending packets across potentially multiple networks. Internetworking requires sending data from the source network to the destination network. This process is called routing.[18] In the Internet protocol suite, the Internet Protocol performs two basic functions:
Host addressing and identification: This is accomplished with a hierarchical addressing system (see IP address).
Packet routing: This is the basic task of sending packets of data (datagrams) from source to destination by sending them to the next network node (router) closer to the final destination.
The internet layer is not only agnostic of application data structures, as the transport layer is, but it also does not distinguish between the operation of the various transport-layer protocols. So, IP can carry data for a variety of different upper-layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively. Some of the protocols carried by IP, such as ICMP (used to transmit diagnostic information about IP transmission) and IGMP (used to manage IP multicast data), are layered on top of IP but perform internetworking functions. This illustrates the differences in the architecture of the TCP/IP stack of the Internet and the OSI model.

The internet layer only provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding the transport-layer datagrams to an appropriate next-hop router for further relaying to its destination. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated by the standardization of Internet Protocol version 6 (IPv6) in 1998, and beginning production implementations in approximately 2006.
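A small sketch of the protocol numbering mentioned above: the IPv4 header carries an 8-bit protocol field (1 = ICMP, 2 = IGMP, 6 = TCP, 17 = UDP) telling the internet layer which upper-layer protocol the packet carries. The header below is a minimal, non-sendable illustration with a zero checksum and addresses from the 192.0.2.0/24 documentation range.

```python
# Sketch of the IPv4 header's 8-bit protocol field. Not a sendable packet:
# checksum is left as zero and the addresses are documentation addresses.
import socket
import struct

def ipv4_header(protocol: int, src: str, dst: str, payload_len: int) -> bytes:
    version_ihl = (4 << 4) | 5                 # IPv4, 5 x 32-bit words of header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0,                        # version/IHL, DSCP/ECN
        20 + payload_len,                      # total length
        0, 0,                                  # identification, flags/fragment offset
        64, protocol, 0,                       # TTL, protocol, checksum (0 here)
        socket.inet_aton(src), socket.inet_aton(dst),
    )

hdr = ipv4_header(17, "192.0.2.1", "192.0.2.2", payload_len=8)   # 17 = UDP
assert len(hdr) == 20
print("protocol number:", hdr[9])              # byte 9 of the header -> 17
```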

Transport layer
The transport layer establishes host-to-host connectivity, meaning it handles the details of data transmission that are independent of the structure of user data and the logistics of exchanging information for any particular purpose. Its responsibility includes end-to-end message transfer independent of the underlying network, along with error control, segmentation, flow control, congestion control, and application addressing (port numbers). End-to-end message transmission or connecting applications at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The transport layer can be thought of as a transport mechanism, e.g., a vehicle with the responsibility to make sure that its contents (passengers/goods) reach their destination safely and soundly, unless another protocol layer is responsible for safe delivery. The layer simply establishes a basic data channel that an application uses in its task-specific data exchange. For this purpose the layer establishes the concept of the port, a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service announcements or directory services. Since IP provides only a best-effort delivery, the transport layer is the first layer of the TCP/IP stack to offer reliability. IP can run over a reliable data link protocol such as the High-Level Data Link Control (HDLC). For example, TCP is a connection-oriented protocol that addresses numerous reliability issues to provide a reliable byte stream:
data arrives in order;
data has minimal error (i.e. correctness);
duplicate data is discarded;
lost or discarded packets are resent;
it includes traffic congestion control.
The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.
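The standardized and ephemeral port numbers mentioned above can be inspected directly from Python's socket module, as in the sketch below. getservbyname() consults the local services database (for example /etc/services), so the availability of individual entries depends on the system.

```python
# Sketch of well-known vs. ephemeral ports.
import socket

# Well-known service names resolved from the local services database.
print(socket.getservbyname("http", "tcp"))    # typically 80
print(socket.getservbyname("smtp", "tcp"))    # typically 25

# Binding to port 0 asks the operating system for an ephemeral port, the kind
# of short-lived port a client uses for its end of a connection.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
print("ephemeral port assigned:", s.getsockname()[1])
s.close()
```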

User Datagram Protocol is a connectionless datagram protocol. Like IP, it is a best-effort, "unreliable" protocol. Reliability is addressed through error detection using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is designed for real-time data such as streaming audio and video. The applications at any given network address are distinguished by their TCP or UDP port. By convention certain well-known ports are associated with specific applications. (See List of TCP and UDP port numbers.)
Application layer
The application layer contains the higher-level protocols used by most applications for network communication. Examples of application-layer protocols include the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP).[19] Data coded according to application-layer protocols are then encapsulated into one or (occasionally) more transport-layer protocols (such as TCP or UDP), which in turn use lower-layer protocols to effect actual data transfer. Since the IP stack defines no layers between the application and transport layers, the application layer must include any protocols that act like the OSI's presentation and session-layer protocols. This is usually done through libraries. Application-layer protocols generally treat the transport-layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate, although the applications are usually aware of key qualities of the transport-layer connection such as the endpoint IP addresses and port numbers. As noted above, layers are not necessarily clearly defined in the Internet protocol suite. Application-layer protocols are most often associated with client-server applications, and the more commonly used servers have specific ports assigned to them by the IANA: HTTP has port 80; Telnet has port 23; etc. Clients, on the other hand, tend to use ephemeral ports, i.e. port numbers assigned at random from a range set aside for the purpose. Transport and lower-level layers are largely unconcerned with the specifics of application-layer protocols. Routers and switches do not typically "look inside" the encapsulated traffic to see what kind of application protocol it represents; rather they just provide a conduit for it. However, some firewall and bandwidth throttling applications do try to determine what's inside, as with the Resource Reservation Protocol (RSVP). It's also sometimes necessary for Network Address Translation (NAT) facilities to take account of the needs of particular application-layer protocols. (NAT allows hosts on private networks to communicate with the outside world via a single visible IP address using port forwarding, and is an almost ubiquitous feature of modern domestic broadband routers.)
Layer names and number of layers in the literature
The following compares various networking models from the literature. The number of layers varies between three and seven.

Kurose[20] and Forouzan[21] present a five-layer "Internet model" or "TCP/IP protocol suite": application, transport, network, data link, physical. Comer[22] and Kozierok[23] use a "TCP/IP 5-layer reference model" of four layers plus one: application, transport, internet and network interface, with the hardware below. Stallings[24] describes a five-layer "TCP/IP model": application, host-to-host or transport, internet, network access, physical. Tanenbaum[25] also uses a five-layer "TCP/IP reference model". RFC 1122 (Internet STD 3, 1989) and the Cisco Academy[26] use four layers: application, transport, internet or internetwork, and link or network interface. Mike Padlipsky's 1982 "Arpanet Reference Model" (RFC 871) uses three layers: application/process, host-to-host, and network interface. The OSI model has seven: application, presentation, session, transport, network, data link, and physical.

Some of the networking models are from textbooks, which are secondary sources that may contravene the intent of RFC 1122 and other IETF primary sources.[27]
OSI and TCP/IP layering differences
The three top layers in the OSI model (the application layer, the presentation layer and the session layer) are not distinguished separately in the TCP/IP model, where it is just the application layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can run safely over the best-effort UDP transport. Different authors have interpreted the RFCs differently, about whether the link layer (and the TCP/IP model) covers OSI model layer 1 (physical layer) issues, or whether a hardware layer is assumed below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model, since these are commonly referred to in modern standards (for example, by IEEE and ITU).

This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2. The session layer roughly corresponds to the Telnet virtual terminal functionality[citation needed], which is part of text-based protocols such as the HTTP and SMTP TCP/IP model application-layer protocols. It also corresponds to TCP and UDP port numbering, which is considered part of the transport layer in the TCP/IP model. Some functions that would have been performed by an OSI presentation layer are realized at the Internet application layer using the MIME standard, which is used in application-layer protocols such as HTTP and SMTP. The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated[citation needed] that Internet protocol and architecture development is not intended to be OSI-compliant. RFC 3439, addressing Internet architecture, contains a section entitled "Layering Considered Harmful".[27] Conflicts are apparent also in the original OSI model, ISO 7498, when not considering the annexes to this model (e.g., the ISO 7498/4 Management Framework) or the ISO 8648 Internal Organization of the Network Layer (IONL). When the IONL and Management Framework documents are considered, ICMP and IGMP are neatly defined as layer-management protocols for the network layer. In like manner, the IONL provides a structure for "subnetwork dependent convergence facilities" such as ARP and RARP. IETF protocols can be encapsulated recursively, as demonstrated by tunneling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunneling at the network layer.
Implementations
No specific hardware or software implementation is required by the protocols or the layered model, so there are many. Most computer operating systems in use today, including all consumer-targeted systems, include a TCP/IP implementation. A minimally acceptable implementation includes the following protocols, listed from most essential to least essential: IP, ARP, ICMP, UDP, TCP and sometimes IGMP. In principle, it is possible to support only one transport protocol, such as UDP, but this is rarely done, because it limits usage of the whole implementation. IPv6, beyond its own version of ARP (NDP), ICMP (ICMPv6) and IGMP (IGMPv6), has some additional required functions, and often is accompanied by an integrated IPSec security layer. Other protocols could be easily added later (possibly being implemented entirely in userspace), such as DNS for resolving domain names to IP addresses, or DHCP for automatically configuring network interfaces. Normally, application programmers are concerned only with interfaces in the application layer and often also in the transport layer, while the layers below are services provided by the TCP/IP stack in the operating system. Most IP implementations are accessible to programmers through sockets and APIs. Unique implementations include Lightweight TCP/IP, an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.

Microcontroller firmware in the network adapter typically handles link issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical components below the link layer, typically using an application-specific integrated circuit (ASIC) chipset for each network interface or other physical standard. High-performance routers are to a large extent based on fast non-programmable digital electronics, carrying out link-level switching.

Tunneling protocol
Computer networks use a tunneling protocol when one network protocol (the delivery protocol) encapsulates a different payload protocol. By using tunneling one can (for example) carry a payload over an incompatible delivery network, or provide a secure path through an untrusted network. Tunneling typically contrasts with a layered protocol model such as those of OSI or TCP/IP. The delivery protocol usually (but not always) operates at a higher level in the model than does the payload protocol, or at the same level. To understand a particular protocol stack, network engineers must understand both the payload and delivery protocol sets. As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a protocol running over IP (IP protocol number 47), often serves to carry IP packets, with RFC 1918 private addresses, over the Internet using delivery packets with public IP addresses. In this case, the delivery and payload protocols are compatible, but the payload addresses are incompatible with those of the delivery network. In contrast, an IP payload might believe it sees a data link layer delivery when it is carried inside the Layer 2 Tunneling Protocol (L2TP), which appears to the payload mechanism as a protocol of the data link layer. L2TP, however, actually runs over the transport layer using the User Datagram Protocol (UDP) over IP. The IP in the delivery protocol could run over any data-link protocol, from IEEE 802.2 over IEEE 802.3 (i.e., standards-based Ethernet) to the Point-to-Point Protocol (PPP) over a dialup modem link. Tunneling protocols may use data encryption to transport insecure payload protocols over a public network (such as the Internet), thereby providing VPN functionality. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway.
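The essence of a tunnel, carrying one protocol's packets as opaque payload inside another protocol's packets, can be sketched with a toy example. The 2-byte tunnel header and UDP port 50020 below are invented purely for illustration; real tunnels such as GRE or L2TP each define their own header formats and delivery mechanisms.

```python
# Toy tunnel: an arbitrary "inner packet" is carried as opaque bytes inside an
# outer UDP datagram (the delivery protocol), prefixed with a small tunnel header.
import socket
import struct

TUNNEL_ENDPOINT = ("127.0.0.1", 50020)   # example address for the far tunnel end
TUNNEL_ID = 7                            # example identifier carried in the header

def encapsulate(inner_packet: bytes) -> bytes:
    return struct.pack("!H", TUNNEL_ID) + inner_packet

def decapsulate(outer_payload: bytes) -> tuple:
    (tunnel_id,) = struct.unpack("!H", outer_payload[:2])
    return tunnel_id, outer_payload[2:]

endpoint = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
endpoint.bind(TUNNEL_ENDPOINT)

# The inner packet could itself be a full IP packet or Ethernet frame; here it
# is just a stand-in byte string.
inner = b"\x45\x00 stand-in payload packet bytes"
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(encapsulate(inner), TUNNEL_ENDPOINT)

outer, _ = endpoint.recvfrom(65535)
tunnel_id, recovered = decapsulate(outer)
assert tunnel_id == TUNNEL_ID and recovered == inner

for s in (sender, endpoint):
    s.close()
```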


Contents [hide]

1 Secure shell tunneling 2 Tunneling to circumvent firewall policy 3 See also 4 External links 5 Notes

Secure shell tunneling

A secure shell (SSH) tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file-system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file-system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel. Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security.

To set up an SSH tunnel, one configures an SSH client to forward a specified local port to a port on the remote machine. Once the SSH tunnel has been established, the user can connect to the specified local port to access the network service. The local port need not have the same port number as the remote port.

SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services, so long as a site allows outgoing connections. For example, an organization may prohibit a user from accessing Internet web pages (port 80) directly without passing through the organization's proxy filter (which provides the organization with a means of monitoring and controlling what the user sees through the web). But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To access the remote web server, users would point their browser to the local port at http://localhost/.

Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5 proxy. In this case users can configure their applications to use their local SOCKS proxy server. This gives more flexibility than creating an SSH tunnel to a single port as previously described. SOCKS can free the user from the limitations of connecting only to a predefined remote port and server. If an application doesn't support SOCKS, one can use a "socksifier" to redirect the application to the local SOCKS proxy server. Some "socksifiers" support SSH directly, thus avoiding the need for an SSH client.

Tunneling to circumvent firewall policy

Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy. Another HTTP-based tunneling method uses the HTTP CONNECT method/command, as sketched below. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, and relays data between that server:port and the client connection. Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict access to the CONNECT method. The proxy allows access only to a whitelist of specific authorized servers.
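A minimal sketch of the CONNECT mechanism using Python's standard http.client module; the proxy and target names and ports are hypothetical placeholders:

import http.client

PROXY_HOST, PROXY_PORT = "proxy.example.org", 3128     # hypothetical CONNECT-capable proxy
TARGET_HOST, TARGET_PORT = "www.example.com", 443      # hypothetical target server

# set_tunnel() makes the client send "CONNECT www.example.com:443" to the proxy,
# which then relays raw bytes between the client and the target server.
conn = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT)
conn.set_tunnel(TARGET_HOST, TARGET_PORT)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
conn.close()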

Real-time Transport Protocol


The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over IP networks. RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications, television services and web-based push-to-talk features.

RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is originated and received on even port numbers and the associated RTCP communication uses the next higher odd port number. RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol which assists in setting up connections across the network.

RTP was developed by the Audio-Video Transport Working Group of the Internet Engineering Task Force (IETF) and first published in 1996 as RFC 1889, superseded by RFC 3550 in 2003.


Contents [hide] 1 Overview 1.1 Protocol components 1.2 Sessions 2 Profiles and Payload formats 3 Packet header 4 RTP-based systems 5 RFC references 6 See also 7 Notes 8 References 9 External links

Overview

RTP is designed for end-to-end, real-time transfer of stream data. The protocol provides facilities for jitter compensation and detection of out-of-sequence arrival of data, which are common during transmissions on an IP network. RTP supports data transfer to multiple destinations through IP multicast.[1] RTP is regarded as the primary standard for audio/video transport in IP networks and is used with an associated profile and payload format.[2]

Real-time multimedia streaming applications require timely delivery of information and can tolerate some packet loss to achieve this goal. For example, loss of a packet in an audio application may result in loss of a fraction of a second of audio data, which can be made unnoticeable with suitable error concealment algorithms.[3] The Transmission Control Protocol (TCP), although standardized for RTP use,[4] is not normally used in RTP applications because TCP favors reliability over timeliness. Instead, the majority of RTP implementations are built on the User Datagram Protocol (UDP).[3] Other transport protocols specifically designed for multimedia sessions are SCTP[5] and DCCP, although, as of 2010, they are not in widespread use.[6][not in citation given]

RTP was developed by the Audio/Video Transport working group of the IETF standards organization. RTP is used in conjunction with other protocols such as H.323 and RTSP.[2] The RTP standard defines a pair of protocols, RTP and RTCP. RTP is used for transfer of multimedia data, and RTCP is used to periodically send control information and QoS parameters.[7]

Protocol components

The RTP specification describes two sub-protocols:

The data transfer protocol, RTP, which deals with the transfer of real-time data. Information provided by this protocol includes timestamps (for synchronization), sequence numbers (for packet loss and reordering detection) and the payload format, which indicates the encoded format of the data.[8]
A control protocol, RTCP, used to specify quality of service (QoS) feedback and synchronization between the media streams. The bandwidth of RTCP traffic compared to RTP is small, typically around 5%.[8][9]

In addition, RTP-based systems may use:

An optional signaling protocol such as H.323 or the Session Initiation Protocol (SIP)
An optional media description protocol such as the Session Description Protocol

Sessions

An RTP session is established for each multimedia stream. A session consists of an IP address with a pair of ports for RTP and RTCP. For example, audio and video streams will have separate RTP sessions, enabling a receiver to deselect a particular stream.[10] The ports which form a session are negotiated using other protocols such as RTSP (using SDP in the setup method)[11] and SIP. According to the specification, an RTP port should be even and the RTCP port is the next higher odd port number. RTP and RTCP typically use unprivileged UDP ports (1024 to 65535),[12] but may use other transport protocols (most notably, SCTP and DCCP) as well, as the protocol design is transport independent.

Profiles and Payload formats

See also: RTP audio video profile

One of the design considerations of RTP was to support a range of multimedia formats (such as H.264, MPEG-4, MJPEG, MPEG, etc.) and allow new formats to be added without revising the RTP standard. The design of RTP is based on the architectural principle known as application level framing (ALF). The information required by a specific application's needs is not included in the generic RTP header, but is instead provided through RTP profiles and payload formats.[7] For each class of application (e.g., audio, video), RTP defines a profile and one or more associated payload formats.[7] A complete specification of RTP for a particular application usage requires a profile and payload format specification(s).[13]:71 The profile defines the codecs used to encode the payload data and their mapping to payload format codes in the Payload Type (PT) field of the RTP header (see below).
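As a small illustration of the port pairing rule described under Sessions above, a sketch in plain Python (the helper name and port range handling are illustrative assumptions, not taken from the text):

import random

def allocate_rtp_rtcp_ports(low: int = 1024, high: int = 65534) -> tuple:
    """Pick an even RTP port from the unprivileged range and pair it with the next odd port for RTCP."""
    rtp_port = random.randrange(low, high, 2)   # step 2 from an even start keeps the choice even
    rtcp_port = rtp_port + 1                    # next higher odd port carries RTCP
    return rtp_port, rtcp_port

rtp, rtcp = allocate_rtp_rtcp_ports()
print("RTP on UDP port", rtp, "- RTCP on UDP port", rtcp)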
Each profile is accompanied by several payload format specifications, each of which describes the transport of particular encoded data.[2] Some of the audio payload formats include G.711, G.723, G.726, G.729, GSM, QCELP, MP3 and DTMF, and some of the video payload formats include H.261, H.263,[14] H.264 and MPEG-4.[14][15] Examples of RTP profiles include:

The RTP profile for Audio and video conferences with minimal control (RFC 3551), which defines a set of static payload type assignments and a mechanism for mapping between a payload format and a payload type identifier (in the header) using the Session Description Protocol (SDP); a few of these static assignments are shown in the sketch below.
The Secure Real-time Transport Protocol (SRTP) (RFC 3711), which defines a profile of RTP that provides cryptographic services for the transfer of payload data.[16]
The experimental Control Data Profile for RTP (RTP/CDP[17]) for machine-to-machine communications.
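To illustrate the static payload type assignments defined by RFC 3551, a small excerpt rendered as a Python mapping; the values shown follow the RFC's static table, while the helper function is purely illustrative:

# A few of RFC 3551's static payload type (PT) assignments (excerpt, not the full table).
STATIC_PAYLOAD_TYPES = {
    0:  ("PCMU", "audio", 8000),    # G.711 mu-law
    3:  ("GSM",  "audio", 8000),
    8:  ("PCMA", "audio", 8000),    # G.711 A-law
    9:  ("G722", "audio", 8000),
    14: ("MPA",  "audio", 90000),   # MPEG audio (e.g. MP3)
    26: ("JPEG", "video", 90000),
    31: ("H261", "video", 90000),
    34: ("H263", "video", 90000),
}

def describe_payload_type(pt: int) -> str:
    name, kind, clock = STATIC_PAYLOAD_TYPES.get(pt, ("dynamic/unknown", "?", 0))
    return "PT %d: %s (%s, clock rate %d Hz)" % (pt, name, kind, clock)

print(describe_payload_type(0))   # PT 0: PCMU (audio, clock rate 8000 Hz)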

Packet header

RTP packet header
bit offset | 0-1 | 2 | 3 | 4-7 | 8 | 9-15 | 16-31
0          | Version | P | X | CC | M | PT | Sequence Number
32         | Timestamp
64         | SSRC identifier
96         | CSRC identifiers ...
96+32×CC   | Profile-specific extension header ID | Extension header length
128+32×CC  | Extension header ...
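A minimal sketch (Python standard library; the sample packet bytes are made up) of unpacking the fixed 12-byte portion of this header according to the layout above; the field descriptions that follow explain each value:

import struct

def parse_rtp_fixed_header(packet: bytes) -> dict:
    """Unpack the fixed 12-byte RTP header (a sketch; no extension or CSRC handling)."""
    if len(packet) < 12:
        raise ValueError("RTP packet shorter than the 12-byte fixed header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version":         b0 >> 6,          # 2 bits
        "padding":         (b0 >> 5) & 0x1,  # 1 bit
        "extension":       (b0 >> 4) & 0x1,  # 1 bit
        "csrc_count":      b0 & 0x0F,        # 4 bits
        "marker":          b1 >> 7,          # 1 bit
        "payload_type":    b1 & 0x7F,        # 7 bits
        "sequence_number": seq,              # 16 bits
        "timestamp":       timestamp,        # 32 bits
        "ssrc":            ssrc,             # 32 bits
    }

# Made-up example: version 2, PT 0 (PCMU), sequence 1, timestamp 160, SSRC 0x12345678.
sample = struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0x12345678)
print(parse_rtp_fixed_header(sample))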

The RTP header has a minimum size of 12 bytes. After the header, optional header extensions may be present. This is followed by the RTP payload, the format of which is determined by the particular class of application.[18] The fields in the header are as follows:

Version: (2 bits) Indicates the version of the protocol. The current version is 2.[19]

P (Padding): (1 bit) Used to indicate if there are extra padding bytes at the end of the RTP packet. Padding might be used to fill up a block of a certain size, for example as required by an encryption algorithm. The last byte of the padding contains the number of padding bytes that were added (including itself).[19][13]:12

X (Extension): (1 bit) Indicates the presence of an extension header between the standard header and payload data. This is application or profile specific.[19]

CC (CSRC count): (4 bits) Contains the number of CSRC identifiers (defined below) that follow the fixed header.[13]:12

M (Marker): (1 bit) Used at the application level and defined by a profile. If it is set, it means that the current data has some special relevance for the application.[13]:13

PT (Payload type): (7 bits) Indicates the format of the payload and determines its interpretation by the application. This is specified by an RTP profile. For example, see RTP Profile for audio and video conferences with minimal control (RFC 3551).[20]

Sequence number: (16 bits) The sequence number is incremented by one for each RTP data packet sent and is to be used by the receiver to detect packet loss and to restore packet sequence. RTP does not specify any action on packet loss; it is left to the application to take appropriate action. For example, video applications may play the last known frame in place of the missing frame.[21] According to RFC 3550, the initial value of the sequence number should be random to make known-plaintext attacks on encryption more difficult.[13]:13 RTP provides no guarantee of delivery, but the presence of sequence numbers makes it possible to detect missing packets.[1]

Timestamp: (32 bits) Used to enable the receiver to play back the received samples at appropriate intervals. When several media streams are present, the timestamps are independent in each stream, and may not be relied upon for media synchronization. The granularity of the timing is application specific. For example, an audio application that samples data once every 125 μs (8 kHz, a common sample rate in digital telephony) could use that value as its clock resolution. The clock granularity is one of the details that is specified in the RTP profile for an application.[21]

SSRC: (32 bits) Synchronization source identifier uniquely identifies the source of a stream. The synchronization sources within the same RTP session will be unique.[13]:15

CSRC: Contributing source IDs enumerate contributing sources to a stream which has been generated from multiple sources.[13]:15

Extension header: (optional) The first 32-bit word contains a profile-specific identifier (16 bits) and a length specifier (16 bits) that indicates the length of the extension (EHL = extension header length) in 32-bit units, excluding the 32 bits of the extension header.[13]:17

RTP-based systems

A complete network-based system will include other protocols and standards in conjunction with RTP. Protocols like SIP, RTSP, H.225 and H.245 are used for session initiation, control and termination. Other standards like H.264, MPEG, H.263, etc., are used to encode the payload data as specified via the RTP profile.[22] An RTP sender captures the multimedia data, which is then encoded, framed and transmitted as RTP packets with appropriate timestamps and increasing sequence numbers. Depending on the RTP profile in use, the Payload Type field is set. The RTP receiver captures the RTP packets, detects missing packets, and may perform reordering of packets. The frames are decoded depending on the payload format and presented to the end user.[22]

RFC references

RFC 3550, Standard 64, RTP: A Transport Protocol for Real-Time Applications
RFC 3551, Standard 65, RTP Profile for Audio and Video Conferences with Minimal Control
RFC 6184, Proposed Standard, RTP Payload Format for H.264 Video
RFC 3984, Obsolete, RTP Payload Format for H.264 Video
RFC 4103, RTP Payload Format for Text Conversation
RFC 3640, RTP Payload Format for Transport of MPEG-4 Elementary Streams
RFC 3016, RTP Payload Format for MPEG-4 Audio/Visual Streams
RFC 2250, Proposed Standard, RTP Payload Format for MPEG1/MPEG2 Video
RFC 4175, RTP Payload Format for Uncompressed Video
RFC 4695 (obsoleted by RFC 6295), RTP Payload Format for MIDI
RFC 4696, An Implementation Guide for RTP MIDI

Maximum transmission unit

In computer networking, the maximum transmission unit (MTU) of a communications protocol of a layer is the size (in bytes) of the largest protocol data unit that the layer can pass onwards. MTU parameters usually appear in association with a communications interface (NIC, serial port, etc.). Standards (Ethernet, for example) can fix the size of an MTU; or systems (such as point-to-point serial links) may decide MTU at connect time.

A larger MTU brings greater efficiency because each packet carries more user data while protocol overheads, such as headers or underlying per-packet delays, remain fixed; the resulting higher efficiency means a slight improvement in bulk protocol throughput. A larger MTU also means processing of fewer packets for the same amount of data. In some systems, per-packet processing can be a critical performance limitation. However, this gain is not without some downside. Large packets can occupy a slow link for some time, causing greater delays to following packets and increasing lag and minimum latency. For example, a 1500-byte packet, the largest allowed by Ethernet at the network layer (and hence over most of the Internet), ties up a 14.4k modem for about one second.

Large packets are also problematic in the presence of communications errors. Corruption of a single bit in a packet requires that the entire packet be retransmitted. At a given bit error rate, larger packets are more likely to be corrupted. Retransmissions of larger packets take longer.

Contents [hide]

1 Table of MTUs of common media 2 IP (Internet protocol)

2.1 Path MTU Discovery 3 ATM backbones, an example of MTU tuning 4 MTU in other standards 5 Disruption 6 See also 7 References 8 External links

Table of MTUs of common media

Note: the MTUs in this section are given as the maximum size of IP packet that can be transmitted without fragmentation - including IP headers but excluding headers from lower levels in the protocol stack. The MTU must not be confused with the minimum datagram size that all hosts must be prepared to accept, which has a value of 576 for IPv4[1] and of 1280 for IPv6.[2]

Media | Maximum Transmission Unit (bytes) | Notes
Internet IPv4 Path MTU | At least 68[3] | Practical path MTUs are generally higher. IPv4 links must be able to forward packets of size up to 68 bytes. Systems may use Path MTU Discovery[4] to find the actual path MTU. This should not be mistaken with the packet size every host must be able to handle, which is 576.[5]
Internet IPv6 Path MTU | At least 1280[6] | Practical path MTUs are generally higher. Systems must use Path MTU Discovery[7] to find the actual path MTU.
Ethernet v2 | 1500[8] | Nearly all IP over Ethernet implementations use the Ethernet V2 frame format.
Ethernet with LLC and SNAP, PPPoE | 1492[9] | Rarely used
Ethernet Jumbo Frames | 1500-9000 | The limit varies by vendor. For correct interoperation, the whole Ethernet network must have the same MTU. Jumbo frames are usually only seen in special purpose networks.
WLAN (802.11) | 7981[10] |
Token Ring (802.5) | 4464 |
FDDI | 4352[4] |

IP (Internet protocol)

DARPA designed the Internet protocol suite to work over many networking technologies, each of which may use packets of a different size. While a host will know the MTU of its own interface and possibly that of its peers (from initial handshakes), it will not initially know the lowest MTU in a chain of links to any other peers. Another potential problem is that higher-level protocols may create packets larger than a particular link supports.

To get around this issue, IP allows fragmentation: dividing the datagram into pieces, each small enough to pass over the single link that is being fragmented for, using the MTU parameter configured for that interface. This fragmentation process takes place at the IP layer (OSI layer 3) and marks packets it fragments as such, so that the IP layer of the destination host knows it should reassemble the packets into the original datagram. This method implies a number of possible drawbacks:

All fragments of a packet must arrive for the packet to be considered received. If the network drops any fragment, the entire packet is lost.

When the size of most or all packets exceeds the MTU of a particular link that has to carry those packets, almost everything has to be fragmented. In certain cases the overhead this causes can be considered unreasonable or unnecessary. For example, various tunneling situations exceed the MTU by very little as they add just a header's worth of data. The addition is small, but each packet now has to be sent in two fragments, the second of which carries very little payload. The same amount of payload is being moved, but every intermediate router has to do double the work in terms of header parsing and routing decisions.

As it is normal to maximize the payload in every fragment, in general as well as when fragmenting, any further fragmentation that turns out to be necessary will increase the overhead even more.

There is no simple method to discover the MTU of links beyond a node's direct peers.

The Internet Protocol requires that hosts must be able to process IP datagrams of at least 576 bytes (for IPv4) or 1280 bytes (for IPv6). However, this does not preclude data link layers with an MTU smaller than IP's minimum MTU from conveying IP data. For example, according to IPv6's specification, if a particular data link layer physically cannot deliver an IP datagram of 1280 bytes in a single frame, then the link layer MUST provide its own fragmentation and reassembly mechanism, separate from IP's own fragmentation mechanism, to ensure that a 1280-byte IP datagram can be delivered, intact, to the IP layer.

Path MTU Discovery

Main article: Path MTU Discovery

The Internet Protocol defines the "Path MTU" of an Internet transmission path as the smallest MTU of any of the IP hops of the "path" between a source and destination. Put another way, the path MTU is the largest packet size that can traverse this path without suffering fragmentation. RFC 1191 (IPv4) and RFC 1981 (IPv6) describe "Path MTU Discovery", a technique for determining the path MTU between two IP hosts. It works by setting the DF (Don't Fragment) option in the IP headers of outgoing packets. Any device along the path whose MTU is smaller than the packet will drop such packets and send back an ICMP "Destination Unreachable (Datagram Too Big)" message containing its MTU. This information allows the source host to reduce its assumed path MTU appropriately. The process repeats until the MTU becomes small enough to traverse the entire path without fragmentation.

Unfortunately, increasing numbers of networks drop ICMP traffic (e.g. to prevent denial-of-service attacks), which prevents path MTU discovery from working. One often detects such blocking in the cases where a connection works for low-volume data but hangs as soon as a host sends a large block of data. For example, with IRC a connecting client might see the initial messages up to and including the initial ping (sent by the server as an anti-spoofing measure), but get no response after that. This is because the large set of welcome messages are sent out in packets bigger than the real MTU. Also, in an IP network, the path from the source address to the destination address often gets modified dynamically, in response to various events (load-balancing, congestion, outages, etc.)
- this could result in the path MTU changing (sometimes repeatedly) during a transmission, which may introduce further packet drops before the host finds the new safe MTU.

Most Ethernet LANs use an MTU of 1500 bytes (modern LANs can use Jumbo frames, allowing for an MTU up to 9000 bytes); however, border protocols like PPPoE will reduce this. The difference between the MTU seen by end-nodes (e.g. 1500) and the Path MTU causes Path MTU Discovery to come into effect, with the possible result of making some sites behind badly configured firewalls unreachable. One can possibly work around this, depending on which part of the network one controls; for example one can change the MSS (maximum segment size) in the initial packet that sets up the TCP connection at one's firewall. RFC 4821, Packetization Layer Path MTU Discovery, describes a Path MTU Discovery technique which responds more robustly to ICMP filtering.
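As a rough, Linux-specific sketch of how an application can observe the path MTU the kernel has discovered for a route (the numeric socket option values are assumed from Linux's <linux/in.h> and the destination address is a placeholder; neither comes from the original text):

import socket

# Linux-specific option numbers (assumed); Python does not necessarily expose
# these as named constants on every platform.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2      # always set DF, i.e. rely on Path MTU Discovery
IP_MTU = 14             # read the kernel's current path MTU estimate for the route

dest = ("192.0.2.1", 9)                              # hypothetical destination

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(dest)                                      # associates the socket with a route
s.send(b"x" * 1400)                                  # sending may trigger or refresh discovery
path_mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
print("Kernel's current path MTU estimate:", path_mtu)
s.close()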

ATM backbones, an example of MTU tuning

Sometimes the demands of efficiency encourage artificially declaring a reduced MTU in software, below the true maximum possible length supported - for example, where an ATM (Asynchronous Transfer Mode) network carries IP traffic. Some providers, particularly those with a telephony background, use ATM on their internal backbone network.

ATM operates at optimum efficiency when packet length is a multiple of 48 bytes. This is because ATM is sent as a stream of fixed-length packets (known as 'cells'), each of which can carry a payload of 48 bytes of user data with 5 bytes of overhead, for a total cost of 53 bytes per cell. So the total transmitted data length is 53 * ncells bytes, where ncells = the number of required cells = INT((payload_length + 47)/48). So in the worst case, where the total length is (48*n + 1) bytes, one additional cell is needed to transmit the one last byte of payload, the final cell costing an extra 53 transmitted bytes, 47 of which are padding. For this reason, artificially declaring a reduced MTU in software maximises protocol efficiency at the ATM layer by making the ATM AAL5 total payload length a multiple of 48 bytes whenever possible.

For example, 31 completely filled ATM cells carry a payload of 31*48 = 1488 bytes. Taking this figure of 1488 and subtracting from it any overheads contributed by all relevant higher protocols, we can obtain a suggested value for an artificially reduced optimal MTU. In the case where the user would normally send 1500-byte packets, sending between 1489 and 1536 bytes requires an additional fixed cost of 53 bytes transmitted, in the form of one extra ATM cell. For the example of IP over DSL connections using PPPoA/VC-MUX, again choosing to fill 31 ATM cells as before, we obtain a desired optimal reduced MTU figure of 1478 = 31*48 - 10, taking into account an overhead of 10 bytes consisting of a Point-to-Point Protocol overhead of 2 bytes and an AAL5 overhead of 8 bytes. This gives a total cost of 31*53 = 1643 bytes transmitted via ATM from a 1478-byte packet passed to PPPoA. In the case of IP sent over ADSL using PPPoA, the figure of 1478 would be the total length of the IP packet including IP headers. So in this example, keeping to a self-imposed reduced MTU of 1478, as opposed to sending IP packets of total length 1500, saves 53 bytes per packet at the ATM layer at a cost of a 22-byte reduction of the length of IP packets (the sketch below works through this arithmetic).

RFC 2516 prescribes a maximum MTU for PPPoE/DSL connections of 1492 bytes: the 1500-byte maximum Ethernet payload minus 8 bytes of PPPoE headers (2 bytes for the PPP overhead, and 6 bytes for the PPPoE header). This will not necessarily fill an integer number of ATM cells.
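A short worked sketch of the ATM cell arithmetic described above (plain Python; the function names are illustrative only):

ATM_CELL_PAYLOAD = 48   # user bytes per cell
ATM_CELL_TOTAL = 53     # bytes on the wire per cell (48 payload + 5 overhead)

def atm_cells(payload_length: int) -> int:
    """Number of cells needed for an AAL5 payload of the given length: INT((len+47)/48)."""
    return (payload_length + ATM_CELL_PAYLOAD - 1) // ATM_CELL_PAYLOAD

def atm_wire_bytes(payload_length: int) -> int:
    return ATM_CELL_TOTAL * atm_cells(payload_length)

# A 1478-byte IP packet plus 10 bytes of PPP (2) + AAL5 (8) overhead fills exactly
# 31 cells: 1643 bytes on the wire.
print(atm_cells(1478 + 10), atm_wire_bytes(1478 + 10))   # 31 1643
# A 1500-byte IP packet with the same overhead needs one extra, mostly padded cell.
print(atm_cells(1500 + 10), atm_wire_bytes(1500 + 10))   # 32 1696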
MTU in other standards

The G.hn standard, developed by ITU-T, provides a high-speed (up to 1 Gigabit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). The G.hn Data Link Layer accepts data frames of up to 2^14 bytes (16384 bytes). In order to avoid the problem of long data frames taking up the medium for long periods of time, G.hn defines a procedure for segmentation that divides the data frame into smaller segments.

Disruption

The transmission of a packet on a physical network segment that is larger than the segment's MTU is known as jabber. This is almost always caused by faulty devices. Many network switches have a built-in capability to detect when a device is jabbering and block it until it resumes proper operation.[11]
