
An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a vital component of the system software in a computer system. Application programs require an operating system to function.

Real-time

A real-time operating system is a multitasking operating system that aims at executing real-time applications. Real-time operating systems often use specialized scheduling algorithms to achieve deterministic behavior.

Multi-user

A multi-user operating system allows multiple users to access a computer system concurrently. Time-sharing systems and Internet servers can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time.

Multi-tasking vs. single-tasking

When only a single program is allowed to run at a time, the system is classified as a single-tasking system. When the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system. Multi-tasking can be of two types: pre-emptive or co-operative.

Distributed

A distributed operating system manages a group of independent computers and makes them appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing.

Embedded

Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines such as PDAs with less autonomy, and can operate with a limited number of resources.

The Open Systems Interconnection (OSI) model has seven layers. This article describes and explains them, beginning with the 'lowest' in the hierarchy (the physical) and proceeding to the 'highest' (the application). The layers are stacked this way:

Application
Presentation
Session
Transport
Network
Data Link
Physical

DATA LINK LAYER


The data link layer provides error-free transfer of data frames from one node to another over the physical layer, allowing layers above it to assume virtually error-free transmission over the link. To do this, the data link layer provides framing, frame error checking, and flow control.
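As a rough illustration of frame error checking, the sketch below wraps a payload in a hypothetical frame layout (a 4-byte length header plus a CRC-32 trailer) and rejects frames that arrive corrupted. The layout is invented for illustration and does not follow any particular link protocol:

import struct
import zlib

def frame(payload: bytes) -> bytes:
    # Prepend a 4-byte big-endian length and append a CRC-32 trailer.
    return struct.pack(">I", len(payload)) + payload + struct.pack(">I", zlib.crc32(payload))

def unframe(data: bytes) -> bytes:
    # Validate the frame check sequence before handing data up the stack.
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4:4 + length]
    (fcs,) = struct.unpack(">I", data[4 + length:8 + length])
    if zlib.crc32(payload) != fcs:
        raise ValueError("frame corrupted in transit")
    return payload

assert unframe(frame(b"hello, node B")) == b"hello, node B"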

NETWORK LAYER
The network layer controls the operation of the subnet, deciding which physical path the data should take based on network conditions, priority of service, and other factors. It provides routing, logical addressing, and the fragmentation and reassembly of packets.
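To make "deciding which physical path the data should take" concrete, here is a minimal route-selection sketch using Dijkstra's shortest-path algorithm over a made-up link table, where the costs stand in for network conditions such as congestion or link speed (real routing protocols are far more involved):

import heapq

def cheapest_path(links, src, dst):
    # links: {node: [(neighbor, cost), ...]} -- cost models network conditions.
    frontier = [(0, src, [src])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in links.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + link_cost, neighbor, path + [neighbor]))
    return None

links = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(cheapest_path(links, "A", "C"))  # (2, ['A', 'B', 'C'])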

TRANSPORT LAYER
The transport layer ensures that messages are delivered error-free, in sequence, and with no losses or duplications. It relieves the higher layer protocols from any concern with the transfer of data between them and their peers.
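The sketch below shows, in toy form, how a receiver can deliver segments error-free, in sequence, and without duplicates by using sequence numbers, an out-of-order buffer, and cumulative acknowledgments. The class and field names are invented for illustration:

class ReliableReceiver:
    def __init__(self):
        self.expected = 0    # next sequence number to deliver
        self.buffer = {}     # out-of-order segments, keyed by sequence number
        self.delivered = []

    def on_segment(self, seq, data):
        if seq < self.expected or seq in self.buffer:
            return self.expected - 1   # duplicate: just re-acknowledge
        self.buffer[seq] = data
        while self.expected in self.buffer:   # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return self.expected - 1       # cumulative acknowledgment

rx = ReliableReceiver()
for seq, data in [(1, "b"), (0, "a"), (1, "b"), (2, "c")]:
    rx.on_segment(seq, data)
print(rx.delivered)  # ['a', 'b', 'c'] -- in order, no loss, no duplicates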

SESSION LAYER
The session layer allows session establishment between processes running on different stations. It provides:

Session establishment, maintenance and termination: allows two application processes on different machines to establish, use and terminate a connection, called a session.

PRESENTATION LAYER
The presentation layer formats the data to be presented to the application layer. It can be viewed as the translator for the network. This layer may translate data from a format used by the application layer into a common format at the sending station, then translate the common format to a format known to the application layer at the receiving station.
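As a small sketch of this translation step, the code below converts a hypothetical application record into a common wire format (big-endian, i.e. network byte order) on the sending side and back into the native representation on the receiving side:

import struct

record = {"id": 7, "temperature": 21.5}   # sender's native representation

def to_wire(rec):
    # Translate to the common format: big-endian int plus double.
    return struct.pack(">id", rec["id"], rec["temperature"])

def from_wire(blob):
    # Translate the common format back into the receiver's representation.
    ident, temp = struct.unpack(">id", blob)
    return {"id": ident, "temperature": temp}

assert from_wire(to_wire(record)) == record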

APPLICATION LAYER
The application layer serves as the window for users and application processes to access network services. This layer contains a variety of commonly needed functions, such as resource sharing, remote file access, electronic messaging (e-mail), and directory services.

Distributed operating system


A distributed operating system is the logical aggregation of operating system software over a collection of independent, networked, communicating, and physically separate computational nodes.

Three basic distributions


To better illustrate this point, examine three system architectures: centralized, decentralized, and distributed. In this examination, consider three structural aspects: organization, connection, and control.
Organization

A centralized system has one level of structure, where all constituent elements directly depend upon a single control element. A decentralized system is hierarchical: the bottom level unites subsets of a system's entities, and these subsets in turn combine at higher levels, ultimately culminating at a central master element.
Connection

Centralized systems connect constituents directly to a central master entity in a hub-and-spoke fashion. A decentralized system (also known as a network system) incorporates direct and indirect paths between constituent elements and the central entity.
Control

Centralized and decentralized systems have directed flows of connection to and from the central entity, while distributed systems communicate along arbitrary paths. This is the pivotal notion of the third consideration. Control involves allocating tasks and data to system elements, balancing efficiency, responsiveness, and complexity.
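The three connection patterns can be sketched as adjacency lists; the node names below are hypothetical:

centralized = {             # hub-and-spoke: every element talks only to the master M
    "M": ["a", "b", "c", "d"],
}
decentralized = {           # hierarchy: subsets unite under local masters, then under M
    "M": ["m1", "m2"],
    "m1": ["a", "b"],
    "m2": ["c", "d"],
}
distributed = {             # arbitrary paths: peers connect directly to one another
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}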

Data Migration
Data migration, also known as data shipping, means that remote data is moved to the processor where a request is made. Data migration can be combined with replication, where a copy of the data is sent to the requesting processor; this introduces the problem of coherence. In the remainder of this dissertation I use data migration to include the use of replication, and explicitly state when replication is not involved. Data migration can take the form of hardware caching in shared-memory multiprocessors such as Alewife [1] and DASH [66].
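The toy sketch below illustrates migration with replication and the coherence problem it introduces, using a hypothetical home node that tracks which nodes hold replicas and invalidates them on writes (a crude invalidation-based scheme, not Alewife's or DASH's actual design):

class Node:
    def __init__(self, name):
        self.name, self.cache = name, {}

    def read(self, home, key):
        if key not in self.cache:
            home.fetch(key, self)      # migrate a copy of the datum here
        return self.cache[key]

class Home:
    # Authoritative copy of each datum, plus a directory of replica holders.
    def __init__(self, data):
        self.data = data
        self.sharers = {key: set() for key in data}

    def fetch(self, key, node):
        self.sharers[key].add(node)
        node.cache[key] = self.data[key]

    def write(self, key, value):
        for node in self.sharers[key]:   # coherence: invalidate stale replicas
            node.cache.pop(key, None)
        self.sharers[key].clear()
        self.data[key] = value

n1 = Node("n1")
home = Home({"x": 1})
print(n1.read(home, "x"))   # 1 -- a copy of x now lives in n1's cache
home.write("x", 2)          # invalidates n1's replica
print(n1.read(home, "x"))   # 2 -- re-fetched after invalidation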

Computation Migration
Computation migration is a generalization of active thread migration, in which a portion of a thread migrates upon a remote access. Migrating only part of a thread reduces the granularity of migration, which alleviates the two problems of thread migration. First, we can migrate only the state relevant to a remote access. Second, we can avoid load imbalance by moving only small pieces of computation.

Blocking versus Nonblocking Primitives

The message-passing primitives we have described so far are blocking primitives (sometimes called synchronous primitives). When a process calls send, it specifies a destination and a buffer to send to that destination. While the message is being sent, the sending process is blocked (i.e., suspended). An alternative to blocking primitives is nonblocking primitives (sometimes called asynchronous primitives). If send is nonblocking, it returns control to the caller immediately, before the message is sent. The advantage of this scheme is that the sending process can continue computing in parallel with the message transmission, instead of having the CPU go idle (assuming no other process is runnable). The choice between blocking and nonblocking primitives is normally made by the system designers.
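A minimal sketch of the two styles, using a bounded queue to stand in for the message channel (the channel, its capacity, and the failure handling are all invented for illustration):

import queue
import threading
import time

channel = queue.Queue(maxsize=1)   # one-slot channel between two "processes"

def blocking_send(msg):
    # Blocking (synchronous) send: the caller is suspended until the
    # message has been handed to the channel.
    channel.put(msg)

def nonblocking_send(msg, on_fail):
    # Nonblocking (asynchronous) send: control returns immediately,
    # so the caller can keep computing while transmission proceeds.
    try:
        channel.put_nowait(msg)
    except queue.Full:
        on_fail(msg)               # e.g., buffer locally and retry later

def receiver():
    while (msg := channel.get()) is not None:
        time.sleep(0.01)           # simulate slow transmission/consumption

t = threading.Thread(target=receiver)
t.start()
blocking_send("a")                 # may suspend the sender
nonblocking_send("b", on_fail=lambda m: print("channel busy:", m))
blocking_send(None)                # sentinel: shut the receiver down
t.join()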

Marshalling

Marshalling (sometimes spelled marshaling, with a single l) is the process of transforming the memory representation of an object into a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program, or from one program to another. Marshalling is similar to serialization and is used to communicate with remote objects, in this case by passing a serialized object.
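For instance, the snippet below marshals a hypothetical request object into JSON bytes for transmission and unmarshals it back into an in-memory object on the other side (JSON is just one possible wire format):

import json

# In-memory representation on the sender.
request = {"op": "read", "path": "/tmp/data", "offset": 1024}

marshalled = json.dumps(request).encode("utf-8")       # memory object -> wire bytes
# ... marshalled bytes cross the network ...
unmarshalled = json.loads(marshalled.decode("utf-8"))  # wire bytes -> memory object

assert unmarshalled == request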

There are a few differences to be considered, though:

1. RPC works with stubs. The client calls the client stub, which in turn calls the server stub to invoke the procedure. In browser-server settings, RPC (RMI) technology is also sometimes implemented to achieve the same effect.

2. As a disadvantage, an RPC call is not connection-oriented: the client does not know whether the procedure was actually called, so the call can fail under unpredictable network problems.
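A toy sketch of the stub chain, with the network hop replaced by a direct function call; the procedure name, marshalling format, and registry are all hypothetical:

import json

def add(a, b):           # the actual remote procedure, living on the server
    return a + b

PROCEDURES = {"add": add}

def server_stub(wire_request):
    # Server stub: unmarshal the request, call the real procedure, marshal the result.
    request = json.loads(wire_request)
    result = PROCEDURES[request["name"]](*request["args"])
    return json.dumps({"result": result})

def client_stub(name, *args):
    # Client stub: marshal the call, "send" it, and unmarshal the reply.
    wire_request = json.dumps({"name": name, "args": list(args)})
    wire_reply = server_stub(wire_request)   # stands in for the network hop
    return json.loads(wire_reply)["result"]

print(client_stub("add", 2, 3))  # 5 -- looks like a local call to the client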

Lamport's Logical Clocks


1. Assume no central time source or global clock.
2. Each system maintains its own local clock.
3. There is no total ordering of events.
4. There is no concept of happened-when.
5. The solution proposed by Leslie Lamport was the concept of logical clocks.

1. To implement the happened-before relation (->) in a distributed system, Lamport introduced the concept of logical clocks, which captures -> numerically.
2. Each process Pi has a logical clock Ci.
3. Clock Ci can assign a value Ci(a) to any event a in process Pi.
4. The value Ci(a) is called the timestamp of event a in process Pi.
5. The value C(a) is called the timestamp of event a in whatever process it occurred.
6. The timestamps have no relation to physical time, which leads to the term logical clock.
7. The logical clocks can be implemented by simple counters.
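A minimal counter-based implementation might look like this (the class and method names are my own):

class LamportClock:
    # Logical clock implemented as a simple counter, one per process.
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event: advance the clock.
        self.time += 1
        return self.time

    def send(self):
        # Attach a timestamp to an outgoing message.
        return self.tick()

    def receive(self, msg_time):
        # On receipt, jump past the sender's timestamp so that
        # send(m) happens-before receive(m) in clock order.
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t = p1.send()    # P1 sends a message stamped t = 1
p2.receive(t)    # P2's clock becomes max(0, 1) + 1 = 2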

Vector Clocks

A vector clock is an algorithm for generating a partial ordering of events in a distributed system and detecting causality violations.

Initially, all clocks are zero. Then:

1. Each time a process experiences an internal event, it increments its own logical clock in the vector by one.
2. Each time a process prepares to send a message, it increments its own logical clock in the vector by one and then sends its entire vector along with the message being sent.
3. Each time a process receives a message, it increments its own logical clock in the vector by one and updates each element in its vector by taking the maximum of the value in its own vector clock and the value in the vector in the received message (for every element).
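These three rules translate directly into code; the sketch below tracks one process's vector in a group of n processes (names are my own):

class VectorClock:
    # Vector clock for one process in a group of n processes.
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n

    def internal_event(self):
        self.clock[self.pid] += 1          # rule 1: tick own component

    def send(self):
        self.clock[self.pid] += 1          # rule 2: tick, then ship the whole vector
        return list(self.clock)

    def receive(self, other):
        self.clock[self.pid] += 1          # rule 3: tick own component...
        self.clock = [max(mine, theirs)    # ...then take the element-wise maximum
                      for mine, theirs in zip(self.clock, other)]

p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
msg = p0.send()        # p0: [1, 0]
p1.receive(msg)        # p1: [1, 1]
p0.internal_event()    # p0: [2, 0] -- concurrent with p1's [1, 1]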
