
THREADS

A process is an executing program.

A traditional process has a single thread of control.

Modern operating systems provide features that allow a process to contain
multiple threads of control.

5.1. Overview

A thread, sometimes called a lightweight process (LWP):

o is a basic unit of CPU utilization;
o comprises a thread ID, a program counter, a register set, and a stack;
o shares its code section, data section, and other operating-system
resources, such as open files and signals, with the other threads
belonging to the same process.

If the process has multiple threads of control, it can do more than one task
at a time.

Figure 5.1 illustrates the difference between a traditional single-threaded
process and a multithreaded process.
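
As a concrete illustration, the following is a minimal sketch of a
multithreaded process using the POSIX pthreads library (the worker function,
the shared counter, and the choice of three threads are illustrative, not
taken from the text). All three threads share the process's global data,
while each thread has its own program counter, register set, and stack:

#include <pthread.h>
#include <stdio.h>

/* Global data belongs to the process and is shared by all of its threads. */
static int shared_counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread executes this function with its own stack and registers. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    pthread_mutex_lock(&lock);
    shared_counter++;                         /* shared data section */
    printf("thread %d sees counter = %d\n", id, shared_counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t tids[3];
    int ids[3] = {0, 1, 2};

    for (int i = 0; i < 3; i++)
        pthread_create(&tids[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(tids[i], NULL);

    printf("final counter = %d\n", shared_counter);
    return 0;
}

Compiled with gcc -pthread, all three threads run inside a single process and
update the same shared_counter, which is exactly the code/data sharing
described above.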

5.1.1 Motivation

Many software packages that run on modern desktop PCs are multithreaded.

An application is typically implemented as a separate process with several
threads of control.
o A web browser might have one thread display images or text while
another thread retrieves data from the network.
o A word processor may have a thread for displaying graphics,
another thread for reading keystrokes from the user, and a third
thread for performing spelling and grammar checking in the
background.

In certain situations, a single application may be required to perform
several similar tasks.
o For example, a web server accepts client requests for web pages,
images, sound, and so forth.
o A busy web server may have several (perhaps hundreds of) clients
concurrently accessing it.
o If the web server ran as a traditional single-threaded process, it
would be able to service only one client at a time.
o The amount of time that a client might have to wait for its request
to be serviced could be enormous.

One solution is to have
o the server run as a single process that accepts requests.
o When the server receives a request, it creates a separate process to
service that request.
o Process creation is very heavyweight, as was shown in the previous
chapter.
o If the new process performs the same tasks as the existing process,
this creates unnecessary overhead.

Alternatively,
o it is more efficient for one process that contains multiple threads to
serve the same purpose.
o This approach would multithread the web-server process.
o The server would create a separate thread that would listen for
client requests;
o when a request was made, rather than creating another process, the
server would create another thread to service the request (see the
sketch after this list).
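
A minimal sketch of this thread-per-request structure, assuming POSIX sockets
and pthreads (the port number, buffer size, and canned reply are hypothetical
placeholders; error handling is omitted for brevity):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Each incoming request is serviced by a new thread rather than a new process. */
static void *handle_request(void *arg) {
    int client = *(int *)arg;
    free(arg);
    char buf[1024];
    if (read(client, buf, sizeof(buf)) > 0) {       /* read the request */
        const char *reply =
            "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
        write(client, reply, strlen(reply));        /* trivial canned reply */
    }
    close(client);
    return NULL;
}

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                    /* hypothetical port */
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 64);

    for (;;) {                                      /* listening thread */
        int *client = malloc(sizeof(int));
        *client = accept(listener, NULL, NULL);
        pthread_t tid;
        pthread_create(&tid, NULL, handle_request, client);
        pthread_detach(tid);     /* request threads clean up on their own */
    }
}

Creating a thread per request avoids the cost of fork() and lets many
requests be serviced concurrently within one address space.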

Threads also play a vital role in remote procedure call (RPC) systems.

RPC servers are multithreaded.

When a server receives a message, it services the message using a separate
thread.

This allows the server to service several concurrent requests.

5.1.2 Benefits

The benefits of multithreaded programming fall into four major categories:

Responsiveness:
o Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a
lengthy operation, thereby increasing responsiveness to the user.
o For example, a multithreaded web browser can allow user interaction in
one thread while an image is being loaded in another thread.

Resource sharing:
o By default, threads share the memory and the resources of the
process to which they belong.
o The benefit of code sharing is that it allows an application to have
several different threads of activity within the same address space.

Economy:
o Allocating memory and resources for process creation is costly.
o Alternatively, because threads share the resources of the process to
which they belong, it is more economical to create and context-switch
threads.
o It can be difficult to gauge empirically the difference in overhead
for creating and maintaining a process rather than a thread, but in
general it is much more time-consuming to create and manage
processes than threads.

Scalability (utilization of multiprocessor architectures):
o The benefits of multithreading can be greatly increased in a
multiprocessor architecture, where each thread may be running in
parallel on a different processor.
o A single-threaded process can only run on one CPU, no matter how
many are available.
o Multithreading on a multi-CPU machine increases concurrency. In a
single-processor architecture, the CPU generally moves between
each thread so quickly as to create an illusion of parallelism, but in
reality only one thread is running at a time.

Multicore Programming

Multicore or multiprocessor system design places multiple computing cores
on a single chip. Each core appears as a separate processor to the
operating system.

Multithreaded programming provides a mechanism for more efficient use
of these multiple computing cores and improved concurrency.

Consider an application with four threads.

On a system with a single computing core, concurrency merely means that
the execution of the threads will be interleaved over time (Figure 4.3),
because the processing core is capable of executing only one thread at a
time.

On a system with multiple cores, however, concurrency means that the
threads can run in parallel, because the system can assign a separate
thread to each core (Figure 4.4).

Note the distinction between parallelism and concurrency:
o A system is parallel if it can perform more than one task
simultaneously.
o A concurrent system supports more than one task by allowing all the
tasks to make progress.

4.2.1 Programming Challenges

Identifying tasks.
o This involves examining applications to find areas that can be divided
into separate, concurrent tasks. Ideally, tasks are independent of one
another and can thus run in parallel on individual cores.

Balance.
o While identifying tasks that can run in parallel, programmers must
also ensure that the tasks perform equal work of equal value.

Data splitting.
o Just as applications are divided into separate tasks, the data
accessed and manipulated by the tasks must be divided to run on
separate cores.

Data dependency.
o The data accessed by the tasks must be examined for dependencies
between two or more tasks. When one task depends on data from
another, programmers must ensure that the execution of the tasks is
synchronized to accommodate the data dependency.

Testing and debugging.
o When a program is running in parallel on multiple cores, many
different execution paths are possible. Testing and debugging such
concurrent programs is inherently more difficult than testing and
debugging single-threaded applications.

4.2.2 Two Types of Parallelism

Data parallelism
o focuses on distributing subsets of the same data across multiple
computing cores and performing the same operation on each core.
o Consider, for example, summing the contents of an array of size N
(a two-thread version is sketched after this list).

On a single-core system, one thread would simply sum the
elements [0] . . . [N-1].

On a dual-core system, however, thread A, running on core 0,
could sum the elements [0] . . . [N/2 - 1] while thread B,
running on core 1, could sum the elements [N/2] . . . [N-1].

The two threads would be running in parallel on separate
computing cores.
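
A minimal sketch of this two-thread array sum, again using pthreads (the
array size, its contents, and the helper names are illustrative choices):

#include <pthread.h>
#include <stdio.h>

#define N 1000

static int data[N];
static long partial[2];                     /* one partial sum per thread */

/* Each thread sums half of the same array: data parallelism. */
static void *sum_range(void *arg) {
    int t = *(int *)arg;                    /* thread index: 0 or 1 */
    int lo = t * (N / 2);
    int hi = (t == 0) ? N / 2 : N;          /* [0..N/2-1] and [N/2..N-1] */
    long s = 0;
    for (int i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++)
        data[i] = 1;

    pthread_t a, b;
    int ids[2] = {0, 1};
    pthread_create(&a, NULL, sum_range, &ids[0]);
    pthread_create(&b, NULL, sum_range, &ids[1]);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("sum = %ld\n", partial[0] + partial[1]);   /* prints 1000 */
    return 0;
}

On a dual-core machine the two calls to sum_range can execute in parallel,
each on its own core, and the main thread combines the partial results.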

Task parallelism
o involves distributing tasks (threads) across multiple computing
cores.
o Each thread performs a unique operation.
o Different threads may be operating on the same data, or they may be
operating on different data (see the sketch below).
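
For contrast, a minimal task-parallel sketch: two threads operate on the
same array, but each performs a different task (the sum and maximum tasks,
and all names here, are illustrative):

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {4, 7, 1, 9, 3, 8, 2, 6};

static long total;
static int maximum;

/* Thread 1: one kind of task (sum over the whole array). */
static void *compute_sum(void *arg) {
    (void)arg;
    long s = 0;
    for (int i = 0; i < N; i++) s += data[i];
    total = s;
    return NULL;
}

/* Thread 2: a different task (maximum over the same data). */
static void *compute_max(void *arg) {
    (void)arg;
    int m = data[0];
    for (int i = 1; i < N; i++) if (data[i] > m) m = data[i];
    maximum = m;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, compute_sum, NULL);
    pthread_create(&t2, NULL, compute_max, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("sum = %ld, max = %d\n", total, maximum);
    return 0;
}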

Fundamentally, then, data parallelism involves the distribution of data
across multiple cores, while task parallelism involves the distribution of
tasks across multiple cores. In practice, most applications use a hybrid of
these two strategies.

5.1.3 User and Kernel Threads

Threads may be provided at either the user level, for user threads, or by
the kernel, for kernel threads.

User threads
o supported above the kernel and managed without kernel support
o implemented by a thread library at the user level.

The library provides support for thread creation, scheduling,
and management with no support from the kernel.

Kernel threads
o Supported and managed directly by the operating system.
o Contemporary operating systems such as Windows NT, Windows 2000, Solaris 2,
BeOS, and Tru64 UNIX (formerly Digital UNIX) support kernel threads.

5.2. Multithreading Models

There are three common types of threading implementation.

5.2.1 Many-to-One Model

The many-to-one model maps many user-level threads to one kernel thread.

Thread management is done in user space, so it is efficient, but the entire
process will block if a thread makes a blocking system call.

Because only one thread can access the kernel at a time, multiple threads
are unable to run in parallel on multiprocessors.

Green Threads, the thread library of Solaris 2, implements this model.

5.2.2 One-to-one Model

The one-to-one model maps each user thread to a kernel thread.

It provides more concurrency than the many-to-one model by allowing
another thread to run when a thread makes a blocking system call;

it also allows multiple threads to run in parallel on multiprocessors.

The drawback is that creating a user thread requires creating the
corresponding kernel thread.

The overhead of creating kernel threads can burden the performance of an
application. Hence most implementations of this model restrict the number
of threads supported by the system.

Windows NT, Windows 2000, and OS/2 implement the one-to-one model.

5.2.3 Many-to-Many Model

The many-to-many model multiplexes many user-level threads to a smaller or
equal number of kernel threads.

The number of kernel threads may be specific to either a particular
application or a particular machine (an application may be allocated more
kernel threads on a multiprocessor than on a uniprocessor).

Solaris 2, IRIX, HP-UX, and Tru64 UNIX support this model.
