Resource-Allocation Graph

Resource Allocation Graph SLIDE # 8:

In slide #8 the graph shows that an instance of R1 is held by P2, and P2 is requesting an instance of R3; the instance of R3 is in turn held by P3. R2 has two instances, which are held by P1 and P2. R4 has no request or assignment edges.







Resource Allocation Graph SLIDE # 9:

In slide #9 the graph shows that P2 holds an instance of R1 and is requesting an instance of R3; the instance of R3 is held by P3, which in turn is requesting an instance of R2. R2 has two instances, which are held by P1 and P2. R4 has no request or assignment edges.



Resource Allocation Graph SLIDE # 10:

In slide #10 the graph shows that P1 is requesting an instance of R1, which has two instances held by P2 and P3. P3 is requesting an instance of R2, which also has two instances, held by P1 and P4.



Resource Allocation Graph SLIDE # 20:

In slide #20 the graph shows that P1 holds an instance of R1 and is requesting a resource from R2, while P2 is requesting an instance of R1 and is also requesting a resource from R2.

Resource Allocation Graph SLIDE # 21:

In slide #21 the graph shows that P1 holds an instance of R1 and is requesting a resource from R2; the instance of R2 is held by P2, which in turn is requesting an instance of R1.

DEADLOCK

RESOURCE ALLOCATION GRAPH

A resource-allocation graph consists of a set of vertices V and a set of edges E.

V is partitioned into two types:

* P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

* R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

Request edge – directed edge Pi → Rj

Assignment edge – directed edge Rj → Pi

Graph notation used in the slides:

* Process

* Resource type with 4 instances

* Pi requests an instance of Rj (request edge Pi → Rj)

* Pi is holding an instance of Rj (assignment edge Rj → Pi)
How would you know if there is a deadlock based on the Resource-Allocation Graph?
If the graph contains a cycle:
* if only one instance per resource type, then deadlock.
* if several instances per resource type, possibility of deadlock.
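To make the cycle rule concrete, here is a minimal C sketch (illustrative, not from the slides): a depth-first search over a resource-allocation graph with one instance per resource type, where any cycle found means deadlock. The vertex numbering and edge set model the slide #21 situation.

/* Minimal sketch: cycle detection in a resource-allocation graph
 * with one instance per resource type, so a cycle implies deadlock.
 * Vertices 0..N-1 are processes and resources together; edges[i][j]
 * holds request edges (Pi -> Rj) and assignment edges (Rj -> Pi). */
#include <stdio.h>

#define N 4                      /* total vertices: P1, P2, R1, R2 */

int edges[N][N];                 /* edges[i][j] = 1 if i -> j */
int state[N];                    /* 0 = unvisited, 1 = on DFS path, 2 = done */

int has_cycle(int v)
{
    state[v] = 1;                /* v is on the current DFS path */
    for (int w = 0; w < N; w++) {
        if (!edges[v][w]) continue;
        if (state[w] == 1) return 1;            /* back edge: cycle */
        if (state[w] == 0 && has_cycle(w)) return 1;
    }
    state[v] = 2;
    return 0;
}

int main(void)
{
    /* The slide-21 situation: P1(0) holds R1(2) and requests R2(3);
     * P2(1) holds R2 and requests R1. */
    edges[0][3] = 1;             /* P1 -> R2 (request)    */
    edges[3][1] = 1;             /* R2 -> P2 (assignment) */
    edges[1][2] = 1;             /* P2 -> R1 (request)    */
    edges[2][0] = 1;             /* R1 -> P1 (assignment) */

    for (int v = 0; v < N; v++)
        if (state[v] == 0 && has_cycle(v)) {
            puts("cycle found: deadlock");
            return 0;
        }
    puts("no cycle: no deadlock");
    return 0;
}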





DEADLOCK

DEADLOCK RECOVERY


Recovery from Deadlock: Process Termination


Abort all deadlocked processes.


Abort one process at a time until the deadlock cycle is eliminated.


In which order should we choose to abort?

* Priority of the process.

* How long process has computed, and how much longer to completion.

* Resources the process has used.

* Resources process needs to complete.

* How many processes will need to be terminated.

* Is process interactive or batch?
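The criteria above can be folded into a single cost function and the deadlocked processes sorted by it. A minimal C sketch follows; the fields and weights are illustrative assumptions, not a prescribed formula.

/* Sketch: ordering deadlocked processes by abort cost.
 * The cost formula and its weights are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>

struct proc {
    const char *name;
    int priority;        /* higher = more important           */
    int cpu_used;        /* how long the process has computed */
    int resources_held;  /* resources the process has used    */
    int interactive;     /* 1 = interactive, 0 = batch        */
};

/* Lower cost = cheaper to abort, so abort it first. */
static int abort_cost(const struct proc *p)
{
    return 4 * p->priority + 2 * p->cpu_used
         + p->resources_held + 8 * p->interactive;
}

static int by_cost(const void *a, const void *b)
{
    return abort_cost(a) - abort_cost(b);
}

int main(void)
{
    struct proc deadlocked[] = {
        { "P1", 5, 40, 3, 1 },
        { "P2", 1,  2, 1, 0 },
        { "P3", 3, 10, 5, 0 },
    };
    size_t n = sizeof deadlocked / sizeof deadlocked[0];

    qsort(deadlocked, n, sizeof deadlocked[0], by_cost);
    printf("abort first: %s\n", deadlocked[0].name);  /* prints P2 */
    return 0;
}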


Abort all deadlocked processes - this method clearly will break the deadlock cycle, but at great expense: these processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.


Abort one process at a time until the deadlock cycle is eliminated - this method incurs considerable overhead, since, after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.


Minimum cost - we should abort those processes whose termination will incur the minimum cost.


Recovery from Deadlock: Resource Preemption


Selecting a victim – minimize cost.

Rollback – return to some safe state, restart the process from that state.

Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor.




DEADLOCK

DEADLOCK DETECTION
An algorithm that examines the state of the system to determine whether a deadlock has occurred

An algorithm to recover from the deadlock

This approach requires:
* Allow the system to enter a deadlock state
* Detection algorithm
* Recovery scheme

DEADLOCK

DEADLOCK PREVENTION

This is the "never enter" case: the system is prevented from ever entering a deadlock state.

Restrain the ways request can be made (at least one of the necessary conditions should not be true).

Mutual Exclusion – not required for sharable resources; must hold for nonsharable resources. In general it is not possible to prevent deadlocks by denying this condition, because some resources are intrinsically nonsharable.

Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.

* Require process to request and be allocated all its resources before it begins execution, or allow process to request resources only when the process has none.

* Low resource utilization; starvation possible.
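A minimal pthreads sketch of the request-everything-at-once protocol described above (resource names are illustrative): the process either acquires both resources or holds nothing while it waits. Build with cc file.c -pthread.

/* Sketch: hold-and-wait prevention with all-or-nothing acquisition.
 * A process either gets every resource it needs or releases what it
 * grabbed and retries, so it never waits while holding resources. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t tape = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t disk = PTHREAD_MUTEX_INITIALIZER;

/* Try to take both resources; on any failure, back off completely. */
static void acquire_all(void)
{
    for (;;) {
        pthread_mutex_lock(&tape);
        if (pthread_mutex_trylock(&disk) == 0)
            return;                    /* got both: safe to proceed  */
        pthread_mutex_unlock(&tape);   /* hold nothing while waiting */
        sched_yield();                 /* brief back-off, then retry */
    }
}

int main(void)
{
    acquire_all();
    puts("holding tape and disk together");
    pthread_mutex_unlock(&disk);
    pthread_mutex_unlock(&tape);
    return 0;
}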

No Preemption –

* If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.

* Preempted resources are added to the list of resources for which the process is waiting.

* Process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.

Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
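A minimal pthreads sketch of the total-ordering rule (lock names and order are illustrative): because every thread acquires R1 before R2, the circular wait from slide #21 cannot form. Build with cc file.c -pthread.

/* Sketch: circular-wait prevention by a global lock order.
 * Every thread must acquire r1 before r2, so the slide-21 cycle
 * (P1 holds R1 wants R2, P2 holds R2 wants R1) cannot arise. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* order 1 */
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* order 2 */

static void *worker(void *arg)
{
    /* Both threads obey the same order: r1, then r2. */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    printf("%s holds R1 and R2\n", (const char *)arg);
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, worker, "P1");
    pthread_create(&p2, NULL, worker, "P2");
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}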

DEADLOCK

METHOD FOR HANDLING DEADLOCKS
Never enter (prevention & avoidance), detect/recover, ignore…

* We can use a protocol to ensure that the system will never enter a deadlock state.
* We can allow the system to enter a deadlock state and then recover.
* We can ignore the problem altogether, and pretend that deadlocks never occur in the system. This solution is the one used by most operating systems, including UNIX.

Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold.
Deadlock avoidance, on the other hand, requires that the operating system be given in advance additional information concerning which resources a process will request and use during its lifetime.

DEADLOCK

DEADLOCK CHARACTERIZATION
Deadlock can arise if four conditions hold simultaneously.

Mutual exclusion - only one process at a time can use a (non-sharable) resource.

Hold and wait - a process holding at least one resource is waiting to acquire additional resources held by other processes.

No preemption - a resource can be released only voluntarily by the process holding it, after that process has completed its task.

Circular wait - there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Thread Scheduling
• A thread executes separately from the rest of the process
• An application can be a set of threads that cooperate and execute concurrently in the same address space
• Threads running on separate processors yield a dramatic gain in performance
• Local Scheduling – How the threads library decides which thread to put onto an available LWP
• Global Scheduling – How the kernel decides which kernel thread to run next
Multiprocessor Scheduling
• Very little has to be done to schedule a multiprocessor system.
• Whenever a CPU needs a process to run, it takes the next task from the ready list.
• The scheduling queue must be accessed in a critical section. Busy waiting is usually used.
• Load sharing
– Processes are not assigned to a particular processor
• Gang scheduling
– A set of related threads is scheduled to run on a set of processors at the same time
• Dedicated processor assignment
– Threads are assigned to a specific processor
• Dynamic scheduling
– Number of threads can be altered during course of execution

We will consider only shared-memory multiprocessors.
Salient features:
* One or more caches: cache affinity is important
* Semaphores/locks typically implemented as spin-locks: preemption during critical sections is a problem
* Central queue – the queue can be a bottleneck
* Distributed queues – load balancing between queues

REAL - TIME SCHEDULING

• Many real-time systems run a known collection of tasks. The execution time of the tasks is frequently known ahead of time.


• Tasks have deadlines by which they must complete.


• If a task that runs for 3 time units must be done at time 10, it must start by time 7.


• If two tasks that run for 3 time units each must be done at time 10, one must start by time 4.


Hard real-time systems – required to complete a critical task within a guaranteed amount of time.


Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

Correctness of the system may depend not only on the logical result of the computation but also on the time when these results are produced.

example:

– Tasks attempt to control events or to react to events that take place in the outside world

– These external events occur in real time and processing must be able to keep up

– Processing must happen in a timely fashion

• neither too late, nor too early

• EDF – Earliest Deadline First Scheduling


• Static table-driven
– Table determines at run time when a task begins execution
• Static priority-driven preemptive
– Traditional priority-driven scheduler is used
• Dynamic planning-based
– Feasibility determined at run time
• Dynamic best effort
– No feasibility analysis is performed
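A minimal C sketch of EDF selection (the task set is illustrative): among the ready tasks, dispatch the one with the earliest deadline. It also prints the latest start time, deadline minus run time, as in the "must start by time 7" arithmetic above.

/* Sketch of Earliest-Deadline-First selection: among the ready
 * tasks, always dispatch the one whose deadline is closest. */
#include <stdio.h>

struct task { const char *name; int run_time; int deadline; };

int main(void)
{
    struct task ready[] = {       /* illustrative task set */
        { "A", 3, 10 },
        { "B", 2,  6 },
        { "C", 1,  8 },
    };
    int n = sizeof ready / sizeof ready[0];

    int next = 0;
    for (int i = 1; i < n; i++)   /* pick the earliest deadline */
        if (ready[i].deadline < ready[next].deadline)
            next = i;

    /* A task must start no later than deadline - run_time. */
    printf("dispatch %s (latest start time %d)\n",
           ready[next].name,
           ready[next].deadline - ready[next].run_time);
    return 0;
}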

SUBSTANTIAL INFORMATION ABOUT OS THREADS

WINDOWS NT’s Threads


Processes in NT can consist of one or more threads.

[ ] Primary thread - When a process is created, one thread is generated along with it.

This object is then scheduled on a system wide basis by the kernel to execute on a processor.
After the primary thread has started, it can create other threads that share its address space and system resources but have independent contexts, which include execution stacks and thread specific data. A thread can execute any part of a process' code, including a part currently being executed by another thread.

It is through threads, provided in the Win32 application programmer interface (API), that Windows NT allows programmers to exploit the benefits of concurrency and parallelism.
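A minimal Win32 sketch in C of the mechanism described above (illustrative; error handling trimmed): the primary thread running main creates a second thread with CreateThread, and both share the process's address space.

/* Sketch: the primary thread creating a second thread that shares
 * its address space, using the Win32 API. Compiles on Windows. */
#include <windows.h>
#include <stdio.h>

static int shared_counter = 0;   /* visible to every thread */

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    shared_counter++;            /* same address space as main */
    printf("worker thread %lu running\n", GetCurrentThreadId());
    return 0;
}

int main(void)
{
    DWORD tid;
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, &tid);
    if (h == NULL)
        return 1;
    WaitForSingleObject(h, INFINITE);   /* wait for the thread */
    CloseHandle(h);
    printf("counter = %d\n", shared_counter);
    return 0;
}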

[ ] Fiber - is NT’s smallest user-level object of execution. It executes in the context of a thread and is unknown to the operating system kernel. A thread can consist of one or more fibers, as determined by the application programmer. Some literature [1,11] assumes that there is a one-to-one mapping of user-level objects to kernel-level objects; this is inaccurate. Windows NT does provide the means for many-to-many scheduling. However, NT's design is poorly documented, and the application programmer is responsible for the control of fibers, such as allocating memory, scheduling them on threads, and preemption.

Solaris’s LWPs and Threads
[ ] light weight process (LWP) - Solaris’s smallest kernel-level object of execution.

A Solaris process consists of one or more light weight processes. Like NT’s thread, each LWP shares its address space and system resources with LWPs of the same process and has its own context. However, unlike NT, Solaris allows programmers to exploit parallelism through a user-level object that is built on light weight processes.

In Solaris, a thread is the smallest user-level object of execution. Like Windows NT's fibers, Solaris threads are not executable alone.

[ ] Solaris thread
* execute in the context of a light weight process.
* implemented and controlled by a system library.
The library controls the mapping and scheduling of threads onto LWPs automatically.
[ ] mapping - determined by the library or the application programmer. Since the threads execute in the context of a light weight process, the operating system kernel is unaware of their existence.

Solaris's thread library defines two types of threads according to scheduling.

[ ] bound thread is one that permanently executes in the context of a light weight process in which no other threads can execute. Consequently, the bound thread is scheduled by the operating system kernel on a system wide basis.
[ ] unbound thread is one that can execute in the context of any LWP of the same process. Solaris uses the thread library for the scheduling of these unbound threads. The library works by creating a pool of light weight processes for any requesting process. The initial size of the pool is one; the size can be automatically adjusted by the library or can be defined by the application programmer through a programmatic interface. It is the library's task to increase or decrease the pool size to meet the requirements of an application. Consequently, the pool size determines the concurrency level (CL) of the process. The threads of a process are scheduled on an LWP in the pool by using a priority-based, first-in first-out (FIFO) algorithm: priority is the primary criterion and FIFO is the secondary criterion (within the same priority). In addition, a thread with a lower priority may be preempted from an LWP by a higher-priority thread or by a library call.
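A minimal C sketch of the bound/unbound distinction using the Solaris threads library (illustrative; built with -lthread on Solaris): the THR_BOUND flag attaches the new thread permanently to its own LWP, while the default leaves it unbound for the library to multiplex onto the LWP pool.

/* Sketch: bound vs. unbound threads with the Solaris threads
 * library (<thread.h>, link with -lthread on Solaris). */
#include <thread.h>
#include <stdio.h>

static void *work(void *arg)
{
    printf("%s thread running\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    thread_t bound, unbound;

    /* THR_BOUND: permanently attached to its own LWP, scheduled
     * by the kernel on a system-wide basis. */
    thr_create(NULL, 0, work, "bound", THR_BOUND, &bound);

    /* No flag: unbound, multiplexed onto the process's LWP pool
     * by the library's priority/FIFO scheduler. */
    thr_create(NULL, 0, work, "unbound", 0, &unbound);

    thr_join(bound, NULL, NULL);
    thr_join(unbound, NULL, NULL);
    return 0;
}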
CPU SCHEDULING ALGORITHMS
First-Come, First-Served (FCFS) Scheduling
This non-preemptive scheduling algorithm follows the first-in, first-out (FIFO) policy. As each process becomes ready, it joins the ready queue. When the current running process finishes execution, the oldest process in the ready queue is selected to run next.
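To make this concrete, a minimal C sketch (the burst times are illustrative): each process waits for the combined burst time of everything ahead of it in the queue.

/* Sketch: FCFS waiting times. Processes run in arrival order;
 * each waits for the total burst time of those ahead of it. */
#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };        /* illustrative burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];              /* next process waits this long */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}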
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time.
Two schemes:
* nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
* preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
SJF is optimal – it gives the minimum average waiting time for a given set of processes
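A minimal C sketch of the nonpreemptive scheme (burst lengths are illustrative): with all processes ready at time 0, running them in order of increasing burst length and summing the waits shows why SJF minimizes average waiting time.

/* Sketch: nonpreemptive SJF. With all processes ready at time 0,
 * running them shortest-first minimizes the average waiting time. */
#include <stdio.h>
#include <stdlib.h>

static int asc(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int burst[] = { 6, 8, 7, 3 };      /* illustrative next-burst lengths */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], asc);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        total_wait += wait;
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}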
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• RR is designed especially for time-sharing systems.
• A small unit of time, called a time quantum or time slice, is defined.
• The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
• Performance
* q large ⇒ RR behaves like FIFO
* q small ⇒ q must be large with respect to the context-switch time, otherwise overhead is too high
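A minimal C sketch of an RR simulation (burst times and quantum are illustrative): each pass around the circular queue hands every unfinished process at most one quantum q, and preempted work returns to the back of the queue.

/* Sketch: round-robin simulation. Each pass gives every unfinished
 * process at most one quantum q; leftover work waits for the next
 * trip around the circular ready queue. */
#include <stdio.h>

int main(void)
{
    int remaining[] = { 24, 3, 3 };    /* illustrative burst times */
    int n = sizeof remaining / sizeof remaining[0];
    int q = 4;                          /* time quantum */
    int clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {   /* ready queue as a circle */
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < q ? remaining[i] : q;
            printf("t=%2d: P%d runs %d\n", clock, i + 1, slice);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) done++;
        }
    }
    return 0;
}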
Shortest Remaining Time (SRT)
This preemptive scheduling algorithm favors processes with the shortest expected remaining processing time. As each process becomes ready, it joins the ready queue; this triggers an interrupt, which preempts the currently running process back into the ready queue. The process in the ready queue with the shortest remaining service time is then selected to run next.
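A minimal C sketch of SRT (arrival times and bursts are illustrative): at each time unit the arrived process with the least remaining time runs, so a shorter new arrival preempts the current one.

/* Sketch: SRT/SRTF. At every time unit, the arrived process with
 * the shortest remaining time runs; a new arrival with less
 * remaining time preempts the current one. */
#include <stdio.h>

int main(void)
{
    int arrive[]    = { 0, 1, 2, 3 };  /* illustrative arrival times */
    int remaining[] = { 8, 4, 9, 5 };  /* and burst lengths          */
    int n = 4, finished = 0;

    for (int t = 0; finished < n; t++) {
        int pick = -1;
        for (int i = 0; i < n; i++)    /* shortest remaining, arrived */
            if (arrive[i] <= t && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) continue;        /* CPU idle this time unit */
        if (--remaining[pick] == 0) {
            printf("P%d finishes at t=%d\n", pick + 1, t + 1);
            finished++;
        }
    }
    return 0;
}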
THREAD LIBRARY

[ ] Thread Library schedules user-level threads to run on LWP
[ ] Thread management done by user-level Threads Library
[ ] The Thread Library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
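In POSIX threads the same idea appears as the thread's contention scope. A minimal C sketch (illustrative; not every system supports both scopes): PTHREAD_SCOPE_PROCESS threads are scheduled by the library onto LWPs, while PTHREAD_SCOPE_SYSTEM threads are scheduled by the kernel system-wide.

/* Sketch: choosing the contention scope of a POSIX thread.
 * PTHREAD_SCOPE_PROCESS: scheduled by the threads library onto LWPs.
 * PTHREAD_SCOPE_SYSTEM: scheduled by the kernel, system-wide.
 * Not every OS supports both; the call may fail with ENOTSUP. */
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg)
{
    (void)arg;
    puts("thread running");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}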
Multi - threading Models





Many-to-One Model

[ ] Many user-level threads mapped to a single kernel thread

* Thread management is done in user space

* Blocking problem: one blocking system call blocks the entire process

* No support for running in parallel on multiprocessors

* Used on systems that do not support kernel threads

* Example: Green-threads library in Solaris 2





One-to-One Model

[ ] Each user-level thread maps to a kernel thread

* Creating a user thread requires creating the corresponding kernel thread

* Overhead of creating kernel threads

* This can restrict the number of threads supported by the OS

• Examples

* Windows NT/2000

* OS/2

Many-to-Many Model

[ ] Multiplex many user-level threads to a smaller or equal number of kernel threads

[ ] Allows many user level threads to be mapped to many kernel threads.

[ ] Allows OS to create a sufficient number of kernel threads.

• Examples

* Solaris 2, IRIX, HP-UX, Tru64 UNIX

* Windows NT/2000 with the ThreadFiber package


Kernel Thread
[ ] Supported by the kernel, which maintains context information for the process and its threads
[ ] Scheduling is done on a thread basis
[ ] Slower to create and manage than user threads
[ ] If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution
[ ] Multiple threads are able to run in parallel on multiprocessors
Examples
{} Windows 95/98/NT/2000
{} Solaris
{} Tru64 UNIX
{} BeOS
{} Linux

Thread

[ ] Lightweight process (LWP)

[ ] Basic unit of CPU utilization

[ ] A thread comprises a thread ID, a program counter, a register set, and a stack

[ ] A thread shares with other threads belonging to the same process its code section, data section, and other OS resources, such as open files and signals

[ ] A process with multiple threads can do more than one task at a time

Single-Threaded Process

Single threaded programs have one path of execution, and multi-threaded programs have two or more paths of execution. Single threaded programs can perform only one task at a time, and have to finish each task in sequence before they can start another. For most programs, one thread of execution is all you need, but sometimes it makes sense to use multiple threads in a program to accomplish multiple simultaneous tasks.

Multi-Threaded Process

Multithreading as a widespread programming and execution model allows multiple threads to exist within the context of a single process. These threads share the process' resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. However, perhaps the most interesting application of the technology is when it is applied to a single process to enable parallel execution on a multiprocessor system.
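A minimal pthreads sketch of this model (illustrative): several threads execute independently within one process, each with its own stack, while sharing the process's data section. Build with cc file.c -pthread.

/* Sketch: multiple threads in one process sharing its data section
 * while each keeps its own stack and registers. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_total = 0;       /* in the shared data section */

static void *worker(void *arg)
{
    int id = *(int *)arg;          /* local: on this thread's stack */
    pthread_mutex_lock(&lock);
    shared_total += id;            /* shared: must be protected */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i + 1;
        pthread_create(&tids[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    printf("shared_total = %d\n", shared_total);  /* 1+2+3+4 = 10 */
    return 0;
}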

Multithreading computers have hardware support to efficiently execute multiple threads. These are distinguished from multiprocessing systems (such as multi-core systems) in that the threads have to share the resources of a single core: the computing units, the CPU caches, and the translation lookaside buffer (TLB). Where multiprocessing systems include multiple complete processing units, multithreading aims to increase utilization of a single core by leveraging thread-level as well as instruction-level parallelism. As the two techniques are complementary, they are sometimes combined in systems with multiple multithreading CPUs and in CPUs with multiple multithreading cores.