Thread Scheduling
• A thread executes separately from the rest of the process
• An application can be a set of threads that cooperate and execute concurrently in the same address space
• Threads running on separate processors can yield a dramatic gain in performance
• Local Scheduling – How the threads library decides which thread to put onto an available LWP
• Global Scheduling – How the kernel decides which kernel thread to run next
Multiprocessor Scheduling
• Very little has to be done to schedule a multiprocessor system.
• Whenever a CPU needs a process to run, it takes the next task from the ready list.
• The scheduling queue must be accessed in a critical section. Busy waiting is usually used.
• Load sharing
– Processes are not assigned to a particular processor
• Gang scheduling
– A set of related threads is scheduled to run on a set of processors at the same time
• Dedicated processor assignment
– Threads are assigned to a specific processor
• Dynamic scheduling
– Number of threads can be altered during course of execution

We will consider only shared-memory multiprocessors.
Salient features:
• One or more caches: cache affinity is important
• Semaphores/locks are typically implemented as spin-locks: preemption during critical sections is costly
• Central queue – the queue can become a bottleneck
• Distributed queues – load must be balanced between queues
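The central ready queue above must be accessed in a critical section, typically guarded by a spin-lock. A minimal sketch in Python (the `SpinLockQueue` class and its busy-wait loop are illustrative inventions; real kernels use an atomic test-and-set instruction rather than a `threading.Lock`):

```python
import threading
from collections import deque

class SpinLockQueue:
    """Central ready queue guarded by busy waiting (a spin lock).
    Illustrative sketch only: a kernel would use atomic test-and-set."""
    def __init__(self):
        self._flag = threading.Lock()  # stands in for an atomic flag
        self._q = deque()

    def _acquire(self):
        # Busy-wait: keep retrying until the lock is free.
        while not self._flag.acquire(blocking=False):
            pass

    def push(self, task):
        self._acquire()
        try:
            self._q.append(task)
        finally:
            self._flag.release()

    def pop(self):
        self._acquire()
        try:
            return self._q.popleft() if self._q else None
        finally:
            self._flag.release()

rq = SpinLockQueue()
rq.push("T1"); rq.push("T2")
print(rq.pop())  # T1 (FIFO order)
```

With many CPUs hammering one such queue, lock contention is exactly why the central queue becomes a bottleneck and distributed per-processor queues are attractive.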

REAL-TIME SCHEDULING

• Many real time systems run a known collection of tasks. The execution time of the tasks is frequently known ahead of time.


• Tasks have deadlines by which they must complete.


• If a task that runs for 3 time units must be done at time 10, it must start by time 7.


• If two tasks that run for 3 time units each must be done at time 10 on one processor, one must start by time 4.
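The start-time arithmetic above generalises: for tasks run back-to-back on one processor with a common deadline, the latest safe start time is the deadline minus the total run time. A small helper (`latest_start` is a hypothetical name, not from the notes):

```python
def latest_start(deadline, run_times):
    """Latest time the first task may start so that all tasks,
    run back-to-back on one CPU, finish by the common deadline."""
    return deadline - sum(run_times)

print(latest_start(10, [3]))     # 7: a 3-unit task due at time 10
print(latest_start(10, [3, 3]))  # 4: two 3-unit tasks due at time 10
```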


Hard real-time systems – required to complete a critical task within a guaranteed amount of time.


Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

Correctness of the system may depend not only on the logical result of the computation but also on the time when these results are produced.

example:

– Tasks attempt to control events or to react to events that take place in the outside world

– These external events occur in real time and processing must be able to keep up

– Processing must happen in a timely fashion

• neither too late, nor too early

• EDF – Earliest Deadline First Scheduling
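Under EDF, whenever a scheduling decision is made, the ready task with the earliest deadline runs first. A minimal non-preemptive sketch (the `edf_schedule` helper and its task tuples are assumptions for illustration, not a full EDF feasibility analysis):

```python
import heapq

def edf_schedule(tasks):
    """Non-preemptive EDF sketch.
    tasks: list of (name, release_time, run_time, deadline).
    Returns (execution order, whether every deadline was met)."""
    tasks = sorted(tasks, key=lambda t: t[1])  # by release time
    heap, order, t, i, ok = [], [], 0, 0, True
    while i < len(tasks) or heap:
        # Admit every task released by the current time.
        while i < len(tasks) and tasks[i][1] <= t:
            name, rel, c, d = tasks[i]
            heapq.heappush(heap, (d, name, c))  # earliest deadline first
            i += 1
        if not heap:
            t = tasks[i][1]  # CPU idles until the next release
            continue
        d, name, c = heapq.heappop(heap)
        t += c  # run the chosen task to completion
        order.append(name)
        ok = ok and (t <= d)
    return order, ok

order, ok = edf_schedule([("A", 0, 2, 8), ("B", 0, 1, 3), ("C", 2, 2, 7)])
print(order, ok)  # ['B', 'A', 'C'] True
```

B runs first despite A arriving at the same time, because B's deadline (3) is earlier than A's (8).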


• Static table-driven
– A schedule table, produced by static analysis, determines at run time when each task begins execution
• Static priority-driven preemptive
– Traditional priority-driven scheduler is used
• Dynamic planning-based
– Feasibility determined at run time
• Dynamic best effort
– No feasibility analysis is performed

CPU SCHEDULING ALGORITHMS
First-Come, First-Served (FCFS) Scheduling
This non-preemptive scheduling algorithm follows the first-in, first-out (FIFO) policy. As each process becomes ready, it joins the ready queue. When the current running process finishes execution, the oldest process in the ready queue is selected to run next.
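The cost of FCFS shows up in waiting times. A small simulation, assuming (for illustration) that all processes arrive at time 0 in the listed order; the burst values are the classic 24/3/3 textbook example:

```python
def fcfs_waits(bursts):
    """Waiting times and average wait under FCFS for processes
    that all arrive at time 0, served in list order."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)  # each process waits for all earlier bursts
        t += b
    return waits, sum(waits) / len(waits)

waits, avg = fcfs_waits([24, 3, 3])
print(waits, avg)  # [0, 24, 27] 17.0
```

A long burst arriving first makes every later process wait behind it (the "convoy effect").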
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time
Two schemes:
• Nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
• Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF)
SJF is optimal – it gives the minimum average waiting time for a given set of processes
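Non-preemptive SJF on the same bursts as the FCFS example shows the improvement; all processes are assumed to arrive at time 0 (the `sjf_waits` helper is an illustrative sketch):

```python
def sjf_waits(bursts):
    """Waiting times and average wait under non-preemptive SJF,
    all processes arriving at time 0."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, t = [0] * len(bursts), 0
    for i in order:
        waits[i] = t
        t += bursts[i]
    return waits, sum(waits) / len(waits)

waits, avg = sjf_waits([24, 3, 3])
print(waits, avg)  # [6, 0, 3] 3.0
```

Running the two short jobs first cuts the average wait from 17.0 (under FCFS) to 3.0.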
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• RR is designed specifically for time-sharing systems.
• A small unit of time, called a time quantum or time slice, is defined.
• The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
• Performance
– q large ⇒ RR behaves like FIFO
– q small ⇒ q must be large with respect to the context-switch time, otherwise overhead is too high
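The circular-queue mechanics above can be sketched directly; this assumes all processes arrive at time 0 and ignores context-switch overhead (the `round_robin` helper is an illustrative invention):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin completion times for processes arriving at time 0.
    Returns completion time per process, indexed like `bursts`."""
    q = deque((i, b) for i, b in enumerate(bursts))
    t, done = 0, [0] * len(bursts)
    while q:
        i, rem = q.popleft()
        run = min(quantum, rem)  # run for at most one quantum
        t += run
        if rem > run:
            q.append((i, rem - run))  # preempted: back of the queue
        else:
            done[i] = t
    return done

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```

The short jobs finish quickly (at 7 and 10) instead of waiting behind the 24-unit job, which is the point of RR for time-sharing.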
Shortest Remaining Time (SRT)
This preemptive scheduling algorithm favors the process with the shortest expected remaining processing time. As each process becomes ready, it joins the ready queue. Its arrival triggers a scheduling decision that may preempt the currently running process back into the ready queue. The process in the ready queue with the shortest remaining service time is selected to run next.
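A unit-step simulation of SRT makes the preemption visible; the `srt_waits` helper, the arrival times, and the bursts are all illustrative assumptions:

```python
def srt_waits(arrivals, bursts):
    """Waiting times under SRT (preemptive SJF), simulated one
    time unit at a time so arrivals can preempt the running process."""
    n = len(bursts)
    rem, t, waits = list(bursts), 0, [0] * n
    while any(rem):
        ready = [i for i in range(n) if arrivals[i] <= t and rem[i] > 0]
        if not ready:
            t += 1  # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: rem[j])  # shortest remaining time runs
        rem[i] -= 1
        t += 1
        for j in ready:
            if j != i:
                waits[j] += 1  # everyone else in the ready queue waits
    return waits

# P0 arrives at 0 (burst 8); P1 at 1 (burst 4); P2 at 2 (burst 2).
print(srt_waits([0, 1, 2], [8, 4, 2]))  # [6, 2, 0]
```

P0 starts first but is preempted as soon as P1 arrives with a shorter remaining time, and P1 is in turn preempted by P2; the longest job absorbs all the waiting.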