Multiprocessor Scheduling
• In the simplest scheme, very little extra work is needed to schedule a multiprocessor system:
• Whenever a CPU needs a process to run, it takes the next task from the ready list.
• The scheduling queue must be accessed inside a critical section; busy waiting (a spin lock) is usually used.
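As a minimal sketch of this idea (the names pick_next_task, ready_head and ready_lock are illustrative, not taken from the source), each CPU busy-waits on a spin lock and then pops the next task from a single shared ready list:

```c
#include <stdatomic.h>

struct task { struct task *next; /* register state, address space, ... */ };

static struct task *ready_head;                   /* shared central ready list */
static atomic_flag ready_lock = ATOMIC_FLAG_INIT;

static void spin_lock(atomic_flag *l)   { while (atomic_flag_test_and_set(l)) { /* busy wait */ } }
static void spin_unlock(atomic_flag *l) { atomic_flag_clear(l); }

/* Called by any CPU that needs a process to run. */
struct task *pick_next_task(void)
{
    spin_lock(&ready_lock);            /* enter the critical section (busy waiting) */
    struct task *t = ready_head;
    if (t)
        ready_head = t->next;          /* take the next task off the ready list */
    spin_unlock(&ready_lock);          /* leave the critical section */
    return t;                          /* NULL means this CPU goes idle */
}
```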
• Load sharing
– Processes are not assigned to a particular processor
• Gang scheduling
– A set of related threads is scheduled to run on a set of processors at the same time (see the sketch after this list)
• Dedicated processor assignment
– Threads are assigned to a specific processor
• Dynamic scheduling
– Number of threads can be altered during course of execution
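To make gang scheduling concrete, here is a small illustrative sketch (the slot table, thread ids and function names are invented for illustration, not part of the source): scheduling is viewed as a time-slot × CPU matrix, and every thread of a gang is placed in the same time slot so that the whole gang runs on its set of processors simultaneously.

```c
#include <stdio.h>

#define NCPU   4
#define NSLOTS 3

/* slot_table[slot][cpu] = thread id running in that slot, -1 = idle CPU */
static int slot_table[NSLOTS][NCPU] = {
    { 10, 11, 12, 13 },   /* slot 0: gang A, 4 related threads        */
    { 20, 21, -1, -1 },   /* slot 1: gang B, 2 threads (2 CPUs idle)  */
    { 30, 31, 32, -1 },   /* slot 2: gang C, 3 threads                */
};

/* On every timer tick all CPUs switch to the thread listed for the
 * current slot, so the threads of one gang always run at the same time. */
void on_timer_tick(int tick)
{
    int slot = tick % NSLOTS;
    for (int cpu = 0; cpu < NCPU; cpu++)
        printf("CPU %d runs thread %d during slot %d\n",
               cpu, slot_table[slot][cpu], slot);
}

int main(void)
{
    for (int tick = 0; tick < 3; tick++)
        on_timer_tick(tick);
    return 0;
}
```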
We will consider only shared-memory multiprocessors.
Salient features:
– One or more caches: cache affinity is important
– Semaphores/locks are typically implemented as spin locks: preemption during a critical section is costly
– Central queue: the single shared queue can become a bottleneck
– Distributed queues: require load balancing between the queues (sketched below)
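A minimal sketch of the distributed-queue alternative, using hypothetical names (rq, pick_next_task): each CPU has its own ready queue, which avoids the central bottleneck and favours cache affinity, and an idle CPU balances load by pulling a task from the most loaded remote queue. Per-queue locking is omitted for brevity.

```c
#include <stddef.h>

#define NCPU 4

struct task { struct task *next; };

struct run_queue {
    struct task *head;
    int          len;
    /* per-queue spin lock omitted for brevity */
};

static struct run_queue rq[NCPU];     /* one ready queue per CPU */

static struct task *pop(struct run_queue *q)
{
    struct task *t = q->head;
    if (t) { q->head = t->next; q->len--; }
    return t;
}

/* Prefer the local queue (preserves cache affinity); if it is empty,
 * pull from the most loaded remote queue to keep the load balanced. */
struct task *pick_next_task(int cpu)
{
    struct task *t = pop(&rq[cpu]);
    if (t)
        return t;

    int busiest = -1;
    for (int i = 0; i < NCPU; i++)
        if (i != cpu && rq[i].len > (busiest < 0 ? 0 : rq[busiest].len))
            busiest = i;

    return busiest >= 0 ? pop(&rq[busiest]) : NULL;   /* NULL => go idle */
}
```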