An operating system schedules preemption events to guarantee that equal-priority processes receive round-robin service, as discussed earlier. These events are scheduled at context switches and ensure that a process cannot run continuously for longer than a time determined by the granularity of preemption. Thus, if a process becomes current at time `t', a preemption event is scheduled for time `t + QUANTUM', where QUANTUM is a constant whose value determines the granularity of preemption. A preemption event is cancelled if a process relinquishes control of the CPU before its time expires.
An operating system has to be careful in selecting a suitable granularity of preemption. Setting it too low, say one tenth of a second, would be inefficient since the processor would spend too much time rescheduling. Setting it too high, say 10 seconds, would allow compute-bound jobs to `hog' the computer.
In Xinu the granularity of preemption is 1 second.
Older versions of Unix also used this granularity, but BSD uses 0.1 second.
Few processes, however, use their allocated quanta, since they make frequent calls to I/O and other routines that call resched.