Chapter 4: Support for processes

Concept of process: an active element which causes the modules of a system to be executed dynamically; it is the unit of resource allocation (process) and the unit of scheduling (thread or process)
A function of the OS is to create the abstraction of a set of concurrent processes.
(program = a static piece of text; process = the active/dynamic execution of that program, which requires resources (processor, …))

4.1 Use of processes in systems

allocate a process wherever there is a source of asynchronous events (e.g., each hardware device)
assign at least one process to each independent unit of work comprising a loaded program, data and library. Such a process will make system calls on the OS to request service.
figure: two active processes assigned to execute the static modules. One executes the user program and makes library and system calls to do I/O; the other programs the device, taking requests from user programs and transferring data to or from the device. Assumption in the figure: a user process enters the OS (with a change of privilege) and executes the top level of the OS.

4.2 Processes and processors

There are often far fewer processors than the processes we should like to use in a system. The OS must perform the function of sharing the real processors among the processes => creating virtual processors: the OS simulates one processor per process.
figure: time graph, shows two device handler processes sharing a single processor. It also shows when their respective devices are active and when the associated interrupt service routines (ISRs) run on the processor. We assume that an ISR does not run as part of any process. Initially, process A runs, starts its device then gets to a point where it can do no more until the device completes its activity. It must be possible for that process to indicate this fact to the process management function, shown here as WAIT. When a process executes WAIT it changes its state from running to blocked. Process B is then run on the processor, starts its device then WAITs. If only A and B are available for running then the system becomes idle. In practice, there may be some other process that can be run. The next event shown in the graph is that A’s device signals an interrupt which is taken by the processor and A’s ISR is entered. This makes A able to run again – state changes from blocked to runnable, then running when it is selected to run on the processor. While A is running, B’s device signals an interrupt which is taken by the processor, interrupting A. B’s ISR is executed and finishes at time T.
non-preemptive scheduling: processes only lose control of the processor voluntarily. Advantage: a process can be assumed to have tidied up its data structures, and not be hogging some valuable resource if it is allowed to finish its current task. Disadvantage: slow response to hardware events. Non-preemptive scheduling is useless if fast response to unpredictable events is required.
(assumption: only 1 processor, so A and B cannot be running at the same time; WAIT: running state => blocked state; the ISR takes B from blocked back to runnable – non-preemptive scheduling: the running process simply continues – an ISR is a piece of code, not a process, so A stays running in the meantime; code is executed in the context of a process => A & that code form 1 process, not two separate ones)

4.3 Process state

figure: running on a processor: RUNNING state; able to run on a processor: RUNNABLE state; unable to run on a processor because it is awaiting some event and cannot proceed until that event arrives: BLOCKED state.
preemptive scheduling: process is forcibly preempted from its processor
(running => runnable: set the current process aside for a moment to give another one the chance to run)

4.3.1 Saving process state
contents of any processor registers must be saved
OS keeps a data block, a process descriptor (PD), for each process that it supports. When a process is preempted or blocks voluntarily, the process state is saved in the process descriptor, together with other information such as the process's state (blocked/runnable) and its start address.
The information that must be in the process descriptor is that which is needed when the process is not in main memory. (the process being interrupted: record its state so it can later resume running from that exact point; even while a process is blocked, its descriptor can change => privileged: only the OS can do this, it is not accessible to user processes)
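A minimal C sketch of what such a PD might contain; the field names, the three-state enum and the event bit-maps (used from section 4.4 onwards) are illustrative assumptions, not any particular OS's layout:

    #include <stdint.h>

    /* Hypothetical process descriptor (PD) -- field names are
     * illustrative and do not come from any particular OS. */
    typedef enum { RUNNING, RUNNABLE, BLOCKED } proc_state_t;

    #define NUM_REGS 16

    typedef struct process_descriptor {
        int          pid;             /* process identifier                   */
        proc_state_t state;           /* RUNNING / RUNNABLE / BLOCKED         */
        uint32_t     regs[NUM_REGS];  /* saved general-purpose registers      */
        uint32_t     pc, sp;          /* saved program counter, stack pointer */
        uint32_t     start_address;   /* where the process's code begins      */
        uint32_t     events_awaited;  /* bit-map: events the process waits on */
        uint32_t     events_pending;  /* bit-map: wake-up waiting (see 4.4)   */
        int          priority;        /* scheduling priority                  */
        struct process_descriptor *link; /* chains PDs into run queues (4.5)  */
    } pd_t;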

4.3.2 Context switching
context switching: the process of saving the state of a process and setting up the state of another
the instructions that are executed in performing these operations and the frequency at which context switching happens are an overhead at the lowest level of a system.
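As a sketch, a context switch then copies the machine registers out of the outgoing PD and in from the incoming one, building on the pd_t above; save_registers()/load_registers() are hypothetical stand-ins for what on a real machine must be assembler:

    /* Context-switch sketch.  The register save/restore must really be
     * done in assembly; these two externs are hypothetical primitives
     * standing in for it. */
    extern void save_registers(pd_t *pd);  /* CPU registers -> pd         */
    extern void load_registers(pd_t *pd);  /* pd's saved registers -> CPU */

    pd_t *current;  /* PD of the process now running on this processor */

    void context_switch(pd_t *next)
    {
        save_registers(current);        /* preserve outgoing process state */
        if (current->state == RUNNING)  /* preempted rather than blocked   */
            current->state = RUNNABLE;
        next->state = RUNNING;
        current = next;
        load_registers(next);           /* resume the incoming process     */
    }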

4.4 Synchronizing with the hardware: events and the WAIT operation

WAIT: allows processes to synchronize with the hardware; a process may wait for one specific event or for any one of a set of events
possible hardware events known at design time => set of events can be encoded as an integer, representing a bit-map.
1) the process descriptor could encode the reason the process is blocked and the events of interest to this process that have occurred
2) one way of implementing process-event synchronization. Assumption: it is well known which system processes will wait on which events (e.g., a dedicated device handler that synchronizes with a single device). The figure represents process descriptors as objects – in most OSs a procedural, not object-oriented, approach is taken. Figure: process level and implementation level for a WAIT for a specific event. When the process invokes OS.WAIT(AnEvent), control is transferred to the OS method WAIT. The OS object (not shown) in turn invokes WAIT(AnEvent) on the process descriptor for the current process. Assuming that the event of interest has not already occurred, the process state is set to blocked and the event it is waiting for is recorded in the ‘events awaited by process’ slot. Some other process is then run. When the event occurs, we assume that a method SIGNAL(anEvent) is invoked on the process descriptor, which causes the process state to be set to runnable and the reason for waiting to be removed. The process will be selected to run again in due course. => the system design is such that a single, well-known process waits for each event, so SIGNAL is invoked on the appropriate process descriptor.
wake-up waiting: if the event has already occurred when the process requests to wait for it (its occurrence having been recorded), the process can continue running and never needs to enter the blocked state.
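A sketch of WAIT and SIGNAL as PD methods, using the event bit-maps from the pd_t sketched in 4.3.1; events_pending records wake-up waitings. Names are illustrative, and this version is deliberately not yet atomic – see 4.4.1:

    /* WAIT/SIGNAL sketch on a process descriptor.  Each event is one
     * bit; events_pending implements wake-up waiting.  NOT yet atomic --
     * the race this opens up is the subject of section 4.4.1. */
    extern void schedule(void);   /* hypothetical: pick another process */

    void pd_wait(pd_t *pd, uint32_t event)
    {
        if (pd->events_pending & event) {   /* wake-up waiting: event   */
            pd->events_pending &= ~event;   /* already occurred, so the */
            return;                         /* process never blocks     */
        }
        pd->events_awaited |= event;        /* record reason for waiting */
        pd->state = BLOCKED;
        schedule();                         /* run some other process    */
    }

    void pd_signal(pd_t *pd, uint32_t event)
    {
        if (pd->events_awaited & event) {   /* process is waiting for it */
            pd->events_awaited &= ~event;
            pd->state = RUNNABLE;           /* runs again in due course  */
        } else {
            pd->events_pending |= event;    /* record as wake-up waiting */
        }
    }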

4.4.1 Race conditions
a race condition exists whenever the relative timing of a set of events affects the outcome
If SIGNAL runs between WAIT's test for the event and the setting of the process state to blocked, the signal is lost. The process will wait indefinitely for the event if this is allowed to happen => the WAIT and SIGNAL process descriptor methods must be implemented as atomic operations.
(the event has already occurred but the process does not know it => the process keeps waiting for that event and thus stays blocked => WAIT and SIGNAL must not overlap!!! (the problem: synchronization between processes))
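On a uniprocessor, one way to make the test-and-block atomic with respect to a SIGNAL issued from an ISR is to disable interrupts around the critical region; a sketch under that assumption, with hypothetical interrupt primitives (a multiprocessor would additionally need a lock):

    /* Atomic variant of pd_wait: interrupts are disabled so an
     * ISR-driven SIGNAL cannot slip in between the test and the block.
     * The two interrupt primitives are hypothetical. */
    extern unsigned disable_interrupts(void);           /* returns old flags */
    extern void     restore_interrupts(unsigned flags);

    void pd_wait_atomic(pd_t *pd, uint32_t event)
    {
        unsigned flags = disable_interrupts();
        if (pd->events_pending & event) {   /* event already happened:     */
            pd->events_pending &= ~event;   /* consume the wake-up waiting */
            restore_interrupts(flags);
            return;
        }
        pd->events_awaited |= event;
        pd->state = BLOCKED;
        schedule();                 /* interrupts are re-enabled when this */
        restore_interrupts(flags);  /* process eventually resumes here     */
    }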

4.4.2 Event and process objects
separate event objects from process descriptors
event object: SIGNAL, WAIT; process objects: BLOCK, UNBLOCK
when a process executes OS.WAIT, the OS object invokes WAIT on the event object. The process id is recorded in the event data structure and the BLOCK method is invoked on the relevant process descriptor, causing the representation of the process state to be changed from running to blocked. The SCHEDULE method could then be called to decide which process to run next. When an event occurs, the SIGNAL method is invoked on the event object. If one or more processes are recorded as waiting for the event, one of them can be made runnable by invoking the UNBLOCK method on the relevant process descriptor. Again, scheduling is invoked after this. If no process is awaiting the event, its occurrence is recorded as a wake-up waiting in the event data structure.
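A sketch of this split: the event object keeps its own waiter record and wake-up waiting flag, while BLOCK/UNBLOCK remain operations on the PD. Single-waiter case for brevity; names are illustrative and pd_t/schedule() come from the earlier sketches:

    /* Event object separated from the process descriptor (4.4.2).  Only
     * a single waiter is recorded here; a list of waiters would support
     * "one of them can be made runnable". */
    typedef struct {
        pd_t *waiter;         /* process waiting on this event, or NULL */
        int   wakeup_waiting; /* event occurred while nobody waited     */
    } event_t;

    void event_wait(event_t *e, pd_t *self)   /* OS.WAIT -> event object */
    {
        if (e->wakeup_waiting) {              /* event already occurred  */
            e->wakeup_waiting = 0;
            return;
        }
        e->waiter = self;                     /* record the process id   */
        self->state = BLOCKED;                /* BLOCK on the PD         */
        schedule();                           /* decide who runs next    */
    }

    void event_signal(event_t *e)
    {
        if (e->waiter != NULL) {
            e->waiter->state = RUNNABLE;      /* UNBLOCK the waiting PD  */
            e->waiter = NULL;
            schedule();
        } else {
            e->wakeup_waiting = 1;            /* record wake-up waiting  */
        }
    }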

4.5 The process data structure

OS must handle many processes and will maintain a data structure holding their descriptors. The OS should be able to choose and run the highest-priority process as quickly as possible. The selection policy determines which process will be selected and is effected by the scheduling algorithm.
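A minimal selection sketch, assuming a fixed table of PDs ordered with the highest priority at the top (cf. figure 1 in 4.6): scan from the top and take the first runnable entry. NPROCS and the ordering are assumptions for illustration:

    /* Selection-policy sketch: scan a priority-ordered PD table from
     * the top for the first runnable process. */
    #define NPROCS 64
    pd_t pd_table[NPROCS];      /* ordered: highest priority first */

    pd_t *select_next(void)
    {
        for (int i = 0; i < NPROCS; i++)
            if (pd_table[i].state == RUNNABLE)
                return &pd_table[i];
        return NULL;            /* no runnable process: system idles */
    }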

4.6 Scheduling: General approaches

scheduling: selecting a process to run
dispatching: setting up the selected process's state in the processor registers (first scheduling, then dispatching)
unary scheduling: system is idle, interrupt frees a process
binary scheduling: process is running, interrupt frees another process (or running process wakes up another one)
general scheduling: when the process that is running terminates or blocks, a general schedule must be carried out (the process stops and all the others are then examined to see which one gets priority)
process behaviour and priority
OS processes are put into permanent, static fixed-priority ordering.
figure 1: possible data structure: array or table of PDs
example of how information on processes might be held in a process mgmt. module; system processes are handled separately in this way => they are in fixed-priority order and the OS's scheduler will search from the top of the table for a runnable system process.
multi-user system: the nature of application processes is unknown to the OS and their behaviour will change during a typical program run => a time slice is allocated to a process when it begins to run
figure 2: one or more queues of runnable processes may be chained through the user processes in this table, using the link fields in the PDs. When a process becomes runnable it is added to the end of the appropriate queue
figure 3: alternative view of high- and low-priority run queues. When a process becomes runnable after being blocked, it is allocated to the high-priority run queue. If it continues to block before using up its time slice, it will always be scheduled from the high-priority queue. If a process runs to the end of its time slice, it is moved to the end of the low-priority queue. Only if the high-priority queue is empty is the low-priority queue examined (see the sketch after these notes).
figure 4: system processes: highest priority, handled in static, fixed-priority order (not shown)
high-priority queue: for user processes that blocked, waiting for certain events (like page fault) and are now runnable
medium-priority: user processes that were blocked, waiting for resources which are less critical for system performance and are now runnable
low and lowest: for processes that use up their time slice; two queues allow an allocation of two priority levels for processes when they are not doing I/O
(I/O-bound processes are given priority because they form the bottleneck
=> if we gave a compute-bound process priority, it would monopolize the CPU and never release it
=> runnable is now split into high & low priority; the more time a process uses, the lower its priority (multi-level feedback), e.g. Windows 7 has 32 priority levels)
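A sketch of the two-level run-queue discipline of figure 3, with queues chained through the PD link fields; the queue layout and function names are illustrative:

    /* Two-level run-queue sketch (figure 3): a process that blocked
     * before its slice expired re-enters at high priority; one that
     * used its whole slice drops to the back of the low-priority queue. */
    typedef struct { pd_t *head, *tail; } queue_t;

    queue_t high_q, low_q;

    void enqueue(queue_t *q, pd_t *pd)        /* append at the tail */
    {
        pd->link = NULL;
        if (q->tail) q->tail->link = pd; else q->head = pd;
        q->tail = pd;
    }

    pd_t *dequeue(queue_t *q)                 /* NULL if queue empty */
    {
        pd_t *pd = q->head;
        if (pd) {
            q->head = pd->link;
            if (q->head == NULL) q->tail = NULL;
        }
        return pd;
    }

    void runnable_after_block(pd_t *pd)       /* blocked, event arrived */
    {
        pd->state = RUNNABLE;
        enqueue(&high_q, pd);                 /* back in at high priority */
    }

    void time_slice_expired(pd_t *pd)         /* used the whole slice */
    {
        pd->state = RUNNABLE;
        enqueue(&low_q, pd);                  /* demoted to low priority */
    }

    pd_t *select_user_process(void)
    {
        pd_t *pd = dequeue(&high_q);          /* low queue examined only  */
        return pd ? pd : dequeue(&low_q);     /* if the high one is empty */
    }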

4.7 Scheduling for shared-memory multiprocessors

A processor, on becoming free, should execute code such that it first examines its local run queue and, if that contains no runnable processes, takes a process from the global run queue(s).
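A sketch of that policy, reusing the queue helpers from the sketch in 4.6; in a real system the shared global queue would also need a lock, omitted here for brevity:

    /* Idle-processor policy sketch: prefer the local run queue
     * (processes that may still have useful state in this processor's
     * cache and address translation unit), fall back to the global one. */
    queue_t global_q;

    pd_t *next_for_processor(queue_t *local_q)
    {
        pd_t *pd = dequeue(local_q);
        if (pd == NULL)
            pd = dequeue(&global_q);
        return pd;                  /* NULL => this processor idles */
    }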
an app should be able to indicate to the OS that it has a number of processes and to give their relative priorities for scheduling purposes. It is then possible to run them simultaneously on the processors of a multiprocessor.
need for an inter-processor interrupt: example: a process A running on a processor P makes runnable a process which is to run on processor Q. An inter-processor interrupt from P to Q can start off the required context switch.
one approach to process scheduling for shared-memory multiprocessors is to allocate the process at the top of the run queue to the first processor that becomes free. This ignores the fact that a process may have run recently on a particular processor and may have built useful state in both the cache and the address translation unit. In the case of a page fault, e.g., the process should continue on the same processor.
It might be appropriate to allow a process to busy wait on a page fault: keep the process scheduled and running. The waiting process might execute in a tight loop, or spin, until it is interrupted by the arrival of the required page. (busy waiting: you keep waiting for the event but keep running, even though you are effectively doing nothing – so not in blocked state; better when a context switch is too expensive)
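A minimal spin-wait sketch: the process keeps the processor and polls a flag that the page-arrival interrupt handler is assumed to set; the flag name is hypothetical:

    /* Busy-waiting (spinning) sketch: the process stays RUNNING and
     * polls until the interrupt handler sets page_arrived.  volatile
     * prevents the compiler from caching the flag in a register. */
    volatile int page_arrived = 0;  /* set by the hypothetical page ISR */

    void spin_on_page_fault(void)
    {
        while (!page_arrived)
            ;                       /* tight loop: cheaper than two   */
        page_arrived = 0;           /* context switches if the page   */
    }                               /* arrives quickly                */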
relationships between groups of processes: if certain processes are designed to work closely together => ideal arrangement: schedule them simultaneously on the multiple processors of the multiprocessor
the counter-argument is that if the processes are sharing data it is better for them to run on the same processor so that the same cache is used (cache coherency problem)

4.8 Process scheduling to meet real-time requirements

Preemptive scheduling with carefully chosen process priorities may not be sufficient to ensure that a number of processes meet their timing requirements in a real-time system.
two kinds of real-time processes: periodic processes, which must run at regular intervals, and aperiodic processes, which must respond to unpredictable events in a specified time.
figure 1: two processes A and B, each with a known computation time (workA and workB) and a known length of time in which the work must be completed. The graph shows this as a deadline (D), work time (W) and slack time (S) for each process.

Figure 2: first graph: a possible sequence of events for the two processes. First A’s event occurs and A is scheduled. Before A’s work is complete, B’s event occurs. The scheduler now has to decide whether to schedule A or B. The scheduler has the information to compute the deadlines of A and B and the remaining slack time of A and B.
earliest deadline first policy: process B would be selected. B then completes before its deadline by the value of its slack time.
alternative strategy: first schedule the process with the least remaining slack time, provided that they can both meet their deadlines. => third graph: least slack first policy for A and B.
fourth graph: event for a third process C occurring.
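With deadline D, remaining work W and slack S = D − now − W per process, the two policies compare different quantities; a sketch with illustrative names:

    /* EDF vs. least-slack-first sketch.  Each candidate carries a
     * deadline and an estimate of remaining work;
     * slack = deadline - now - work.  Names are illustrative. */
    typedef struct { long deadline; long work; } rt_proc_t;

    /* Earliest deadline first: pick the smaller deadline. */
    rt_proc_t *edf(rt_proc_t *a, rt_proc_t *b)
    {
        return (a->deadline <= b->deadline) ? a : b;
    }

    /* Least slack first: pick the process with less room to spare. */
    rt_proc_t *lsf(rt_proc_t *a, rt_proc_t *b, long now)
    {
        long slack_a = a->deadline - now - a->work;
        long slack_b = b->deadline - now - b->work;
        return (slack_a <= slack_b) ? a : b;
    }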
Periodic schedules
known number of processes
schedule can be determined when system is designed
suitable value for scheduling period must be chosen
each process: priority proportional to frequency (Rate Monotonic schedule)
EDF, LSF can also be used, deadline being taken as the end of the current period and the slack computed from this value.
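A rate-monotonic assignment can be sketched as sorting the periodic processes by period, shortest first, so that priority is proportional to frequency; the struct and the priority encoding are illustrative assumptions:

    /* Rate monotonic sketch: shorter period (higher frequency) =>
     * higher priority.  Smaller prio numbers mean higher priority. */
    #include <stdlib.h>

    typedef struct { const char *name; long period_ms; int prio; } periodic_t;

    static int by_period(const void *x, const void *y)
    {
        const periodic_t *a = x, *b = y;
        return (a->period_ms > b->period_ms) - (a->period_ms < b->period_ms);
    }

    void assign_rate_monotonic(periodic_t *p, int n)
    {
        qsort(p, n, sizeof *p, by_period);  /* shortest period first */
        for (int i = 0; i < n; i++)
            p[i].prio = i;                  /* 0 = highest priority  */
    }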
Aperiodic processes
guaranteeing response to unpredictable events is more difficult
EDF, LSF

(figure: you must be able to give voice priority; voice is time-critical)

4.8.1 System structure and response to events
some OSs are executed procedurally, i.e., OS code is executed ‘in-process’. When a user process makes a system call it becomes a system process and enters the OS. Such systems tend to have very few dedicated system processes, and much of the work that is done by one process may be on behalf of others. Example: an interrupt is serviced in the context of whatever process is running when it occurs. Before returning to user mode, the process executes system code to determine whether any process is due to be woken up.
the alternative design approach is to have the OS executed by dedicated processes and to request service from them. (procedural OS: a large part of the OS code is executed in the context of the user processes that make calls into the OS)

4.10 OS structure and placement of processes

figure: major components of a traditional, closed OS; gives a possible placement of processes to achieve dynamic execution of the system. Processes are allocated as follows:
* a single process is allocated to each user-level activity: a user program execution or a command execution
* process is allocated to handle each device instance. Such a process will wait for and respond to device interrupts
* some memory mgmt. processes are indicated to respond to address translation faults, to allocate main memory to processes and to move processes into and out of main memory as necessary (swapping)
* certain modules: no permanent allocation of processes, the code is executed in the context of user processes, as a result of a system call => state change from user state to system state
* the process mgmt. module cannot itself be implemented as processes
(process mgmt.: the scheduler cannot itself be a process => it schedules when which process is executed, so if it were itself a process, who would then schedule the scheduler? – scheduling is part of process mgmt.)

4.11 Multi-threaded process implementation

User threads share the address space of the app and all the resources allocated to the app
Some OSs allow user threads to be registered with them as schedulable kernel threads. Such OSs support multi-threaded processes.
the context switching overhead is relatively low
some OSs do not support multi-threaded processes and each process is defined to have a separate address space => heavyweight process model: the overhead on a context switch is high. If an OS is to run on a shared-memory multiprocessor, it is important that multi-threaded processes are supported.
(it is really the threads of a process that are scheduled (so they can be executed in parallel on multiple processors) => for a process the whole environment (context) must be set up, whereas threads simply reuse it => scheduling & dispatching is therefore faster for threads
old OSs do not know multi-threading => they see no threads, only processes => heavyweight process model)
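A minimal, runnable illustration of a multi-threaded process using POSIX threads: both threads run in the same address space and therefore see the same counter; whether they actually run in parallel depends on the OS mapping them onto kernel threads. Names are illustrative:

    /* Two threads of one process sharing its address space.  The mutex
     * protects the shared counter. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;      /* in the shared address space */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&m);         /* threads share data, so   */
        shared_counter++;               /* access must be protected */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);  /* prints 2 */
        return 0;
    }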

4.12 Processes in languages, runtime systems and OS

when a sequential programming language is used, one OS process is created to run the program (a). When a concurrent programming language is used, the programmer may set up a number of concurrent processes (to be implemented as user threads) (b), (c). If these user threads are made known to the OS as kernel threads, through a system call, we have the situation shown in (c).
the programs indicated in (a), (b) and (c) will run as OS processes, each in a separate address space. If the user threads can be registered with the OS as kernel threads, they are scheduled independently and can WAIT for synchronous I/O separately. While one blocks, waiting for I/O, another thread of the same process can run. This is essential if the concurrent system is to run on a multiprocessor.
(a: each program = 1 process (no threads); b: multiple threads are possible, but the OS does not see this and treats it as 1 process (execution on 1 processor); c: multiple threads, and the OS sees them as such)