CPS 210 Midterm Exam

April 2, 2002

Answer all questions. Allocate your time carefully. Your answers will be graded on content, not style. Any kind of pseudocode is fine as long as its meaning is clear.

Part 1: Concurrency and Synchronization

1. Barrier. Barriers are useful for synchronizing threads, typically between iterations of a parallel program. Each barrier object is created for a specified number of “worker” threads and one “controller” thread. Barrier objects have the following methods:

Create (int n) -- Create a barrier for n workers.

Arrive () -- Workers call Arrive when they reach the barrier.

Wait () -- Block the controller thread until all workers have arrived.

Release () -- Controller calls Release to wake up blocked workers (all workers must have arrived).

Initially, the controller thread creates the barrier, starts the workers, and calls Barrier::Wait, which blocks it until all workers have arrived at the barrier. The worker threads do some work and then call Barrier::Arrive, which puts them to sleep. When all workers have arrived, the controller thread is awakened. The controller then calls Barrier::Release to awaken the worker threads so that they may continue past the barrier. Release implicitly resets the barrier so that released worker threads can block on the barrier again in Arrive. Show how to implement barriers using condition variables.
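For concreteness, here is one possible monitor-style sketch of the interface described above, using Python's threading.Condition in place of raw condition variables. The phase counter is one way (not the only way) to make the implicit reset in Release safe for reuse; all names here are illustrative, not part of the question.

```python
import threading

class Barrier:
    def __init__(self, n):                  # Create(n): barrier for n workers
        self.n = n
        self.arrived = 0                    # workers currently at the barrier
        self.phase = 0                      # incremented by each Release
        self.lock = threading.Lock()
        self.all_arrived = threading.Condition(self.lock)  # controller sleeps here
        self.may_proceed = threading.Condition(self.lock)  # workers sleep here

    def arrive(self):                       # called by each worker
        with self.lock:
            my_phase = self.phase
            self.arrived += 1
            if self.arrived == self.n:
                self.all_arrived.notify()   # last arrival wakes the controller
            while self.phase == my_phase:   # sleep until the controller releases
                self.may_proceed.wait()

    def wait(self):                         # called by the controller
        with self.lock:
            while self.arrived < self.n:
                self.all_arrived.wait()

    def release(self):                      # called by the controller
        with self.lock:
            self.phase += 1                 # reset the barrier for the next round
            self.arrived = 0
            self.may_proceed.notify_all()   # wake all blocked workers
```

Capturing the phase before sleeping guards against a fast worker re-entering arrive for the next round before slower workers from the previous round have left.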

2. Dynamic Race Detection. Eraser flags a potential race condition when it discovers that the locks-held sets for accesses to a given variable have an empty intersection.

(a) Define a race condition using the happens-before relationship.

(b) Show how a non-empty locks-held intersection prevents race conditions over a variable, according to your definition.

(c) Is this a necessary condition for a correctly synchronized program? Is it a sufficient condition? Illustrate your answer with examples where appropriate.
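For intuition, the lockset refinement behind Eraser's check can be sketched as follows. This is a simplification that ignores Eraser's per-variable state machine for initialization and read-sharing; the function name and representation are illustrative only.

```python
# Simplified lockset (Eraser-style) sketch: maintain a candidate set C(v)
# for each shared variable v, and refine it by intersection on every access.
candidate = {}  # variable name -> candidate lockset

def on_access(var, locks_held):
    """Record an access to var by a thread holding locks_held.
    Returns False if the candidate lockset has become empty (potential race)."""
    if var not in candidate:
        candidate[var] = set(locks_held)    # first access initializes C(v)
    else:
        candidate[var] &= set(locks_held)   # C(v) := C(v) intersect locks_held
    return len(candidate[var]) > 0
```

Two accesses that both hold lock A keep A in C(x) and raise no warning; a later access holding only lock B empties C(x), and the tool flags a potential race.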

Part 2: OS Structure and Models

3. Multics emphasized shared virtual memory segments as a basis for data exchange and code sharing. Early versions of Unix omitted support for virtual memory sharing, but it later crept back in the form of features for mapped files, shared libraries, and shared-memory IPC. Briefly outline extensions to a simple virtual memory kernel (such as early Unix or Nachos) to support virtual memory sharing. Ignore addressing issues, such as the scheme for segmented addressing or dynamic linking in Multics. Instead, write two or three sentences on each of the following: (a) page reference tracking, (b) backing storage management, (c) page fault handling, (d) the system call interface, and (e) the mechanics of page eviction.

4. In class we discussed several research systems that offer an alternative to the classical monolithic kernel structure, in which the OS kernel manages system resources according to predefined policies (e.g., Unix or Multics). Notable examples are: microkernels (Mach or V), SPIN, Exokernel, and the SEDA service architecture. Rank these alternatives with respect to their fulfillment of each of the goals listed below. Your answer will consist of a rank list with five entries (include monolithic kernels) for each of the four goals; you may group approaches that are equivalent with respect to some goal by joining them with a line or bracket. The goals are: (a) support for safe sharing of data and resources across processes, (b) support for multiple OS personalities or APIs (e.g., Unix and Windows syscall interfaces in the same system), (c) support for application control over physical resource management, (d) overhead of system extensions.

Part 3: Let’s Get Quantitative

5. This question deals with the performance expected from a file server for a community of clients. The file server can be running either of two file systems: FFS (the standard Unix file system) or Network Appliance’s WAFL. The clients can be typical static Web servers (web workload) or standard “personal workstations” generating a mixed workload similar to the SPECsfs benchmark (call it a ws workload). Assume that the system has adequate CPU and network capacity to deliver the disk system performance (i.e., the disk system is the bottleneck).

Sketch rough graphs comparing the peak operation throughput of each workload scenario on each RAID configuration, for a fixed number of disks. The RAID configurations are RAID-0 (striping), RAID-1 (mirroring), RAID-4 (fixed parity), and RAID-5 (distributed parity). The workload scenarios are: (a) web-ffs, (b) ws-ffs, (c) web-wafl, (d) ws-wafl. Each graph should show offered load on the x-axis and delivered throughput on the y-axis, with four lines: give four graphs showing the performance of the four scenarios on each RAID type, and four graphs showing the performance of the four RAID types under each scenario, for a total of eight graphs.