Kernel I/O Subsystems
The kernel I/O subsystem improves the efficiency of a computer by providing several services that build on the hardware and device-driver infrastructure, for example I/O scheduling and error handling. It then presents the simplest interface possible to the rest of the system for communicating with the I/O devices. Due to the restriction on the length of the assignment, I will only cover I/O scheduling and caching and buffering.
I/O Scheduling
I/O scheduling is a service that handles I/O requests and determines the order in which they should be executed. This is done by maintaining a request queue for every I/O device and rearranging the requests according to a given scheme. Scheduling is needed to improve the performance of the system: if the requests were executed in the same order as the system calls were made, you would most likely get a lot of unnecessary waiting time. To see an example of that, consider two processes requesting access to the hard drive. If the disk arm is near the beginning of the disk, and the process that requested the disk first wants data at the end of the disk while the other process wants data near the beginning, serving the first request first moves the arm across the whole disk and back, whereas serving the nearer request first adds almost no extra arm movement.
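To make the reordering concrete, here is a minimal C sketch of one possible scheme, shortest-seek-time-first (SSTF), which always serves the queued request closest to the current arm position. The struct io_request type and its fields are my own illustration, not taken from any real kernel.

    #include <stdlib.h>

    struct io_request {
        int cylinder;                  /* target cylinder on the disk */
        struct io_request *next;       /* singly linked request queue */
    };

    /* Remove and return the queued request closest to the arm.
       The caller guarantees a non-empty queue. */
    struct io_request *pick_next(struct io_request **queue, int arm_pos)
    {
        struct io_request **best = queue, **p;
        for (p = &(*queue)->next; *p != NULL; p = &(*p)->next)
            if (abs((*p)->cylinder - arm_pos) <
                abs((*best)->cylinder - arm_pos))
                best = p;
        struct io_request *req = *best;
        *best = req->next;             /* unlink from the queue */
        return req;
    }

Note that SSTF reduces total arm movement but can starve requests far from the arm, which is why real schedulers often use elevator-style variants instead.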
Caching and Buffering
Caches and buffers are somewhat similar. A cache copies some data from a slow medium to a faster medium in order to make it available faster. For example, a program resides on disk, but when it is to be executed, it is copied to main memory to be “closer” to the CPU. A buffer, on the other hand, can hold the only copy of some data. It is used, for example, when a slow device is communicating with a fast device. The slow device sends data to the buffer, and when the buffer fills up or the data stream ends, the buffer sends all the data in one fast burst to the faster device. In this way, the faster device is used more efficiently.
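As a small illustration of the buffering idea, here is a C sketch in which a slow producer delivers one byte at a time and the whole block is passed on in a single operation once the buffer is full. The type names, the buffer size, and the use of stdout as a stand-in for the fast device are all assumptions made for the example.

    #include <stdio.h>
    #include <stddef.h>

    #define BUF_SIZE 4096

    struct buffer {
        unsigned char data[BUF_SIZE];
        size_t used;
    };

    /* Stand-in for one large transfer to the fast device. */
    static void buf_flush(struct buffer *b)
    {
        fwrite(b->data, 1, b->used, stdout);
        b->used = 0;
    }

    /* Called once per byte by the slow producer. */
    void buf_put(struct buffer *b, unsigned char byte)
    {
        b->data[b->used++] = byte;     /* cheap: just store the byte */
        if (b->used == BUF_SIZE)       /* buffer full: one fast transfer */
            buf_flush(b);
    }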
Transforming I/O Requests to Hardware Operations
In this section, I will try to explain what happens from the moment a process makes an I/O request to the actual execution of the desired operation. As an example, I will use a process that tries to read data from a file. The process provides the name of the file to be read, and from that name the system determines which device holds the file. In MS-DOS, the letter before the colon identifies the device (as in C:\foo.txt), while UNIX looks the file name up in a mount table to determine the device in question. A request is sent to the device driver, which processes it and communicates with the interrupt handler and the device controller, as shown in the picture below. The desired data is read into a buffer and made available to the requesting process, and control is returned to the process. If, however, the desired file has been opened recently, its data may still be in a kernel buffer. In that case, the kernel I/O subsystem can deliver the data without sending any further requests to the device driver.
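The following C sketch summarizes that control flow. All names here (cache_lookup, driver_read, kernel_read_block) are illustrative inventions rather than a real kernel API: the kernel first checks its buffers and only falls back on the device driver on a miss.

    #include <stddef.h>

    /* Stubs standing in for the real subsystems. */
    static void *cache_lookup(int dev, long block)
    {
        (void)dev; (void)block;
        return NULL;                   /* pretend the block is not cached */
    }

    static void *driver_read(int dev, long block)
    {
        (void)dev; (void)block;
        static char data[512];         /* pretend the driver filled this,
                                          signaled by the interrupt handler */
        return data;
    }

    void *kernel_read_block(int dev, long block)
    {
        void *data = cache_lookup(dev, block);  /* opened recently? */
        if (data != NULL)
            return data;               /* served from a kernel buffer */
        return driver_read(dev, block);  /* full path through the driver */
    }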
STREAMS
STREAMS is a mechanism from UNIX System V. A stream is a full-duplex communication channel between a process and a device, meaning it is a connection that can send data in both directions simultaneously. It consists of a stream head, a driver end, and a number of modules in between (zero is a number!). Each of these has a read queue and a write queue, and communication between the queues is done by message passing. The process communicates with the stream head, while the device driver communicates with the driver end. The modules may be used by several streams, and hence several processes may communicate with the same driver and vice versa.
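As a usage example, the C sketch below shows how a process on a System V system can push a module onto a stream with the I_PUSH ioctl. The device path /dev/term/a and the module name ldterm are Solaris-flavored assumptions, and <stropts.h> exists only on systems that actually implement STREAMS (it is obsolescent in POSIX and absent from modern Linux).

    #include <fcntl.h>
    #include <stropts.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/term/a", O_RDWR);  /* hypothetical device */
        if (fd < 0)
            return 1;
        /* Insert a line-discipline module between head and driver end. */
        if (ioctl(fd, I_PUSH, "ldterm") < 0) {
            close(fd);
            return 1;
        }
        /* ... read()/write() now pass through the pushed module ... */
        ioctl(fd, I_POP, 0);           /* remove the topmost module again */
        close(fd);
        return 0;
    }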
Performance
I/O greatly affects the performance of a system, because it puts a lot of strain on the CPU and the memory bus. The CPU has to execute device-driver code and schedule processes, which results in context switching, and the memory bus gets crowded by the data copied between the device controller and the kernel buffer, and again between the kernel buffer and the user-space memory of the process. To increase performance, an OS designer must therefore try to reduce the number of context switches and the amount of data being copied.
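As one example of reducing copying, a process can map a file into its address space with mmap() instead of reading it with read(), which avoids the extra copy from the kernel buffer into a separate user-space buffer. This sketch uses only standard POSIX calls; the file name data.bin is a placeholder, and error handling is kept minimal.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);   /* hypothetical file */
        if (fd < 0)
            return 1;
        struct stat st;
        if (fstat(fd, &st) < 0)
            return 1;
        /* Map the file: the process reads the kernel's pages directly,
           with no copy into a private user-space buffer. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;
        /* ... use p[0 .. st.st_size - 1] directly ... */
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }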