Create a Page Translation Table That Meets the Requirements of the Virtual Memory System

Section 1

  1. Create a page translation table that meets the requirements of the virtual memory system shown below. Assume a page size of 2^5, with pages 0 through 7 in logical memory and frames 0 through 17 in physical memory. A sketch of one possible table is shown below.
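The problem leaves the actual page-to-frame assignment open, so the mapping below is an arbitrary one assumed purely for illustration. A minimal Python sketch of the translation, using a page size of 2^5 = 32 words and a hypothetical PAGE_TABLE:

```python
# Hypothetical page table: logical pages 0-7 mapped to an arbitrary
# subset of physical frames 0-17 (assumed for illustration).
PAGE_SIZE = 2 ** 5  # 32 words per page

PAGE_TABLE = {0: 5, 1: 12, 2: 0, 3: 17, 4: 9, 5: 2, 6: 14, 7: 7}

def translate(logical_address: int) -> int:
    """Split a logical address into (page, offset), then rebuild the
    physical address from (frame, offset)."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = PAGE_TABLE[page]  # a page with no entry would raise KeyError
    return frame * PAGE_SIZE + offset

# Word 10 of logical page 3 lands at word 10 of physical frame 17.
assert translate(3 * PAGE_SIZE + 10) == 17 * PAGE_SIZE + 10
```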

  2. A computer system with 16K of main memory has a segmented memory management unit and the following segment table (with all numbers in hexadecimal):

Logical Address   Physical Start   Physical Size
0000              1000             900
1000              1A00             100
2000              1F00             240
3000              0800             140
4000              2C00             320
5000              0000             700
6000              2200             550
7000              3600             458
8000              3000             060
9000              3100             400
A000              4160             320
B000              5340             020
C000              8100             A00
D000              5100             180
E000              3F00             040
F000              7400             240

  a. Indicate the physical memory location corresponding to logical address C24A.

C24A – C000 = 024A

8100 + 024A = 834A (physical address)

  b. Indicate the physical memory location corresponding to logical address 5A02.

5A02 – 5000 = 0A02

0000 + 0A02 = 0A02 (physical address)

Note, however, that the offset 0A02 exceeds the segment's size of 700, so this reference actually falls outside the segment and would cause an address trap; the arithmetic shows where the address would land if the bound were ignored.

  c. Indicate the logical address corresponding to physical memory location 4200.

4200 – 4160 = 00A0

A000 + 00A0 = A0A0
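Both directions of this translation can be checked mechanically. A minimal Python sketch built from the segment table above (the function names are my own):

```python
# Segment table from the problem: logical base -> (physical start, size),
# all values hexadecimal.
SEGMENT_TABLE = {
    0x0000: (0x1000, 0x900), 0x1000: (0x1A00, 0x100),
    0x2000: (0x1F00, 0x240), 0x3000: (0x0800, 0x140),
    0x4000: (0x2C00, 0x320), 0x5000: (0x0000, 0x700),
    0x6000: (0x2200, 0x550), 0x7000: (0x3600, 0x458),
    0x8000: (0x3000, 0x060), 0x9000: (0x3100, 0x400),
    0xA000: (0x4160, 0x320), 0xB000: (0x5340, 0x020),
    0xC000: (0x8100, 0xA00), 0xD000: (0x5100, 0x180),
    0xE000: (0x3F00, 0x040), 0xF000: (0x7400, 0x240),
}

def to_physical(logical: int) -> int:
    """Top hex digit selects the segment; the rest is the offset."""
    base = logical & 0xF000
    offset = logical - base
    start, size = SEGMENT_TABLE[base]
    if offset >= size:
        raise ValueError(f"offset {offset:X} outside segment {base:X}")
    return start + offset

def to_logical(physical: int) -> int:
    """Reverse lookup: find the segment whose physical range holds the address."""
    for base, (start, size) in SEGMENT_TABLE.items():
        if start <= physical < start + size:
            return base + (physical - start)
    raise ValueError(f"{physical:X} is not mapped")

assert to_physical(0xC24A) == 0x834A
assert to_logical(0x4200) == 0xA0A0
```

With the bounds check in place, to_physical(0x5A02) raises an error rather than returning 0A02, matching the caveat on the second question.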

  3. Create a numerical example similar to the one Mano presents on pp. 479-482, using your own numbers.

You may stick with Mano's basic parameters (number of segments, page size, word size, memory size, and block size), but I think you will get more out of this exercise if you also try different numbers for those parameters.

24-bit logical address

4-bit segment number (16 possible segments)

12-bit page number (4096 possible pages)

8-bit word field (256 possible words)

24-bit physical address

16-bit block number (65536 possible blocks)

8-bit word

Each page in logical memory is mapped to a block in physical memory, and both logical and physical addresses use 24 bits.

For a program that requires 12 pages of memory and starts at segment D (13) with pages 0C8 through 0D3 (200 through 211 decimal), the logical address range is D0C800 to D0D3FF.
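The range can be verified by packing the segment, page, and word fields. A minimal Python sketch under the 4/12/8-bit format above (the function name is my own):

```python
SEG_BITS, PAGE_BITS, WORD_BITS = 4, 12, 8

def logical_address(segment: int, page: int, word: int) -> int:
    """Pack segment | page | word into a 24-bit logical address."""
    return (segment << (PAGE_BITS + WORD_BITS)) | (page << WORD_BITS) | word

first = logical_address(0xD, 200, 0x00)  # first word of page 200
last = logical_address(0xD, 211, 0xFF)   # last word of page 211

assert f"{first:06X}" == "D0C800"
assert f"{last:06X}" == "D0D3FF"
```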

Section 2

Explain a few of the wires in Fig 11-18, pg 419, in Mano.

Try to find information about a more modern I/O processor and give a short description of what you found, including citations.

Design a backup strategy for a computer system. One option is to use external disks, which cost $150 for each 500 GB drive. Another option is to buy a tape drive for $2500 and 400 GB tapes for $50 each. A typical backup strategy is to have two sets of backup media on-site, with backups alternately written to them so that if the system fails while making a backup, the previous version is still intact. There's also a third set kept off-site, with the off-site set periodically swapped with an on-site set.

  a. Assume you have 1 TB of data to back up. How much would a disk backup system cost?

For 1 TB of data, two external drives are required per set, at a price of $300 per set. This makes the total $900 for the two backup sets on-site and one off-site.

  b. How much would a tape backup system cost for 1 TB?

The tape drive is $2500, plus the necessary tapes for the three backup sets. Each set requires three tapes (1 TB at 400 GB per tape), for a total of $150 per set. This makes the total price $2950.

  c. How large would each backup have to be in order for a tape strategy to be less expensive?

Cost = hardware + #sets × (cost per 100 GB) × (capacity in 100s of GB)

External disks cost $150 per 500 GB, or $30 per 100 GB, so External = 3 × ($30 × c)

Tapes cost $50 per 400 GB, or $12.50 per 100 GB, so Tapes = $2500 + 3 × ($12.50 × c)

Setting the two equal: 3 × ($30 × c) = $2500 + 3 × ($12.50 × c)

c = 47.619 (100s of GB), approximately 4.76 TB

Tapes are less expensive if the data to be backed up exceeds about 4.76 TB.
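The linear model above ignores that drives and tapes are bought in whole units. A small Python check using the problem's prices and rounding up to whole drives and tapes (the helper names are my own); with discrete units the crossover arrives somewhat earlier than the continuous estimate:

```python
import math

def disk_cost(gb: float, sets: int = 3) -> int:
    """External-disk strategy: $150 per 500 GB drive, no fixed hardware."""
    return sets * math.ceil(gb / 500) * 150

def tape_cost(gb: float, sets: int = 3) -> int:
    """Tape strategy: $2500 drive plus $50 per 400 GB tape."""
    return 2500 + sets * math.ceil(gb / 400) * 50

# Scan in 100 GB steps for the first size at which tapes win outright.
for hundreds in range(1, 100):
    gb = hundreds * 100
    if tape_cost(gb) < disk_cost(gb):
        print(f"Tapes cheaper from about {gb / 1000:.1f} TB "
              f"(disks ${disk_cost(gb)}, tapes ${tape_cost(gb)})")
        break
```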

  d. What kind of backup strategy favors tapes?

An incremental backup strategy, which writes out only files as they are updated, favors tapes, since only the changed portion of the data is stored to tape as needed. This also reduces wasted space.

Section 3

Course Goal/Objective

Describe how concepts such as RISC, pipelining, cache memory, and virtual memory have evolved over the past 25 years to improve system performance.

Instructions

In this short research paper, you will investigate the evolution of and current trends in improving system performance with concepts such as RISC, pipelining, cache memory, and virtual memory. In this paper you must, as part of your conclusion, explicitly state the concept or approach that seems most important to you and explain your selection.

A minimum of two references is required for this paper. At least one article should be from a peer-reviewed journal. If you use Web sites other than the article databases provided by the Library in your research, be sure to evaluate the content you find there for authority, accuracy, coverage, and currency.

Format and length

Your paper should be written using APA style. It should be no more than five pages long, but no less than three pages long. The font size should be 12 point, with one-inch margins and double spacing.

One main goal of the RISC (Reduced Instruction Set Computer) processor was to reduce the instruction set, providing simplified instructions. Compared to CISC (Complex Instruction Set Computer), RISC processors require more instructions, and more memory, to complete the same operation, but the overall use of processor cycles is reduced. While the size of the instruction set is reduced, the instruction sequences needed for a given task may still be complicated. Microcoding is a method used within a processor to implement complex operations as sequences of simpler internal micro-operations; RISC designs tend to avoid it in favor of hardwired control.

Each instruction for a RISC processor takes a fixed interval, nominally one processor cycle per pipeline stage. Regulating all instructions to one constant duration is what allows RISC processors to use pipelining. Pipelining allows multiple instructions to be executed at the same time, but it also requires that the information needed to execute each instruction be available when its pipeline stage needs it. The pipeline for a RISC architecture is developed such that both memory access and operations are achieved with equivalent efficiency, which means memory access must be fast enough to keep up with the processor's ability to execute the instructions available to it.

Caches store a small amount of main memory in SRAM. The goal is for the SRAM to contain the subset of main memory that is currently required by the program. If only one level of cache exists, then whenever a portion of the required memory is not loaded into the cache, the program must pull the missing information directly from main memory. High miss rates, or low hit rates, result in slow program execution. To increase program execution speed, additional levels of cache can be added.
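The effect of the hit rate on execution speed can be quantified with the standard average-memory-access-time relation. A small worked sketch in Python; the latency numbers are assumed for illustration, not taken from any source:

```python
def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time: hit time plus miss rate times miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed latencies: 1-cycle cache hit, 100-cycle main-memory penalty.
for miss_rate in (0.02, 0.10, 0.30):
    print(f"miss rate {miss_rate:.0%}: {amat(1, miss_rate, 100):.0f} cycles on average")
```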

Cache stores not just data but also instructions, and there exist multiple methods of mapping main memory to the cache. Direct mapping can cause repeated misses because many memory addresses map to the same cache line, so a conflicting access forces the entire line to be reloaded. Fully associative and set-associative caches break the cache into parts so that a miss means only a portion of the cache must be reloaded. This reduces the number of times a cache line must be reloaded, but it does not address the issue that a unified cache holds both data and instructions. As a result, a data miss can mean that the next instruction must wait for the cache to be reloaded to get the data, followed by another cache load to get the next instruction. This process is known as thrashing and can add a significant amount of time to the execution of a set of instructions. To address this issue, data and instructions are separated in the cache, so that a miss on the data cache will not prevent the next instruction from executing in a pipelined architecture.
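The conflict behavior described above can be made concrete with a toy simulation. A minimal Python sketch of a direct-mapped cache (line size, line count, and access pattern all assumed), in which two addresses that map to the same line keep evicting each other:

```python
LINE_SIZE = 16  # bytes per cache line (assumed)
NUM_LINES = 64  # lines in the cache (assumed)

cache = {}      # line index -> tag currently stored there
hits = misses = 0

def access(address: int) -> None:
    """Direct-mapped lookup: the line index is fixed by the address."""
    global hits, misses
    line = (address // LINE_SIZE) % NUM_LINES
    tag = address // (LINE_SIZE * NUM_LINES)
    if cache.get(line) == tag:
        hits += 1
    else:
        misses += 1
        cache[line] = tag  # evict whatever was in that line

# Two addresses exactly NUM_LINES * LINE_SIZE apart share a line,
# so alternating between them misses every single time (thrashing).
for _ in range(8):
    access(0x0000)
    access(0x0000 + NUM_LINES * LINE_SIZE)

print(f"hits={hits}, misses={misses}")  # hits=0, misses=16
```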

Virtual memory gives each program memory addresses that are relative to that program. This allows multiple programs to execute without the addresses having to be de-conflicted while coding the programs. Each program's relative memory addresses can start at 0, regardless of the locations used in physical memory. To perform these translations, page tables are used. These tables use the program's virtual memory address to compute the address in physical memory. Once the physical address is determined, the instruction or data can be fetched from main memory for decoding and execution by the processor.

While cache layering has improved the availability of instructions and data to the processor, the layering process can still introduce processing delays. In the worst cases of cache usage, avoiding cache altogether can reduce those delays. By increasing the number of memory banks, multiple requests can be sent to main memory such that while one request is being completed another can be initiated. Increasing the width of the memory allows more data and instructions to be sent from main memory with each request. Both techniques avoid the storage and retrieval of data and instructions in SRAM.
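The overlap provided by multiple banks can be illustrated with a short sketch. A minimal Python example, with the bank count and per-request busy time assumed, showing consecutive addresses landing in different banks so their service times overlap:

```python
NUM_BANKS = 4   # assumed
BANK_BUSY = 6   # cycles a bank is occupied per request (assumed)

bank_free_at = [0] * NUM_BANKS  # cycle at which each bank is next free

def issue(address: int, cycle: int) -> int:
    """Send a request to the bank holding this address; return finish time."""
    bank = address % NUM_BANKS
    start = max(cycle, bank_free_at[bank])
    bank_free_at[bank] = start + BANK_BUSY
    return start + BANK_BUSY

# Sequential addresses hit different banks, so one request can start
# while the previous one is still being serviced.
for i, addr in enumerate(range(8)):
    print(f"addr {addr} -> bank {addr % NUM_BANKS}, done at cycle {issue(addr, i)}")
```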

The cost of implementing such a direct-to-memory approach increases as its complexity increases. While the speed gains are real, the added complexity is significant in terms of cost and processor layout. For this reason, the goal of optimizing for both operations and memory access must be maintained. If the pipeline architecture is improved for memory access or for operation execution such that one exceeds the other, complexity and cost increase without actual gains in overall computing performance.

A correctly designed pipeline achieves parallelism across instructions. Instruction execution is divided into five stages: instruction fetch, instruction decode, execute, memory access, and write back. Once one stage is finished for an instruction, that stage becomes open to the next instruction, such that all stages of instruction execution are occupied at all times. This means the resources available to the RISC processor are fully utilized while programs execute.
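The five-stage overlap can be visualized in a few lines. A minimal Python sketch of an idealized pipeline with no stalls (stage abbreviations follow the list above):

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]  # fetch, decode, execute, memory, write back
NUM_INSTRUCTIONS = 4

# In an ideal pipeline, instruction i occupies stage s during cycle i + s,
# so a new instruction finishes every cycle once the pipeline is full.
for cycle in range(NUM_INSTRUCTIONS + len(STAGES) - 1):
    active = [
        f"I{i}:{STAGES[cycle - i]}"
        for i in range(NUM_INSTRUCTIONS)
        if 0 <= cycle - i < len(STAGES)
    ]
    print(f"cycle {cycle}: " + "  ".join(active))
```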

The register-to-register operation of the RISC processor is also referred to as a load/store architecture. An additional benefit of the load/store architecture is that data already loaded into a register is available for subsequent use by another operation. This reduces memory access requirements and can speed overall program execution.

With all of these performance enhancements, a program's performance depends less on the hardware architecture and instruction microcoding and more on the organization of the program's code. Using efficient instructions that take into account the register-to-register operation of the RISC architecture will lead to a more efficient program with faster execution times. While such instruction coding can be complicated for a human to write, it can be easier for a compiler to generate instructions for the reduced instruction set implemented by the processor.

References

Severance, C. (1998). High performance computing. Retrieved from:

Mount Holyoke College. (n.d.). Pipeline topic notes. Retrieved from:

Oklobdzija, V. G. (1999). Reduced instruction set computers. Retrieved from:

What is the RISC in computer architecture? (n.d.). Retrieved from: