MC120203263 M.AAMIR LATIF

CS 614 ASSIGNMENT 3

Question 1: [5 Marks]

Use Amdahl's Law (mentioned below) to prove that the speedup does not remain the same when the fraction of sequential code and the number of processors (on which this code is executed) are both doubled.

Amdahl's law can be used to calculate how much a computation can be sped up by running part of it in parallel. Amdahl's law is named after Gene Amdahl who presented the law in 1967. Most developers working with parallel or concurrent systems have an intuitive feel for potential speedup, even without knowing Amdahl's law. Regardless, Amdahl's law may still be useful to know.

Amdahl's Law gives the speedup as 1 / (f + (1 - f)/N), where f is the fraction of the code that must run sequentially and N is the number of processors.

Initial fraction of code that must be executed sequentially: f = 10% = 0.1

Initial number of processors: N = 100

Speedup = 1 / (0.1 + (1 - 0.1)/100)
        = 1 / (0.1 + 0.9/100)
        = 1 / 0.109
        ≈ 9.17

Now double both the sequential fraction and the number of processors:

f = 20% = 0.2

N = 200

Speedup = 1 / (0.2 + (1 - 0.2)/200)
        = 1 / (0.2 + 0.8/200)
        = 1 / 0.204
        ≈ 4.90

So when the sequential fraction and the processor count are both doubled, the speedup does not remain the same: it drops from 9.17 to 4.90.

Moreover, increasing the sequential code and the number of processors increases cost.
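The two calculations above can be checked with a short sketch (the function name here is my own, not part of any library):

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: f = sequential fraction, n = number of processors."""
    return 1.0 / (f + (1.0 - f) / n)

# Initial case: 10% sequential code on 100 processors -> ~9.17x.
initial = amdahl_speedup(0.10, 100)

# Doubled case: 20% sequential code on 200 processors -> ~4.90x.
doubled = amdahl_speedup(0.20, 200)

print(round(initial, 2), round(doubled, 2))
```

The drop confirms that the speedup is dominated by the sequential fraction: doubling N cannot compensate for doubling f.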

The larger the variety of core counts you test, the better, but you should at least test with a single CPU core and with all available cores. If possible, test with as many combinations as you can (so with an eight-core CPU, test with 1, 2, 3, 4, 5, 6, 7, and 8 cores).

No. of Cores   Action Time (s)   Actual Speedup   Amdahl's Law Speedup (97% parallel fraction)
1              645.4             1.00              1.00
2              328.3             1.97              1.95
3              230.0             2.80              2.80
4              172.0             3.75              3.67
5              140.3             4.60              4.50
6              117.5             5.50              5.20
7              108.0             6.00              5.90
8              97.8              6.60              6.60
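The predicted column above can be approximately reproduced (to about one decimal place) by applying Amdahl's Law with a 97% parallel fraction, i.e. a 3% sequential fraction (the function name is mine):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Predicted speedup when `parallel_fraction` of the work scales with core count."""
    f = 1.0 - parallel_fraction          # sequential fraction
    return 1.0 / (f + parallel_fraction / cores)

# Predicted speedups for 1..8 cores at a 97% parallel fraction.
for cores in range(1, 9):
    print(cores, round(amdahl_speedup(0.97, cores), 2))
```

Note how the predicted speedup at 8 cores (about 6.6x) already falls well short of the ideal 8x, matching the measured column.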

Question 2: [5 Marks]

Suppose we are performing 20 tasks sequentially on a machine, each task taking 5 seconds. Now we plan to divide each task into 5 sub-tasks and perform these tasks using pipelining.

Pipelining is an implementation technique where multiple instructions are overlapped in execution. The computer pipeline is divided in stages. Each stage completes a part of an instruction in parallel. The stages are connected one to the next to form a pipe - instructions enter at one end, progress through the stages, and exit at the other end.

Pipelining does not decrease the time for individual instruction execution. Instead, it increases instruction throughput. The throughput of the instruction pipeline is determined by how often an instruction exits the pipeline. Because the pipe stages are hooked together, all the stages must be ready to proceed at the same time. We call the time required to move an instruction one step further in the pipeline a machine cycle. The length of the machine cycle is determined by the time required for the slowest pipe stage.

The pipeline designer's goal is to balance the length of each pipeline stage. If the stages are perfectly balanced, then the time per instruction on the pipelined machine is equal to:

    (time per instruction on the non-pipelined machine) / (number of pipe stages)

Under these conditions, the speedup from pipelining equals the number of pipe stages. Usually, however, the stages will not be perfectly balanced; besides, the pipelining itself involves some overhead.
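Applying this to the scenario in Question 2 gives the following sketch. It assumes each 5-second task splits into five perfectly balanced 1-second sub-tasks, and uses the standard timing formula for a k-stage pipeline processing n items: (k + n - 1) cycles.

```python
def pipeline_time(stages, tasks, stage_time):
    """Total time for `tasks` items through a `stages`-stage pipeline,
    assuming perfectly balanced stages of `stage_time` seconds each."""
    return (stages + tasks - 1) * stage_time

sequential = 20 * 5                       # 100 seconds without pipelining
pipelined = pipeline_time(5, 20, 1.0)     # (5 + 20 - 1) * 1 = 24 seconds
speedup = sequential / pipelined          # ~4.17x, below the ideal 5x
print(sequential, pipelined, round(speedup, 2))
```

The speedup falls short of the number of stages (5) because of the pipeline fill time: the first task still needs all 5 stages before results start flowing out every cycle.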


Linear pipelines

A linear pipeline processor is a series of processing stages (and memory-access steps) connected one after another, through which data flows in a single direction.

Non-linear pipelines

A non-linear pipeline (also called a dynamic pipeline) can be configured to perform different functions at different times. In a dynamic pipeline there are also feed-forward or feedback connections. A non-linear pipeline also allows very long instruction words.

Pipelining: Limitations

Pipeline parallelism is a good fit for data warehousing (where we are working with lots of data), but it makes no sense for OLTP, because OLTP tasks are not big enough to justify breaking them down into subtasks.

To keep the overhead of breaking a big task into an M-stage pipeline low, there must be efficient synchronization between producer and consumer tasks in the pipeline execution. It is also important to avoid any task for which all data must be processed before moving on to the next step, because this results in pipeline stalls (where we cannot feed the next execution step until all data has been processed through the current execution step).

Cost and drawback

As the assembly-line example shows, pipelining doesn't decrease the time for processing a single datum; it only increases the throughput of the system when processing a stream of data.

Deep ("high") pipelining increases latency: the time required for a signal to propagate through the full pipe.

A pipelined system typically requires more resources (circuit elements, processing units, computer memory, etc.) than one that executes one batch at a time, because its stages cannot reuse the resources of a previous stage. Moreover, pipelining may increase the time it takes for an instruction to finish.