An Introduction to

ATM Networks

Solution Manual

Harry Perros

Copyright 2002, Harry Perros

All rights reserved

Contents

Solutions for the problems in

Chapter 2

Chapter 3

Chapter 4

Chapter 5

Chapter 6

Chapter 7

Chapter 8

Chapter 9

Chapter 10

Chapter 11

Chapter 2. Basic Concepts from Computer Networking

1. Circuit switching involves three phases: circuit establishment, data transfer, and circuit disconnect. In circuit switching, channel capacity is dedicated for the duration of the connection, even when no data is being sent. Circuit switching, therefore, is not a good choice for bursty data. The source emitting the data actively transmits for a period of time, then becomes silent for a period during which it is not transmitting. This cycle of activity and silence repeats until the source completes its transmission. In such cases, the utilization of the circuit-switched connection is low.

Packet switching is appropriate for bursty traffic. Information is sent in packets, and each packet has a header with the destination address. A packet is passed through the network from node to node until it reaches the destination.

2.

a)Stop-and-Wait flow control: U = 0.1849%

b)Flow control with a sliding window of 7: U = 1.2939%

c)Flow Control with a sliding window of 127: U = 23.475%
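These utilization figures can be checked numerically. A minimal Python sketch, assuming the standard model U = W/(1 + 2a); the propagation-to-transmission ratio a (roughly 270) is backed out of the stop-and-wait result, since it is not restated here:

```python
# Sliding-window utilization: U = min(1, W / (1 + 2a)), where a is the
# ratio of propagation time to frame transmission time.
# NOTE: a ~= 270 is an assumption inferred from U = 0.1849% for W = 1.

def utilization(window, a):
    """Fraction of time the link carries data under window flow control."""
    return min(1.0, window / (1 + 2 * a))

a = (1 / 0.001849 - 1) / 2   # back out a from the stop-and-wait result

for w in (1, 7, 127):
    print(f"W = {w:3d}: U = {100 * utilization(w, a):.4f}%")
```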

3.

a)P = 110011, M = 11100011

FCS = 11010

b)P = 110011, M = 1110111100

FCS = 01000

4. Stuffed bit stream = 0111101111100111110100

Output Stream = 01111011111011111100
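The destuffing step can be sketched in Python; the function below applies the standard HDLC rule of deleting the 0 that follows every run of five consecutive 1s:

```python
def destuff(bits):
    """Remove the 0 that was inserted after every run of five 1s."""
    out = []
    ones = 0
    skip_next = False
    for b in bits:
        if skip_next:          # this bit is the stuffed 0: drop it
            skip_next = False
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:          # five consecutive 1s seen
            skip_next = True
            ones = 0
    return "".join(out)

print(destuff("0111101111100111110100"))  # 01111011111011111100
```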

5. Payload = 1500 bytes.

Overhead includes the Flag, Address, Control, FCS and closing Flag fields. Assuming an 8-bit control field and a 16-bit FCS, overhead = 8+8+8+16+8 = 48 bits.

For every 1500 bytes of payload, we have 48/8 = 6 bytes of overhead.

% Bandwidth used for overheads = (6/1506 )*100 = 0.398 %

6. One DS-1 frame consists of 24 voice slots of 8 bits each, plus 1 bit for frame synchronization. Therefore one frame = 24*8+1 = 193 bits.

The DS-1 rate is 1.544Mbps.

Now, out of 193 bits, 1 bit is for frame synchronization; therefore, out of the 1.544 Mbps, the frame synchronization bits account for 8 kbps.

The number of frame synchronization bits per voice channel = 8000/24 = 333.33 bits/s.

Besides these frame synchronization bits, there are also control signal bits per voice channel. In every sixth frame, the least significant bit of each voice slot is used as a control signal bit.

Hence, there is one control bit in every 48 bits of voice. Therefore the number of control bits per voice channel = 64000/48 = 1333.33 bits/s.

The total control signal data rate per voice channel = 333.33 + 1333.33 ≈ 1666.67 bps ≈ 1.67 kbps.
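The arithmetic above can be checked with a few lines of Python:

```python
# Per-voice-channel overhead in a DS-1, following the solution above.
frames_per_sec = 8000                    # one 193-bit frame every 125 us
framing_bits = frames_per_sec / 24       # 8 kbps of framing shared by 24 channels
robbed_bits = 64000 / 48                 # 1 control bit per 48 bits of voice
total = framing_bits + robbed_bits
print(f"{total:.2f} bps per voice channel")
```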

7. In X.25, the virtual circuit numbers assigned when setting up a virtual circuit have local significance. The reason is that local management is much easier than global synchronization: with a simple mapping table between incoming and outgoing virtual circuit numbers, the design becomes very simple and easy to manage. Frame relay and ATM also use this concept of local virtual circuit numbers.
8. Consider the following imaginary IP header:

0 1 0 0 0 1 0 1 / 0 1 0 0 0 1 0 1 / 0 1 1 1 1 0 0 0 / 0 1 0 0 0 0 0 1
0 0 0 0 0 0 0 0 / 0 0 0 0 0 0 0 1 / 0 0 1 0 0 0 0 0 / 0 1 0 1 0 0 0 1
0 0 0 1 1 1 1 1 / 0 0 0 0 0 1 1 0 / 0 0 0 0 0 0 0 0 / 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 1 / 0 1 0 0 1 0 0 0 / 0 0 0 0 1 0 0 0 / 1 0 1 1 1 1 1 1
0 0 0 0 0 1 0 0 / 1 1 0 1 1 0 1 0 / 0 0 0 1 0 0 0 0 / 0 0 0 0 1 0 0 1

Now we will perform 1's complement addition on the 16-bit words of the IP header. Notice that the checksum field is all 0s.

Calculation of checksum:

Bytes 1,2 - 4545

Bytes 3,4 - 7841

Bytes 5,6 - 0001

Bytes 7,8 - 2051

Bytes 9,10 - 1F06

Bytes 11,12 - 0000

Bytes 13,14 - 0D48

Bytes 15,16 - 08BF

Bytes 17,18 - 04DA

Bytes 19,20 - 1009

Sum = 127C8

Final Sum after shifting carry to LSB = 27C9

One’s complement of final sum = D836, which is inserted into the checksum field.
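The checksum computation can be verified with a short Python sketch of one's-complement addition; the carry-folding loop and final complement follow the standard Internet checksum procedure:

```python
def ones_complement_checksum(words):
    """16-bit one's-complement sum of the words, then complement."""
    s = sum(words)
    while s > 0xFFFF:                 # fold carries back into the LSB
        s = (s & 0xFFFF) + (s >> 16)
    return (~s) & 0xFFFF

# The ten 16-bit words of the header above (checksum field = 0x0000).
header_words = [0x4545, 0x7841, 0x0001, 0x2051, 0x1F06,
                0x0000, 0x0D48, 0x08BF, 0x04DA, 0x1009]
print(hex(ones_complement_checksum(header_words)))  # 0xd836
```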

At the receiver, the checksum is computed over the received packet, including the checksum field, in the same manner. If the result of the addition is FFFF, the received packet is assumed to be without error. Now consider a case where an error cannot be detected: if the least significant bit of byte 2 and the least significant bit of byte 10 are both flipped, the computed sum remains the same, since the two changes cancel each other out.

Calculation of checksum at receiver:

Bytes 1,2 - 4544 (bit flipped)

Bytes 3,4 - 7841

Bytes 5,6 - 0001

Bytes 7,8 - 2051

Bytes 9,10 - 1F07 (bit flipped)

Bytes 11,12 - D836 (checksum)

Bytes 13,14 - 0D48

Bytes 15,16 - 08BF

Bytes 17,18 - 04DA

Bytes 19,20 - 1009

Sum = 1FFFE

Final Sum after shifting carry to LSB = FFFF

Hence the error cannot be detected.
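The receiver-side computation, with the two offsetting bit flips and the checksum field included, can be verified the same way:

```python
def fold(s):
    """Fold carries of a one's-complement sum back into 16 bits."""
    while s > 0xFFFF:
        s = (s & 0xFFFF) + (s >> 16)
    return s

# Received words: bytes 2 and 10 each have their LSB flipped, and the
# checksum field now carries 0xD836.
received = [0x4544, 0x7841, 0x0001, 0x2051, 0x1F07,
            0xD836, 0x0D48, 0x08BF, 0x04DA, 0x1009]
print(hex(fold(sum(received))))  # 0xffff: the two flips cancel, so the error goes undetected
```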

9.

a)It is a Class B address

b)Network address = 1001 1000.0000 0001

Host address = 1101 0101.1001 1100

10.

a)Only one subnet can be defined, since the subnet mask 255.255.0.0 masks only the first 2 bytes of the IP address, which constitute the network id in the case of a Class B address. In this case the network itself is the subnet, and vice versa. To define multiple subnets, the subnet mask needs to mask more than 2 bytes of a Class B address.

b)The maximum number of hosts that can be defined per subnet = 2^16 - 2, because the all-1s host address is used for directed broadcast, and the all-0s host address is used for the network id.

Chapter 3. Frame Relay

1. The motivation behind frame relay was the need for new high-speed WANs that could support bursty traffic, the rapid transfer rates imposed by new applications, and the required response times. Some of the main features of frame relay are as follows:

a)Connection oriented

b)Based on packet switching

c)No link-level error or flow control

d)The routing decision, as to which node a frame should be forwarded to, is moved from layer 3 in IP/X.25 to layer 2 in frame relay

e)Lost/discarded frames are recovered by the end user's higher-level protocols

f)Uses a feedback-based congestion control scheme

2. Frame relay was developed for high-speed WANs, offering high transfer rates and low response times. In traditional packet-switching networks such as IP and X.25, the switching decision is made at the network layer and the flow/error control is done at the link layer. The two layers introduce their own encapsulation overheads, and passing a packet from one layer to another requires moving it from one buffer of the computer's memory to another, a time-consuming process. Refer to Fig. 3.1.

In order to achieve its desired goal, frame relay overcomes this overhead. The layer three switching is moved to layer two, error and flow control is done by the end users, thereby providing a more efficient transport mechanism.

3. The total number of DLCI values that can be defined with this header = 2^16.
4. One reason is that if two nodes A and B request a connection to a node C through the frame relay network using the same DLCI value, there will be a conflict at the frame relay node connected to C. Hence it is better for the DLCI value to have local significance.


Also, global synchronization of the DLCI values is not necessary: a local mapping table, which maps an incoming DLCI value to an outgoing DLCI value, is sufficient. If the same DLCI value had to be used end-to-end, it would have to be relayed all the way to the destination node, which could be very far away, incurring a large overhead. Another reason is that globally significant DLCI values would limit the number of connections. These are some reasons why the DLCI value has local significance.

5.

a)Tc = Bc/CIR = 12/128 = 0.0938 s when Bc = 12 Kbits

Tc = 40/128 = 0.3125 s when Bc = 40 Kbits

The time interval Tc increases as Bc increases.

b) The time interval increases with Bc, which means that, with the CIR remaining the same, the node can transmit more bits without being penalized by network policing.

c) One of the parameters negotiated at set-up time is the excess burst size Be. As long as the number of bits transmitted is below the negotiated Bc, the delivery of the data is guaranteed. The parameter Be allows the node to transmit more than Bc. If the number of bits transmitted is between Bc and Bc+Be, the data will be delivered with no guarantees, and its DE bit will be set. If there is congestion at a node in the network, a frame with the DE bit set may be discarded. If the node transmits more than Bc+Be, the frame will definitely be discarded. Hence, there is a mechanism for submitting more than Bc to the network in each time interval.
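The policing decision described above can be sketched as a simple classifier over one interval Tc; the traffic values fed to it below are arbitrary illustrations, not values from the problem:

```python
# Minimal sketch of frame-relay policing over one interval Tc,
# given Bc (committed burst) and Be (excess burst) in bits.
def classify(bits_sent, Bc, Be):
    if bits_sent <= Bc:
        return "deliver (guaranteed)"
    if bits_sent <= Bc + Be:
        return "deliver with DE bit set (may be discarded)"
    return "discard"

# Illustration with Bc = 12 Kbits and an assumed Be = 40 Kbits.
for n in (10_000, 12_000, 30_000, 60_000):
    print(n, "->", classify(n, Bc=12_000, Be=40_000))
```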

6. The forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN) bits are part of a feedback-based congestion control scheme adopted by frame relay. When there is congestion at a node, the node sets the FECN bit in all the outgoing frames. Thus, all the nodes that receive these frames know that an upstream node is congested. When the destination node receives a frame with the FECN bit set, it tries to help alleviate the congestion by reducing its window size or delaying its acknowledgements, thereby slowing down the transmitting device. The receiving node also sets the BECN bit in the frames it sends back to the transmitting node. When the transmitting node receives a frame with the BECN bit set, it decreases its transmission rate. This helps reduce the congestion.

Chapter 4. Main Features of ATM Networks

1. There is no error control between two adjacent ATM nodes. It is not necessary, since the links in the ATM network have a very low error rate. In view of this, the payload of the packet is not protected against transmission errors. However, the header is protected in order to guard against forwarding a packet to the wrong destination. The recovery of a lost packet or a packet that is delivered to its destination with erroneous payload is left to the higher protocol layers.
2. The ATM architecture was designed with a view to transmitting voice, video and data on the same network. Voice and video are real-time applications and require very low delay. If link-layer error and flow control were used, they would lead to retransmissions at the link layer, increasing delay, which cannot be tolerated for these time-sensitive applications. Also, the transmission links are of high quality and they have a very low error rate. These are some of the reasons why there is no data link layer flow control in ATM networks.
3. It is easier to design ATM switches when the ATM cells are of a fixed size. The switches are less complex as no functionality has to be added to take care of cell size. If variable-sized cells were allowed, this would considerably increase the complexity of ATM switches.
4. ATM cell size = 53 bytes = 53*8 = 424 bits. The time to transmit one cell is 424 bits divided by the line rate:

a)T1 (1.544 Mbps): 424/1,544,000 = 274.6 μs

b)OC-3 (155.52 Mbps): 424/155,520,000 = 2.726 μs

c)OC-12 (622.08 Mbps): 424/622,080,000 = 0.682 μs

d)OC-24 (1,244.16 Mbps): 424/1,244,160,000 = 0.341 μs

e)OC-48 (2,488.32 Mbps): 424/2,488,320,000 = 0.170 μs
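The per-cell times follow directly from dividing the 424 bits of a cell by the line rate, as the short Python sketch below shows (the rates are the standard T1 and SONET OC-n line rates):

```python
CELL_BITS = 53 * 8   # 424 bits per ATM cell

# Line rates in bits per second.
rates = {"T1": 1.544e6, "OC-3": 155.52e6, "OC-12": 622.08e6,
         "OC-24": 1244.16e6, "OC-48": 2488.32e6}

for name, rate in rates.items():
    print(f"{name:6s}: {CELL_BITS / rate * 1e6:8.3f} us per cell")
```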

5.

a)Since in correction mode the HEC state machine will only accept error-free cells and correct single-bit errors, the probability of a cell being rejected is:

1 - P(error-free cell) - P(single-bit error) = 1 - (1-p)^40 - 40*p*(1-p)^39

b)Since in detection mode the HEC state machine will only accept error-free cells, the probability of a cell being rejected is:

1 - P(error-free cell) = 1 - (1-p)^40

c)After the first cell is rejected, the HEC state machine will be in detection mode. So, the probability of n successive cells being rejected is:

[1 - (1-p)^40 - 40*p*(1-p)^39] * [1 - (1-p)^40]^(n-1)
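The three expressions can be evaluated numerically; the bit-error probability p = 1e-4 used below is an arbitrary illustration, not a value from the problem:

```python
# Rejection probabilities for the HEC state machine (40-bit cell header).
def p_reject_correction(p):
    """Rejected in correction mode: more than one bit in error."""
    return 1 - (1 - p)**40 - 40 * p * (1 - p)**39

def p_reject_detection(p):
    """Rejected in detection mode: any bit in error."""
    return 1 - (1 - p)**40

def p_n_successive_rejects(p, n):
    """First reject in correction mode, the next n-1 in detection mode."""
    return p_reject_correction(p) * p_reject_detection(p)**(n - 1)

p = 1e-4   # assumed bit-error probability, for illustration only
print(p_reject_correction(p), p_reject_detection(p), p_n_successive_rejects(p, 3))
```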

d)Define:

a0 = (1-p)^40 …………… Prob. the header is error-free

a1 = 40p(1-p)^39 ………. Prob. of a 1-bit error in the header

Let P(n) be the probability that n successive cells have been admitted.

P(1) = a0 + a1

P(2) = a0a0 + a0a1 + a1a0

= a0(a0+a1) + a1a0

= a0P(1) + a1a0

P(3) = a0a0a0 + a0a0a1 + a0a1a0 + a1a0a0 + a1a0a1

= a0P(2) + a1a0P(1)

In general,

P(n) = a0P(n-1) + a1a0P(n-2)
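The recurrence can be computed iteratively; the sketch below takes P(0) = 1 by convention, so that P(2) = a0*P(1) + a1*a0 agrees with the expansion above:

```python
def prob_admitted(n, p):
    """P(n): probability that n successive cells are admitted.

    a0 is the probability of an error-free header, a1 of a single-bit
    error (corrected while in correction mode).
    """
    a0 = (1 - p)**40
    a1 = 40 * p * (1 - p)**39
    P = {0: 1.0, 1: a0 + a1}          # P(0) = 1 by convention
    for k in range(2, n + 1):
        P[k] = a0 * P[k - 1] + a1 * a0 * P[k - 2]
    return P[n]
```

As a sanity check, P(2) computed this way equals the enumeration a0a0 + a0a1 + a1a0 given above.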

6.

a)W = S*(A/T) + (S + R)*(1 - A/T)

b)T = 2000, S = 0, W < 100 msec

R(2000 - A) < 200 sec

A must be less than T.

Suppose 0 <= A < 2000.

Then R = 100 msec to 200 msec.

7.

a)(1-p)^n

b)C(n,m) * p^m * (1-p)^(n-m)

c)Probability that a cell is received in error = p

Probability that all n cells are error-free = y = (1-p)^n

Probability that a packet contains at least one cell in error = 1-y

Time taken to transmit a packet with no errors = nT + D + F = R

Time taken to transmit a packet with one error = nT + D + F + D + nT + D + F = 2R + D

Average time taken to transmit a packet = P(no error)*[time with no error] + P(one error)*[time with one error] + …

= yR + y(1-y)[2R + D] + y(1-y)^2[3R + 2D] + y(1-y)^3[4R + 3D] + …

= yR + y(1-y)2R + y(1-y)^2*3R + y(1-y)^3*4R + … + y(1-y)D + y(1-y)^2*2D + y(1-y)^3*3D + …

Separating into two parts:

Part 1 = yR + y(1-y)2R + y(1-y)^2*3R + y(1-y)^3*4R + … = yR[1 + 2(1-y) + 3(1-y)^2 + 4(1-y)^3 + …]

Part 2 = y(1-y)D[1 + 2(1-y) + 3(1-y)^2 + 4(1-y)^3 + …]

The summation of i*a^(i-1) over i = 1 to infinity gives 1/(1-a)^2 for a < 1; in this case a = 1-y.

Therefore Part 1 = yR/y^2 = R/y, and Part 2 = y(1-y)D/y^2 = D(1-y)/y.

Thus W = Part 1 + Part 2 = R/y + D(1-y)/y

p         (1-p)^n   W
0.1       0.042     934.523
0.08      0.082     468.9
0.06      0.156     236.987
0.04      0.294     116.36
0.02      0.545     53.56
0.01      0.74      34.176
0.008     0.786     31.005
0.006     0.835     28.012
0.004     0.887     25.197
0.002     0.942     22.558
0.001     0.97      21.33
0.0008    0.976     21.076
0.0006    0.982     20.825
0.0004    0.988     20.577
0.0002    0.994     20.331
0.0001    0.997     20.211
0.00008   0.998     20.17
0.00006   0.9982    20.162
0.00004   0.9988    20.138
0.00002   0.9994    20.114
0.00001   0.9997    20.102
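The trend in the table can be reproduced with the formula W = R/y + D(1-y)/y. The constants R ≈ 20.1, D ≈ 20 and n = 30 below are inferred by fitting the tabulated values; they are assumptions, since the problem parameters are not restated here:

```python
# Sketch reproducing the table's trend. R, D and n are assumed values
# inferred from the tabulated results, not given in the problem text.
def W(p, R=20.1, D=20.0, n=30):
    y = (1 - p)**n                 # prob. all n cells arrive error-free
    return R / y + D * (1 - y) / y

for p in (0.1, 0.01, 0.001, 0.0001, 0.00001):
    print(f"p = {p:<8} W = {W(p):.2f}")
```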

d) (Plot of W versus p, drawn from the table above.)

e) The above results show that the time W decreases as the cell loss rate decreases. In ATM the cell loss rate is very low, thanks to high-speed optical links, so link-layer error and flow control are not necessary. Hence there is no data link layer in ATM.

Chapter 5. The ATM Adaptation Layer

1. Unstructured data transfer mode corresponds to 47 bytes of user information per SAR-PDU. DS-1 corresponds to 1.544 Mbps.

1.544 Mbps corresponds to 193,000 bytes/sec.

It will take 47/193,000 sec ≈ 243.5 μs to fill up an SAR-PDU.

2. Suppose we have a CPS-packet with a CID = 00110111.

Below the CPS-packet bit streams are shown:

1) CPS-packet header not split across cells (CID shown): 00110111…

2) The same CPS-packet with its header split across cells (CID shown, 2 bits in the previous cell): 00 | 110111… The 2 bits in the previous cell look like padding; you cannot tell whether they belong to a CPS-packet header.

Without the offset field (OSF), there is no way to know whether the last 2 bits in the first ATM cell are padding or whether they contain bits of the next CPS-packet header. In the worst case, the payload is misrouted to the wrong client.

However, since the CPS-packet header also contains a HEC, the cell will most likely be dropped, since the HEC check will fail. All the following CPS-packets are similarly affected. If the number of multiplexed streams is large, this is a serious problem.

3. In AAL2, the value 0 cannot be used as a CID because it is used to indicate padding.

4.

a) CPS-packet lengths across ATM cells #1 through #4: 20, 27, 21, 26, 9, 20

b) OSF = 0, OSF = 21, OSF = 9

5.

a)Voice is coded at the rate of 32 kbps. The timer goes off every 5 ms.

CPS-packet size = 32,000 bits/s * 0.005 s = 160 bits = 20 bytes

b)Active Period = 400ms

Timer goes off every 5 ms

Therefore, total number of CPS packets/active period = 400/5 = 80 packets.

6.

a)CPS introduces 8 additional bytes, a 4-byte header and a 4-byte trailer.

b)The number of SAR-PDUs required = 1508/44 = 34.27, rounded up to 35.

c)The additional bytes introduced by the SAR in each SAR-PDU are 4 bytes: a 2-byte header and a 2-byte trailer.

d)The maximum user payload in the ATM cell occurs when the cell contains only the user PDU and no CPS-PDU header or trailer, which is 44 bytes. The minimum payload occurs when, along with the data, the cell also contains the padding and the CPS-PDU trailer, which is 44-3-4 = 37 bytes.

7.

a)Considering a user PDU of 1500 bytes

i)

User-PDU + trailer = 1500+1+1+2+4 = 1508 bytes

Padding necessary = 28 bytes to make the CPS-PDU a multiple of 48 bytes.

ii)32 cells

iii) Trailer = 8 bytes

Padding = 28 bytes

ATM Cell headers = 5*32= 160 bytes

Total overhead = 8 + 28 + 5*32 = 196 bytes

b)Considering a user PDU of 1000 bytes

i)The padding necessary in this case is 0 because 1000 + 8 = 1008 is an integral multiple of 48.

ii)21 cells

iii)Trailer = 8 bytes

Padding = 0 bytes

ATM cell headers = 5*21 = 105 bytes

Total overhead = 8 + 105 = 113 bytes.
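The cell counts and overheads for both cases can be computed with a small helper, assuming (as above) the 8-byte AAL5 trailer and padding of the CPS-PDU to a multiple of 48 bytes:

```python
import math

def aal5_overhead(payload_bytes):
    """Cells, padding, and total overhead for one AAL5 CPS-PDU.

    The 8-byte trailer (UU, CPI, Length, CRC) is appended, the PDU is
    padded to a multiple of 48 bytes, then split into 48-byte cells.
    """
    pdu = payload_bytes + 8
    cells = math.ceil(pdu / 48)
    padding = cells * 48 - pdu
    overhead = 8 + padding + 5 * cells     # trailer + pad + cell headers
    return cells, padding, overhead

print(aal5_overhead(1500))   # (32, 28, 196)
print(aal5_overhead(1000))   # (21, 0, 113)
```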

8. Moving the length field from the trailer to the beginning of the CPS-PDU would not solve any problems. The CPS-PDU would still have to be split into ATM cells, and the last ATM cell would still need an indication that it is the last cell of this CPS-PDU. Moreover, moving the length field to the header is a bad idea because an error in the header would cause a loss of synchronization, and all the subsequent data would also be lost. Hence, this is a bad solution.

Chapter 6. ATM Switch Architectures

1. Consider that an ATM cell arrives at input port 0 destined for output port 6 (110).

The cell is routed as follows:

  • Initially, the destination port address (110) is attached in front of the cell.
  • At stage 1, the leading bit is 1, so the cell is routed to the lower outlet and the bit is dropped.
  • At stage 2, the leading bit is again 1, so the cell is routed to the lower outlet and the bit is dropped.
  • At the final stage, the leading bit is 0, so the cell is routed to the upper outlet, which leads to the destination output port 6 (110).
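The per-stage routing decisions can be sketched as follows; 0 selects the upper outlet and 1 the lower outlet, one address bit per stage, most significant bit first:

```python
def banyan_route(dest, stages=3):
    """Self-routing decisions for an 8x8 banyan switch.

    Each stage examines one bit of the destination address, MSB first:
    0 -> upper outlet, 1 -> lower outlet.
    """
    bits = [(dest >> (stages - 1 - s)) & 1 for s in range(stages)]
    return ["lower" if b else "upper" for b in bits]

print(banyan_route(6))   # ['lower', 'lower', 'upper'] for address 110
```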
2. The problem with input buffering switches is that they cause head-of-line blocking. This problem occurs when there is a conflict for the same link between cells at different input ports. When this happens, one of the cells is allowed to use the link; the other cells have to wait at the head of their input queues. The cells queued up behind the blocked cells also have to wait for their turn, even though they may not be competing for the same link. These cells are also blocked. This is head-of-line blocking. Output buffering switches eliminate head-of-line blocking, because they do not have any input buffering. Hence they are preferable to switches with input buffering.
3. In the buffered banyan switch architecture, each switching element has input and output buffers. The switch is operated in a slotted manner, and the duration of a slot is long enough so that a cell can be completely transferred from one buffer to another. Also, all the buffers function so that a full buffer will accept another cell during a slot if the cell at the head of the buffer departs during the same slot.

Upon arrival at the switch, a cell is delayed until the beginning of the next slot. At that time it is forwarded to the corresponding input buffer of the switching element in the 1st stage, if the input buffer is empty. If the buffer is full, the cell is lost. This is the only time when a cell can be lost in a buffered banyan switch. A cell cannot get lost at any other time. The reasons are explained as follows:

  • A cell in the input buffer of a switching element is forwarded to the corresponding output buffer only if there is free space. If there is no free space, the cell is delayed until the next slot. If two cells are destined for the same output buffer, then one cell is chosen at random.
  • The transfer of a cell at the head of the output buffer of a switching element to the input buffer of the switching element in the next stage is controlled by a backpressure mechanism. If the input buffer of the next switching element is free, then the cell is forwarded to the switching element, else the cell waits until the next slot and tries again. Due to the backpressure mechanism, no cell loss can occur within the buffered banyan switch.
  • If an output port becomes hot, i.e., receives a lot of traffic, a bottleneck will build up in the switching element associated with this output port. Due to the backpressure, cell loss can occur at the input ports of the switch, as the bottleneck extends backwards to the input ports.
4. The buffered banyan distribution network is added in front of the buffered banyan switch to minimize queueing at the beginning stages of the switch. The mechanism distributes the traffic offered to the switch randomly over its input ports, which has the effect of minimizing queueing in the switching elements at the beginning stages of the switch. External blocking occurs when two or more cells compete for the same output port at the same time. This mechanism does not help eliminate external blocking, because the distribution network does not look at the destination address of the incoming cells when it distributes the cells over the input ports of the switch.
5. The cells will appear in the order 1,1,2,3,4,5,5,8 at output ports 0 through 7, respectively.

a)This switch architecture is non-blocking because each input port of the switch has its own dedicated link to each of the output ports of the switch. There is no internal or external blocking.