Congestion Prevention

Flow Control

Flow control is aimed at preventing a fast sender from overwhelming a slow receiver.

Flow control can help reduce congestion, but it cannot really solve the congestion problem. For example, suppose we connect a fast sender and a fast receiver (e.g., two Crays) using a 9.6 kbps line:

1. If the two machines use a sliding window protocol, and the window is large, the link will become congested in a hurry.

2. If the window size is small (e.g., 2 packets), the link won't become congested.

Note how the window size limits the total number of packets that can be in transmission at one time.

Flow control can take place at many levels:

  • User process to user process (end-to-end).
  • Host to host. For example, if multiple application connections share a single virtual circuit between two hosts.
  • Router to router. For example, in virtual circuits.

Flow Specifications

Traffic shaping is most effective when the sender, receiver, and subnet all agree to it.

To get agreement, it is necessary to specify the traffic pattern in a precise way using a flow specification.

A flow specification is a data structure that describes both the pattern of the injected traffic and the quality of service desired by the application.

A flow specification can apply either to the packets sent on a virtual circuit, or to a sequence of datagrams sent between a source and a destination (or even to multiple destinations).
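
As a rough illustration, a flow specification might be modelled in Python as a simple record; the fields below are illustrative assumptions (typical traffic-shape and quality-of-service parameters), not a standard layout:

    from dataclasses import dataclass

    @dataclass
    class FlowSpec:
        # Pattern of the injected traffic (illustrative fields)
        max_packet_size: int        # bytes
        sustained_rate: int         # bytes per second
        max_burst_size: int         # bytes allowed above the sustained rate
        # Quality of service desired by the application (illustrative fields)
        acceptable_loss_rate: float # fraction of packets that may be lost
        max_delay_ms: float         # end-to-end delay bound
        max_jitter_ms: float        # delay variation bound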

Congestion Control in Virtual Circuit Subnets

  • One technique that is widely used to keep congestion that has already started from getting worse is admission control.
  • The idea is simple: Once congestion has been signalled, no more virtual circuits are set up until the problem has gone away.
  • Thus, attempts to set up new transport layer connections fail. Letting more people in just makes matters worse.
  • An alternative approach is to allow new virtual circuits but carefully route all new virtual circuits around problem areas.

  • Virtual circuits can also use a form of flow specification.
  • The subnet will typically reserve resources along the path when the circuit is set up.
  • These resources can include table and buffer space in the routers and bandwidth on the lines.
  • This kind of reservation can be done all the time as standard operating procedure, or only when the subnet is congested.
  • A disadvantage of doing it all the time is that it tends to waste resources.
  • The price of the congestion control is unused bandwidth.

Traffic Shaping

Used in ATM subnets

Used to control burstiness of traffic

It is much better to have a constant flow of data than a feast-or-famine situation.

Leaky Bucket

Based on the principle of a bucket of water with a constant drip representing the flow: continuous and regular.

Implementation requires the sender to have an internal output queue.

One cell can be transmitted per clock tick, and consequently the algorithm quietens the network.

If the algorithm is used in a non-ATM network (i.e., with variable-length packets), restrict the number of bytes transmitted per tick rather than the number of packets or any other unit of data.
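
A minimal sketch of the byte-counting variant in Python; the class and parameter names are illustrative, and a real shaper would be driven by a hardware or operating-system clock:

    from collections import deque

    class LeakyBucket:
        """Byte-counting leaky bucket: at most bytes_per_tick bytes leave per clock tick."""

        def __init__(self, bytes_per_tick, queue_limit):
            # bytes_per_tick should be at least the maximum packet size,
            # otherwise an oversized packet would block the queue forever.
            self.bytes_per_tick = bytes_per_tick
            self.queue_limit = queue_limit      # finite bucket
            self.queue = deque()                # the sender's internal output queue

        def enqueue(self, packet):
            """packet is assumed to be a bytes object."""
            if len(self.queue) < self.queue_limit:
                self.queue.append(packet)
            # else: the bucket is full, so the packet is discarded

        def tick(self):
            """Called once per clock tick; returns the packets released this tick."""
            budget = self.bytes_per_tick
            released = []
            while self.queue and len(self.queue[0]) <= budget:
                packet = self.queue.popleft()
                budget -= len(packet)
                released.append(packet)
            return released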

Token Bucket Algorithm

  • The leaky bucket algorithm enforces a rigid output pattern at the average rate.
  • The token bucket algorithm allows the output to speed up, to some maximum, when bursty traffic arrives.
  • Basically, tokens are placed in the bucket at a steady rate.
  • A packet must first take a token and destroy it before it can be transmitted.

The leaky bucket algorithm does not allow idle hosts to save up permission to send large bursts later.

The token bucket algorithm does allow saving, up to the maximum size of the bucket, n. This property means that bursts of up to n packets can be sent at once, allowing some burstiness in the output stream and giving faster response to sudden bursts of input.

  • Token bucket algorithm throws away tokens when the bucket fills up but never discards packets.
  • Leaky bucket algorithm discards packets when the bucket fills up.

The leaky bucket and token bucket algorithms can also be used to smooth traffic between routers

A token bucket regulating a host can make the host stop sending when the rules say it must.

Telling a router to stop sending while its input keeps pouring in may result in lost data.

The implementation of the basic token bucket algorithm is just a variable that counts tokens. The counter is incremented by one every ΔT and decremented by one whenever a packet is sent. When the counter hits zero, no packets may be sent. In the byte-count variant, the counter is incremented by k bytes every ΔT and decremented by the length of each packet sent.
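
A minimal sketch of this counter-based, byte-count variant in Python; the names are illustrative, and tick() is assumed to be called once every ΔT:

    class TokenBucket:
        """Counter-based token bucket (byte-count variant)."""

        def __init__(self, k_bytes_per_tick, bucket_size):
            self.k = k_bytes_per_tick       # tokens (bytes) added every delta-T
            self.bucket_size = bucket_size  # maximum number of saved-up tokens, n
            self.tokens = 0

        def tick(self):
            # Tokens arrive at a steady rate; surplus tokens are thrown away
            # when the bucket is full (packets themselves are never discarded).
            self.tokens = min(self.tokens + self.k, self.bucket_size)

        def may_send(self, packet_length):
            # A packet must take (and destroy) tokens equal to its length
            # before it can be transmitted.
            if packet_length <= self.tokens:
                self.tokens -= packet_length
                return True     # transmit now
            return False        # the host must wait for more tokens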

A potential problem with the token bucket algorithm is that it allows large bursts, even though the maximum burst interval can be regulated.

One way to get smoother traffic is to put a leaky bucket after the token bucket. The rate of the leaky bucket should be higher than the token bucket's rate ρ but lower than the maximum rate of the network.

Load Shedding

When a router becomes overwhelmed by incoming packets, it can simply dump (discard) packets. This is load shedding, and it is another method of congestion control.

The router could dump packets at random, but more likely it will check which application a packet belongs to and make its decision based on that.

Consider that dumping packets that are part of a connection-oriented application, such as ftp, could force re-transmission of packets already in the buffer of the router. Older packets are more valuable here, so dumping newer ones is likely to cause less re-transmission.

For a video application dumping older packets is more acceptable than dumping newer ones.

Dumping newer ones in favour of older ones can cause visual oddities (e.g., people moving backwards in a scene).

Another problem with video or sound transmission is jitter.

Constant transmission rates are much preferable to variable rates.

An average transmission rate can be negotiated.

If a router receives a packet that is ahead of schedule, it will delay that packet until it can transmit it on schedule.

If a router receives a packet that is behind schedule, it will transmit that packet as soon as possible, to attempt to maintain the agreed transmission rate.
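
A minimal sketch of this schedule-based jitter control in Python; the agreed inter-packet interval and the transmit hook are assumptions, and release_due() is assumed to be driven by a timer:

    import heapq

    class JitterSmoother:
        """Hold packets that arrive ahead of schedule; forward late ones immediately."""

        def __init__(self, interval):
            self.interval = interval    # agreed (negotiated) time between packets
            self.next_due = None        # time the next packet is scheduled to leave
            self.held = []              # heap of (release_time, tie_breaker, packet)

        def arrive(self, packet, now):
            if self.next_due is None:
                self.next_due = now
            if now < self.next_due:
                # Ahead of schedule: hold the packet until its slot.
                heapq.heappush(self.held, (self.next_due, id(packet), packet))
            else:
                # Behind schedule: transmit as soon as possible.
                self.transmit(packet)
            # The schedule keeps advancing at the agreed rate.
            self.next_due += self.interval

        def release_due(self, now):
            while self.held and self.held[0][0] <= now:
                _, _, packet = heapq.heappop(self.held)
                self.transmit(packet)

        def transmit(self, packet):
            pass    # hand the packet to the output line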

Choke Packets

  • Choke packets can be used in both virtual circuit and datagram subnets.
  • Routers can monitor the level of congestion around them, and when congestion is present, they can send choke packets to the sender that say "slow down".
  • How can a router measure congestion?
  • A router might estimate the level of congestion by measuring the percentage of buffers in use, line utilisation, or average queue lengths (one such estimate is sketched in code at the end of this section).
  • Advantage:
  • Dynamic. A host sends as much data as it wants; the network informs it when it is sending too much.
  • Disadvantages:

1. Difficult to tune. By how much should a host slow down? That depends on how much traffic the host is sending, how much of the congestion it is responsible for, and the total capacity of the congested region. Such information is not readily available in practice.

2. After receiving a choke packet, the sending host should ignore additional choke packets for a short while because packets currently in transmission may generate additional choke packets. How long? Depends on such dynamic network conditions as delay.
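
A minimal sketch of how a router might combine a smoothed utilisation estimate with choke packets, as described above; the smoothing weight, the threshold, and the packet's source attribute are assumptions:

    class ChokeMonitor:
        """Exponentially smoothed line-utilisation estimate driving choke packets."""

        def __init__(self, history_weight=0.9, threshold=0.8):
            self.history_weight = history_weight    # assumed smoothing factor
            self.threshold = threshold              # assumed congestion threshold
            self.utilisation = 0.0

        def sample(self, instantaneous_utilisation):
            # u_new = a * u_old + (1 - a) * f, where f is the instantaneous
            # line utilisation measured over the last sampling period.
            a = self.history_weight
            self.utilisation = a * self.utilisation + (1 - a) * instantaneous_utilisation

        def on_packet(self, packet, send_choke):
            # When the line is congested, tell this packet's sender to slow down.
            if self.utilisation > self.threshold:
                send_choke(packet.source)   # packet.source is an assumed attribute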

Spanning Tree Bridges

Spanning tree bridges were designed with transparency as a primary goal. A customer should be able to buy a bridge, insert it between two networks, and have everything work correctly with no hardware, software, or configuration changes on either hosts or existing bridges. How do they work?

1. Each bridge maintains a table that maps destination addresses to the outgoing interface. (Analogous to routing tables in routers.)

2. The bridge operates in promiscuous mode, reading every frame on each of its connected LANs, and the forwarding decision is made as follows (sketched in code after this list):

(a) Extract the source and destination address from the frame, and find the corresponding table entries for each address.

(b) If the two table entries point to the same interface, discard the frame. Why? If both entries point to the same interface, both the sender and recipient are on the same local network (as far as we can tell), and the frame doesn't need to be forwarded.

(c) Otherwise, if the two entries are different, send the frame out on the LAN given by the table entry for the destination address.

(d) If the destination is not in the table, flood the frame on all interfaces (except the one on which it arrived). We don't know where the destination is, so let's be conservative and send it everywhere. That way we can be sure that the frame traverses the LAN on which the destination resides.

3. Bridges use backward learning to build their tables. They determine which LAN to use to reach destination X by recording the interface on which frames having source address X arrive.

4. The table is a cache; un-referenced entries are periodically flushed, allowing machines to be moved from one LAN to another.
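
A minimal sketch of steps 1-4 in Python; the interface names, frame attributes, and the send hook are assumptions:

    import time

    class LearningBridge:
        """Backward-learning bridge: the table maps addresses to outgoing interfaces."""

        def __init__(self, interfaces, max_age=300):
            self.interfaces = interfaces    # e.g. ["lan1", "lan2", "lan3"]
            self.table = {}                 # address -> (interface, last_seen)
            self.max_age = max_age          # seconds before an unused entry is flushed

        def frame_arrived(self, frame, in_interface):
            now = time.time()
            # Backward learning: the source must be reachable via the arrival interface.
            self.table[frame.source] = (in_interface, now)

            entry = self.table.get(frame.destination)
            if entry is None:
                # Destination unknown: flood on every other interface.
                self.flood(frame, except_interface=in_interface)
            elif entry[0] == in_interface:
                # Sender and recipient appear to be on the same LAN: discard.
                pass
            else:
                # Forward on the interface recorded for the destination.
                self.send(frame, entry[0])

        def flush_old_entries(self):
            now = time.time()
            self.table = {addr: (iface, seen) for addr, (iface, seen) in self.table.items()
                          if now - seen < self.max_age}

        def flood(self, frame, except_interface):
            for iface in self.interfaces:
                if iface != except_interface:
                    self.send(frame, iface)

        def send(self, frame, interface):
            pass    # hand the frame to the data link layer on that interface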

The above approach works only for tree-structured topologies. Why? Frames will loop forever (there is no time-to-live field in LAN frames) if there are multiple distinct paths between any two bridges. To handle arbitrary topologies, bridges use a special protocol to build a spanning tree:

1. Bridges that are not part of the spanning tree are unused. That is, they are specifically excluded from the tree and do not forward packets. They are available for backup, however, should one of the other bridges or LANs fail.

2. Bridges periodically rebuild their tables. They regularly exchange topology information, allowing them to detect the failure of a bridge or LAN. When a bridge or link that is part of the spanning tree fails, a new spanning tree is constructed.

Advantages:

Easy to use. Just install the bridges; no software changes are needed on hosts.

Disadvantages:

1. Does not support multipath routing. By definition, only the bridges that belong to the spanning tree are used.

2. The path between any two hosts may not be the optimal path. An optimal path may traverse a bridge that is not part of the spanning tree and therefore cannot be used.

3. Broadcast and multicast frames must be flooded in all cases.

Source Routing Bridges

Source routing bridges take a completely opposite approach from spanning tree bridges.

1. They are not transparent. Hosts treat frames sent locally differently from those sent through bridges. Conceptually, the sending host specifies a road map saying which bridges the frame must go through to reach its destination.

2. Each LAN is assigned a 16-bit LAN number, and each bridge on a LAN is assigned a 4-bit bridge number. The numbers must be unique and are set by the network administrator.

3. Each frame carries a source route listing the path the frame is to take. The path consists of a sequence of [LAN number, bridge number] pairs.

4. The sending host (rather than the bridges) is responsible for choosing the source route. The host selects paths by broadcasting (flooding) special discovery frames. A discovery frame includes space for each bridge to add its number to the recorded path.

5. Eventually, a discovery frame reaches the destination host, which returns it to the sender. Each returned frame contains a viable path, and the sending host chooses the shortest one (see the sketch after this list).

6. How many frames are generated? Unfortunately, the discovery process leads to frame explosion. The destination may receive an exponential number of copies of the original frame.
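
A minimal sketch, in Python, of how the sending host might pick a route from the returned discovery frames; the frame representation is an assumption:

    def choose_source_route(returned_frames):
        """Each returned discovery frame carries one viable path: a list of
        (LAN number, bridge number) pairs recorded on the way to the destination."""
        best = None
        for frame in returned_frames:
            path = frame["route"]               # e.g. [(7, 1), (4, 2), (9, 1)]
            if best is None or len(path) < len(best):
                best = path                     # keep the shortest recorded path
        return best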

Advantages:

Uses the optimal route, and can make use of multiple paths to the same destination.

Because paths aren't required to lie along the spanning tree, source routing makes better use of resources.

Disadvantages:

1. Not transparent to hosts; hosts must participate in source routing. This is a significant disadvantage.

2. Installing new bridges is non-trivial. Specifically, a system administrator must assign LAN numbers and bridge numbers. Improper configuration leads to disaster.

3. Each host must detect bridge failure on its own (e.g., using time-outs). With spanning tree bridges, the bridges hold that responsibility, and once they have reconfigured, all hosts start using the new path at the same time.

Not surprisingly, IBM supports source routing bridges, while DEC supports spanning tree bridges. IBM markets token ring networks, while DEC has always been big on Ethernets.

Multi Protocol Routers

Multi Protocol Routers (sometimes referred to as gateways) are packet switches that operate at the network layer (level 3). They are quite often used to connect organisations, i.e., the lines coming in to them are often owned by different entities. These “gateways” are often found at political boundaries and on company LAN boundaries.

Operating at the network level gives Multi Protocol Routers increased flexibility compared to bridges in terms of:

1. Translating addresses between dissimilar networks.

2. Fragmenting large packets for transmission across networks that carry only small maximum packet lengths.

3. Selecting an appropriate path through the subnet.

4. Enforcing policies (e.g., don't forward any local packets off this network).

Because Multi Protocol Routers do more things than bridges, they generally run slower than bridges.

One issue that arises with Multi Protocol Routers is who owns them. Typically, bridges connect LANs of one organisation, and the issue does not arise there. The ownership question is important because someone has to be responsible for the Multi Protocol Router's operation, and dual ownership frequently leads to finger pointing when something goes wrong.

In a connection-oriented internetwork, the end-to-end path is built from concatenated virtual circuits:

1. The sending host opens a virtual circuit, but the circuit goes through gateway hops rather than router hops.

2. Any two neighbouring Multi Protocol Routers at the internetworking level must be connected to a common network.

3. Regular router-based virtual circuits connect neighbouring Multi Protocol Routers on the same physical network.

4. The end-to-end virtual circuit is a concatenation of the individual virtual circuits through each of the networks along the path.

Connectionless internets operate just as connectionless networks do. A host sends a packet to a neighbouring gateway, which forwards it to the next gateway, and so forth. Just as with connectionless networks, Multi Protocol Routers make only a best-effort attempt at delivering the packet.

When a gateway receives a packet, it selects the interface to send the packet out on and encapsulates the packet using the local data link layer format. As a packet moves from gateway to gateway, it is repeatedly encapsulated and un-encapsulated as it travels across each network.

IP Protocol

The goal of IP is to interconnect networks of diverse technologies and create a single, virtual network to which all hosts connect.

Hosts communicate with other hosts by handing datagrams to the IP layer; the sender does not worry about the details of how the networks are actually interconnected.

IP provides unreliable, connectionless delivery service.

IP defines a universal packet called an Internet Datagram

Datagrams contain the following fields:

Version number (4-bits):

  • Including a version number allows a future version of IP to be used alongside the current version, facilitating migration to new protocols.

Header length (4-bits):

  • Length of the datagram header (excluding data) in 32-bit words. The minimum value is 5 (a 20-byte header); the maximum value of 15 gives a maximum header length of 60 bytes. In practice, the length field is used to locate the start of the data portion of the datagram.

Type of Service:

  • Basically, what service is required (fast, error-free, etc.), expressed using a combination of the D, T, and R bits (delay, throughput, and reliability). Commonly not used today.

Total length (16-bits):

  • Total length of the IP datagram (in bytes), including data and header. The maximum datagram size is 65,535 bytes. The size of the data portion of the datagram is the total length minus the size of the header.

Identification:

  • ID of the datagram. All fragments of the same datagram carry the same ID number.

Fragment offset (13-bits), Flags (3-bits), Identification (16-bits):

  • These three fields are used for fragmentation and reassembly. DF means don't fragment; MF means more fragments to come.
  • Gateways are free to fragment datagrams as needed, and hosts are required to reassemble fragments before passing complete datagrams to the higher layer protocols. Each fragment contains a complete copy of the original datagram's header plus some of the data.
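
A minimal sketch of fragmentation in Python, assuming the header is held as a dictionary; IP expresses the fragment offset in units of 8 bytes, so each fragment's payload (except possibly the last) is a multiple of 8:

    def fragment(header, data, mtu_payload):
        """Split data into fragments whose payloads fit in mtu_payload bytes.
        A sketch only: a real implementation would also update the total length,
        recompute the header checksum, and copy options correctly."""
        chunk = (mtu_payload // 8) * 8          # keep offsets on 8-byte boundaries
        assert chunk > 0
        fragments = []
        offset = 0
        while offset < len(data):
            part = data[offset:offset + chunk]
            more = (offset + chunk) < len(data)
            frag_header = dict(header,                 # copy of the original header...
                               fragment_offset=offset // 8,
                               more_fragments=more)    # ...with the MF flag set/cleared
            fragments.append((frag_header, part))
            offset += chunk
        return fragments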

Time-to-live (8-bits):

  • A hopcount that is decremented by each gateway. Should the hopcount reach 0, discard the datagram. Originally, the time-to-live field was intended to reflect real time. In practice, it is now a hopcount. The time-to-live field squashes looping packets.

Protocol (8-bits):

  • What type of data the IP datagram carries (e.g., TCP, UDP, etc.).

Header Checksum (16-bits):

  • A checksum of the IP header (excluding data).
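
A minimal sketch of the standard Internet checksum (the one's-complement sum of the header taken as 16-bit words), assuming the checksum field has been zeroed before the computation:

    def ip_header_checksum(header_bytes):
        """Return the 16-bit one's-complement checksum of the header bytes."""
        if len(header_bytes) % 2:
            header_bytes += b"\x00"             # pad to a whole number of 16-bit words
        total = 0
        for i in range(0, len(header_bytes), 2):
            word = (header_bytes[i] << 8) | header_bytes[i + 1]
            total += word
            total = (total & 0xFFFF) + (total >> 16)    # fold the carry back in
        return ~total & 0xFFFF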

Source address (32-bits):

  • Original sender's address.

Destination address (32-bits):

  • Datagram's ultimate destination.

Options:

Network Byte Order

One problem that arises when interconnecting different machines is that different machines represent integers in different ways:
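
For example, Python's struct module can show the same 32-bit integer laid out in the two common byte orders (network byte order is big-endian):

    import struct

    value = 0x0A0B0C0D

    big_endian    = struct.pack(">I", value)    # bytes 0A 0B 0C 0D (network byte order)
    little_endian = struct.pack("<I", value)    # bytes 0D 0C 0B 0A

    # A little-endian machine that naively reinterprets the big-endian bytes
    # recovers the wrong number:
    wrong = struct.unpack("<I", big_endian)[0]  # 0x0D0C0B0A

    # Packing and unpacking with the same explicit ">" format avoids the problem.
    assert struct.unpack(">I", big_endian)[0] == value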