JOURNAL OF INFORMATION, KNOWLEDGE AND RESEARCH IN COMPUTER ENGINEERING

OPTICAL BURST SWITCHING

1 VISHAL JANI, 2 PANDYA KASHYAP N, 3 DIPTESH PATEL

1,2,3 Shree J.M. Sabva Institute Of Engineering And Technology, Aradhna Saikshnik Sankul, Bhavnagar Road, Botad – 364710 Gujarat State

ABSTRACT: The current Internet is under strain: as the number of users and the variety of applications transported grow steadily at a high rate, the available bandwidth is reaching its limits. Frequent congestion restricts the use of new time-critical applications such as IP telephony, video conferencing and online games. Optical Burst Switching (OBS) is an experimental network technology that enables the construction of very high capacity routers, using optical data paths and electronic control. In OBS, multiple IP packets are aggregated into a burst and transmitted without any buffering at the intermediate nodes. OBS uses cut-through forwarding, as opposed to the store-and-forward approach of packet switching. Bandwidth for the burst is reserved in a one-way process: one channel is reserved for control information, while the others carry the data bursts. There are several signaling schemes for OBS: tell-and-go (TAG), in-band terminator (IBT), just-enough-time (JET), etc.

1.1 The Need For OBS:

The current Internet is under strain: as the number of users and the variety of applications transported grow steadily at a high rate, the available bandwidth is reaching its limits. Frequent congestion restricts the use of new time-critical applications such as IP telephony, video conferencing and online games. Thus, there is an increasing demand not only for bandwidth but also for some form of scalable quality of service support. One possible solution in this domain is optical burst switching (OBS), a concept combining advantages of optical circuit and packet switching. OBS is one of the most important switching technologies for future optical WDM networks and the Internet.

1.2 Basic Concept:

Optical burst switching (OBS) is a technology positioned between wavelength routing (i.e., circuit switching) and optical packet switching. All-optical circuits tend to be inefficient for traffic that has not been statistically multiplexed, while optical packet switching requires practical, cost-effective, and scalable implementations of optical buffering and optical header processing, which are still several years away. OBS is a technical compromise that requires neither optical buffering nor packet-level parsing, and it is more efficient than circuit switching when the sustained traffic volume does not consume a full wavelength. The transmission of each burst is preceded by the transmission of a control packet, whose purpose is to inform each intermediate node of the upcoming data burst so that it can configure its switch fabric to switch the burst to the appropriate output port. An OBS source node does not wait for confirmation that an end-to-end connection has been set up; instead, it starts transmitting a data burst after a delay (referred to as the offset) following the transmission of the control packet. OBS nodes have no buffers; therefore, in case of output port conflict, they may drop bursts.

Optical burst switching is most promising in the sense that it utilizes both proven electronic control processing mechanisms and optical transmission technology. It electronically allocates optical switching system resources ahead of the optical data bursts. To bridge the discrepancy between electronic processing speed and optical transmission speed, optical burst switching transports bursts of large size assembled from smaller packets such as IP packets. To further support future Internet multimedia, mission-critical and real-time applications such as video on demand, telemedicine, and remote learning, optical burst switching needs to support a high quality of service (e.g., low delay and low loss probability). One of the most important quality of service parameters in OBS is the burst loss rate (probability). It is important that high-priority classes of data bursts have a low loss probability even when network resources, such as available wavelengths and optical switch paths in an optical burst-switching node (OBSN), are scarce. One of the main reasons for burst loss is that, after processing the request to reserve resources for an incoming data burst, an OBSN is unable to schedule an output wavelength or a path through the optical switch matrix and has to drop the data burst. Each OBS node can adjust the data burst loss rates for different classes of bursts and satisfy differentiated quality of service requirements with the available resources (a simplified sketch of such class-aware wavelength admission is given below).
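To make the drop-on-contention behavior concrete, the following is a minimal sketch (not taken from this paper) of class-aware wavelength admission at an OBSN: low-priority bursts are admitted only while more than a reserved number of wavelengths remains free, so high-priority bursts see a lower loss probability when resources become scarce. All names and values are illustrative assumptions.

# Hypothetical sketch: class-aware wavelength admission at an OBS node.
# Low-priority bursts are admitted only while more than a reserve threshold
# of wavelengths is still free, so high-priority bursts see a lower loss
# probability when resources become scarce.

class WavelengthPool:
    def __init__(self, num_wavelengths, low_priority_reserve):
        self.free = num_wavelengths          # wavelengths currently idle
        self.reserve = low_priority_reserve  # kept back for high-priority bursts

    def admit(self, priority):
        """Return True if the burst gets a wavelength, False if it is dropped."""
        if self.free == 0:
            return False                     # no wavelength -> burst is dropped
        if priority == "low" and self.free <= self.reserve:
            return False                     # protect the reserve for high priority
        self.free -= 1
        return True

    def release(self):
        """Called when a burst has fully passed through the node."""
        self.free += 1

pool = WavelengthPool(num_wavelengths=8, low_priority_reserve=2)
print(pool.admit("low"))   # True while more than 2 wavelengths are free
print(pool.admit("high"))  # high-priority bursts may use the whole pool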

2.1 Optical Circuit Switching

An optical circuit-switched network requires a dedicated wavelength path for the duration of its connection. In order for a circuit-switched network to operate, a circuit is defined from the start of the connection to the end. This circuit is reserved for this connection only, but becomes available once the connection is terminated. Referring to Figure 2.1, if a connection between points A and B is required, a circuit is set up via S1, S3, S4 and S5. Other routes are possible, allowing for resilience, and it should be noted that the links between the switches may consist of more than one circuit, allowing multiple circuits to be set up.


Figure 2.1: Circuit Switching Network

Circuit switching has three phases: Circuit set up, Data transmission, and Circuit tear down.

During Circuit Set up, a fixed wavelength is reserved on each link along the path from the source to its corresponding destination.

During Data Transmission, data is sent on the dedicated circuit. When distributed control is used for routing, the offset time T between a set-up request and data transmission is at least 2P + delta, where P is the one-way propagation delay and delta is the total processing delay encountered by the set-up request along the path. There is no need for buffering at intermediate nodes, since the circuit is used only for this data at that particular time.
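As a small worked illustration of this bound (the delay values below are assumptions, not measurements from the paper), a path with a one-way propagation delay P of 10 ms and a total set-up processing delay delta of 2.5 ms forces the source to wait at least 2P + delta = 22.5 ms before data transmission can start:

# Illustrative only: delay values are assumed, not taken from the paper.
P_ms = 10.0                      # one-way propagation delay along the path
delta_ms = 2.5                   # total processing delay of the set-up request
T_min_ms = 2 * P_ms + delta_ms   # request travels forward, confirmation travels back
print(T_min_ms)                  # 22.5 ms before the first data bit can be sent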

After all the data has been sent to the destination, Circuit Tear Down occurs. The destination sends an acknowledgment signal to the source, and as a consequence the reserved resources at the nodes are released to be used for another connection.

Figure 2.1.2: Circuit Switching Signaling

2.2. Optical Packet Switching

Packet switching works by sending packets of information along the appropriate route. The router decides the appropriate route when the packet arrives. In packet switching, each packet (a piece of data) carries additional information in its header, rather like the address on an envelope, and each switch in the network (usually called a router) looks at this information and directs the packet onward accordingly. As an example, imagine information being sent from point C in Figure 2.2 with destination D. A packet of information leaves C and is directed by R1 to R3; R3 then directs the packet to R4 and then on to D. However, it may not always occur like this: if during the transfer the link between R1 and R3 becomes slow or is lost, R1 would start sending the packets to R2, R2 would then send them to R5, and so on. In packet switching, the length of each packet L_p can be either fixed or variable, with a minimum of S_min and a maximum of S_max. With a fixed packet length, a burst of size L_b will be broken into smaller packets of the same size. With a variable length, the message will be broken into L_b/S_max packets, and padding is used only if a packet is shorter than S_min. A main feature of packet switching is store-and-forward, meaning that a packet must be completely received by the source and by each intermediate node before it can be forwarded. This causes the packet to experience a delay proportional to L_p at each node and requires a buffer of size at least S_max at each intermediate node of the network.
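The fragmentation rule above can be sketched as follows (hypothetical helper functions with illustrative sizes for S_min and S_max; they are not part of any OBS standard):

# Hypothetical sketch: splitting a burst of length L_b bytes into packets,
# following the fixed-length vs. variable-length rule described above.

def fragment_fixed(burst_len, packet_len):
    """Fixed packet length: every fragment has the same size (last one padded)."""
    num_packets = -(-burst_len // packet_len)        # ceiling division
    return [packet_len] * num_packets

def fragment_variable(burst_len, s_min, s_max):
    """Variable length: fragments of up to s_max bytes; pad only if shorter than s_min."""
    sizes = []
    remaining = burst_len
    while remaining > 0:
        size = min(remaining, s_max)
        remaining -= size
        sizes.append(max(size, s_min))               # pad the tail fragment if needed
    return sizes

print(fragment_fixed(2500, 1000))        # [1000, 1000, 1000] -> 2500 bytes fit in 3 fixed packets
print(fragment_variable(2500, 64, 1000)) # [1000, 1000, 500]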


Figure 2.2: Packet Switching Network

2.3. Optical Burst Switching

In Optical Burst Switching, a control packet is sent first, followed by a burst of data, without waiting for an acknowledgment of connection establishment; this is called a one-way reservation protocol. The main feature of OBS is to switch a whole burst of packets, whose length can range from a single packet to a whole session, using one control packet, resulting in a lower control overhead per data unit. OBS uses out-of-band signaling, and the control packet and the data burst are loosely coupled in time, meaning that they are separated at the source by an offset time that is larger than the total processing time of the control packet along the path. In consequence, this eliminates the need for the data burst to be buffered at any intermediate node just to wait for the control packet to be processed.
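A minimal sketch of how a source might size such an offset, assuming illustrative per-hop control-processing and switch-configuration delays (the function and values below are not from the paper):

# Hypothetical sketch: the offset must cover the control packet's processing
# time at every hop (plus switch configuration), so the burst never has to
# wait for its control packet inside the network.

def minimum_offset(per_hop_processing_us, switch_config_us):
    """per_hop_processing_us: list of control-processing delays, one per hop."""
    return sum(per_hop_processing_us) + switch_config_us

# Example with assumed values: 4 hops, 20 us of control processing each,
# plus 10 us to configure the last switch before the burst arrives.
offset_us = minimum_offset([20, 20, 20, 20], switch_config_us=10)
print(offset_us)   # 90 us -> the source delays the burst by at least this much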

Figure 2.3: Optical Burst Switching Nodes

Another alternative is that an OBS protocol may not use an offset time at the source, but instead requires that the data burst be delayed at each intermediate node by a fixed time that is not shorter than the maximum time needed to process a control packet at that node. To support IP over WDM in OBS, IP software, along with other control software, runs as part of the interface between the network layer and the WDM layer, on top of every optical (WDM) switch. In the WDM layer, a dedicated control wavelength is used to route the control packet. To send data, a control packet is routed from a source to a destination based on the IP address it carries, setting up a connection by configuring all optical switches along the path. Next, the burst is delivered without going through intermediate IP entities, thus reducing the latency as well as the processing at the IP layer. In OBS, the wavelength of a link used by the burst is released as soon as the burst passes through the link, either automatically according to the reservation made or by an explicit release packet. This means that bursts from different sources to different destinations can effectively utilize the bandwidth of the same wavelength on a link in a time-shared, statistically multiplexed manner. If the control packet fails to reserve a wavelength at an intermediate node, the burst is not rerouted; it is dropped. OBS protocols are not all the same; some of them support reliable burst transmission, in which a negative acknowledgment is sent back to the source node, which then retransmits the control packet and the burst. Other OBS protocols are not reliable and do not have such negative acknowledgments.
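The per-node behaviour described above, i.e. reserve a wavelength for exactly the burst's passage and drop the burst if none is free, can be sketched as an interval-based scheduler. This is a simplified illustration under assumed timing units, not the scheduling algorithm of any specific OBS protocol:

# Hypothetical sketch of per-wavelength reservation at an OBS node: each
# wavelength keeps a list of (start, end) reservations; a burst is accepted
# on the first wavelength whose reservations do not overlap the burst's
# arrival interval, and the reservation expires automatically once the
# burst has passed (no explicit release needed).

class NodeScheduler:
    def __init__(self, num_wavelengths):
        self.reservations = [[] for _ in range(num_wavelengths)]

    def reserve(self, arrival, duration):
        """Try to reserve [arrival, arrival + duration) on some wavelength.
        Returns the wavelength index, or None if the burst must be dropped."""
        end = arrival + duration
        for wl, slots in enumerate(self.reservations):
            if all(end <= s or arrival >= e for (s, e) in slots):
                slots.append((arrival, end))
                return wl
        return None   # no wavelength free for that interval -> burst dropped

sched = NodeScheduler(num_wavelengths=2)
print(sched.reserve(arrival=100, duration=50))  # 0
print(sched.reserve(arrival=120, duration=50))  # 1 (overlaps wavelength 0)
print(sched.reserve(arrival=130, duration=10))  # None -> dropped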

2.4. Comparison:

Circuit switching is good for smooth traffic and for quality of service guarantees due to its fixed bandwidth reservation. However, its bandwidth use becomes inefficient for bursty data traffic: the bandwidth is either wasted during low-traffic periods, or too much overhead (e.g., delay) is incurred due to the frequent set-up/release of every connection. The advantage of packet switching is that a packet containing a header (e.g., addresses) and a payload is sent without circuit set-up (delay), and the link wavelengths are statistically shared among packets with different sources and destinations. However, due to the store-and-forward mechanism, every node must process the header of each arriving packet to know where to route it, which makes a buffer at every node necessary. OBS combines the advantages of optical circuit and packet switching. Unlike the circuit-switched approach, it does not need to dedicate a wavelength to each end-to-end connection, owing to the fast release of the wavelength on a link after the burst passes over it. Unlike the packet-switched approach, the burst data does not need to be buffered or processed at the cross-connect, since the OBS mechanism is a cut-through one.

Table 1. Comparison between the Optical Switching Techniques

3.1 Concept of OBS Network Architecture

The basic burst-switching concept is illustrated in Fig. 1. The transmission links carry data on tens or hundreds of wavelength channels, and user data bursts can be dynamically assigned to any of these channels by the OBS routers. One (or possibly several) channel on each link is reserved for control information that is used to control the dynamic assignment of the remaining channels to user data bursts. When an end system has a burst of data to send, an idle channel on the access link is selected and the data burst is sent on that channel. Shortly before the burst transmission begins, a Burst Header Cell (BHC) is sent on the control channel, specifying the channel on which the burst is being transmitted and the destination of the burst. The OBS router, on receiving the BHC, assigns the incoming burst to an idle channel on the outgoing link leading toward the desired destination and establishes a path between the specified channel on the access link and the channel selected to carry the burst. It also forwards the BHC on the control channel of the selected link, after modifying the cell to specify the channel on which the burst is being forwarded. This process is repeated at every router along the path to the destination. The BHC also includes an Offset field, which contains the time between the transmission of the first bit of the BHC and the first bit of the burst, and a Length field specifying the time duration of the burst. The offset and length fields are used to time switching operations in the OBS routers, and the offset field is adjusted by the routers to reflect variations in the processing delays encountered in the routers' control subsystems. If a router does not have an idle channel available at the output port, the burst can be stored in a buffer.
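A hedged sketch of the BHC fields and the per-router offset update described above (the field names and the Python structure are illustrative, not a standardized format):

# Hypothetical sketch: a Burst Header Cell carries the data channel,
# destination, offset and burst length; each router subtracts the control
# processing time it spent from the offset before forwarding the BHC,
# keeping header and burst synchronized.

from dataclasses import dataclass

@dataclass
class BurstHeaderCell:
    destination: str     # address used for the routing lookup
    channel: int         # wavelength channel carrying the data burst
    offset_us: float     # time from first bit of BHC to first bit of burst
    length_us: float     # duration of the burst on the channel

def forward_bhc(bhc, processing_delay_us, outgoing_channel):
    """Update the BHC before sending it on the next link's control channel."""
    bhc.offset_us -= processing_delay_us   # account for time spent in this router
    bhc.channel = outgoing_channel         # burst may be switched to a new channel
    return bhc

bhc = BurstHeaderCell(destination="10.0.0.7", channel=3, offset_us=90.0, length_us=40.0)
forward_bhc(bhc, processing_delay_us=20.0, outgoing_channel=5)
print(bhc)   # offset now 70.0 us, channel 5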

The OBS router architecture consists of a set of Input/Output Modules (IOM) that interface to the external links and a multistage interconnection network of Burst Switch Elements (BSE). The interconnection network uses a Benes topology, which provides parallel paths between any input and output port. A three-stage configuration comprising d-port switch elements can support up to d^2 external links (each carrying many WDM channels). The topology can be extended to 5, 7 or more stages: in general, a (2k-1)-stage configuration of d-port BSEs can support up to d^k ports. For example, a 5-stage network constructed from 8-port BSEs would support 512 ports. If each port carried 256 channels at 10 Gb/s each, the aggregate system capacity would be about 1310 Tb/s.
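A quick arithmetic check of the scaling example above, under the stated assumptions (d-port BSEs, a (2k-1)-stage configuration, 256 channels per port at 10 Gb/s):

# Sketch: verifying the port-count and capacity figures quoted in the text.
d = 8                      # ports per Burst Switch Element
k = 3                      # a (2k-1) = 5-stage configuration
ports = d ** k             # maximum number of external links
channels_per_port = 256
channel_rate_gbps = 10

capacity_tbps = ports * channels_per_port * channel_rate_gbps / 1000
print(ports)          # 512 external links
print(capacity_tbps)  # 1310.72 Tb/s aggregate capacity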

Input IOMs process the arriving BHCs, performing routing lookups and inserting the number of the output IOM into BHCs before passing them on. The BSEs use the output port number to switch the burst through to the proper output. Each of the components that do electronic processing on the cell keeps track of the time spent and updates the offset field in the BHC to maintain synchronization with the burst.

3.2 OBS Fundamentals

Figure 3.2.1: Burst Assembly/Disassembly at the Edge of an OBS Network

In an OBS network, various types of client data are aggregated at the ingress (an edge node) and transmitted as data bursts (Figure 3.2.1(a)), which are later disassembled at the egress node (Figure 3.2.1(b)). During burst assembly/disassembly, the client data is buffered at the edge, where electronic RAM is cheap and abundant.

Figure 3.2.2: Separated Transmission of Data and Control Signals

Figure 3.2.2 depicts the separation of data and control signals within the core of an OBS network. For each data burst, a control packet containing the usual "header" information of a packet, including the burst length, is transmitted on a dedicated control channel. Since a control packet is significantly smaller than a burst, one control channel is sufficient to carry the control packets associated with multiple (e.g., hundreds of) data channels. A control packet goes through O/E/O conversion at each intermediate OBS node and is processed electronically to configure the underlying switching fabric. There is an offset time between a control packet and the corresponding data burst to compensate for the processing/configuration delay. If the offset time is large enough, the data burst is switched all-optically and in a "cut-through" manner, i.e., without being delayed at any intermediate (core) node. In this way, no optical RAM or fiber delay lines (FDLs) are necessary at any intermediate node.

3.3 Assembly algorithms

Usually, assembly algorithms can be classified as

1) Timer-based

2) Burst length-based

In the timer-based scheme, a timer starts at the beginning of each new assembly cycle. After a fixed time T, all the packets that arrived in this period are assembled into a burst.

In the burst length-based scheme, there is a threshold on the (minimum) burst length. A burst is assembled when a newly arriving packet makes the total length of the currently buffered packets exceed the threshold. The timeout value for timer-based schemes should be set carefully: if the value is too large, the packet delay at the edge might be intolerable; if the value is too small, too many small bursts will be generated, resulting in a higher control overhead. While timer-based schemes might result in undesirable burst lengths, burst length-based assembly algorithms do not provide any guarantee on the assembly delay that packets will experience (a simple assembler combining both criteria is sketched below).
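A minimal sketch of an edge assembler that combines the two criteria (the hybrid combination and all timing/size values are illustrative assumptions, not taken from the paper): a burst is released when either the assembly timer expires or the buffered length reaches the threshold, whichever happens first.

# Hypothetical sketch: an edge-node burst assembler that releases a burst
# when either the assembly timer expires (timer-based) or the buffered bytes
# reach the length threshold (burst length-based). Values are illustrative.

class BurstAssembler:
    def __init__(self, timeout_ms, length_threshold_bytes):
        self.timeout = timeout_ms
        self.threshold = length_threshold_bytes
        self.buffer = []            # packets of the burst being assembled
        self.buffered_bytes = 0
        self.cycle_start = None     # time the current assembly cycle began

    def on_packet(self, now_ms, packet_bytes):
        """Add a packet; return the finished burst (list of sizes) or None."""
        if self.cycle_start is None:
            self.cycle_start = now_ms
        self.buffer.append(packet_bytes)
        self.buffered_bytes += packet_bytes
        if (self.buffered_bytes >= self.threshold or
                now_ms - self.cycle_start >= self.timeout):
            return self._release()
        return None

    def _release(self):
        burst, self.buffer = self.buffer, []
        self.buffered_bytes = 0
        self.cycle_start = None
        return burst

asm = BurstAssembler(timeout_ms=5.0, length_threshold_bytes=3000)
print(asm.on_packet(0.0, 1500))   # None: still assembling
print(asm.on_packet(1.0, 1500))   # [1500, 1500]: threshold reached, burst released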

After a burst is generated using the algorithms mentioned above, it is buffered in a queue for an offset time before being transmitted, to give its corresponding control packet enough time to make reservations at the downstream nodes, as shown in Figure 3.2.2. During this offset period, packets may continue to arrive.