JOURNAL OF INFORMATION, KNOWLEDGE AND RESEARCH IN ELECTRONICS AND COMMUNICATION ENGINEERING

NEURAL BASED SCHEDULING ALGORITHM FOR FIXED LENGTH PACKET SWITCHING

1 MISS BHOSALE P. Y., 2 PROF. KORE S. N.

1, 2 Department of Electronics Engineering, Walchand College of Engineering, Vishrambag, Sangli–416 415, Maharashtra, India.

ABSTRACT : In this paper we study the performance of high-speed switches in which the switch fabric operates at a slightly higher speed than the links, i.e. a speeded-up switch. Input-queued switches suffer from a phenomenon called head-of-line (HOL) blocking, which can be overcome by the use of virtual output queuing (VOQ), as described here. We describe a Neural Network scheduling algorithm together with scheduling policies such as random selection. A VOQ switch provides 100% throughput for uniform traffic, yet it is simple to implement in hardware. Simulation results are presented to indicate the performance under uniform Bernoulli i.i.d. traffic. With the VOQ structure, the performance of the switch improves in terms of throughput and delay, and circuit complexity is also reduced.

Key Words : Neural Network, Hopfield Neural Network, Virtual Output Queue, Input Queuing.

ISSN: 0975 –6779| NOV 10 TO OCT 11 | VOLUME – 01, ISSUE - 02 Page 155


1. INTRODUCTION

Packet switching is central to IP routing and Ethernet switching. To accommodate high-speed links, packet-to-cell segmentation is required: variable-length packets are converted into fixed-length cells, and switching is then carried out on the cells. In high-speed packet switching, efforts are being made to develop compact, low-cost and efficient switches. Different architectures have been proposed for packet switches, namely output-queued and shared-memory switches, which excel in throughput and in providing quality of service (QoS). The decade from 1990 to 2000 was spent by researchers developing the output-queued switch with QoS. An output-queued switch provides 100% throughput at the cost of speeding up the switch fabric by a factor of N, which becomes difficult when high-speed switches at Gbps rates are to be designed, since VLSI technology limits the internal switching rate of the fabric. Input-queued switches require less memory bandwidth but can only provide about 58% throughput with a single queue at each input port. In the decade 1995-2005, constant efforts were made to suggest architectures that provide 100% throughput. Virtual output queuing, with N queues per input port, provides 100% throughput.
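The roughly 58% saturation-throughput limit of single-FIFO input queuing (the classical 2 − √2 ≈ 0.586 HOL-blocking result) can be reproduced with a short simulation. The sketch below is not from the paper; the port count, slot count, and random output arbitration are illustrative assumptions. Every input is kept saturated, and each contended output grants one of its competing head-of-line cells at random:

```python
import random

def hol_throughput(n=16, slots=2000, seed=1):
    """Saturated single-FIFO input queuing: every input always has a
    head-of-line (HOL) cell with a uniformly random destination; each
    output grants one of its contending inputs at random per slot."""
    rng = random.Random(seed)
    hol = [rng.randrange(n) for _ in range(n)]   # HOL destination per input
    served = 0
    for _ in range(slots):
        # Group inputs by the output their HOL cell contends for
        contenders = {}
        for i, d in enumerate(hol):
            contenders.setdefault(d, []).append(i)
        # Each contended output serves exactly one winner per slot
        for d, inputs in contenders.items():
            winner = rng.choice(inputs)
            hol[winner] = rng.randrange(n)       # winner reveals a new HOL cell
            served += 1
    return served / (n * slots)
```

For n = 16 the measured throughput lands close to 0.6, approaching 0.586 as n grows, which is the figure the 58% claim above refers to.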

2. NEURAL BASED ALGORITHM

The research method comprises a study of the performance of the Hopfield Neural Network for scheduling the packets.

Figure 1. Block Diagram

3. TERMINOLOGIES USED

We define the following data structures:

1.  Generator Matrix G : G(i, j) = 1 indicates that there is a packet generated at input i destined for output j, and G(i, j) = 0 otherwise.

2.  Backlog Indication Matrix B : B(i, j) = 1 indicates that input i has at least one packet destined for output j, and B(i, j) = 0 otherwise. In other words, B(i, j) = 1 if Q(i, j) > 0.

3.  Qlength Matrix Q : the updated backlog matrix, which is simply the addition of the new generator matrix and the old backlog matrix, Q = G + B.

4.  Scheduled Matrix V : a permutation of the identity matrix.

5.  Transmission Matrix T : T(i, j) = 1 indicates that input i should transmit a packet to output j, and T(i, j) = 0 otherwise.

4. ALGORITHM

The algorithm is coded in MATLAB 7.0, implementing the Hopfield network operation for this problem. The following is a listing of the steps in the program.

Step 1 Initialize activations of all units.

Step 2 Initialize input-output ports.

Step 3 Initialize ι to a small value.

Step 4 Do Step 5 through Step 19 for a thousand iterations.

Step 5 Assign probability of packet generation λ.

Step 6 When λ is less than 0.9, perform Step 7 through Step 18.

Step 7 Generate packets at each input port.

Step 8 Calculate Generator Matrix (G).

Step 9 Calculate the Backlog Indication Matrix (B).

Step 10 Calculate the Qlength Matrix (Q).

Step 11 When the stopping condition is false, perform Step 12 through Step 16.

Step 12 Do Step 13 through Step 15 n^2 times (n is the number of ports).

Step 13 Choose a unit at random.

Step 14 Change the activation of the selected unit by (4).

Step 15 Apply the output function.

Step 16 Test for the stopping condition.

Step 17 Calculate the Transmission Matrix (T).

Step 18 Schedule the selected packets.

Step 19 Subtract the scheduled packets from the Backlog Indication Matrix.
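Putting the steps together, the loop below is a simplified Python sketch of one possible realization. The paper's implementation is MATLAB and uses the Hopfield network of Steps 11-16; here a random conflict-free matching stands in for the HNN, so the schedule selection, port count, load, and slot count are illustrative assumptions only:

```python
import random

def schedule_random_matching(B, rng):
    """Stand-in for the Hopfield network: pick a conflict-free set of
    (input, output) pairs from the backlog indicator B, at most one per
    row and one per column (a sub-permutation matrix)."""
    n = len(B)
    V = [[0] * n for _ in range(n)]
    inputs = list(range(n))
    rng.shuffle(inputs)
    used_outputs = set()
    for i in inputs:
        outs = [j for j in range(n) if B[i][j] and j not in used_outputs]
        if outs:
            j = rng.choice(outs)
            V[i][j] = 1
            used_outputs.add(j)
    return V

def run(n=4, lam=0.5, slots=1000, seed=0):
    rng = random.Random(seed)
    Q = [[0] * n for _ in range(n)]          # per-VOQ queue lengths
    served = 0
    for _ in range(slots):
        # Steps 7-8: Bernoulli arrivals, one cell per input with prob. lam
        for i in range(n):
            if rng.random() < lam:
                Q[i][rng.randrange(n)] += 1
        # Step 9 onward: backlog indicator, then schedule selection
        B = [[1 if Q[i][j] > 0 else 0 for j in range(n)] for i in range(n)]
        V = schedule_random_matching(B, rng)
        # Steps 17-19: transmit the scheduled cells and update the backlog
        for i in range(n):
            for j in range(n):
                if V[i][j]:
                    Q[i][j] -= 1
                    served += 1
    return served / (n * slots)              # throughput
```

At an offered load of 0.5 the sketch sustains a throughput near the arrival rate, which is the behavior the VOQ structure is meant to deliver.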

5. SIMULATION AND RESULTS

Generate Cell: As shown in Figure 1, the input to the generate-cell block is the packets generated with probability λ. This block produces the generator matrix (G).


Qlength Matrix: The generator matrix is added to the backlog indication matrix (B) produced by the backlog-matrix block. The addition of the generator matrix and the backlog indication matrix produces the Qlength matrix.

For the first iteration: Q = G (the backlog is empty).

For the rest of the iterations: Q = G + B.

Hopfield Neural Network

The Qlength matrix is then forwarded to the Hopfield network to generate the scheduled matrix (V), which is a permutation of the identity matrix.

As an example, suppose the Backlog Indication Matrix B is

There are four possible permutations as follows

Transmission Matrix

This scheduled matrix is then multiplied with the backlog indication matrix to produce the transmission matrix (T). This matrix is then forwarded to the output ports.

From the above four possible combinations one is selected depending upon the value of V. Suppose the scheduled matrix is

then the selected transmission matrix will be

The transmission matrix is then subtracted from the previous backlog matrix to update the backlog indication matrix for the next iteration.
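This schedule-mask-update step can be illustrated with a toy fragment. The 2x2 matrices below are made up for the example, and elementwise multiplication is assumed from the description above:

```python
n = 2  # toy size for illustration only
V = [[0, 1], [1, 0]]            # scheduled (permutation) matrix
B = [[1, 1], [1, 0]]            # backlog indication matrix
# Transmission matrix: the schedule masked by the actual backlog
T = [[V[i][j] * B[i][j] for j in range(n)] for i in range(n)]
# Updated backlog for the next iteration
B_next = [[B[i][j] - T[i][j] for j in range(n)] for i in range(n)]
```

Here T comes out as [[0, 1], [1, 0]] and the updated backlog as [[1, 0], [0, 0]]: both scheduled pairs had cells waiting, so both are transmitted and removed from the backlog.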

Table 1 and Table 2 below show the throughput comparison for a switch size of 4 and a switch size of 8 respectively. As the tables show, the throughput improves as we increase the number of queues per port.

Table 1 Throughput comparison for different queuing strategies with switch size = 4

No. of Iterations | Throughput M=1 | Throughput M=2 | Throughput M=4
100               | 0.2504         | 0.49           | 0.85
1000              | 0.2528         | 0.4989         | 0.9552
10000             | 0.2550         | 0.5003         | 0.9858
100000            | 0.2680         | 0.5018         | 0.9991

Table 2 shows the throughput comparison for different queuing strategies under the assumption that the switch size is 8 and the buffer size is equal to the port size.

Table 2 Throughput comparison for different queuing strategies with switch size = 8

Throughput M=1 | Throughput M=2 | Throughput M=4 | Throughput M=6 | Throughput M=8
0.19           | 0.25           | 0.48           | 0.53           | 0.87
0.20           | 0.25           | 0.50           | 0.60           | 0.89
0.22           | 0.26           | 0.51           | 0.61           | 0.93
0.24           | 0.28           | 0.53           | 0.69           | 0.99
To compare the merits of the proposed scheme, we simulated the throughput, mean queuing delay, and cell loss probability for multiple input-queued switches employing the restricted rule. In the simulation, cells are assumed to arrive at the beginning of each time slot and to leave at the end of each time slot. The arrival traffic at each input port is assumed to be homogeneous: it is a Bernoulli process, distributed independently and identically for each input port and uniformly over all output ports. The switch size used is . In Figure 2, the throughput for multiple input-queued switches is plotted over different bifurcation parameters (M). When M = 6 the throughput is close to 0.9, and when M = 8 it is very close to 1.0.
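The arrival model just described can be sketched in a few lines. This is a minimal fragment, not the paper's simulation code; the port count, load, and slot count are arbitrary:

```python
import random

def bernoulli_uniform_slot(n, lam, rng):
    """One time slot of Bernoulli i.i.d. traffic: each input independently
    receives a cell with probability lam; destinations are uniform over the
    n outputs. Returns one destination (or None) per input."""
    return [rng.randrange(n) if rng.random() < lam else None for _ in range(n)]

rng = random.Random(42)
slots = [bernoulli_uniform_slot(8, 0.75, rng) for _ in range(10000)]
arrivals = sum(d is not None for slot in slots for d in slot)
rate = arrivals / (8 * 10000)   # empirical load, should be near 0.75
```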

Figure 2 Average Throughput

In Figure 3 the improved mean queuing delay of the multiple input-queued switch is plotted against input queuing and output queuing. The buffer size is assumed to be infinite for the mean-queuing-delay measurement.

Figure 3 Average Queuing Delay

6. CONCLUSION AND FUTURE WORK

The Internet requires fast switches and routers to handle the increasing congestion. One emerging strategy to achieve this is to merge the strengths of ATM and IP, building IP routers around high-speed switches. The demand for high bandwidth makes it necessary to use input queuing. By using virtual output queuing, the maximum throughput can go up to 100% for independent arrivals, so fair, simple and efficient scheduling algorithms are required to arbitrate access to the switching fabric. In this research we consider the performance of the HNN scheduling algorithm. The queuing model presented here provides a rather general method of analyzing such ATM switches in terms of throughput, mean queue length, mean queuing delay and cell loss probability, given the switch size, queue and buffer sizes, and offered load. Several assumptions are made, such as uniform traffic: cells arrive at each input according to an i.i.d. Bernoulli process, and the output destinations of arriving cells are uniformly distributed over all outputs. To this end, we have introduced the HNN algorithm, as it is simple to implement in hardware and operates at high speed. It gives fair access to the output lines and prevents starvation of the input queues. By proper control of the input queues, it provides 100% throughput and minimum average queuing delay, i.e. quality of service. The results shown here are under uniform i.i.d. Bernoulli traffic.

