Design and Synthesis of a Hardware-Based E-Nose Using a Sensor Network and a Spiking Neural Network

Praveen Kumar R.1, Keerthiga G.2

1Department of ECE, Saveetha Engineering College, Chennai.

2Professor, Department of ECE, Saveetha Engineering College, Chennai.

Abstract - This paper presents the design of a Spiking Neural Network (SNN) chip in Verilog to classify and differentiate odours in an E-nose. The gas sensors used in electronic noses have broad selectivity profiles, mimicking the responses of olfactory receptors in the biological olfactory system. The identification process of an electronic nose runs into a problem when the detected gases contain the same chemical elements: misidentification due to the similarity of their chemical properties is possible, and it can be resolved using a neural network algorithm. To avoid such misidentification, a spiking neural network is introduced. The SNN is designed around the McCulloch-Pitts model, which serves as the threshold unit. Two thresholds are involved: the synaptic gap threshold and the activation function threshold. For low power and reduced chip size, the system is designed in Verilog.

Index Terms — Very Large Scale Integration (VLSI), Spiking neural network, Sub-threshold oscillation.

I. INTRODUCTION

In a portable E-Nose system, learning and classification algorithms play important roles. Since system operation time is usually limited by the sensor response time rather than by data recognition, the calculation speed of the algorithm is not a vital concern, but power consumption is substantially more critical. Biological systems are highly energy efficient, and drawing inspiration from them can help us design a low-power classification algorithm. Furthermore, the E-Nose of the future may be composed of hundreds of thousands of sensors.

Similar to a very-large-scale integration (VLSI) system, a biological system suffers from noise and mismatch; despite this, animals can still complete their tasks. To make an artificial system as reliable as a biological one, many researchers have investigated how biological systems work and have constructed similar systems using artificial neurons that operate with action potentials and other bio-inspired characteristics to perform learning and classification tasks. These neural networks are called spiking neural networks. Implementing an SNN in analog VLSI rather than digital VLSI may reduce the power and silicon area of the chip. For greater mobility and longer battery life, the power and the size of an E-Nose are crucial concerns.

The proposed SNN chip is designed around the McCulloch-Pitts model. This model is used because it supports sub-threshold oscillation, which is exploited to achieve low power consumption.

II. SPIKING NEURAL NETWORK AND SENSOR NETWORK

Spiking neural networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not fire at each propagation cycle (as happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential, an intrinsic quality of the neuron related to its membrane electrical charge, reaches a specific value. When a neuron fires, it generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.

In the context of spiking neural networks, the current activation level is normally considered to be the neuron's state, with incoming spikes pushing this value higher until the neuron either fires or the value decays over time. Various coding methods exist for interpreting the outgoing spike train as a real-valued number, relying either on the frequency of spikes or on the timing between spikes to encode information. This kind of neural network can, in principle, be used for information-processing applications in the same way as traditional artificial neural networks. However, due to their more realistic properties, they can also be used to study the operation of biological neural circuits. Starting from a hypothesis about the topology of a biological neuronal circuit and its function, the electrophysiological recordings of this circuit can be compared with the output of the corresponding spiking artificial neural network simulated on a computer, determining the plausibility of the starting hypothesis.
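To make the membrane-potential mechanism described above concrete, the following is a minimal behavioral sketch of a digital leaky integrate-and-fire neuron in Verilog. It is an illustration only; the module name, signal names, widths, and constants are assumptions and do not describe the chip reported in this paper.

module lif_neuron #(
    parameter V_WIDTH   = 8,
    parameter INCREMENT = 16,   // potential added per incoming spike (assumed value)
    parameter LEAK      = 1,    // potential lost per clock cycle (assumed value)
    parameter THRESHOLD = 100   // firing threshold (assumed value)
) (
    input  wire clk,
    input  wire rst,
    input  wire spike_in,       // incoming spike from a synapse
    output reg  spike_out       // outgoing spike
);
    reg [V_WIDTH-1:0] v_mem;    // membrane potential

    always @(posedge clk) begin
        if (rst) begin
            v_mem     <= 0;
            spike_out <= 1'b0;
        end else if (v_mem >= THRESHOLD) begin
            v_mem     <= 0;     // fire and reset the membrane potential
            spike_out <= 1'b1;
        end else begin
            spike_out <= 1'b0;
            // integrate incoming spikes and apply a constant leak
            if (spike_in)
                v_mem <= v_mem + INCREMENT;
            else if (v_mem >= LEAK)
                v_mem <= v_mem - LEAK;
            else
                v_mem <= 0;
        end
    end
endmodule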

In practice, there is a major difference between the theoretical power of spiking neural networks and what has been demonstrated. They have proved useful in neuroscience, but not (yet) in engineering. Some large scale neural network models have been designed that take advantage of the pulse coding found in spiking neural networks, these networks mostly rely on the principles of reservoir computing. However, the real world application of large scale spiking neural networks has been limited because the increased computational costs associated with simulating realistic neural models have not been justified by commensurate benefits in computational power. As a result there has been little application of large scale spiking neural networks to solve computational tasks of the order and complexity that are commonly addressed using rate coded (second generation) neural networks. In addition it can be difficult to adapt second generation neural network models into real time, spiking neural networks. It is relatively easy to construct a spiking neural network model and observe its dynamics. It is much harder to develop a model with stable behavior that computes a specific function. A sensor network (WSN) of spatially distributedautonomoussensorstomonitorphysical or environmental conditions, such astemperature,sound,pressure, etc. and to cooperatively pass their data through the network to a main location. The more modern networks are bi-directional, also enablingcontrolof sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on.

III. MCCULLOCH-PITTS MODEL

This model is suitable for analyzing the function of neurons when a large number of sensors is used in the E-nose system. It processes the values obtained from the sensor network and compares them with pre-computed values in the processing chip. We therefore take the McCulloch-Pitts model, also known as the linear threshold gate model, as a reference. It was the earliest model proposed for the function of a neuron: a neuron with a set of inputs I1, I2, ..., In and an output y.
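In its standard form (written out here for completeness, with θ denoting the threshold, a symbol not used in the original text), the linear threshold gate fires when the weighted sum of its inputs reaches the threshold:

y = \begin{cases} 1, & \text{if } \sum_{i=1}^{n} W_i I_i \ge \theta \\ 0, & \text{otherwise} \end{cases}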

where W1, W2, ..., Wn are weight values normalized to the range (0, 1) or (-1, 1).

The activation function is then applied to produce the output corresponding to the sensors. The signals generated by an actual sensor network are action-potential spikes, and biological neurons send signals in patterns of spikes rather than through the simple presence or absence of a single spike pulse. For example, the signal could be a continuous stream of pulses with varying frequencies. With this kind of observation in mind, a signal should be treated as continuous with a bounded range.

Fig 1: McCulloch-Pitts model.

Fig 2: Sigmoid function.

Additionally, the slope of the sigmoid function describes the "closeness" of the input to the threshold point.
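For reference, a commonly used form of the sigmoid is given below; the slope parameter λ and threshold θ are notation assumed here rather than taken from the original text:

f(x) = \frac{1}{1 + e^{-\lambda (x - \theta)}}

A larger λ gives a steeper slope, so the sigmoid approaches the hard threshold of the McCulloch-Pitts model.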

The input signal from the sensor network captures external events and passes through the network. These inputs can be of any type, ranging from pulse to square to sine waveforms. In this paper, two pulse inputs are considered; they are counted and transmitted after a certain delay (a sketch of this input stage follows the list below). From the McCulloch-Pitts model, there are two thresholds involved:

1. Synaptic gap threshold, and

2. Activation function threshold.
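Before either threshold is applied, the input stage described above counts the incoming pulses and forwards the count after a delay. The following is a minimal Verilog sketch of such a stage; the module name, port names, and parameter values are assumptions for illustration, not the modules of the reported design.

module pulse_counter #(
    parameter WIDTH = 4,
    parameter DELAY = 8                     // delay in clock cycles before the count is transmitted
) (
    input  wire             clk,
    input  wire             rst,
    input  wire             pulse_in,       // spike/pulse from the sensor network
    output reg  [WIDTH-1:0] count_out,      // delayed pulse count
    output reg              count_valid
);
    reg [WIDTH-1:0] count;
    reg [7:0]       delay_cnt;              // assumes DELAY < 256

    always @(posedge clk) begin
        if (rst) begin
            count       <= 0;
            delay_cnt   <= 0;
            count_out   <= 0;
            count_valid <= 1'b0;
        end else if (delay_cnt == DELAY) begin
            count_out   <= count;           // transmit the count after DELAY cycles
            count_valid <= 1'b1;
            count       <= pulse_in ? 1'b1 : 1'b0;   // restart, keeping a pulse arriving this cycle
            delay_cnt   <= 0;
        end else begin
            count_valid <= 1'b0;
            delay_cnt   <= delay_cnt + 1'b1;
            if (pulse_in)
                count <= count + 1'b1;      // accumulate incoming pulses
        end
    end
endmodule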

The synaptic gap threshold is modeled as a weight applied through an AND gate, and the activation function threshold is modeled as a comparator, as shown in Figs. 3 and 4.

Fig 3: Synaptic gap threshold model.

The synaptic weight is modified according to the STDP learning rule proposed in [1]. In standard STDP, a pre-synaptic spike that precedes a post-synaptic spike causes long-term potentiation (LTP), while the reverse ordering causes long-term depression (LTD), and the amount of weight change decays exponentially with the timing difference between the pre-synaptic and post-synaptic spikes. However, in the STDP rule proposed in [1], the synaptic weight is updated only when a pre-synaptic spike occurs; a post-synaptic spike by itself does not modify the synaptic weight.

Fig 4: Activation function threshold model.

Whether a pre-synaptic spike causes LTP or LTD is determined by the soma voltage of the post-synaptic neuron at the moment the pre-synaptic spike is produced. When the soma voltage of the post-synaptic neuron is close to the threshold, a pre-synaptic spike causes LTP; otherwise it causes LTD. To change the synaptic weight, the pre-synaptic spike and the post-synaptic spike must occur within a specific period, called the STDP window. Most circuits fix the STDP window and neglect LTD when the firing rate of the post-synaptic neuron is large enough. Instead of spending time generating LTD and subsequently neglecting it, this paper changes the STDP window according to the firing rate of the post-synaptic neuron, achieving similar learning performance with an improved training speed. The proposed system is based on reference [1].
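The following is a minimal behavioral sketch of this pre-spike-driven weight update in Verilog. The signal names, widths, and the way the rate-adapted STDP window is exposed (as a simple window_open flag) are assumptions for illustration and are not taken from the circuit in [1].

module stdp_update #(
    parameter W_WIDTH = 8,
    parameter V_WIDTH = 8
) (
    input  wire               clk,
    input  wire               rst,
    input  wire               pre_spike,    // pre-synaptic spike
    input  wire               window_open,  // high while the (rate-adapted) STDP window is open
    input  wire [V_WIDTH-1:0] v_soma,       // post-synaptic soma voltage
    input  wire [V_WIDTH-1:0] ltp_thresh,   // "close to threshold" boundary
    output reg  [W_WIDTH-1:0] weight
);
    localparam [W_WIDTH-1:0] DW = 1;        // weight step per update

    always @(posedge clk) begin
        if (rst)
            weight <= {1'b1, {(W_WIDTH-1){1'b0}}};   // start at mid-range
        else if (pre_spike && window_open) begin
            // LTP when the post-synaptic soma voltage is near threshold, LTD otherwise;
            // saturate so the weight stays within its fixed-point range.
            if (v_soma >= ltp_thresh) begin
                if (weight <= {W_WIDTH{1'b1}} - DW)
                    weight <= weight + DW;
            end else begin
                if (weight >= DW)
                    weight <= weight - DW;
            end
        end
        // Post-synaptic spikes alone do not modify the weight, matching the rule above.
    end
endmodule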

IV. EXPERIMENTAL RESULTS

The Spiking Neural Network is designed using the McCulloch-Pitts model. Two types of threshold are involved in this Spiking Neural Network:

1.  Synaptic gap threshold.

2.  Activation function threshold.

In the Verilog code, a function has to be implemented for each of these thresholds: the synaptic gap threshold is realized as an adder, and the activation function threshold as a comparator. A sketch of this structure is given below.
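The following is a minimal Verilog sketch of this adder-plus-comparator neuron structure for two synaptic inputs. The module name, port names, and widths are illustrative assumptions and do not reproduce the exact modules of the synthesized chip.

module snn_neuron #(
    parameter IN_WIDTH  = 4,
    parameter SUM_WIDTH = 6
) (
    input  wire                 clk,
    input  wire                 rst,
    input  wire [IN_WIDTH-1:0]  syn_in0,     // weighted pulse count from synapse 0
    input  wire [IN_WIDTH-1:0]  syn_in1,     // weighted pulse count from synapse 1
    input  wire [SUM_WIDTH-1:0] act_thresh,  // activation function threshold
    output reg                  spike_out
);
    // Synaptic gap threshold stage: adder accumulating the weighted inputs.
    wire [SUM_WIDTH-1:0] sum = syn_in0 + syn_in1;

    // Activation function threshold stage: comparator producing the output spike.
    always @(posedge clk) begin
        if (rst)
            spike_out <= 1'b0;
        else
            spike_out <= (sum >= act_thresh);
    end
endmodule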

Fig 5: Blocks of SNN chip

The developed SNN chip also produces repeatable responses in the measurement of three beverages using different sensor batches, confirming its reproducibility. The developed E-nose is also able to produce distinct patterns for different samples. The patterns produced by the SNN demonstrate that the E-nose has good discriminative ability, which is an important characteristic. Based on these results, we conclude that the developed SNN is a reliable analytical building block for the design of an E-nose.

V. CONCLUSION

In the proposed system, a Spiking Neural Network is implemented and simulated in Verilog. The SNN is designed using the McCulloch-Pitts model, which involves two threshold values: the synaptic gap threshold is realized as an adder, and the activation function threshold as a comparator. With this model, the numbers of adders and multipliers used in the Spiking Neural Network chip are reduced to 21 and 7, respectively. The Spiking Neural Network proposed in this paper is optimized for size and power consumption; the efficiency is improved by reducing the area, with the generated design occupying 12% of the available look-up tables.

REFERENCES

[1] Hung-Yi Hsieh and Kea-Tiong Tang, "VLSI Implementation of a Bio-Inspired Olfactory Spiking Neural Network," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 7, Jul. 2013.

[2] Mazlina Mamat, Salina Abdul Samad, and Mahammad A. Hannan, "An Electronic Nose for Reliable Measurement and Correct Classification of Beverages," International Journal on Sensor Networks, vol. 2, 2011, pp. 6435–6453.

[3] Helmy Widyantara, Muhammad Rivai, and Djoko Purwanto, "Neural Network for Electronic Nose using Field Programmable Analog Arrays," International Journal of Electrical and Computer Engineering (IJECE), vol. 2, no. 6, Dec. 2012, pp. 739–747.

[4] Olivier Rochel, Dominique Martinez, Etienne Hugues, and Frédéric Sarry, "Stereo-olfaction with a sniffing neuromorphic robot using spiking neurons."

[5] D. Muir, G. Indiveri, and R. Douglas, "Form specifies function: Robust spike-based computation in analog VLSI without precise synaptic weights," in Proc. IEEE Int. Symp. Circuits Syst., vol. 5, 2005, pp. 5150–5153.

[6] A. Bofill-i-Petit and A. Murray, "Synchrony detection and amplification by silicon neurons with STDP synapses," IEEE Trans. Neural Netw., vol. 15, no. 5, pp. 1296–1304, Sep. 2004.

[7] G. Indiveri, “Synaptic plasticity and spike-based computation in VLSI networks of integrate-and-fire neurons,” Neural Inf. Process., vol. 11, nos. 4–6, pp. 1–12, 2007.

[8] T. J. Koickal, A. Hamilton, S. L. Tan, J. A. Covington, J. W. Gardner, and T. C. Pearce, “Analog VLSI circuit implementation of an adaptive neuromorphic olfaction chip,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 54, no. 1, pp. 60–73, Jan. 2007.

[9] E. Chicca, D. Badoni, V. Dante, M. D’Andreagiovanni, G. Salina, L. Carota, S. Fusi, and P. Del Giudice, “A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory,” IEEE Trans. Neural Netw., vol. 14, no. 5, pp. 1297–1307, Sep. 2003.

[10] J. Wade, L. McDaid, J. Santos, and H. Sayers, “SWAT: A spiking neural network training algorithm for classification problems,” IEEE Trans. Neural Netw., vol. 21, no. 11, pp. 1817–1830, Nov. 2010.

[11] Q. Sun, F. Schwartz, J. Michel, Y. Herve, and R. Dalmolin, "Implementation study of an analog spiking neural network for assisting cardiac delay prediction in a cardiac resynchronization therapy device," IEEE Trans. Neural Netw., vol. 22, no. 6, pp. 858–869, Jun. 2011.

[12] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann, “Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs,” Science, vol. 275, no. 5297, pp. 213–215, 1997.

[13] J. M. Brader, W. Senn, and S. Fusi, “Learning real-world stimuli in a neural network with spike-driven synaptic dynamics,” Neural Comput., vol. 19, no. 11, pp. 2881–2912, 2007.

[14] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. J. Amit, “Spike- driven synaptic plasticity: Theory, simulation, VLSI implementation,” Neural Comput., vol. 12, no. 10, pp. 2227–2258, 2000.