School of Electrical, Computer and Energy Engineering

M.S. Final Oral Defense

STDP implementation using CBRAM devices in CMOS

by

Mahraj Sivaraj

22 June 2015

2 pm

GWC 208C

Committee:

Hugh Barnaby, Chair

Michael Kozicki

Jenifer Blain Christen

Abstract

Alternative computation based on nanoscale neural systems is of increasing interest because of the massive parallelism and scalability it provides. Neural-based computation systems also offer fault-detection and self-healing mechanisms. Traditional Von Neumann architectures, which separate the memory and computation units, inherently suffer from the Von Neumann bottleneck, whereby the processor is limited by the rate at which it can fetch instructions. The clock-driven Von Neumann computer kept pace through technology scaling. However, as transistor scaling slowly comes to an end with channel lengths reaching a few nanometers, processor speeds are beginning to saturate. This led to the development of multi-core systems that process data in parallel, with each core still based on the Von Neumann architecture.

The human brain has always been a mystery to scientists. Modern-day supercomputers are outperformed by the human brain in certain computations, such as pattern recognition, while the brain occupies far less space and consumes only a fraction of the power. Neuromorphic computing aims to mimic biological neural systems in silicon to exploit the massive parallelism that neural systems offer. Neuromorphic systems are event-driven rather than clock-driven. One of the issues faced by neuromorphic computing has been the area occupied by these circuits. With recent developments in nanotechnology, nanoscale memristive devices have emerged as a promising solution: memristor-based synapses can be up to three orders of magnitude smaller than CMOS-based synapses.

In this thesis, the Programmable Metallization Cell (a memristive device) is used to demonstrate a learning algorithm known as Spike Timing Dependent Plasticity (STDP). This learning rule is an extension of Hebb's learning rule in which a synapse's weight is altered by the relative timing of the spikes across it. With the memristor, the synaptic weight is its conductance, and CMOS oscillator-based circuits are used to produce spikes that modulate the memristor conductance by firing with different phase differences.
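For reference, a minimal sketch of the pair-based exponential STDP rule commonly used in the literature is given below. The parameter values (a_plus, a_minus, tau_plus, tau_minus) are illustrative assumptions rather than values from the thesis; in the hardware described here, the weight update corresponds to a change in CBRAM conductance driven by the relative phase of the CMOS oscillator spikes.

    import math

    # Illustrative sketch of pair-based STDP; parameters are hypothetical,
    # not taken from the thesis.
    def stdp_delta_w(delta_t, a_plus=0.01, a_minus=0.012,
                     tau_plus=20e-3, tau_minus=20e-3):
        """Weight change for a spike-time difference delta_t = t_post - t_pre (seconds)."""
        if delta_t > 0:
            # Pre-synaptic spike precedes post-synaptic spike: potentiation
            return a_plus * math.exp(-delta_t / tau_plus)
        elif delta_t < 0:
            # Post-synaptic spike precedes pre-synaptic spike: depression
            return -a_minus * math.exp(delta_t / tau_minus)
        return 0.0

    # Example: a pre spike arriving 5 ms before the post spike strengthens the synapse
    print(stdp_delta_w(5e-3))   # positive -> weight (conductance) increase
    print(stdp_delta_w(-5e-3))  # negative -> weight (conductance) decrease

A positive spike-time difference (pre before post) potentiates the synapse, while a negative difference depresses it, analogous to how the phase difference between oscillator spikes would increase or decrease the cell's conductance.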