Practicals for neural networks 1

  • Week 5: Tasks 1 and 2
  • Weeks 6 and 7: Task 3

Task 1.

Aim: To understand how a neuron can be used to store information.

If x1=1, x2=1, x3=1 and w1=1, w2=1, w3=1, and the threshold θ=2:

What would the value of y be?

y=w*x - 

Which is y = (1*1 + 1*1 + 1*1 - 2) = 1
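This arithmetic can be checked with a few lines of Java (a standalone sketch, not part of the practical's code); substituting the x values from the questions below answers each case:

```java
public class NeuronSum {
    // weighted sum of three inputs minus the threshold theta
    static double y(double x1, double x2, double x3,
                    double w1, double w2, double w3, double theta) {
        return w1 * x1 + w2 * x2 + w3 * x3 - theta;
    }

    public static void main(String[] args) {
        // the worked example: all inputs and weights 1, theta = 2
        System.out.println(y(1, 1, 1, 1, 1, 1, 2)); // prints 1.0
    }
}
```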

If the weights are the same but x1=0,x2=0,x3=0 what is the value of y?

If the weights are the same but x1=1,x2=0,x3=0 what is the value of y?

If the weights are the same but x1=0,x2=1,x3=0 what is the value of y?

If the weights are the same but x1=0,x2=1,x3=1 what is the value of y?

What would happen if we added a further function at the end of this neuron, such that if y >= 0 the output is 1, and otherwise the output is 0?

Using this new function and the neuron, what parameters would be needed to produce an OR gate?
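Candidate parameters can be tested with a sketch like the one below. It uses the example weights from above (all 1, θ=2), which are deliberately not the OR-gate answer; with these values the neuron fires whenever at least two inputs are 1:

```java
public class StepNeuron {
    // weighted sum followed by the step function: 1 if y >= 0, else 0
    static int fire(double x1, double x2, double x3,
                    double w1, double w2, double w3, double theta) {
        double y = w1 * x1 + w2 * x2 + w3 * x3 - theta;
        return (y >= 0) ? 1 : 0;
    }

    public static void main(String[] args) {
        // print the full truth table -- swap in your own OR-gate values
        for (int x1 = 0; x1 <= 1; x1++)
            for (int x2 = 0; x2 <= 1; x2++)
                for (int x3 = 0; x3 <= 1; x3++)
                    System.out.println(x1 + "," + x2 + "," + x3 + " -> "
                            + fire(x1, x2, x3, 1, 1, 1, 2));
    }
}
```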

Task 2.

Aim: To develop a line-following robot based on two neurones controlling the robot (see the code at the end of the document). The robot has two light sensors, on the left and right, and aims to follow the left-hand side of a thick line. Each sensor produces a ‘1’ when it is on the line and a ‘0’ when it is off the line.

Left Sensor / Right Sensor / Output 1 / Output 2
0 / 0 / 0 / 0
0 / 1 / 0 / 1
1 / 0 / 1 / 0
1 / 1 / 1 / 1

Or

Left Sensor / Right Sensor / Output 1 / Output 2
0 / 0 / 1 / 0
0 / 1 / 0 / 1
1 / 0 / 1 / 1
1 / 1 / 0 / 1

Weights={{bias1,w11,w21},{bias2,w12,w22}};

Your task is to find weights that produce Output 1 and Output 2 as given in the table, and then add those weights to the code at the end of the document. Remember that an output will be 1 if the weighted sum is greater than or equal to 0, and 0 otherwise.

Suggestion: Treat each output separately.
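Treating each output separately, one way to check candidate weights is a brute-force search over a small integer range. The sketch below searches for Output 1 of the first table (the {bias, w1, w2} layout matches the Weights array above); the same idea works for any column of either table:

```java
public class WeightSearch {
    // first table, Output 1 column: {left sensor, right sensor, output}
    static final int[][] TABLE = {{0,0,0},{0,1,0},{1,0,1},{1,1,1}};

    // does this weight triple reproduce every row of the table?
    static boolean matches(int bias, int w1, int w2) {
        for (int[] row : TABLE) {
            int out = (bias + w1 * row[0] + w2 * row[1] >= 0) ? 1 : 0;
            if (out != row[2]) return false;
        }
        return true;
    }

    // exhaustive search over integer weights in [-3, 3]
    static int[] search() {
        for (int b = -3; b <= 3; b++)
            for (int w1 = -3; w1 <= 3; w1++)
                for (int w2 = -3; w2 <= 3; w2++)
                    if (matches(b, w1, w2)) return new int[]{b, w1, w2};
        return null;
    }

    public static void main(String[] args) {
        int[] w = search();
        System.out.println("bias=" + w[0] + " w1=" + w[1] + " w2=" + w[2]);
    }
}
```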

Task 3

Aim: To produce a multilayer perceptron using Java. This code is based around the Neural Network implementation of Jeff Heaton.

Task 3.1:

Enter the code below into a Java IDE (such as Eclipse or JCreator). The code is taken from the implementation mentioned above.

/**
 * Neural Network
 * Feedforward Backpropagation Neural Network
 * Written in 2002 by Jeff Heaton
 *
 * This class is released under the limited GNU public
 * license (LGPL).
 *
 * @author Jeff Heaton
 * @version 1.0
 */
public class Network {

    /** The global error for the training. */
    protected double globalError;

    /** The number of input neurons. */
    protected int inputCount;

    /** The number of hidden neurons. */
    protected int hiddenCount;

    /** The number of output neurons. */
    protected int outputCount;

    /** The total number of neurons in the network. */
    protected int neuronCount;

    /** The number of weights in the network. */
    protected int weightCount;

    /** The learning rate. */
    protected double learnRate;

    /** The outputs from the various levels. */
    protected double fire[];

    /**
     * The weight matrix; this, along with the thresholds, can be
     * thought of as the "memory" of the neural network.
     */
    protected double matrix[];

    /** The errors from the last calculation. */
    protected double error[];

    /** Accumulates matrix deltas for training. */
    protected double accMatrixDelta[];

    /**
     * The thresholds; this value, along with the weight matrix,
     * can be thought of as the memory of the neural network.
     */
    protected double thresholds[];

    /** The changes that should be applied to the weight matrix. */
    protected double matrixDelta[];

    /** The accumulation of the threshold deltas. */
    protected double accThresholdDelta[];

    /** The threshold deltas. */
    protected double thresholdDelta[];

    /** The momentum for training. */
    protected double momentum;

    /** The changes in the errors. */
    protected double errorDelta[];

    /**
     * Construct the neural network.
     *
     * @param inputCount  The number of input neurons.
     * @param hiddenCount The number of hidden neurons.
     * @param outputCount The number of output neurons.
     * @param learnRate   The learning rate to be used when training.
     * @param momentum    The momentum to be used when training.
     */
    public Network(int inputCount,
                   int hiddenCount,
                   int outputCount,
                   double learnRate,
                   double momentum) {
        this.learnRate = learnRate;
        this.momentum = momentum;
        this.inputCount = inputCount;
        this.hiddenCount = hiddenCount;
        this.outputCount = outputCount;
        neuronCount = inputCount + hiddenCount + outputCount;
        weightCount = (inputCount * hiddenCount) + (hiddenCount * outputCount);

        fire = new double[neuronCount];
        matrix = new double[weightCount];
        matrixDelta = new double[weightCount];
        thresholds = new double[neuronCount];
        errorDelta = new double[neuronCount];
        error = new double[neuronCount];
        accThresholdDelta = new double[neuronCount];
        accMatrixDelta = new double[weightCount];
        thresholdDelta = new double[neuronCount];

        reset();
    }

    /**
     * Returns the root mean square error for a complete training set.
     *
     * @param len The length of a complete training set.
     * @return The current error for the neural network.
     */
    public double getError(int len) {
        double err = Math.sqrt(globalError / (len * outputCount));
        globalError = 0; // clear the accumulator
        return err;
    }

    /**
     * The threshold method. You may wish to override this method to provide
     * other threshold methods.
     *
     * @param sum The activation from the neuron.
     * @return The activation applied to the threshold method.
     */
    public double threshold(double sum) {
        return 1.0 / (1 + Math.exp(-1.0 * sum));
    }

    /**
     * Compute the output for a given input to the neural network.
     *
     * @param input The input provided to the neural network.
     * @return The results from the output neurons.
     */
    public double[] computeOutputs(double input[]) {
        int i, j;
        final int hiddenIndex = inputCount;
        final int outIndex = inputCount + hiddenCount;

        for (i = 0; i < inputCount; i++) {
            fire[i] = input[i];
        }

        // hidden layer
        int inx = 0;
        for (i = hiddenIndex; i < outIndex; i++) {
            double sum = thresholds[i];
            for (j = 0; j < inputCount; j++) {
                sum += fire[j] * matrix[inx++];
            }
            fire[i] = threshold(sum);
        }

        // output layer
        double result[] = new double[outputCount];
        for (i = outIndex; i < neuronCount; i++) {
            double sum = thresholds[i];
            for (j = hiddenIndex; j < outIndex; j++) {
                sum += fire[j] * matrix[inx++];
            }
            fire[i] = threshold(sum);
            result[i - outIndex] = fire[i];
        }
        return result;
    }

    /**
     * Calculate the error for the recognition just done.
     *
     * @param ideal What the output neurons should have yielded.
     */
    public void calcError(double ideal[]) {
        int i, j;
        final int hiddenIndex = inputCount;
        final int outputIndex = inputCount + hiddenCount;

        // clear hidden and output layer errors
        for (i = inputCount; i < neuronCount; i++) {
            error[i] = 0;
        }

        // layer errors and deltas for output layer
        for (i = outputIndex; i < neuronCount; i++) {
            error[i] = ideal[i - outputIndex] - fire[i];
            globalError += error[i] * error[i];
            errorDelta[i] = error[i] * fire[i] * (1 - fire[i]);
        }

        // hidden layer errors
        int winx = inputCount * hiddenCount;
        for (i = outputIndex; i < neuronCount; i++) {
            for (j = hiddenIndex; j < outputIndex; j++) {
                accMatrixDelta[winx] += errorDelta[i] * fire[j];
                error[j] += matrix[winx] * errorDelta[i];
                winx++;
            }
            accThresholdDelta[i] += errorDelta[i];
        }

        // hidden layer deltas
        for (i = hiddenIndex; i < outputIndex; i++) {
            errorDelta[i] = error[i] * fire[i] * (1 - fire[i]);
        }

        // input layer errors
        winx = 0; // offset into weight array
        for (i = hiddenIndex; i < outputIndex; i++) {
            for (j = 0; j < hiddenIndex; j++) {
                accMatrixDelta[winx] += errorDelta[i] * fire[j];
                error[j] += matrix[winx] * errorDelta[i];
                winx++;
            }
            accThresholdDelta[i] += errorDelta[i];
        }
    }

    /**
     * Modify the weight matrix and thresholds based on the last call to
     * calcError.
     */
    public void learn() {
        int i;
        // process the matrix
        for (i = 0; i < matrix.length; i++) {
            matrixDelta[i] = (learnRate * accMatrixDelta[i]) + (momentum * matrixDelta[i]);
            matrix[i] += matrixDelta[i];
            accMatrixDelta[i] = 0;
        }
        // process the thresholds
        for (i = inputCount; i < neuronCount; i++) {
            thresholdDelta[i] = learnRate * accThresholdDelta[i] + (momentum * thresholdDelta[i]);
            thresholds[i] += thresholdDelta[i];
            accThresholdDelta[i] = 0;
        }
    }

    /**
     * Reset the weight matrix and the thresholds.
     */
    public void reset() {
        int i;
        for (i = 0; i < neuronCount; i++) {
            thresholds[i] = 0.5 - (Math.random());
            thresholdDelta[i] = 0;
            accThresholdDelta[i] = 0;
        }
        for (i = 0; i < matrix.length; i++) {
            matrix[i] = 0.5 - (Math.random());
            matrixDelta[i] = 0;
            accMatrixDelta[i] = 0;
        }
    }
}
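Note that calcError scales each error by fire[i] * (1 - fire[i]). That factor is the derivative of the sigmoid used in threshold(): if s(x) = 1/(1+e^-x) then s'(x) = s(x)(1 - s(x)). A quick standalone finite-difference check of this identity:

```java
public class SigmoidCheck {
    // the same activation as Network.threshold()
    static double sigmoid(double x) {
        return 1.0 / (1 + Math.exp(-x));
    }

    public static void main(String[] args) {
        double h = 1e-6;
        for (double x = -2; x <= 2; x += 0.5) {
            // numerical derivative vs the s*(1-s) form used in calcError
            double numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h);
            double analytic = sigmoid(x) * (1 - sigmoid(x));
            System.out.printf("x=%5.2f numeric=%.6f analytic=%.6f%n",
                    x, numeric, analytic);
        }
    }
}
```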

Task 3.2: Similarly, enter the code below as a new class. This code uses the previous class Network to train and test a multilayer perceptron on the AND problem. The code was adapted from the same source as the code in Task 3.1.

import java.text.*;

public class neural {

    public static void main(String args[]) {
        double ANDInput[][] = {
            {0.0, 0.0},
            {1.0, 0.0},
            {0.0, 1.0},
            {1.0, 1.0}};
        double ANDIdeal[][] = {
            {0.0}, {0.0}, {0.0}, {1.0}};

        System.out.println("Learn:");
        Network network = new Network(2, 1, 1, 0.7, 0.9);
        NumberFormat percentFormat = NumberFormat.getPercentInstance();
        percentFormat.setMinimumFractionDigits(4);

        for (int i = 0; i < 1000; i++) {
            for (int j = 0; j < ANDInput.length; j++) {
                network.computeOutputs(ANDInput[j]);
                network.calcError(ANDIdeal[j]);
                network.learn();
            }
            System.out.println("Trial #" + i + ",Error:" +
                percentFormat.format(network.getError(ANDInput.length)));
        }

        System.out.println("Recall:");
        for (int i = 0; i < ANDInput.length; i++) {
            for (int j = 0; j < ANDInput[0].length; j++) {
                System.out.print(ANDInput[i][j] + ":");
            }
            double out[] = network.computeOutputs(ANDInput[i]);
            System.out.println("=" + out[0]);
        }
    }
}
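The listing prints the RMS error through NumberFormat.getPercentInstance(), which multiplies the value by 100 and appends a percent sign. Its output depends on the default locale, so this small sketch pins it to Locale.US for a reproducible check:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class PercentDemo {
    // format a fraction the same way the training loop formats its error
    static String asPercent(double v) {
        NumberFormat percentFormat = NumberFormat.getPercentInstance(Locale.US);
        percentFormat.setMinimumFractionDigits(4);
        return percentFormat.format(v);
    }

    public static void main(String[] args) {
        System.out.println(asPercent(0.5));     // prints 50.0000%
        System.out.println(asPercent(0.01234)); // prints 1.2340%
    }
}
```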

Task 3.3: Go through the code and add a comment describing what each line does.

Task 3.4: Similarly, enter the code below as a new class. This code uses the previous class Network to train and test a multilayer perceptron on the XOR problem. The code was taken from the same source as the code in Task 3.1.

import java.text.*;

public class neural2 {

    public static void main(String args[]) {
        double xorInput[][] = {
            {0.0, 0.0},
            {1.0, 0.0},
            {0.0, 1.0},
            {1.0, 1.0}};
        double xorIdeal[][] = {
            {0.0}, {1.0}, {1.0}, {0.0}};

        System.out.println("Learn:");
        Network network = new Network(2, 3, 1, 0.7, 0.9);
        NumberFormat percentFormat = NumberFormat.getPercentInstance();
        percentFormat.setMinimumFractionDigits(4);

        for (int i = 0; i < 10000; i++) {
            for (int j = 0; j < xorInput.length; j++) {
                network.computeOutputs(xorInput[j]);
                network.calcError(xorIdeal[j]);
                network.learn();
            }
            System.out.println("Trial #" + i + ",Error:" +
                percentFormat.format(network.getError(xorInput.length)));
        }

        System.out.println("Recall:");
        for (int i = 0; i < xorInput.length; i++) {
            for (int j = 0; j < xorInput[0].length; j++) {
                System.out.print(xorInput[i][j] + ":");
            }
            double out[] = network.computeOutputs(xorInput[i]);
            System.out.println("=" + out[0]);
        }
    }
}
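Notice that the XOR network uses three hidden neurons (Network(2, 3, 1, ...)) where AND needed only one. The reason is that no single linear threshold unit can separate XOR, which the following exhaustive search over a grid of small weights illustrates (a sketch for intuition, not a proof):

```java
public class XorSeparability {
    // can a single unit out = (b + w1*x1 + w2*x2 >= 0) reproduce the table?
    static boolean solvable(int[][] table) {
        for (double b = -4; b <= 4; b += 0.5)
            for (double w1 = -4; w1 <= 4; w1 += 0.5)
                for (double w2 = -4; w2 <= 4; w2 += 0.5) {
                    boolean ok = true;
                    for (int[] row : table) {
                        int out = (b + w1 * row[0] + w2 * row[1] >= 0) ? 1 : 0;
                        if (out != row[2]) { ok = false; break; }
                    }
                    if (ok) return true;
                }
        return false;
    }

    public static void main(String[] args) {
        int[][] and = {{0,0,0},{1,0,0},{0,1,0},{1,1,1}};
        int[][] xor = {{0,0,0},{1,0,1},{0,1,1},{1,1,0}};
        System.out.println("AND solvable: " + solvable(and)); // true
        System.out.println("XOR solvable: " + solvable(xor)); // false
    }
}
```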

Task 3.5:

When it runs, what do you see happening? Does it solve the XOR problem?

Task 3.6:

Go through the code in Task 3.1 and put comments in to help with your understanding of what the code does, in particular the way the weights are altered.

Code for Task 2

import josx.platform.rcx.*;

public class annlf {

    public static void main(String[] args) {
        int w[][] = { /* put weights here */ };
        int o[] = {1, 1};
        int s1, s2, res1, res2;
        int sensor1 = 0, sensor2 = 0;
        robot_1 tom = new robot_1();

        Sensor.S1.activate();
        Sensor.S3.activate();

        for (;;) {
            sensor1 = Sensor.S1.readValue();
            sensor2 = Sensor.S3.readValue();
            LCD.showNumber(sensor1);

            // threshold the raw light readings: below 42 means on the line
            if (sensor1 < 42)
                s1 = 1;
            else
                s1 = 0;
            if (sensor2 < 42)
                s2 = 1;
            else
                s2 = 0;

            // neuron 1: weighted sum of the sensor inputs plus the bias
            res1 = w[0][1] * s1 + w[0][2] * s2 + w[0][0];
            if (res1 >= 0)
                o[0] = 1;
            else
                o[0] = 0;

            // neuron 2: same structure with the second row of weights
            res2 = w[1][1] * s1 + w[1][2] * s2 + w[1][0];
            if (res2 >= 0)
                o[1] = 1;
            else
                o[1] = 0;

            // map the two neuron outputs onto motor commands
            if ((o[0] == 1) && (o[1] == 1))
                tom.forward1(10);
            if ((o[0] == 0) && (o[1] == 0))
                tom.backward1(20);
            if ((o[0] == 1) && (o[1] == 0))
                tom.tlturn(20);
            if ((o[0] == 0) && (o[1] == 1))
                tom.trturn(20);

            LCD.refresh();
        }
    }
}
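The motor-selection branches at the bottom of the loop can be checked off-robot by factoring them into a pure function. Here chooseAction is a hypothetical helper for desk-testing only, not part of the leJOS API or the listing above:

```java
public class MotorLogic {
    // mirrors the four if-statements: map the two neuron outputs to an action
    static String chooseAction(int o0, int o1) {
        if (o0 == 1 && o1 == 1) return "forward";
        if (o0 == 0 && o1 == 0) return "backward";
        if (o0 == 1 && o1 == 0) return "turn-left";
        return "turn-right"; // remaining case: o0 == 0 && o1 == 1
    }

    public static void main(String[] args) {
        System.out.println(chooseAction(1, 1)); // prints forward
        System.out.println(chooseAction(0, 1)); // prints turn-right
    }
}
```

Checking this logic on a desktop JVM first makes it easier to tell weight mistakes apart from motor-control mistakes once the program is on the RCX.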