Maastricht University

Faculty of Humanities and Sciences

Knowledge Engineering Programme

TRIAL EXAMINATION BACHELOR KE

Algorithmic Networks & Optimization

Examiner: R.L. Westra

Date: December 9, 2008

Time: 8:30 – 10:30

Place: BOU 8-10, room 0.09

Notes:

  1. The exam is closed-book – but the enclosed summaries of the NN textbook chapters may be used during the exam.
  2. The exam consists of 4 pages (including this page).
  3. The exam time is 3 hours.
  4. The number of exam questions is 10.
  5. The number of points for each question is given (in bold).
  6. The maximum number of points is 10.
  7. The final exam grade is the sum of the points earned on the questions answered correctly.
  8. The final course grade is the final exam grade plus the bonus grade that you earned from the bonus task hand-in; it will be rounded off, with a maximum of 10.
  9. Before answering the questions, please first read all the exam questions and then plan how to spend the three hours.
  10. When answering the questions, please do not forget:
     - to write your name and student number on each answer page;
     - to number the answers; and
     - to number the answer pages.

Good Luck!!!

PART I - THEORY QUESTIONS (TOTAL: 6 Pts)

Question 1 - Architectures

a. What is a recurrent network?

b. What is a delay in a network?
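For orientation, a minimal Python sketch of a single recurrent neuron with a one-step delay block (weights, bias, and input sequence are arbitrary illustrative choices, not part of the exam):

```python
def hardlim(n):
    # Hard-limit transfer function.
    return 1.0 if n >= 0 else 0.0

w_in, w_rec, b = 0.5, -0.3, 0.1   # illustrative weights and bias
p = [1.0, 0.0, 1.0, 1.0]          # input sequence p(1)..p(4)
a_prev = 0.0                      # initial condition a(0) stored in the delay

for t, p_t in enumerate(p, start=1):
    # Recurrence: the output is fed back through a one-step delay.
    a_t = hardlim(w_in * p_t + w_rec * a_prev + b)
    print(f"a({t}) = {a_t}")
    a_prev = a_t                  # the delay holds a(t) until step t+1
```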

Question 2 – Perceptron

a. What restriction in its architecture prevents a perceptron from learning more complicated examples?

b. What would it mean to introduce a learning rate into the perceptron learning rule, and what would its effect be?
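As a reference point for part b, a minimal sketch of the perceptron rule with an explicit learning rate α folded in (the AND dataset and α = 0.5 are illustrative assumptions; α = 1 recovers the standard rule):

```python
import numpy as np

def hardlim(n):
    return 1.0 if n >= 0 else 0.0

# Illustrative linearly separable data: the AND function.
P = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)

w, b = np.zeros(2), 0.0
alpha = 0.5                        # learning rate (the classic rule uses alpha = 1)

for epoch in range(20):
    for p, t in zip(P, T):
        a = hardlim(w @ p + b)
        e = t - a                  # error
        w = w + alpha * e * p      # w_new = w_old + alpha * e * p
        b = b + alpha * e          # b_new = b_old + alpha * e

print(w, b)                        # a separating line for AND
```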

Question 3 – Hebb’s rule and the linear associator

a. Can the unsupervised Hebbian learning rule be used to train a Linear Associator on a collection of input-output patterns? Explain.

b. Discuss the relation of the pseudo-inverse rule to the error in the performance of the Linear Associator network. Also explain what the nature of this error is.

c. Can a network trained with the pseudo-inverse rule handle non-linearly separable patterns? Explain.
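For parts a and b, a sketch contrasting the Hebb rule W = T Pᵀ with the pseudo-inverse rule W = T P⁺ on a small, deliberately non-orthogonal pattern set (patterns and targets are illustrative assumptions):

```python
import numpy as np

# Two non-orthogonal input prototypes (columns of P) and their targets.
P = np.array([[1.0, 1.0],
              [1.0, 0.0]])          # columns p1, p2; p1 . p2 = 1 != 0
T = np.array([[1.0, -1.0]])         # desired outputs t1, t2

W_hebb = T @ P.T                    # Hebb rule: exact only for orthonormal prototypes
W_pinv = T @ np.linalg.pinv(P)      # pseudo-inverse rule: minimizes ||T - W P||^2

print("Hebb output:", W_hebb @ P)   # differs from T: crosstalk between patterns
print("Pinv output:", W_pinv @ P)   # reproduces T when P has full column rank
```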

Question 4 – Performance surfaces

Consider a scalar function f(x) with Hessian matrix H(x).

a. What is the relation between the nature of an optimum of function f at a location x* and the eigenvalues of H?

b. What does it mean if some eigenvalues of H are zero?
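The machinery behind both parts can be explored numerically; a sketch for the quadratic f(x) = ½ xᵀAx, whose Hessian is A (the matrix below is an illustrative choice with one zero eigenvalue):

```python
import numpy as np

# Hessian of f(x) = 0.5 * x^T A x.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])              # one positive and one zero eigenvalue

eigvals, eigvecs = np.linalg.eigh(A)
print("eigenvalues:", eigvals)          # [0., 2.]

# Along the eigenvector belonging to the zero eigenvalue, f does not change:
v = eigvecs[:, 0]                       # eigenvector for eigenvalue 0
for s in (0.0, 1.0, 2.0):
    x = s * v
    print(f"f({x}) = {0.5 * x @ A @ x}")  # stays 0: a whole line of stationary points
```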

Question 5 – Backpropagation

a. Discuss underperformance (undertraining) versus overtraining in a feed-forward NN with Backpropagation.

b. Why and how does the Levenberg-Marquardt algorithm approximate the Jacobian?

c. Discuss the effect of the μ-parameter in the Levenberg-Marquardt algorithm.
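For parts b and c, a sketch of the core LM update Δx = −(JᵀJ + μI)⁻¹ Jᵀe on a one-parameter least-squares problem (data are synthetic stand-ins); it shows how μ blends between a Gauss-Newton step (small μ) and a short steepest-descent-like step (large μ):

```python
import numpy as np

# Illustrative least-squares problem: fit y ~ w*x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 2.1, 3.9, 6.2])

def lm_step(w, mu):
    e = y - w * x                    # residual vector
    J = -x.reshape(-1, 1)            # Jacobian de/dw
    # LM update: dw = -(J^T J + mu I)^{-1} J^T e
    return -np.linalg.solve(J.T @ J + mu * np.eye(1), J.T @ e)[0]

for mu in (0.01, 1.0, 100.0):
    print(f"mu={mu:6.2f}: step from w=0 is {lm_step(0.0, mu):+.4f}")
# Small mu: near the Gauss-Newton step; large mu: a small gradient-descent-like step.
```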

Question 6

a. Describe a small-world network that is not scale-free.

b. Is it possible that a scale-free network is not a small-world network?
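A sketch contrasting the two network classes, assuming the networkx library is available (graph sizes and parameters are arbitrary illustrative choices):

```python
import networkx as nx

# Watts-Strogatz: high clustering and short paths (small-world), but a
# near-Poisson degree distribution, so it is not scale-free.
ws = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)

# Barabasi-Albert: power-law degree distribution (scale-free).
ba = nx.barabasi_albert_graph(n=1000, m=5, seed=1)

for name, g in [("Watts-Strogatz", ws), ("Barabasi-Albert", ba)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "avg path:", round(nx.average_shortest_path_length(g), 2),
          "max degree:", max(dict(g.degree()).values()))
```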

PART II - MATHEMATICS (TOTAL: 6 Pts)

Question 7 – Supervised Hebbian Learning

Consider the following prototype patterns:

a. Are these two patterns orthogonal?

b. Design an autoassociator for these patterns using the Hebb rule.

c. What response does the network give for the input-pattern p3 shown above?
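The prototype patterns and p3 did not survive reproduction here. Purely as a format illustration, a Hebbian autoassociator for two hypothetical bipolar patterns (these are stand-ins, not the exam's p1, p2, p3):

```python
import numpy as np

# Hypothetical bipolar prototypes (stand-ins for the missing patterns).
p1 = np.array([1, -1, 1, -1, 1, -1], dtype=float)
p2 = np.array([1, 1, -1, -1, 1, 1], dtype=float)
print("p1 . p2 =", p1 @ p2)             # 0 here: orthogonal

# Hebb rule for an autoassociator: W = sum_q p_q p_q^T
W = np.outer(p1, p1) + np.outer(p2, p2)

# Probe with a corrupted pattern (first element of p1 flipped).
p3 = p1.copy()
p3[0] = -p3[0]
a = np.sign(W @ p3)                     # hard-limit (symmetric) transfer function
print("response:", a, "matches p1:", np.array_equal(a, p1))
```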

Question 8 – Performance Optimization

Let {p_j | j = 1, …, N} be a set of vectors that are mutually conjugate with respect to a Hessian matrix A. Show that these vectors are linearly independent.
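A numerical illustration of the claim (not the requested proof), using an arbitrary positive definite A and conjugate directions built by A-orthogonalizing random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # positive definite, chosen for illustration

# Gram-Schmidt in the A-inner product: enforce p_i^T A p_j = 0 for i != j.
vs = rng.standard_normal((3, 3))
ps = []
for v in vs:
    for p in ps:
        v = v - (p @ A @ v) / (p @ A @ p) * p
    ps.append(v)
P = np.array(ps)

print("P A P^T (off-diagonal ~ 0 = conjugacy):\n", np.round(P @ A @ P.T, 10))
print("rank:", np.linalg.matrix_rank(P))  # 3: the directions are independent
```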

Question 9 – Widrow-Hoff Learning

The pilot of an airplane is talking into a microphone in his cockpit. The sound received by the air traffic controller in the tower is garbled because the pilot's voice signal has been contaminated by engine noise that reaches his microphone. Can you suggest an adaptive ADALINE filter that might help reduce the noise in the signal received by the control tower? Explain your system.
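For reference, a minimal LMS (Widrow-Hoff) noise-cancelling sketch: the ADALINE's input is a reference measurement of the engine noise, the target is the contaminated microphone signal, so the filter learns the noise path and the error signal becomes the restored voice (all signals below are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
voice = np.sin(0.07 * np.arange(T))             # stand-in for the pilot's voice
noise = rng.standard_normal(T)                  # engine-noise reference
contaminated = voice + 0.8 * np.roll(noise, 1)  # noise reaches the mic delayed/scaled

taps, alpha = 4, 0.01
w = np.zeros(taps)
restored = np.zeros(T)

for t in range(taps, T):
    x = noise[t - taps + 1:t + 1][::-1]   # tapped delay line of the noise reference
    a = w @ x                             # ADALINE output: estimate of the mic noise
    e = contaminated[t] - a               # error = estimate of the clean voice
    w += 2 * alpha * e * x                # Widrow-Hoff update: w += 2*alpha*e*p
    restored[t] = e

print("residual power:", np.mean((restored[500:] - voice[500:])**2))  # shrinks as it converges
```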

Question 10 – Variations on Back Propagation

Show that if a momentum term is added to the steepest descent algorithm on a pure quadratic function, there will always be a momentum coefficient that will make the algorithm stable, regardless of the learning rate.
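A numerical companion to the claim (not the requested proof), using one common form of the momentum update, Δx_k = γ Δx_{k−1} − (1 − γ) α ∇f(x_k), on a 1-D quadratic where plain steepest descent with the same learning rate diverges (λ and α are illustrative choices):

```python
# Quadratic f(x) = 0.5 * lam * x^2, so grad f = lam * x.
lam, alpha = 2.0, 1.2            # alpha * lam = 2.4 > 2: plain steepest descent diverges

def run(gamma, steps=60):
    x, dx = 1.0, 0.0
    for _ in range(steps):
        # Momentum update: dx_k = gamma*dx_{k-1} - (1-gamma)*alpha*grad
        dx = gamma * dx - (1 - gamma) * alpha * lam * x
        x = x + dx
    return x

for gamma in (0.0, 0.5, 0.8):
    print(f"gamma={gamma}: x after 60 steps = {run(gamma):.3e}")
# gamma=0 (no momentum) blows up; with enough momentum the iteration contracts.
```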