Peter Williams (201407)
STAS Topic #3

Science, Technology and Society

‘The brain is just a computer made of meat’

This bold statement by Marvin Minsky, one of the pioneers of Artificial Intelligence in the 1950s, raises the question that has plagued AI experts since the birth of the field: what exactly constitutes intelligence? Without knowing the answer, where can we draw the line and say we have made a machine that is truly intelligent? Is intelligence merely the ability to solve problems? If so, any computer from a calculator to a Cray would have some degree of intelligence. Alan Turing wrote a chess program in 1952 (even though no computer of the time was powerful enough to run it) which could plan its next move, analyse the opponent’s move and execute its own. It was ‘thinking’, but does thinking actually constitute intelligence?

‘The key to understanding cognition rests, to some degree, in understanding memory, perception, language and other cognitive phenomena’ (Morelli & Brown, 1992, p. 2). Reasoning and judgement can also be added to the list of common definitions of cognition. Clearly there is much more to the brain than knowledge and calculation. Alan Turing believed that if someone could develop a machine which could be asked questions and formulate answers through the use of perception, understanding of language, reasoning and judgement (and fool the questioner into believing that it was human), then the machine would be intelligent. Nobody has yet been able to do this.

Maybe we need to look at the brain from a medical perspective. Dr Eric Chudler, a neuroscientist at the University of Washington, identifies the differences and similarities between brains and computers: ‘both computers and brains store memories; both computers and brains can be modified to perform new tasks. Computers and brains both have the ability to monitor their surroundings and respond with behaviour to manipulate their environment’ (Chudler, 2001, pp. 6-7).

This cause/effect selection may be the largest similarity between brains and computers: it forms the basis of every human thought and computer calculation. A basic computer like a calculator responds to a set of given numbers and operators and outputs an answer; the brain responds to ‘sensors’ in the body when you are cold and makes you shiver. A thermostat (essentially a single switch, and a computer is made of millions of such switches) responds to the cold by turning on the boiler. Obviously this type of ‘intelligence’, if you can call it that, does not compare to the complexity of human intelligence, but thousands of similar cause/effect, input/output actions happen in the brain every second.
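As a purely illustrative sketch (the temperature thresholds below are invented, not drawn from any source), the thermostat and the shivering reflex can both be written as the same trivial cause/effect rule:

```python
# Minimal sketch: the same cause/effect pattern expressed as two trivial
# rules. The threshold values are arbitrary assumptions for illustration.

def thermostat(room_temp_c: float) -> str:
    """A thermostat is just a switch: cold input -> boiler on."""
    return "boiler on" if room_temp_c < 18.0 else "boiler off"

def shiver_reflex(body_temp_c: float) -> str:
    """The brain's version of the same rule: cold input -> shiver."""
    return "shiver" if body_temp_c < 36.0 else "do nothing"

print(thermostat(15.0))      # boiler on
print(shiver_reflex(35.2))   # shiver
```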

Dr Chudler does, however, point out that the big difference between a human brain and a computer is consciousness and awareness: ‘You know you are here… [Computers] do not experience the emotions, dreams and thoughts that are an essential part of what makes us human’ (Chudler, 2001, pp. 6-7).

This consciousness or awareness is what throws the validity of the Turing test into question: if a computer were able to fool a human into believing that it was another human, would it be intelligent? Is it consciously fooling the human, or does it simply have a large knowledge base and extensive adaptive programming?

Consciousness, in the context of this topic, means knowledge of one’s own existence, condition (i.e. state), sensations, and so on. Computers can certainly be in an ‘on’ condition or state – in other words conscious – but are they actually aware of this state in the way a human is? Basic human consciousness includes a set of automatically controlled actions such as the beating of the heart, breathing and temperature control. Without these measures the body would surely malfunction. The brain knows this, and so controls these actions to ensure that a malfunction does not occur. This is consciousness.

In a computer these actions, or controls, can be likened to the system clock, the states of critical processes, and memory management. If these controls were not in place, the computer would surely malfunction. The computer ‘knows’ this, and therefore ensures that faults do not occur. Is this not also consciousness? When comparing two systems with such different architectures, their versions of consciousness will inevitably be similar in context but different in substance. Why should this make a computer less intelligent than the brain? Perhaps the psychological abilities of brains and computers can provide an insight into their different types of intelligence.
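To make the analogy concrete, here is a loose sketch (the state names and thresholds are invented for this example) of a computer watching over its own ‘vital signs’ in the way the brain watches over heartbeat and breathing:

```python
# Toy watchdog: check a few internal states and act before a fault occurs.
# The process names and memory threshold are hypothetical.

def watchdog(critical_processes: dict[str, bool], free_memory_mb: int) -> list[str]:
    """Like the brain keeping the heart beating: inspect the vital signs
    and intervene so that a malfunction does not occur."""
    actions = []
    for name, alive in critical_processes.items():
        if not alive:
            actions.append(f"restart {name}")
    if free_memory_mb < 100:
        actions.append("reclaim memory")
    return actions

print(watchdog({"clock": True, "scheduler": False}, free_memory_mb=64))
# ['restart scheduler', 'reclaim memory']
```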

What can a brain do? The brain is very versatile and can perform many different types of activity. Some of the prominent characteristics are:

  • Recognising signals from the body and responding to them.
  • When confronted with a problem, (sometimes – see emotion below) logically formulating a response based on facts and knowledge, and often forming an opinion from the combination of these factors.
  • The ability to learn: essentially adaptation according to facts and knowledge, learning from mistakes, or avoiding them through ‘common sense’.
  • Learning the art of language, from pronunciation through to vocabulary, and from grammar through to the presentation of a topic (like writing an essay!).
  • Experiencing emotion. Happiness, sadness, love, anger, fear and so on are a collage of emotions that can affect, by either hindering or improving, the outcome of a human’s actions or opinions.

So, is it possible for a computer to have these characteristics?

A computer recognises signals from its inputs and responds to them via its outputs; this forms the basis of the Input → Process → Output model. A computer can be given a problem and can logically formulate a response based on the information it stores. If a computer were programmed to take a wealth of information and work out the pros and cons of each piece of it, I am sure it would be possible for the machine to form an opinion of sorts based on facts and knowledge.
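As a hedged sketch of that idea (the facts and weights below are hypothetical, not taken from any real system), an ‘opinion of sorts’ could be nothing more than weighing stored pros against stored cons:

```python
# Input -> Process -> Output applied to 'forming an opinion':
# input is weighted pros and cons, the process is a simple sum,
# and the output is a crude verdict. All values are invented.

def form_opinion(pros: dict[str, float], cons: dict[str, float]) -> str:
    score = sum(pros.values()) - sum(cons.values())
    if score > 0:
        return "in favour"
    if score < 0:
        return "against"
    return "undecided"

# Hypothetical example input.
pros = {"cheap to run": 0.6, "saves time": 0.8}
cons = {"hard to maintain": 0.9}
print(form_opinion(pros, cons))  # in favour
```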

We come to learning. Basic learning is based on adaptive behaviour, a theory called ‘Knowledge of Results (KR) – where a stimulus or behaviour is performed and the resulting response is recorded and acknowledged’ (Annett, 1969, p. 9). There are two types of this associative learning: classical conditioning and operant conditioning. I will focus on operant conditioning, in which the brain associates a reward with a successful behaviour and a punishment with an unsuccessful one. This is trial and error, or learning from one’s mistakes.

Dr Tarassenko of the Robotics department at Oxford University believes that computers that learn through trial and error are not intelligent. The robots he works on navigate a room, and when they hit an obstacle they record that it is there and do not hit it again. He believes that his robot does not meet the criteria set by classical AI research: “knowledge”, “planning” and “reasoning”. Dr Tarassenko says, “[The robot] doesn’t plan, it reacts to collisions. It has no concept of what an obstacle is at all, and it doesn’t reason about it because it proceeds by trial and error” (Connor, 1993). The author of the article concluded that learning from mistakes is not enough to constitute intelligence, even though it is exactly the sort of behaviour shown by a young child or an animal. I believe, however, that operant conditioning is an important way of learning, and that a computer can certainly be programmed to store examples of why it was rewarded and why it was punished and learn from them – after all, children learn much of what they know this way.
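A minimal sketch of this kind of operant conditioning, loosely modelled on the description of Dr Tarassenko’s robot (the grid positions and the avoidance rule are my own simplification, not his actual system):

```python
# Trial-and-error learning: a move that led to a collision is 'punished'
# (remembered) and avoided thereafter. Positions are hypothetical grid cells.

import random

class TrialAndErrorRobot:
    """Remembers which moves led to collisions and avoids repeating them."""

    def __init__(self) -> None:
        # 'Punished' positions: places where a collision has happened before.
        self.punished: set[tuple[int, int]] = set()

    def choose_move(self, options: list[tuple[int, int]]) -> tuple[int, int]:
        # Prefer moves that have never been punished; otherwise pick at random.
        safe = [p for p in options if p not in self.punished]
        return random.choice(safe or options)

    def report_collision(self, position: tuple[int, int]) -> None:
        # The 'punishment': record the collision so the move is not repeated.
        self.punished.add(position)

robot = TrialAndErrorRobot()
robot.report_collision((1, 0))                # it hit an obstacle here once
print(robot.choose_move([(1, 0), (0, 1)]))    # now reliably picks (0, 1)
```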

Computers find it extremely hard to understand human language. ‘Computers remain lousy at context, at disambiguation, at getting jokes, at intent, at meaning, at everything we call understanding’ (Censorware, 2000). This is because systems that supposedly understand language read each word largely in isolation, with little regard for the words around it. Even though they understand the vocabulary, and some ‘intelligent’ systems can grasp a little context – for instance the type of verb – they cannot quite get the gist of grammar or presentation. There is a long way to go before we can replicate the kind of language used by, for instance, HAL in the film 2001: A Space Odyssey, where the computer converses fluently with its operators.

No computer can currently experience emotion in the way HAL did; but why would we want to mimic a trait often seen as a hindrance to our own performance? Although there are drawbacks to integrating emotion into computers – what is the point of a computer that gets upset when it crashes? – ‘Emotion provides us with a motivation and drive, with a set of personal preferences, with a uniqueness that is desirable in a sophisticated AI’ (Reingold, 1999). There are two types of emotion: external and internal. External emotion in the AI field is where the computer appears to have emotion (for instance through synthesised speech), and this is definitely an advantage. At the moment, computer interfaces usually take quite a while to learn; even Microsoft’s Windows takes months for a person who has never used a computer to get the hang of. An interface that could understand the emotion of the user and display its own emotion – combined with language skills – would be very easy to use and would feel much more human. Internal emotion would mean that the machine actually had feelings that affected the way it operated and performed. It would give the machine the drive and motivation that humans have, and would also give it a personality of sorts. All of this would improve human interaction with computers: it would make the machine more efficient when interfacing with humans, and would help it form opinions in a similar way to a human. This would, in turn, make the computer seem more intelligent.

Is a brain just a computer made of meat? Perhaps if we reverse the question and ask whether a computer could one day be a brain made of electronic components, a system with all, or even just some, of the traits above might be considered intelligent. I believe it will be possible to pass the Turing test, but we must look at the issue from a psychological perspective rather than a computer-science one. Common definitions of intelligence include the ability to comprehend, to understand and profit from experience, to acquire, retrieve and use knowledge in a meaningful way, and to learn and reason in new situations. I would consider a friendly talking computer with problem-solving skills, conditioning by association, the ability to formulate an opinion of its own rather than its programmer’s, and a passion to solve the problem to be a very clever machine indeed. I do not think it will be long before we can not just emulate but integrate some of these traits into a computer, and perhaps call a computer intelligent. However, if intelligence is reserved for those with a brain, then in concluding this debate we must remember that the brain is only a computer made of meat!

Reference List

Morelli, R., & Miller Brown, W. (1992). Minds, Brains & Computers. Norwood, NJ: Ablex.

Chudler, E. (2001, March). A Computer in Your Head? Odyssey Magazine, 10, 6-7.

(Note: Digitised at

Annett, J. (1969). Feedback and Human Behaviour. Harmondsworth: Penguin.

Connor, S. (1993, November 7). The Brain Machines. The Independent on Sunday.

The Censorware Project. (2000, September 7). Computers and Language. Retrieved May 3, 2004, from

Reingold, E. (1999). Can Computers Possess Emotional Intelligence? Retrieved May 4, 2004, from the University of Toronto,
