Lund University
Philosophy Department
FTEA 12:5 Consciousness
Exam paper Spring Term 2012
Teacher: Jan Hartman
An Intentional Look at the
中文室
Chinese Room
by
Tania Norell
Lund, 29 May 2012
Introduction
-The Mind and Matter Problem
The Philosophy of Mind, which pertains to the study of mental properties and physical properties in relation to the world, has been around since antiquity, with philosophers like Socrates and Plato. The Mind and Matter Problem, also known as the Body and Soul Problem, deals with the issues of identifying and understanding the apparent difference between the mental mind or soul and the physical matter or body. Through history many resolutions to this problem have been presented. For example, Ontological Dualism claims that the mind and body are two distinct substances. Property Dualism claims that the mind is independent of the body, because it has its own properties, but it is not a distinct substance. Idealism claims that the physical can be reduced to the mental, and Materialist Monism claims that the mental can be reduced to the physical, whereas Neutral Monism claims that the mental and the physical are two attributes of one “unknown” substance. So, as we can see, the Mind and Matter Problem can be approached in a variety of ways.
-Purpose and Demarcation
The purpose of this paper is to present and analyze two articles on the topic of the thought experiment called the Chinese Room, which pertains to the Mind and Matter Problem within Monism. A common view among contemporary philosophers falls within the realm of Physicalist Monism, which includes Identity Theory, Behaviorism, Functionalism and Eliminative Materialism. In this paper I will not go into the different aspects of these variations of Monism. My intention is to focus on one monist theory, Biological Naturalism, which entails the view that mental properties and physical properties are not properties of two separate substances but rather two kinds of properties of the one biological brain.
-Material and Method
The materials I will use are two articles. The first is “Minds, Brains, and Programs” by John R. Searle, a philosophy professor at the University of California, Berkeley, which presents the Chinese Room. The second is “Fear and Loathing (and Other Intentional States) in Searle's Chinese Room” by Dale Jacquette, a philosophy professor at the University of Bern, which offers criticism of Searle's article. I have chosen to present each article followed by a separate analysis of each. I will then conclude with an intentional look at the issues concerning the Chinese Room.
Material Presentation
-Searle
In the article “Minds, Brains, and Programs”, published in Behavioral and Brain Sciences in 1980, John Searle presents his thought experiment called the Chinese Room. In it he offers two propositions (1 and 2) that form the foundation for justifying three conclusions (3 to 5):
1) Intentionality in human beings is a product of causal features of the brain.
2) Instantiating a computer program is never by itself a sufficient condition of intentionality.
3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program.
4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain.
5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. (Searle 1980, pp. 417-457)
Searle asks the question “Could a machine think?” and his answer is: yes, a machine can think, if it is a machine that has the causal powers of a biological brain. The reason he makes this point is that he is reacting to Artificial Intelligence (AI) researchers, who also claim that machines can think, but who according to Searle do not consider the criterion of intentionality before referring to the machine as a mind. Searle seems to want to highlight that AI is actually not dealing with mind, but rather with the functioning of programs, which according to Searle is physical “thinking”, an operational function of simulation, and cannot be considered mental thinking that contains understanding.
The point Searle puts forward with the Chinese Room example is that yes, a computer machine can think, behave, function, operate or compute, whatever term one chooses to use, just as a human brain machine can. This is done by using information for identifying basic symbols and following instructions for constructing complex symbols correctly, but this does not automatically imply that the machine understands the meaning of the process. In the Chinese Room experiment he places himself in a room with a slot for input and a slot for output. The input he receives is a set of Chinese symbols. In the room he has all the information needed for Chinese symbol identification and all the instructions for how to assemble Chinese symbol combinations correctly. He uses this program to compute an output which is accurate in relation to the input. He points out that the people outside the room will think the person in the room understands Chinese, but in fact he does not understand any Chinese at all. What he does understand is the language that the information and instructions are provided in, which is English. By this example he hopes to show that he is capable of processing an English program, but that this does not translate into an understanding of the Chinese input or output. The computer's thinking is then not equivalent to a mind that has intentionality, i.e. it does not have the causal power to understand anything beyond the program, but it does have the capability to function and behave as if it does. Since the computer's thinking is only a simulation of the thinking process, it cannot be claimed to have a mind that understands. Searle clearly states that “the computer understanding is not just partial or incomplete; it is zero” (Searle 1980, pp. 417-457).
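The procedure Searle describes can be made concrete with a short sketch. The following Python fragment is my own illustration, not Searle's: the two-entry rule book and all names in it are invented, and a program that actually passed for a Chinese speaker would need vastly more rules, but the principle is the same: symbols in, rule lookup, symbols out, with no representation of meaning anywhere in the process.

    # A minimal sketch of the Chinese Room as a pure syntax processor.
    # The rule book is a hypothetical stand-in for Searle's information
    # for symbol identification and instructions for assembling symbol
    # combinations: it maps input shapes to output shapes.
    RULE_BOOK = {
        "你好吗": "我很好",      # "How are you?" -> "I am fine"
        "你会说中文吗": "会",    # "Do you speak Chinese?" -> "Yes"
    }

    def chinese_room(input_symbols: str) -> str:
        # Pure symbol manipulation: the lookup succeeds or fails on
        # string identity alone; nothing here represents meaning.
        return RULE_BOOK.get(input_symbols, "")

    # To the people outside the room the reply looks like understanding,
    # but only a lookup over uninterpreted shapes has taken place.
    print(chinese_room("你好吗"))  # prints: 我很好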
In his article Searle also replies to six criticisms, each time referring back to his two propositions, which ultimately bring him to the same three conclusions. No new ideas are presented, but the question “Could a machine think?”, to which he answered yes at the beginning of the article, is reflected upon through further questions: “But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?” (Searle 1980, pp. 417-457). The question then becomes: Could a machine understand? And Searle's answer to that question is no. He concludes the article by explaining his answer, again using his two propositions, which ultimately highlights the gap between syntax and semantics. Searle explains:
Because the formal symbol manipulations themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output (Searle 1980, pp. 417-457).
Analysis
-Searle
Searle's Chinese Room is offered in reaction to AI, and when it comes to AI Searle makes a distinction between weak AI and strong AI. He agrees with weak AI, which holds that a computer machine's simulation of human cognitive capacities is a useful tool for the purpose of understanding the processes of a mind's capability for rational thinking. He does not agree with strong AI, which claims that this tool is equivalent to a mind because it is able to understand. Searle claims to exemplify through his thought experiment that it is incorrect to think that an accurately programmed machine is sufficient for providing intentionality, which is the ability to have cognitive states that exceed the programming; i.e. intentionality goes beyond mere information, it demands understanding.
Searle mentions that the definition of the term understand can be argued over, if you so wish, but the brute fact is that a machine without the brain's intentionality understands nothing. The reason we use the word understand in relation to mechanical machines is that they are an extension of our brain's intentional power, i.e. we understand the input and output, but the machine does not. The machine is only capable of computing the program, and if one wants to claim that the machine understands the program, this would be a faulty use of the word understand. With the Chinese Room argument Searle hopes to show that the program is not to the computer what the mind is to the brain. His reason is that even if his brain memorized the English program concerning the Chinese symbols, he would still not understand Chinese. Searle is quite clear on what he has to say in regard to strong AI:
If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental (Searle 1980, pp. 417-457).
Searle clearly delineates what this specifically mental trait is. He claims that the reason strong AI researchers think that computers have minds is because cognitive science delineates the mental as information processing, and a computer can be argued to do exactly that: process information. Searle points out that this correlation is incorrect, since a computer manipulates specific program information; it does not reflect on information about the world. The difference is that the computer has the capability of syntax processing, which is an operational function that provides results, whereas the mind has the ability of semantic processing, which is a reflective function that results in understanding. Searle suggests that if strong AI did not correlate operational behavior with intentionality then there would not be a problem, because they would realize their mistake. He goes on to accuse strong AI of being dualist, since to him the claim that mechanical machines can have minds separates the mind from its dependence on the brain. Searle views himself as a monist: yes, a machine can be claimed to be capable of thinking, but it is a mind only if it is a machine with the biological causal power to produce intentionality in relation to the world.
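Searle's claim that programs have “only a syntax but no semantics” can also be illustrated in code. The sketch below is my own illustration, not drawn from either article; the one-rule program and the renaming are invented. It shows that a pure syntax processor behaves identically under any one-to-one renaming of its symbols, so no step of the program depends on what the symbols stand for; whatever interpretation the symbols carry lives with the programmer and the user, just as Searle says.

    # A syntax processor is indifferent to the meanings of its symbols:
    # renaming every symbol one-to-one leaves its behavior unchanged.
    def syntax_processor(rules: dict, symbol: str) -> str:
        # Pure symbol manipulation: match a shape, emit a shape.
        return rules.get(symbol, "")

    def reencode(rules: dict, code: dict) -> dict:
        # Apply a one-to-one renaming of symbols to the whole rule book.
        return {code[k]: code[v] for k, v in rules.items()}

    rules = {"A": "B"}            # hypothetical one-rule program
    code = {"A": "X", "B": "Y"}   # arbitrary renaming of the symbols

    # The processor "works" equally well either way; the interpretation
    # of A, B, X and Y exists only in the minds of programmer and user.
    assert syntax_processor(rules, "A") == "B"
    assert syntax_processor(reencode(rules, code), "X") == "Y"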
Material Presentation
-Jacquette
In the article “Fear and Loathing (and Other Intentional States) in Searle's Chinese Room”, published in Philosophical Psychology in 1990, Dale Jacquette argues that Searle's Chinese Room experiment poses no threat to Artificial Intelligence.
The fact that the Chinese Room by hypothesis satisfactorily passes the Turing Test of machine intelligence, but the homuncular agent by stipulation does not understand Chinese, is taken by Searle as demonstrating that no pure syntax processor is capable of achieving true intelligence or understanding, or of producing genuine intrinsic intentionality necessary for psychological states (Jacquette 1990, pp. 287-305).
Jacquette points out that Searle's conclusions, which claim to refute AI, also refute other psychological theories such as functionalism, computationalism and cognitivism. He then offers three criticisms against Searle's confident refutative conclusions:
1) The Chinese Room is irrelevant in the refutation of the Turing Test of machine intelligence and any of the functionalist-computationalist-cognitivist family of philosophical-psychological theories.
2) The concept of the right causal powers required to sustain the product of genuine intrinsic intentionality is unintelligible except as the right microlevel input-output functionalities, a model supposedly invalidated as inadequate by the Chinese Room counterexample.
3) The causal-biological naturalization of intentionality in terms of phenomena caused by and realized in neural microstructures which Searle attempts to advance is either reducible to input-output functionalities, again supposedly invalidated by the Chinese Room, or else fails to naturalize the concept by lack of analogy with distinctively nonintentional nomically irreducible phenomena caused by and realized in a nonneural material microstructure (Jacquette 1990, pp. 287-305).
The problem that Jacquette highlights is that there is a conflict between Searle's naturalization of intentionality as a causal-biological phenomenon and the thesis that it is causally-physically-mechanically irreducible. Searle's defense against all criticism is that the Chinese Room example clearly shows that computers are syntactically capable but not semantically able, and that this is the inherent difference between programs and minds. Jacquette clarifies that this is not the issue; the criticism pertains rather to the sufficiency of the Chinese Room as justification for these conclusions.
Searle's argument states:
Programs are syntactical.
Minds have semantics.
Syntax by itself is neither sufficient for nor constitutive of semantics. Therefore,
Programs by themselves are not minds (Jacquette 1990, pp. 287-305).
Jacquette's refined argument states:
Programs are syntactical and at most only derivatively semantical.
Minds are intrinsically semantical.
Syntax and derivative semantics by themselves are neither sufficient for nor constitutive of intrinsic semantics. Therefore,
Programs by themselves are not minds (Jacquette 1990, pp. 287-305).
Jacquette argues that pure syntax is an oxymoron and that a program therefore cannot be purely syntactical. Semantics is not something a mind has; rather, minds are intrinsically semantical. The issue that Jacquette conveys through his criticism is this:
The problem is that when the modified Chinese Room gives up the non-Chinese-speaking symbol-swapping homuncular prisoner as a single locus of program execution and control, and is redesigned as a micro-functionally isomorphic simulation of natural intelligence, it is no longer evident that the system lacks intrinsic intentionality (Jacquette 1990, pp. 287-305).
Analysis
-Jacquette
Jacquette's criticism in his article “Fear and Loathing” picks up where Searle leaves off in his article “Minds, Brains, and Programs”. That is, he takes up Searle's perceived gap between syntax and semantics. What Jacquette seems to want to bring to the table is the possibility that there might be semantics involved in the syntax, depending on whether it is framed at the macro- or micro-level. The only evidence Searle has that the Chinese Room “computer” does not understand Chinese is that he, as the sole agent in the room, does not understand Chinese. He does, on the other hand, understand English, and that is why he is capable of following the program, because it is written in English. But one could ask whether the firing neurons in his brain “machine” understand English, or whether they are doing the understanding syntactically, so to speak. How then does one distinguish between syntax and semantics? Depending on the frame, a macro-level homuncular intelligent agent or micro-level isomorphic intelligence, the distinction between syntax and semantics is not as clear-cut as Searle would like it to be. “The problem of macro- versus micro-level program design suggests that Searle has no prior justification for the claim that such a program could not as a matter of fact duplicate the causal powers of the brain minimally sufficient to produce intentionality” (Jacquette 1990, pp. 287-305).
According to Jacquette, the only way Searle can hold on to the syntax-semantics gap is by keeping the Chinese Room example in a macro-level homunculus frame. So Jacquette does not disagree with Searle's macro-level conclusion that programs do not have minds. Rather, he highlights that programs do not have minds because, on the refined version of Searle's propositions, they are only derivatively semantical and therefore not sufficient for intrinsic semantics, which would be necessary for programs to count as minds.
Searle does not appreciate this widening of the frame: in response, he maintains that Jacquette has “some very deep misunderstandings” (Jacquette 1990, pp. 287-305), not only of his arguments but of the nature of intentionality. The main issue between Jacquette and Searle is thus the ontology of intentionality, because even if Jacquette, to a degree, agrees with what Searle's Chinese Room shows regarding the distinction between syntax and semantics, he does not agree that it shows that intentionality is an exclusively biological causal power dependent on the brain. Jacquette writes that “even if only biological systems were to exhibit intrinsic intentionality, a claim which cannot be adduced as obvious without begging the question against mechanist philosophy of the mind, that still would not qualify intrinsic intentionality as biological” (Jacquette 1990, pp. 287-305). Searle's approach is thus that it is obvious, and Jacquette's approach is that this just won't do. The fact that intentionality is exhibited by biological organisms does not automatically mean that the essence of intentionality is biological.
Searle's response to this is to accuse Jacquette of being a dualist, since he interprets Jacquette's argument as implying that if intentionality is not necessarily dependent on the biological, then intentionality must be abstract and separate from biological phenomena. Jacquette concludes by simply not taking Searle's fear and loathing on board, remarking that Searle neither contributes new ideas nor restates previous positions in a way that reinforces the Chinese Room adequately enough to meet the criticism.