From: Godfrey Steve (sg698@ecs.soton.ac.uk)
Date: Wed Apr 25 2001 - 23:02:05 BST
Subject: Searle: Minds, brains, and programs
>SEARLE:
>strong AI has little to tell us about thinking, since it is not about machines
>but about programs, and no program by itself is sufficient for thinking.
Godfrey:
I think that this is quite a bold statement. If strong AI has little
to tell us about thinking, why are so many people involved in strong
AI? Even if strong AI has little to do with how the brain works, surely
we can learn something about thinking from it that could eventually be
used to find out how the brain works.
>SEARLE:
>According to strong AI, the computer is not merely a tool in the study of
>the mind; rather, the appropriately programmed computer really is a mind,
>in the sense that computers given the right programs can be literally said
>to understand and have other cognitive states.
Godfrey:
This would make the programs running on the computer not a tool to aid
in the discovery of what a mind is, but the actual explanation
themselves. I interpret this description of strong AI as meaning that
only certain programs meeting certain criteria are said to have minds,
as it states that the computer has to be running 'appropriate
programs'. Does this mean any program that runs, or only a small
subset of running programs, e.g. those that pass the T2 test?
Godfrey:
Searle describes a situation where a man enters a restaurant and is
served a badly burnt hamburger. He then describes how, from this
information, humans can answer questions about the situation that were
not included in the story, such as 'did the man eat the burger?'.
Schank's machine can answer similar questions.
>SEARLE:
>To do this, they have a "representation" of the sort of information that human
>beings have about restaurants, which enables them to answer such questions
>as those above, given these sorts of stories.
Godfrey:
I do not think that the computer necessarily has to understand the
story to be able to answer questions about it. The rules about how
humans would react to situations would have to be coded into the
computer. The computer does not understand what has happened, but
searches for the best fit in its list of rules to this situation and
given the response. If it were coded differently, it would have given
the wrong answer and so, appear not to understand. A Simple data input
error should not mean the difference between understanding or not. The
rules being coded into the system are similar to a child learning by
experience. The difference being that the child can interact with its
surroundings and make its own opinion about what happens in certain
situations. Whereas the computer has to rely completely on the
information supplied to it by the programmer as gospel, and cannot make
its own opinion. It has not been grounded either and so may not
actually know what a hamburger is and what it is for it to be burnt, it
just knows people do not like burnt food. The reason humans can extract
information about what happened from the text that was not explicitly
stated, is because we can think back to our own experiences about when
we have been to restaurants. We know that if we went to a restaurant
and received a burn burger, we would not eat it. Our answer to the
question may be wrong, as we do not know that this man may have liked
burnt food, and eaten it anyway.
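As a rough illustration of the 'best fit in its list of rules' picture
above, here is a minimal sketch. The rule table and the story encoding
are invented for illustration and are not Schank's actual script
machinery; they just show how a canned answer can come out of lookup
rather than understanding.

# Toy rule lookup: match features of an encoded story against
# hand-coded "restaurant" rules and emit a canned answer.
# Rule names and story encoding are hypothetical illustrations.

RULES = [
    (lambda facts: facts.get("food_state") == "burnt", "no"),
    (lambda facts: facts.get("food_state") == "as_ordered", "yes"),
]

def answer_did_he_eat(facts):
    """Return the first canned answer whose condition fits the encoded story."""
    for condition, answer in RULES:
        if condition(facts):
            return answer
    return "unknown"

story = {"location": "restaurant", "item": "hamburger", "food_state": "burnt"}
print(answer_did_he_eat(story))  # -> "no": produced by lookup, not by understanding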
>SEARLE:
>The machine can literally be said to understand the story and provide the
>answers to questions,
Godfrey:
It is not necessarily understanding the story, it is probably just using
rules
of English and its knowledge base to infer facts about the story, to
understand
it fully it would need to know what a restaurant and all the other objects
in the
story actually are, eg have some sort of grounding.
>SEARLE:
>that what the machine and its program do explains the human ability to
>understand the story and answer questions about it.
Godfrey:
I do not think that this ability of a computer explains anything about
how the human mind works. Just because two machines produce the same
outcome, does not mean they work in the same way e.g. the petrol and
electric engine, both are able to power a vehicle but work in
completely different ways. It may help us to understand ways in which
understanding could be achieved, which could lead onto us discovering
how humans understand.
Godfrey:
Searle describes a setup in which he is able actually to become the
system, which is purely formal symbol manipulation and could be
interpreted as being asked questions and supplying answers. Searle
shows that even though the system is answering questions in a way
indistinguishable from a fluent Chinese speaker, he knows no Chinese
and has no understanding of the topic he is answering questions
about. This is a very strong argument for cognition not being just
computation. The problem is that the symbols mean nothing to Searle, as
they have not been grounded.
>SEARLE:
>Suppose that I'm locked in a room and given a large batch of Chinese writing.
>Now suppose further that after this first batch of Chinese writing I am given
>a second batch of Chinese script together with a set of rules for correlating
>the second batch with the first batch. Now suppose also that I am given a third
>batch of Chinese symbols together with some instructions, again in English,
>that enable me to correlate elements of this third batch with the first two
>batches, and these rules instruct me how to give back certain Chinese symbols
>with certain sorts of shapes in response to certain sorts of shapes given me
>in the third batch.
Godfrey:
In this example Searle has actually become the computer by executing
the program himself. With this example Searle has come up with a
test for all implementation-independent symbol systems, as he himself
can actually become the system and then tell us whether or not he
understood the Chinese or was simply doing meaningless symbol
manipulation. From this example it would appear that he was simply
performing meaningless symbol manipulation, showing that it is possible
for a machine (possibly T2) to interact with people in a way
indistinguishable from other humans and still have no understanding of
what it is doing, just simply running its program. The flaw in this
test would be if Searle were not aware that he understood the Chinese,
i.e. some form of unconscious understanding. The argument relies on
all understanding being done consciously; otherwise Searle would be
none the wiser, by becoming the system, as to whether the system
understood. Unconscious understanding may be occurring when a person
sleepwalks: they understand how to walk, and are performing the action,
but are unaware that they are doing it. When sleepwalkers wake they are
usually unaware of what they have done.
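Returning to the symbol-manipulation point itself, here is a
deliberately trivial sketch of what the person in the room is doing: a
lookup from input symbol strings to output symbol strings. The table
entries are invented placeholders, not real Chinese, and a real rule
book would of course be vastly larger.

# The "rule book" reduced to a mapping from input symbol strings to
# output symbol strings. Whoever (or whatever) executes the lookup
# needs no idea what any of the symbols mean. Entries are placeholders.

RULE_BOOK = {
    "SQUIGGLE SQUIGGLE": "SQUOGGLE",
    "SQUIGGLE SQUOGGLE": "SQUIGGLE SQUIGGLE SQUOGGLE",
}

def reply(incoming: str) -> str:
    # Purely shape-based matching; meaning never enters the computation.
    return RULE_BOOK.get(incoming, "SQUOGGLE SQUOGGLE")

print(reply("SQUIGGLE SQUIGGLE"))  # a "correct" answer, with zero understanding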
>SEARLE:
>Well, then, what is it that I have in the case of the English sentences that I do
>not have in the case of the Chinese sentences? The obvious answer is that I
>know what the former mean, while I haven't the faintest idea what the latter mean.
Godfrey:
This is again the symbol-grounding problem; Searle has no idea what the
Chinese symbols actually refer to in the real world.
Godfrey:
Searle gives examples of how, as humans, we can assume that trivial
machines have some form of thought; I particularly like his explanation
of why we do this.
>SEARLE:
>The reason we make these attributions is quite interesting, and it has to do with
>the fact that in artefacts we extend our own intentionality; our tools are
>extensions of our purposes, and so we find it natural to make metaphorical
>attributions of intentionality to them.
Godfrey:
It probably makes it easier for us as humans to attribute intelligence
to machines, as it means that we do not have to know how they work,
just as we accept that we do not understand how intelligence works.
This is just ignorance, however, and in my view is no basis for
assigning the label 'intelligent' to any machine.
>SEARLE:
>The systems reply (Berkeley). "While it is true that the individual person who is
>locked in the room does not understand the story, the fact is that he is merely
>part of a whole system, and the system does understand the story.
Godfrey:
Apart from the individual person who is locked in the room, there is
some paper, a pencil and some data banks of Chinese symbols. This
reply seems to suggest that the paper, pencil and data banks
understand Chinese. I do not think it is likely that a piece of
wood and some paper can understand a complex language. The only other
way to test this reply is to have the individual memorise the whole
system and then tell us whether they understand Chinese.
>SEARLE:
>let the individual internalise all of these elements of the system. They memorize
>the rules in the ledger and the data banks of Chinese symbols, and perform all the
>calculations in their head. The individual then incorporates the entire system.
Godfrey:
It doesn't make any difference whether the individual memorises the
data or not. The data was there before they memorised it, so the only
thing the individual gains by memorising it is faster access to the
data. This will have no effect on how much Chinese is understood by
the individual; it will still be meaningless symbol manipulation.
>SEARLE:
>So there are really two subsystems in the man; one understands English, the other
>Chinese, and "it's just that the two systems have little to do with each other."
>But, I want to reply, not only do they have little to do with each other, they
>are not even remotely alike.
Godfrey:
The Chinese system inside the individual is an offline, non-interactive
system. The user has no control over what is output from the system,
as they are simply following rules blindly; there is no understanding
involved. With the English system, the user has the freedom to reply
in whatever way they wish, as they are not following strict rigid
rules: they understand the language and understand what it means.
>SEARLE:
>The example shows that there could be two "systems," both of which pass the
>Turing test, but only one of which understands
Godfrey:
I think that the Chinese system would be able to pass the Turing test
for a short period of time, but I am not sure that it would be able to
fool a person over a long period such as forty years. It is possible,
but unlikely.
>SEARLE:
>McCarthy, for example, writes, "Machines as simple as thermostats can be said to
>have beliefs, and having beliefs seems to be a characteristic of most machines
>capable of problem solving performance"
Godfrey:
I do not think that this is the case. Thermostats are built by us to
measure what the temperature is. This measurement is made by looking at
how far a piece of metal bends as it expands and contracts. It is a
physical property of metals that they expand when heated; they have no
control over this. In an older thermostat this is the whole story,
just a piece of metal bending. The only difference with a digital
thermostat is that it presents the user with an accurate reading of
how far the metal has bent. I do not think that a bending piece of
metal constitutes beliefs.
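For comparison, here is roughly all that a thermostat's control
decision amounts to when written out as code. The reading and the
setpoint values are made up for illustration; the point is only that
the whole "belief" is a single comparison.

# A thermostat's decision reduced to one comparison against a setpoint.
# Values are invented for illustration.

def heater_should_be_on(reading_celsius: float, setpoint_celsius: float) -> bool:
    """Switch the heater on whenever the reading falls below the setpoint."""
    return reading_celsius < setpoint_celsius

print(heater_should_be_on(17.5, 20.0))  # True: a comparison, not a belief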
>SEARLE:
>What we wanted to know is what distinguishes the mind from thermostats.
>SEARLE: (The robot reply)
>Suppose we put a computer inside a robot, and this computer would not just take
>in formal symbols as input and give out formal symbols as output, but rather would
>actually operate the robot in such a way that the robot does something very much
>like perceiving, walking, moving about, hammering nails, eating, drinking, anything
>you like. The robot would, for example have a television camera attached to it
>that enabled it to 'see,' it would have arms and legs that enabled it to 'act,'
>and all of this would be controlled by its computer 'brain.'
Godfrey:
This is an attempt to enable the system to interact with the outside
world, thus admitting, as pointed out by Searle, that there is more to
cognition than solely formal symbol manipulation. The system needs to
be able to relate the meaningless symbols it is manipulating to
objects in the real world if it is to stand a chance of understanding
anything at all. If a human were born with no senses at all, would
they understand anything? They would have no knowledge of the world,
no experience and no language with which to communicate their thoughts.
Adding robot peripherals to the computer to allow it to interact with
the world may still not be enough. Surely, if the system is still being
controlled by a computer, the signals from the peripherals will still
have to be translated into symbols that remain meaningless to the
machine. This machine would simply be a T2 machine with added
peripherals, not a T3 machine. There is still something missing,
which prevents this machine being properly grounded.
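A rough sketch of that worry, using invented encodings and thresholds:
the camera output is merely digitised into more symbols, which then feed
the same kind of formal rule as before, so nothing in the pipeline knows
what the symbols stand for.

# Bolting a camera onto the computer only adds a transducer stage:
# light becomes numbers, and the numbers are handled by another formal
# rule. The threshold and output tokens are invented for illustration.

def camera_to_symbols(pixel_brightnesses):
    # Digitisation: real-world light becomes a list of numbers (symbols).
    return [round(p) for p in pixel_brightnesses]

def symbol_program(symbols):
    # Another purely formal rule, now applied to sensor-derived symbols.
    return "SYMBOL_GRASP" if sum(symbols) > 100 else "SYMBOL_WAIT"

print(symbol_program(camera_to_symbols([40.2, 35.9, 30.1])))  # still shape-matching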
>SEARLE: The brain Simulator reply
>"Suppose we design a program that doesn't represent information that we have about
>the world, such as the information in Schank's scripts, but simulates the actual
>sequence of neuron firings at the synapses of the brain of a native Chinese speaker
>when he understands stories in Chinese and gives answers to them.
>Now surely in such a case we would have to say that the machine understood the
>stories; and if we refuse to say that, wouldn't we also have to deny that native
>Chinese speakers understood the stories?
Godfrey:
I do not think that this is a valid argument. Surely if we are
simulating something, we are not actually doing it: we are simulating a
thinking brain, not actually creating a thinking brain. If we
were to simulate an aeroplane in flight, it would not actually be
flying; internally it would still be meaningless symbol manipulation
inside a stationary computer. The same is true if we simulate a
brain: it would be formal symbol manipulation, following the rules
provided to the computer by the program written to simulate the brain.
It may be the case that as a result of simulation we are able to
analyse how thinking occurs, and maybe even discover how it is done,
but the system will still not actually be thinking.
>SEARLE:
>imagine that instead of a mono lingual man in a room shuffling symbols we have the
>man operate an elaborate set of water pipes with valves connecting them. When the
>man receives the Chinese symbols, he looks up in the program, written in English,
>which valves he has to turn on and off. Each water connection corresponds to a
>synapse in the Chinese brain, and the whole system is rigged up so that after doing
>all the right firings, that is after turning on all the right faucets, the Chinese
>answers pop out at the output end of the series of pipes.
Godfrey:
Again this is a simulation of the brain. It is not going to be able to
think: we have not tried to build a brain, but to simulate one, and as
a result we will get a simulation of thinking, not actual thinking.
Because we are modelling neurons with water pipes, it is not going to
be a particularly useful simulation of the brain either, as the only
thing they have in common is the layout in which the pipes and the
neurons are arranged.
>SEARLE:
>I see no reason in principle why we couldn't give a machine the capacity to
>understand English or Chinese, since in an important sense our bodies with our
>brains are precisely such machines.
Godfrey:
Searle is saying that because our brains are causal systems, they can
be classed as machines. A machine is therefore capable of understanding
English or Chinese, so there is no reason in principle why humans
should not be able to build a machine with these properties.
>SEARLE:
>But I do see very strong arguments for saying that we could not give such a thing to
>a machine where the operation of the machine is defined solely in terms of
>computational processes over formally defined elements.
Godfrey:
Searle here is suggesting that there needs to be more to a machine
than just formal symbol manipulation for it to be able to understand.
There needs to be that something extra that enables us to know what is
actually meant by the words in our language. Formal symbol
manipulation is not capable of this on its own, as it deals only in
meaningless symbols. Maybe something extra needs to be added to make
some form of hybrid system, in which the computation works together
with that something extra to create full understanding.
>SEARLE:
>"Could a machine think?"
>The answer is, obviously, yes. We are precisely such machines.
>"Yes, but could an artefact, a man-made machine think?"
Godfrey:
I think that we have to be careful here about what we class as a
man-made machine. If we are machines, then we can make ourselves by the
natural reproductive method, and this produces humans that we know are
capable of thought. We therefore have to exclude reproduction, cloning,
test-tube babies and any other system that grew naturally without our
full understanding. If we simply reproduce a system but do not know
how it works, we have not gained anything.
>SEARLE:
>"But could something think, understand, and so on solely in virtue of being a
>computer with the right sort of program? Could instantiating a program, the right
>program of course, by itself be a sufficient condition of understanding?" NO
Godfrey:
This is Searle's answer to the question being asked in this paper, and
it is because symbols are meaningless to the computer. I do not
think that a T2 machine is capable of understanding, as it has no way
of interacting with the real world and so no way to link meaningless
symbols with anything other than more meaningless symbols.
>SEARLE:
>No one supposes that computer simulations of a five-alarm fire will burn the
>neighbourhood down or that a computer simulation of a rainstorm will leave us all
>drenched. Why on earth would anyone suppose that a computer simulation of
>understanding actually understood anything?
Godfrey:
I like the point that Searle is making here: to simply simulate
understanding will result in exactly that, simulated understanding, not
actual understanding. To make a computer actually understand we need
to approach the problem from a different angle; rather than trying to
simulate understanding, we need to try to actually do it. This may not
be possible with a digital computer that works purely on formal symbol
manipulation, as we have already seen that this is not enough for
understanding; it needs something else as well. In my view, we need to
work on a new kind of hybrid machine that has the symbol manipulation
of the digital computer plus some way of grounding those otherwise
meaningless symbols.
>SEARLE:
>the sense in which people "process information" when they reflect, say, on problems
>in arithmetic or when they read and answer questions about stories, the programmed
>computer does not do "information processing." Rather, what it does is manipulate
>formal symbols
Godfrey:
Is the computer capable of information processing? I thought that for
something to be called information it has to have some sort of meaning,
otherwise it is simply data. If a computer is unable to understand
meaning, then any information supplied to it will simply be treated as
meaningless data.
>SEARLE:
>Since appropriately programmed computers can have input-output patterns similar to
>those of human beings, we are tempted to postulate mental states in the computer
>similar to human mental states.
Godfrey:
I think that because the average person in the street has no idea how
a computer works, they imagine that it works in the same way as
something that performs similar operations. As a computer seems
to have the same input-output patterns as a human, it is not an
unreasonable step to assume that they both work in a similar way,
since we do not know how the human brain works either. Obviously
this is not the case, but it is a way to explain the unknown workings
of a computer.
>SEARLE:
>The single most surprising discovery that I have made in discussing these issues is
>that many AI workers are quite shocked by my idea that actual human mental
>phenomena might be dependent on actual physical/chemical properties of actual
>human brains.
Godfrey:
I think that this is a very good point. We have discussed all the way
through this paper how symbol manipulation, which is what programs
are, is not capable of understanding, so there needs to be something
added. Maybe this something is the physical/chemical properties of the
brain. Maybe they are capable of something that programming is not,
and that something is what we are missing in getting computers to
actually understand.
>SEARLE:
>Unless you believe that the mind is separable from the brain both conceptually and
>empirically -- dualism in a strong form -- you cannot hope to reproduce the mental
>by writing and running programs since programs must be independent of brains.
Godfrey:
This is because computers are only capable of duplicating systems that
are implementation-independent; otherwise they will simply be
simulating the system, and, as we have already discussed, this is not
enough for real understanding.