> From: "Parker, Chris" <Chris.Parker@soton.ac.uk>
> Date: Sun, 4 Feb 1996 08:39:55 +0000
>
> Chinese Room Argument
>
> Searle suggests that computation is not the same as cognition (C=C is
> not true) because computation does not involve understanding but
> cognition does. He wishes to refute the claims of Strong Artificial
> Intelligence (AI) that if the performance of a computer running a
> program cannot be distinguished from that of a human being (the Turing
> Test, TT or T2), then that system has the same cognitive abilities as a
> human and that it effectively has a mind and can understand. This means
> that mind/brain = program/computer, which has two consequences: the
> mind becomes a program, which is implementation-independent (the
> computer/brain becomes relatively unimportant), and this in turn removes
> any mind-body problem, because it's all in the program.
Well, that's more or less it, though you have the causality described a
bit oddly. It is a fact about computation that it is
implementation-independent, so if it is TRUE that C=C, then
implementation-independence may be the solution to the mind/body
problem: No point in agonising over how mental states could be physical
states: It's merely because the physical system happens to be
implementing the right COMPUTATIONAL states. So if you want to
understand the mind, study its software, not its hardware.
> Searle argued that he could show that strong AI was wrong by imagining
> that he was a computer. In the fantasy he was locked in a room
> containing baskets of Chinese symbols (that he didn't understand) and
> sets of instructions on how to handle the symbols. People outside the
> room could feed him with more symbols and instructions for him to feed
> back certain other symbols depending on the symbols fed in. The people
> outside (programmers) would be capable of composing the instructions
> (program) in such a way that the input could be interpreted as
> questions and the feedback (output) from him (the computer) could
> always be interpreted as valid answers. All this without him
> understanding any Chinese. His point was that if he could run a program
> without understanding the symbols, no computer could be any different
> because no computer had anything that he didn't, or no computer could
> come to understand Chinese simply by running a program. Computer
> programs, by their nature of implementation independence, are formal
> syntactically defined operating instructions, which do not have
> intrinsic semantics, and according to the Church/Turing Thesis, they
> are all equivalent.
All of this is more or less right, except that just here you are
confusing the Church/Turing Thesis (which is about what computation IS)
with implementation-independence, which is one of the features of
computation.
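To make the purely formal character of the symbol manipulation concrete, here
is a toy sketch (Python; the "squiggle/squoggle" rule table is invented purely
for illustration, not a claim about any real program). The input/output
mapping is defined entirely over the shapes of the symbols; any reading of an
exchange as a sensible Chinese question and answer is supplied from outside.

    # Toy "Chinese Room": rules pair input symbol strings with output symbol
    # strings purely by their shapes. Whoever (or whatever) applies the rules
    # never needs to interpret a single symbol.
    RULE_BOOK = {
        ("squiggle", "squoggle"): ("blotch", "scrawl"),
        ("scrawl", "squiggle", "blotch"): ("squoggle",),
    }

    def room(input_symbols):
        """Match the incoming string of symbols and hand back the prescribed
        output string: pure shape-matching, no understanding anywhere."""
        return RULE_BOOK.get(tuple(input_symbols), ("squiggle",))  # default reply

    # Outsiders may read this exchange as a sensible question and answer in
    # Chinese; inside the room it is nothing but symbol shuffling.
    print(room(["squiggle", "squoggle"]))   # -> ('blotch', 'scrawl')

Anything that implements the same rule table -- Searle, a Turing machine, a
silicon computer -- produces the same outputs; that is the
implementation-independence of the purely syntactic level.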
> Searle and all computers could satisfy TT without
> understanding (or full cognitive ability), so TT is not a strong enough
> test to demonstrate C=C. Searle then imagined that the Chinese symbols
> were replaced by English ones. He could now provide the same correct
> answers via his own understanding. He had a mind with intrinsic
> semantic content in addition to syntax; no extrinsic
> interpretation of the symbols that he manipulated was necessary, because
> he understood them. He was now doing more than a computer, so he was
> different and C=C is not true.
Fine, but you need to have it clear in your mind what were the reasons
to have believed that C=C might be true in the first place. It wasn't
just an arbitrary hypothesis: There were arguments and evidence
supporting it. Searle's argument simply shows that, despite this, it was
incorrect (or, rather, incomplete).
> The Symbol Grounding Problem
>
> The Chinese Room Argument leaves us asking how could a formal symbol
> system ever acquire intrinsic semantic interpretations for its
> constituent symbols? If we wish to model cognition, how is intrinsic
> meaning acquired by the symbols we use? The problem is analogous to
> trying to learn Chinese from a Chinese-Chinese dictionary. A similar
> problem is solved by cryptologists deciphering ancient hieroglyphics,
> but their efforts are aided by their knowledge of their own language
> and its structure, the availability of syntactically and semantically
> structured chunks of ancient texts (which can be analysed
> statistically), and historic knowledge or guesswork about the
> civilisation involved. This helpful knowledge is called grounding. The
> symbols are anchored by semantic cables intrinsic to the symbol
> system.
Actually, it's not grounding (which is based on "honest toil": direct
contact between symbols and the categories they name); rather, it's
theft, or indirect grounding, based on their already-grounded first
language. The real grounding problem is about how THAT got grounded.
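The Chinese-Chinese dictionary point can be made concrete with a toy sketch
(Python; the miniature "dictionary" and its pinyin-style entries are invented
for illustration). Chasing definitions from symbol to symbol never bottoms out
in anything but more ungrounded symbols; it just goes round in a circle, which
is why an already-grounded first language (honest toil, or theft of someone
else's toil) is needed to break out of it.

    # Toy Chinese-Chinese dictionary: every entry is defined only in terms
    # of other entries, so following definitions never exits the symbol
    # system. (The entries are invented for illustration.)
    DICTIONARY = {
        "ma3": ["dong4wu4", "si4tiao2tui3"],   # "horse" -> "animal", "four legs"
        "dong4wu4": ["sheng1wu4"],             # "animal" -> "living thing"
        "sheng1wu4": ["dong4wu4"],             # ... and back round again
        "si4tiao2tui3": ["tui3"],
        "tui3": ["dong4wu4"],
    }

    def chase(symbol, seen=None):
        """Follow definitions depth-first; record the loop when a symbol
        we have already visited comes round again."""
        seen = list(seen or [])
        if symbol in seen:
            return seen + [symbol]             # back where we started
        chain = seen + [symbol]
        for definer in DICTIONARY.get(symbol, []):
            chain = chase(definer, chain)
        return chain

    print(chase("ma3"))   # a closed round of symbols, never an exit to the world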
> Simple connectivity between a symbol and a seen object in the world
> prompts the question of what is doing the seeing and connecting: a
> homunculus? In a system such as a robot with sensorimotor abilities,
> there would be a problem of discrimination; otherwise the ugly duckling
> theorem says that it would not be able to distinguish between any two
> objects, as there would always be differences.
Going too fast here. The Ugly Duckling Problem concerns categorisation,
not discrimination (or, rather, Miller's "absolute" rather than
"relative" discrimination). A Funes-robot's problem wouldn't be that it
couldn't tell things apart: It could tell EVERYTHING apart: Every
instant would be infinitely different from every other. What it couldn't
do would be to CATEGORISE, to abstract some things and see they were all
of the same kind, so their differences could all be ignored, and they
could all be given the same category name (symbol).
> The Harnad model suggests
> that, in the case of seeing a horse, the ability to discriminate
> involves superimposing internal iconic, non-symbolic, analog
> representations of the image projected on our retinas from real horses
> onto representations of horses in our memory.
Yes, but the issue here is not discrimination, which is a relative
judgment about PAIRS of inputs; rather it is categorisation or
identification, which is an absolute judgment about inputs in
isolation.
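One rough way to mark that relative/absolute distinction is a toy sketch
(Python; the feature vectors, distance measure, threshold and "prototypes" are
all invented for illustration, and a net or any other mechanism could do the
same job):

    # Discrimination is a RELATIVE judgment on a pair of inputs;
    # identification (categorisation) is an ABSOLUTE judgment on one input
    # in isolation. Everything numerical here is invented for illustration.

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def discriminate(a, b, threshold=0.1):
        """Same or different? Needs BOTH inputs at once."""
        return "different" if distance(a, b) > threshold else "same"

    PROTOTYPES = {                # stand-ins for learned invariant features
        "horse": (1.0, 0.0),
        "zebra": (1.0, 1.0),
    }

    def identify(x):
        """Which category name? Judged on ONE input in isolation."""
        return min(PROTOTYPES, key=lambda name: distance(x, PROTOTYPES[name]))

    print(discriminate((1.0, 0.1), (1.0, 0.9)))   # -> 'different'
    print(identify((1.0, 0.2)))                   # -> 'horse'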
In describing the symbol grounding problem, though, there is no need to
dwell on this model in particular: Any system that successfully grounds
symbols in the categories they refer to will do the trick.
> The next stage requires identification,
> and involves categorical perception. The proposed mechanism for this
> process is via learned and innate feature detectors which can identify
> invariant features of objects in the same sensory projections. These
> representations "ground" the symbol for horse, probably via a
> connectionist network. The whole process is driven by sensory processes
> (bottom-up). Symbol manipulation is dependent on these non-arbitrary
> representations as well as their arbitrary "shapes".
This got a bit garbled toward the end: In this grounding model, nets
might do the feature extraction. What has an arbitrary shape is the
category name. What has a nonarbitrary shape is the feature detector in
which the name is grounded.
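That division of labour can be sketched roughly (hypothetical Python; the toy
"sensory projections", the perceptron and its training data stand in for real
transducers and nets, and are not a claim about the model's actual machinery).
The learned weights are shaped by the sensory features themselves, so the
detector's "shape" is nonarbitrary, while the token wired to its output could
be anything at all.

    # A trivial perceptron stands in for the connectionist net that learns
    # the invariant features; the token attached to its output is arbitrary.
    # Projections, features and labels are invented for illustration.

    def train_detector(examples, labels, epochs=50, rate=0.1):
        """Perceptron learning: the weights come to reflect the invariant
        features that separate horse projections from the rest (the
        NONarbitrary part of the grounding)."""
        w, b = [0.0] * len(examples[0]), 0.0
        for _ in range(epochs):
            for x, y in zip(examples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - pred
                w = [wi + rate * err * xi for wi, xi in zip(w, x)]
                b += rate * err
        return w, b

    # Toy sensory projections: (has_mane, has_hooves, has_stripes)
    projections = [(1, 1, 0), (1, 1, 0), (0, 0, 1), (0, 1, 1)]
    is_horse    = [1,         1,         0,         0]
    w, b = train_detector(projections, is_horse)

    def name(projection):
        """The ARBITRARY part: 'horse', 'cheval' or 'SYM_42' wired to the
        detector's output would ground equally well."""
        detected = sum(wi * xi for wi, xi in zip(w, projection)) + b > 0
        return "horse" if detected else "not-horse"

    print(name((1, 1, 0)))   # -> 'horse'

Any token would do in place of "horse"; what does the grounding work is the
feature detector the token is connected to.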