Chinese Room Argument
Searle suggests that computation is not the same as cognition (C=C is
not true) because cognition involves understanding and computation does
not. He wishes to refute the claim of Strong Artificial Intelligence
(AI) that if the performance of a computer running a program cannot be
distinguished from that of a human being (the Turing Test, TT or T2),
then that system has the same cognitive abilities as a human: it
effectively has a mind and can understand. This means that mind/brain =
program/computer, which has two consequences: the mind becomes a
program that is implementation independent (the computer/brain becomes
relatively unimportant), and this in turn removes any mind-body
problem, because it's all in the program.
Searle argued that he could show that strong AI was wrong by imagining
that he was a computer. In the fantasy he was locked in a room
containing baskets of Chinese symbols (which he didn't understand) and
sets of instructions on how to handle the symbols. People outside the
room could feed in more symbols, along with instructions telling him
which other symbols to feed back depending on the symbols fed in. The
people outside (programmers) would be capable of composing the
instructions (program) in such a way that the input could be
interpreted as questions and the feedback (output) from him (the
computer) could always be interpreted as valid answers. All this
without him understanding any Chinese. His point was that if he could
run a program without understanding the symbols, no computer could be
any different, because no computer had anything that he didn't; no
computer could come to understand Chinese simply by running a program.
Computer programs, being implementation independent by nature, are
formal, syntactically defined operating instructions, which have no
intrinsic semantics, and according to the Church/Turing Thesis they are
all equivalent. Searle and all computers could satisfy TT without
understanding (or full cognitive ability), so TT is not a strong enough
test to demonstrate C=C. Searle then imagined that the Chinese symbols
were replaced by English ones. He could now provide the same correct
answers via his own understanding. He had a mind with intrinsic
semantic content in addition to syntax; no extrinsic interpretation of
the symbols that he manipulated was necessary, because he understood
them. He was now doing more than a computer, so he is different from
one and C=C is not true.
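To make the point about formal, syntactic rule-following concrete, here is a minimal sketch in Python (my own illustration, not part of Searle's argument): the rule book maps input symbols to output symbols purely by their shapes, and the tokens are invented placeholders with no meaning the program could consult.

# Minimal sketch of purely syntactic symbol manipulation: the rules map
# input "shapes" to output "shapes", so the rule-follower never needs to
# know what any symbol means. Tokens are hypothetical placeholders.
RULE_BOOK = {
    ("SYM_41", "SYM_07"): ("SYM_99",),      # e.g. a "question" and its "answer"
    ("SYM_12",): ("SYM_30", "SYM_30"),
}

def room(input_symbols):
    """Return whatever output the rule book dictates for this input.

    The lookup consults only the symbols' forms, never their meanings;
    that is the sense in which a program is formal and syntactic.
    """
    return RULE_BOOK.get(tuple(input_symbols), ("SYM_UNKNOWN",))

print(room(["SYM_41", "SYM_07"]))   # -> ('SYM_99',)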
The Symbol Grounding Problem
The Chinese Room Argument leaves us asking how a formal symbol system
could ever acquire intrinsic semantic interpretations for its
constituent symbols. If we wish to model cognition, how is intrinsic
meaning acquired by the symbols we use? The problem is analogous to
trying to learn Chinese from a Chinese-Chinese dictionary. A similar
problem is solved by cryptologists deciphering ancient hieroglyphics,
but their efforts are aided by their knowledge of their own language
and its structure, the availability of syntactically and semantically
structured chunks of ancient texts (which can be analysed
statistically), and historic knowledge or guesswork about the
civilisation involved. This helpful knowledge is called grounding: the
symbols are anchored by semantic cables intrinsic to the symbol system.
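The circularity of the dictionary analogy can be illustrated with a toy Python sketch (an invented example, not from the original text): every entry defines a symbol only in terms of other symbols, so however far we unpack the definitions, we never reach anything non-symbolic.

# Toy "Chinese-Chinese" dictionary: each symbol's definition is just more
# symbols, so chasing definitions never bottoms out in the world.
DICTIONARY = {
    "ma": ["animal", "four_legs"],
    "animal": ["living", "thing"],
    "four_legs": ["legs", "four"],
    "living": ["thing", "animal"],   # circular: back to "animal"
    "legs": ["thing", "four"],
    "four": ["thing"],
    "thing": ["thing"],
}

def unpack(symbol, depth=3):
    """Expand a symbol into its definition, recursively, to a given depth.

    However deep we go, we only ever reach more symbols; nothing here
    connects any of them to objects in the world.
    """
    if depth == 0:
        return symbol
    return {symbol: [unpack(s, depth - 1) for s in DICTIONARY.get(symbol, [])]}

print(unpack("ma"))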
Simple connectivity between a symbol and a seen object in the world
prompts the question of what is doing the seeing and connecting: a
homunculus? A system such as a robot with sensorimotor abilities would
face a problem of discrimination: the ugly duckling theorem implies
that, without some selective weighting of features, it could not treat
any two objects as more or less alike than any other two, since every
pair of objects shares some features and differs in others. The Harnad
model suggests
that, in the case of seeing a horse, the ability to discriminate
involves superimposing internal iconic, non-symbolic, analog
representations of the image projected on our retinas from real horses
onto representations of horses in our memory. The next stage requires
identification, and involves categorical perception. The proposed
mechanism for this
process is via learned and innate feature detectors which can identify
invariant features of objects in the same sensory projections. These
representations "ground" the symbol for horse probably via a
connectionist network The whole process is driven by sensory processes
(bottom-up). Symbol manipulation is dependent on these non-arbitrary
representations as well as their arbitrary "shapes". 1480
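The three stages can be caricatured in a short Python sketch (the feature names, values and thresholds are invented for illustration): an iconic, analog projection is compared with stored icons for discrimination; a crude feature detector picks out an invariant feature for categorical identification; and the arbitrary symbol token is grounded by being connected to that detector rather than to an outside interpreter.

from math import dist

# Iconic stage: stored analog "projections", here reduced to feature vectors
# (hypothetical features: size, leg count, stripedness).
STORED_ICONS = {
    "icon_horse": (0.9, 0.8, 0.1),
    "icon_zebra": (0.9, 0.8, 0.9),
}

def discriminate(projection, icon_a, icon_b):
    """Judge which stored icon the current sensory projection is closer to."""
    if dist(projection, STORED_ICONS[icon_a]) <= dist(projection, STORED_ICONS[icon_b]):
        return icon_a
    return icon_b

def identify(projection):
    """Categorical stage: a feature detector keyed to one invariant feature."""
    return "ZEBRA" if projection[2] > 0.5 else "HORSE"

# Symbolic stage: arbitrary symbol tokens grounded via the category detectors.
GROUNDED_SYMBOLS = {"HORSE": "horse", "ZEBRA": "zebra"}

projection = (0.85, 0.8, 0.05)                               # a new sensory projection
print(discriminate(projection, "icon_horse", "icon_zebra"))  # -> icon_horse
print(GROUNDED_SYMBOLS[identify(projection)])                # -> horse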