The Chinese Room Argument demonstrates that cognition cannot be
exclusively computation (a purely symbolic system). This is the
case because the symbols are not grounded (hence, the "symbol grounding
problem"). Several attempts to solve the symbol grounding problem
will be discussed.
An important paradigm in cognitive psychology has been the
computational approach. Computation involves the manipulation of a
symbol system. According to proponents (e.g. Fodor, 1980; Pylyshyn,
1973, 1984), minds are symbol systems. A symbol system has a number of
important features: it involves a set of arbitrary physical tokens that
are manipulated on the basis of explicit rules; the manipulation is
based on the shape of the physical tokens (i.e. it is syntactic, not
based on meaning); and the tokens, strings of tokens and rules are all
semantically interpretable. There were a number of reasons why the
symbolic view of mind appeared to be persuasive. However, Searle (1980)
provided a simple thought experiment demonstrating that the mind cannot
be a pure symbol system.
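To make "pure syntax" concrete, here is a minimal sketch in Python (the
tokens and rewrite rules are invented for illustration; this is not any
particular author's system). The rules fire on the shape of the tokens
alone; any meaning is assigned only by an outside interpreter:

    # A minimal symbol system: arbitrary physical tokens rewritten by
    # explicit rules that match only token shape, never meaning.
    RULES = {
        ("ZHI", "MA"): ["HAO"],             # fires purely on shape
        ("HAO", "MA"): ["ZHI", "HAO"],
    }

    def rewrite(tokens):
        """Apply the first rule whose left-hand side matches a pair."""
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                return tokens[:i] + RULES[pair] + tokens[i + 2:]
        return tokens                       # no rule matched

    print(rewrite(["ZHI", "MA"]))           # ['HAO']
    print(rewrite(["HAO", "MA"]))           # ['ZHI', 'HAO']

We, outside the system, are free to interpret HAO as an answer to a
question, but the system itself has only matched shapes; this is
exactly the sense in which a symbol system is semantically
interpretable yet purely syntactic.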
Searle (1980) challenged the assumption that a symbol system that could
produce behaviour indistinguishable from ours must have a mind. If a
computer could pass the Turing Test (TT; Turing, 1964), then it was
thought by many that it would have a mind. A TT passer would be able to
produce behaviour like ours in a 'pen pal' situation (i.e. model
linguistic behaviour). Searle argued that even if a (symbol system)
computer could pass the TT in Chinese by merely manipulating symbols,
it would not, in fact, understand Chinese. This is because Searle (or
anybody else who does not understand Chinese) could take the place of
the computer and implement the symbol system without understanding
Chinese.
The Chinese Room Argument is an example of the Symbol Grounding Problem
(Harnad, 1990). Another example will help to demonstrate this problem.
If you had to learn Chinese as a second language and you only had a
Chinese-Chinese dictionary, you would pass endlessly from one
meaningless symbol (or symbol-string) to another, never reaching
anything that had any meaning. The symbols would not be grounded. A
second version of this Chinese-Chinese dictionary-go-round is similar,
except that it requires you to learn Chinese as a first language. This
time the symbols would not be grounded either, and this is analogous to
the difficulty faced by purely symbolic models of mind. That is: how is
symbol meaning to be grounded in anything other than more meaningless
symbols? This is the symbol grounding problem.
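The dictionary-go-round can be made vivid with a toy sketch in Python
(the entries are invented stand-ins for Chinese words; a real
dictionary would be vastly larger, but the circularity is the same):

    # A toy Chinese-Chinese dictionary: every word is defined only in
    # terms of other words in the same dictionary.
    dictionary = {
        "ma": ["dongwu", "da"],
        "dongwu": ["huo", "de"],
        "huo": ["dongwu", "de"],
        "de": ["de"],
        "da": ["huo", "ma"],
    }

    def look_up(word, steps=8):
        """Chase definitions: we only ever arrive at more symbols."""
        for _ in range(steps):
            print(word, "->", dictionary[word])
            word = dictionary[word][0]      # follow the first word

    look_up("ma")                           # cycles among symbol-strings

However many definitions are chased, the trail never bottoms out in
anything but further symbol-strings; that is what it means for the
symbols to be ungrounded.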
One approach that avoids the symbol grounding problem is
connectionism. According to connectionism (e.g. McClelland, Rumelhart
et al., 1986), cognition is not symbol manipulation but dynamic
patterns of activity in a multilayered network of nodes or units with
weighted positive or negative interconnections. However, connectionist
systems are at a disadvantage to symbolic models because many of our
behavioural capacities seem to be symbolic. Linguistic capacities in
particular, but also logical reasoning and other higher-order cognitive
capacities, appear to have the systematic, semantically interpretable
character of symbol systems.
Perhaps a solution to this problem would be reached by combining the
advantages of symbol models with the grounding capacity of
connectionist systems.
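As a concrete (if drastically simplified) picture of "dynamic patterns
of activity in a multilayered network", here is a sketch of a tiny
feedforward net in Python/NumPy; the random weights are stand-ins for
values a connectionist system would learn:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two layers of weighted, positive-or-negative interconnections:
    # 4 input units -> 3 hidden units -> 2 output units.
    W_hidden = rng.normal(size=(3, 4))
    W_output = rng.normal(size=(2, 3))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def respond(stimulus):
        """The 'cognition' here is the evolving activation pattern."""
        hidden = sigmoid(W_hidden @ stimulus)
        return sigmoid(W_output @ hidden)

    print(respond(np.array([1.0, 0.0, 0.5, 0.0])))

Note that nothing in the net is a symbol: the response is a graded
pattern of activation rather than a token manipulated by explicit
rules, which is why such systems sidestep the symbol grounding problem
yet struggle to account for systematic, symbolic capacities.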
An alternative version of the TT is immune to the Chinese Room
Argument. The TT requires only linguistic capacity that is
indistinguishable from ours, but if it also included our robotic
capacities (how we discriminate, manipulate and identify objects), then
a computer that could pass it would have grounded its symbols and hence
would understand (have a mind, etc.). This version of the test has been
called the Total Turing Test (TTT; Harnad, 1989). Harnad (1990)
suggested a possible model for passing it: a symbol system grounded in
meaning by a connectionist mechanism. His hybrid solution to the symbol
grounding problem was a system that involved iconic representations
(analogs of proximal sensory input) and categorical representations,
which are feature detectors that pick out the invariant features of
objects and events.
These could be combined to ground higher order symbolic representations.
Connectionism is a potential mechanism for learning the invariant
features of categorical representations. In this model, symbol
manipulation would be based not only on the arbitrary shape of the
symbols but also on the nonarbitrary shape of the icons and categorical
representations in which they are grounded.
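A minimal sketch of the hybrid idea follows (the names, 'sensor' values
and nearest-prototype rule are my own illustrative stand-ins; Harnad
proposes the architecture, not this code). An iconic representation is
an analog of the proximal input, a feature detector yields a
categorical representation, and the resulting category name serves as a
grounded elementary symbol:

    import numpy as np

    def icon(proximal_input):
        """Iconic representation: an analog copy of sensory input."""
        return np.asarray(proximal_input, dtype=float)

    def categorize(icon_vec, prototypes):
        """Categorical representation: pick out invariant features
        (here, crudely, nearness to a stored prototype)."""
        names = list(prototypes)
        dists = [np.linalg.norm(icon_vec - prototypes[n]) for n in names]
        return names[int(np.argmin(dists))]

    # Toy prototypes standing in for invariant features that a
    # connectionist mechanism would learn from experience.
    prototypes = {"horse": np.array([1.0, 0.0]),
                  "stripes": np.array([0.0, 1.0])}

    print(categorize(icon([0.9, 0.1]), prototypes))   # -> horse

Grounded elementary symbols of this kind could then be combined
symbolically (e.g. "zebra" as "horse" plus "stripes"), with the
compound inheriting its grounding from its constituents.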
While a model of this hybrid type that can actually pass the TTT has
not yet been built, the approach offers a possible route to passing it
in the future, and hence to solving the symbol grounding problem and
escaping the Chinese Room Argument. It combines the advantages of the symbolic
approach with the connectionist capacity to ground symbols in their
meaning.