> From: "Baden, Denise" <DB193@psy.soton.ac.uk>
> Date: Sun, 4 Feb 1996 18:34:50 GMT
>
> Pylyshyn believes that cognition is computation, i.e. that thought is
> propositional. He denies the explanatory power and causal role of
> images and claims that explanation can only occur when images are
> cashed into the language of thought, i.e. propositions. Pylyshyn
> believes the sensory system is part of the hardware: pre-cognitive.
> Only when processing becomes computational is it cognition. The
> computational level is the programme level; this is independent of
> the details of the hardware and could be run on very different
> systems, i.e. it is implementation-independent.
What are the REASONS Pylyshyn believes this? What is the evidence that
FAVOURS computationalism in the first place?
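Whatever the evidence, the implementation-independence claim itself can
be made concrete. A minimal sketch in Python (the parity example and all
the names in it are merely illustrative assumptions, nothing from
Pylyshyn): one and the same computation, realised by two quite different
mechanisms. The programme-level description is just the input/output
function; the "hardware" details differ and do not matter.

    # Illustrative sketch only: "parity" as an abstract computation,
    # realised by two structurally different mechanisms.

    def parity_by_table(bits):
        # Mechanism 1: a pure symbol-lookup machine.
        state = 'EVEN'
        table = {('EVEN', '0'): 'EVEN', ('EVEN', '1'): 'ODD',
                 ('ODD', '0'): 'ODD', ('ODD', '1'): 'EVEN'}
        for b in bits:
            state = table[(state, b)]
        return state

    def parity_by_arithmetic(bits):
        # Mechanism 2: a completely different internal recipe.
        return 'ODD' if sum(int(b) for b in bits) % 2 else 'EVEN'

    # Same computation, different implementations:
    assert parity_by_table('10110') == parity_by_arithmetic('10110')

At the computational level the two are the same programme-level object,
and by Pylyshyn's lights that level is where cognition would live.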
> Searle however criticises the strong AI point of view which says
> that the mind is a computer programme, that the brain is irrelevant
> and that the Turing test (i.e. a computer penpal that could pass as
> a human from its responses) is decisive. Turing's point is that if a
> machine passes the Turing test, finding out that it is a machine
> isn't grounds for denying it has a mind. Searle's Chinese room argument
> offers a loophole through the other-minds barrier. He agrees that when
> the computer acts as a Chinese penpal, one may not be able to tell
> it from a human being. However, Searle claims he could do the same
> without understanding, by memorising all the rules for manipulating
> symbols from a Chinese-Chinese dictionary. Searle claims that he
> could pass the Turing test simply by symbol manipulation, whilst
> having zero understanding. If that computer understands Chinese, it
> is not because it is running the right computer programme, because
> Searle is running the same programme and he doesn't understand
> Chinese. Therefore it is not implementation-independent.
Or rather (since computation IS implementation-independent), cognition
is not (just) computation.
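To fix intuitions about what "pure symbol manipulation" means here, a
toy Python caricature (the rulebook entries are invented for
illustration; the real T2-passing programme, if there is one, would be
vastly larger, but no less meaning-free): whatever executes it, silicon
or memorising Searle, just matches shapes against rules and emits
whatever shapes the rules dictate. At no step is any meaning consulted.

    # Toy caricature of a symbol-manipulation rulebook: input shapes
    # are mapped to output shapes. The entries are invented; only the
    # shape-in, shape-out character of the process is the point.
    RULEBOOK = {
        'squiggle squiggle': 'squoggle',
        'squiggle squoggle': 'squiggle squiggle squiggle',
    }

    def penpal_step(input_symbols):
        # Whoever runs this step (a CPU or Searle) does the same
        # thing: look up the shape, emit the shape the rule dictates.
        return RULEBOOK.get(input_symbols, 'squoggle squoggle')

    print(penpal_step('squiggle squiggle'))   # prints: squoggle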
> Searle claimed from this that cognition wasn't computation at all.
> He believed that our cognitive abilities are grounded completely and
> necessarily in the brain. However, his Chinese room argument has
> only proved that cognition cannot be just computation. Pure symbol
> manipulation cannot give rise to any intrinsic meaning. As with a book,
> the meaning emerges only when the symbols are interpreted by a reader.
> Cognition thus requires us to get from meaningless symbols to what
> the symbols are about.
>
> This can occur in several ways. The most obvious is that symbols
> must be grounded in a bottom-up fashion by their direct reference to
> objects in the real world. Features of the material world also
> require some sort of weighting and categorisation. If every feature in
> the environment were paid equal attention, then everything would be
> unique, so the very act of, for example, labelling something with two
> eyes, a nose and a mouth a `face' requires picking out the salient,
> invariant features. Once symbols such as words have been grounded,
> these can give rise to higher order symbols which do not have to be
> grounded in the same way. For example, if the words hair and chin
> are grounded, the word beard would have some intrinsic meaning to
> the system by reference to those two words. Based on these criteria,
> Searle's Chinese room argument would not apply to a robot that had a
> sensori-motor transduction ability, as the symbols would then make
> contact with what they were about.
Very good: I would only add that being "grounded" is not NECESSARILY
the same as having "intrinsic meaning to the system" (nor, for that
matter, does being ungrounded mean not having intrinsic meaning). A
grounded T3 robot might still have nobody home (it is only a hypothesis
that it will, but one that no one will ever be able to confirm or deny,
except the robot itself!) and we ourselves may not be grounded symbol
systems (because we may not be symbol systems of any kind).
Second, it is not merely "contact" between the symbols and the world
that constitutes grounding (even a stand-alone computer's symbols have
some contact with the world), it is "contact" at the T3 level -- that
is, categorisation and all other interactions between the robot and the
things in the world its internal symbols are interpretable as being
about must all be as good as ours, in fact, indistinguishable from ours.
That's a lot more specific and demanding than "contact"!
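As a toy illustration of the grounding story in the passage above (all
the feature names and thresholds here are invented; real grounding would
require T3-scale sensorimotor category learning, not hand-coded
thresholds): elementary symbols like "hair" and "chin" are tied to
detectors over "sensory" features, and a higher-order symbol like
"beard" is then defined purely by symbolic composition of the grounded
ones.

    # Illustrative only: the 'percept' is faked as a feature dict.
    # Elementary symbols are grounded in (toy) feature detectors;
    # 'beard' is a higher-order symbol composed from grounded ones.

    def detect_hair(percept):
        # Grounded: tied (here, trivially) to 'sensory' features.
        return percept.get('filament_density', 0.0) > 0.5

    def detect_chin(percept):
        return percept.get('lower_face_contour', 0.0) > 0.5

    def detect_beard(percept):
        # Higher-order: defined by composing grounded symbols,
        # not by a new detector of its own.
        return detect_hair(percept) and detect_chin(percept)

    percept = {'filament_density': 0.9, 'lower_face_contour': 0.8}
    print(detect_beard(percept))   # prints: True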
> The Chinese room argument draws a direct analogy between a
> computer operating purely by symbol manipulation and Searle giving
> answers in Chinese based purely on a Chinese-Chinese dictionary.
No, based on implementing the T2-passing programme.
> However, Searle still has a mind, and would therefore be making a
> natural effort to make sense of these symbols, which he has had to
> analyse to quite improbable lengths of complexity and depth. His
> understanding would therefore very likely be qualitatively different
> from that of a computer.
You have forgotten that what the computer is on trial for is whether it
has ANY understanding at all, merely in virtue of running the T2-passing
programme; this is not about the "quality" of its understanding, but the
EXISTENCE of its understanding. Searle is not on trial for that; we know
he understands English. And what he might eventually make of the squiggles
and squoggles after a lot of time manipulating them is simply
irrelevant. The very ACT of manipulating them will not amount to
understanding Chinese, and that was all we needed in order to convict
the computer of having no understanding merely because it too was doing
the very same symbol manipulations.
> Even if the symbols were grounded, the
> analogy could be challenged.
True, T3 grounding does not equal consciousness; it's just the best we
can ever hope to do, in modeling the mind...
> A robot with no inbuilt fears or
> desires would not see anything of relevance or meaning in the world
> of objects. A cup, for example, may be just as meaningless a symbol,
> albeit in 3D, as a word written in a dictionary.
So now all you need to tell us is what an "inbuilt fear or desire" might
be; we already know, because of Searle, that it can't be just the
implementation of the right computations...