From: Basto Jorge (jldcb199@ecs.soton.ac.uk)
Date: Mon May 28 2001 - 01:53:32 BST
>HARNAD:
>...
>Good question, and good point. I would say that we could give the paper
>a more benign interpretation, and say that Searle recognizes that all
>he has shown is that cognition can't be all computation, and not that
>cognition can't be computation at all.
>
>But if that is what he means, then why does he say that what
>follows from the Chinese Room Argument is that we should turn away from
>computation and study the brain instead?
>
>"Weak AI" (which just uses computation as a tool for modeling the mind,
>just as it can be used as a tool for modeling the brain or the solar
>system or a plane) still seems to allow for the possibility that some
>of whatever it is that passes the TT could be computational and some
>not. A hybrid computational/noncomputational system could be modelled
>with Weak AI. Only Strong AI says it's all got to be computation.
>
>So, according to your reading, if some of cognition could be
>computation, why does Searle say we need to turn INSTEAD to the brain?
>
>As to the sensorimotor capacities of robots, what Searle doesn't seem to
>consider is that they could be PART of cognition, rather than just
>input to a (noncognizing) computer. I agree with you that this is
>because his critique is negative, and he has not given the positive
>solution to the grounding problem enough thought. He's out to show it's
>not computers, and he thinks a robot is necessarily just a computer
>with I/O -- whereas it could be a hybrid computational/noncomputational
>system all the way through.
>Stevan Harnad
Basto:
Just a personal remark on this one, not really a comment.
I have been through Searle's text a couple of times and came up with
another (I think) interpretation:
The CRA task was devised (or should have been) to show the limits of
treating intelligence/consciousness as solely symbol manipulation.
Where Searle fails is exactly where he uses his CRA to jump from one
domain (the limits of computationalism) to another, distinct domain
(the possibility of understanding by other systems). What I mean is
that Searle's argument clearly works well to show that intelligence
(and understanding) cannot be all computation. But when he puts himself
inside the hybrid robot system and extrapolates from this that hybrid
robot systems therefore CANNOT have intelligence/understanding, he is
merely making (perhaps unwittingly) a dogmatic assertion. The fact is
that whatever he put inside his system would not exhibit
understanding: even if he put himself inside ME (and I have some
intelligence), he could "show" that I have no understanding, because
the CRA task is not suited to asserting WHICH systems have or do not
have intelligence: it merely shows that symbol manipulation does not
suffice to account for intelligence. When I read the article, I could
not understand how Searle convinced himself of his proof, since it is
noticeable that the CRA does not say ANYTHING at all about the nature
of the entities; it just says something about the faculties.
When he uses himself to perform symbol manipulation, we have an
intelligent entity performing a subset of its intelligent capacities
(symbol manipulation), but the intelligence of this entity (Searle) is
not put to the test. The TASK, however, shows that by performing symbol
manipulation ONLY, the entity does NOT exhibit any understanding, the
SAME AS the hybrid system.
So as far as I can see, Searle could have put a robot in his place and
we would have the same reasoning. But it would NOT thereby be shown
that this robot was NOT intelligent AT ALL. (By the same line of
reasoning: since Searle IS intelligent but did NOT show understanding
when just performing symbol manipulation, the robot could show NO
understanding when performing the task, yet what can we say about the
rest of its faculties? For all we know, the robot could be using JUST
a subset of its faculties.) I think Searle jumps to some wrong
conclusions throughout the article, even if he gets some right.
Can you comment on the above?
Jorge