> From: "Whitehouse Chantal" <CW495@psy.soton.ac.uk>
> Date: Wed, 13 Mar 1996 13:21:00 GMT
>
> At the beginning of the course it seemed we were not only trying to
> define what a "mind" was but also who had one. Is it just humans or
> do animals have them, and what about machines?
I'm SO glad you asked these questions, which are spot-on!
We actually didn't (and couldn't!) "define" the mind, but we CAN point
to it: It's experiences, feelings, that stuff we each know at first hand
in our own private case.
Unfortunately, although the mind is easy enough to recognise without
any definition in our own private case, when it comes to other humans,
animals, and machines, we're up against the other-minds problem: How
can you know that anything but yourself feels anything at all?
This is not a course in philosophy, so insofar as other humans are
concerned, we will make the rather safe bet that when they act as if it
hurts them if you pinch them, they're really feeling something. Same
for animals that are a lot like us, like monkeys and dogs. Does it not
hurt a fish or a lizard or a snail if you pinch it? I think it does, but
this course won't help us there, so we probably have to leave that
question unanswered.
Machines? Well, if you put it that way, you side-step a question that
was raised earlier but never answered: What is (and isn't) a machine?
Let's leave that for a moment. How about specific machines: Do
typewriters have minds? Do cars?
We'd probably all guess no, and we'd probably be right. Can we be wrong?
Of course. It's possible that typewriters have minds and even that
people don't (except me), but it's about equally unlikely in both cases.
Why is it unlikely? Well, at least in part because typewriters and cars
don't ACT the way people and animals with minds act: They can't DO the
kinds of things people and animals can do.
So what about computers? There the question gets more difficult, because
computers CAN do some of the things that, until now, only people and
animals could do. But there's still the other-minds problem: How can we
be sure, or even get an inkling, of whether or not computers have minds?
> Now that we've started to think about the way in which we think
> (questioning "do we use symbols or images?") it seems that all of a
> sudden it's definite that computers don't have a mind. The fact that
> a computer can manipulate symbols was given as proof that a
> "homunculus" would not be needed if we "thought" in symbols.
> Why can we say that the computer definitely has no homunculus?
> Doesn't the problem of saying "what inside the computer is
> looking at the symbols and interpreting them?" arise in the same way
> as the question of what looks at the symbols or images in our own
> heads?
Yes it does. And I'm glad you brought it up before I did! We DON'T have
any good reasons for believing a computer couldn't have a mind. The
only reasons on offer are ones I've come to call "Granny Objections,"
because they are the kinds of reasons Granny would give for why a
computer couldn't possibly have a mind -- and why we could not possibly
be computers. When the Granny objections are looked at closely, however,
they all turn out to be arbitrary: basically the equivalent of an
unexamined prejudice rather than evidence and informed reasoning.
Now Granny has an excuse for voicing Granny objections, because she was
born a long time ago and is not at University, enrolled in a course on
Explaining the Mind! But we have to do better than that.
I'm going to post a list of 11 Granny objections in the next message.
Everyone is invited to comment on them, for or against. But thanks for
opening the door: You have not yourself raised a Granny Objection. On
the contrary, you have asked whether we are right to agree,
unquestioningly, with Granny in the first place! You are right that we
are not right to do so, yet I'm sure most of us do agree with Granny.
(So do I, for that matter!) So let's see whether her objections can
stand the light of critical scrutiny...
> I'm also a little unsure about the difference between symbols and
> pictures. I understand how a picture can resemble an object, unlike a
> word, but not why a homunculus is not needed for symbols if it is for
> images. Aren't pictures just symbols themselves just on a more
> complex scale? And so wouldn't you need a homunculus to look at the
> symbols before interpreting them? The only evidence saying you don't
> need a homunculus is coming back to the "well computers can do it so
> obviously you don't need one" thing again. It just doesn't seem to
> explain enough.
You're basically right about most of this. Let me try to sort it out a
bit:
(1) The difference between symbols and pictures:
If you see how a picture resembles an object but a word (symbol) doesn't,
then why would you say pictures were symbols on "a more complex scale"?
One (the picture) resembles what it represents; the other (the symbol)
does not. Now it is true that since the symbol is arbitrary, and hence
can be anything, there is no reason it could not happen to be a picture.
I could, for example, use the word "meeow" to stand for a cat, and meeow
does resemble what a cat sounds like. (Emma asked a question in class
about onomatopoeia in language that was on this same point.)
But if the symbol "meeow" is to be used in the place of "cat," all the
same symbol-manipulation rules must apply to it. For example, instead of
saying "the cat is on the mat," I will have to say "the meeow is on the
mat," and so on, for every possible English sentence about cats. Fair
enough, so far. We're using a nonarbitrary, analog image as an arbitrary
symbol, but the rest of the symbols of English are still arbitrary.
Could we do the same for all the words of English? "mat," "on," "is"?
If every symbol had to resemble what it represented, you couldn't SAY
anything, you would have to ACT IT ALL OUT: mime it. Could I do that with
the message I'm writing right now? Could I do it for things that are
far away, or absent, or imagined, or abstract? What would be the
"picture" for "goodness" or "truth"?
What should be evident is that although a symbol may HAPPEN to resemble
what it represents, you can't really USE that property of the symbol if
you're going to express propositions symbolically. A proposition is
either true or false. A picture (or a mimed act) just IS. A picture or
a bit of mimicry may be a good or a bad likeness to this or that, but it
cannot be true or false. A picture is not a statement. (A picture of a
pipe is not a pipe, it merely resembles a pipe. And a statement about a
pipe is neither a pipe nor a picture. The statement, however, unlike the
picture, can be true or false.)
It is essential for the use of symbols in making propositions or
in doing calculations or logical deductions that we ignore or abandon
or abstract away from whatever resemblance or causal connection they
may have to what they represent, because the rules for manipulating
them (whether in maths or in logic or in computer programming or in
language) are not based on any resemblance or causal connection between
their shapes and what they represent. The rules are mechanical, rather
like grammatical rules; they operate on the shape of the symbol (as in
the quadratic equation formula), but any other shape would have done
just as well. In fact, if there were a RESTRICTION to only certain
shapes and not others (you may call 4 "4" and only "4," not "IV," or
"2-squared," or "the number of letters in the word four"), then all
the power of symbols and symbol manipulation would be lost under this
arbitrary restriction. Symbol shape has to be arbitrary; otherwise it
arbitrarily restricts the expressive power of symbol systems.
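Here is a small sketch of that point (my own, in Python; the rule and
the token names are invented purely for the illustration). The rule
below consults nothing but the SHAPES of the tokens, so a systematic
renaming -- "cat" to "meeow" everywhere -- leaves it working exactly as
before:

    def is_grammatical(tokens, nouns):
        # A purely mechanical rule: accept "the NOUN is on the NOUN".
        # It compares token shapes; it knows nothing about cats or mats.
        return (len(tokens) == 6
                and tokens[0] == "the"
                and tokens[1] in nouns
                and tokens[2:5] == ["is", "on", "the"]
                and tokens[5] in nouns)

    print(is_grammatical("the cat is on the mat".split(), {"cat", "mat"}))      # True

    # Rename the arbitrary shape "cat" to "meeow" everywhere it occurs:
    print(is_grammatical("the meeow is on the mat".split(), {"meeow", "mat"}))  # True

The rule never needs the symbol to resemble anything; all it cares
about is that the same shape recurs where the rule says it must.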
(2) Pictures, symbols and the homunculus.
Do you need a homunculus to interpret symbols in the head, just as you
would need a homunculus to see pictures in the head? You rightly point
out that the only answer I have given to this is that computers can do
things using symbol manipulation alone, and since they don't have minds,
it can be done without a mind. But how do we know a computer doesn't
have a mind?
Let's separate the two questions. This much we do know about computers:
Once you have described the computation they are doing, you know
EVERYTHING they are doing, and how; you know everything that is going on
inside them. You have a complete explanation. So if a computer does
have a mind, you've explained that too. There would be no need for a
homunculus, because we would know exactly what was PRODUCING the mind:
It would simply be the symbol manipulations going on inside there. We
know what is CAUSING the symbol manipulations: The design of the
computer and the programme (symbol manipulation rules) that it is
executing. We know there is no little man needed to perform the
symbol manipulation. Getting the symbols manipulated in a particular
way is just part of the way the machinery is set up.
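To make that concrete, here is another sketch of my own (in Python,
purely illustrative): a complete mechanical description of a trivial
symbol manipulator, a machine that adds 1 to a binary numeral. The rule
table plus the step rule below IS everything that is going on inside
it; there is no further interpreter left to look for:

    # (state, symbol read) -> (state, symbol written, head move)
    TABLE = {
        ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, carry leftward
        ("carry", "0"): ("done",  "1",  0),  # 0 + carry = 1, stop
        ("carry", "_"): ("done",  "1",  0),  # ran off the left edge
    }

    def run(tape):
        # Execute the table, rightmost bit first, until the "done" state.
        cells = list(tape)
        state, head = "carry", len(cells) - 1
        while state != "done":
            symbol = cells[head] if head >= 0 else "_"
            state, write, move = TABLE[(state, symbol)]
            if head >= 0:
                cells[head] = write
            else:
                cells.insert(0, write)
            head += move
        return "".join(cells)

    print(run("1011"))  # -> "1100": 11 + 1 = 12, in binary

Every step is fixed by the table and the update rule; nothing inside
the machine "looks at" the symbols in any further sense.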
But if one of the effects of having that computer mechanically perform
those symbol manipulations is that it also somehow produces a MIND in the
computer, so be it: The computer has a mind, and how its mind works is
completely explained by mechanical symbol manipulation.
You see, the homunculus is only needed when a mind needs to decide what
to do; for until you have explained how that mind does what it does, you
have simply substituted one mystery for another: If I ask you how you
remembered someone's name, and you tell me it was by picturing them in
your head, and then identifying the picture, I still have to ask how you
identified the picture and found the name. But if I know that the
outcome is produced by a mindless, mechanical process that gives a full
causal explanation of how some task was accomplished, then I don't have
a homunculus problem. All "how" questions have been answered.
Now if it so happens that the mindless, mechanical functioning of the
system somehow PRODUCES a mind, so much the better! Then the system not
only explains how a mind could DO what it (or I) can do; it actually
produces a mind in so doing. I am not relying on that mind to explain
how the system does what it can do; I explain that separately, and
completely. The mind is just a bonus.
The same is true of an image-manipulating system as of a symbol-
manipulating system: Images' shapes are nonarbitrary, whereas symbols'
shapes are arbitrary, relative to what each represents. But apart from
that, mechanical image manipulation is just as free of the homunculus
problem as mechanical symbol manipulation, since neither requires a
mind to do what it does. If a mind turns out to be "generated" by a
system doing either image manipulation or symbol manipulation (or
both), so much the better. But that would not lead to a homunculus
problem, because what the system could DO would be completely
explained.
Next message: Granny Objections to the Computer's Having a Mind