> From: "Bollons Nicholas" <NSB195@psy.soton.ac.uk>
> Date: Fri, 8 Mar 1996 10:48:19 GMT
>
> According to Propositional Theories the mind has no need for mental
> imagery or pictures but instead uses symbolism and logic to process
> sensory input. This removes the problem of the Homunculus 'the little
> man in our head' which has been talked about, and in Artificial
> Intelligence research seems to have been proved to some degree.
The evidence (not "proof" -- you only have proof in maths) from the
accomplishments of Artificial Intelligence is that computers are
ABLE to do certain things that previously only people with minds were
able to do.
We have no idea how people do it, but we know exactly how computers do
it: They do it by computation, which is rule-based symbol manipulation
(as in the recipe for finding the roots of quadratic equations that I
described last time: -b +/-... etc.).
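To make "rule-based symbol manipulation" concrete, here is a minimal
sketch (in Python, purely by way of illustration) of that recipe being
applied mechanically, using the familiar formula
x = (-b +/- sqrt(b^2 - 4ac)) / 2a. Nothing in it "understands" what a
quadratic equation is; it just pushes the symbols around according to
the rule:

    import math

    # The recipe spelled out as a mechanical rule: the machine simply
    # manipulates the symbols a, b, c according to the formula, with no
    # idea of what they stand for.
    def quadratic_roots(a, b, c):
        root = math.sqrt(b * b - 4 * a * c)
        return (-b + root) / (2 * a), (-b - root) / (2 * a)

    print(quadratic_roots(1, -5, 6))   # (3.0, 2.0): roots of x^2 - 5x + 6 = 0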
We know HOW computers do what they do. We don't know how people do what
they do. If we have no idea how people DO do it, then we have no idea
how they DON'T do it either. So as long as computation is the ONLY
available explanation for how anyone or anything could do something, we
have no basis whatsoever for saying it is the WRONG explanation -- no
basis for saying that, no matter how it turns out that people DO do it,
it's not going to be the way computers do it.
Besides having the virtue of explaining exactly HOW what people can do
can be done at all, computers also have the advantage of not requiring a
homunculus, because whatever they do, they do MINDLESSLY, mechanically,
hence with no need to explain anything further. (Otherwise we'd have to
start worrying about how to explain the mind of the little man inside
the computer!) Computers allow us to "discharge the homunculus."
It is for these two reasons -- that (1) computation looked like the
only way to do the kinds of things that minds can do, yet (2)
computation can do it mindlessly -- that computation was taken to be an
EXPLANATION of the mind.
> Symbolism uses sort of 'on and off' states as in the basics of
> computers who really are a big collection of switches that can be
> turned either on or off depending on what you tell them (input).
The binary (0/1) code, physically wired as on and off switches in a
computer, is simply one way of CODING the symbols that are being
manipulated. They could have been coded in any way, because the SHAPE
of the symbols does not matter. It is arbitrary. Computation is symbol
manipulation, and it doesn't matter whether you call an apple "apple,"
"apfel," "pomme" or "01001", or whether you call the number four "four"
or "4" or "IV" or "quatre" or "apple" or "01001," because the same
manipulation rules would apply no matter what you called it.
The shapes of symbols are arbitrary. We pick them by a convention that
we all agree to use: Let "apple" stand for apple and "4" stand for
four. The on/off code of computers just happens to be convenient to use
because on and off are two states. Everything (including every sentence
in English) can be translated into and out of a binary code.
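Here, purely as an illustration (a small sketch in Python; the helper
names are just illustrative), is a sentence being translated into a
binary code and back out again, with nothing lost along the way:

    # Translate an English sentence into binary and back. The choice of
    # 0s and 1s is pure convention: any two distinguishable states would
    # have served just as well.
    def to_binary(text):
        return ' '.join(format(ord(ch), '08b') for ch in text)

    def from_binary(bits):
        return ''.join(chr(int(b, 2)) for b in bits.split())

    sentence = "the apple is on the table"
    coded = to_binary(sentence)
    print(coded[:26] + " ...")                # 01110100 01101000 01100101 ...
    print(from_binary(coded) == sentence)     # True: nothing lost in translation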
The symbol manipulation (turning the switches on and off) is not JUST
based on input, though. It is also based on the internal state of the
machine. The rule for solving quadratic equations could already be
coded in the computer, so the computer simply applies it mechanically
to its input symbols (as you would, when solving a quadratic equation
mechanically).
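A toy sketch of what "based on the internal state as well as the input"
means (only an illustration, not a claim about how brains or real
programs are organised): a turnstile responds differently to the very
same input symbol depending on the state it is in.

    # The same input symbol has different effects depending on the
    # machine's current state: LOCKED or UNLOCKED.
    def step(state, symbol):
        if state == "LOCKED" and symbol == "coin":
            return "UNLOCKED"
        if state == "UNLOCKED" and symbol == "push":
            return "LOCKED"
        return state    # otherwise the input has no effect

    state = "LOCKED"
    for symbol in ["push", "coin", "push"]:
        state = step(state, symbol)
        print(symbol, "->", state)   # push -> LOCKED, coin -> UNLOCKED, push -> LOCKED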
> If we associate this with symbolic learning in humans, are we just a
> walking collection of switches that can be turned on or off
> depending on our input ? Think of the first transistor computers that
> had to change their physical state to be either on or off. Do we have
> these physical states in our head ?.
If what is going on inside our heads is computation, it need not be
based on the 0/1 binary code, as digital computers are. Even in digital
computers there are higher-level programming languages that don't code
things in binary symbols (0/1) but in symbols that are closer to
English. Those theorists (like Pylyshyn and Fodor) who argue that
cognition is a form of computation -- that the explanation of the mind
can only be a computational one -- think that there is a "language of
thought," a mental code that is not exactly the same as the language we
speak, but very close to it, and that is the brain's code.
It can all be translated into and out of binary code though, and in
digital computers it is. But in other kinds of hardware, like the
brain, it need not necessarily be translated into a binary code. And
even in computers, the explanation of what they are doing is usually
given at the level of the computer programming language, which, as I
said, is closer to English, rather than at the level of the machine
code, which is all in terms of on/off, 0/1.
The digital computer's hardware happens to be binary, on/off; but it
could have been more like the higher-level language too. It's just more
convenient and general to have it binary.
> Symbolism removes the Homunculus but turns us into walking transistors.
Not walking transistors but brains whose functional "code" is the
"language of thought." The way our brains work, according to
computationalists (symbolists, if you like) is by doing mindless symbol
manipulations that mechanically produce our actions, those countless
things we are capable of DOING (recognising faces, remembering names,
doing math or logic, finding new ideas). It doesn't matter what we are
made out of (transistors or neurons or something else). What matters is
that the right symbol-manipulations should be going on.
> Steven used the example of turning the F grid into symbolic on off
> states in our head ( 00010010 e.t.c) is this really how we process all
> sensory input ?
Well, that's the question we are facing right now: We KNOW the task can
be done by computation alone. Are there other ways? It turns out that
there ARE other ways, at least for certain kinds of tasks, such as
image-matching ("is the X on the F?" "Is this a rotated R or its
mirror-image?"). Another possibility that works is ANALOG processing,
in which the manipulation is of internal objects that are not symbols
but of internal objects that share some of the shape properties of the
outside objects on which the task is operating.
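Here is a minimal sketch of such an analog matching process (a Python
illustration only, not Kosslyn's actual simulation): the internal
"object" is a set of coordinates that shares the shape of the outside
object, rather than an arbitrary symbol for it, and matching is done by
rotating that internal copy until it does or does not line up.

    import math

    # The internal object is an analog of the outside shape: a set of
    # points with the same geometry, not an arbitrary symbol for it.
    def rotate(points, angle):
        c, s = math.cos(angle), math.sin(angle)
        return {(round(x * c - y * s, 6), round(x * s + y * c, 6))
                for x, y in points}

    # Turn the internal copy a step at a time and see whether it ever
    # lines up with the target.
    def matches_some_rotation(candidate, target, steps=360):
        return any(rotate(candidate, 2 * math.pi * i / steps) == target
                   for i in range(steps))

    shape    = {(0, 0), (0, 1), (1, 1)}           # a stand-in for "R"
    rotated  = rotate(shape, math.pi / 2)         # same shape, turned 90 degrees
    mirrored = {(-x, y) for x, y in shape}        # its mirror image

    print(matches_some_rotation(shape, rotated))    # True: a rotation lines them up
    print(matches_some_rotation(shape, mirrored))   # False: no rotation gives the mirror image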
The important thing is that analog processing, like symbol
processing, likewise requires no homunculus: No one little man needs
to "look at" or "judge" the shapes that are rotated and matched in
shape-matching. The process itself delivers the result, with no need
of a mind. This mindless capacity is the STRENGTH of a theory that
wishes to explain the mind without relying on yet another unexplained
mind!
So it looks as if there can be INTERNAL imagery after all, and that
this can explain some of the things our minds are able to do. The
possibility of internal imagery is supported by Kosslyn's computer
simulations (in which analog processing is IMITATED by a digital
computer -- but it could have been done without simulation too, by building
an internal rotation machine that actually rotates shapes that are the
same as the shapes of outside objects).
But besides this evidence that imagery is POSSIBLE, there is also
evidence from brain anatomy and brain imaging that there are parts of
the brain that are analog: Some of these parts are analog
copies of parts of the retina (they are called "retinotopic maps"
because they are point-for-point copies of the "topography" of the
retina). So whenever an object casts a shadow on the retina, it casts a
shadow on these higher retinotopic areas too, and the shadows have the
same shape.
(Symbols, in contrast, are and must be arbitrary in shape: symbols must
not resemble the things they represent. Hence pictures are not symbols
in the technical, computational sense we are discussing here.)
There is also evidence (see the Posner and Kosslyn papers) that it is
these retinotopic areas that are active when people say they are using
imagery. They are also the areas that are injured when people say they
have lost their imagery.
So it looks as if the case for internal imagery as being at least PART
of how the mind works has been made. But what about the "mental" part?
We have had to pay a price for being able to show that images can really
do the job. We've had to discharge the homunculus. That means images can
do it ON THEIR OWN: that the internal shapes and processes involved in
analog image manipulation are sufficient to explain the kinds of things
we feel WE do by "using" mental images.
This leaves us with "images" that are just as mindless as computation:
Indeed, there is an even closer parallel, for when we think in words
rather than in images, we are aware of the images of those words in our
mind's "ear," just as we are aware of the images of objects in our mind's
eye when we think in images. But it appears as if what we are AWARE of in
our minds, what we can introspect, is again not playing any real causal
role in what our minds are doing.
It's not too hard to accept that whatever so quickly gives me the "5"
when I am asked for the sum of 2 + 3 is some kind of unconscious
computation that hands me the sum on a platter. It's perhaps only a
tiny bit less easy to accept that -- when you show me four words, then
wait a bit, then show me a fifth word and ask whether or not it was one
of the first four -- whatever so quickly gives me the answer "yes" or
"no" here is again some kind of computation, perhaps even one involving
a serial search, just as Sternberg says it does, but so fast that I am
not aware of it. With a bit of a stretch, my very rapid shape-matching
capacity is perhaps also explicable by an unconscious internal
analog rotation process.
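For concreteness, here is a toy sketch of that kind of serial scan (an
illustration only, not Sternberg's actual model): the probe word is
compared mechanically with each memorised item in turn, and the "yes"
or "no" pops out with no one needing to "look at" the list.

    # Scan the memorised list one item at a time, comparing each item
    # with the probe; the number of comparisons grows with list length.
    def probe_memory(memory_list, probe):
        found, comparisons = False, 0
        for item in memory_list:
            comparisons += 1
            if item == probe:
                found = True
        return ("yes" if found else "no"), comparisons

    memorised = ["tree", "lamp", "river", "chair"]
    print(probe_memory(memorised, "river"))   # ('yes', 4): the whole list is scanned
    print(probe_memory(memorised, "apple"))   # ('no', 4)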
But where does this leave ME in all this? I thought I was doing all
this work. I thought I was the one adding the numbers, searching the
word-list, rotating the images. If you had to discharge the homunculus
to explain how my mind worked in these cases where the answer comes too
fast for me to mind not taking credit for it, what will happen in the
slower cases where I would quite like to continue to believe that I am
in charge?
We will return to this when we return to the issue of consciousness
toward the end of the course. For now, you will have to settle for the
fact that discharging the homunculus, whether for words or images,
always requires a causal mechanism that can do the work without your
help. No one is denying that it is all happening in your mind, and that
you are really there while it's all going on, seeing the images and
hearing the words. But what is not obvious is what causal role is left
for you, once the reverse-engineering explanation of how your mind
does what it does is complete.