> From: "Baden, Denise" <DB193@psy.soton.ac.uk>
> Date: Sun, 4 Feb 1996 14:02:47 GMT
>
> Introspective intuition can easily lead to the idea that many of our
> cognitive tasks are achieved via mental imagery. However, Pylyshyn
> asked the question: who is looking at these images and what are
> they doing with them? He made the point that this does not give any
> explanatory role to images at all; it just pushes the explanation
> deeper into the system. He claims therefore that to explain anything
> by use of mental imagery is to be homuncular and leads to an
> infinite regress.
All true, but your kid brother doesn't know what a homunculus is or what
it has to do with this.
> His alternative is that our minds can be seen as a software
> programme, and we manage to do the things we do, not by mental
> imagery, but by following a set of propositional rules. These he
> believes are built into the system, and are sufficient to account for
> our information processing abilities. However, if propositional rules
> are built into the system, there must be someone inside to interpret
> them, since, for example, saying "if a then b" doesn't make any sense
> by itself.
Why not? We know programmes can be run on computer hardware and they
can do what they do. Why can't they be run on brain hardware and do the
same thing? The right point to make here is not that you need a
homunculus to interpret the programme code, but that perhaps "images"
too can deliver results without needing anyone to look at them, as you
suggest below:
> Also, is it any less credible that mental images could be hardwired
> into our brains in the same way? These points challenge Pylyshyn's
> claim that his computational approach is significantly less
> homuncular.
Hardwiring isn't exactly the point either, though. Even if what is
hardwired is just some general learning rules, and the rest of the
"programme" gets assembled on the basis of learning, it's still all
computational; ditto for images (in the sense of internal analogues).
With both images and words we have the same subjective experience: We
see the images, and we understand the words. Moreover, what we DO, the
task we actually perform in the external world -- say, mental
arithmetic, or mental rotation -- we FEEL we do by consulting our images
or our word-understanding. In reality, however, this is no explanation,
but rather, it is what CALLS for explanation. So, to discharge the
mysterious homunculus that images the mental picture or interprets the
mental word, what is needed is a causal mechanism that can do the same
thing, for then it is also a (potential) explanation of how our heads do
it.
So any autonomous mechanism that can deliver the capacity is nonhomuncular,
but it is not at all clear why one that manipulates internal symbols
(computation) is any better than one that manipulates internal analogues
of object shapes.
> There are many occasions when mental imagery is a more useful tool in
> problem solving than propositional debates.
More useful than propositional processing, surely, not debates!
> Roger and Shephard, for
> example, show that translating 2D images into 3D images is more
> difficult to do using propositions, than it is by visualisation.
There's just one Roger Shepard, and what he shows is that if the task
is to say whether two 2D projections are of the same 3D object (in
different rotated orientations) or of two different 3D objects, how
long it takes is closely correlated with how greatly the object has been
rotated, suggesting that the subject is matching them by doing a mental
rotation. It is also true that if one were designing a device to do this
(and only this), then the most sensible way to do it would be to give it
the capacity to rotate internal 3D analogs generated from the 2D
projections, rather than to do it numerically, from a 2D or 3D
coordinate system or array, or worse, propositionally, from a set of
descriptions and operations on descriptions. It COULD be done the latter
way too, but not efficiently or economically.
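To make the analogue route concrete, here is a toy sketch (the object's coordinates, the 5-degree step size, and the step count standing in for reaction time are all invented for illustration): an internal 3D point cloud is rotated in small increments until its 2D projection matches the target, so the number of steps grows linearly with the angular disparity, just as Shepard's reaction times do.

```python
import math

def rotate_z(points, theta):
    """Rotate a 3D point cloud about the z-axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in points]

def project(points):
    """Orthographic 2D projection: just drop the depth coordinate."""
    return [(x, y) for (x, y, _) in points]

def matches(a, b, tol=1e-6):
    """Compare two 2D projections point by point, within tolerance."""
    return all(abs(xa - xb) < tol and abs(ya - yb) < tol
               for (xa, ya), (xb, yb) in zip(a, b))

def steps_to_match(shape, target_2d, step=math.radians(5), max_steps=72):
    """Rotate the internal analogue in 5-degree increments until its
    projection matches the target; the step count plays the role of
    reaction time."""
    current = list(shape)
    for n in range(max_steps + 1):
        if matches(project(current), target_2d):
            return n
        current = rotate_z(current, step)
    return None

# An asymmetric toy "object" (coordinates are arbitrary).
shape = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.5), (-1.0, -1.0, 1.0)]

# The more the target has been rotated, the more steps matching takes.
for angle_deg in (30, 60, 120):
    target = project(rotate_z(shape, math.radians(angle_deg)))
    print(angle_deg, steps_to_match(shape, target))
# prints: 30 6 / 60 12 / 120 24
```

The same task could of course be done "propositionally" from coordinate descriptions, as the paragraph above notes; the point of the sketch is only that the analogue mechanism predicts the linear time-vs-angle relationship for free.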
> Similarly, Jeannerod gives many examples of how the neural substrates
> of motor imagery can be elucidated, and how these are coupled
> directly to motor preparation and motor action.
This sentence about Jeannerod has no content at all: try to picture your
kid brother's face when he is given this sentence to reflect on...
> Kosslyn, in particular, takes on Pylyshyn's challenge and undertakes a
> series of experiments which aim to show that mental imagery is
> involved in cognitive tasks. He shows, for example, that people take
> the same time to mentally scan an imagined map as they would a
> real map. Unfortunately, it is very difficult to be sure that Kosslyn
> is really measuring scanning times, as opposed to other factors such
> as subject expectations etc. Pylyshyn counters many of Kosslyn's
> conclusions by saying that the subjects could be reaching their
> conclusions computationally, and that differences in scanning times
> could just as easily be accounted for by the length of time required
> to compute the answer as by the time taken to scan the internal
> image.
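This underdetermination can be made vivid with a minimal sketch (the map, the landmark names, and the timing constants are all invented for illustration): an "imagery" model that traverses an internal map step by step and a Pylyshyn-style "tacit knowledge" model that never scans anything, but simply computes the distance and emits a proportional time, produce the same linear scan times, so Kosslyn's chronometric data alone cannot decide between them.

```python
import math

# Hypothetical landmarks on an imagined map (positions are made up).
landmarks = {"hut": (0.0, 0.0), "well": (3.0, 4.0), "rock": (6.0, 8.0)}

def scan_time_imagery(a, b, step=0.5, t_per_step=0.1):
    """'Imagery' model: traverse the internal map in small increments;
    time is the number of steps taken times a fixed step duration."""
    (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
    dist = math.hypot(x2 - x1, y2 - y1)
    return math.ceil(dist / step) * t_per_step

def scan_time_tacit(a, b, speed=5.0):
    """'Tacit knowledge' model: no scanning at all -- just compute the
    distance and output a time proportional to it."""
    (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
    return math.hypot(x2 - x1, y2 - y1) / speed

# Both models predict times that grow linearly with map distance.
for pair in [("hut", "well"), ("hut", "rock")]:
    print(pair, scan_time_imagery(*pair), scan_time_tacit(*pair))
```

With these (arbitrary) constants the two models happen to give identical numbers; the substantive point is only that both predict the same linear distance-time relationship, which is exactly Pylyshyn's objection.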
So, what conclusion is one to draw from all this? (By the way, you
really need to give more detail, explanation and examples to make sure
your kid brother stays interested and departs informed! Often your
response is much too sketchy, like saying only the punchlines of jokes
you assume your reader has already heard and is only testing you to see
how many of them you've heard too...)
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:57 GMT