PY104 Lecture Notes for Chapter 6 of Green et al. 1996
Chapter 3 suggested that the mind need not be understood as a whole.
There are "modules" that can each be explained independently.
The first module was vision. Chapter 4 gave a little sample of what
needs to be explained in order to understand our visual perceptual
capacities. The second module was speech (Chapter 5). Chapter 6
now tries to put these two together: vision + speech make
it possible to READ.
Note, though, that although modules may be explainable independently,
they also interact. Reading depends on both speech and vision. There
are parts of the brain that are specialised for vision (most of the back
of the brain), and other parts that correspond to speech (on the side
and front), but although an area of the frontal lobe has been called a
"reading centre" (Exner's area), it is unlikely that the brain is
actually organised for reading in the way that it is for speech and
vision. Reading came too late in our evolutionary history to have an area
of the brain specifically dedicated to reading.
In Chapter 6 there will be time only to describe the reading of
words. Bigger units such as sentences will be covered in Chapter 7.
Most of the explanation of the mechanisms of reading will consist of
connections between other functions, and one of the questions that
current theories of reading are working on is whether there is one
"route" or two from seeing letters on paper to turning them into
spoken sounds:
The LEXICAL route:
One possible route from seen words to spoken words is:
(1) from seeing the letters on paper [vision]
(2) to identifying the letters ("graphemes")
(3) to seeing them as written words ("input lexicon": grapheme strings)
(4) to understanding their meanings ("semantics")
(5) to turning them into spoken words ("output lexicon": phoneme strings)
(6) to speaking them [speech]
There is evidence, though, that there is another route from seeing
letters to speaking words:
The NONLEXICAL route:
(a) from seeing the letters on paper [vision]
(b) to identifying the letters ("graphemes")
(c) to transforming the graphemes to phonemes [TRANSFORMATION]
(d) to speaking them [speech]
The lexical route is based on the meanings of the words (the "lexicon" is
just our word vocabulary). The nonlexical route bypasses meaning
and transforms graphemes directly into phonemes.
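To make the two routes concrete, here is a minimal sketch in Python.
(The lexicon entries, approximate pronunciations, and grapheme-to-phoneme
rules are invented for illustration; they are not the chapter's.)

    # Toy model of the two routes from print to speech.
    # Lexicon entries and grapheme-phoneme rules are illustrative only.
    LEXICON = {
        # spelling -> (meaning, pronunciation)
        "mope": ("be listless", "/məʊp/"),
        "move": ("change position", "/muːv/"),
    }
    GPC_RULES = {"m": "m", "o": "əʊ", "v": "v", "p": "p", "e": ""}

    def lexical_route(word):
        """Whole-word lookup: spelling -> meaning -> pronunciation."""
        meaning, phonemes = LEXICON[word]
        return phonemes

    def nonlexical_route(word):
        """Letter-by-letter grapheme-to-phoneme conversion; bypasses meaning."""
        return "/" + "".join(GPC_RULES.get(letter, "?") for letter in word) + "/"

    print(lexical_route("mope"), nonlexical_route("mope"))  # /məʊp/ /məʊp/
    print(lexical_route("move"), nonlexical_route("move"))  # /muːv/ /məʊv/

Notice that the two routes agree on a regular word like "mope" but
disagree on an irregular one like "move" -- a point that will matter
below.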
Why is this difficult? And why does the author of the chapter, Professor
John Morton, say that reading is harder than speaking?
Well, one reason is that part of reading IS speaking, so reading draws
on both the visual module and the speech module. But besides that,
reading has a special problem, and it has to do with invariance:
Object constancy (seeing an object as the same object despite
transformations in its position) is based on invariants -- invariants
that our mind can "recover" from the (distal) object's (proximal)
"shadows" on our sense organs. Speech is also perceived on the basis of
invariants that our mind can recover despite transformations in
context, speed, voice quality, accent, etc.
Now for a transformation to have an invariance from which something can
be "recovered" (reconstructed), it has to be a one-to-one
transformation ("isomorphism"). It is the point to point
correspondences that allow the transformation to be "inverted" to
recover the original from its analog image.
An example would be multiplication and division: If you multiply
a number by another number, you can get the original number back
by dividing by the same number you multiplied it with. Multiplication
and division are inverses of one another. But if a transformation
is many-to-one, then there is no way to recover the original.
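A two-line illustration of the contrast, in Python:

    # Multiplying by a (nonzero) constant is one-to-one, so it has an inverse:
    def f(x):
        return x * 7          # the transformation

    def f_inverse(y):
        return y / 7          # recovers the original exactly

    print(f_inverse(f(12)))   # 12.0 -- the original is recovered

    # Squaring is many-to-one: 3 and -3 both map to 9, so from the
    # output alone there is no way to tell which original produced it.
    print(3 ** 2, (-3) ** 2)  # 9 9 -- two originals, one image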
The speech we hear can be inverted and transformed into the speech we
speak. There is a one-to-one correspondence between our input and our
output. (They are analogs of one another.) But what about the
relationship between written letters ("graphemes") and speech?
They are not invertible in either direction: transforming
graphemes to phonemes can be one-to-many (e.g. think of the
multitude of ways that we pronounce "t" in this sentence!);
and transforming speech to letters is also one-to-many (as in
the "sh" sound of ship, passion, patient, etc.).
So unlike (1) the route between distal objects and their retinal
projections, and unlike (2) the route between spoken and heard speech,
in which invariants allow inversion in both directions, (3) the route
from graphemes to speech and from speech to graphemes is more
complicated and so inverting it is more complicated too.
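The same point written as data (Python again; the spellings for the "sh"
sound come from the examples above, and "ough" is a standard illustration,
not the chapter's):

    # One-to-many in BOTH directions, so neither mapping is invertible.
    # One grapheme string, several pronunciations:
    GRAPHEME_TO_PHONEMES = {
        "ough": ["ʌf (tough)", "əʊ (though)", "uː (through)", "ɒf (cough)"],
    }
    # One phoneme (the "sh" sound), several spellings:
    PHONEME_TO_GRAPHEMES = {
        "ʃ": ["sh (ship)", "ss (passion)", "ti (patient)"],
    }
    # A dictionary whose values are *lists of alternatives* cannot be
    # inverted to a unique answer, unlike f/f_inverse above.
    for mapping in (GRAPHEME_TO_PHONEMES, PHONEME_TO_GRAPHEMES):
        for key, options in mapping.items():
            print(key, "->", options)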
Priming
Priming is the modern-day descendant of "subliminal perception."
(You have read about people requiring longer presentations to
recognise rude words, because your "unconscious mind" sees and
suppresses them from your conscious mind [this turns out to be a
misinterpretation of the evidence]; and you have also
read about advertisements appearing on the TV too quickly for our
conscious minds to see them, but our unconscious minds see them and
influence us to buy the products [this too turns out to be untrue].
Yet as you will see, unconscious "priming" really is possible.)
When you see a stimulus and soon afterward you see it again, you will
recognise it when you see it the second time more quickly and more
accurately than you recognise a stimulus that you have not seen very
recently. The first sighting of a stimulus is said to "prime" the next
sighting soon afterward. The surprising thing is that this priming
happens even if you are not conscious of the first stimulus:
A stimulus can be "erased" before we become conscious of it if it is
followed immediately by a "masking" stimulus. If you show someone an
apple very briefly, and immediately follow it with a scrambled shape
about the same size as an apple, they will say they did not see the
apple. (And they really won't see it: If you offer them money to tell
you what they saw before the mask, they won't be able to tell you.)
Yet, if they are shown a lot of objects afterward, including an apple,
they will recognise the apple faster and name it more accurately than
the objects that were not primed.
So priming works whether or not you are conscious of the priming
stimulus.
Masked priming can be done with printed words too. When a word has
recently been primed, even though you were unconscious of it because of
a mask, you recognise it more accurately and quickly than words that
have not been primed. Now the question is: if you are shown a
masked prime of a printed word, e.g., "apple," what is actually being
primed? Is it (1) the visual shape (which happens to be a
word, but might just as well have been squiggles)? Or is it (2) the visual
shape recognised as a string of graphemes (letters)? or is it (3) the
visual shape recognised as a printed word of English? or is it (4)
the meaning of the word? or is it (5) the image of the object the word
stands for? or is it (6) the spoken form of the word (as in pronouncing
a written word silently)?
The prime could have been priming any or all of these way-stations on
the route from the seen stimulus to the spoken one.
It turns out that priming with written words must be acting on the
visual shape of letters and words (2), rather than on something deeper,
like their meaning (4), the image of the object (5), or how they are
spoken (6). This was shown by
experiments using priming at different levels: for example, the
definition of the object that a word names (such as "a round, red
fruit" or "what you eat every day to keep the doctor away" or a picture
of an apple) does not prime the recognition of the printed word
"apple." Only seeing "apple" does.
Morton's Logogen Model
"Logos" means "word." So this was meant to be a model of how the
graphemic, phonemic, and semantic features of words are represented
and generated. It has been supplanted by more recent models that were
inspired by it, but that no longer make use of the "logogen" concept.
The Lexical Decision Task
This is a task in which subjects must decide whether they
have seen a word (such as "mane") or a nonword (such as "mave").
Together with Priming, the Lexical Decision task has been used
to infer the processes underlying the transformation from
graphemes to phonemes and back. One result was that there turn out
to be two independent, parallel routes from graphemes to phonemes
and back: one direct, not involving meaning, and the other
indirect, involving meaning.
For example, regular words like "mope" are recognised faster
than irregular ones like "move." And nonwords that sound like real
words (e.g. "moov") take longer to reject than nonwords that don't
sound like words (e.g. "noov").
In general, reaction times for performing certain tasks can give clues
about the processes that underlie them in the brain. (Other clues
come from success/errors in performance, evidence from the new
techniques of brain imaging, and the effects of brain injuries.)
Supplementing behavioral evidence for the existence of two parallel
lexical and nonlexical routes between graphemes and phonemes,
Seidenberg & McClelland modeled the transformation from graphemes to
phonemes with neural nets that received graphemes as input, and
produced phonemes as output, with hidden units active in between. The
net's patterns of errors, however, did not match human ones.
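Their actual network was far larger and trained on thousands of words,
but the general shape of such a net can be sketched in a few lines of
Python: a toy with one-hot codes and a single hidden layer trained by
backpropagation. Nothing here reproduces their architecture or results.

    import numpy as np

    # Toy grapheme -> hidden units -> phoneme network. Illustrative only.
    rng = np.random.default_rng(0)
    GRAPHEMES = ["m", "o", "oo", "v", "p"]
    PHONEMES = ["m", "əʊ", "uː", "v", "p"]

    def one_hot(i, size):
        v = np.zeros(size)
        v[i] = 1.0
        return v

    # Training pairs: the i-th grapheme maps to the i-th phoneme.
    X = np.array([one_hot(i, len(GRAPHEMES)) for i in range(len(GRAPHEMES))])
    Y = np.array([one_hot(i, len(PHONEMES)) for i in range(len(PHONEMES))])

    W1 = rng.normal(0.0, 0.5, (len(GRAPHEMES), 8))  # input -> hidden
    W2 = rng.normal(0.0, 0.5, (8, len(PHONEMES)))   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(2000):               # plain batch backpropagation
        H = sigmoid(X @ W1)             # hidden-unit activations
        O = sigmoid(H @ W2)             # output (phoneme) activations
        dO = (O - Y) * O * (1 - O)      # error signal at the output
        dH = (dO @ W2.T) * H * (1 - H)  # error signal at the hidden layer
        W2 -= 0.5 * H.T @ dO
        W1 -= 0.5 * X.T @ dH

    for i, g in enumerate(GRAPHEMES):
        out = sigmoid(sigmoid(X[i] @ W1) @ W2)
        print(g, "->", PHONEMES[int(np.argmax(out))])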
Coltheart's dual-route cascaded (DRC) model explained much more of
the data by assuming there were two routes, not just the nonlexical
one that S & M's neural net modeled.
Deep and Surface Dyslexia
Another source of evidence is the effects of brain injuries.
When the areas of the brain that are involved in understanding language
(the left temporal lobe) and in producing language (the left
frontal lobe) are damaged, patients have language disorders
called "aphasias."
There are also brain areas where damage causes reading and/or writing
disorders. One of the more puzzling disorders is "alexia without
agraphia" in which the patient loses the ability to read but retains
the ability to write. (They of course think they can't write, since
they can't read, but when the neurologist tells them not to think about
it and just go ahead and write, they are able to do it, although they
lose continuity because they can't read what they are writing!)
Complete alexias are rare, but partial ones, called "dyslexias", are
more common. Two different kinds of dyslexias also support the
dual-route theory of reading: "Surface dyslexics" have the kinds of
problem you would expect: they misread the shapes of words, mistaking
"more" for "move" and so on. But once they have the right word, they
can get to its meaning and its pronunciation without difficulty.
"Deep dyslexics" have their problems further down than the surface
shapes of the graphemes. Their kind of error is in misreading "move" as
"budge", which is not a confusion at the grapheme level, but a
confusion at the level of meaning.
When children first learn to read, they see words just as shapes,
rather than as made up of letters or graphemes that correspond to the
way they are pronounced. There are some languages (e.g., Chinese) in
which that's all a child has to learn, because the graphemes are just
pictures. We wouldn't expect certain kinds of dyslexia in Chinese,
where no mapping from graphemes onto phonemes and back needs to be
learned. We would expect less dyslexia in languages (e.g., Spanish)
where the mapping from phonemes to graphemes and back is closer to
being a one-to-one transformation (an isomorphism, which is
invertible). The dyslexias appear at the stage where the rules for this
transformation need to be learned, because they are not
straightforward, especially in English!