> From: Seale, Josephine <jvs196@soton.ac.uk>
>
> Could someone please give me, in kid-sib style, the processes
> underlying the mapping from the target domain of information by the
> ACME, SME and IAM computational models?
All three are models for finding ways in which two seemingly different
things may have some structure that is the same. Trivial analogies are
the ones between the sun and the moon (both round and up there) or
between cricket swings and golf swings. There are many ways in which
the two domains differ, but in some features, or relations between
features, they resemble one another.
An example of a more interesting analogy is the one between sensory
intensity and firing frequency: if you stimulate the skin harder and
harder, the reaction of the nerves is not to fire harder and harder, but
to fire faster and faster. There is a correlation between the intensity
and the frequency. Firing frequency is an analog transformation of
sensory intensity.
There are tests of your ability to see analogies between two seemingly
different domains: The Miller Analogies Test items always go like this:
A is to B as X is to ?
For example, "Stallion is to Mare as Bull is to ?"
Or "4 is to 2 as 2 is to ?"
For the last one you would have to have noticed the relation between 4
and 2, concluded that the relation is "twice as much as", and then
answered 1.
The chapter is interested in creative analogies: examples of when some
creative person saw that there was some kind of correlation between two
kinds of things that we had all thought were different. The SME
(Structure Mapping Engine) is simply a way of representing two sets of
relationships so that it makes explicit what structure they share. It's
like a more elaborate version of the analogies in the Miller Analogies
Test. It computes the similarities in structure between two domains,
and focuses more on similarities between relations of parts than on
similarities between the parts themselves.
SME is not a model of human creativity; it is just a way of finding
and highlighting similarities by trying every possibility and picking
the best one. It does this by trying all matches one after the other.
ACME (the Analogical Constraint Mapping Engine) does something similar,
but in parallel, in a network. IAM (the Incremental Analogy Machine)
again does it serially rather than in parallel but, unlike SME or ACME,
it does not try all possibilities, but rather starts with a best fit and
sees whether it can match more than 50% of the relations in both
domains.
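ACME's parallel network is harder to caricature in a few lines, but
here, in the same toy terms as before, is the IAM-style shortcut (again
my own sketch, not the real programme): start from one promising
relation match, extend it serially as long as it stays consistent, and
stop as soon as more than half of the relations have been matched.

    # Same toy domains as in the SME sketch above.
    solar_system = {("attracts", "sun", "planet"),
                    ("revolves_around", "planet", "sun")}
    atom = {("attracts", "nucleus", "electron"),
            ("revolves_around", "electron", "nucleus")}

    def incremental_mapping(base, target, threshold=0.5):
        mapping, matched = {}, 0
        for rel, *bargs in sorted(base):          # work through relations serially
            for trel, *targs in sorted(target):
                if rel != trel or len(bargs) != len(targs):
                    continue
                trial = dict(zip(bargs, targs))
                # Extend the mapping only if the new pairings are consistent
                # with the ones already made.
                if all(mapping.get(b, t) == t for b, t in trial.items()):
                    mapping.update(trial)
                    matched += 1
                    break
            # Stop as soon as more than half of the relations have been
            # matched, instead of trying every possibility.
            if matched > threshold * max(len(base), len(target)):
                break
        return mapping, matched

    print(incremental_mapping(solar_system, atom))
    # ({'sun': 'nucleus', 'planet': 'electron'}, 2)

My understanding is that the real IAM can also go back and try a
different starting point if its first seed turns out badly; the sketch
leaves that out.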
You do not need to know the details of these models! You just need to
get a feeling for the kind of thing the models do, and how. If you take
a course on problem-solving and analogy, you can study models like
these in detail, but now you only need a qualitative sense of what
they do.
> I also have another query as to how we recognise the faces of animals
> such as dogs. I know that with humans there is substantial evidence
> that we have a prototypical face and use this, and that in recognising a
> whole dog we probably use invariants. However, if a group of people
> saw just the heads of dogs and then were asked to say if they were dogs
> or not then, if one was a Labrador and the other was a Pekingese,
> despite looking different, I would presume that most would correctly
> identify them both as dogs.
>
> However, how can a prototype be used here as these dogs are quite
> different with reference to the lengths of their snout and their ears
> etc. In addition, if you have to tell the difference between a wolf and
> a Bulldog or an Alsatian and a wolf I would suggest that most would be
> quicker at distinguishing the first match. As such, would this be
> evidence of an overlapping between stored prototypes of a dog and a
> wolf or would it be due to both animals having similar invariant
> features? Additionally, it could be argued that distinguishing the
> above animals is quicker for those with more experience with such
> animals.
>
> I know that this is a rather hypothetical question but I was just
> curious as to what anybody else thought.
No, the question's fine. It's not at all certain that face recognition is
based on prototype matching; that is just one of the theories. You're
right that prototype matching would not help much in deciding which
animals are dogs, because dog breeds vary so much, and some breeds look
a lot more like other species than like other breeds of dog.
Prototype theorists would perhaps reply that you need multiple
prototypes for as varied a category as "dog." The truth is that
no one has yet devised a model that could do well on the task you are
thinking of -- distinguishing photos of dogs of all breeds from
photos of other mammals that look a lot like them.
The trouble is that an invariant-feature model couldn't do much better
either, because no one has yet found the invariants in photos of dogs
and non-dogs that will lead to low-error categorisation.
As you say, the problem of distinguishing, say, wolves from
Alsatians is just as much a problem for prototypes (overlapping
prototypes) as it is for invariant features (shared invariants).
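If it helps to see what "prototype matching" would mean as a mechanism,
the usual textbook version fits in a few lines of Python (my own toy
numbers, meant purely as an illustration): store one average feature
vector per category and assign a new case to whichever prototype is
nearest. Notice that with invented measurements like these, an
Alsatian-shaped point can easily come out nearer the wolf prototype
than the dog one, which is exactly the overlap problem you describe;
and the deeper difficulty is that no one knows what features to
measure from a photo in the first place.

    # A minimal sketch of nearest-prototype categorisation. The features
    # (say: snout length, ear length, body size) and numbers are invented
    # for illustration, not measurements of real animals.
    def nearest_prototype(instance, prototypes):
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        # Assign the instance to the category whose prototype is closest.
        return min(prototypes, key=lambda cat: distance(instance, prototypes[cat]))

    prototypes = {
        "dog":  (0.5, 0.5, 0.5),   # an "average dog" in this made-up feature space
        "wolf": (0.8, 0.4, 0.8),
    }
    print(nearest_prototype((0.45, 0.9, 0.2), prototypes))    # Pekingese-ish point: "dog"
    print(nearest_prototype((0.85, 0.45, 0.75), prototypes))  # Alsatian-ish point: comes out "wolf", the overlap problem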