Could someone please explain, in kid-sib style, the processes by which the ACME, SME and IAM computational models map information from a source domain onto a target domain?
I also have another query about how we recognise the faces of animals such as dogs. I know that with humans there is substantial evidence that we have a prototypical face and use this, and that in recognising a whole dog we probably use invariants. However, suppose a group of people saw just the heads of dogs and were asked to say whether or not they were dogs. Even if one was a Labrador and the other a Pekingese, which look quite different, I would presume that most would correctly identify them both as dogs.
But how can a prototype be used here, given that these dogs differ considerably in the length of their snouts, their ears, and so on? In addition, if you had to tell the difference between a wolf and a Bulldog, or between an Alsatian and a wolf, I would suggest that most people would be quicker at distinguishing the first pair. Would this be evidence of overlap between the stored prototypes of a dog and a wolf, or would it be due to both animals having similar invariant features? Additionally, it could be argued that distinguishing the above animals is quicker for those with more experience of such animals.
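To make the prototype idea concrete, here is a minimal sketch of prototype-based categorisation. All the feature names, values and weights below are invented purely for illustration; the point is only that a single "dog" prototype can still capture both a Labrador and a Pekingese if the distance metric weights the invariant features more heavily than the highly variable ones (snout length, ear size):

```python
def weighted_distance(a, b, weights):
    """Weighted Euclidean distance between two feature vectors."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)) ** 0.5

# Invented features: [snout_length, ear_size, eye_spacing, muzzle_depth]
# Low weights on the features that vary a lot between breeds,
# high weights on the more invariant ones.
WEIGHTS = [0.2, 0.2, 1.0, 1.0]

PROTOTYPES = {
    "dog":  [0.5, 0.5, 0.4, 0.6],   # hypothetical average over many breeds
    "wolf": [0.9, 0.3, 0.5, 0.7],
}

def categorise(features):
    """Return the category whose prototype is nearest to the input."""
    return min(PROTOTYPES,
               key=lambda c: weighted_distance(features, PROTOTYPES[c], WEIGHTS))

labrador  = [0.7, 0.4, 0.4, 0.6]   # long snout, but dog-like invariants
pekingese = [0.1, 0.3, 0.4, 0.6]   # very short snout, same invariants

print(categorise(labrador))    # "dog"
print(categorise(pekingese))   # "dog" -- despite the very different snout
```

On this toy account, wolf versus Bulldog would be distinguished faster than wolf versus Alsatian simply because the Bulldog's feature vector lies further from the wolf prototype, so no separate overlap mechanism is needed; but that is just one way of cashing out the question.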
I know that this is a rather hypothetical question, but I was just curious as to what anybody else thought.
This archive was generated by hypermail 2b30 : Tue Feb 13 2001 - 16:23:53 GMT