> From: "Mc Naught-Davis, Beth" <BAMND195@psy.soton.ac.uk>
> Date: Thu, 23 May 1996 11:31:51 GMT
>
> Neural nets are like the brain in that they have interconnected units
> which pass activity between themselves. Some units are specific; others
> are part of a pattern. Neural nets do, however, differ from the brain in
> other ways: e.g., they have no dendrites, axons, synapses, glia,
> neurotransmitters or action potentials. How similar or different they
> are to the brain doesn't really matter. What matters is how well they
> reproduce the output of the brain. If the output is the same, then
> perhaps reverse engineering could be used to work out 'how' it occurs.
It's the behavioural capacities of the brain that the modeling is trying
to capture; so it's not just the output, but the input/output relations:
They must be able to do what WE can do, given the input WE get.
> Neural nets consist either of just an input and an output layer, or of
> these plus further layers in between. Using supervised learning, which
> gives feedback so that back-propagation can strengthen connections,
> neural nets are able to learn.
To learn what?
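
For concreteness, here is a minimal sketch in Python of the kind of
supervised learning described above (the language, the numpy dependency
and the XOR task are all illustrative assumptions, not from the original
post): feedback on the output error is propagated backwards to adjust
each connection weight.

    import numpy as np

    # Toy input->hidden->output net learning XOR by back-propagation.
    # (Illustrative sketch only; the post names no particular task.)
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)  # correct answers

    W1 = rng.normal(size=(2, 4))  # input-to-hidden connection weights
    W2 = rng.normal(size=(4, 1))  # hidden-to-output connection weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        h = sigmoid(X @ W1)            # activity passes between units
        out = sigmoid(h @ W2)
        err = out - y                  # supervised feedback on the output
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out        # back-propagation strengthens or
        W1 -= 0.5 * X.T @ d_h          # weakens each connection

    print(out.round(2))  # converges towards [[0], [1], [1], [0]]

Note that the sketch shows only the mechanics: feedback drives weight
changes. It does not, by itself, answer the question of what the net has
thereby learned.
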
> A symbol is an arbitrary shape which can be manipulated using
> algorithms to achieve the correct answer. An algorithm is a formula
> which can be followed mechanically; no meaning is needed, and therefore
> no mind is needed. The symbols represent objects or words and can be
> manipulated to produce variations of sentences, for example.
>
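
A minimal sketch of such purely formal symbol manipulation (the toy
sentence and the swapping rule are my own illustrative assumptions): the
tokens are arbitrary shapes to the program, and the rule operates on
their positions alone, yet it yields a new well-formed sentence.

    # The rule below manipulates symbols purely by their form (their
    # position in the tuple); their meaning plays no role whatsoever.
    SENTENCE = ("the", "dog", "chased", "the", "cat")

    def swap_noun_phrases(tokens):
        # Formal rule: exchange the first noun phrase (first two tokens)
        # with the second (last two tokens).
        return tokens[3:] + tokens[2:3] + tokens[:2]

    print(" ".join(swap_noun_phrases(SENTENCE)))  # the cat chased the dog
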
> This type
> of manipulation is not possible with neural nets, because the outputs
> are held within a unit, or a pattern of units. They cannot be broken
> down in order to construct a new sentence from the components, as you
> can with symbols. The net would need to learn the new association of
> words from scratch.
You have the basic idea, though a little more direct discussion of
Pylyshyn on systematicity would strengthen the reply.
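
To make the systematicity point concrete (vocabulary and rule again
being illustrative assumptions, not anything from the post): a symbol
system that can compose one sentence from parts gets every recombination
of those parts for free, whereas a net that stores each learned
association as an unanalysable pattern would need fresh training for
every new combination.

    from itertools import permutations

    NOUNS = ["dog", "cat", "boy"]

    # Symbolic composition: all six sentences fall out of the same parts
    # and the same rule; nothing has to be relearned for each new one.
    for agent, patient in permutations(NOUNS, 2):
        print(f"the {agent} chased the {patient}")
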