From: Bell Simon (smb398@ecs.soton.ac.uk)
Date: Sun Apr 01 2001 - 23:50:08 BST
Harnad: Computation is just interpretable symbol manipulation;
cognition isn't
http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad94.computation.cognition.html
Skywriting by Simon Bell
>HARNAD:
>Formal computation is clearly symbol manipulation, with the
>operations on the symbols (read, write, move, halt) being based,
>as already stated, on the shapes of the symbols. Such shape-based
>operations are usually called "syntactic" to contrast them with
>"semantic" operations, which would be based on the meanings of
>symbols, rather than just their shapes. Meaning does not enter
>into the definition of formal computation.
Bell:
It is true that computation is defined not in terms of semantics
but in terms of syntax. However, the grammar which models the
behaviour of the system holds the semantics implicitly. The
symbols are defined in terms of each other, composite symbols
being functions of atomic ones, so the relationships between the
symbols are defined explicitly. The behaviour of something goes a
long way towards giving it meaning.
For an input/output device there must be a way to map each input
symbol to a given event, so that we can say that this symbol
means this event.
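As a purely illustrative sketch (the symbol names and events here
are my own invention, not from the paper), such a mapping might
look like the following: the atomic symbols are bound to events,
and the interpretation of a composite symbol is derived from the
interpretations of its parts.

    # Hypothetical atomic symbols bound to out-of-system events.
    atomic_interpretation = {
        "s1": "light detected",
        "s2": "sound detected",
    }

    # A composite symbol is a function of atomic symbols, so its
    # interpretation is derived from theirs.
    def interpret(expr):
        if isinstance(expr, str):          # atomic symbol
            return atomic_interpretation[expr]
        op, left, right = expr             # composite symbol
        if op == "AND":
            return "(%s and %s)" % (interpret(left), interpret(right))

    print(interpret(("AND", "s1", "s2")))
    # -> (light detected and sound detected)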
>HARNAD:
>This criterion of semantic interpretability (which has been dubbed
>the "cryptographers constraint," because it requires that the
>symbol system should be decodable in a way that makes systematic
>sense) is not a trivial one to meet: It is easy to pick a bunch of
>arbitrary symbols and to formulate arbitrary yet systematic
>syntactic rules for manipulating them, but this does not guarantee
>that there will be any way to interpret it all so as to make sense
>(Harnad 1994b).
Bell:
The cryptographer's constraint is that the encrypted message is
decipherable: it is just that the text is describable by a
grammar, not that the grammar itself refers to anything in
particular. Whether we can attribute a meaning to the grammar is
irrelevant to the meaning of the symbols themselves, for they are
defined by the grammar. The symbols are given a behaviour by the
grammar, and we attribute a meaning to this behaviour. There is
a level of abstraction involved. The grammar is given meaning
through its atomic symbols, and all else is derived from this.
>HARNAD:
>In other words, the set of semantically interpretable formal
>symbol systems is surely much smaller than the set of formal
>symbol systems simpliciter, and if generating uninterpretable
>symbol systems is computation at all, surely it is better
>described as trivial computation, whereas the kind of computation
>we are concerned with (whether we are mathematicians or
>psychologists), is nontrivial computation: The kind that can be
>made systematic sense of.
Bell:
The separation of formal systems into those that have semantic
interpretations and those that do not is unnecessary. Whether we
can find a situation that fits the model, and so supplies a
semantic interpretation, has no effect upon the strength of the
system, only perhaps upon its usefulness. I would doubt that
grammars which cannot be interpreted in any way exist at all,
unless there is a proof of this that I am unaware of.
>HARNAD:
>Trivial symbol systems have countless arbitrary "duals": You can
>swap the interpretations of their symbols and still come up with
>a coherent semantics (e.g., swap bagel and not-bagel above).
>Nontrivial symbol systems do not in general have coherently
>interpretable duals, or if they do, they are a few specific
>formally provable special cases (like the swappability of
>conjunction/negation and disjunction/negation in the
>propositional calculus). You cannot arbitrarily swap
>interpretations in general, in Arithmetic, English or LISP,
>and still expect the system to be able to bear the weight of a
>coherent systematic interpretation (Harnad 1994 a).
Bell:
The difference between this definition of a 'trivial' and a
'nontrivial' symbol system is whether the same grammar will
support multiple real-life situations purely by redefining the
atomic symbols upon which it is based. Another way of putting
this is to ask whether two real-life situations exhibit the same
behaviour.
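Harnad's conjunction/disjunction example can be made concrete
with a small sketch (my own, not the paper's): the same syntactic
table for a symbol "&" supports two coherent readings, one taking
"0" as false, "1" as true and "&" as conjunction, the other
swapping the truth values and taking "&" as disjunction. This is
one of the provable special cases of swappability he mentions.

    from itertools import product

    # Reading A: "0" -> False, "1" -> True,  "&" read as AND.
    # Reading B: "0" -> True,  "1" -> False, "&" read as OR.
    read_a = {"0": False, "1": True}
    read_b = {"0": True,  "1": False}
    sym_a = {v: k for k, v in read_a.items()}   # truth value back to symbol
    sym_b = {v: k for k, v in read_b.items()}

    def table_a(x, y):
        return sym_a[read_a[x] and read_a[y]]

    def table_b(x, y):
        return sym_b[read_b[x] or read_b[y]]

    # Both readings induce exactly the same symbol-level behaviour,
    # so the system bears a coherent dual interpretation.
    for x, y in product("01", repeat=2):
        assert table_a(x, y) == table_b(x, y)
    print("both interpretations agree on every symbol manipulation")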
>HARNAD:
>It would be trivial to say that every object, event and state of
>affairs is computational because it can be systematically
>interpreted as being its own symbolic description: A cat on a
>mat can be interpreted as meaning a cat on the mat, with the cat
>being the symbol for cat, the mat for mat, and the spatial
>juxtaposition of them the symbol for being on. Why is this not
>computation? Because the shapes of the symbols are not arbitrary
>in relation to what they are interpretable as meaning, indeed
>they are precisely what they are interpretable as meaning.
Bell:
Even with the cat-on-a-mat example there is an abstraction
involved. An object or symbol does not have a meaning in its own
right, without an observer involved. When we speak of something
as having a meaning, we are projecting our own interpretation of
the thing upon it. The only way of attributing a meaning to the
object itself would be to define it as the sum of all the
possible interpretations that it could have. However, this would
be the same for all things, and so there would be no information
to distinguish them. Without an observer to enforce an
interpretation, nothing has meaning apart from its behaviour.
>HARNAD:
>Any causal or functional explanation of a physical system is
>always equally compatible with a mental and a non-mental
>interpretation [Harnad 1994]; the mental interpretation always
>seems somehow independent of the physical one, even though they
>are clearly correlated, because the causality/functionality
>always looks perfectly capable of managing equally well, in fact
>indistinguishably (causally/functionally speaking), with or
>without the mentality.
Bell:
Causal and functional explanations of a physical system are
equally compatible with either a mental interpretation or a
non-mental one because of the failure to define mentality. The
use of 'non-mentality' to mean simply not mental is equally
ambiguous. Surely this indicates that mentality is a description
of a causal/functional system that satisfies certain properties?
>HARNAD:
>They argue: Cognition is computation. Implement the right symbol
>system -- the one that can pass the penpal test (for a lifetime)
>-- and you will have implemented a mind. Unfortunately, the
>proponents of this position must contend with Searle's (1980)
>celebrated Chinese Room Argument, in which he pointed out that
>any person could take the place of the penpal computer,
>implementing exactly the same symbol system, without
>understanding a word of the penpal correspondence. Since
>computation is implementation-independent, this is evidence
>against any understanding on the part of the computer when it is
>implementing that same symbol system.
Bell:
I do not agree that Searle's Chinese Room argument is evidence
that understanding is absent from computation while present in
humans. Searle believes that if he encompasses the system within
his body, by memorising the algorithms and executing them
himself, then his ignorance of Chinese proves the ignorance of
the system, for the system is he. Searle forgets that the
behaviour a system can exhibit cannot be determined by studying
the behaviour of its individual processes alone, without a
concept of the system as a whole. This overall view, however, is
not required to implement the system, especially if it is
parallel. Searle can therefore implement the system without
understanding its overall behaviour, since the whole is greater
than the sum of its parts.
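As an illustration of this last point only (the example is mine,
not Searle's or Harnad's), consider a one-dimensional cellular
automaton: each local update is a trivial table lookup that could
be carried out by hand with no grasp of the global pattern, yet
the overall behaviour is not evident from the individual rules.

    # Rule 30 cellular automaton: purely local, shape-based updates.
    RULE30 = {(1,1,1): 0, (1,1,0): 0, (1,0,1): 0, (1,0,0): 1,
              (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

    cells = [0] * 40 + [1] + [0] * 40       # a single live cell
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = [RULE30[(cells[i - 1], cells[i], cells[(i + 1) % len(cells)])]
                 for i in range(len(cells))]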
>HARNAD:
>The system is merely syntactic, manipulating meaningless symbols
>on the basis of shape-based rules, and the shapes of the symbols
>are arbitrary in relation to what they are interpretable as
>meaning. Looking for meaning in such a system is analogous to
>looking for meaning in a Chinese/Chinese dictionary when one does
>not know any Chinese: All the words are there, fully defined; it
>is all systematic and coherent. Yet if one looks up an entry, all
>one finds is a string of meaningless symbols by way of definition,
>and if one looks up each of the definienda in turn, one just
>finds more of the same.
Bell:
The notion of a Chinese/Chinese dictionary is important. It
would result in a grammar with no atomic symbols, as the symbols
are all defined in terms of each other. It is true that the model
would have no determinable meaning if viewed from outside the
system; the Chinese would be meaningless squiggles. However, this
does not mean that the grammar would be useless.
When such a system is implemented, a method is required for
matching the out-of-system event (the input) to a symbol, or to
'ground' it, to give it another name. However, if this were done
in a system that learnt its grammar as it went along, then the
grounding would be performed by the device that observed the
event. The symbol could easily be a representation of the event
itself as detected, such as a coding of a sound wave.
It is true that this system could then be described using
different symbols, and that if this description were given to
somebody else to implement, they would not be able to give it
meaning. This, however, is due to the inadequacy of using a
grammar alone to define a system, not of the computation that
could implement it. Agreed interfaces from the symbol system are
required to ground it for it to be understood, but not for it to
be built.
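A toy sketch of the contrast (the symbol names and sample values
are invented here for illustration): in an ungrounded dictionary
every look-up yields only more symbols, whereas a grounded symbol
is bound to a coding of the detected event itself.

    # An ungrounded "Chinese/Chinese dictionary": every symbol is
    # defined only in terms of other symbols.
    dictionary = {
        "A": ["B", "C"],
        "B": ["C", "A"],
        "C": ["A", "B"],
    }

    def look_up(symbol, depth=3):
        # Chasing definitions only ever yields more symbols.
        if depth == 0:
            return [symbol]
        return [s for d in dictionary[symbol] for s in look_up(d, depth - 1)]

    print(look_up("A"))   # still nothing but strings of symbols

    # Grounding: the symbol is instead bound to a representation of
    # the event as detected, e.g. a (hypothetical) coding of a sound wave.
    grounding = {"A": [0.00, 0.42, 0.91, 0.40, -0.38]}
    print(grounding["A"])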
>HARNAD:
>A computer simulation of an optical transducer does not transduce
>real light (that would require a real optical transducer), it
>transduces virtual light, i.e., computer-simulated light, and the
>transduction itself is virtual. The same is true of a virtual
>plane, which does not really fly, but merely simulates flight in
>simulated air. By the same token, virtual furnaces produce no
>real heat, and virtual solar systems have no real planetary
>motion or gravity.
Bell:
The fact that a simulated plane does not 'really' fly is
irrelevant. A simulation is something that exhibits the same
behaviour. A simulated hot-air molecule behaves in an identical
way to its physical counterpart, although it does not physically
move. Intelligence, however, is in my opinion correctly defined
in terms of behaviour. The fact that a virtual brain has no
physical presence is of no consequence, as it can still display
intelligent behaviour.