From: Shynn Chris (cs698@ecs.soton.ac.uk)
Date: Tue May 01 2001 - 11:47:20 BST
Shynn:
This paper investigates the principles behind making a conscious robot and
whether or not making a robot with a consciousness like our own is
feasible. It details a project at MIT aimed at making not a conscious
robot but a robot that can interact with the real world in a versatile and
robust way. This robot would also provide invaluable information that
would be near impossible to get otherwise.
> DENNETT:
> It is unlikely, in my opinion, that anyone will ever make a robot that
> is conscious in just the way we human beings are.
Shynn:
I am not sure I agree with Dennett on this point. I believe it depends on
what people define as the essence of consciousness. If you believe that
essence is the human soul or spirit, then I agree with Dennett that humans
will never be able to make a conscious robot, as we would have to become
near gods ourselves, able to bestow a soul upon a robot. If, however, you
see consciousness as just the correct balance of chemicals and elements in
the human brain, then I believe it is entirely possible to make a robot
with a consciousness akin to our own, as we would eventually understand
that balance and be able to replicate it.
> DENNETT:
> Might a conscious robot be "just" a stupendous assembly of more
> elementary artifacts--silicon chips, wires, tiny motors and cameras
> --or would any such assembly, of whatever size and sophistication,
> have to leave out some special ingredient that is requisite for
> consciousness?
Shynn:
Cannot a human also be seen in this light? Are we not just a collection of
neural cells, veins, arteries and nerves? The only thing Dennett points
out here is that many people believe in a 'spark' of consciousness, which
is what would be missing in a robot that any human had constructed. Once
again it comes back to a debate over whether consciousness is some
ethereal quality akin to a soul, or whether it can be explained by
physical science. If it can be explained by physical science, then that
balance or ingredient will, at some level of sophistication, be
replicated.
> DENNETT:
> The phenomena of consciousness are an admittedly dazzling lot, but I
> suspect that dualism would never be seriously considered if there
> weren't such a strong undercurrent of desire to protect the mind from
> science, by supposing it composed of a stuff that is in principle
> uninvestigatable by the methods of the physical sciences.
Shynn:
Physical scientific methods are advancing all the time and more and more
false conceptions attributed to the supernatural are being proved false
all the time. Yet still the desire to protect the mind by encasing it in
the notion of a transcendant being able to bestow life and a soul must be
enourmous as there are over a billion people in the world that believe in
some form of higher being and of the human spirit. Yet even with these
conceptions of higher powers we strive to create an artificial mind
capable of conscious thought.
> DENNETT:
> Robots are inorganic (by definition), and consciousness can exist only
> in an organic brain.
Shynn:
Why ? arent both organic and inorganic compounds made up of the same
component materials ? This argument for the impossibility of a conscious
robot is not valid in my opinion as all compounds are created of the same
elements such as carbon and hydrogen so why shouldnt consciousness be able
to reside within an inorganic structure of those elements ?
> DENNETT:
> So there might be straightforward reasons of engineering that showed
> that any robot that could not make use of organic tissues of one sort or
> another within its fabric would be too ungainly to execute some task
> critical for consciousness.
Shynn:
again I raise the objection that why shouldnt a robot be able to have a
conscious mind using solely inorganic materials as these are the same in
base elements as organic compounds. Once again it comes back to an
argument of a specific required ingredient, the 'spark' of
consciousness. If such a spark is required and can only be attained by the
use of organic compounds then I agree that a robot purely consisting of
inorganic materials could not attain consciousness. Also there is the
problem of interfacing between the organic and inorganic components of the
robot. If these components could not be interfaced efficiently and
properly then the robot could not make use of its organic components and
therefore suffer as a result. But would this deny it consciousness ? I do
not think it would, I believe that the consciousness would still be
present but would be severly limited, akin to limitations placed upon
autistic people who because of a genetic problem are not as versatile as
others.
> DENNETT:
> Robots are artifacts, and consciousness abhors an artifact; only
> something natural, born not manufactured, could exhibit genuine
> consciousness.
Shynn:
I do not agree with this point that Dennet exhibits at all, already human
genetic scientists have cloned sheep and these behave just as normal sheep
would, they have as much consciousness as other sheep yet just because
they were cloned and not born naturally should not exclude them from
having a chance at being catagorised as conscious. I totally agree with
Dennet when later on in the same passage he describes this as origin
chauvinism and should be completly discounted.
> DENNETT:
> And to take a threadbare philosophical example, an atom-for-atom
> duplicate of a human being, an artifactual counterfeit of you, let us
> say, might not legally be you, and hence might not be entitled to your
> belongings, or deserve your punishments, but the suggestion that such a
> being would not be a feeling, conscious, alive person as genuine as any
> born of woman is preposterous nonsense, all the more deserving of our
> ridicule because if taken seriously it might seem to lend credibility to
> the racist drivel with which it shares a bogus "intuition".
Shynn:
I completely agree with the point Dennett makes here in that what is in
essence a clone of you, although it is not the exact same person as you,
because of what in my opinion is changes in thought patterns due to a
lifetime of experiences, is still alive and should not be discounted out
of hand.
> DENNETT:
> it could turn out that any conscious robot had to be, if not born, at
> least the beneficiary of a longish period of infancy. Making a fully-
> equipped conscious adult robot might just be too much work. It might be
> vastly easier to make an initially unconscious or nonconscious
> "infant" robot and let it "grow up" into consciousness, more or less the
> way we all do.
Shynn:
I believe this hypothesis to be correct, all humans start life with a bare
minimum of knowledge and have only their instincts and their capability
for learning to build upon. It is the same with neural nets, they start
out with the bare minimum of base rules and learn from there building
patters and recognising those patterns when they reappear. I believe that
making a fully-adult robot would be too much work as you would have to
program in a lifetimes worth of experiences and rules, whereas an infant
robot would be able to learn those rules for itself just as a human baby
might. It may take longer but I think that this is the only way making a
conscious robot akin to ourselves will ever work.
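Since the analogy here is to neural nets that start with almost nothing
and learn patterns, here is a minimal sketch of that "start minimal, then
learn" idea: a single perceptron that begins with zero weights and picks
up a simple pattern purely from experience. The task, learning rate and
epoch count are all invented for illustration and have nothing to do with
Cog's actual architecture.

    # A minimal sketch of "start with a bare minimum and learn":
    # a perceptron that begins with zero weights (a blank slate)
    # and learns the AND pattern from repeated trials.

    def train_perceptron(examples, epochs=20, rate=0.1):
        n = len(examples[0][0])
        weights = [0.0] * n      # the "infant" starts with no knowledge
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in examples:
                # Predict, compare with the right answer, and nudge the
                # weights toward it: trial and error is the whole mechanism.
                output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
                error = target - output
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # Four "experiences" teaching the AND pattern.
    examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train_perceptron(examples))

Nothing was programmed in about AND itself; the pattern emerges from the
trials, which is the point of the infant-robot approach.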
> DENNETT:
> Robots will always just be much too simple to be conscious.
Shynn:
The argument that is put foward in the relevant passage to this quote is
in my opinion very close-minded as who knows what technology will be
capable of in the future. If we had decided that tools would always be
much too simple to be usefull we would still be plodding around in the mud
looking for our next meal with a crude club.
> DENNETT:
> There is no reason at all to believe that some one part of the brain is
> utterly irreplaceable by prosthesis, provided we allow that some crudity,
> some loss of function, is to be expected in most substitutions of the
> simple for the complex. An artificial brain is, on the face of it, as
> "possible in principle" as an artificial heart, just much, much harder
> to make and hook up.
Shynn:
As we have been told much of the brain is made up of sensori material and
is used to interpret sensori input. Once part of the brain is incapable of
functioning then if that part is replaced it would undoubtedly affect the
conscious mind of the person. In this matter then, I agree with Dennett
that if a piece of the brain is replaced the consciousness is affected and
although this cannot be proven, until we fully understand the functioning
of the brain and the storing of the huge ammounts of data the brain
contains we will not know how a prosthestic brain would affect the mind.
> DENNETT:
> A much more interesting tack to explore, in my opinion, is simply to set
> out to make a robot that is theoretically interesting independent of the
> philosophical conundrum about whether it is conscious.
Shynn:
This next section of the paper outlines and comments on the COG project,
which is the project to make an interactive robot at MIT. As stated
previously the aim of COG is not to be conscious, but to be versatile and
interactive with its environment to provide invaluable data to the
designers.
> DENNETT:
> Cog's eyes won't give it visual information exactly like that provided
> to human vision by human eyes (in fact, of course, it will be vastly
> degraded), but the wager is that this will be plenty to give Cog
> the opportunity to perform impressive feats of hand-eye coordination,
> identification, and search. At the outset, Cog will not have color
> vision.
Shynn:
I agree with Dennett here that, though Cog's eyes will be vastly
downgraded from ourown they will still model sight as well as they need
to. As for the fact that Cog doesnt have colour vision, I dont see that
this would make any difference at the outset as many animals such as dogs
survive with only greyscale vision and even as humans we have the rod
cells in our eyes to provide greyscale vision at night.
> DENNETT:
> part of the hard-wiring that must be provided in advance is an
> "innate" if rudimentary "pain" or "alarm" system to serve roughly the
> same protective functions as the reflex eye-blink and pain-avoidance
> systems hard-wired into human infants.
Shynn:
I also agree with with Dennett here, but only to a point. Yes human
infants have reflexes and pain-avoidance systems build in but they learn
by trial and error as to what will set off those alarms. Infants are not
in my opinion a good model for this as, though they have the systems, they
are almost unable to make use of them as they have not yet built up an
idea of what will set them off. I believe that this could also be the case
with Cog, in early stages while he is still learning what will set off the
alarms he may damage much equipment in that learning.
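As a purely hypothetical sketch (the action names and the one-trial
learning rule are my inventions, not anything from the Cog project),
combining an innate hard-wired alarm with learned avoidance might look
something like this:

    import random

    # Hypothetical sketch: the alarm itself is innate (hard-wired below),
    # but which actions trigger it is only discovered by trying them.
    HARMFUL = {"hit_wall", "overextend_arm"}    # invented stand-ins for damage

    def innate_alarm(action):
        # Hard-wired: fires whenever a harmful action is performed.
        return action in HARMFUL

    def explore(actions, trials=50):
        avoid = set()                           # learned, not built in
        for _ in range(trials):
            action = random.choice([a for a in actions if a not in avoid])
            if innate_alarm(action):
                avoid.add(action)               # one painful trial is enough
        return avoid

    actions = ["hit_wall", "overextend_arm", "wave", "grasp_cup"]
    print(explore(actions))    # after enough trials: the set of harmful actions

The point is that the damage happens during the trials: the alarm only
teaches Cog anything after it has already fired.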
> DENNETT:
> The goal is that Cog will quickly "learn" to keep its funny bones from
> being bumped--if Cog cannot learn this in short order, it will have to
> have this high-priority policy hard-wired in. The same sensitive
> membranes will be used on its fingertips and elsewhere, and, like human
> tactile nerves, the "meaning" of the signals sent along the attached
> wires will depend more on what the central control system "makes of
> them" than on their "intrinsic" characteristics. A gentle touch,
> signalling sought-for contact with an object to be grasped, will not
> differ, as an information packet, from a sharp pain, signalling a need
> for rapid countermeasures.
Shynn:
Here Dennet puts foward the idea of modelling the lower level functions of
the human brain in Cog. These reactions to the data-packets sent by the
membrane will be essential learning material and it would be interesting
to see how Cog will differentiate between a light touch upon an object and
a touch that would break something as according to Dennett they will be
exactly the same data-wise. It would also be interesting to see how
Cogs brain would be organised and what priorities were given to which
alarm signals. Such as would one membrane be more important than another ?
modelling a critical system that needs to be protected.
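To make Dennett's point concrete, here is a hypothetical sketch in which
the same kind of information packet is read as contact, touch or pain
depending entirely on what the central controller makes of it; the sensor
names, the threshold and the priorities are all invented for illustration:

    # Every membrane sends the same kind of packet: (sensor_name, intensity).
    # Meaning and priority are assigned centrally, not carried by the packet.
    PRIORITY = {"eye_membrane": 3, "fingertip": 1}   # protect critical systems first

    def interpret(packet, currently_grasping):
        sensor, intensity = packet
        if intensity > 0.8:
            return ("PAIN", PRIORITY[sensor])        # urgent countermeasures
        if currently_grasping and sensor == "fingertip":
            return ("CONTACT", 0)                    # sought-for touch, no alarm
        return ("TOUCH", 0)

    print(interpret(("fingertip", 0.2), True))       # ('CONTACT', 0)
    print(interpret(("fingertip", 0.9), True))       # ('PAIN', 1)
    print(interpret(("eye_membrane", 0.9), False))   # ('PAIN', 3)

Here one membrane really is more important than another: identical pain
packets from the eye membrane outrank those from a fingertip.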
> DENNETT:
> So even in cases in which we have the best of reasons for thinking that
> human infants actually come innately equipped with pre-designed gear, we
> may choose to try to get Cog to learn the design in question, rather
> than be born with it. In some instances, this is laziness or
> opportunism--we don't really know what might work well, but maybe Cog
> can train itself up.
Shynn:
I like this view of designing future versions of Cog, it is a view similar
to giving him a set of knowledge then after his lifetime you take all his
accumulated knowledge and pass that on as genetic knowledge if you like to
his offspring. This is a good idea and I think that this will work as
instead of trying to model all of the human mind in one go it will start
out with the very basics and learn what it needs to from there, just like
a human infant does.
> DENNETT:
> How plausible is the hope that Cog can retrace the steps of millions of
> years of evolution in a few months or years of laboratory exploration?
Shynn:
This hope is unfounded in my opinion as for Cog the environment will be
totally different to the environment that any other creature has evolved
in. In real life there is a balance between predators and prey and
creatures use their sences for survival and the finding of food. In the
laboratory environment Cog will have no need of food and is in no danger
of 'dying' so the whole process is different.
> DENNETT:
> We are going to try to get Cog to build language the hard way, the way
> our ancestors must have done, over thousands of generations. Cog has
> ears (four, because it's easier to get good localization with four
> microphones than with carefully shaped ears like ours!) and some
> special-purpose signal-analyzing software is being developed to give Cog
> a fairly good chance of discriminating human speech sounds, and probably
> the capacity to distinguish different human voices.
Shynn:
Here Dennett describes what will be designed into Cog to aid him in
picking up the human language, or at least some rudimentary form of
language. But I believe that just to give him this equipment will not be
enough, I think that Cog will also need to have a sort of desire to
imitate its 'mothers' like human infants do when they start to speak for
the first time. I think that if Cog does have this desire and also the
desire to understand what it is imitating then he will be able to pick up
language much as a human infant does. But it is the understanding of the
language it is learning that is the most important, otherwise Cog becomes
just a parrot who has a basic use of language but no understanding of what
that language refers to.
> DENNETT:
> This is so for many reasons, of course. Cog won't work at all unless it
> has its act together in a daunting number of different regards. It must
> somehow delight in learning, abhor error, strive for novelty, recognize
> progress. It must be vigilant in some regards, curious in others, and
> deeply unwilling to engage in self-destructive activity. While we are at
> it, we might as well try to make it crave human praise and company, and
> even exhibit a sense of humor.
Shynn:
I totally agree with Dennett here, without these desires and needs then
Cog will only ever be a learning machine, learning because it is designed
to, and not as is the case of humans because it wants to.
> DENNETT:
> I submit that Cog moots the problem of symbol grounding, without having
> to settle its status as a criticism of "strong AI". Anything in Cog that
> might be a candidate for symbolhood will automatically be "grounded" in
> Cog's real predicament, as surely as its counterpart in any child, so
> the issue doesn't arise,
Shynn:
I agree with Dennett here in that all symbols of need in grounding would
be automatically grounded because Cog is modelling an infant to begin
with. I think this because although he will not know the use for objects
such as an Umbrella neither does an infant untill much later in its
development. All either Cog of an inafnt wold have would be the knowledge
that an object exists and it looks like an umbrella although neither of
them know what it is called.
> DENNETT:
> if Cog develops to the point where it can conduct what appear to be
> robust and well-controlled conversations in something like a natural
> language, it will certainly be in a position to rival its own monitors
> (and the theorists who interpret them) as a source of knowledge about
> what it is doing and feeling, and why.
Shynn:
Here is where Dennett concludes saying that since Cog is designed to
re-design himself as much as possible then it will eventually be Cog who
will be the expert on what is happening to Cog. If he manages to reach a
stage where he is able to converse with his designers he will have
redesigned himself so much then only he will be able to tell the
designers what is happening in him, as they although experts will not have
the current operational knowledge of Cog at any one time.