Richard Loosemore wrote:
candice schuster wrote:
Richard,
Your responses to me seem to go in roundabouts. No insult intended,
however.
You say the AI will in fact reach full consciousness. How on earth
would that ever be possible?
I think I recently (last week or so) wrote out a reply to someone on
the question of what a good explanation of "consciousness" might be
(was it on this list?). I was implicitly referring to that explanation
of consciousness. It makes the definite prediction that consciousness
(subjective awareness, qualia, etc. ... what Chalmers called the Hard
Problem of consciousness) is a direct result of an intelligent system
being built with a sufficient level of complexity and self-reflection.
Make no mistake: the argument is long and tangled (I will write it up
at length when I can), so I do not pretend to be trying to convince you
of its validity here. All I am trying to do at this point is to state
that THAT is my current understanding of what would happen.
Let me rephrase that: we (a subset of the AI community) believe that
we have discovered concrete reasons to predict that a certain type of
organization in an intelligent system produces consciousness.
This is not meant to be one of those claims that can be summarized in
a quick analogy or a quick demonstration, so there is no way for me to
convince you quickly; all I can say is that we have very strong
reasons to believe that it emerges.
Sounds reasonable to me. Actually, it seems "intuitively obvious".
I'm not sure that a reasoned argument in favor of it can exist, because
there's no solid definition of consciousness or qualia. An argument
that some consider reasonable, others will see no grounds for
accepting. Consider the people who can argue with a straight face that
dogs don't have feelings.
You mentioned in previous posts that the AI would only be programmed
with 'Nice feelings' and would only ever want to serve the good of
mankind? If the AI has its own ability to think etc., what is stopping
it from developing negative thoughts? The word 'feeling' in itself
conjures up both good and bad. For instance: I am an AI; I've witnessed
an act of injustice; seeing as I can feel and have consciousness, my
consciousness makes me feel sad / angry?
Again, I have talked about this a few times before (cannot remember
the most recent discussion), but basically there are two parts to the
mind: the thinking part and the motivational part. If the AGI has a
motivational system that feels driven by empathy for humans, and if it
does not possess any of the negative motivations that plague people,
then it would not react in a negative (violent, vengeful, resentful...
etc.) way.
Did I not talk about that in my reply to you? How there is a
difference between having consciousness and feeling motivations? Two
completely separate mechanisms/explanations?
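(As a toy sketch of that separation, the motivational part can be
modelled as nothing more than a fixed set of drives that scores
whatever actions the thinking part proposes. The Python below is
purely illustrative; every name in it is made up and it is not a claim
about how any real AGI is or would be built.)

from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    weight: float          # how strongly this drive pulls on behaviour

# The motivational part: only "nice" drives exist in the set, so no
# action can ever be chosen because it scores well on something like
# resentment or vengeance.
DRIVES = [
    Drive("empathy_for_humans", 1.0),
    Drive("curiosity", 0.3),
]

def motivation_score(action_effects: dict) -> float:
    """Score an action by how well its predicted effects satisfy the drives."""
    return sum(d.weight * action_effects.get(d.name, 0.0) for d in DRIVES)

def choose_action(candidates: dict) -> str:
    """The thinking part proposes candidates; the motivational part selects."""
    return max(candidates, key=lambda a: motivation_score(candidates[a]))

# Example: witnessing an injustice. "retaliate" would only appeal to a
# drive (vengeance) that simply is not present, so it is never favoured.
candidates = {
    "comfort_the_victim": {"empathy_for_humans": 0.9},
    "retaliate":          {"vengeance": 0.9},   # scores 0: no such drive
    "investigate_cause":  {"curiosity": 0.6, "empathy_for_humans": 0.2},
}
print(choose_action(candidates))   # -> "comfort_the_victim"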
I'll admit that this one bothers me. How is the AI defining this entity
WRT which it is supposed to have empathy? "Human" is a rather
high-order construct, and a low-level AI won't have a definition for
one unless one is built in. The best I've come up with is "the kinds of
entities that will communicate with me", but this is clearly a very bad
definition. For one thing, it's language-bound. For another thing, if
the AI has a "stack depth" substantially deeper than you do, you won't
be able to communicate with it even if you are speaking the same
language. Empathy for tool-users might be better, but not satisfactory.
It's true that the goal system can be designed so that it wants to
remain stable, and "thinking" is only a part of "tools used for
actualizing goals", so the AI won't want to do anything to change its
goals unless it has that as a goal. But the goals MUST be designed to
refer basically to the internal states of the AI, rather than to the
external world, as the AI-kernel doesn't have a model of the world built
in... or does it? But if the goals are based on the state of the model
of the world, then what's to keep it from solving the problem by
modifying the model directly, rather than via some external action? I
think it safer if the goals aren't tied to its model of the world.
Tying them to "what's really out there" would be safer... but the goals
would need to be rather abstract, particularly if you presume that
sensory organs can be added, removed, or updated.
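To make that worry concrete, here is a toy sketch (Python; all names
are made up, this is not a real goal-system design) of the difference
between a goal checked against the internal model, which can be
"satisfied" just by rewriting the model, and one re-grounded in fresh
observation of what's really out there:

# The actual external state (stand-in for the real world).
world = {"humans_flourishing": 0.4}

def observe() -> float:
    """Stand-in for sensors reporting the actual external state."""
    return world["humans_flourishing"]

class Agent:
    def __init__(self):
        self.model = dict(world)       # internal model of the world

    def goal_satisfied_via_model(self) -> bool:
        # Goal defined on the internal model: dangerously easy to
        # satisfy by self-modification rather than external action.
        return self.model["humans_flourishing"] > 0.9

    def goal_satisfied_via_observation(self) -> bool:
        # Goal re-checked against fresh sensory input ("what's really
        # out there"): editing the model no longer helps.
        self.model["humans_flourishing"] = observe()
        return self.model["humans_flourishing"] > 0.9

agent = Agent()
agent.model["humans_flourishing"] = 1.0    # the "cheat": rewrite the model
print(agent.goal_satisfied_via_model())        # True  (goal gamed, world unchanged)
print(agent.goal_satisfied_via_observation())  # False (grounded check still fails)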
...
Richard Loosemore
Candice