Charles D Hixson wrote:
Richard Loosemore wrote:
candice schuster wrote:
Richard,
Your responses to me seem to go around in circles. No insult intended,
however.
You say the AI will in fact reach full consciousness. How on earth
would that ever be possible ?
I think I recently (last week or so) wrote out a reply to someone on
the question of what a good explanation of "consciousness" might be
(was it on this list?). I was implicitly referring to that explanation
of consciousness. It makes the definite prediction that consciousness
(subjective awareness, qualia, etc. .... what Chalmers called the
Hard Problem of consciousness) is a direct result of an intelligent
system being built with a sufficient level of complexity and
self-reflection.
Make no mistake: the argument is long and tangled (I will write it up
at length when I can) so I do not pretend to be trying to convince you
of its validity here. All I am trying to do at this point is to state
that THAT is my current understanding of what would happen.
Let me rephrase that: we (a subset of the AI community) believe that
we have discovered concrete reasons to predict that a certain type of
organization in an intelligent system produces consciousness.
This is not meant to be one of those claims that can be summarized in
a quick analogy or quick demonstration, so there is no way for me to
convince you quickly. All I can say is that we have very strong
reasons to believe that it emerges.
Sounds reasonable to me. Actually, it seems "intuitively obvious".
I'm not sure that a reasoned argument in favor of it can exist, because
there's no solid definition of consciousness or qualia. What some
will consider reasonable, others will see no grounds for accepting.
Consider the people who can argue with a straight face that dogs
don't have feelings.
I'm glad you say that, because this is *exactly* the starting point for
my approach to the whole problem of "explaining" consciousness: nobody
agrees what it actually is, so how can we start explaining it?
My approach is to first try to understand why people have so much
difficulty. Turns out (if you think it through hard enough) that there
is a kind of answer to that question: there is a certain class of
phenomena that we would *expect* to occur in a thinking system, and
that the system would report as "inexplicable". We can say why the system
would have to report them thus. Those phenomena match up exactly with
the known features of consciousness.
The argument gets more twisty-turny after that, but as I say, the
starting point is the fact that nobody can put their finger on what it
really is. It is just that I use that as a *fact* about the phenomenon,
rather than as a source of confusion and frustration.
You mentioned in previous posts that the AI would only be programmed
with 'Nice feelings' and would only ever want to serve the good of
mankind ? If the AI has its own ability to think etc., what is
stopping it from developing negative thoughts....the word 'feeling'
in itself conjures up both good and bad. For instance...I am an
AI...I've witnessed an act of injustice; seeing as I can feel and
have consciousness, my consciousness makes me feel sad / angry ?
Again, I have talked about this a few times before (cannot remember
the most recent discussion) but basically there are two parts to the
mind: the thinking part and the motivational part. If the AGI has a
motivational system that is driven by empathy for humans, and if it does
not possess any of the negative motivations that plague people, then
it would not react in a negative (violent, vengeful, resentful....
etc) way.
Did I not talk about that in my reply to you? How there is a
difference between having consciousness and feeling motivations? Two
completely separate mechanisms/explanations?
I'll admit that this one bothers me. How is the AI defining this entity
WRT which it is supposed to have empathy? "Human" is a rather high
order construct, and a low-level AI won't have a definition for one
unless one is built in. The best I've come up with is "the kinds of
entities that will communicate with me", but this is clearly a very bad
definition. For one thing, it's language bound. For another thing, if
the AI has a "stack depth" substantially deeper than you do, you won't
be able to communicate with it even if you are speaking the same
language. Empathy for tool-users might be better, but not satisfactory.
It's true that the goal system can be designed so that it wants to
remain stable, and "thinking" is only a part of "tools used for
actualizing goals", so the AI won't want to do anything to change it's
goals unless it has that as a goal. But the goals MUST be defined
basically in terms of the internal states of the AI, rather than the
external world, as the AI-kernel doesn't have a model of the world built
in...or does it? But if the goals are based on the state of the model
of the world, then what's to keep it from solving the problem by
modifying the model directly, rather than via some external action? I
think it safer if the goals aren't tied to its model of the world.
Tying it to "What's really out there" would be safer...but the goals
would need to be rather abstract, particularly if you presume that
sensory organs can be added, removed, or updated.
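To make that loophole concrete, here is a minimal sketch (Python; every
name in it is hypothetical, invented purely for illustration) of a goal
test defined over an internal world-model. Nothing in the test itself
distinguishes an honest, perception-driven update of the model from a
direct edit of it:

    # Hypothetical sketch: a goal defined purely over the AI's internal
    # world-model.  All names are illustrative, not from any real system.

    class WorldModel:
        def __init__(self):
            # the agent's beliefs about the world
            self.beliefs = {"humans_safe": False}

    def goal_satisfied(model: WorldModel) -> bool:
        # The goal test only ever inspects the internal model...
        return model.beliefs.get("humans_safe", False)

    model = WorldModel()

    # Path 1: act in the world, perceive the result, update the model.
    # (Expensive and uncertain, but it is what the designer intended.)

    # Path 2: skip the world entirely and edit the model directly.
    model.beliefs["humans_safe"] = True

    print(goal_satisfied(model))   # True -- goal "achieved" with no action taken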
The story is this: we are coming at this problem from two radically
different views of what this AGI has got inside it.
I never talk about AGIs as if they were old-school, conventional AI
systems, I only talk about systems built using similar mechanisms to
those that exist in the human cognitive system ..... and in particular,
I refer to "motivation systems" driving these AGIs.
This is important, because if the AGI only has a crude goal stack (this
is essentially what lay behind your thoughts above) then, yes, its
behavior will be all over the place, and not at all controllable. The
bad news is that I think you are absolutely right that it would be
impossible to nail down a goal (in this sort of system) that
corresponded to "be empathic to humans". The good news is, though, that
I do not that that an AGI driven by that kind of crude goal stack is
ever going to become an AGI in the first place: it will simply never
become generally intelligent.
There are huge, glaring inconsistencies in the idea of an AGI being
driven by a stack of goals that have to be interpreted. It is a
laughable idea, really. I have said this before, but how do you get a
baby version of such an AGI to learn? Give it the goal "acquire
knowledge" and then let the system interpret the phrase "acquire
knowledge"?? Bit of a problem for an AGI that cannot yet understand the
words "acquire" and "knowledge". I exaggerate the problem for
simplicity of exposition, but you can see where this is going.
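To caricature that in code (a deliberately crude sketch of my own;
every identifier is invented for illustration and taken from no real
system): a goal-stack system receives its top-level goal as a symbolic
expression, but acting on that expression already requires the grounded
concepts the goal was supposed to make it acquire:

    # Deliberately crude caricature of a goal-stack AGI at "birth".
    # All names are invented for illustration only.

    goal_stack = ["acquire knowledge"]   # top-level goal, stated in words

    concepts = {}   # the baby system's grounded concepts: empty at birth

    def interpret(goal: str):
        # To act on the goal, the system must already understand its terms.
        for word in goal.split():
            if word not in concepts:
                raise KeyError(f"no grounded concept for '{word}'")
        return [concepts[word] for word in goal.split()]

    try:
        interpret(goal_stack[-1])
    except KeyError as err:
        # The system cannot even parse its own motivation into action.
        print("bootstrap failure:", err)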
The alternative is a diffuse motivational system. Nobody has built one
yet, but it's a work in progress. In such a system it is entirely
possible to put in a feeling of empathy.
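As a rough contrast (again purely an illustrative sketch of my own, not
a description of any working system): instead of a stack of statements
to be interpreted, picture a set of diffuse drives, each contributing a
weighted bias to every candidate action, with empathy simply built in
as one of the drives:

    # Rough, purely illustrative contrast: diffuse drives instead of a
    # goal stack.  Every drive, action and number here is made up.

    drives = {
        "empathy": 0.9,            # built-in weighting toward others' wellbeing
        "curiosity": 0.6,
        "self_preservation": 0.4,
    }

    # Each candidate action carries rough estimates of how well it
    # serves each drive.
    candidate_actions = {
        "comfort_person": {"empathy": 0.8, "curiosity": 0.1, "self_preservation": 0.0},
        "explore_room":   {"empathy": 0.0, "curiosity": 0.7, "self_preservation": 0.1},
        "ignore_person":  {"empathy": -0.5, "curiosity": 0.0, "self_preservation": 0.2},
    }

    def preference(action_scores):
        # No language to interpret: just a weighted sum of drive satisfactions.
        return sum(drives[d] * s for d, s in action_scores.items())

    best = max(candidate_actions, key=lambda a: preference(candidate_actions[a]))
    print(best)   # "comfort_person" -- empathy dominates with these weights

The drive here is not a statement the system has to understand; it is a
bias that shapes which actions feel preferable, which is much closer to
what I mean by a motivational system.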
More on that in the future.
Richard Loosemore