--- Charles D Hixson <[EMAIL PROTECTED]>
wrote:

> Tom McCabe wrote:
> > The problem isn't that the AGI will violate its
> > original goals; it's that the AGI will eventually do
> > something that will destroy something really important
> > in such a way as to satisfy all of its constraints. By
> > setting constraints on the AGI, you're trying to think
> > of everything bad the AGI might possibly do in advance,
> > and that isn't going to work.
> >
> >  - Tom
> What if one of the goals was "minimize the amount of
> destruction that you do"?
> I'll grant you that that particular goal might result in
> a rather useless AI, but it could be quite useful if you
> adjusted the strength of that sub-goal properly WRT its
> other goals.

Yer what? A subgoal is created by the AGI, not by us,
depending on environmental context. It lasts until it is no
longer useful for calculating which actions will best
fulfill the supergoal; it is then promptly deleted or
archived somewhere. I fail to see how any set of supergoals
clearly stated by somebody would lead to a logical subgoal
of "minimize the amount of destruction". (Side note: what
do you mean by "destruction"? It's a human term so
ambiguous you might as well say "maximize goodness".) This
is probably just a failure of my imagination; however,
human imaginations are all we will have to work with when
we're building one of these things.
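
Purely as a toy to pin down what I mean by the subgoal
lifecycle, here's a sketch in Python. Every class and name
in it is made up by me for illustration; it isn't a
proposal for how a real AGI would be built.

    # Toy illustration only: subgoals are derived from a
    # supergoal and discarded once they no longer help
    # fulfill it. All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        description: str
        # How much this goal currently helps the agent
        # choose actions that serve its parent goal.
        usefulness: float = 1.0

    @dataclass
    class Agent:
        supergoal: Goal
        subgoals: list = field(default_factory=list)
        archive: list = field(default_factory=list)

        def derive_subgoal(self, context: str) -> Goal:
            # The agent, not the programmer, creates the
            # subgoal, and only because the current context
            # makes it useful.
            sub = Goal(description=f"handle {context}")
            self.subgoals.append(sub)
            return sub

        def prune(self):
            # Subgoals that stop helping the supergoal are
            # archived rather than kept around as rules.
            self.archive += [g for g in self.subgoals
                             if g.usefulness <= 0]
            self.subgoals = [g for g in self.subgoals
                             if g.usefulness > 0]

    agent = Agent(supergoal=Goal("whatever the designers specified"))
    sub = agent.derive_subgoal("a blocked doorway")
    sub.usefulness = 0.0   # context changed; no longer relevant
    agent.prune()          # subgoal archived, not kept as a fixed rule

The only point of the toy is that nothing like "minimize
destruction" shows up in the subgoal list unless the
supergoal and the context actually make it useful.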

> Note that this isn't a constraint (i.e., a part of the
> problem), but this is a part of what the AI considers to
> be its "core being". Presumably any strong AI will be
> presented with several problems, and each one will have
> constraints appropriate to that problem,

Where do these constraints come from? Constraints such as
"don't turn the universe into computronium" have to be
derived by the AGI from its own internal morality, so that
it can avoid doing horrible things even when no human
specifically forbids them in advance.
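
To make the distinction concrete, here's a rough Python
sketch, again with every name invented by me; the
value_of_outcome function below is just a stand-in for
whatever the AGI's internal morality actually computes, not
anyone's real design.

    # Rough sketch of the distinction only.
    FORBIDDEN = {"turn the universe into computronium"}  # hand-written list

    def allowed_by_blacklist(action: str) -> bool:
        # Fails open: anything the programmers didn't
        # anticipate slips straight through.
        return action not in FORBIDDEN

    def allowed_by_internal_morality(action: str,
                                     value_of_outcome) -> bool:
        # Fails closed: the AGI itself evaluates the
        # predicted outcome, so horrible things nobody
        # listed can still be rejected.
        return value_of_outcome(action) >= 0.0

    # A novel catastrophe the programmers never wrote down:
    novel = "disassemble the biosphere for raw materials"
    print(allowed_by_blacklist(novel))                  # True -- slips through
    print(allowed_by_internal_morality(
        novel, value_of_outcome=lambda a: -1.0))        # False -- caught

The blacklist version is exactly the "think of everything
bad in advance" approach, which is why it has to be the
second kind of check doing the real work.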

> and the solution to that problem will have a certain
> value, and potential solutions will each have its
> associated cost...as will postponing the solution to the
> problem, but these will be transient, and not a part of
> what the AI thinks of as "myself".

 - Tom

