--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > 
> >> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >>>> We just need to control the AGI's goal system.
> >>> You can only control the goal system of the first iteration.
> >>
> >> ...and you can add rules for its creations (e.g. stick with the same
> >> goals/rules unless authorized otherwise)
> > 
> > You can program the first AGI to program the second AGI to be friendly.
> > You can program the first AGI to program the second AGI to program the
> > third AGI to be friendly.  But eventually you will get it wrong, and if
> > not you, then somebody else, and evolutionary pressure will take over.
> 
> This statement has been challenged many times.  It is based on 
> assumptions that are, at the very least, extremely questionable, and 
> according to some analyses, extremely unlikely.

I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.



-- Matt Mahoney, [EMAIL PROTECTED]
