Thank you for that. It would be an interesting problem
to build a "box" AGI without morality, which
paperclips everything within a given radius of some
fixed position and then stops without disturbing the
matter outside. It would obviously be far simpler to
build such an AGI than a true FAI, and it would be
very useful as a test of capabilities. Even just
working out the theory could be an important advance.
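To make the target concrete, here is a toy sketch in Python of what
such a spatially bounded objective with a hard stop might look like.
The constants and function names are all illustrative, not a proposal:

import math

# Toy sketch of a spatially bounded objective with a hard stop.
# CENTER, RADIUS, and TARGET are made-up illustrative constants.

CENTER = (0.0, 0.0, 0.0)   # the fixed position the goal is anchored to
RADIUS = 100.0             # nothing outside this sphere counts at all
TARGET = 10_000            # paperclip count at which the agent must halt

def inside_region(pos):
    """True if a position lies within the permitted sphere."""
    return math.dist(pos, CENTER) <= RADIUS

def utility(paperclip_positions):
    """Only paperclips inside the region score, and the score is
    capped, so matter outside the sphere is worth exactly nothing."""
    count = sum(1 for p in paperclip_positions if inside_region(p))
    return min(count, TARGET)

def should_halt(paperclip_positions):
    """Hard stopping condition: once the target is met, do nothing."""
    return utility(paperclip_positions) >= TARGET

Writing the objective down is the trivial part, of course; the open
theory problem is proving that an optimizer of it never finds it
instrumentally useful to touch anything outside the sphere.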

 - Tom

--- Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> On 15/05/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > We would all like to build a machine smarter than us, yet still be
> > able to predict what it will do.  I don't believe you can have it
> > both ways.  And if you can't predict what a machine will do, then
> > you can't control it.  I believe this is true whether you use
> > Legg's definition of universal intelligence or the Turing test.
> 
> We might not be able to predict what the superintelligent machine is
> going to say, but still be able to impose constraints on what it is
> going to do. For a start, it would probably be unwise to give such a
> machine any motivation at all, other than the motivation of the
> ideal, disinterested scientist, and you certainly wouldn't want it
> burdened with anything as dangerous as emotion or morality (most of
> the truly great monsters of history were convinced they were doing
> the right thing). So you feed this machine your problem, how to
> further the interests of humanity, and it gives what it honestly
> believes to be the right answer, which may well involve destroying
> the world. But that doesn't mean it *wants* to save humanity, or
> destroy the world; it just presents its answer, as dispassionately
> as a pocket calculator presents its answer to a problem in
> arithmetic. Entities who do have desires and emotions will take this
> answer and decide whether to act on it, or perhaps put the question
> to a different machine if there is some difficulty interpreting the
> result. If the machine continues producing unacceptable results it
> will probably be reprogrammed, scrapped, or kept around for
> entertainment purposes. The machine won't care either way, unless it
> is specifically designed to care. There is no necessary connection
> between motivation and intelligence, or any other ability.
> 
> -- 
> Stathis Papaioannou
> 
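One thing worth noting about Stathis's answering/acting split: it is
essentially an oracle arrangement, and the division of labour is
simple enough to caricature in a few lines of Python. solve() and
act() here are made-up stand-ins, not anyone's actual design:

def solve(problem):
    # Hypothetical stand-in for the superintelligent solver.
    return "(the best answer the machine can compute for: %s)" % problem

def act(answer):
    # Acting on an answer is a separate, human-initiated step.
    print("Acting on:", answer)

def oracle(problem):
    """The machine only answers; it has no actuators and no goals."""
    return solve(problem)

def gatekeeper(problem):
    """Entities with desires and emotions (us) decide what to do."""
    answer = oracle(problem)
    print("Proposed answer:", answer)
    if input("Act on this answer? [y/N] ").strip().lower() == "y":
        act(answer)
    # Otherwise the answer is shelved; the machine won't care either way.

if __name__ == "__main__":
    gatekeeper("how to further the interests of humanity")

The whole argument is in which side of that function boundary the
actuators live on; the machine's side never calls act() itself.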



       

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=8eb45b07
