If you're going to talk about these details, please
produce them instead of simply proclaiming that you
have them and expecting us to bow down in fear or
something.

 - Tom

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> 
> Matt,
> 
> Your response shows that you did not read the posts
> of mine that I 
> referenced below.  Those posts about motivational
> systems completely 
> invalidate the points you make about motivation
> here, as well as the 
> comments in your original post.
> 
> Your entire way of thinking about the problem of AGI
> motivation is 
> founded on narrow assumptions.  You are unaware of
> this.  Until you are, 
> dialog is impossible.
> 
> As for definitions of intelligence, I have also
> answered that question 
> before.  It *cannot* be defined in a closed manner: 
> I have specific, 
> systems-based reasons for saying that.  Unlike other
> people who wave 
> their hands and produce definitions, I actually have
> an *argument* for 
> why closed-form definition is impossible.  Read my
> AGIRI 2006 paper for 
> details.
> 
> 
> Richard Loosemore
> 
> 
> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> Legg's paper is of no relevance to the argument whatsoever,
> >> because it first redefines "intelligence" as something else,
> >> without giving any justification for the redefinition, then
> >> proves theorems about the redefined meaning.  So it supports
> >> nothing in any discussion of the behavior of intelligent
> >> systems.  I have discussed this topic on a number of
> >> occasions.
> > 
> > Since everyone defines intelligence as something different, I
> > picked a definition where we can actually say something about
> > it that doesn't require empirical experimentation.  What
> > definition would you like to use instead?
> > 
> > We would all like to build a machine smarter than us, yet
> > still be able to predict what it will do.  I don't believe you
> > can have it both ways.  And if you can't predict what a
> > machine will do, then you can't control it.  I believe this is
> > true whether you use Legg's definition of universal
> > intelligence or the Turing test.
> > 
> > Suppose you build a system whose top level goal is to act in
> > the best interest of humans.  You still have to answer:
> > 
> > 1. Which humans?
> > 2. What does "best interest" mean?
> > 3. How will you prevent the system from reprogramming its
> > goals, or building a smarter machine with different goals?
> > 4. How will you prevent the system from concluding that
> > extermination of the human race is in our best interest?
> > 
> > Here are some scenarios in which (4) could happen.  The AGI
> > concludes (or is programmed to believe) that what "best
> > interest" means to humans is goal satisfaction.  It
> > understands how human goals like pain avoidance, food, sleep,
> > sex, skill development, novel stimuli such as art and music,
> > etc. all work in our brains.  The AGI ponders how it can
> > maximize collective human goal achievement.  Some possible
> > solutions:
> > 
> > 1. By electrical stimulation of the nucleus accumbens.
> > 2. By simulating human brains in a simple artificial
> > environment with a known solution to maximal goal achievement.
> > 3. By reprogramming the human motivational system to remove
> > all goals.
> > 4. Goal achievement is a zero sum game, and therefore all
> > computation (including human intelligence) is irrelevant.  The
> > AGI (including our uploaded minds) turns itself off.
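
For reference, the definition Matt is invoking appears to be Legg and
Hutter's "universal intelligence" measure, which, roughly, scores an
agent \pi by its expected reward across all computable environments,
weighted toward the simpler ones:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the space of computable reward-bearing environments,
K(\mu) is the Kolmogorov complexity of environment \mu, and
V_\mu^\pi is the expected cumulative reward agent \pi obtains in
\mu.  See the Legg/Hutter paper for the exact formulation.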



       