These questions, although important, have little to do with the feasibility of FAI. I think we can all agree that the space of possible universe configurations without sentient life of *any kind* is vastly larger than the space of configurations with sentient life, and that designing an AGI that keeps us in the latter, much smaller space is already enough to make the problem *very hard*, even given this absurdly minimal goal. To shamelessly steal Eliezer's analogy: building an FAI of any kind is like building a 747, and figuring out what to program with regard to volition, death, human suffering, etc. is like learning how to fly the 747 and picking a good destination.

 - Tom

--- Matt Mahoney <[EMAIL PROTECTED]> wrote:

> I think I am missing something on this discussion of friendliness. We seem to tacitly assume we know what it means to be friendly. For example, we assume that an AGI that does not destroy the human race is more friendly than one that does. We also want an AGI to obey our commands, cure disease, make us immortal, not kill or torture people, and so on. We assume an AGI that does these things is more friendly than one that does not.
> 
> This seems like an easy question. But it is not.
> 
> Humans fear death, but inevitably die. Therefore the logical solution is to upload our minds. Suppose it was technologically possible to make an exact copy of you, including all your memories and behavior. The copy could convince everyone, even you, that it was you. Would you then shoot yourself?
> 
> Suppose you simulate an artificial world with billions of agents and an environment that challenges and eventually kills them. These agents can also reproduce (copying all or part of their knowledge) and mutate. Suppose you have enough computing power that each of these agents could have human level intelligence or better. What attributes would you expect these agents to evolve?
> 
> - Goals that confer a survival advantage? (belief in consciousness)
> - A balance between exploration and exploitation to maximize accumulated goal achievement? (belief in free will)
> 
> Suppose the environment allows the agents to build computers. Will their goals motivate them to build an AGI? If so, how will their goals influence the design? What goals will they give the AGI? How do you think the simulation will play out? Consider the cases:
> 
> - One big AGI vs. many AGIs competing for scarce resources.
> - Agents that upload to the AGI vs. those that do not.
> 
> What is YOUR goal in running the simulation? Suppose they build a single AGI, all the agents upload, and the AGI reprograms its goals and goes into a degenerate state or turns itself off. Would you care?
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
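(For anyone who wants to poke at Matt's toy-world question concretely, here is a minimal sketch of that kind of simulation. It is nowhere near agents with "human level intelligence or better"; the knowledge vector, fitness rule, mutation rate, and population size are all made-up stand-ins rather than anything Matt specified. It only shows the copy-mutate-select loop he describes.)

# Toy version of the simulation Matt describes: agents carry a small
# "knowledge" vector, the environment scores them each generation,
# the worst die, and survivors reproduce with copying plus mutation.
# The constants and the fitness rule are arbitrary stand-ins chosen
# just to make the sketch runnable.
import random

POP_SIZE = 200        # far from "billions", but enough to see selection
GENOME_LEN = 8        # stands in for an agent's knowledge/behavior
MUTATION_RATE = 0.05
GENERATIONS = 100

def random_agent():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(agent, environment):
    # Survival advantage = how well the agent's "knowledge" matches the
    # current environment (a moving target, so it keeps challenging them).
    return -sum((a - e) ** 2 for a, e in zip(agent, environment))

def reproduce(parent):
    # Copy all of the parent's knowledge, with occasional mutation.
    child = parent[:]
    for i in range(GENOME_LEN):
        if random.random() < MUTATION_RATE:
            child[i] += random.gauss(0.0, 0.3)
    return child

def run():
    population = [random_agent() for _ in range(POP_SIZE)]
    environment = random_agent()
    for gen in range(GENERATIONS):
        # The environment drifts, so yesterday's optimum eventually fails.
        environment = [e + random.gauss(0.0, 0.02) for e in environment]
        scored = sorted(population, key=lambda a: fitness(a, environment),
                        reverse=True)
        survivors = scored[: POP_SIZE // 2]          # the rest die
        population = survivors + [reproduce(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
        if gen % 20 == 0:
            print(f"generation {gen}: best fitness "
                  f"{fitness(scored[0], environment):.3f}")

if __name__ == "__main__":
    run()

Obviously nothing in this sketch "believes" in consciousness or free will; the point is only that the selection loop itself is easy to state, and everything interesting in Matt's questions lives in what you put inside fitness() and reproduce().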



       
