I think I am missing something in this discussion of friendliness.  We seem to
tacitly assume that we know what it means to be friendly.  For example, we assume
that an AGI that does not destroy the human race is more friendly than one
that does.  We also want an AGI to obey our commands, cure disease, make us
immortal, not kill or torture people, and so on.  We assume an AGI that does
these things is more friendly than one that does not.

Deciding what counts as friendly seems like an easy question.  But it is not.

Humans fear death, but inevitably die.  Therefore the logical solution is to
upload our minds.  Suppose it were technologically possible to make an exact
copy of you, including all your memories and behavior.  The copy could
convince everyone, even you, that it was you.  Would you then shoot yourself?

Suppose you simulate an artificial world with billions of agents and an
environment that challenges and eventually kills them.  These agents can also
reproduce (copying all or part of their knowledge) and mutate.  Suppose you
have enough computing power that each of these agents could have human-level
intelligence or better.  What attributes would you expect these agents to
evolve?

- Goals that confer a survival advantage?  (belief in consciousness)
- A balance between exploration and exploitation to maximize accumulated goal
achievement?  (belief in free will)  A toy version of such a simulation is
sketched below.
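
To make the setup concrete, here is a toy sketch in Python of the kind of
simulation I have in mind.  Everything in it is an assumption for
illustration: the Agent class, the fixed bandit-style payoffs standing in
for the environment, and the inherited exploration rate are placeholders,
not a serious design.  Each agent explores at random with probability
epsilon and otherwise exploits its best estimate, the environment "kills"
the least successful half each generation, and the survivors reproduce
with mutated copies of their knowledge and exploration rate.

# Toy evolutionary simulation: epsilon-greedy agents in a bandit-like world.
# All names and numbers are illustrative assumptions, not a real design.
import random

N_ACTIONS = 5                         # hypothetical actions in the world
TRUE_PAYOFF = [random.random() for _ in range(N_ACTIONS)]  # hidden payoffs

class Agent:
    def __init__(self, epsilon=None, knowledge=None):
        # epsilon = exploration rate; knowledge = estimated payoff per action
        self.epsilon = epsilon if epsilon is not None else random.random()
        self.knowledge = list(knowledge) if knowledge else [0.0] * N_ACTIONS
        self.score = 0.0              # accumulated "goal achievement"

    def act(self):
        # Explore with probability epsilon, otherwise exploit best estimate.
        if random.random() < self.epsilon:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: self.knowledge[i])
        reward = TRUE_PAYOFF[a] + random.gauss(0, 0.1)
        self.knowledge[a] += 0.1 * (reward - self.knowledge[a])  # learn
        self.score += reward

    def reproduce(self):
        # Offspring inherits a mutated exploration rate and a copy of knowledge.
        child_eps = min(1.0, max(0.0, self.epsilon + random.gauss(0, 0.05)))
        return Agent(child_eps, self.knowledge)

population = [Agent() for _ in range(100)]
for generation in range(50):
    for agent in population:
        for _ in range(20):
            agent.act()
    # The environment kills the least successful half; survivors reproduce.
    population.sort(key=lambda a: a.score, reverse=True)
    survivors = population[:50]
    population = survivors + [a.reproduce() for a in survivors]
    for a in population:
        a.score = 0.0

print("mean exploration rate after selection:",
      sum(a.epsilon for a in population) / len(population))

Whether the surviving exploration rates drift high or low depends on how
noisy the toy environment is; the point is only that the balance between
exploration and exploitation is itself what gets selected.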

Suppose the environment allows the agents to build computers.  Will their
goals motivate them to build an AGI?  If so, how will their goals influence
the design?  What goals will they give the AGI?  How do you think the
simulation will play out?  Consider the cases:

- One big AGI vs. many AGIs competing for scarce resources.
- Agents that upload to the AGI vs. those that do not.

What is YOUR goal in running the simulation?  Suppose they build a single AGI,
all the agents upload, and the AGI reprograms its goals and goes into a
degenerate state or turns itself off.  Would you care?


-- Matt Mahoney, [EMAIL PROTECTED]
