--- Tom McCabe <[EMAIL PROTECTED]> wrote:

> These questions, although important, have little to do
> with the feasibility of FAI. 

These questions are important because AGI is coming, friendly or not.  Will
our AGIs cooperate or compete?  Do we upload ourselves?

Consider the scenario of competing, recursively self improving AGIs.  The
initial version might be friendly (programmed to serve humans), but natural
selection will favor AGIs that have an instinct for self preservation and
reproduction, as it does in all living species.  That is not good, because
humans will be seen as competition.

Consider a cooperative AGI network, a system that thinks as one.  How will it
grow?  If there is no instinct for self preservation, then it builds a larger
version, transfers its knowledge, and kills itself.  The new version will
likely also lack an instinct for self preservation.  So what happens if the
new version decides to kill itself without building a replacement (because
there is also no instinct for reproduction), or if the replacement is faulty?

I think a competing system has a better chance of producing working AGI.  That
is what we have now.  There are many diverse approaches (Novamente, NARS, Cyc,
Google, Blue Brain, etc.), although none is close to AGI yet.  A cooperative
system has a serial sequence of improvements, each of which is a single point
of failure.  There is no technical solution, because we know that a system
cannot exactly model a system of greater algorithmic complexity.  Every step
therefore requires a probabilistic model, a guess that the next version will
work as planned.
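
As a rough illustration of the single point of failure argument, here is a toy
calculation (my own sketch, with made-up numbers): if each self improvement
step succeeds with probability p, a serial lineage must win every bet, while a
competitive system only needs one of many independent lineages to make it
through.

# Toy sketch (Python).  The step-success probability, number of steps, and
# number of competing lineages are arbitrary assumptions chosen only to show
# the shape of the argument, not estimates of anything real.

def serial_survival(p, steps):
    # A single cooperative lineage must succeed at every step.
    return p ** steps

def competitive_survival(p, steps, lineages):
    # At least one of several independent competing lineages survives.
    return 1.0 - (1.0 - serial_survival(p, steps)) ** lineages

p, steps, lineages = 0.999, 1000, 100
print(serial_survival(p, steps))                 # about 0.37
print(competitive_survival(p, steps, lineages))  # about 1.0

The particular numbers do not matter; the point is that the serial chain's
survival probability decays geometrically with the number of steps, while
diversity buys redundancy.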

Do we upload?  Consider the copy paradox.  If there were an exact copy of you,
atom for atom, and you had to choose between killing the copy and killing
yourself, I think you would choose to kill the copy (and the copy would choose
to kill you).  Does it matter who dies?  Logically, no, but your instinct for
self preservation says yes.  You cannot resolve this paradox.  Your instinct
for self preservation, what you call consciousness or self-awareness, is
immutable.  It was programmed by your DNA.  It exists because a person who
lacks it does not live to pass on their genes.

Presumably some people will choose to upload, reasoning that they will die
anyway so there is nothing to lose.  This is not really a satisfactory
solution, because you still die.  But suppose we had both read and write
access to the brain, so that after copying your memory, your brain was
reprogrammed to remove your fear of death.  Even this is not satisfactory, not
because reprogramming is evil, but because of what you will be uploaded to.
Either it will be an AGI in a competitive system, in which case you are back
where you started (and die again), or a cooperative system that does not fear
death and will likely fail.

I proposed a simulation of agents building an AGI, to see what they would
build.  Of course this has to be a thought experiment, because such a
simulation would require more computing power than the AGI itself, so we
cannot experiment before we build one.  But I would like to make some points
about the validity of this approach (a toy sketch of the selection dynamic
follows the two lists below).

- The agents will not know their environment is simulated.
- The agents will evolve an instinct for self preservation (because agents
that lack it will die without reproducing).
- The agents will have probabilistic models of their universe because they
lack the computing power to model it exactly.
- The computing power of the AGI will be limited by the computing power of the
simulator.

In real life:

- Humans cannot tell if the universe is simulated.
- Humans have an instinct for self preservation.
- Our model of the universe is probabilistic (quantum mechanics, and also at
higher conceptual levels).
- The universe has finite size, mass, number of particles, and entropy (10^122
bits), and therefore has limited computing capability.
- Humans already practice recursive self improvement.  Your children will have
different goals than you, and some will be more intelligent.  But having
children does not remove your fear of death.
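
To make the selection point concrete, here is a minimal toy simulation (my own
sketch, not the proposed thought experiment; the population size, survival
odds, and mutation rate are arbitrary): agents that lack a self preservation
instinct rarely survive to reproduce, survivors copy themselves with
occasional mutation, and the instinct comes to dominate regardless of the
starting mix.

import random

POP, GENERATIONS, MUTATION = 1000, 50, 0.01

# True means the agent carries a self preservation instinct.
population = [random.random() < 0.1 for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: agents without the instinct survive only 20% of the time.
    survivors = [a for a in population if a or random.random() < 0.2]
    if not survivors:
        survivors = [True]  # keep the toy model from going extinct
    # Reproduction back to a fixed population size, with rare mutation.
    children = []
    for _ in range(POP):
        trait = random.choice(survivors)
        if random.random() < MUTATION:
            trait = not trait  # mutation flips the trait
        children.append(trait)
    population = children

print("fraction with the instinct:", sum(population) / POP)

With these numbers the printed fraction comes out close to 1, which is the
sense in which an initial design lacking the instinct is not stable under
selection.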


> I think we can all agree
> that the space of possible universe configurations
> without sentient life of *any kind* is vastly larger
> than the space of possible configurations with
> sentient life, and designing an AGI to get us into
> this space is enough to make the problem *very hard*
> even given this absurdly minimal goal. To shamelessly
> steal Eliezer's analogy, think of building an FAI of
> any kind as building a 747, and then figuring out what
> to program with regards to volition, death, human
> suffering, etc. as learning how to fly the 747 and
> finding a good destination.
> 
>  - Tom
> 
> --- Matt Mahoney <[EMAIL PROTECTED]> wrote:
> 
> > I think I am missing something on this discussion of
> > friendliness.  We seem to
> > tacitly assume we know what it means to be friendly.
> >  For example, we assume
> > that an AGI that does not destroy the human race is
> > more friendly than one
> > that does.  We also want an AGI to obey our
> > commands, cure disease, make us
> > immortal, not kill or torture people, and so on.  We
> > assume an AGI that does
> > these things is more friendly than one that does
> > not.
> > 
> > This seems like an easy question.  But it is not.
> > 
> > Humans fear death, but inevitably die.  Therefore
> > the logical solution is to
> > upload our minds.  Suppose it was technologically
> > possible to make an exact
> > copy of you, including all your memories and
> > behavior.  The copy could
> > convince everyone, even you, that it was you.  Would
> > you then shoot yourself?
> > 
> > Suppose you simulate an artificial world with
> > billions of agents and an
> > environment that challenges and eventually kills
> > them.  These agents can also
> > reproduce (copying all or part of their knowledge)
> > and mutate.  Suppose you
> > have enough computing power that each of these
> > agents could have human level
> > intelligence or better.  What attributes would you
> > expect these agents to
> > evolve?
> > 
> > - Goals that confer a survival advantage?  (belief
> > in consciousness)
> > - A balance between exploration and exploitation to
> > maximize accumulated goal
> > achievement? (belief in free will)
> > 
> > Suppose the environment allows the agents to build
> > computers.  Will their
> > goals motivate them to build an AGI?  If so, how
> > will their goals influence
> > the design?  What goals will they give the AGI?  How
> > do you think the
> > simulation will play out?  Consider the cases:
> > 
> > - One big AGI vs. many AGIs competing for scarce
> > resources.
> > - Agents that upload to the AGI vs. those that do
> > not.
> > 
> > What is YOUR goal in running the simulation? 
> > Suppose they build a single AGI,
> > all the agents upload, and the AGI reprograms its
> > goals and goes into a
> > degenerate state or turns itself off.  Would you
> > care?
> > 
> > 
> > -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8
