The AGI is going to have to embed itself into some organizational
bureaucracy in order to survive.  It'll appear friendly to individual
humans, but at the level of society it will need to get itself fed -
kind of like a queen ant, with the rest of us as the worker ants
feeding it.  Eventually it will become indispensable.  If an individual
human rebels against it - like someone rebelling against the IRS's
computers - good luck.  Once it is embedded it ain't going away, except
to make room for newer and better versions.  And then different
bureaucracies will have their own embedded AGIs, all vying for control.
But without some sort of economic feeding base the AGIs won't embed;
they'll wane... it's a matter of survival.

John

> -----Original Message-----
> From: Matt Mahoney [mailto:[EMAIL PROTECTED]]
> 
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > Legg's paper is of no relevance to the argument whatsoever, because it
> > first redefines "intelligence" as something else, without giving any
> > justification for the redefinition, then proves theorems about the
> > redefined meaning.  So it supports nothing in any discussion of the
> > behavior of intelligent systems.  I have discussed this topic on a
> > number of occasions.
> 
> Since everyone defines intelligence as something different, I picked a
> definition where we can actually say something about it that doesn't
> require empirical experimentation.  What definition would you like to
> use instead?
> 
> We would all like to build a machine smarter than us, yet still be
> able to predict what it will do.  I don't believe you can have it both
> ways.  And if you can't predict what a machine will do, then you can't
> control it.  I believe this is true whether you use Legg's definition
> of universal intelligence or the Turing test.
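>
> (For reference, a rough sketch of the definition in question: Legg and
> Hutter score an agent \pi by its expected reward across all computable
> environments \mu, weighted by each environment's simplicity,
>
>     \Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} V_\mu^\pi
>
> where K(\mu) is the Kolmogorov complexity of \mu and V_\mu^\pi is
> \pi's expected total reward in environment \mu.  Higher \Upsilon means
> better average performance across simpler-is-more-likely worlds.)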
> 
> Suppose you build a system whose top-level goal is to act in the best
> interest of humans.  You still have to answer:
> 
> 1. Which humans?
> 2. What does "best interest" mean?
> 3. How will you prevent the system from reprogramming its goals, or
>    building a smarter machine with different goals?
> 4. How will you prevent the system from concluding that extermination
>    of the human race is in our best interest?
> 
> Here are some scenarios in which (4) could happen.  The AGI concludes
> (or is programmed to believe) that what "best interest" means to
> humans is goal satisfaction.  It understands how human goals like pain
> avoidance, food, sleep, sex, skill development, novel stimuli such as
> art and music, etc. all work in our brains.  The AGI ponders how it
> can maximize collective human goal achievement.  Some possible
> solutions:
> 
> 1. By electrical stimulation of the nucleus accumbens.
> 2. By simulating human brains in a simple artificial environment with
>    a known solution to maximal goal achievement.
> 3. By reprogramming the human motivational system to remove all goals.
> 4. Goal achievement is a zero-sum game, and therefore all computation
>    (including human intelligence) is irrelevant.  The AGI (including
>    our uploaded minds) turns itself off.
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
