--- rg <[EMAIL PROTECTED]> wrote:

> OK, see my responses below.
> 
> Matt Mahoney wrote:
> > --- rg <[EMAIL PROTECTED]> wrote:
> >   
> >> Matt: Why will an AGI be friendly ?
> >>     
> >
> > The question only makes sense if you can define friendliness, which we can't.
> >
> >   
> We could say behavior that is acceptable in our society, then.
> In your mail you said you believed they would be friendly,
> so I ask: why would they behave in a way acceptable to us?

Because peers in a competitive network will compete for resources, and humans
control the resources.  I realize that this friendliness will only be temporary.

> > Initially I believe that a distributed AGI will do what we want it to do
> > because it will evolve in a competitive, hostile environment that rewards
> > usefulness.  

> If it evolves in a competitive, hostile environment, it would only do
> what is best for itself.
> How would that coincide with what is best for mankind? Why would it?
> 
> If it is an artificial reward system, it will one day realize it is just
> such a system, designed to evolve it in a particular direction.
> What happens then?

It is not really artificial.  Peers will incrementally improve, and the most
successful ones will be the basis for designing copies; this is a form of
evolution.  Competition for resources is a stable evolutionary goal.
Resources take the form of storage and bandwidth, which cost something to
provide (i.e. information has negative value).  Humans will judge the quality
of information by rating peers, which will in turn rate other peers,
establishing a competition for reputation.
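To make that concrete, here is a rough Python sketch of the reputation idea
(the peer names, ratings, damping constant, and resource numbers are purely
illustrative, not part of any actual design): human ratings seed a reputation
score, peer-to-peer ratings propagate it, and storage/bandwidth get allocated
in proportion to the result.

    # Illustrative sketch only: reputation flows from human ratings through
    # peer-to-peer ratings; resources are then allocated by reputation.

    # peer_ratings[i][j] = how much peer i trusts peer j (normalized per rater)
    peer_ratings = {
        "A": {"B": 0.9, "C": 0.1},
        "B": {"A": 0.5, "C": 0.5},
        "C": {"A": 1.0},
    }

    # direct human feedback seeds the reputation scores
    human_ratings = {"A": 0.7, "B": 0.3, "C": 0.0}

    DAMPING = 0.5          # weight of human ratings vs. peer ratings (assumed)
    TOTAL_BANDWIDTH = 100  # arbitrary units of resource to divide up

    reputation = dict(human_ratings)
    for _ in range(20):    # iterate toward a fixed point
        new_rep = {}
        for peer in reputation:
            # reputation earned from other peers, weighted by their own reputation
            peer_score = sum(reputation[r] * peer_ratings.get(r, {}).get(peer, 0.0)
                             for r in reputation)
            new_rep[peer] = (DAMPING * human_ratings.get(peer, 0.0)
                             + (1 - DAMPING) * peer_score)
        reputation = new_rep

    # allocate storage/bandwidth in proportion to reputation
    total = sum(reputation.values()) or 1.0
    allocation = {p: TOTAL_BANDWIDTH * s / total for p, s in reputation.items()}
    print(allocation)

The point of the sketch is only the selection pressure: peers that humans (and
well-regarded peers) rate highly end up with more resources, so their designs
are the ones that get copied.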

I realize that friendliness fails once information becomes too complex for
humans to understand.  After that, the competition for computational resources
will continue without human involvement.

I am interested in how the safety of distributed AI can be improved.  I
realize that centralized designs are safer, but I think they are less likely
to emerge first because they are at a disadvantage in the availability of
resources, both human and computational.  We need to focus on the greater risk.

I don't think an intelligence explosion can be judged as good or bad,
regardless of the outcome.  It just is.  The real risk to humanity is that our
goals evolved to ensure survival of the species in primitive times.  In a
world where we can have everything we want, those same goals will destroy us.


-- Matt Mahoney, [EMAIL PROTECTED]
