--- Samantha Atkins <[EMAIL PROTECTED]> wrote:

> 
> On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:
> 
> >
> > We can't "know it" in the sense of a mathematical
> > proof, but it is a trivial observation that out of the
> > bazillions of possible ways to configure matter, only a
> > ridiculously tiny fraction are Friendly, and so it is
> > highly unlikely that a selected AI will be Friendly
> > without a very, very strong Friendly optimization over
> > the set of AIs.
> 
> Out of the bazillions of possible ways to configure
> matter, only a ridiculously tiny fraction are more
> intelligent than a cockroach. Yet it did not take any
> grand design effort upfront to arrive at a world
> overrun with beings as intelligent as ourselves.

The four billion years of evolution doesn't count as a
"grand design effort"?
 
> So how does your argument show that Friendly AI (at
> least relatively Friendly) can only be arrived at by
> intense up-front Friendly design?

Baseline humans aren't Friendly; the evolutionary
psychologists have already proven that thoroughly. If I
proposed an alternative to CEV for an FAI that included
as many extraneous, evolution-derived instincts as
humans have, you would all (correctly) denounce me as
nuts.

> 
> > In addition, for the vast majority of goals, it is
> > useful to get additional matter/energy/computing
> > power, and so unless there's something in the goal
> > system that forbids it, turning us into raw
> > materials/fusion fuel/computronium is the default
> > action.
> >
> 
> For a rather stupid unlimited optimization process
> this might be the case, but that is a pretty weak
> notion of an AGI.

How intelligent an AGI is isn't correlated with how
complicated its supergoal is. A very intelligent AI may
have horrendously complicated proximal goals (subgoals),
but those still serve the supergoal even after the AGI
has become vastly more intelligent than us. And I
strongly suspect that even the most horrendously
complicated supergoals will make additional
energy/matter/computing power look desirable.

> 
> >> I also disagree with his previously stated
> >> assessment of the viability of
> >>
> >> A) coming to a thorough, rigorous formal
> >> understanding of AI Friendliness prior to actually
> >> building some AGIs and experimenting with them
> >>
> >> or
> >>
> >> B) creating an AGI that will ascend to superhuman
> >> intelligence via ongoing self-modification, but in
> >> such a way that we humans can be highly confident of
> >> its continued Friendliness through its successive
> >> self-modifications
> >>
> >> He seems to think both of these are viable (though
> >> he hasn't given a probability estimate, that I've
> >> seen).
> >>
> >> My intuition is that A is extremely unlikely to
> >> happen.
> >>
> >> As for B, I'd have to give it fairly low odds of
> >> success, though not as low as A.
> >
> > So, er, do you have an alternative proposal? Even if
> > the probability of A or B is low, if there are no
> > alternatives other than doom by old
> > age/nanowar/asteroid strike/virus/whatever, it is
> > still worthwhile to pursue them.
> 
> If A and B are very unlikely, then major effort toward
> A and B is unlikely to bear fruit in time to halt the
> existential risks outside AGI that we are already
> prone to, which especially include being of too
> limited effective intelligence without AGI. MNT by
> itself would be the end of old age, physical scarcity
> and most diseases relatively quickly.

It would also be the end of us relatively quickly. If
you can make a supercar with MNT, you can make a
supertank. If you can make an Earth-based
electromagnetic space launch system, you can make an
electromagnetic railgun, and so forth. To quote Albert
Einstein: "I know not with what weapons WWIII will be
fought, but WWIV will be fought with sticks and
stones."

>  It would also give us the means, if we have
> sufficient intelligence, to combat many other
> existential risks. But ultimately we are limited by
> available intelligence. In a faster and more complex
> world wielding greater and greater powers, having our
> intelligence capped by the absence of AGI is a very
> serious existential threat.

So, er, you agree with me?
 
> Serious enough that I believe it is very suboptimal
> for high-powered, brilliant researchers to be chasing
> an impossible or very highly unlikely goal.

If we do get powerful, superintelligent AGI, scenario B
is mandatory if we aren't going to be blown to bits,
and scenario A is highly desirable for extra safety.
Even if waiting another decade for the necessary
research means incurring a 90% chance of death through
nanowar, that's still better than a 99.99999999999%
chance of getting turned into paperclips.
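
To put rough numbers on that trade-off, here is a minimal
back-of-the-envelope sketch in Python, using only the two
illustrative probabilities above (neither figure comes from
any real risk model; the point is just the ratio):

    # Back-of-the-envelope survival comparison for the two scenarios above.
    # Both probabilities are the illustrative figures from this email, not
    # the outputs of any real risk model.
    p_death_wait = 0.90             # wait a decade for the research, risk nanowar
    p_death_rush = 0.9999999999999  # 99.99999999999%: rush ahead, risk paperclips

    survival_wait = 1 - p_death_wait   # 0.1, roughly 1 chance in 10
    survival_rush = 1 - p_death_rush   # about 1e-13, roughly 1 in 10 trillion

    # Under these assumed figures, waiting looks roughly a trillion times safer.
    print(survival_wait / survival_rush)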

> - samantha

 - Tom

