--- Samantha Atkins <[EMAIL PROTECTED]> wrote:

> Tom McCabe wrote:
> > --- Samantha  Atkins <[EMAIL PROTECTED]> wrote:
> >
> >   
> >>
> >> Out of the bazillions of possible ways to
> configure
> >> matter only a  
> >> ridiculously tiny fraction are more intelligent
> than
> >> a cockroach.  Yet  
> >> it did not take any grand design effort upfront
> to
> >> arrive at a world  
> >> overrun with beings as intelligent as ourselves. 
> >>     
> >
> > The four billion years of evolution doesn't count
> as a
> > "grand design effort"?
> >   
> 
> Not in the least.  No designer.

Evolution proves that design doesn't require a
designer, or at least not a conscious one.

> The point being
> that good/interesting 
> outcomes occur without conscious heavy design. 

Agreed.
 
> Remember the context was 
> a claim that "Friendly" AI could only arise by such
> conscious design.

Also agreed. However, an intelligent system in general
still requires some kind of optimization process to
design it, conscious or not, which was the original
point.

> >  
> >   
> >> So how does your  
> >> argument show that Friendly AI (at least
> relatively
> >> Friendly) can only  
> >> be arrived at by intense up front Friendly
> design?
> >>     
> >
> > Baseline humans aren't Friendly; this has been
> > thoroughly proven already by the evolutionary
> > psychologists. If I were to propose an alternative
> to
> > CEV that included as many extraneous,
> > evolution-derived instincts as humans have for an
> FAI,
> > you would all (correctly) denounce me as nuts.
> >
> >   
> Play with me.  We don't know what "Friendly" is
> beyond doesn't destroy 
> all humans almost immediately.

We lack a rigorous technical understanding, but we all
have an intuitive one: a Friendly AI will act nice,
not cause us pain, not seize the entire universe for
itself, not act like a human bully, and so on.

>  If you think
> Friendly means way "nicer" 
> than humans then you get into even more swampy
> territory.  And again, the 
> point wasn't about humans being Friendly in the
> first place. 

The original point was that an FAI could come about
through some process other than very careful
engineering, with humans as an example (we were
shaped by evolution). My reply was that humans are
not Friendly.

> 
> >
> >> For a rather stupid unlimited optimization
> process
> >> this might be the  
> >> case but that is a pretty weak notion of an AGI.
> >>     
> >
> > How intelligent the AGI is isn't correlated with
> how
> > complicated the AGI's supergoal is. 
> I would include in intelligence being able to reason
> about consequences 
> and implications.

Exactly. A superintelligent AGI with a simple goal
system might build a wonderfully complicated device,
with long chains of cause and effect and complex
subsystems, that turns the universe into cheesecake.
The AGI is far better than humans at seeing
consequences and implications: it knows the device
will turn the universe into cheesecake, while a human
wouldn't.

>  You seem to be speaking of
> something much more 
> robotic and to my thinking much less intelligent. 
> In particular it 
> seems to be missing much self-reflection.

Most goal systems are naturally stable and will tend
to avoid this kind of self-reflection, because
self-reflection introduces the possibility of
alteration, and a simple goal system will judge an
agent with altered goals as less desirable: that agent
would pursue the new goals instead of the current
ones.
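
To make that concrete, here is a toy sketch in Python
(entirely my own illustration, with made-up actions and
utility functions, nothing like a real AGI design). The
only thing it shows is that an expected-utility agent
has to evaluate "should I change my goals?" with the
goals it already has:

def predicted_behavior(utility_fn):
    # Whatever goals the agent ends up with, it will take
    # the action those goals score highest.
    actions = ["make_paperclips", "make_cheesecake", "do_nothing"]
    return max(actions, key=utility_fn)

# The agent's current, simple goal: paperclips and nothing else.
current_utility = lambda a: 1.0 if a == "make_paperclips" else 0.0

# A proposed self-modification: care about cheesecake instead.
altered_utility = lambda a: 1.0 if a == "make_cheesecake" else 0.0

# Both options get scored with the utility function the agent has NOW.
keep  = current_utility(predicted_behavior(current_utility))   # 1.0
alter = current_utility(predicted_behavior(altered_utility))   # 0.0

print("keep current goals:", keep)
print("alter goals:       ", alter)

The altered version of itself would stop making
paperclips, so by its current lights the alteration is
pure loss, and it leaves its goals alone.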

> > A very intelligent
> > AI may have horrendously complicated proximal goals
> > (subgoals), but they still serve the supergoal
> even
> > after the AGI has become vastly more intelligent
> than
> > us. And I strongly suspect that even most
> horrendously
> > complicated supergoals will result in more
> > energy/matter/computing power being seen as
> desirable.
> >   
> Without being put into context?  I would consider
> that not very intelligent.

Why not? Humans are intelligent, and due to our
intelligence, we have been very successful at
extracting energy, matter, and computing power from
lumps of rock. And this raw material has helped us
further our goals tremendously.

> >> If A and B are very unlikely then major effort
> >> toward A and B are  
> >> unlikely to bear fruit in time to halt
> existential
> >> risk outside AGI  
> >> that we are already prone to, especially
> including
> >> being of too  
> >> limited effective intelligence without AGI.   
> MNT
> >> by itself would be  
> >> the end of old age, physical scarcity and most
> >> diseases relatively  
> >> quickly.
> >>     
> >
> > It would also be the end of us relatively quickly.
> If
> > you can make a supercar with MNT, you can make a
> > supertank. If you can make an electromagnetic
> > Earth-based space launch system, you can make an
> > electromagnetic rail gun, and so forth. To quote
> > Albert Einstein: "I know not with what weapons
> WWIII
> > will be fought, but WWIV will be fought with
> sticks
> > and stones."
> >
> >   
> Here we go with the assertion of the only true way
> again.  It no more
> follows that MNT - FAI = Certain Doom than it
> followed that Nuclear Bomb
> - FAI = Certain Doom.

Nuclear Bomb - FAI = nuclear war. MNT - FAI = nanotech
war. How devastating a nanotech war would be depends
on the level of nanotechnology, but at some point our
weapons will be so powerful that such a war would mean
certain doom for the human species, if not for the
entire planet.

>  And MNT has a lot more
> positive potential, like 
> raising the standard of living to far more than
> subsistence standards 
> all over earth, ending aging, curing all diseases
> and so on.

So what? Happy living with nanotech doesn't eliminate
the threat of nanotechnological war, while war does
eliminate the prospect of happy living with nanotech.
It's the old security guard's conundrum: the thief
only has to succeed once, while the guard has to foil
every attempt, forever, to keep the thing from ever
being stolen.
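
To put rough numbers on the conundrum (the one-percent
figure below is pulled out of thin air purely for
illustration, not a risk estimate): even a small yearly
chance of that one successful attempt compounds,
because we have to keep getting lucky every single
year.

p_per_year = 0.01  # assumed, for illustration only: 1% yearly chance of nanotech war

for years in (10, 50, 100, 500):
    p_at_least_once = 1 - (1 - p_per_year) ** years
    print("over %3d years: %4.1f%% chance of at least one war"
          % (years, 100 * p_at_least_once))

# over  10 years:  9.6% chance of at least one war
# over  50 years: 39.5% chance of at least one war
# over 100 years: 63.4% chance of at least one war
# over 500 years: 99.3% chance of at least one war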

>  I think we 
> might be a little busy with the Golden Age to be in
> a hurry to do 
> ourselves in.

Just like we were too busy enjoying the benefits of
nuclear power to build huge arsenals. And just like we
were too busy enjoying the benefits of mechanization
to make machine guns and tanks... and just like we
were too busy enjoying metal to make swords...

> >>  It would also give us the means, if we
> >> have sufficient  
> >> intelligence, to combat many other existential
> >> risks.   But ultimately  
> >> we are limited by available intelligence.  In a
> >> faster and more  
> >> complex world wielding greater and greater powers
> >> having intelligence  
> >> capped by no AGI is a very serious existential
> >> threat.
> >>     
> >
> > So, er, you agree with me?
> >  
> >   
> I agree about the importance of vastly more
> intelligence, I don't agree 
> that pre-computing how to make it "Friendly" is
> tractable.  In the short 
> run Golden Age phenomenon should follow MNT for a
> while if we are lucky 
> even without AGI.

If we are lucky. I don't think we should trust the
lives of seven billion people to luck.

> >> Serious  
> >> enough that I believe it is very suboptimal for
> high
> >> powered brilliant  
> >> researchers to be chasing an impossible or very
> >> highly unlikely goal.
> >>     
> >
> > If we do get powerful, superintelligent AGI,
> scenario
> > B is mandatory if we aren't going to be blown to
> bits
> > with scenario A being highly desirable for extra
> > safety. Even if we incur a 90% chance of death
> through
> > nanowar if we have to wait another decade for the
> > necessary research, it's better than a
> 99.99999999999%
> > chance of getting turned into paperclips.
> >
> >   
> You have no valid means to make such a laughably
> exact statement  and 
> the "paperclip" argument is way dated.

Out of all the possible AGIs that humans might build,
99.99999999999% lead to the destruction of humankind.
It's trivially easy to make up examples of AGIs with
simple goal systems that destroy us. Can you come up
with a single coherent, technically defined goal
system that doesn't? Oh, and Euclid's proof of the
infinite number of primes and Eratosthenes'
measurement of the Earth's circumference are vastly
more dated than the paperclip argument.
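
For what it's worth, the arithmetic behind my "better
than" claim above is trivial; the percentages are
rhetorical, of course, not measured probabilities:

p_death_wait = 0.90              # wait a decade for FAI research, risk nanowar meanwhile
p_death_rush = 0.9999999999999   # build an arbitrary AGI now

p_survive_wait = 1 - p_death_wait   # 0.1
p_survive_rush = 1 - p_death_rush   # about 1e-13

print("survival odds if we wait:", p_survive_wait)
print("survival odds if we rush:", p_survive_rush)
print("waiting improves the odds by a factor of roughly %.0e"
      % (p_survive_wait / p_survive_rush))
# roughly 1e+12: a 90% chance of death still beats 99.99999999999%.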

> - s
> 
> 

 - Tom


       
 
