--- Eugen Leitl <[EMAIL PROTECTED]> wrote:

> On Thu, Jun 07, 2007 at 05:53:10AM -0700, Michael Anissimov wrote:
> 
> > If an AI can come up with better ideas for improving our lives than
> > we can, then wouldn't it make sense to pay attention to it?  Why should
> 
> You've been sounding like a broken record for a while. It's because
> speed kills. What or who is doing the killing is not important.

What does that mean? Sure, an AI has huge potential for harm (Michael
Anissimov is a director of the Lifeboat Foundation, an organization created
specifically to avert AI disaster and other existential risks), but that
potential is inherent in anything with so much power, and that much power
is necessary for doing large amounts of good in the world.

> > our leaders be of mere human-level intelligence if they can be much
> > smarter?  If people democratically choose for superintelligences to be
> 
> Dude, I-current wouldn't trust me a picometer if I was much, much
> smarter. Neither should you.

Uh, so aren't you agreeing with him, if you're saying
that humans are too untrustworthy to govern us?

> > their leaders (whether IAed humans or AI), as I think they inevitably
> > will, then wouldn't that be reasonable?
> > 
> > Our xenophobia and human chauvinism about machines will evaporate
> > when/if a true friendly AGI is built and starts to accomplish good
> > deeds in the world.
> 
> Or we ourselves will evaporate, together with a few cm of Earth regolith.
> Sorry, I'd rather not take the chances.

Well, the alternative is to face certain death from nanoweapons or from
AIs deliberately created to go rogue. Sometimes you have to accept a risk
to avoid a much larger one. This does NOT mean we shouldn't do everything
possible to minimize that risk, but let's face it: we're handling
superweapons with fifty-million-year-vintage brains, and some level of
risk is bound to be involved.

> > Matt and John, I think an AI will get traction by doing, not just by
> 
> Now you're talking!
> 
> > talking.  A friendly AGI will, say, invent a cure for cancer, or hand
> > us the design for a working fusion reactor, or something else we can't
> 
> Or a superhuman agent decides to do something, and you just happen
> to catch a lungful of fifth-degree side effects, chuckle weakly, and die.
> 
> > even imagine.  Accomplishing these good deeds will give it tremendous
> > social capital.  If such an AI is charismatic as well, then the public
> > will practically beg it to take greater responsibility.
> 
> I'm sorry, I'm not religious. Try the next door down the hall.

What does this have to do with religion?

> > Why the human chauvinism, guys?  You've never met an intelligent
> > machine, so why are you judging them?
> 
> Why the animal chauvinism, sheep? You've never met an intelligent human,
> so why are you judging them? Oh, wait...

Humans and sheep share large segments of DNA and the
same evolutionary design process. Humans and AIs do
not.
