Ants I'm not sure about, but many species are still
here only because we, as humans, are not simple
optimization processes that turn everything in sight
into paperclips. Even so, we regularly do the exact
thing that people say AIs won't do: we bulldoze into
some area, set up developments, and species go
extinct without most people even realizing it. I
believe the extinction count is up to several hundred
species a year now?

 - Tom

--- Charles D Hixson <[EMAIL PROTECTED]>
wrote:

> Kaj Sotala wrote:
> > On 6/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> >> Dividing things into us vs. them, and calling those that side with
> >> us friendly seems to be instinctually human, but I don't think that
> >> it's a universal.  Even then, we are likely to ignore birds, ants
> >> that are outside, and other things that don't really get in our
> >> way.  An AI with
> >
> > We ignore them, alright. Especially when it comes to building real
> > estate over some anthills.
> >
> > People often seem to bring up AIs not being directly hostile to us,
> > but for some reason they forget that indifference is just as bad.
> > Like Eliezer said - "The AI does not hate you, nor does it love you,
> > but you are made out of atoms which it can use for something else."
> >
> > While there obviously is a possibility that an AI would choose to
> > leave Earth and go spend time somewhere else, it doesn't sound all
> > that likely. For one, there's a lot of ready-to-use infrastructure
> > around here - most AIs without explicit Friendliness goals would
> > probably want to grab that for their own use.
> >
> It may not be good, but it's not "just as bad".  Ants are
> flourishing.  Even wasps aren't doing too badly.
> 
> FWIW:  Leaving Earth is only one possibility...true, it's probably
> the one that we would find least disruptive.  To me it's quite
> plausible that humans could "live in the cracks".  I'll grant you
> that this would be a far smaller number of humans than currently
> exist, and the process of getting from here to there wouldn't be
> gentle.  But this isn't "as bad" as an AI that was actively hostile.
> 
> OTOH, let's consider a few scenarios in which no super-human AI
> develops.  Instead there develops:
> a) A cult of death that decides that humanity is a mistake, and
> decides to solve the problem via genetically engineered plagues.
> (Well, diseases.  I don't specifically mean plague.)
> b) A military "genius" takes over a major country and decides to
> conquer the world using atomic weapons.
> c) Several rival "racial supremacy" groups take over countries, and
> start trying to conquer the world using plagues to either modify all
> others to be just like them, or render them sterile.
> d) Insert your own favorite human psychopathology.
> 
> If we don't either develop a super-human AI or split into mutually
> inaccessible groups via diaspora, one of these things will lie in
> our future.  This is one plausible answer to the Fermi Paradox...but
> it doesn't appear to me to be inevitable, as I see two ways out.



       

