--- BillK <[EMAIL PROTECTED]> wrote:

> On 7/2/07, Tom McCabe wrote:
> >
> > AGIs do not work in a "sensible" manner, because
> > they have no constraints that will force them to
> > stay within the bounds of behavior that a human
> > would consider "sensible".
> >
> 
> 
> If you really mean the above, then I don't see why
> you are bothering to argue on this list.
> (Apart from enjoying all the noise and excitement.)
> 
> You believe that an AGI has no constraints on its
> behaviour, so why argue about what it might or might
> not do?

You're right - arguing about "what an AGI will do"
makes no sense, because AGIs in general can do
anything. But you can argue about "what the vast
majority of AGIs will do", which is what I usually
meant, or about what an AGI with a specific design
will do, which is what I meant in particular contexts.

> Your case is that AGI might do *anything*.

Yes, if you're referring to all possible AGIs.

> This list thinks that AGI will arrive whatever we
> do. So why not try to ensure humanity will survive
> the experience?

Er, that's exactly my point: since the vast majority
of possible AGIs would destroy us, and we know that an
AGI can follow any arbitrary pattern of behavior, why
not build one that follows patterns we would see as
nice and, more importantly, that will protect us from
other AGIs?

> 
> BillK

 - Tom


       
