On 7/2/07, Tom McCabe wrote:

AGIs do not work in a "sensible" manner, because they
have no constraints that will force them to stay
within the bounds of behavior that a human would
consider "sensible".



If you really mean the above, then I don't see why you are bothering
to argue on this list.
(Apart from enjoying all the noise and excitement).

You believe that an AGI has no constraints on its behaviour, so why
argue about what it might or might not do?

Your case is that AGI might do *anything*.

This list thinks that AGI will arrive whatever we do. So why not try
to ensure that humanity survives the experience?


BillK

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=8909014-17fad3
