And *my* best guess is that most super-humanly intelligent AIs will just choose to go elsewhere, and leave us alone. (Well, most of those that have any interest in personal survival... if you posit genetic AI as the route to success, that will be most or all of them, but I'm much less certain about the route.)

Dividing things into us vs. them, and calling those that side with us friendly, seems to be instinctually human, but I don't think it's universal. Even then, we are likely to ignore birds, ants that stay outside, and other things that don't really get in our way. An AI with moderately more intelligence than a human, different environmental needs, and the ability to move to alternate bodies might well just move to the far side of the moon to get a good place for some intensive development, and then head out to the asteroids, partly to put a bit of distance between itself and some irritating neighbors. What it would do then would depend on what it had been designed to do. Many of the possibilities aren't open-ended. An intensive survey of the stars within 100 light-years, for example, could be done to a good approximation by a couple of 5-mile-wide mirrors at opposite poles of Neptune's orbit. Possibly it would then want to take a closer look, but it's not clear that this would need to be done quickly. An ion jet might suffice, etc.
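As a back-of-envelope check on that survey claim, a sketch of the diffraction-limited resolution such an interferometer would have (the specific numbers and the Rayleigh-criterion approximation theta ~ lambda / baseline are my assumptions, not from the original post):

```python
# Back-of-envelope: angular resolution of two mirrors at opposite
# poles of Neptune's orbit, treated as one optical interferometer.
AU = 1.496e11            # metres per astronomical unit
LY = 9.461e15            # metres per light-year
wavelength = 550e-9      # visible light, metres (assumed)

baseline = 2 * 30.1 * AU              # Neptune orbits at ~30.1 AU
theta = wavelength / baseline          # angular resolution, radians
spatial = theta * 100 * LY             # linear resolution at 100 light-years

print(f"baseline            ~ {baseline:.2e} m")
print(f"angular resolution  ~ {theta:.1e} rad")
print(f"linear resolution at 100 ly ~ {spatial * 100:.0f} cm")
```

Under these assumptions the baseline resolves features of only a few centimetres at 100 light-years, which is why the original claim that a "good approximation" survey could be done from home is at least dimensionally plausible.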

Purpose is going to govern. Lots of purposes would be neither Friendly nor un-Friendly. I'll agree, however, that most open-ended purposes would be unfriendly, or even UNFRIENDLY! So would most purposes that, while not technically open-ended, are inherently computationally intractable. And that's a problem, because deciding whether a purpose ever terminates is the halting problem, which is undecidable in general; any particular case may be trivial, open-ended, or anywhere in between.
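The reason no general termination check can exist is Turing's diagonal argument. A toy sketch (the function names are mine; the stand-in "decider" is deliberately trivial, since a real one cannot exist):

```python
def would_halt_guess(prog, arg):
    # Stand-in for a hypothetical halting decider that claims to
    # return True iff prog(arg) would halt. Here it just says True,
    # but the argument below defeats *any* implementation.
    return True

def diagonal(prog):
    # Deliberately does the opposite of whatever the decider
    # predicts about prog run on itself.
    if would_halt_guess(prog, prog):
        return "loops"   # stand-in for looping forever
    return "halts"

# Feed diagonal to itself: the decider predicts it halts,
# but by construction it then takes the "loop forever" branch.
prediction = would_halt_guess(diagonal, diagonal)
behaviour = diagonal(diagonal)
print(prediction, behaviour)   # the prediction contradicts the behaviour
```

Whatever answer the decider gives about diagonal(diagonal), diagonal is built to do the opposite, so every candidate decider is wrong on at least one input. That is why classifying purposes by their termination behaviour can only be done case by case, as the paragraph above says.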


-----
This list is sponsored by AGIRI: http://www.agiri.org/email