On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > The motivational system of some types of AI (the types you would
> > classify as tainted by complexity) can be made so reliable that the
> > likelihood of them becoming unfriendly would be similar to the
> > likelihood of the molecules of an Ideal Gas suddenly deciding to
> > split into two groups and head for opposite ends of their container.
Richard, in the context of the foregoing, I'd like to know your thoughts
on the effective differences between a powerful entity being "nice" like
a friend versus doing the "right" thing in the bigger picture, much like
a parent doing what they perceive best despite screams of pain and
protest from their children.

- Jef

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=48421237-16eb0e