(A small point: can you set the list so that the default Reply is to the list rather than to the individual poster?)

Interesting discussion. I think the points raised are valid, and I would go further: not only are they problematic for Friendly AI, they are also problematic for Unfriendly AI. In other words, many of the same issues that trip up "be Friendly" as a goal also trip up "make paperclips", "make myself smarter/more powerful", etc.

I'm highly skeptical at this point about the feasibility of self-willed AI at all (whether sentient like humans, a Yudkowskian RPOP, or whatever). Not that there aren't possible configurations of atoms that would correspond to such an AI, and it might even be achieved in the sufficiently distant future; my claim is that it won't be feasible in the foreseeable future.

Instead, I think what might be feasible is smart tool/assistant AI. That doesn't mean we can slack off - my estimate of the difficulty of creating smart tool AI vastly exceeds some people's estimates of the difficulty of creating a Transcendent Power! But it does mean that whatever we come up with in the philosophy of Friendliness in our lifetimes (unless life extension technology advances faster than I think it will) will only be of use if it is used by humans, not machines.
