Hi Richard, I have left that email sitting in my Inbox, and skimmed it over, but did not find time to read it carefully and respond to it yet. I only budget myself a certain amount of time per day for recreational emailing (and have been exceeding that limit this week, already ;-) .... I hope to find time to read/respond this weekend.
Ben G

On 10/27/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Curious.

A couple of days ago, I responded to demands that I produce arguments to justify the conclusion that there were ways to build a friendly AI that was extremely stable and trustworthy, but without having to give a mathematical proof of its friendliness.

Now, granted, the text was complex, technical, and not necessarily worded as well as it could be. But the background to this is that I am writing a long work on the foundations of cognitive science, and the ideas in that post were a condensed version of material that is spread out over several dense chapters in that book ... but even though that longer version is not ready, I finally gave in to the repeated (and sometimes shrill and abusive) demands that I produce at least some kind of summary of what is in those chapters.

But after all that complaining, I gave the first outline of an actual technique for guaranteeing Friendliness (not vague promises that a rigorous mathematical proof is urgently needed, and "I promise I am working on it", but an actual method that can be developed into a complete solution), and the response was .... nothing.

I presume this means everyone agrees with it, so this is a milestone of mutual accord in a hitherto divided community.

Progress!

Richard Loosemore.

----- This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/[EMAIL PROTECTED]