Curious.

A couple of days ago, I responded to demands that I produce arguments justifying my conclusion that a friendly AI could be built to be extremely stable and trustworthy without requiring a mathematical proof of its friendliness.

Now, granted, the text was complex, technical, and not necessarily worded as well as it could have been. But the background is that I am writing a long work on the foundations of cognitive science, and the ideas in that post were a condensed version of material spread across several dense chapters of that book. Even though the longer version is not ready, I finally gave in to the repeated (and sometimes shrill and abusive) demands that I produce at least some kind of summary of what is in those chapters.

But after all that complaining, when I gave the first outline of an actual technique for guaranteeing Friendliness (not vague promises that a rigorous mathematical proof is urgently needed, and "I promise I am working on it", but an actual method that can be developed into a complete solution), the response was ... nothing.

I presume this means everyone agrees with it, making this a milestone of mutual accord in a hitherto divided community.

Progress!



Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]