I'd like to do a small data-gathering project regarding producing a Might-Be-Friendly AI (MBFAI). In other words, for whatever reason (I don't want to go into it again in this thread), we assume 100% provability is out of the question for now. Taking one step back, the decision is then either to produce something with a less-than-100% chance of success, or to hold off and build nothing until we can do better.

So two obvious questions arise: (1) What lower-than-100% likelihood of success is acceptable, at a very minimum? (2) How do we concretely derive that percentage before proceeding to launch?

I'd like to gather candidate answers to the first question, to help researchers focus on roughly what minimal level of performance the community at large feels they ought to be aiming for.

If you'd like to participate, please email me _offlist_ at [EMAIL PROTECTED] with your _lowest acceptable_ likelihood of success percentage, where the percentage represents:

   Your lowest acceptable chance that the MBFAI will avoid running amok in
   any one 24-hour day.

So if you choose, for example, 99%, that means you want at most a 1% chance that the MBFAI runs amok during any given day it exists. "Run amok" is left undefined, but is understood to mean, roughly, that it goes off and does something "bad" other than what we were hoping it would do. If you like, include a short explanation of how you came up with your number, and note if you would prefer I not release your name along with the answers.
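As an aside on what a per-day figure implies over longer stretches: if (as a rough simplification) we treat each day as an independent trial with the same success rate, the probabilities multiply. Here is a minimal sketch in Python; the function name and numbers are purely illustrative, not part of the survey:

    # Chance the MBFAI avoids running amok for an entire run of `days`,
    # assuming each 24-hour day is an independent trial with a constant
    # per-day success probability (a simplifying assumption, not a claim
    # about how real risk would behave).
    def cumulative_success(daily_success: float, days: int) -> float:
        return daily_success ** days

    # A 99% daily figure compounds to only about a 2.6% chance of
    # getting through a full year with no incident: 0.99 ** 365 ~= 0.0255.
    print(cumulative_success(0.99, 365))

So a number that sounds strict on a per-day basis can still imply near-certain trouble over a longer horizon; worth keeping in mind when you pick your percentage.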

I'll post the average, high/low range, etc. later.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/
