On 9/16/06, Brian Atkins <[EMAIL PROTECTED]> wrote:
If you'd like to participate, please email me _offlist_ at [EMAIL PROTECTED]
with your _lowest acceptable_ likelihood of success percentage, where the
percentage represents:

    Your lowest acceptable chance that the MBFAI will avoid running amok in
    one 24-hour day.

Sorry for not giving a straight answer to your question. I would not
feel comfortable answering your question about how much 'bad stuff' I am
willing to tolerate without looking at the positive side of things. In
general I would vote for any AI that demonstrably does more good than
harm. Such an AI could make mistakes with tangible negative results on
a daily basis, as long as the 'good stuff' it does outweighs them. In
that case my answer would be 0%.
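To make that concrete (just my own back-of-the-envelope notation, not
anything from your post): call the AI's expected daily benefit b, its
chance of a damaging mistake on a given day p, and the cost of such a
mistake c. As long as the mistakes are finite and recoverable, deploying
it is the right call whenever

    b - p * c > 0

and that inequality can hold even for p close to 1, which is why my
lowest acceptable success percentage there is 0%.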

On the other hand, I would never want the AI to make a mistake so bad
that humanity would not be able to recover from it (i.e. infinitely
'bad stuff'). In that scenario I would settle for nothing less than
100%.
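The same expected-value sketch explains the jump to 100%: if an
unrecoverable mistake carries unbounded cost c, then the expected loss
p * c is unbounded for any p > 0, and no finite daily benefit b can
offset it. The only acceptable value is p = 0, i.e. a 100% chance of
avoiding that class of mistake.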

Cheers,

Stefan

--
Stefan Pernar
App. 1-6-I, Piao Home
No. 19 Jiang Tai Xi Lu
100016 Beijing
China
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
