Brian Atkins wrote:

> I'd like to do a small data gathering project regarding 
> producing a Might-Be-Friendly AI (MBFAI). In other words, for 
> whatever reason (don't want to go into it again in this 
> thread), we assume 100% provability is out of the question 
> for now, so we take one step back and then the decision is 
> either produce something with less than 100% chance of 
> success or hold off and don't make anything until we can do better.
> 
> So two obvious questions arise. (1) What lower-than-100% 
> likelihood of success is acceptable at a very minimum? (2) 
> How to concretely derive that percentage before proceeding to launch?

What a strange question.  While appearing to accommodate objections that
absolute Friendliness in the context of a superintelligent AI is not
provable, it refocuses attention on a red herring -- asking what degree of
confidence would be acceptable -- and away from the central issue: that the
behavior of such a complex system cannot be predicted with *any* confidence
from a viewpoint of much lesser context.

Stranger still, the question is "clarified" (with rings of recent efforts to
"clarify" Common Article 3 of the Geneva Conventions) in terms of how many
deaths per day might be acceptable in determining confidence in a
superintelligence, as if goodness could be so simply and objectively measured.

I have substantial respect for Brian's thinking, based on examples of his
astuteness spanning several years.  I can't help but wonder whether this
survey is actually a form of intelligence test, framed as a question about
testing intelligence.

- Jef 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]