On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
From: Hank Conn <[EMAIL PROTECTED]>
> I think the question, "will the AI be Friendly?", is only possible to answer AFTER you have the source code of the conscious algorithms sitting on your computer screen, and have rigorous prior theoretical knowledge of exactly how to make an AI Friendly.

Then I'm afraid it is hopeless.  There are several problems.
 
Is Friendly AI hopeless? Well, I don't know whether it will be done quickly enough, but I disagree with the grounds on which you argue it.
 

1. It is not possible for a less intelligent entity (human) to predict the behavior of a more intelligent entity.  A state machine cannot simulate another machine with more states than itself.
 
http://www.sl4.org/wiki/KnowabilityOfFAI
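The state-counting intuition behind claim 1 can be sketched in a few lines of Python (my own toy illustration, nothing from the link above):

    # Toy illustration (mine, not from the link): any program that simulates
    # a finite state machine has to store which state that machine is in, so
    # the simulator's own state space is at least as large as the simulated
    # machine's.
    def simulate(transitions, start, inputs):
        """Run a deterministic FSM given as {(state, symbol): next_state}."""
        state = start
        for symbol in inputs:
            # the simulator's memory here *is* the simulated state: one
            # distinct configuration per simulated state, at minimum
            state = transitions[(state, symbol)]
        return state

    # a 3-state machine over the alphabet {0, 1}
    t = {(s, a): (s + a) % 3 for s in range(3) for a in (0, 1)}
    print(simulate(t, 0, [1, 1, 0, 1]))   # -> 0

Whether that pigeonhole argument actually extends from "simulate exactly" to "predict behavior" is, I take it, exactly what the link above disputes.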
 

2. A rigorous proof that an AI will be friendly requires a rigorous definition of "friendly".
 
Clearly!
 

3. Assuming (2), proving this property runs into Gödel's incompleteness theorem for any AI system with a Kolmogorov complexity over about 1000 bits.  See http://www.vetta.org/documents/IDSIA-12-06-1.pdf
 
I am completely unfamiliar with this stuff. By God, it looks interesting though. I'll just have to refer you to my response to number 1. If you find something disputable in the above link, based on your understanding of the relationship between Gödel's theorem and Kolmogorov complexity, get back to me (or Eliezer, for that matter). In the meantime I'll continue attempting to learn. :)
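For what it's worth, the usual bridge between the two results seems to be Chaitin's incompleteness theorem. Paraphrasing it from memory, so take the exact form with a grain of salt:

    % Chaitin's incompleteness theorem (my paraphrase): for any consistent,
    % recursively axiomatized theory T there is a constant c_T, roughly the
    % complexity of T's axioms, such that T proves K(x) > c_T for *no*
    % string x, even though all but finitely many strings satisfy it.
    \exists\, c_T \in \mathbb{N} \;\; \forall x : \quad T \nvdash \; K(x) > c_T

If I'm reading point 3 right, a bound of that sort is where the "about 1000 bits" figure would enter, though I'd have to read the paper to be sure.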
 

4. There is no experimental evidence that consciousness exists.  You believe that it does because animals that lacked an instinct for self-preservation and fear of death were eliminated by natural selection.
 
When I say consciousness, I really mean "human-like intelligence"; in fact consciousness doesn't exist, and we are merely using a stronger label. I think my rather obvious presumption that consciousness can be readily modelled by a computer algorithm speaks for itself on this semantic issue (doesn't it?).
 
I'm not sure about your distinction between human and animal intelligence on the basis of "self-preservation" and "fear of death". Can you cite any literature, or any other relevant info? It's a very, very common thing for people to name "that one thing" that distinguishes 'human intelligence' from 'animal intelligence', and it's virtually as common for "that one thing" to be completely wrong.
 
 
-hank

