Matt Mahoney wrote:
You wrote:
1. It is not possible for a less intelligent entity (human) to predict
the behavior of a more intelligent entity.

Anna questions:
I'm just curious: why?
If you're saying it's not possible, then you must have some pretty
good references to back that statement.
I would like to read those references, if possible.
Thanks

You wrote:
2. A rigorous proof that an AI will be friendly requires a rigorous
definition of "friendly".

Anna agrees:
If you are going to promote "friendly" behavior, most people need to
agree on what "friendly" really means.

You wrote:
3. Assuming (2), proving this property runs into Gödel's
incompleteness theorem for any AI system with a Kolmogorov complexity
over about 1000 bits.
See http://www.vetta.org/documents/IDSIA-12-06-1.pdf

Anna writes:
No opinion; I have no idea what you're talking about.
Could you please rephrase #3 in plain English so that I can
understand? :)

You wrote:
4. There is no experimental evidence that consciousness exists.  You
believe that it does because animals that lacked an instinct for self
preservation and fear of death were eliminated by natural selection.

Anna writes:
You're right.  There is no way to measure consciousness.
At the same time, the word does exist.
Why?
What do you think the word consciousness means?

Just curious
Anna:)

On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
From: Hank Conn <[EMAIL PROTECTED]>
> I think the question, "will the AI be Friendly?", is only possible to
> answer AFTER you have the source code of the conscious algorithms sitting
> on your computer screen, and have a rigorous prior theoretical knowledge on
> exactly how to make an AI Friendly.

Then I'm afraid it is hopeless.  There are several problems.

1. It is not possible for a less intelligent entity (human) to predict the
behavior of a more intelligent entity.  A state machine cannot simulate
another machine with more states than itself.
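A minimal sketch of that pigeonhole argument (my illustration, not from
the original email): any program that simulates a finite-state machine
must itself store the simulated machine's current state, so the
simulator's own state space is at least as large as the machine it
simulates.

```python
def simulate(transitions, start, inputs):
    """Step a finite-state machine given as a dict {(state, symbol): state}."""
    state = start  # the simulator's memory must hold the FSM's full state
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# A 3-state machine cycling on input 'a': to reproduce its behavior the
# simulator has to distinguish all three of its states.
cycle3 = {(s, 'a'): (s + 1) % 3 for s in range(3)}
print(simulate(cycle3, 0, 'aaaa'))  # prints 1 (0 -> 1 -> 2 -> 0 -> 1)
```

The `simulate` function and `cycle3` machine are made-up examples; the
point is only that the variable `state` ranges over every state of the
simulated machine, so a simulator with fewer states cannot exist.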

2. A rigorous proof that an AI will be friendly requires a rigorous
definition of "friendly".

3. Assuming (2), proving this property runs into Gödel's incompleteness
theorem for any AI system with a Kolmogorov complexity over about 1000 bits.
 See http://www.vetta.org/documents/IDSIA-12-06-1.pdf
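In plain terms, the Kolmogorov complexity of an object is the length of
the shortest program that produces it.  Here is a toy sketch of mine
(not from the linked paper), using a tiny made-up description language
where a "program" is either a quoted literal or a repetition like
`'ab'*4`, to show that regular strings have short descriptions and
irregular ones do not.

```python
def toy_complexity(target):
    """Length of the shortest description of `target` in a toy language
    whose only programs are a quoted literal or a repetition 'unit'*n."""
    best = len(target) + 2  # the literal program: 'target'
    for n in range(2, len(target) + 1):
        if len(target) % n == 0:
            unit = target[:len(target) // n]
            if unit * n == target:
                # program 'unit'*n costs quotes + unit + '*' + digits of n
                best = min(best, len(unit) + 2 + 1 + len(str(n)))
    return best

print(toy_complexity('abababab'))  # prints 6: the program 'ab'*4
print(toy_complexity('abbabaab'))  # prints 10: only the literal works
```

Real Kolmogorov complexity uses a universal machine instead of this toy
language and is uncomputable in general, which is where the connection
to Gödel's theorem comes from.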

4. There is no experimental evidence that consciousness exists.  You believe
that it does because animals that lacked an instinct for self preservation
and fear of death were eliminated by natural selection.


-- Matt Mahoney, [EMAIL PROTECTED]




-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



