On 9/13/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> The basic problem, as many have noted, is Gödelian.  Chaitin's version
> of Gödel's Theorem says "you can't prove a 20-pound theorem with a
> 10-pound axiom system."  We humans cannot prove theorems about things
> that are massively more algorithmically complex than ourselves.

That's not true.  A trivial example: very simple number theories can
prove "n = n" for any value of n, no matter how large, and in
particular for extremely complex n.

More interestingly, we can prove some things about the behaviour of
sufficiently simple programs no matter how complex their input is.
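
For concreteness, here is a minimal Lean 4 sketch of my own (not
anything from Ben's post): a tiny formal system proves "n = n"
uniformly in n, and it proves a specification of a simple program for
every input, however algorithmically complex that input is.

    -- n = n is provable for every n, including values far more
    -- complex than the proof system itself.
    theorem n_eq_n (n : Nat) : n = n := rfl

    -- A very simple program...
    def double (n : Nat) : Nat := n + n

    -- ...whose behaviour we can characterise for *all* inputs,
    -- however complex any particular input happens to be.
    theorem double_spec (n : Nat) : double n = n + n := rfl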

Chaitin's actual result is that an axiomatic system cannot prove any
statement of the form "H(x) > n" for n larger than a fixed bound N
depending only on the system, where H(x) is the length of the shortest
program outputting x.  (See e.g. page 3 of Calude et al., "Is Complexity
a Source of Incompleteness?",
http://www.cs.auckland.ac.nz/~cristian/aam.pdf.)  "H(x) > n" means no
program of n or fewer bits in length outputs x.
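
To illustrate the asymmetry with a rough sketch of my own (not from
the paper; zlib is just a stand-in for "some particular short
description"): exhibiting *upper* bounds on H(x) is easy, since any
concrete short description of x gives one; what the theorem rules out
is a fixed axiomatic system certifying *lower* bounds "H(x) > n" for n
beyond its constant N.

    import zlib

    def h_upper_bound(x: bytes) -> int:
        """A computable upper bound on H(x), up to the additive constant
        for a fixed decompressor: a compressed encoding of x is itself a
        description of x.  Upper bounds like this are easy to exhibit and
        to verify case by case."""
        return len(zlib.compress(x, 9)) * 8  # in bits

    # By contrast, no fixed axiomatic system can prove lower bounds
    # "H(x) > n" for n past its constant N, i.e. certify that *no*
    # short program outputs x.
    print(h_upper_bound(b"ab" * 50_000))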

That is, an axiomatic system cannot prove *that* certain objects are
significantly more complex than itself.  It can still prove things
about these objects.

> And, even if the software code for a superhuman AI is not that
> tremendously complex compared to ourselves, the universe itself
> probably is... and to prove the ongoing Friendliness of an AI we
> would likely have to prove something about how the AI would react to
> our particular universe, complex as it is... which is not possible
> due to the universe's complexity and our simplicity...

We can avoid this problem by proving more general results: for
instance, that the AI reacts in a particular way to any universe in a
class C, where C itself has a sufficiently simple description.
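
As a toy sketch of the shape such a result would take (entirely my own
illustration, with natural numbers standing in for universes and all
names hypothetical), again in Lean 4:

    -- A simply-described class C of "universes" (a toy stand-in).
    def C (u : Nat) : Prop := u % 2 = 0

    -- The AI's (toy) reaction to a given universe.
    def react (u : Nat) : Nat := u / 2

    -- A property established for every universe in C, however complex
    -- any individual member of C happens to be.
    theorem react_bounded (u : Nat) (_ : C u) : react u ≤ u :=
      Nat.div_le_self u 2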

> This argument by reference to Gödel and the complexity of the
> universe is NOT a rigorous proof that Friendly AI is impossible ...
> it's just a heuristic argument that it is most likely impossible.

-- Nick Hay

