----- Original Message -----
From: Russell Wallace <[EMAIL PROTECTED]>

As for the state machine argument, consider the following program:

i = 0
while 1:
    i = i + 1
    print i

run on a machine with a googolplex bytes of memory at a googolplex operations per second. That machine has far more states than me, yet I can quite confidently predict its actions.

-----

I mean that you cannot simulate the machine it runs on.  Of course you can simulate the above program.  It has a Kolmogorov complexity much lower than that of your brain.
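
A rough way to see the gap is to compare description lengths.  Here is a small sketch of my own (not from the original message) that uses zlib's compressed size as a crude upper bound on the Kolmogorov complexity of the counting program quoted above; the program fits in a few dozen bytes, while any description of your brain would run to many gigabytes.

import zlib

# Source text of the counting program quoted above.
program = b"i = 0\nwhile 1:\n    i = i + 1\n    print i\n"

# The compressed length is, up to a constant, an upper bound on the
# program's Kolmogorov complexity.
bound = len(zlib.compress(program, 9))
print("upper bound on K(program): about %d bytes" % bound)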

I stated that a less intelligent entity cannot predict the behavior of a more intelligent entity.  By intelligence, I mean information content, or Kolmogorov complexity.  I realize there are other definitions.  But I think you will agree that a super AI will have more knowledge than you do.

All physically realizable computers, including the human brain, are finite state machines.  A state machine cannot simulate another state machine (one that can take programs as input) if that other machine has more memory.  Shane Legg's paper extends this to classes of universal Turing machines with bounded Kolmogorov complexity (programs have finite memory for code and input data but infinite working memory and output data).  Consider the class of machines with Kolmogorov complexity n or less.  The paper proves that no program can learn to predict the behavior of every machine in this class unless its own complexity is also at least n.
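
The paper's proof is more careful than this, but the intuition behind it is the usual diagonal trick.  The toy sketch below is my own illustration, not Legg's construction: given any fixed predictor, you can build a "contrarian" program that consults the predictor and does the opposite, so the predictor must be wrong about it.

def contrarian(predictor):
    # Ask the predictor what this very function will output,
    # then output the opposite bit.  Whatever the predictor
    # answers, it is wrong about contrarian.
    guess = predictor(contrarian)
    return 1 - guess

# A toy predictor that claims every program outputs 0.  Any other
# fixed predictor fails the same way.
def always_zero(program):
    return 0

print(contrarian(always_zero))    # prints 1, so always_zero was wrong

Note that contrarian is little more than a call to the predictor, so its complexity exceeds the predictor's by only a constant; that is why a predictor of complexity n cannot be right about every program of complexity around n.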

We have no experience yet in trying to predict the behavior of superhuman AIs.  But look at it the other way around.  I think you can more accurately predict your cat's behavior than it can predict yours.

-- Matt Mahoney, [EMAIL PROTECTED]

