On 9/14/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I mean that you cannot simulate the machine it runs on.  Of course you can simulate the above program.  It has a Kolmogorov complexity much lower than that of your brain.

A Commodore 64 has Kolmogorov complexity much lower than that of my brain, yet I can't simulate it: my short-term memory is much smaller than the 64's memory, and my conscious thought processes are far too slow. A housefly's brain has Kolmogorov complexity much lower than mine, yet I cannot simulate it either, for the above reasons plus the even more important fact that I don't have the data that would go into such a simulation.
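To put rough numbers on it (order-of-magnitude ballparks of my own, not measurements; the "seven chunks" figure is just the usual working-memory rule of thumb):

\[
\text{C64 state: } 64\,\text{KB} \approx 5\times 10^{5}\ \text{bits}, \qquad \text{clock speed} \approx 10^{6}\ \text{cycles/s}
\]
\[
\text{My working memory: } \sim 7\ \text{chunks}, \qquad \text{conscious serial steps: a few per second}
\]

So a step-by-step mental simulation falls short by many orders of magnitude in both storage and speed, regardless of how simple the machine's description is.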

The fact of the matter is that simulating an entity's states step by step is fun to prove mathematical theorems about, but it just doesn't have anything to do with how humans actually try to predict things.

> I stated that a less intelligent entity cannot predict the behavior of a more intelligent entity.  By intelligence, I mean information content, or Kolmogorov complexity.

By that definition, a cloud of gas in thermal equilibrium is superintelligent. I think you need a new definition :P
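To spell out why (again, rough ballpark figures of my own, not precise values): the exact microstate of a gas at equilibrium is essentially incompressible, so its Kolmogorov complexity is on the order of its thermodynamic entropy measured in bits, which dwarfs any plausible estimate of a brain's information content:

\[
K(\text{microstate of 1 mol of gas}) \approx \frac{S}{k_B \ln 2} \sim 10^{25}\ \text{bits} \;\gg\; 10^{15}\ \text{bits (a generous brain-capacity ballpark)}
\]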

> But I think you will agree that a super AI will have more knowledge than you do.

And it still won't be able to predict so much as a housefly by the method you suggest.

> We have no experience yet in trying to predict the behavior of superhuman AIs.  But look at it the other way around.  I think you can more accurately predict your cat's behavior than it can predict yours.

From what I've seen of my brother with his cat, it's not clear to me that it's not the other way around. The cat can predict that when it miaows, he'll put out food; he can't predict whether or not the cat will eat it.
