--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> >
> > A rational agent only has to know that there are some things it cannot
> > compute.  In particular, it cannot understand its own algorithm.
> >
> 
> Matt,
> 
> (I don't really expect you to give an answer to this question, as you
> didn't on a number of occasions before.) Can you describe
> mathematically what you mean by "understanding its own algorithm", and
> sketch a proof of why it's impossible?


Informally, I mean there are circumstances (at least one) in which you
can't predict what you are going to think without actually thinking it.

More formally, "understanding an algorithm P" means that for any input x
you can compute the output P(x).  Perhaps x is a program Q together with
some input y.  It is possible to have P be a simulator such that P(Q,y) =
Q(y).  Then we would say that P understands Q.
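
As a rough sketch (Python; the exec-based mechanism and the name Q_source
are just illustrative, not from any particular system), a simulator P that
reproduces Q's output could look like this:

    def P(Q_source, y):
        # Load Q's definition into a fresh namespace, then run it on y.
        env = {}
        exec(Q_source, env)
        return env['Q'](y)

    Q_source = "def Q(y): return y * y"
    print(P(Q_source, 3))   # prints 9, the same as Q(3): P "understands" Q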

I claim there is no P such that P(P,y) = P(y) for all y.  My sketch of the
proof is as follows.  All realizable computers are finite state machines. 
In order for P to simulate Q, P must have at least as much memory as Q, to
represent all of the possible states of Q, plus additional memory to run
the simulation.  (If P uses no additional states, then it is merely an
isomorphic copy of Q, not a simulation of it; it cannot record Q's output
without simply outputting it.)  But P cannot have more memory than itself,
so no P can simulate its own algorithm.
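
To make the counting concrete, here is a toy sketch (Python; the machine
and all names are made up for illustration): a simulator of a finite state
machine Q must hold Q's current state plus its own bookkeeping, so its
state space is strictly larger than Q's.

    def simulate(transitions, accepting, start, symbols):
        state = start       # enough memory to represent any state of Q
        steps = 0           # bookkeeping beyond Q's own states
        for symbol in symbols:
            state = transitions[(state, symbol)]
            steps += 1
        return state in accepting, steps

    # Q: a 2-state machine accepting strings with an even number of 1s.
    trans = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd',  ('odd', '1'): 'even'}
    print(simulate(trans, {'even'}, 'even', '1101'))   # (False, 4)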

This is quite common in human thought.  For example, we learn grammar
before we learn what grammar is.  We sometimes cannot explain the process
by which we solve a problem.  We misjudge what we would do in some
circumstances.  The last case is one where we form an approximation Q of
our own mind P; Q uses less memory but sometimes gives wrong results,
P(Q,x) = Q(x) != P(x).  We cannot predict for which x we will make
mistakes.  Often the best we can do is a probabilistic model tuned to
minimize the expected error over some assumed distribution of x.  For
example, we might use an order-3 statistical model trained on a corpus of
text to approximate a language model, as sketched below.
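
A minimal sketch of such an approximation (Python; I read "order-3" as
conditioning on the previous three characters, as in data compression
usage, and the corpus is just a toy string):

    from collections import Counter, defaultdict

    def train_order3(corpus):
        # For each 3-character context, count which characters follow it.
        counts = defaultdict(Counter)
        for i in range(len(corpus) - 3):
            counts[corpus[i:i+3]][corpus[i+3]] += 1
        return counts

    def predict(counts, context):
        # Estimated distribution over the next character given a context.
        c = counts.get(context)
        total = sum(c.values()) if c else 0
        return {ch: n / total for ch, n in c.items()} if total else {}

    model = train_order3("the cat sat on the mat")
    print(predict(model, "the"))   # {' ': 1.0} in this tiny corpus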

This implies that AI design is experimental.  We cannot predict what an AI
will do.  Nor can one generation in a process of recursive self-improvement
predict what the next generation will do.  This is true at all levels of
intelligence (or, more precisely, at all memory sizes, since memory is a
prerequisite for intelligence).


-- Matt Mahoney, [EMAIL PROTECTED]
