Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:
--- Stan Nilsen <[EMAIL PROTECTED]> wrote:

Matt,

Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult; it is probably deeper than I am.
The AIXI paper is essentially a proof of Occam's Razor. The proof uses a formal model of an agent and an environment as a pair of interacting Turing machines exchanging symbols. In addition, at each step the environment also sends a "reward" signal to the agent. The goal of the agent is to maximize the accumulated reward. Hutter proves that if the environment is computable or has a computable probability distribution, then the optimal behavior of the agent is to guess at each step that the environment is simulated by the shortest program consistent with all of the interaction observed so far. This optimal behavior is not computable in general, which means there is no upper bound on intelligence.
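
(Purely as an illustration of the setup described above, and not something taken from the paper: below is a toy Python sketch of the agent/environment interaction loop. The real AIXI agent is not computable, so the "shortest consistent program" step is replaced here by a brute-force search over a tiny, hand-picked hypothesis list; all names in the sketch are invented for this example.)

def environment(action, step):
    """A toy computable environment: the observation alternates 0,1,0,1,...
    and the agent is rewarded for predicting it."""
    observation = step % 2
    reward = 1 if action == observation else 0
    return observation, reward

# A stand-in for "all programs, ordered by length": each entry is a notional
# description length plus a function mapping a step number to a predicted
# observation.
HYPOTHESES = [
    (1, lambda t: 0),            # "always 0"
    (1, lambda t: 1),            # "always 1"
    (2, lambda t: t % 2),        # "alternate 0,1"
    (2, lambda t: (t + 1) % 2),  # "alternate 1,0"
]

def shortest_consistent(history):
    """Return the shortest hypothesis consistent with the (step, observation)
    pairs seen so far (the step AIXI performs over all possible programs)."""
    for length, h in sorted(HYPOTHESES, key=lambda p: p[0]):
        if all(h(t) == obs for t, obs in history):
            return h
    return HYPOTHESES[0][1]  # fall back to the simplest hypothesis

def run(steps=10):
    history, total = [], 0
    for t in range(steps):
        model = shortest_consistent(history)   # guess the environment
        action = model(t)                      # act as if the guess is true
        obs, reward = environment(action, t)
        history.append((t, obs))
        total += reward
    return total

print("accumulated reward:", run())

(Running it prints an accumulated reward of 9 out of a possible 10: the agent loses one step while the simpler "always 0" hypothesis is still consistent with what it has seen, then locks on to the alternating pattern.)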
Nonsense. None of this follows from the AIXI paper. I have explained why several times in the past, but since you keep repeating these kinds of declarations about it, I feel obliged to repeat that these assertions are speculative extrapolations that are completely unjustified by the paper's actual content.

Yes it does.  Hutter proved that the optimal behavior of an agent in a Solomonoff distribution of environments is not computable.  If it were computable, then there would be a finite solution that was maximally intelligent according to Hutter and Legg's definition of universal intelligence.
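
(For reference, and assuming I have their notation roughly right, Legg and Hutter's "universal intelligence" of an agent \pi is a complexity-weighted sum of its expected rewards over a class E of computable environments:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}

where K(\mu) is the Kolmogorov complexity of the environment \mu, i.e. the length of the shortest program that computes it, and V^{\pi}_{\mu} is the expected total reward that \pi accumulates in \mu. Since K is itself not computable, the measure cannot simply be evaluated, and Hutter's AIXI agent, which is optimal with respect to this kind of mixture, is likewise not computable.)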

Still more nonsense: as I have pointed out before, Hutter's implied definitions of "agent" and "environment" and "intelligence" are not connected to real world usages of those terms, because he allows all of these things to depend on infinities (infinitely capable agents, infinite numbers of possible universes, etc.).

If he had used the terms "djshgd", "uioreou" and "astfdl" instead of "agent", "environment" and "intelligence", his analysis would have been fine, but he did not. Having appropriated those terms he did not show why anyone should believe that his results applied in any way to the things in the real world that are called "agent" and "environment" and "intelligence". As such, his conclusions were bankrupt.

Having pointed this out for the benefit of others who may have been overly impressed by the Hutter paper just because it looked like impressive maths, I have no interest in discussing this yet again.



Richard Loosemore
