"Physics envy" is what we call the quest for a simple theory of AGI,
analogous to the small set of equations in physics that explain everything
in the universe. Alas, Shane Legg proved there is no such thing: there is
no simple, universal prediction algorithm. If there were, it would be
Solomonoff induction, the formalization of Occam's Razor, but that is not
computable. Powerful learners are necessarily complex.

Legg's paper, "Is there an Elegant Universal Theory of Prediction?":
https://arxiv.org/abs/cs/0606070

Proof sketch: suppose you had a simple, universal learner that could take
as input any computable sequence of bits and eventually guess a program
that generates it. Then I can write an equally simple generator that your
predictor will guess wrong every time: my program runs a copy of your
predictor on the sequence so far and outputs the opposite bit.
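The diagonalization can be sketched in a few lines of Python. The
"predictor" here is a toy stand-in (a majority-vote guesser) for the
hypothetical universal learner; the adversary works the same way against
any computable predictor you plug in:

```python
def majority_predictor(history):
    """Toy predictor: guess the most common bit seen so far (0 on ties).
    Stand-in for the hypothetical simple, universal learner."""
    return 1 if sum(history) * 2 > len(history) else 0

def adversarial_sequence(predictor, n):
    """Generate n bits, each the opposite of the predictor's guess
    on the sequence so far -- the predictor is wrong every time."""
    history = []
    for _ in range(n):
        guess = predictor(history)
        history.append(1 - guess)  # output the opposite bit
    return history

bits = adversarial_sequence(majority_predictor, 10)
# Count how often the predictor got a bit right: zero times.
errors = sum(majority_predictor(bits[:i]) != bits[i] for i in range(len(bits)))
```

The adversary is barely more complex than the predictor it defeats, which
is the point: no fixed simple learner can predict every simple generator.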

In the neat vs. scruffy debate, the scruffies won.

On Mon, Nov 18, 2019, 3:20 PM <rounce...@hotmail.com> wrote:

> hang on - i thought Korrellan was talking about me?  shit im getting
> paranoid...

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T28a97a3966a63cca-Me60c5bfbde52ac3209599f75
Delivery options: https://agi.topicbox.com/groups/agi/subscription
