Occam's Razor is true because, for every possible probability distribution
over the infinite set of possible theories described by strings, each
theory can be less likely than only a finite set of theories: if a theory
has probability p, then fewer than 1/p theories can each be more likely,
because their probabilities would have to sum past 1. So every theory is
more likely than all but finitely many of the infinitely many longer
theories. This is true in any language used to describe the theories.
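
Here is a toy check of that bound in Python (just my own illustration; the
particular strings and probabilities are arbitrary, and the leftover mass
stands in for the infinite tail of longer theories not listed):

from fractions import Fraction

# An arbitrary, deliberately non-Occam-looking assignment of probability
# to a few short binary strings; the remaining 1/20 of the mass belongs
# to the infinite tail of longer theories not listed here.
prob = {"0": Fraction(1, 10), "1": Fraction(1, 20),
        "00": Fraction(3, 10), "01": Fraction(1, 5),
        "10": Fraction(1, 10), "11": Fraction(1, 5)}

def count_more_likely(p):
    # How many listed theories are strictly more likely than probability p?
    return sum(1 for q in prob.values() if q > p)

for theory, p in prob.items():
    beaten_by = count_more_likely(p)
    assert beaten_by < 1 / p   # fewer than 1/p theories can beat probability p
    print(theory, "is less likely than", beaten_by, "theories (bound:", 1 / p, ")")

However you shuffle the numbers, no theory can ever be beaten by more than
a finite number of others, so most of the infinitely many longer theories
must be less likely.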

By "theory" I mean a description or program that is consistent with past
observations and makes predictions. I suppose you would need to determine
experimentally which languages work best. I think they would be
human-understandable languages, because humans are doing the experiments.
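
To make "theory" concrete, here is a minimal sketch in the same spirit
(again only my own illustration; the candidate programs and their outputs
are invented): a candidate counts as a theory if its output reproduces the
observations seen so far, and whatever comes after that prefix is its
prediction.

observed = "010101"

# Invented candidate "programs", represented here just by the strings
# they would output.
candidates = {
    "repeat 01": "0101010101",
    "copy the data then print 1s": "0101011111",
    "print all zeros": "0000000000",
}

def is_theory(output, observations):
    # Consistent with past observations: the output starts with them.
    return output.startswith(observations)

def prediction(output, observations):
    # The part of the output beyond the observed prefix is the prediction.
    return output[len(observations):]

for name, output in candidates.items():
    if is_theory(output, observed):
        print(name, "predicts", prediction(output, observed))

By the counting argument above, any prior over all such programs has to
favor a short description like "repeat 01" over all but finitely many of
the longer ones that also fit the data.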

On Mon, Jun 29, 2020, 2:29 PM <immortal.discover...@gmail.com> wrote:

> But Matt, if we use a language that is easiest to compute in our observed
> universe, and penalize larger systems, then we are really just leveraging
> physics and a penalization. We already know this in the original Occam's
> Razor: Leverage physics and make the algorithm as small as possible (but
> not too small).
