Never having been much of a fan of Goertzel's arguments about AGI, it took
some doing to get me to bother reading his April 14, 2020 paper "Grounding
Occam's Razor in a Formal Theory of Simplicity
<https://arxiv.org/pdf/2004.05269.pdf>".  I only did so because, while
looking for yesterday's keynote address by Joscha Bach to Goertzel's AGI
conference, I fell into a live video feed that, at that particular moment,
featured a researcher talking my language about Solomonoff Induction.  It
turned out to be Arthur Franz of occam.com.ua, with whom I had communicated
a couple of years ago.  So I kept watching.

When they solicited questions from the viewing audience, I asked a couple
about Solomonoff Induction.  Not entirely to my surprise, these annoyed the
panel (Franz excepted).  The question that gave the most offense was posed
in response to an assertion by Alexey Potapov, to the effect that
Solomonoff Induction can be biased toward any prior one wishes by one's
choice of reference Turing machine.  So I asked whether there might be a
notion of Turing machine complexity.  This triggered Potapov, who shut
down the topic by referring to Goertzel's aforelinked paper as "The
Answer".
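
For anyone who wants the substance of the exchange: Potapov's point is the
standard one that the Solomonoff prior is defined only relative to a
reference universal Turing machine, and the invariance theorem pins it
down only up to a machine-dependent constant.  Roughly (my notation, not
the panel's):

    K_U(x) <= K_V(x) + c(U,V)

for any two universal machines U and V, where the constant c(U,V) is
essentially the length of a V-emulator written for U and does not depend
on x.  The induced priors 2^-K_U(x) and 2^-K_V(x) therefore agree only up
to the constant factor 2^c(U,V), so a sufficiently perverse choice of U
can smuggle any finite bias into the prior.  My question was simply
whether that constant itself points at a notion of the complexity of the
reference machine, i.e. whether some Turing machines are objectively
simpler than others.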

Well, OK.  So now where are we?  Let's take this quote from Goertzel:

"However, all these applications of Occam’s Razor either rely on very
specialized formalizations of the 'simplicity' concept (e.g. shortest
program length), or neglect to define simplicity at all."

The notion that universal computation is a "very specialized
formalization" is _exactly_ the kind of oracular nonsense that has
narrowed my NOT filter on Goertzel over the years.
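
To spell out why: "shortest program length" is the Kolmogorov/Solomonoff
formalization, and in Solomonoff's form the prior it induces is (my gloss,
not Goertzel's, eliding the usual prefix/monotone machine details):

    M(x) = sum over programs p with U(p) = x of 2^-|p|

where U is a universal machine, i.e. one that can emulate every other
computer, and the sum ranges over every program that outputs x, not just
the shortest.  A construction that quantifies over all computable
hypotheses at once is about as far from "very specialized" as a
formalization can get.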

Can anyone be bothered to read this paper and report back?  I certainly
can't, given my other PRIORities.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T37756381803ac879-Ma686ba608b53c36b0af228c2