Hi all,

I'd like to apologize for some comments I made in my post last
night, which I intended purely in the spirit of silliness, but
which some people apparently took too seriously.

Email communication lacks nuance sometimes, as we all know.

In my post I was humorously pursuing the line of thought that
AIXI and related work is "useless" by comparing it to the approach
of writing literary masterpieces by randomly typing characters
into a computer until a literary masterpiece appears.

But of course, this is just a funny analogy, neither a precise
nor a serious one.

I am sure if we had been sitting around in a café BS-ing about
AGI systems, no one would have been offended....  But then
from my tone of voice, the look on my face, etc., it would have
been apparent to anyone listening that I was just exaggerating
things for comedy value (self-perceived comedy value, I must
admit; my sense of humor can be somewhat peculiar ;-), and that
I actually do admire the AIXI work as truly excellent
mathematics...

Please recall that I'm a math PhD, and the notion of "uselessness"
has a long history in mathematics!  G.H. Hardy wrote a famous
essay ("A Mathematician's Apology," 1940) about the beautiful and
glorious uselessness of number theory, his field of math.  He
said one of the reasons he chose it was precisely BECAUSE it
was useless, and he found this fact very aesthetically satisfying.

Of course, he would be surprised right now to see the NSA
hiring so many number theorists to help with their cryptographic
work ;-)

What this shows is that, in all seriousness, it's almost impossible
to tell in advance what will turn out to be useful and what will not.

After all, who'da thunk that the math of infinite-dimensional spaces
would turn out to be central to our understanding of the physical
universe (but there Hilbert spaces are, at the center of quantum
theory).

In a temporary fit of foolishness, I thought everyone on the list
understood this, and understood that I understood
this, so that no one would take my joking around about AIXI too
seriously....  But, as I said above, email lacks nuance sometimes...
(Bring on the metaverse, please!!!)

Anyway, in the rest of this mail I will clarify my actual thoughts
about AIXI (which I have mentioned in previous mails anyway,
but some folks might not have been on the relevant lists at
the appropriate times).

I seriously do NOT think there is any practical value to be
gotten out of trying to create a pragmatic AGI system by
"scaling AIXI down."  My own feeling is that creating a
pragmatic AGI system will require quite different ideas from
those in AIXI, which suffice for creating AGI only given
infinite or at least massive computational resources.

However, that doesn't imply that further, interesting
theoretical ideas can't be obtained by trying to scale AIXI
down.  Maybe they will be.  And maybe some of these
further theoretical ideas will help someone make progress
toward a pragmatic AGI design.  Obviously, it's nearly
impossible to rule out any direction of scientific progress
in advance.  (Though, as Richard points out, in actually
doing science oneself, one has to make judgment calls, and
my personal judgment has been that this is not the best path
to pursue, nor even among the best 3 or 4 paths.)

Perhaps the main contribution of the AIXI work, in my
view, is that it provides a rigorous mathematical refutation
of the idea that "AGI is impossible."  It shows that, in case
anyone had any doubt, the problem of creating AGI is in
essence a problem of coping with limited space and time
resources.  Now, this was intuitively obvious to me (and
many others) already.  But the AIXI and AIXItl work shows
it quite impressively and definitively, because it shows that
if one makes sufficiently generous assumptions regarding
space and time resources, one can achieve maximally
powerful AGI using a very simple algorithm.
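
For concreteness (and quoting from memory, so check Hutter's
papers for the precise form), the AIXI agent's decision rule at
cycle k, with horizon m, is the expectimax expression

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m)
         \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, the q's range over
programs ("environments"), \ell(q) is the length of q, and the
a's, o's, and r's are actions, observations, and rewards.  Note
the 2^{-\ell(q)} factor: that is exactly the Occam prior I will
come back to below.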

Another way in which it's instructive lies in the sheer
complexity of the mathematics involved.  The fact that such
deep and complex mathematical manipulations are needed
to prove such an intuitively obvious point (maximal
intelligence given infinite resources) tells us something about
our current mathematical and theoretical constructs, and how
awkwardly they apply to AGI and cognition.  Reading
Hutter's proofs reminds me somewhat of reading physics
books written before Newton.  In that period of physics, so
much work and effort and intelligence went into doing things
in very complicated ways, because the foundational concepts
needed to "make simple things look simple" hadn't been
invented yet.

Finally, there are at least two principles underlying AIXI
that I think are applicable to realistic-resources AGI systems:

-- probabilistic reasoning (though a realistic-resources
system can only approximate it, not exactly achieve it)
-- Occam's Razor (preferring the simpler explanation)

These two aspects are key to AIXI and I also think they
are central to achieving AGI using realistic resources.
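
To make that concrete, here is a toy sketch in Python (purely my
own illustration; the hypotheses, bit-lengths, and data are all
made up, and this has nothing to do with Hutter's formalism or
Novamente's actual code): score each hypothesis by its likelihood
on the data, weighted by a 2^(-description length) Occam prior,
a crude finite analogue of AIXI's weighting over programs.

def occam_posterior(hypotheses, data):
    """Posterior weight of each hypothesis: likelihood x 2^(-description length)."""
    scores = []
    for name, length_bits, likelihood_fn in hypotheses:
        prior = 2.0 ** (-length_bits)      # Occam's Razor: shorter hypotheses get more prior mass
        likelihood = 1.0
        for x in data:
            likelihood *= likelihood_fn(x) # probabilistic reasoning: multiply in each observation
        scores.append((name, prior * likelihood))
    total = sum(s for _, s in scores)
    return [(name, s / total) for name, s in scores]

# Two hypotheses about a coin: "fair" (cheap to describe) versus a
# finely tuned bias (more bits to describe, hence penalized a priori).
hypotheses = [
    ("fair coin, p=0.5",   5, lambda x: 0.5),
    ("biased coin, p=0.7", 15, lambda x: 0.7 if x == 1 else 0.3),
]

data = [1, 1, 0, 1, 1, 1, 0, 1]   # mostly heads
for name, w in occam_posterior(hypotheses, data):
    print(name, "-> posterior %.3f" % w)

On this small sample the fair-coin hypothesis still wins, because
the biased hypothesis's likelihood advantage hasn't yet paid for
its extra description length.  That trade-off is the whole point.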

However, in a realistic-resources AGI, these aspects
have to be tangled up with a lot of other aspects to be useful.
And they may be achieved indirectly, as consequences of other
principles.  In Novamente they are in fact included
explicitly.  But in the human brain, I suspect that
-- approximate probabilistic reasoning is a consequence
of Hebbian learning (a toy sketch below illustrates this)

-- Occam's Razor is a consequence of energy minimization,
a major design principle in the brain
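
Here is the toy sketch of the first conjecture (again just my own
illustration, not a model of any actual neural data; the "ground
truth" probability is made up): a Hebbian-style update with decay
drives a synaptic weight toward the empirical frequency with which
neuron B fires when neuron A fires, i.e. toward P(B|A).

import random

# If we nudge the weight w toward 1 whenever A and B co-fire, and
# toward 0 when A fires alone, w converges (stochastically) to the
# conditional probability P(B=1 | A=1).
random.seed(0)
TRUE_P_B_GIVEN_A = 0.8
LEARNING_RATE = 0.01

w = 0.5                                   # "synaptic weight" from A to B
for _ in range(20000):
    a = 1                                 # consider only trials where A fires
    b = 1 if random.random() < TRUE_P_B_GIVEN_A else 0
    w += LEARNING_RATE * a * (b - w)      # Hebbian potentiation plus passive decay

print("learned weight %.3f ~ P(B|A) = %.1f" % (w, TRUE_P_B_GIVEN_A))

So purely local co-activation statistics end up encoding a
probability, with no explicit probabilistic machinery anywhere.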

Still, in spite of these high-level commonalities between AIXI
and realistic-resources AGI systems, and in spite of the use of
AIXI as a demonstration that "AGI is all about coping with
resource restrictions," as I said above, I really don't think that
"scaling AIXI down" is the best way to think about AGI
design.

I note that Shane Legg, who just spent a few years working on
AIXI-related stuff for his PhD thesis, is now working on his
own AGI system --- not by "scaling AIXI down," but rather
(according to my understanding) by taking some ideas from
neuroscience, some original algorithmic/structural ideas, and
some general inspiration from his AIXI work.   My feeling
is that this sort of integrative and pragmatic approach is more
likely to succeed in terms of actually getting working AGI
software to exist...

-- Ben


Ben Goertzel wrote:

Sorry Shane, I guess I got carried away with my sense of humor ...

No, I don't really think AIXI is useless in a mathematical, theoretical sense.
I do think it's a dead end in terms of providing guidance to
pragmatic AGI design, but that's another story.

I will send a clarifying email to the list, I certainly had no serious
intention to offend people...

Ben


Shane Legg wrote:
Ben,

So you really think AIXI is totally "useless"?  I haven't been reading
Richard's comments; indeed, I gave up reading his comments some
time before he got himself banned from sl4.  However, it seems that
you in principle support what he's saying.  I just checked his posts
and can see why they don't make sense; however, I know very well that
shouting rather than reasoning on the internet is a waste of time.

My question to you then is a bit different.  If you believe that AIXI is
totally a waste of time, why is it that you recently published a book
with a chapter on AIXI in it, and now think that AIXI and related research
should be a significant part of what the SIAI does in the future?

Shane




-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983
