Yes, if we live in a universe that has Turing-uncomputable physics, then
obviously AIXI is not necessarily going to be capable of adequately dealing
with that universe ... nor is an AGI based on digital computer programs
necessarily going to be able to equal human intelligence.

In that case, we might need to articulate new computational models
reflecting the actual properties of the universe (i.e., new models that
relate to the newly understood universe in the same way that AIXI relates
to an assumed-computable universe).  And we might need to build new kinds
of computer hardware that make appropriate use of this Turing-uncomputable
physics.

I agree this is possible.  I also see no evidence for it.  This is
essentially the same hypothesis that Penrose has put forth in his books The
Emperor's New Mind and Shadows of the Mind, and I found his arguments there
completely unconvincing.  Ultimately his argument comes down to:

A)  mathematical thinking doesn't feel computable to me, therefore it
probably isn't

B) we don't have a unified theory of physics, so when we do find one it
might imply the universe is Turing-uncomputable

Neither of those points constitutes remotely convincing evidence to me, nor
is either one easily refutable.

I do have a limited argument against these ideas, which has to do with
language.  My point is that, if you take any uncomputable universe U, there
necessarily exists some computable universe C such that

1) there is no way to distinguish U from C based on any finite set of
finite-precision observations

2) there is no finite set of sentences in any natural or formal language
(where by "language" I mean a sequence of symbols drawn from some discrete
alphabet) that applies to U but does not also apply to C

To me, this takes a bit of the bite out of the idea of an uncomputable
universe.

Another way to frame this is: I think the notion of a computable universe is
effectively equivalent to the notion of a universe that is describable in
language or comprehensible via finite-precision observations.
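
To make claim 1 concrete: any finite set of finite-precision observations
is itself just a finite data object, so a computable universe can reproduce
it by simple lookup.  Here is a toy Python sketch of that construction --
the function U and the whole setup are illustrative placeholders of my own,
not anything from the AIXI formalism:

import math

def observe(universe, query, precision):
    # A measurement: evaluate the universe at `query` and round the
    # result to `precision` decimal digits.  Any finite experimental
    # record is a finite list of (query, precision, value) triples.
    return round(universe(query), precision)

# Stand-in for a universe U governed by an "uncomputable" law.  (A
# genuinely uncomputable U could not be written in Python at all, which
# is itself part of the point; math.pi is just a placeholder here.)
U = lambda q: math.pi * q

# A finite, finite-precision observation record of U.
record = [(q, 6, observe(U, q, 6)) for q in range(10)]

def computable_counterpart(record):
    # Build a trivially computable universe C from the finite record: a
    # lookup table plus an arbitrary default.  C is computable by
    # construction, because the record is a finite object.
    table = {(q, p): v for (q, p, v) in record}
    return lambda q, p: table.get((q, p), 0.0)

C = computable_counterpart(record)

# C agrees with U on every observation actually made, so this finite
# record cannot distinguish the two universes.
assert all(C(q, p) == v for (q, p, v) in record)

Of course, a real C would have to match U on all *possible* finite
observation records, not just this one; the lookup-table trick is only
meant to show why finite-precision data can never pin down uncomputability.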

And the deeper these discussions get, the more I think they belong on an
agi-phil list rather than an AGI list ;-) ... I like these sorts of ideas,
but they really have little to do with creating AGI ...

-- Ben G

On Mon, Oct 20, 2008 at 11:23 AM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Ben,
>
> The most extreme case is if we happen to live in a universe with
> uncomputable physics, which of course would violate the AIXI
> assumption. This could be the case merely because we have physical
> constants that have no algorithmic description (but perhaps still have
> mathematical descriptions). As a concrete example, let's say some
> physical constant turns out to be a (whole-number) multiple of
> Chaitin's Omega. Omega cannot be computed, but it can be approximated
> (slowly), so we could after a long time suspect that we had determined
> the first 20 digits (although we would never know for sure!). If a
> physical constant turned out to match (some multiple of) these, we
> would strongly suspect that the rest of the digits matched as well.
>
> (Of course, the actual value of Omega depends on the model of
> computation employed, so it would be very surprising indeed if the
> physical constant matched Omega for one of our standard computational
> models...)
>
> AIXI would never accept this inductive evidence.
>
> This is similar to Wei Dai's argument about aliens offering humans a
> box that seems to be a halting oracle.
>
> I think there is a less extreme case to be considered (meaning, I
> think there is a broader way in which we might say AIXI cannot
> "understand" uncomputable entities the way we can), but the argument
> is probably clearer for the extreme case, so I will leave it at that
> for now.
>
> Clearly, this argument is very "type 2" at the moment. What I *really*
> would like to discuss is, as you put it, the set of sufficient
> mathematical axioms for (partially-)logic-based AGI such as
> OpenCogPrime.
>
> --Abram
>
> On Mon, Oct 20, 2008 at 9:45 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > I do not understand what kind of understanding of noncomputable numbers
> > you think a human has that AIXI could not have.  Could you give a
> > specific example of this kind of understanding?  What is some fact about
> > noncomputable numbers that a human can understand but AIXI cannot?  And
> > how are you defining "understand" in this context?
> >
> > I think uncomputable numbers can be indirectly useful in modeling the
> > world even if the world is fundamentally computable.  This is proved by
> > differential and integral calculus, which are based on the continuum
> > (most of whose points are uncomputable numbers), and which are extremely
> > handy for analyzing real, finite-precision data ... more so, it seems,
> > than "computable analysis" variants.
> >
> > But, I think AIXI or other AI systems can understand how to apply
> > differential calculus in the same sense that humans can...
> >
> > And, neither AIXI nor a human can display a specific example of an
> > uncomputable number.  But, both can understand the diagonalization
> > constructs that lead us to believe uncomputable numbers "exist" in some
> > sense of the word "exist".
> >
> > -- Ben G
> >
> > On Sun, Oct 19, 2008 at 9:33 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> >>
> >> Ben,
> >>
> >> How so? Also, do you think it is nonsensical to put some probability
> >> on noncomputable models of the world?
> >>
> >> --Abram
> >>
> >> On Sun, Oct 19, 2008 at 6:33 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >> >
> >> > But: it seems to me that, in the same sense that AIXI is incapable of
> >> > "understanding" proofs about uncomputable numbers, **so are we
> >> > humans** ...
> >> >
> >> > On Sun, Oct 19, 2008 at 6:30 PM, Abram Demski <[EMAIL PROTECTED]>
> >> > wrote:
> >> >>
> >> >> Matt,
> >> >>
> >> >> Yes, that is completely true. I should have worded myself more
> >> >> clearly.
> >> >>
> >> >> Ben,
> >> >>
> >> >> Matt has sorted out the mistake you are referring to. What I meant was
> >> >> that AIXI is incapable of understanding the proof, not that it is
> >> >> incapable of producing it. Another way of describing it: AIXI could
> >> >> learn to accurately mimic the way humans talk about uncomputable
> >> >> entities, but it would never invent these things on its own.
> >> >>
> >> >> --Abram
> >> >>
> >> >> On Sun, Oct 19, 2008 at 4:32 PM, Matt Mahoney <[EMAIL PROTECTED]>
> >> >> wrote:
> >> >> > --- On Sat, 10/18/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> >> >> >
> >> >> >> No, I do not claim that computer theorem-provers cannot
> >> >> >> prove Goedel's Theorem. It has been done. The objection applies
> >> >> >> specifically to AIXI -- AIXI cannot prove Goedel's theorem.
> >> >> >
> >> >> > Yes it can. It just can't understand its own proof in the sense of
> >> >> > Tarski's undefinability theorem.
> >> >> >
> >> >> > Construct a "predictive" AIXI environment as follows: the
> >> >> > environment's output symbol does not depend on anything the agent
> >> >> > does.  However, the agent receives a reward when its output symbol
> >> >> > matches the next symbol input from the environment.  Thus, the
> >> >> > environment can be modeled as a string that the agent has the goal
> >> >> > of compressing.
> >> >> >
> >> >> > Now encode in the environment a series of theorems followed by
> >> >> > their proofs.  Since proofs can be mechanically checked, and
> >> >> > therefore found given enough time (if the proof exists), the
> >> >> > optimal strategy for the agent, according to AIXI, is to guess
> >> >> > that the environment receives a series of theorems as input, and
> >> >> > that the environment then proves them and outputs the proofs.
> >> >> > AIXI then replicates its guess, thus correctly predicting the
> >> >> > proofs and maximizing its reward.  To prove Goedel's theorem, we
> >> >> > simply encode it into the environment after a series of other
> >> >> > theorems and their proofs.
> >> >> >
> >> >> > -- Matt Mahoney, [EMAIL PROTECTED]
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > Ben Goertzel, PhD
> >> > CEO, Novamente LLC and Biomind LLC
> >> > Director of Research, SIAI
> >> > [EMAIL PROTECTED]
> >> >
> >> > "Nothing will ever be attempted if all possible objections must be
> first
> >> > overcome "  - Dr Samuel Johnson
> >> >
> >>
> >
> >
> >
> > --
> > Ben Goertzel, PhD
> > CEO, Novamente LLC and Biomind LLC
> > Director of Research, SIAI
> > [EMAIL PROTECTED]
> >
> > "Nothing will ever be attempted if all possible objections must be first
> > overcome "  - Dr Samuel Johnson
> >
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


