This group, as in most AGI discussions, will use logic and statistical
theory loosely.  We have to.  One reason is that we - thinking entities -
do not know everything, and so our reasoning is based on fragmentary
knowledge.  In that situation the boundaries of logical reasoning in
thought, both natural and artificial, are going to be transgressed.
However, knowing that this will be the case in AGI, we can acknowledge it
and try to figure out algorithms that will tend to ground our would-be
programs.

Now Solomonoff Induction and Algorithmic Information Theory are a little
different.  They deal with concrete data spaces.  We can and should
question how relevant those concrete sample spaces are to general
reasoning about the greater universe of knowledge, but the fact that they
deal with concrete spaces means that they might be logically bound.  But
are they?  If an idealism is both concrete (too concrete for our uses)
and not logically computable, then we really have to be wary of trying to
use it.
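
For concreteness, the usual statement of the prior, as I understand it
(the notation here is mine, so check it before relying on it), assigns
to a bit string x the weight

    M(x) = \sum_{p : U(p) = x...} 2^{-|p|}

where U is a fixed universal prefix machine and the sum runs over every
program p whose output begins with x.  Evaluating that sum exactly would
require knowing which programs halt, and that is where the
incomputability comes from.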

If Solomonoff Induction is incomputable, that does not prove that it is
illogical.  But if it is incomputable, it is illogical to believe that it
can be used reliably.
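
To make that concrete, here is a minimal sketch, in Python, of the kind
of bounded approximation one is forced into.  The toy two-bit machine is
my own invention for illustration, not Solomonoff's construction, and
the step and length bounds are exactly the concessions that make the
thing computable - and that reduce it to a lower-bound estimate:

from itertools import product

def run(program, max_steps):
    """Interpret a bit string on a toy machine: 00 emits 0, 01 emits 1,
    10 jumps back to the start, 11 halts.  Returns (output, whether the
    halt was the final instruction), or None if the step budget runs out
    or execution falls off the end of the program."""
    out, pc, steps = [], 0, 0
    while pc + 1 < len(program):
        if steps >= max_steps:
            return None              # cannot tell: looping or just slow
        op = program[pc:pc + 2]
        if op == "00":
            out.append("0")
            pc += 2
        elif op == "01":
            out.append("1")
            pc += 2
        elif op == "10":
            pc = 0                   # unconditional jump: a sure loop
        else:                        # "11": halt
            return "".join(out), pc + 2 == len(program)
        steps += 1
    return None                      # ran past the end: not counted

def approx_prior(x, max_len=12, max_steps=200):
    """Lower bound on the prior of x: sum 2**-len(p) over programs p of
    at most max_len bits that are seen, within max_steps, to halt on a
    final "11" with output starting with x.  Requiring the halt to sit
    at the very end keeps the counted set of programs prefix-free."""
    total = 0.0
    for n in range(2, max_len + 1, 2):
        for bits in product("01", repeat=n):
            result = run("".join(bits), max_steps)
            if result is not None:
                output, clean_halt = result
                if clean_halt and output.startswith(x):
                    total += 2.0 ** -n
    return total

print(approx_prior("0"), approx_prior("1"), approx_prior("01"))

Raising max_len or max_steps can only raise the estimate, never lower
it, and there is no computable point at which you know you are close to
the true value.  That is the unreliability I am pointing at.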

Solomonoff Induction has been around long enough for serious
mathematicians to examine its validity.  If it were a genuinely sound
method, mathematicians would have accepted it.  However, if Solomonoff
Induction is incomputable in practice, it would be so unreliable that
top mathematicians would tend to choose more productive and interesting
subjects to study.  As far as I can tell, Solomonoff Induction exists
today within the backwash of AI communities.  It has found new life in
these kinds of discussion groups, where most of us do not have the skill
or the time to critically examine the basis of every theory that is put
forward.  The one test that we can make is whether a method that is
being presented shows some reliability in our programs, which constitute
mini-experiments.  Logic and probability pass that smell test, even
though we know that our use of them in AGI is not ideal.

Jim Bromer


