Jim Bromer wrote:
> Last week I came up with a sketch that I felt showed that Solomonoff
> Induction was incomputable in practice using a variation of Cantor's
> Diagonal Argument.

Cantor proved that there are more sequences (infinite length strings) than
there are (finite length) strings, even though both sets are infinite. This
means that some, but not all, sequences have finite length descriptions or
are the output of finite length programs (which is the same thing in a more
formal sense). For example, the digits of pi or sqrt(2) are infinite length
sequences that have finite descriptions (or finite programs that output
them). There are many more sequences that don't have finite length
descriptions, but unfortunately I can't describe any of them except to say
they contain infinite amounts of random data.
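
As a concrete illustration (my own sketch, not part of Cantor's argument),
here is the diagonal construction in Python: given any enumeration of
infinite binary sequences, for example the outputs of all programs listed in
some order, you can always build a sequence that the enumeration misses.
The particular enumeration below is hypothetical and only there to make the
sketch runnable.

# Each infinite sequence is represented lazily as a function from index to bit.

def enumeration(i):
    # Hypothetical enumeration: sequence i is the binary expansion of 1/(i+2).
    # Any countable listing of sequences would do here.
    def seq(n):
        x = 1.0 / (i + 2)
        bit = 0
        for _ in range(n + 1):
            x *= 2
            bit, x = int(x), x - int(x)
        return bit
    return seq

def diagonal(enum):
    # Flip the n-th bit of the n-th listed sequence.
    return lambda n: 1 - enum(n)(n)

d = diagonal(enumeration)
for i in range(10):
    assert d(i) != enumeration(i)(i)   # d differs from every listed sequence
print([d(n) for n in range(10)])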

Cantor does not prove that Solomonoff induction is not computable. That was
proved by Kolmogorov (and also by Solomonoff). Solomonoff induction says to
use the shortest program that outputs the observed sequence to predict the
next symbol. However, there is no procedure for finding the length of the
shortest description. The proof sketch is that if there were, then I could
describe "the first string that cannot be described in less than a million
bits" even though I just did, in far fewer than a million bits. The formal
proof is at
http://en.wikipedia.org/wiki/Kolmogorov_complexity#Incomputability_of_Kolmogorov_complexity
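
Here is that argument as a hedged code sketch (my notation; the function
kolmogorov_complexity is hypothetical, since the whole point is that no such
computable function can exist):

from itertools import count, product

def kolmogorov_complexity(s):
    # Hypothetical: assume for contradiction that K(x) is computable.
    raise NotImplementedError("no such computable function exists")

def first_string_not_describable_in(n_bits):
    # If K were computable, this short program (plus the number n_bits,
    # about log2(n_bits) bits) would itself be a short description of the
    # string it returns, contradicting K(s) >= n_bits for large n_bits.
    for length in count(0):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if kolmogorov_complexity(s) >= n_bits:
                return s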


I think your confusion comes from using the uncomputability of Solomonoff
induction to question its applicability. That is an experimental question,
not one of mathematics. The validity of using the shortest or simplest
explanation of the past to predict the future was first observed by William
of Ockham in the 1300s. It is standard practice in all fields of science.
The minimum description length principle is applicable to all branches of
machine learning.
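
For example, here is a toy two-part MDL comparison (the data and the model
costs are numbers I made up for illustration): choose the hypothesis that
minimizes the bits needed to state the model plus the bits needed to encode
the data under it.

from math import log2

data = "0010001000010000100100010"   # made-up binary observations

def code_length_bits(data, p_one, model_bits):
    # model_bits to state the model, plus the Shannon code length of the
    # data under a Bernoulli(p_one) source.
    n1 = data.count("1")
    n0 = len(data) - n1
    return model_bits + n1 * -log2(p_one) + n0 * -log2(1 - p_one)

p_hat = data.count("1") / len(data)
fair   = code_length_bits(data, 0.5,   model_bits=0)   # fair coin, free to state
biased = code_length_bits(data, p_hat, model_bits=8)   # assume 8 bits to state p
best = "fair" if fair < biased else "biased"
print(f"fair: {fair:.1f} bits, biased: {biased:.1f} bits "
      f"-> MDL prefers the {best} model")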

However, in the conclusion
of http://mattmahoney.net/dc/dce.html#Section_Conclusion I argue for
Solomonoff induction on the basis of physics. Solomonoff induction supposes
that all observable strings are finite prefixes of computable sequences.
Occam's Razor might not hold if it were possible for the universe to produce
uncomputable sequences, i.e. infinite sources of random data. I argue that is
not possible because the observable universe is finitely computable according
to the laws of physics as they are now understood.

 -- Matt Mahoney, matmaho...@yahoo.com




________________________________
From: Jim Bromer <jimbro...@gmail.com>
To: agi <agi@v2.listbox.com>
Sent: Wed, July 14, 2010 11:29:13 AM
Subject: [agi] Comments On My Skepticism of Solomonoff Induction


Last week I came up with a sketch that I felt showed that Solomonoff
Induction was incomputable in practice using a variation of Cantor's Diagonal
Argument.  I wondered if my argument made sense or not.  I will explain why I
think it did.
 
First of all, I should have started out by saying something like, "Suppose
Solomonoff Induction were computable," since there is some reason why people
feel that it isn't.
 
Secondly, I don't think I needed to use Cantor's Diagonal Argument (for the
in-practice case). It would have been sufficient to point out that, since it
is impossible to say whether or not the probabilities ever approach any
sustained (collared) limits, given the lack of an adequate mathematical
definition of the concept of "all programs", it would be impossible to claim
that they are actual representations of the probabilities of all programs
that could produce certain strings.
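
For concreteness, the quantity I am questioning is, if I have the standard
definition right, usually written as

  M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

where U is a universal prefix Turing machine, |p| is the length of program p
in bits, and the sum runs over every program p whose output begins with x.
My complaint concerns what "every program" means in that sum.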
 
But before I start to explain why I think my variation of the Diagonal Argument 
was valid, I would like to make another comment about what was being claimed.
 
Take a look at an n-ary expansion of the square root of 2, such as its
decimal or binary expansion.  Either expansion is an infinite string.  To say
that the algorithm that produces the value is "predicting" the value is a
tortured use of the word 'prediction'.  Now, I have less than perfect
grammar, but the idea of prediction is so important in the field of
intelligence that I do not feel that this kind of reduction of the concept of
prediction is illuminating.

 
Incidentally, there are infinitely many ways to produce the square root of 2
(sqrt 2 +1-1, sqrt2 +2-2, sqrt2 +3-3, ...).  So the idea that the square root
of 2 is unlikely is another stretch of conventional thinking.  But since
there are infinitely many ways for a program to produce any number (that can
be produced by a program), we would imagine that the probability assigned to
any one of the infinitely many ways to produce the square root of 2
approaches 0 but never reaches it.  We can imagine it, but we cannot prove
that this occurs in Solomonoff Induction, because Solomonoff Induction is not
limited to just this class of programs (which could be proven to approach a
limit).  For example, we could make a similar argument for any number,
including a 0 or a 1, which would mean that the infinite string of digits for
the square root of 2 is just as likely as the string "0".
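
For that particular class of padded programs (sqrt2 +1-1, sqrt2 +2-2, ...),
the limit can at least be illustrated concretely.  Here is a small sketch,
where the program lengths are numbers I made up for illustration only:

# Assumed, made-up lengths: the shortest sqrt(2) program is BASE_LEN bits and
# each "+k-k" padding step adds PAD_LEN bits.  Under the usual 2**-length
# weighting, the weights shrink geometrically, so the total stays finite and
# is dominated by the shortest program.
BASE_LEN = 20
PAD_LEN = 8

total = sum(2.0 ** -(BASE_LEN + k * PAD_LEN) for k in range(1000))
print(total)             # close to 2**-20 / (1 - 2**-8), a finite limit
print(2.0 ** -BASE_LEN)  # the shortest program's weight alone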
 
But the reason why I think a variation of the diagonal argument can work in
my argument is this: since we cannot prove that the infinite computations of
the probabilities that a program will produce a string will ever approach a
limit, then to use the probabilities reliably (even as an infinite
theoretical method) we would have to find some way to rearrange the
computations of the probabilities so that they could.  While the number of
ways to rearrange the ordering of a finite number of things is finite no
matter how large that number is, the number of possible ways to rearrange an
infinite number of things is infinite.  I believe that this problem of
finding the right rearrangement of an infinite list of computations of
values, after the calculation of the list is finished, qualifies for an
infinite-to-infinite diagonal argument.
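
As a side note on why the ordering of an infinite computation can matter at
all, here is a standard example (not mine, and not specific to Solomonoff
Induction): rearranging a conditionally convergent series changes its limit.

from math import log

# 1 - 1/2 + 1/3 - 1/4 + ... converges to ln(2); the rearrangement that takes
# one positive term followed by two negative terms converges to ln(2)/2.
def original(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    # blocks of: +1/(2k-1) - 1/(4k-2) - 1/(4k)
    return sum(1/(2*k - 1) - 1/(4*k - 2) - 1/(4*k) for k in range(1, n_blocks + 1))

print(original(100000), log(2))        # ~0.69315
print(rearranged(100000), log(2) / 2)  # ~0.34657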

 
I want to add one more thing to this in a few days.
 
Jim Bromer

