On Sat, Mar 1, 2008 at 12:37 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
> I'm an undergrad who's been lurking here for about a year. It seems to me
> that many people on this list take Solomonoff Induction to be the ideal
> learning technique (for unrestricted computational resources). I'm wondering
> what justification there is for the restriction to Turing-machine models of
> the universe that Solomonoff Induction uses. Restricting an AI to computable
> models will obviously make it more realistically manageable. However,
> Solomonoff induction needs infinite computational resources, so this clearly
> isn't a justification.
>
> My concern is that humans make models of the world that are not computable;
> in particular, I'm thinking of the way physicists use differential
> equations. Even if physics itself is computable, the fact that humans use
> incomputable models of it remains. Solomonoff Induction itself is an
> incomputable model of intelligence, so an AI that used Solomonoff Induction
> (even if we could get the infinite computational resources needed) could
> never understand its own learning algorithm. This is an odd position for a
> supposedly universal model of intelligence IMHO.
>
> My thinking is that a more-universal theoretical prior would be a prior over
> logically definable models, some of which will be incomputable.
>
> Any thoughts?
>

I agree with the gist of this. Learning is a decision-making process,
and to make it practical, both the results of possible decisions (the
state of the system after it is changed to reflect what was learned)
and the decision process itself must be practically feasible.
Solomonoff induction optimizes for short programs (the result of
learning) while ignoring both the decision process and the runtime of
the resulting programs. In AGI, the system must add to its knowledge
incrementally, so the result of any given learning 'step' is a small
modification to the existing system, which must both keep the system
within feasibility limits (not overflowing allotted disk space and the
like) and be sufficiently efficient.
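
To put a formula on the "short programs" point: relative to a
universal prefix machine U, the Solomonoff prior weights a string x
roughly as

  M(x) = \sum_{p : U(p) = x*} 2^{-\ell(p)}

where \ell(p) is the length in bits of program p and the sum runs over
programs whose output begins with x. The weight depends only on
\ell(p); how long p takes to run appears nowhere, which is exactly the
feasibility problem above.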

-- 
Vladimir Nesov
[EMAIL PROTECTED]
