I'll try to answer this one...

1)
In a nutshell, the algorithmic info. definition of intelligence is this:
intelligence is the ability of a system to achieve a goal that is randomly
selected from the space of all computable goals, according to some defined
probability distribution over computable-goal space.
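To make that concrete, here is a toy sketch of my own (NOT Hutter or Legg's
actual construction; the goal names, program lengths, and agent are all made
up): tag each goal with the length of the shortest program we happen to know
for it, weight it 2^-length in Solomonoff style, and score an agent by its
weighted value across goals.

```python
import math

# Toy goals, each tagged with a (made-up) shortest-program length.
# The prior weight 2^-length echoes a Solomonoff-style distribution
# over computable goals: simple goals count for more.
goals = {
    "predict_constant": {"program_length": 3},
    "predict_parity":   {"program_length": 8},
    "play_chess":       {"program_length": 20},
}

def universal_score(agent):
    """Weighted sum over goals: sum_g 2^-K(g) * V(agent, g)."""
    return sum(2.0 ** -g["program_length"] * agent(name)
               for name, g in goals.items())

# A hypothetical narrow agent: great at simple goals, useless at chess.
def narrow_agent(goal_name):
    return {"predict_constant": 1.0,
            "predict_parity":   0.5,
            "play_chess":       0.0}[goal_name]

score = universal_score(narrow_agent)
# Dominated by the simplest goal: 2^-3 * 1.0 + 2^-8 * 0.5 + 0
```

The point the toy makes is only structural: the measure depends entirely on
the weighting over goal space, which is what point B below turns on.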

2)
Of course, if one had a system that was highly intelligent according to the
above definition, it would be a great compressor.
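The compressor connection comes from the fact that prediction and compression
are two sides of the same coin: an idealized arithmetic coder spends -log2 p
bits per symbol under its predictive model, so a better predictor gives a
shorter code. A toy sketch (the data and both predictors are invented for
illustration):

```python
import math

data = "ababababab"

def code_length_bits(data, predict):
    """Ideal code length: -log2 of the predicted probability of each symbol."""
    total = 0.0
    history = ""
    for sym in data:
        total += -math.log2(predict(history, sym))
        history += sym
    return total

# A uniform predictor: knows nothing, pays 1 bit per symbol.
uniform = lambda hist, sym: 0.5

# A predictor that has learned the alternating pattern (with smoothing).
def alternating(hist, sym):
    if not hist:
        return 0.5
    expected = "b" if hist[-1] == "a" else "a"
    return 0.9 if sym == expected else 0.1

bits_uniform = code_length_bits(data, uniform)    # 10 bits
bits_smart = code_length_bits(data, alternating)  # far fewer
```

An agent that models its environment well enough to achieve arbitrary
computable goals would, by the same logic, assign high probability to what
actually happens, and hence compress well.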

3)
There are theorems stating that if you have a great compressor, then by
wrapping a little code around it, you can get a system that will be highly
intelligent according to the algorithmic info. definition.  The catch is
that this system (as constructed in the theorems) will use an insanely,
infeasibly large amount of computational resources.

What are the weaknesses of the approach:

A)
The real problem of AI is to make a system that can achieve complex goals
using a feasible amount of computational resources.

B)
Workable strategies for achieving complex goals with a feasible amount of
computational resources may be highly dependent on the particular
probability distribution over goal space mentioned in 1 above.

For this reason, I'm not sure the algorithmic info. approach is of much use
for building real AGI systems.

I note that Shane Legg is now directing his research toward designing
practical AGI systems along totally different lines, not directly based on
any of the alg. info. stuff he worked on in his thesis.

However, Marcus Hutter, Juergen Schmidhuber and others are working on
methods of "scaling down" the approaches mentioned in 3 above (AIXItl, the
Godel Machine, etc.) so as to yield feasible techniques.  So far this has
led to some nice machine learning algorithms (e.g. the parameter-free
temporal difference reinforcement learning scheme in part of Legg's thesis,
and Hutter's new work on Feature Bayesian Networks and so forth), but
nothing particularly AGI-ish.  But personally I wouldn't be harshly
dismissive of this research direction, even though it's not the one I've
chosen.

-- Ben G




On Fri, Dec 26, 2008 at 3:53 PM, Richard Loosemore <r...@lightlink.com> wrote:

> Philip Hunt wrote:
>
>> 2008/12/26 Matt Mahoney <matmaho...@yahoo.com>:
>>
>>> I have updated my universal intelligence test with benchmarks on about
>>> 100 compression programs.
>>>
>>
>> Humans aren't particularly good at compressing data. Does this mean
>> humans aren't intelligent, or is it a poor definition of intelligence?
>>
>>  Although my goal was to sample a Solomonoff distribution to measure
>>> universal
>>> intelligence (as defined by Hutter and Legg),
>>>
>>
>> If I define intelligence as the ability to catch mice, does that mean
>> my cat is more intelligent than most humans?
>>
>> More to the point, I don't understand the point of defining
>> intelligence this way. Care to enlighten me?
>>
>>
> This may or may not help, but in the past I have pursued exactly these
> questions, only to get such confusing, evasive and circular answers, all of
> which amounted to nothing meaningful, that eventually I (like many others)
> have just had to give up and not engage any more.
>
> So, the real answers to your questions are that no, compression is an
> extremely poor definition of intelligence; and yes, defining intelligence to
> be something completely arbitrary (like the ability to catch mice) is what
> Hutter and Legg's analyses are all about.
>
> Searching for previous posts of mine which mention Hutter, Legg or AIXI
> will probably turn up a number of lengthy discussions in which I took a deal
> of trouble to debunk this stuff.
>
> Feel free, of course, to make your own attempt to extract some sense from
> it all, and by all means let me know if you eventually come to a different
> conclusion.
>
>
>
>
> Richard Loosemore
>
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying."
-- Groucho Marx


