Ah, so you do not accept AIXI either.

Put this way, your complex-system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if that
turns out to be true) could block those.

Is this the best way to understand your argument? That is, is the key
idea "intelligence is a complex global property, so we can't define
it"? If so, my original blog post is way off. My interpretation was
more like "intelligence is a complex global property, so we can't
predict its occurrence based on local properties". These are two very
different arguments. Perhaps you are arguing both points?

On Wed, Jun 25, 2008 at 6:20 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
[..]
> The confusion in our discussion has to do with the assumption you listed
> above:  "...I am implicitly assuming that we have some exact definition of
> intelligence, so that we know what we are looking for..."
>
> This is precisely what we do not have, and which we will quite possibly
> never have.
[..]
> Richard Loosemore

