Abram Demski wrote:
Ah, so you do not accept AIXI either.

Goodness me, no ;-). As far as I am concerned, AIXI is a mathematical formalism that has loaded words like 'intelligence' attached to it, after which the formalism is taken to be about the real things in the world (i.e., intelligent systems) that those words normally signify.
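
(For concreteness, here is the expectimax expression usually given for AIXI, in Hutter's notation; I am writing it from memory in LaTeX, so treat the details as indicative only:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         [r_k + \cdots + r_m] \sum_{q : U(q, a_1..a_m) = o_1 r_1 .. o_m r_m}
         2^{-\ell(q)}

where U is a universal Turing machine, q ranges over its programs, and \ell(q) is the length of q. Every symbol in that expression is precisely defined; the question is whether attaching the word 'intelligence' to it is anything more than a labelling decision.)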



Put this way, your complex system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if so) can
block those.

Is this the best way to understand your argument? Meaning, is the key
idea "intelligence is a complex global property, so we can't define
it"? If so, my original blog post is way of. My interpretation was
more like "intelligence is a complex global property, so we can't
predict its occurring based on local properties". These are two very
different arguments. Perhaps you are arguing both points?

My feeling is that it is a mixture of the two. My main concern is not to *assert* that intelligence is a complex global property, but to ask "Is there a risk that intelligence is a complex global property?" and then to follow that with a second question, namely "If it is complex, then what impact would this have on the methodology of AGI?".

The answers that I tried to bring out in that paper were that (1) there is a substantial risk that all intelligent systems must be at least partially complex (reason: nobody seems to know how to build a complete intelligence without including a substantial dose of the kind of tangled mechanisms that almost always give rise to complexity), and (2) the impact on AGI methodology is potentially devastating, and (disturbingly) so subtle that it would be possible for a skeptic to deny it forever.

The impact would be devastating because the current approach to AI, if applied to a situation in which the target was a complex system, would just run around in circles forever, always building systems that were kind of smart but did not scale up to the real thing, or that only worked if we hand-crafted every piece of knowledge the system used, and so on. In fact, the predicted rate of progress in AI research would show exactly the pattern that has existed for the last fifty years. As I said in another response to someone recently, all of the progress that has been made is essentially a result of AI researchers implicitly using their own intuitions about how their minds work, while at the same time (mostly) denying that they are doing this.

So, going back to your question. I do think that if intelligence is a (partially) complex global property, then it cannot be defined in a way that allows us to go from a definition to a prescription for a mechanism (i.e., we cannot simply set it up as an optimization problem). That is not the direct purpose of my argument, but it is a corollary. Your second point is closer to the goal of my argument, but I would rephrase it: getting a real intelligence (an AGI) to work will probably require at least part of the system to have a disconnected relationship between its global behavior and its local mechanisms, so in that sense we would not be able to 'predict' the occurrence of intelligence from local properties.
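
As a toy illustration of that global/local disconnect (my example, nothing to do with any particular AGI design): Conway's Game of Life has a local rule you can state in one sentence, yet the general question of what a pattern will eventually do is undecidable, and even tiny patterns have to be run to be understood. A minimal Python sketch:

from collections import Counter

def step(live):
    """One generation; `live` is a set of (x, y) coordinates of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire local rule: birth on exactly 3 neighbours, survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The R-pentomino: five cells whose fate (it only settles down after
# roughly 1100 generations) was discovered by running it, not by
# analysing the rule.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
for _ in range(100):
    cells = step(cells)
print(len(cells))  # try predicting this number from the rule alone

The cheapest known way to find out what such a system does is to run it; there is no shortcut from the local rule to the global behaviour. That is the kind of relationship I am suggesting may hold between the mechanisms of an AGI and its intelligence.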

Remember the bottom line. My only goal is to ask how different methodologies would fare if intelligence is complex.




Richard Loosemore

