Ed Porter wrote:
Richard,
In your blog you said:

"- Memory. Does the mechanism use stored information about what it was doing fifteen minutes ago, when it is making a decision about what to do now? An hour ago? A million years ago? Whatever: if it remembers, then it has memory.

"- Development. Does the mechanism change its character in some way over time? Does it adapt?

"- Identity. Do individuals of a certain type have their own unique identities, so that the result of an interaction depends on more than the type of the object, but also the particular individuals involved?

"- Nonlinearity. Are the functions describing the behavior deeply nonlinear?

"These four characteristics are enough. Go take a look at a natural system in physics, or an engineering system, and find one in which the components of the system interact with memory, development, identity and nonlinearity. You will not find any that are understood.

"...

"Notice, above all, that no engineer has ever tried to persuade one of these artificial systems to conform to a pre-chosen overall behavior...."

I am quite sure there have been many AI systems that have had all four of these features, that have worked pretty much as planned, whose behavior is reasonably well understood (although not totally understood, but neither is anything that is truly complex in the non-Richard sense), and whose overall behavior has been as chosen by design (with a little experimentation thrown in). To be fair, I can't remember any off the top of my head, because I have read about so many AI systems over the years. But recording episodes is very common in many prior AI systems. So is adaptation. Nonlinearity is almost universal, and Identity as you define it would be pretty common.

So, please --- other people on this list, help me out --- but I am quite sure systems have been built that prove the above quoted statement to be false.

Ed,

You have put words into my mouth: I have never tried to argue that a narrow-AI system cannot work at all.

(Narrow AI is what you are referring to above: it must be narrow AI, because no fully functioning *AGI* system has been delivered yet, and you refer to systems that have already been built.)

The point of my argument is to claim that such narrow AI systems CANNOT BE EXTENDED TO BECOME AGI SYSTEMS. The complex systems problem predicts that when people allow those four factors listed above to operate in a full AGI context, where the system is on its own for a lifetime, the complexity effects will then dominate.

In effect, what I am claiming is that people have been masking the complexity effects by mollycoddling their systems in various ways, and by not allowing them to run for long periods of time, or in general environments, or to ground their own symbols.

I would predict that when people do this "mollycoddling" of their AI systems, the complex systems effects would not become apparent very soon.

Guess what? That exactly fits the observed history of AI. When people try to make these AI systems operate in ways that bring out the complexity, the systems fail.



Richard Loosemore




P.S. Please don't call it "Richard-complexity"... it has nothing to do with me: this is "complexity" the way that lots of people understand the term. If you need to talk about the concept that is the opposite of simple, it would be better to use "complicated". Personalizing it just creates confusion.









