John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]]
My take on this is completely different.

When I say "Narrow AI" I am specifically referring to something that is
so limited that it has virtually no chance of becoming a general
intelligence.  There is more to general intelligence than just throwing
a bunch of Narrow AI ideas into a pot and hoping for the best. If that
were all it took, we would have had AGI long before now.

It's an opinion that AGI could not be built out of a conglomeration of
narrow-AI subcomponents. Also, there are many things that COULD be built with
narrow AI that we have not even scratched the surface of, due to a number of
different limitations, so saying that we would have achieved AGI long ago is
an exaggeration.
I don't think a General Intelligence could be built entirely out of narrow AI components, but it might well be a relatively trivial add-on. Just consider how much of human intelligence is demonstrably "narrow AI" (well, not artificial, but you know what I mean) -- object recognition, for example. Then start trying to guess how much of the part that we can't prove a classification for is likely to be a narrow-intelligence component. In my estimation (without factual backing), less than 0.001 of our intelligence is General Intelligence, possibly much less.
Consciousness and self-awareness are things that come as part of the AGI
package.  If the system is too simple to have/do these things, it will
not be general enough to equal the human mind.


I feel that general intelligence may not require consciousness and
self-awareness. I am not sure of this and may yet prove myself wrong. To equal
the human mind you need these things, of course, and to satisfy the sci-fi
fantasy world's appetite for intelligent computers you would need to
incorporate them as well.

John
I'm not sure of the distinction that you are making between consciousness and self-awareness, but even most complex narrow-AI applications require at least rudimentary self-awareness. In fact, one could argue that all object-oriented programming with inheritance has rudimentary self-awareness (called "this" in many languages, but in others called "self"). This may be too rudimentary, but my feeling is that it's an actual model (implementation?) of what the concept of self has evolved from.
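
To make that concrete, here is a toy Python sketch of the kind of rudimentary self-reference I mean (the class names and the whole scenario are invented purely for illustration; I'm not claiming any real system works this way):

class Agent:
    def __init__(self, name):
        # the object's minimal model of its own identity
        self.name = name

    def describe_self(self):
        # "self" lets the object report on its own type and state
        return "I am %s, an instance of %s" % (self.name, type(self).__name__)


class VisionAgent(Agent):
    # a narrow-AI-style specialist that inherits the rudimentary self-model
    def __init__(self, name, camera_id):
        Agent.__init__(self, name)
        self.camera_id = camera_id

    def describe_self(self):
        # the subclass extends the self-description it inherited
        return Agent.describe_self(self) + ", watching camera %d" % self.camera_id


if __name__ == "__main__":
    a = VisionAgent("recognizer-1", camera_id=3)
    print(a.describe_self())
    # prints: I am recognizer-1, an instance of VisionAgent, watching camera 3

The point is only that the object's behaviour is parameterized by a handle to itself, which is about as rudimentary as a self-model can get.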

As to an AGI not being conscious.... I'd need to see a definition of your terms, because otherwise I've *got* to presume that we have radically different definitions. To me an AGI would not only need to be aware of itself, but also to be aware of aspects of its environment that it could effect changes in, and of the difference between them, though that might well be learned. (Zen: "Who is the master who makes the grass green?", and a few other koans, when "solved", imply that in humans the distinction between internal and external is a learned response.) Perhaps the diagnostic characteristic of an AGI is that it CAN learn that kind of thing. Perhaps not, too. I can imagine a narrow AI that was designed to plug into different bodies, and in each case learn the distinction between itself and the environment before proceeding with its assignment. I'm not sure it's possible, but I can imagine it.
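
For what it's worth, here is a toy sketch (entirely my own invention, with a made-up body and made-up names) of one crude way such a plug-into-any-body agent might draw the self/environment line: issue random motor commands and tag each sensor channel by whether it correlates with those commands.

import random

class ToyBody:
    # invented stand-in for whatever body the agent gets plugged into:
    # three sensors echo the agent's own actuators, two report an
    # external "environment" the agent cannot influence
    def step(self, commands):
        proprio = [c + random.gauss(0, 0.01) for c in commands]
        external = [random.gauss(0, 1.0) for _ in range(2)]
        return proprio + external

def classify_sensors(body, n_act=3, n_sensors=5, trials=200):
    # correlate each sensor channel with each self-issued command;
    # channels the agent can drive count as "self", the rest as "environment"
    corr = [[0.0] * n_act for _ in range(n_sensors)]
    for _ in range(trials):
        cmd = [random.choice([-1.0, 1.0]) for _ in range(n_act)]
        reading = body.step(cmd)
        for i, r in enumerate(reading):
            for j, c in enumerate(cmd):
                corr[i][j] += c * r / trials
    return ["self" if max(abs(v) for v in row) > 0.5 else "environment"
            for row in corr]

if __name__ == "__main__":
    print(classify_sensors(ToyBody()))
    # expected: ['self', 'self', 'self', 'environment', 'environment']

Nothing in there is anything like consciousness, of course; the only point is that the boundary could in principle be learned rather than built in.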

OTOH, if we take my arguments above too seriously, then medical patients who are "locked in" would be considered not intelligent. This is clearly incorrect. Effectively they aren't intelligent, but that's because of a mechanical breakdown in the sensory/motor area, and that clearly isn't what we mean when we talk about intelligence. But examples of recovered/recovering patients seem to imply that they weren't exactly either intelligent or conscious while they were locked in. (I'm going solely by reports in the popular science press... so don't take this too seriously.) It appears as if, when external sensations are cut off, the mind estivates... at least after a while. Presumably different patients had different causes, and hence at least slightly different effects, but that's my first-cut guess at what's happening. OTOH, the sensory/motor channel doesn't need to be particularly well functioning. Look at Stephen Hawking.
