> From: Charles D Hixson [mailto:[EMAIL PROTECTED]
> >
> I don't think a General Intelligence could be built entirely out of
> narrow AI components, but it might well be a relatively trivial add-on.
> Just consider how much of human intelligence is demonstrably "narrow AI"
> (well, not artificial, but you know what I mean).  Object recognition,
> e.g.  Then start trying to guess how much of the part that we can't
> prove a classification for is likely to be a narrow intelligence
> component.  In my estimation (without factual backing) less than 0.001
> of our intelligence is General Intelligence, possibly much less.
> >

I agree that it may be <1%. Also, from what I've read about brain atrophy
cases, a typical human brain may be able to function relatively normally with
<10% of its mass if the atrophy occurs gradually over time.

> I'm not sure of the distinction that you are making between
> consciousness and self-awareness, but even most complex narrow-AI
> applications require at least rudimentary self awareness.  In fact, one
> could argue that all object oriented programming with inheritance has
> rudimentary self awareness (called "this" in many languages, but in
> others called "self").  This may be too rudimentary, but it's my feeling
> that it's an actual model (implementation?) of what the concept of self
> has evolved from.
> 

Consciousness and awareness are two functions that I was separating out. The
programming-language "this" and "self" are particular to class instances and
can be at the root of the hierarchy tree, but there are many, many this's in a
large OO application. A collective group of them could be considered some sort
of self-awareness, true, and that idea could be fleshed out and expanded upon.
What I have been exploring, though, is whether consciousness, awareness, etc.
have to be present for a general intelligence. The trend is to include them.
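
To make that concrete, here is a toy sketch (my own illustration in Python; the
class names are made up) of what that rudimentary self-reference might look
like: each instance refers to itself via "self", and a containing object keeps
a weak model of its own parts.

  # Each instance holds a reference to itself via "self"; a Collective keeps
  # a (very weak) model of its own components -- the "collective
  # self-awareness" mentioned above, in miniature.
  class Component:
      def __init__(self, name):
          self.name = name  # "self" is this particular instance

      def describe(self):
          return "I am component %s at id %d" % (self.name, id(self))

  class Collective:
      def __init__(self, components):
          self.components = components  # a model of its own parts

      def self_report(self):
          return [c.describe() for c in self.components]

  if __name__ == "__main__":
      group = Collective([Component("vision"), Component("planning")])
      for line in group.self_report():
          print(line)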

> As to an AGI not being conscious.... I'd need to see a definition of
> your terms, because otherwise I've *got* to presume that we have
> radically different definitions.  To me an AGI would not only need to be
> aware of itself, but also to be aware of aspects of its environment
> that it could effect changes in, and of the difference between them,
> though that might well be learned.  (Zen:  "Who is the master who makes
> the grass green?", and a few other koans when "solved" imply that in
> humans the distinction between internal and external is a learned
> response.)  Perhaps the diagnostic characteristic of an AGI is that it
> CAN learn that kind of thing.  Perhaps not, too.  I can imagine a narrow
> AI that was designed to plug into different bodies, and in each case
> learn the distinction between itself and the environment before
> proceeding with its assignment.  I'm not sure it's possible, but I can
> imagine it.

AGI per se may be defined as a lifelike intelligent entity requiring
brain-related things like consciousness. In my mind, I am thinking of general
intelligence without the difficult task of building consciousness (you could
argue a rock has some sort of consciousness). I'm thinking of intelligence as a
sort of self-contained entity that depends upon the state, structure,
complexity and potential of its contained data and representation.

Intelligence would be related to the "energy" transfer needed to extract a
structured data set from a structured data superset. The structured data set
(a query) would have a morphic chain relationship to the structure of the
stored data, and the "energy" required to get it would be inversely
proportional to the "intelligence": lower energy expenditure across query types
implies higher intelligence with respect to those queries. The morphic chain
relationship is basically a subset of a morphism mapping graph; better
intelligence means solving that graph and applying optimization techniques
based on its parameters.
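
Here is a rough Python sketch of how I picture the morphism-graph part (my own
toy illustration, with made-up node names): nodes are data representations,
edges are morphisms with a cost, and answering a query means finding the
cheapest chain of morphisms from the stored form to the shape the query wants.

  # Nodes are data representations, edges are morphisms with a cost; the
  # "energy" of a query is the cost of the cheapest morphism chain from the
  # stored representation to the representation the query asks for.
  import heapq

  def cheapest_chain(graph, start, goal):
      queue = [(0, start)]
      best = {start: 0}
      while queue:
          cost, node = heapq.heappop(queue)
          if node == goal:
              return cost
          for nxt, step in graph.get(node, []):
              if cost + step < best.get(nxt, float("inf")):
                  best[nxt] = cost + step
                  heapq.heappush(queue, (cost + step, nxt))
      return float("inf")

  if __name__ == "__main__":
      graph = {                      # made-up representations and costs
          "stored":   [("indexed", 1), ("raw_scan", 5)],
          "indexed":  [("query_shape", 1)],
          "raw_scan": [("query_shape", 1)],
      }
      print(cheapest_chain(graph, "stored", "query_shape"))  # -> 2
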
Measurement of intelligence (the energy) would basically be counting bit flips
per query, relative to the query's structure and bit count. Knowledge
optimization, such as self-organizing and optimizing the morphism graphs,
naturally affects the potential energy, and things like having the graph
reorganize based on incoming queries are all part of it. But from what I
gather, intelligence is just a bit-and-time (or state) relationship between
sets of bits - that is, for a digital-based intelligence. I don't know whether
an analog-based intelligence would have a similar mathematical structure or
not... I suppose that when you boil it down there'll be particle-wave duality
issues :)
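
And a very rough toy sketch of the bit-flip "energy" measurement (again my own
illustration, under the crude assumption that energy is just the number of bit
positions examined while pulling a structured subset out of the superset):

  # Crude cost model: "energy" = number of bit positions examined while
  # extracting the records that match a masked bit pattern (the "query").
  def extract(superset, pattern, mask):
      bit_ops = 0
      matches = []
      width = max(mask.bit_length(), 1)
      for record in superset:
          bit_ops += width                  # bits compared for this record
          if (record & mask) == (pattern & mask):
              matches.append(record)
      return matches, bit_ops

  if __name__ == "__main__":
      data = [0b1010, 0b1110, 0b0011, 0b1011]        # the structured superset
      subset, energy = extract(data, pattern=0b1000, mask=0b1100)
      print(subset, "energy =", energy)              # lower energy ~ "smarter"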

John

