John,

I'm developing this argument more fully elsewhere, so I'll just give a partial gist. What I'm saying - and I stand to be corrected - is that I suspect literally no one in AI and AGI (and perhaps philosophy), past or present, understands the nature of the tools they are using.

All the tools - all the sign systems currently used - especially language - are actually general-purpose - AS USED BY THE HUMAN BRAIN.

The whole point of just about every word in language is that it constitutes a general, open brief which can be instantiated in any one of an infinite set of ways.

So if I tell you to "handle" an object, or a piece of business - say, "removing a chair from the house" - that word "handle" is open-ended: it gives you vast freedom, within certain parameters, as to how to apply your hand(s) to that object. Your hands can be applied to move a given box, for example, in a vast if not infinite range of positions and trajectories. Such a general, open concept is of the essence of general intelligence, because it means you are immediately ready to adapt to new kinds of situation. If your normal ways of handling boxes are blocked, you are ready to seek out or improvise some strange new contorted two-finger hand position to pick up the box - which also counts as "handling". (And you will actually have done a lot of this.)

So what is the "meaning" of "handle"? Well, to be precise, it doesn't have a single meaning, and isn't meant to - it has a range of possible meanings/references, and you can choose whichever is most convenient in the circumstances.
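To put that in code terms - a minimal sketch of my own, with made-up names like counts_as_handling, not anything drawn from an existing AGI system - compare binding "handle" to one fixed procedure with treating it as an open brief that any new action can satisfy:

# A toy sketch of the "open brief" point, using only the standard
# library. All names here are invented for illustration.

# Narrow reading: "handle" is bound to exactly one concrete procedure.
FIXED_MEANING = {"handle": "grasp_with_two_hands_from_the_sides"}

# Open reading: "handle" names a test that any concrete action may pass.
# The HOW - grip, posture, trajectory - is deliberately left open, so
# improvised methods never seen before qualify automatically.
def counts_as_handling(action):
    return action["hand_contact"] and action["object_moved"]

usual = {"grip": "two_hands_from_the_sides",
         "hand_contact": True, "object_moved": True}
improvised = {"grip": "two_fingers_under_one_edge",  # a contorted novelty
              "hand_contact": True, "object_moved": True}

assert counts_as_handling(usual)
assert counts_as_handling(improvised)  # the open brief admits it too

The narrow version breaks the moment the usual grip is blocked; the open version never specified a grip in the first place.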

The same principles apply to just about every word in language and every unit of logic and mathematics.

But - and correct me - I don't think anyone in AI/AGI is using language or any logico-mathematical system in this general, open-ended way - the way they are actually meant to be used, and the very foundation of General Intelligence.

Language and the other sign systems are always used in AGI in specific ways, to have specific meanings. YKY, typically, wanted a language for his system which had precise meanings. Even Ben, I suspect, may only employ words in an "open" way in the sense that their meanings can be changed with experience - but at any given point their meanings will still have to be specific.
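Purely as my own illustration (not YKY's or Ben's actual designs), the difference might be sketched like this - a meaning that is revisable over time is still one specific thing at each moment of use, whereas an open meaning is a standing range from which the context selects:

# Another toy sketch; all names are invented for illustration.

class RevisableMeaning:
    """Meaning can change with experience, but each use yields one
    specific, fixed sense."""
    def __init__(self, current):
        self.current = current

    def update(self, new_sense):
        self.current = new_sense   # learning revises the meaning...

    def meaning(self):
        return self.current        # ...but it is specific at any point

class OpenMeaning:
    """Meaning is a standing range; each use picks whichever sense is
    most convenient in the circumstances."""
    def __init__(self, candidates):
        self.candidates = candidates

    def meaning(self, fits_context):
        # fits_context scores how well a sense suits the situation
        return max(self.candidates, key=fits_context)

handle = OpenMeaning(["grasp", "carry", "drag", "nudge_with_elbow"])
# Hands full? The sense needing no free hand wins for this context.
print(handle.meaning(lambda s: 1.0 if s == "nudge_with_elbow" else 0.1))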

To be capable of generalising as the human brain does - and of true AGI - you have to have a brain that simultaneously processes on at least two if not three levels, with two or three different sign systems - including both general and particular ones.



John:
>> Charles: I don't think a General Intelligence could be built entirely
>> out of narrow AI components, but it might well be a relatively trivial
>> add-on. Just consider how much of human intelligence is demonstrably
>> "narrow AI" (well, not artificial, but you know what I mean). Object
>> recognition, e.g. Then start trying to guess how much of the part that
>> we can't prove a classification for is likely to be a narrow
>> intelligence component. In my estimation (without factual backing)
>> less than 0.001 of our intelligence is General Intelligence, possibly
>> much less.

John: I agree that it may be <1%.
Oh boy, does this strike me as absurd. I don't have time for the theory right now, but I just had to vent. Percentage estimates strike me as a bit silly, but if you want to aim for one, why not look at both your paragraphs, word by word: "Don't", "think", "might", "relatively", etc. Now which of those words can only be applied to a single type of activity, rather than an open-ended set of activities? Which cannot be instantiated in an open-ended if not infinite set of ways? Which is not a very valuable if not key tool of a General Intelligence that can adapt to solve problems across domains? Language, IOW, is the central (but not essential) instrument of human general intelligence - and I can't think offhand of a single word that is not a tool for generalising across domains, including "Charles H." and "John G.".

In fact, every tool you guys use - logic, maths etc. - is similarly general and functions in similar ways. The above strikes me as a 99% failure to understand the nature of general intelligence.


Mike, you are 100% potentially right, with a margin of error of 110%. LOL!

Seriously, Mike, how do YOU indicate approximations? And how are you differentiating general and specific? And declaring relative absolutes and convenient infinitudes... I'm trying to understand your argument.

John
