Charles: I don't think a General Intelligence could be built entirely out of
narrow AI components, but it might well be a relatively trivial add-on.
Just consider how much of human intelligence is demonstrably narrow AI
(well, not artificial, but you know what I mean). Object recognition, for
example.
I've added some content in the Computational Linguistics section of the AGIRI
Wiki, which Ben outlined:
- Fluid_Construction_Grammar, adapted from the Wikipedia article that I mostly
  authored
- Link Grammar, adapted from Wikipedia
- Language Generation, adapted from Wikipedia
- Word Grammar
John,
I'm developing this argument more fully elsewhere, so I'll just give a
partial gist. What I'm saying - and I stand to be corrected - is that I
suspect that literally no one in AI and AGI (and perhaps philosophy),
present or past, understands the nature of the tools they are using.
So if I tell you to handle an object, or a piece of business - say, removing
a chair from the house - that word "handle" is open-ended and gives you vast
freedom within certain parameters as to how to apply your hand(s) to that
object. Your hands can be applied to move a given box, for
Ben: It's not just that we can CHOOSE the meanings of concepts from a fixed
menu of possibilities ... we CREATE the meanings of concepts as we use them ...
this is how and why concept-meanings continually change over time in
individual minds and in cultures...
Yes. Good point.
Mike,
An interesting paper on the meanings of words is "I don't believe in word
senses" by Adam Kilgarriff. He concludes:
Following a description of the conflict between WSD [Word Sense Disambiguation]
and lexicological research, I examined the concept 'word sense'. It was not
found to be
Steve,
Some odd thoughts in reply. Thanks, BTW, for the article.
1. You don't seem to get what's implicit in the main point - you can't reliably
work out the sense of an enormous number of words by any kind of word lookup
whatsoever. How do you actually work out how to handle the object - the
It's true, a word sense is not a crisp thing like a part-of-speech
... it's more of a cluster among usage-instances...
Yet, this kind of fuzzy, cluster-type category does play an important
role in cognition, no?
ben g
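Ben's "cluster among usage-instances" idea can be sketched concretely. The toy
Python below is entirely illustrative - the sentences, similarity threshold, and
greedy clustering rule are my own assumptions, not anything proposed in this
thread. It groups usage instances of "handle" by the overlap of their context
words, so each resulting cluster plays the role of one fuzzy "sense":

```python
# Illustrative sketch only: a word "sense" as a fuzzy cluster of
# usage-instances, rather than a fixed dictionary entry.
from collections import Counter
from math import sqrt

# Toy usage instances of the word "handle" (hand-picked, hypothetical data):
usages = [
    "handle heavy box using both hands",       # physical manipulation
    "handle wooden chair using careful grip",  # physical manipulation
    "handle customer complaint about billing", # abstract / business
    "handle billing dispute with customer",    # abstract / business
]

def context_vector(sentence):
    """Bag-of-words vector of the context words around 'handle'."""
    return Counter(w for w in sentence.split() if w != "handle")

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(sentences, threshold=0.15):
    """Greedy single-link clustering: a new usage joins an existing
    cluster if it resembles any member; otherwise it seeds a new one."""
    clusters = []
    for s in sentences:
        v = context_vector(s)
        for c in clusters:
            if any(cosine(v, m) >= threshold for m in c):
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters

senses = cluster(usages)
print(len(senses))  # -> 2: two fuzzy "senses" emerge from four usages
```

The point of the sketch is that nothing here is a lookup: the "senses" are not
enumerated in advance, and the boundary between them is just a similarity
threshold - move it and the clusters merge or split, which is exactly the fuzzy,
non-crisp behaviour Ben describes.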
2008/3/27 Stephen Reed [EMAIL PROTECTED]:
Mike,
An interesting paper on
[Warning: a random blurb on the word theme.]
Words and similar things are marvelous high-level training tools. They
provide a uniform interface that allows access to high-level concepts
through low-level standard input, and they make it possible to perform
supervised training without special 'label signals'.
On 27/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
3. While philosophically, intellectually, most people dealing with this
area may expect words to have precise meanings, they know practically and
intuitively that this is impossible and work on the basis that words can
have different
Ben,
I would agree with an even stronger version of your statement: treating word
senses as fuzzy, cluster-type categories in the context of usage-instances is
the only cognitively plausible way for an AGI to comprehend and produce them.
-Steve
Stephen L. Reed
Artificial Intelligence
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 5:30:12 PM
Subject: Re: [agi] Microsoft Launches Singularity