Gary Miller wrote:
> I agree that as humans we bring a lot of general knowledge with us when
> we learn a new domain.  That is why I started off with the general
> conversational domain and am now branching into science, philosophy,
> mathematics and history.  And of course the AI can not make all the
> connections without being extensively interviewed on a subject and
> having a human help clarify its areas of confusion just as a parent
> answers questions for a child or a teacher for a student.  I am not in
> fact trying to take the exhaustive one-domain-at-a-time approach but
> rather to teach it the most commonly known and requested
> information first.  My last email just used that description to identify
> my thoughts on grounding.  I am hoping that by doing this and repeating
> the interviewing process in an iterative development cycle, the bot
> will eventually be able to discuss many different subjects at a
> somewhat superficial level, much the same as most humans are
> capable of.  This is a lot different from the exhaustive definition that
> Cyc provides for each concept.

Gary, I respect the hypothesis you're making here: it is a scientific
hypothesis in the sense of Karl Popper, i.e. it is pragmatically
falsifiable.  You can try with this approach and see how it works.  It is
not identical to the expert systems approach, though it has some
commonalities.

My own intuition is that this approach will not succeed -- that conversing
with humans is not going to get across enough of the tacit, implicit
knowledge that a mind needs to have to really converse intelligently in any
nontrivial subject area.  I think that even if the implicit knowledge seems
to *us* to be there in the conversations, it won't be there *for the system*
unless the system has had some experience gaining implicit knowledge of its
own via nonlinguistic world-interaction.

> I don't think AI is absent sufficient theory, just sufficient execution.

Well, here I profoundly disagree with you.  I think that the
generally-accepted AI theories are profoundly wrong, and extremely limited
in their view of how intelligence must operate.  I think AI's failure to
execute is directly based on the failure of its theories to accept and
encompass the full complexity of the mind.

> I feel like the Cyc Project's heart was in the right place and the level
> of effort was certainly great, but perhaps the purity of their vision
> took priority over usability of the end result.  Is any company actually
> using Cyc as anything other than a search engine yet?
>
> That being said other than Cyc I am at a loss to name any serious AI
> efforts which are over a few years in duration and have more than 5
> man-years' worth of effort (not counting promotional and fundraising).

My Novamente project certainly fits this description.  The Webmind AI
project had about 70 man-years of effort go into it between 1997 and early 2001.
Novamente is Webmind's successor -- different code, different mathematics,
different software architecture, but the same spirit, and building on
Webmind's successes and mistakes.  Novamente has had maybe 7 man-years of
effort go into it since mid-2001.

-- Ben G
