On 4/6/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> Ben:  Are you interested in translating LRRH into Novamente's KR, as a
> demo?

Not really...

Here's the thing: Novamente's KR is very flexible...

So, one could translate LRRH into Novamente-ese in a way that would sorta
resemble "Cyc plus probabilities, with some higher-order functions and
pattern-intensities too"

But, that wouldn't be likely to closely resemble the way LRRH would wind
up being represented in the mind of a Novamente instance that really
understood the story.

So the exercise of explicitly writing LRRH in Novamente's KR would likely
wind up being not only pointless, but actively misleading ;-)

While I do think that a probabilistic-logic-based KR (as NM uses) is a
good choice, I don't think that the compact logical representation a human
would use to explicitly represent a story like LRRH is really the right
kind of representation for deep internal use by an AGI system.  An AGI's
internal representation of a story like this may be logical in form, but it
is going to consist of a very large number of uncertain, contextual
relationships, along with some of the crisper and more encapsulated ones
like those a human would formulate if carrying out the exercise of encoding
LRRH in logic.
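(To make the contrast concrete, here is a toy sketch in Python -- emphatically *not* Novamente's actual format, and every relation name and number below is invented for illustration.  It contrasts a single crisp, Cyc-plus-probabilities-style assertion with the diffuse web of uncertain, context-tagged links an experiential learner might accumulate, and shows one naive way the latter could lend support to the former:

```python
# Toy sketch: crisp hand-encoded assertion vs. many learned, uncertain,
# contextual links.  All names/values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    relation: str      # e.g. "eats", "deceives"
    args: tuple        # argument terms
    strength: float    # probability-like truth value in [0, 1]
    confidence: float  # weight of evidence behind the strength
    context: str = ""  # context in which the link was observed

# The single crisp assertion a human encoder might write:
crisp = Link("eats", ("wolf", "grandmother"), strength=0.99, confidence=0.95)

# The kind of thing learning would produce: many weaker, contextual
# relationships surrounding (and partially implying) the crisp one.
learned = [
    Link("near", ("wolf", "grandmother_house"), 0.8, 0.4, "forest_scene"),
    Link("deceives", ("wolf", "red_riding_hood"), 0.7, 0.5, "dialogue"),
    Link("eats", ("wolf", "grandmother"), 0.9, 0.3, "story_climax"),
    Link("fears", ("red_riding_hood", "wolf"), 0.6, 0.2, "story_climax"),
]

# Aggregate the learned evidence for the crisp claim -- a naive
# confidence-weighted average, purely for illustration:
matching = [l for l in learned
            if (l.relation, l.args) == (crisp.relation, crisp.args)]
support = (sum(l.strength * l.confidence for l in matching)
           / sum(l.confidence for l in matching))
print(round(support, 2))
```

The point of the sketch is only that the "real" representation is the whole list of weak links, not the one tidy line a human would write down.)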

It is for this reason, among others, that I find Cyc-type AI systems a bit
misguided
(another main reason is their lack of effective learning algorithms; and
then there's the fact that the absence of perceptual-motor grounding makes
it difficult for a useful self-model to emerge; etc. etc.)

Hi Ben,

I understand the current situation with Novamente.  It seems that one
fundamental difference between Cyc and Novamente is that Cyc is focused on
the linguistic / symbolic level whereas Novamente is focused on sensory /
experiential learning.

My current intuition is that Cyc's route may achieve "a certain level of
intelligence" *sooner*.  (Although the work done with sensory-based AGI
would probably still be useful.)  This may sound kind of vague, but my
intuition is that if we invest in a Cyc-like AGI for 5 years, it may be able
to converse with humans in a natural language and answer some commonsense
queries (which the current Cyc is actually already somewhat capable of).
But if you invest 5 years in a sensory-based AGI, the resulting AGI baby may
still be at the level of a 3-to-5-year-old human.  It seems that much of
your work may be wasted on dealing with sensory processing and experiential
learning, the latter of which is particularly inefficient.

The Cyc route actually bypasses experiential learning because it allows us
to directly enter commonsense knowledge into its KB.  That is perhaps the
most significant difference between these two approaches.

YKY

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936