Richard Loosemore wrote in his Sun 11/11/2007 11:09 PM post:

>RICHARD####> You are right.  I have only spent about 25 years working on
this problem.  Perhaps, no matter how bright I am, this is not enough to
understand Novamente's promise.

ED####> There are many people who have spent 25 years working on AI but have
not spent the time to understand the multiple threads that make up the
Novamente approach.  From the one paper of yours that I have read, as I
remember it, your major approach to AI was based on a concept of complexity
in which the relationship between the system's lower levels and the
higher-level functions you presumably want it to have is hard for humans to
understand.  This is very different from the Novamente approach, which
involves complexity not so much at the architectural level as at the level
of what will emerge in the self-organizing gen/comp network of patterns and
behaviors that the architecture is designed to grow, all under the constant
watchful eye -- and selective weeding and watering -- of its goal and reward
systems.  As I understand it, the complexity in Novamente is much more like
that of an economy in which semi-rational actors struggle to find and fill a
niche in which they can make a living than the somewhat more anarchic
complexity of the cellular automaton Game of Life.
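
To make that economy analogy a bit more concrete, here is a toy sketch of my
own (purely illustrative Python; it is not Novamente code and assumes nothing
about Novamente's actual data structures) in which patterns earn short- and
long-term importance when they prove useful and pay "rent" otherwise -- the
weeding and watering:

import random

class Agent:
    """A pattern/process competing for attention in a toy economy."""
    def __init__(self, name, usefulness):
        self.name = name
        self.usefulness = usefulness   # hidden probability of being useful
        self.sti = 1.0                 # short-term importance (spendable)
        self.lti = 1.0                 # long-term importance (slow-moving)

random.seed(0)
agents = [Agent("pattern-%d" % i, random.random()) for i in range(20)]

for cycle in range(200):
    # Spend attention only on the currently most important agents.
    active = sorted(agents, key=lambda a: a.sti, reverse=True)[:5]
    for a in active:
        reward = 1.0 if random.random() < a.usefulness else 0.0
        a.sti += reward                       # pay agents that actually helped
        a.lti = 0.99 * a.lti + 0.01 * reward  # slowly build a track record
    for a in agents:
        a.sti *= 0.95                         # rent: decay unless re-earned

survivors = sorted(agents, key=lambda a: a.lti, reverse=True)[:5]
print([(a.name, round(a.usefulness, 2)) for a in survivors])

Run long enough, the patterns that keep earning their keep are the ones the
system keeps paying attention to.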

So perhaps you are like most people who have spent a career in AI, in that,
despite the deep learning you have obtained, you have not spent enough time
thinking about the pieces of Novamente-like approaches.  But it is almost
certain that those 25 years' worth of knowledge would make it much easier
for you to understand a Novamente-like approach than it would be for all but
a very small percentage of this planet's people, if you really wanted to.

>>ED####> I am sure you are smart enough to understand its promise if you
wanted to.  Do you?

>RICHARD####> I did want to.

I did.

I do.

ED####> Great.  If you really do, I would start by reading the papers at
http://www.novamente.net/papers/.  Perhaps Ben could give you a better
reading list than I can.

I don't know about you, Richard, but given my mental limitations, I often
find I have to read some parts of a paper 2 to 10 times to understand them.
Much is usually left unsaid in most papers, even the well-written ones.  You
often have to spend time filling in the blanks and trying to imagine how
what a paper is describing would actually work.  Much of my understanding of
the Novamente approach comes not only from a broad range of reading and
attending lectures in AI, microelectronics, and brain science, but also from
a lot of thinking about what I have read and heard from others, and about
what I have observed over decades of my own thought processes.

(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 11, 2007 11:09 PM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?


Edward W. Porter wrote:
> Richard,
>
> Goertzel claims his planning indicates it is roughly 6 years x 15
> excellent, hard-working programmers, or 90 man-years, to get his
> architecture up and running.  I assume that will involve a lot of “hard”
> mental work.
>
> By “hard problem” I mean a problem for which we don’t have what seems --
> within the Novamente model -- to be a way of handling it at, at least, a
> roughly human level.  We won’t have proof that the problem is not hard
> until we actually get the part of the system that deals with that
> problem up and running successfully.
>
> Until then, you have every right to be skeptical.  But you also have
> the
> right, should you so choose, to open your mind up to the tremendous
> potential of the Novamente approach.
>
>
>>RICHARD####> What would be the solution of the grounding problem?
> ED####> Not hard.  As one linguist said, “Words are defined by the
> company they keep.”  Kinda like how I am guessing Google Sets works, but
> at more different levels in the gen/comp pattern hierarchy and with more
> cross-inferencing between different Google Sets seeds.  The same goes not
> only for words, but for almost all concepts and sub-concepts.  Grounding
> is made out of a lifetime of experience recording such associations and
> the dynamic reactivation of those associations, both in the subconscious
> and the conscious, in response to current activations.
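
A minimal sketch of that "company they keep" idea -- again my own toy
Python, not anything from the Novamente codebase -- that builds
co-occurrence vectors from a tiny corpus and compares words by the contexts
they share:

from collections import Counter, defaultdict
from math import sqrt

# Toy corpus; in a real system the "company a word keeps" would come from
# a lifetime of text and experience, not three sentences.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
]

window = 2
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                vectors[w][words[j]] += 1      # count the company w keeps

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words that keep similar company end up with similar vectors.
print(cosine(vectors["cat"], vectors["mouse"]))    # relatively high
print(cosine(vectors["cat"], vectors["cheese"]))   # lower

In a real system the vectors would sit at many levels of a gen/comp
hierarchy, but the principle is the same.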
>
>>RICHARD####> What would be the solution of the problem of autonomous,
> unsupervised learning of concepts?
> ED####> Not hard!  Read Novamente (or, for a starter, my prior summaries
> of it).  That’s one of its main focuses.
>
>>RICHARD####> Can you find proofs that inference control engines will not
> show divergent behavior under heavy load (i.e., will they degrade
> gracefully when forced to provide answers in real time)?
>
> ED####> Not totally clear.  Brain-level hardware will really help here,
> but what is six orders of magnitude against the potential for
> combinatorial explosion in dynamic activations of something as large and
> high-dimensional as world knowledge?
>
> This issue falls under the
> getting-it-all-to-work-together-well-automatically heading, which I said
> is non-trivial.  But Novamente directs a lot of attention to these
> problems by, among other approaches, (a) using long- and short-term
> importance metrics to guide computational resource allocation, (b)
> keeping a deep memory of which computational patterns have proven
> appropriate in prior similar circumstances, (c) having a gen/comp
> hierarchy of such prior computational patterns that allows them to be
> instantiated in a given case in a context-appropriate way, and (d)
> providing powerful inferencing mechanisms that go way beyond those
> commonly used in most current AIs.
>
> I am totally confident we could get something very useful out of the
> system even if it was not as well tuned as a human brain.  There are all
> sorts of ways you could dampen the potential not only for combinatorial
> explosion, but also for instability.  We probably would start it out
> with a lot of such damping, but over time give it more freedom to
> control its own parameters.
>
>>RICHARD####> Are there solutions to the problems of flexible, abstract
> analogy building?  Language learning?
> ED####> Not hard!  A Novamente-class machine would be like Hofstadter’s
> CopyCat on steroids when it comes to making analogies.
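
To make the analogy point a bit less hand-wavy, here is a tiny
structure-mapping toy of my own (Python, and emphatically not CopyCat or
Novamente code): score candidate object mappings by how many relations they
preserve between two domains.

from itertools import permutations

# Each domain is a set of (relation, arg1, arg2) facts.
solar = {("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun"),
         ("more_massive", "sun", "planet")}
atom  = {("attracts", "nucleus", "electron"),
         ("revolves_around", "electron", "nucleus")}

solar_objs = ["sun", "planet"]
atom_objs = ["nucleus", "electron"]

def score(mapping):
    # Count source relations that survive translation into the target domain.
    translated = {(r, mapping[a], mapping[b]) for (r, a, b) in solar}
    return len(translated & atom)

best = max((dict(zip(solar_objs, perm)) for perm in permutations(atom_objs)),
           key=score)
print(best, score(best))   # sun -> nucleus, planet -> electron, score 2

Real analogy-making has to discover the relations and the candidate objects
for itself, which is where the gen/comp hierarchy and the inference
machinery would come in; this just shows the kind of relational-overlap
scoring involved.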
>
> The gen/comp hierarchy of patterns would not only apply to all the
> concepts that fall directly within what we think of as NL, but also to
> the system’s world knowledge itself, of which such NL concepts and
> their contexts would be a part.  This includes knowledge about its own
> life history, behavior, and the feedback it has received.  Thus, it
> would be fully capable of representing and matching concepts at the
> level humans do when understanding and communicating with NL.  The deep
> contextual grounding contained within such world knowledge, and the
> ability to make inferences from it in real time, would largely solve the
> hard disambiguation problems in natural language recognition and allow
> language generation to be performed rapidly in a way that is appropriate
> to all the levels of context that humans use when speaking.
>
>>RICHARD####> Pragmatics?
> ED####> Not hard!  This follows from the above answer.  Understanding of
> pragmatics would result from the ability to generalize dynamically, from
> prior similar statements in prior similar contexts, about what those
> prior contexts contained.
>
>
>
>>RICHARD####> Ben Goertzel wrote:
>> >Goertzel####> This practical design is based on a theory that is
> fairly complete, but not easily verifiable using current technology.
> The verification, it seems, will come via actually getting the AGI
> built!
>
> ED####>  You and Ben are totally correct.  None of this will be proven
> until it has actually been shown to work.  But significant pieces of it
> have already been shown to work.
>
> I think Ben believes it will work, as do I, but we both agree it will
> not be “verifiable” until it actually does.
>
> As I wrote to Robin Hanson earlier today, the fact that you don’t agree
> with what we view as the relatively high probability of success for our
> approach does not reflect poorly on either your intelligence or your
> knowledge of AI.  If you haven’t spent a lot of time thinking about a
> Novamente-like approach, there is no reason, no matter how bright you
> are, that you should be able to understand its promise.

You are right.  I have only spent about 25 years working on this
problem.  Perhaps, no matter how bright I am, this is not enough to
understand Novamente's promise.


> I am sure you are smart enough to understand its promise if you wanted
> to.  Do you?

I did want to.

I did.

I do.



Richard Loosemore
