Richard,

Goertzel claims his planning indicates it is roughly 6 years x 15
excellent, hard-working programmers, or 90 man-years, to get his
architecture up and running.  I assume that will involve a lot of “hard”
mental work.

By “hard problem” I mean a problem for which we don’t have what seems --
within the Novamente model -- to be a way of handling it at, at least, a
roughly human level.  We won’t have proof that a problem is not hard
until we actually get the part of the system that deals with that problem
up and running successfully.

Until then, you have every right to be skeptical.  But you also have the
right, should you so choose, to open your mind up to the tremendous
potential of the Novamente approach.


>RICHARD####> What would be the solution of the grounding problem?
ED####> Not hard. As one linguist said, “Words are defined by the company
they keep.”  Kinda like how I am guessing Google Sets works, but at more
levels in the gen/comp pattern hierarchy and with more cross-inferencing
between different Google-Sets seeds.  The same goes not only for words,
but for almost all concepts and sub-concepts.  Grounding is built out of a
lifetime of experience recording such associations and the dynamic
reactivation of those associations, both subconscious and conscious, in
response to current activations.
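
To make the “company they keep” idea concrete, here is a minimal toy
sketch of my own -- not Novamente code, and the tiny corpus, window size,
and function names are just things I made up for illustration.  Words that
occur in similar contexts end up with similar co-occurrence vectors, which
is the bare statistical skeleton that a lifetime of recorded associations
would flesh out at many levels of the gen/comp hierarchy.

from collections import Counter, defaultdict
import math

def cooccurrence_vectors(sentences, window=2):
    """Build a context-count vector for each word from raw sentences."""
    vectors = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][words[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse hid from the cat",
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))     # keeps similar company -> higher
print(cosine(vecs["cat"], vecs["chased"]))  # plays a different role -> lower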

>RICHARD####> What would be the solution of the problem of autonomous,
unsupervised learning of concepts?
ED####> Not hard! Read Novamente (or, for a starter, my prior summaries of
it).  That’s one of its main focuses.

>RICHARD####> Can you find proofs that inference control engines will not
show divergent behavior under heavy load (i.e. will they degrade
gracefully when forced to provide answers in real time)?
ED####> Not totally clear.  Brain-level hardware will really help here,
but what is six orders of magnitude against the potential for combinatorial
explosion in dynamic activations of something as large and
high-dimensional as world knowledge?

This issue falls under the
getting-it-all-to-work-together-well-automatically heading, which I said
is non-trivial.  But Novamente directs a lot of attention to this
problem by, among other approaches, (a) using long- and short-term
importance metrics to guide computational resource allocation, (b) having
a deep memory of which computational patterns have proven appropriate in
prior similar circumstances, (c) having a gen/comp hierarchy of such prior
computational patterns that allows them to be instantiated in a given
case in a context-appropriate way, and (d) providing powerful inferencing
mechanisms that go way beyond those commonly used in most current AIs.

I am totally confident we could get something very useful out of the
system even if it were not as well tuned as a human brain.  There are all
sorts of ways you could dampen the potential not only for combinatorial
explosion, but also for instability.  We probably would start it out with
a lot of such damping, but over time give it more freedom to control its
own parameters.
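
To give a concrete feel for the kind of importance-weighted resource
allocation and damping I have in mind, here is a toy sketch of my own --
not Novamente’s actual code; the graph, the numbers, and the function name
are invented for illustration.  Activation spreads out from seed concepts,
each step is scaled by the target’s importance and a damping factor, weak
activations are pruned, and a fixed expansion budget caps total work, so
the spread cannot blow up combinatorially.

import heapq

def spread_activation(graph, importance, seeds,
                      damping=0.5, threshold=0.05, budget=100):
    """Importance-weighted, damped spreading activation.

    graph:      {node: [(neighbor, link_strength), ...]}
    importance: {node: long/short-term importance weight in [0, 1]}
    seeds:      {node: initial activation}
    """
    activation = dict(seeds)
    frontier = [(-a, node) for node, a in seeds.items()]
    heapq.heapify(frontier)
    expansions = 0
    while frontier and expansions < budget:
        neg_a, node = heapq.heappop(frontier)
        expansions += 1
        for neighbor, strength in graph.get(node, []):
            delta = -neg_a * strength * importance.get(neighbor, 0.0) * damping
            if delta < threshold:
                continue  # damping plus the threshold prune the weakest paths
            activation[neighbor] = activation.get(neighbor, 0.0) + delta
            heapq.heappush(frontier, (-delta, neighbor))
    return activation

# Tiny invented knowledge fragment
graph = {
    "dog": [("animal", 0.9), ("bark", 0.8), ("leash", 0.4)],
    "animal": [("living_thing", 0.9)],
    "bark": [("sound", 0.7)],
}
importance = {"animal": 0.9, "bark": 0.6, "leash": 0.2,
              "living_thing": 0.8, "sound": 0.5}
print(spread_activation(graph, importance, {"dog": 1.0}))
# "leash" and "sound" never activate: their weighted contributions fall
# below the threshold -- the kind of damping I described above.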

>RICHARD####> Are there solutions to the problems of flexible, abstract
analogy building?
Language learning?
ED####> Not hard!  A Novamente-class machine would be like Hofstadter’s
CopyCat on steroids when it comes to making analogies.

The gen/comp hierarchy of patterns would not only apply to all the
concepts that fall directly within what we think of as NL, but also to the
system’s world knowledge itself, of which such NL concepts and their
contexts would be a part.  This includes knowledge about its own
life-history, behavior, and the feedback it has received.  Thus, it would
be fully capable of representing and matching concepts at the level humans
do when understanding and communicating with NL.  The deep contextual
grounding contained within such world knowledge and the ability to make
inferences from it in real time would largely solve the hard
disambiguation problems in natural language recognition, and allow
language generation to be performed rapidly in a way that is appropriate
to all the levels of context that humans use when speaking.
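
As a toy illustration of what CopyCat-style analogy over a gen/comp
hierarchy means, here is a sketch of my own -- not Hofstadter’s or
Goertzel’s code; the little is-a table and the two situations are
invented.  Two situations built from very different surface concepts still
match, because their relations line up once each element is generalized to
a shared, non-trivial ancestor in the hierarchy.

# Each situation is a set of (relation, agent, patient) triples.
ISA = {
    "sun": "star", "star": "massive_body", "nucleus": "massive_body",
    "planet": "body", "electron": "body",
    "massive_body": "thing", "body": "thing",
}

def generalizations(concept):
    """Chain of increasingly abstract ancestors, ending at 'thing'."""
    chain = [concept]
    while chain[-1] in ISA:
        chain.append(ISA[chain[-1]])
    return chain

def common_abstraction(a, b):
    """Most specific generalization shared by two concepts, or None."""
    ancestors_b = set(generalizations(b))
    for ancestor in generalizations(a):
        if ancestor in ancestors_b:
            return ancestor
    return None

def analogy_score(situation_a, situation_b):
    """Count relation pairs that match under a non-trivial shared abstraction."""
    score = 0
    for rel_a, x_a, y_a in situation_a:
        for rel_b, x_b, y_b in situation_b:
            gx = common_abstraction(x_a, x_b)
            gy = common_abstraction(y_a, y_b)
            if rel_a == rel_b and gx not in (None, "thing") and gy not in (None, "thing"):
                score += 1
    return score

solar_system = {("attracts", "sun", "planet"), ("orbits", "planet", "sun")}
atom = {("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")}
print(analogy_score(solar_system, atom))  # 2: analogous despite different surface concepts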

>RICHARD####> Pragmatics?
ED####> Not hard! It follows from the above answer.  Understanding of
pragmatics would result from the ability to dynamically generalize, from
prior similar statements made in prior similar contexts, about what those
prior contexts contained.



>RICHARD####> Ben Goertzel wrote:
>>Goertzel####> This practical design is based on a theory that is fairly
complete, but not easily verifiable using current technology.  The
verification, it seems, will come via actually getting the AGI built!
ED####>  You and Ben are totally correct.  None of this will be proven
until it has actually been shown to work.  But significant pieces of it
have already been shown to work.

I think Ben believes it will work, as do I, but we both agree it will not
be “verifiable” until it actually does.

As I wrote to Robin Hanson earlier today, the fact that you don’t agree
with what we view as the relatively high probability of success for our
approach does not reflect poorly on either your intelligence or your
knowledge of AI.  If you haven’t spent a lot of time thinking about a
Novamente-like approach, there is no reason, no matter how bright you are,
that you should be able to understand its promise.

I am sure you are smart enough to understand its promise if you wanted to.
Do you?

Ed Porter

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 11, 2007 4:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?


Edward W. Porter wrote:
> Robin,
>
>
>
> I would be very interested in any comments you might have about the
> paper I emailed you stating my reasons for believing that powerful
> artificial general intelligence (AGI) could be made in 5 to 10 years if
> the right people received the right funding.  Please feel free to point
> out what you perceived to be its faults as well as its strengths.  I
> want very much to learn how to better convey my point to people like
> you.
>
>
>
> I understand why you would say, in the absence of compelling evidence,
> the probability of what I have predicted seems low.  If I had not done
> thousands of hours of reading and thinking about the approach I favor
> and the evidence from brain science and AI research that supports it, I,
> myself, would think its chances low.  Because AI is such a large field
> -- and because, up until recently, trying to build any successful
> whole-mind machine would have been a dead-end  -- even most of the
> leaders in AI have not done enough reading and thinking in this
> particular area to understand it.
>
>
>
> Your response indicates I have failed to explain the extent to which
> the
> Novamente, or Novamente-like, approach I favor provides reasonable
> solutions to major problems in AI.  (I cite Novamente not only because
> it is the best approach I currently know of, but also because a fair
> amount of information is publicly available on it, such as at
> http://www.novamente.net/papers/ , and in books written by Ben Goertzel
> that can be found at Amazon.com.)
>
>
>
> For example, it enables the problem of common sense reasoning to be
> solved because for the first time it will have hardware with the power
> to represent and compute from world knowledge, and it will focus on
> initially guiding such machines in the important task of automatically
> learning such knowledge.
>
>
>
> It solves the problem of providing truly general intelligence by
> providing an architecture that can learn patterns, probabilities, and
> proper inferencing of virtually any type.  It does so because its basic
> learning architecture is based on recording a succession of
> input/pattern-activation states, automatically finding patterns in those
> states, then finding patterns and generalizations composed of those
> patterns, in a multi-level compositional/generalizational hierarchy, all
> while recording indications of the frequencies of all those patterns and
> the contexts in which they are recorded.  The Serre paper I cited in my
> prior long message to you, demonstrates the amazing potential of such
> self learning multi-level compositional/generalizational hierarchies.
>
>
>
> The learning and cognitive capabilities of a system with such
> automatically learned pattern hierarchies is made even more powerful and
> general by the fact that among the hierarchy of patterns it learns are
> patterns that learn how to best control its own mental behavior in the
> pursuit of its goals in specific contexts.
> Compositional/generalizational hierarchies not only have the extremely
> valuable capability of recognizing similarities between significantly
> different instances of the same high-level pattern, but also the equally
> valuable capability of creating specific instantiations of each of the
> many elements of a high-level pattern, at each of many possible levels
> in the hierarchy, in a context appropriate way.
>
>
>
> My approach combats combinatorial explosion by giving importance
> weights, based on the roles patterns or links between patterns have
> played in satisfying some system goal, and by then using such measures
> of importance to determine what resources such patterns or pattern links
> deserve in future computation.  A vast number of academic and commercial
> projects have shown the general power of reinforcement learning, of
> which this is a form.
>
>
>
> In fact, I don’t know of any hard problems left in AI, and I have been
> looking for them for years.  (If you or any readers on this list know of
> any such problems that exist between, say, Novamente and brain-level
> AGI, please email them to me.

Ed, with respect, this is simply not true.

Or rather, it may well be true that you yourself do not know of any hard
problems left in AI, but this would be a statement about your knowledge,
not a summary of the state of the art.

Some of those problems are barely even at the formulation stage (people
can hardly even articulate what the problem is, exactly) let alone at
the solution stage.  What would be the solution of the grounding
problem?  What would be the solution of the problem of autonomous,
unsupervised learning of concepts?  Can you find proofs that inference
control engines will not show divergent behavior under heavy load (i.e.
will they degrade gracefully when forced to provide answers in real
time)?  Are there solutions to the problems of flexible, abstract
analogy building?  Language learning?  Pragmatics?

And in each of these cases what we need today are concrete reasons to
believe that proposed solutions really will work, rather than just that
someone has suggested a solution, but has no particular reason beyond
blind faith that it really will work.

Even Ben Goertzel, in a recent comment, said something to the effect
that the only good reason to believe that his model is going to function
as advertised is that *when* it is working we will be able to see that
it really does work:

Ben Goertzel wrote:
> This practical design is based on a theory that is fairly complete,
> but not easily verifiable using current technology.  The verification,
> it seems, will come via actually getting the AGI built!

This is a million miles short of a declaration that there are "no hard
problems left in AI".


Sincerely,


Richard Loosemore




> (I am aware I risk being made a fool by
> asking for this, but if so it could be informative.))  There are known
> ways of addressing every single one I have ever heard of.  As Deb Roy,
> one of the MIT Media Lab’s brightest stars, once agreed with me, he saw
> no brick walls, no problems for which we hadn’t promising approaches,
> between us and powerful AI.  At this point the biggest problem (and it
> is non-trivial) is the engineering task of getting all the pieces to
> work together well and efficiently automatically.
>
>
>
> Of course, as we actually get closer to building human-level AGIs we
> probably will discover multiple new problems, but there is no strong
> reason to believe any of them will be show-stoppers.  Whatever the
> problems are, the brain has found a way around them, and our ability to
> unlock the secrets of the brain is growing at an ever-increasing rate.
>
>
>
> Even if the task of creating true human-level AGIs takes 10 to 20
> instead of 5 to 10 years, it is clear that vast advances in AI can be
> made within just five years by creating large systems using the multiple
> pieces of the Novamente, or Novamente-like, approach I advocate, because
> those multiple pieces have proven themselves in multiple successful
> prototypes.
>
>
>
> I really want this field to get the serious funding it deserves soon.
> Since 1970, my senior year at college when I completed a lengthy reading
> list Marvin Minsky gave me, I have been saying that when brain level
> hardware arrives human level AI would shortly follow.  At that time I
> did not see such hardware coming for decades, and perhaps not at all.
> But as my prior paper to you said, hardware roughly in the brain-level
> ball park is already here, and the price of such hardware will keep
> dropping dramatically.
>
>
>
> I am 59 and I want this all to happen soon enough that I can be a part
> of it.
>
>
>
> So I would really appreciate any suggestions you might give me about
> how
> to better communicate the potential value of the approach I support to
> other intelligent people, such as yourself -- short of actually getting
> it to work.
>
>
>
> Ed Porter
>
>
>
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>     -----Original Message-----
>     *From:* Robin Hanson [mailto:[EMAIL PROTECTED]
>     *Sent:* Saturday, November 10, 2007 4:32 PM
>     *To:* agi@v2.listbox.com
>     *Subject:* RE: [agi] What best evidence for fast AI?
>
>     At 01:52 PM 11/10/2007, Edward W. Porter wrote:
>>     I am an evangelist for the fact that the time for powerful AI
>>     could be here very rapidly if there were reasonable funding for
>>     the right people.  There is a small, but increasing number of
>>     people who pretty much understand how to build artificial brains
>>     as powerful as that of humans, not 100% but probably at least 90%
>>     at an architectural level.
>
>     Well we should all assign a chance to a recent dramatic
>     breakthrough, but in the absence of compelling evidence that chance
>     has to be pretty low.
>
>     Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
>     Research Associate, Future of Humanity Institute at Oxford
University
>     Associate Professor of Economics, George Mason University
>     MSN 1D3, Carow Hall, Fairfax VA 22030-4444
>     703-993-2326  FAX: 703-993-2323
>
>
>