Edward W. Porter wrote:
Robin,

I would be very interested in any comments you might have about the paper I emailed you stating my reasons for believing that powerful artificial general intelligence (AGI) could be built in 5 to 10 years if the right people received the right funding. Please feel free to point out what you perceive to be its faults as well as its strengths. I want very much to learn how to better convey my point to people like you.

I understand why you would say, in the absence of compelling evidence, the probability of what I have predicted seems low. If I had not done thousands of hours of reading and thinking about the approach I favor and the evidence from brain science and AI research that supports it, I, myself, would think its chances low. Because AI is such a large field -- and because, up until recently, trying to build any successful whole-mind machine would have been a dead-end -- even most of the leaders in AI have not done enough reading and thinking in this particular area to understand it.

Your response indicates I have failed to explain the extent to which the Novamente, or Novamente-like, approach I favor provides reasonable solutions to major problems in AI. (I cite Novamente not only because it is the best approach I currently know of, but also because a fair amount of information on it is publicly available, such as at http://www.novamente.net/papers/ and in books written by Ben Goertzel that can be found at Amazon.com.)

For example, it makes the problem of common-sense reasoning tractable because, for the first time, it will have hardware with the power to represent and compute over world knowledge, and because it focuses initially on guiding such machines in the important task of learning that knowledge automatically. It provides truly general intelligence through an architecture that can learn patterns, probabilities, and proper inferencing of virtually any type. Its basic learning architecture records a succession of input/pattern-activation states, automatically finds patterns in those states, then finds patterns and generalizations composed of those patterns, building a multi-level compositional/generalizational hierarchy, all while recording the frequencies of those patterns and the contexts in which they occur. The Serre paper I cited in my prior long message to you demonstrates the amazing potential of such self-learning multi-level compositional/generalizational hierarchies.
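
To make the flavor of this concrete, here is a toy sketch -- my own illustrative Python, not Novamente's actual code, with all names invented -- of the basic bookkeeping: record a stream of input states, repeatedly promote the most frequent adjacent pair of patterns to a new higher-level pattern, and keep a frequency count for each pattern learned (essentially byte-pair-encoding-style chunking):

    # Toy sketch of bottom-up learning of a compositional pattern
    # hierarchy.  Illustrative only; names and structure are my own.
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    def learn_hierarchy(stream, levels=3):
        """Greedily build `levels` layers of composite patterns."""
        hierarchy = []
        layer = list(stream)
        for _ in range(levels):
            counts = Counter(pairwise(layer))   # frequency of adjacent pairs
            if not counts:
                break
            best, freq = counts.most_common(1)[0]
            hierarchy.append((best, freq))      # new higher-level pattern
            merged, i = [], 0
            while i < len(layer):               # rewrite the stream in
                if tuple(layer[i:i+2]) == best: # terms of the new pattern
                    merged.append(best)
                    i += 2
                else:
                    merged.append(layer[i])
                    i += 1
            layer = merged
        return hierarchy

    print(learn_hierarchy("abcabcabdabc"))

A real system would of course track contexts as well as raw frequencies, and learn many patterns per level rather than one, but the compositional principle is the same.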

The learning and cognitive capabilities of a system with such automatically learned pattern hierarchies are made even more powerful and general by the fact that among the patterns it learns are patterns for how best to control its own mental behavior in pursuit of its goals in specific contexts. Compositional/generalizational hierarchies not only have the extremely valuable capability of recognizing similarities between significantly different instances of the same high-level pattern, but also the equally valuable capability of creating specific instantiations of each of the many elements of a high-level pattern, at each of many possible levels in the hierarchy, in a context-appropriate way.
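
Here is an equally toy sketch of that second capability -- top-down, context-appropriate instantiation of a high-level pattern. Again this is my own illustrative Python, not anyone's actual code; the schema, statistics, and names are invented for the example:

    # Toy sketch: fill each slot of a high-level pattern (schema) with
    # whichever lower-level pattern is most probable in this context.
    GREETING = ["salutation", "name"]  # a high-level pattern, two slots

    # P(filler | slot, context), as if learned from experience
    SLOT_STATS = {
        ("salutation", "formal"):   {"Dear": 0.9, "Hi": 0.1},
        ("salutation", "informal"): {"Dear": 0.2, "Hi": 0.8},
        ("name", "formal"):         {"Dr. Hanson": 0.8, "Robin": 0.2},
        ("name", "informal"):       {"Dr. Hanson": 0.1, "Robin": 0.9},
    }

    def instantiate(schema, context):
        """Pick each slot's most probable filler for this context."""
        return [max(SLOT_STATS[(slot, context)],
                    key=SLOT_STATS[(slot, context)].get)
                for slot in schema]

    print(instantiate(GREETING, "formal"))    # ['Dear', 'Dr. Hanson']
    print(instantiate(GREETING, "informal"))  # ['Hi', 'Robin']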

My approach combats combinatorial explosion by assigning importance weights to patterns, and to links between patterns, based on the roles they have played in satisfying system goals, and by then using those measures of importance to determine what computational resources such patterns or links deserve in future computation. A vast number of academic and commercial projects have shown the general power of reinforcement learning, of which this is a form.
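
A minimal sketch of what I mean, in my own illustrative Python (the update rule here is a generic reinforcement-style running average, not any particular system's actual formula):

    # Toy sketch of importance-weighted resource allocation.  Patterns
    # that help satisfy goals gain importance; compute is then divided
    # in proportion to importance, starving unpromising branches.

    def update_importance(importance, pattern, reward, rate=0.1):
        """Nudge a pattern's importance toward the reward it earned."""
        importance[pattern] += rate * (reward - importance[pattern])

    def allocate_budget(importance, total_cycles):
        """Split a fixed compute budget in proportion to importance."""
        total = sum(importance.values()) or 1.0
        return {p: total_cycles * w / total for p, w in importance.items()}

    importance = {"pattern_A": 0.5, "pattern_B": 0.5}
    update_importance(importance, "pattern_A", reward=1.0)  # A helped a goal
    update_importance(importance, "pattern_B", reward=0.0)  # B did not
    print(allocate_budget(importance, total_cycles=1000))
    # pattern_A now receives the larger share of future computation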

In fact, I don't know of any hard problems left in AI, and I have been looking for them for years. (If you or any readers on this list know of any such problems that exist between, say, Novamente and brain-level AGI, please email them to me. I am aware I risk being made a fool by asking for this, but it could be informative.)

Ed, with respect, this is simply not true.

Or rather, it may well be true that you yourself do not know of any hard problems left in AI, but this would be a statement about your knowledge, not a summary of the state of the art.

Some of those problems are barely even at the formulation stage (people can hardly even articulate what the problem is, exactly) let alone at the solution stage. What would be the solution of the grounding problem? What would be the solution of the problem of autonomous, unsupervised learning of concepts? Can you find proofs that inference control engines will not show divergent behavior under heavy load (i.e. will they degrade gracefully when forced to provide answers in real time)? Are there solutions to the problems of flexible, abstract analogy building? Language learning? Pragmatics?
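
To make just one of these concrete: writing an anytime inference loop that returns *some* answer by a deadline is easy -- a toy sketch in Python (purely illustrative, not any particular engine's design) follows. What nobody has shown is a proof that the quality of such answers degrades gracefully, rather than collapsing, as load grows:

    # Toy anytime best-first inference loop, illustrative only.
    import time, heapq

    def anytime_infer(initial, expand, score, deadline_s):
        """Return the best conclusion found before the deadline.

        `expand(state)` yields successor states; `score(state)` rates
        them.  Both are supplied by the caller.
        """
        best = initial
        frontier = [(-score(initial), 0, initial)]
        counter = 1  # tie-breaker so states are never compared directly
        start = time.monotonic()
        while frontier and time.monotonic() - start < deadline_s:
            _, _, state = heapq.heappop(frontier)
            if score(state) > score(best):
                best = state
            for nxt in expand(state):
                heapq.heappush(frontier, (-score(nxt), counter, nxt))
                counter += 1
        return best  # quality under heavy load is the open question

    # e.g.: anytime_infer(0, lambda s: [s + 1, s + 2],
    #                     lambda s: -abs(s - 7), deadline_s=0.01)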

And in each of these cases, what we need today are concrete reasons to believe that proposed solutions really will work -- not just that someone has suggested a solution with no particular reason, beyond blind faith, to think it will work.

Even Ben Goertzel, in a recent comment, said something to the effect that the only good reason to believe that his model is going to function as advertised is that *when* it is working we will be able to see that it really does work:

Ben Goertzel wrote:
This practical design is based on a theory that is fairly complete, but not
easily verifiable using current technology.  The verification, it seems, will
come via actually getting the AGI built!

This is a million miles short of a declaration that there are "no hard problems left in AI".


Sincerely,


Richard Loosemore




Edward W. Porter wrote (continuing):

There are known ways of addressing every single hard problem I have ever heard of. As Deb Roy, one of the MIT Media Lab's brightest stars, once agreed with me, he saw no brick walls -- no problems for which we lacked promising approaches -- between us and powerful AI. At this point the biggest problem (and it is non-trivial) is the engineering task of getting all the pieces to work together well, efficiently, and automatically. Of course, as we actually get closer to building human-level AGIs we will probably discover new problems, but there is no strong reason to believe any of them will be show-stoppers. Whatever the problems are, the brain has found a way around them, and our ability to unlock the secrets of the brain is growing at an ever-increasing rate.

Even if the task of creating true human-level AGIs takes 10 to 20 years instead of 5 to 10, it is clear that vast advances in AI can be made within just five years by creating large systems from the pieces of the Novamente, or Novamente-like, approach I advocate, because those pieces have already proven themselves in multiple successful prototypes.

I really want this field to get the serious funding it deserves, and soon. Since 1970, my senior year in college, when I completed a lengthy reading list Marvin Minsky gave me, I have been saying that when brain-level hardware arrived, human-level AI would shortly follow. At that time I did not see such hardware coming for decades, and perhaps not at all. But as my prior paper to you said, hardware roughly in the brain-level ballpark is already here, and the price of such hardware will keep dropping dramatically.
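
The back-of-envelope arithmetic behind that claim, using the commonly cited rough constants (each uncertain by an order of magnitude or more), looks like this:

    # Rough estimate of "brain-level" compute; constants are the usual
    # ballpark figures, each uncertain by an order of magnitude or more.
    synapses  = 1e11 * 1e4        # ~1e11 neurons x ~1e4 synapses each
    rate_hz   = 10                # rough average firing rate, ~1-100 Hz
    brain_ops = synapses * rate_hz
    print(f"brain: ~{brain_ops:.0e} synaptic ops/sec")  # ~1e16

    # The fastest supercomputer of late 2007 (IBM Blue Gene/L)
    # sustained roughly 5e14 flops on Linpack.
    print(f"gap:   ~{brain_ops / 5e14:.0f}x")           # within ~2 orders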

I am 59 and I want this all to happen soon enough that I can be a part of it.

So I would really appreciate any suggestions you might give me about how to better communicate the potential value of the approach I support to other intelligent people, such as yourself -- short of actually getting it to work.

Ed Porter

(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

    -----Original Message-----
    *From:* Robin Hanson [mailto:[EMAIL PROTECTED]
    *Sent:* Saturday, November 10, 2007 4:32 PM
    *To:* agi@v2.listbox.com
    *Subject:* RE: [agi] What best evidence for fast AI?

    At 01:52 PM 11/10/2007, Edward W. Porter wrote:
    I am an evangelist for the fact that the time for powerful AI
    could be here very rapidly if there were reasonable funding for
    the right people.  There is a small but increasing number of
    people who pretty much understand how to build artificial brains
    as powerful as that of humans -- not 100%, but probably at least
    90% at an architectural level.

    Well, we should all assign a chance to a recent dramatic
    breakthrough, but in the absence of compelling evidence that
    chance has to be pretty low.
    Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu
    Research Associate, Future of Humanity Institute at Oxford University
    Associate Professor of Economics, George Mason University
    MSN 1D3, Carow Hall, Fairfax VA 22030-4444
    703-993-2326  FAX: 703-993-2323