>> Obviously, I don't agree with this phrasing, but if you replace "a miracle 
>> happens here" with "there is research to be done here, on the level of a 
>> good PhD thesis" then I agree with you.

Hmm.  Looks like my joke Gantt chart doesn't have the wide currency here that I 
thought that it had.  I assume that my previous explanation makes it clear that 
we're in agreement.

>> Second, over the past few years, I've become more and more convinced that 
>> discovery systems, while they do "learn", are not the type of learning that 
>> I think is necessary for AGI. 
> Here we have a fundamental philosophical disagreement.

But neither of us can prove the other wrong and there are numerous people in 
both camps . . . . 

>> There seems to be an issue of definitional disagreement here, as the PLN 
>> inference component within Novamente specifically does handle reasoning by 
>> analogy.  
>> I'm not sure why you feel analogical inference is particularly difficult, as 
>> opposed to e.g. deductive or abductive inference. 

Analogical inference *once you have found the analogy* is handled by PLN and is 
not "particularly difficult, as opposed to e.g. deductive or abductive 
inference."  Finding the analogies is a real trick, is heavily based on 
existing knowledge, and really *is* in the "a miracle happens here" realm.  PLN 
inference just resolves the problem after it is set up and *in this instance* 
is subject to many of the complaints that we all commonly level against narrow 
AI.
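To make the distinction concrete, here's a toy sketch (my own throwaway illustration -- not PLN, not Novamente code, and the domains and relation names are made up) of why the *search* for an analogy, rather than its application, is the expensive part:

```python
from itertools import permutations

# Toy knowledge base: each domain is a set of (relation, x, y) facts.
SOLAR = {("attracts", "sun", "planet"),
         ("more_massive", "sun", "planet"),
         ("revolves_around", "planet", "sun")}

ATOM  = {("attracts", "nucleus", "electron"),
         ("more_massive", "nucleus", "electron"),
         ("revolves_around", "electron", "nucleus")}

def entities(facts):
    return sorted({e for (_, x, y) in facts for e in (x, y)})

def best_mapping(source, target):
    """Brute-force search for the entity mapping that preserves the most
    relations.  The search is over all entity permutations -- factorial in
    the number of entities -- which is why analogy *retrieval* needs heavy
    guidance from existing knowledge to work at scale."""
    src, tgt = entities(source), entities(target)
    best, best_score = None, -1
    for perm in permutations(tgt, len(src)):
        m = dict(zip(src, perm))
        score = sum((r, m[x], m[y]) in target for (r, x, y) in source)
        if score > best_score:
            best, best_score = m, score
    return best, best_score

mapping, score = best_mapping(SOLAR, ATOM)
# mapping -> {'planet': 'electron', 'sun': 'nucleus'}, score -> 3
```

Once the mapping is in hand, checking or applying it (the `score` step) is cheap; it's the search over mappings, pruned in practice by prior knowledge, that blows up on any realistically sized knowledge base.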

>>  scale-invariance of knowledge, ways of determining and exploiting 
>> encapsulation and modularity of knowledge without killing useful "leaky" 
>> abstractions, and "memory" design. (oh my!)
> When we drilled down on these issues before, Mark, they seemed to me to come 
> down to matters of taste regarding software implementation choices, rather 
> than fundamental AGI issues.

To use your phrase -- "Here we have a fundamental philosophical disagreement."  
The fact that you don't see these as fundamental AGI issues looks to me like a 
minor version of the AIXI solution (assuming infinite resources, we can . . . 
).  This *is* a lot of the basis of why I believe Novamente is still very much in 
the throes of "a miracle happens here".  Your low-level stuff is great -- but I 
don't see it as capable of scaling up (and certainly not as effectively as 
possible).  You may well prove me wrong.  You may well find quick ways to 
enhance Novamente so that it isn't a problem.  I just believe that it is a 
show-stopper and currently unaddressed.


THE REST

I think that we're pretty much in agreement on the rest.  I certainly was not addressing 
you when I said "Thus, those blindly insisting that Novamente is the 
be-all-and-end-all and that all other approaches should be abandoned are not 
doing any of us a service.  I want to see Novamente go forward but we shouldn't 
put all of our eggs in one basket."  I haven't seen you say that (for others) 
while I do believe that *YOU* should be putting all of your eggs in the 
Novamente basket because that's the only way to make progress.  My objection is 
to the "me-too" fan-boys that are stepping on other alternatives because "the 
problem is obviously solved except for a bit of work and some minor details".




  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 10:54 AM
  Subject: Re: [agi] What best evidence for fast AI?



  Hi, 



    First, Novamente is a discovery system (and a *really* good one).  The
    other parts of its design, however, are not fully fleshed out and there are
    huge "a miracle happens here" holes. 

  Obviously, I don't agree with this phrasing, but if you replace "a miracle
  happens here" with "there is research to be done here, on the level of a
  good PhD thesis" then I agree with you.



    Second, over the past few years, I've become more and more convinced that
    discovery systems, while they do "learn", are not the type of learning that
    I think is necessary for AGI. 

  Here we have a fundamental philosophical disagreement.

   
     Novamente can certainly tease out patterns 
    from large quantities of data but it isn't fully designed (at this point) to
    do anything like reasoning by analogy, for example. 

  There seems to be an issue of definitional disagreement here, as the PLN 
  inference component within Novamente specifically does handle reasoning by
  analogy.  

  I'm not sure why you feel analogical inference is particularly difficult,
  as opposed to e.g. deductive or abductive inference. 

   
     Ben does have some
    plans for this but, in my opinion, he is still in the realm of "a 
    miracle happens here" on this subject.

  Actually, IMO, this is **not** one of the areas in need of most future
  research within the Novamente framework.  It's something that is pretty
  straightforward within Novamente.
   


    Third, and I've said this before, there are some fundamental engineering 
    features (scale-invariance of knowledge, ways of determining and exploiting
    encapsulation and modularity of knowledge without killing useful "leaky"
    abstractions, etc.) that aren't implemented yet in Novamente that really 
    need to be implemented much earlier rather than later.  Also, I have a lot
    of questions about Novamente's "memory" design.

  When we drilled down on these issues before, Mark, they seemed to me to come
  down to matters of taste regarding software implementation choices, rather
  than fundamental AGI issues.
   


    In particular, I think that Novamente's foray into learning in a virtual
    world is either going to be incredibly useful or a rather large bust because
    it is precisely the type of learning that Novamente hasn't specialized in 
    before this point.

  Well, our initial foray involves using MOSES to do embodied procedure learning
  based on a combination of reinforcement signals, imitative cues and user
  corrections.  This is actually quite similar to stuff we did before, using
  MOSES to learn to play fetch and tag in a 2D world, for example.

  What is different here, from these prior learning experiments, is that we're
  using MOSES in the context of an agent with a memory (an AtomTable), which
  does a little bit of reasoning based on this memory to guide its learning;
  and we have a mini version of the Novamente goals, feelings and action
  selection framework in there. 

  So, while it's true that we have not done any commercial projects involving
  embodied learning, we've done a few academic-style prototype projects of
  this nature before.





    A number of people on this list seem to regard Ben as almost a deity or a
    prophet.  Ben is intelligent, creative, has a solid background, and gets to
    work hard in the field so he looks a lot better than most everyone else.
    It also means that he has polished his ideas and eliminated the most obvious
    problems.  This does not, however, mean that he has a provably correct path.

  Yes, I do not claim to have a provably correct path. 

  I really think that is an overly strict criterion.  

  For sure, to create an AGI using Novamente, there is further research to be
  done, not just engineering.  



    Novamente may lead to AGI (with *a lot* more hard work). 

  Yes, it will take a lot more hard work.

  My current estimate is about 6 years of full-time work for a dedicated team
  of 10-15 really great, appropriately trained AI engineers, who would be
  doing a combination of research and software engineering and teaching and
  testing.

  This estimate could be an underestimate or an overestimate.  It sure ain't
  off by an order of magnitude though.

   
     Personally, as
    I've said, I believe that it is *a path* but one which will be overtaken and
    passed by a shorter, easier path (just as I believe that brain emulation is
    a path that will be overtaken and passed by a shorter, easier path). 


  This might be the case.  I just haven't seen that shorter, easier path clearly
  articulated anywhere.  Frankly, I have spent some time looking for it, and
  haven't found it.


    But this has gotten rather long so I should sum up . . . . Novamente has
    great promise -- but part of the reason why it has such great promise is 
    because so much of it *hasn't* been fully determined yet.  The design is
    still open enough that it can be stretched to fit many things. 

  This flexibility is intentional, and I consider it a strength of the design. 

  We still have a lot to learn, so I took great pains to construct a design with
  the flexibility to be adapted based on learning done in the course of 
  developing and teaching the system.
   
     The problem
    is that stretching it in some directions may/probably will make it less
    adept at other things (jack of all trades/master of none) 

  Well, the human brain is arguably a jack of all trades and master of none. 

  So I think this is okay for a first-pass AGI system.

   
    and it may well be 
    (and this is my primary complaint) that it is *so* general that, while it
    could serve as the basis of an AGI, it is far more complicated than
    necessary to do so (just as a bird's biology is not necessary for flight). 

  I am pretty sure that this is the case.

  However, I think that the simpler, more elegant design may become apparent
  only AFTER we have achieved a pretty powerful level of AGI with the current,
  more flexible design.
   

    Thus, those blindly insisting that Novamente is the be-all-and-end-all and 
    that all other approaches should be abandoned are not doing any of us a
    service.  I want to see Novamente go forward but we shouldn't put all of our
    eggs in one basket.


  Well Mark, even I am not suggesting that the world should put all of its AGI 
  resources into developing Novamente and should ignore all other approaches.

  If I had $100M I would put it all into developing Novamente.

  If I had $1B I would not, I would fund a bunch of alternate approaches too ;-) 

  As I have neither, I'm gonna get back to work now...

  -- Ben G



------------------------------------------------------------------------------
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;
