>> I'm more interested at this stage in analogies like
>> -- between seeking food and seeking understanding
>> -- between getting an object out of a hole and getting an object out of a 
>> pocket, or a guarded room
>> Why would one need to introduce advanced scientific concepts to an 
>> early-stage AGI?  I don't get it... 

:-)  A bit disingenuous there, Ben.  Obviously you start with the simple and 
move on to the complex (though I suspect that the first analogy you cite is 
rather more complex than you might think) -- but taking an approach so 
simplistic that it can't grow is just the "narrow AI" approach in other clothing.
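
(And to unpack the "think Boltzmann" hint in my research project 1, quoted 
further down -- this is only one reading of the hint, but it's the one I'd 
start with: the same exponential weighting shows up in all three domains.

  Boltzmann machine (neural networks):        P(state) ∝ exp(-E(state)/T)
  Arrhenius rate law (enzyme kinetics):       k ∝ exp(-Ea/(R*T))
  Isothermal self-gravitating gas (galaxies): f(E) ∝ exp(-E/sigma^2)

An AGI with no handle at all on that kind of shared structure across domains 
is not, I'd argue, going to get very far.)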

>> Hmmm....  I guess I didn't understand what you meant.
>> What I thought you meant was, if a user asked "I'm a small farmer in New 
>> Zealand.  Tell me about horses" then the system would be able to disburse 
>> its relevant knowledge about horses, filtering out the irrelevant stuff.   
>> What did you mean, exactly?

That's a good, simple starting case.  But how do you decide how much knowledge 
to disburse?  How do you know what is irrelevant?  How much do your answers 
differ between a small farmer in New Zealand, a rodeo rider in the West, a 
veterinarian in Pennsylvania, a child in Washington, and a bio-mechanician 
studying gait?  And horse is actually a *really* simple concept, since it 
refers to a very specific type of physical object.
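
To be concrete about what the naive version of this looks like, here is a toy 
of my own (nothing to do with the NM design -- the fact list, the asker 
profiles, and the dispense() function are all invented for illustration): 
score each stored fact against a hand-written profile of the asker and return 
the top few.

# Toy illustration only.  Everything hard -- building the profiles, knowing
# which facets a fact touches, handling askers nobody anticipated -- is
# assumed away by construction.

HORSE_FACTS = {
    "Horses need roughly 1-2% of body weight in forage per day": {"husbandry"},
    "Walk, trot and canter have distinct footfall patterns":     {"gait"},
    "Colic is a leading cause of death in domestic horses":      {"husbandry", "veterinary"},
    "Quarter horses dominate Western rodeo events":              {"rodeo"},
    "Horses sleep standing up using a stay apparatus":           {"anatomy", "child"},
}

PROFILES = {
    "NZ small farmer":      {"husbandry"},
    "Western rodeo rider":  {"rodeo", "husbandry"},
    "PA veterinarian":      {"veterinary", "anatomy"},
    "child in Washington":  {"child"},
    "bio-mechanician":      {"gait", "anatomy"},
}

def dispense(asker, k=2):
    # Rank facts by overlap between the asker's profile and the fact's facets.
    profile = PROFILES[asker]
    scored = [(len(profile & facets), fact) for fact, facets in HORSE_FACTS.items()]
    return [fact for score, fact in sorted(scored, reverse=True)[:k] if score > 0]

for who in PROFILES:
    print(who, "->", dispense(who))

It "works" only because I hand-wrote both the facts and the profiles.  
Deciding which facets a piece of knowledge touches, and what a given asker 
actually needs, is the whole problem -- and it is precisely what this sketch 
assumes away.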

Besides, are you really claiming that you'll be able to do this next year?  
Sorry, but that is just plain, unadulterated BS.  If you can do that, you are 
light-years further along than . . . . 

>> There are specific algorithms proposed, in the NM book, for doing map 
>> encapsulation.  You may not believe they will work for the task, but still, 
>> it's not fair to use the label "a miracle happens here" to describe a 
>> description of specific algorithms applied to a specific data structure.  

I guess the jury will have to stay out until you publish the algorithms.  What 
I've seen in the past has been too small and too simple, and it won't scale to 
what is likely to be necessary.
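
To be concrete about the scale worry, here is my own toy rendering of the task 
(not Ben's algorithms, which I haven't seen published; the snapshots, names, 
and thresholds are invented): watch which atoms tend to be active together, 
and when a set co-occurs often enough, reify it as a new node.

# Toy rendering of "map encapsulation": find sets of atoms that are
# repeatedly co-active and create a new node standing for each such set.
# Brute force over small subsets only -- which is the point.

from itertools import combinations
from collections import Counter

# Hypothetical activity snapshots: which atoms were active at each moment.
snapshots = [
    {"grass", "chew", "swallow"},
    {"grass", "chew", "swallow", "walk"},
    {"walk", "look"},
    {"grass", "chew", "swallow"},
]

def encapsulate_maps(snapshots, max_size=3, min_count=2):
    counts = Counter()
    for snap in snapshots:
        for size in range(2, max_size + 1):
            for subset in combinations(sorted(snap), size):
                counts[subset] += 1
    # Each sufficiently frequent co-active set becomes a new "map node".
    return {f"MAP:{'+'.join(s)}": s for s, c in counts.items() if c >= min_count}

print(encapsulate_maps(snapshots))

Even this stripped-down version enumerates every subset up to max_size in 
every snapshot; the candidate space blows up combinatorially as the atom table 
and the map sizes grow, which is exactly the scaling issue I'm pointing at.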

>> I think it has medium-sized gaps, not huge ones.  I have not filled all 
>> these gaps because of lack of time -- implementing stuff needs to be 
>> balanced with finalizing design details of stuff that won't be implemented 
>> for a while anyway due to limited resources. 

:-)  You have more than enough design experience to know that medium-sized gaps 
can frequently turn huge once you turn your attention to them.  Who are you 
snowing here?



  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, November 12, 2007 12:55 PM
  Subject: Re: [agi] What best evidence for fast AI?



  Hi,
   

    >> Research project 1.  How do you find analogies between neural networks, 
enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)? 
    > That is a question most humans couldn't answer, and is only suitable for 
testing an AGI that is already very advanced.

      
    In your opinion.  I don't believe that an AGI is going to get far at all 
without having at least a partial handle on this.

  I'm more interested at this stage in analogies like

  -- between seeking food and seeking understanding
  -- between getting an object out of a hole and getting an object out of a 
pocket, or a guarded room

  etc.

  Why would one need to introduce advanced scientific concepts to an 
early-stage AGI?  I don't get it... 

   

    >> Research project 2.  How do you recognize and package up all of the data 
that represents horse and expose only that which is useful at a given time?  
    > That is covered quite adequately in the NM design, IMO.  We are actually 
doing a commercial project right now (w/ delivery in 2008) that will showcase 
our ability to solve this problem.  Details are confidential unfortunately, due 
to the customer's preference. 

    I'm afraid that I have to snort at this.  Either you didn't understand the 
full implications of what I'm saying or you're snowing me (ok, I'll give you a 
.1% chance of having it).

  Hmmm....  I guess I didn't understand what you meant.

  What I thought you meant was, if a user asked "I'm a small farmer in New 
Zealand.  Tell me about horses" then the system would be able to disburse its 
relevant knowledge about horses, filtering out the irrelevant stuff.   

  What did you mean, exactly?

   


    >> That is what is called "map encapsulation" in the Novamente design.

    Yes, yes, I saw it in the design . . . . "a miracle happens here".  Which, 
granted, is better than not realizing that the area exists . . . . but 
still . . . .

  There are specific algorithms proposed, in the NM book, for doing map 
encapsulation.  You may not believe they will work for the task, but still, 
it's not fair to use the label "a miracle happens here" to describe a 
description of specific algorithms applied to a specific data structure.  

   

    >> I do not think the design has any huge gaps.  But much further R&D work 
is required, and I agree there may be a simpler approach; but I am not 
convinced that you have one. 

    These are two *very* different issues (with a really spurious statement 
tacked onto the end).

    Of course you don't think the design has any gaps -- you would have filled 
them if you saw them.

  I think it has medium-sized gaps, not huge ones.  I have not filled all these 
gaps because of lack of time -- implementing stuff needs to be balanced with 
finalizing design details of stuff that won't be implemented for a while anyway 
due to limited resources. 

   
  -- Ben


------------------------------------------------------------------------------
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;
