Is "very complicated" a good reason to have one cognitive engine?  Why not have 
many, even use many on the same problem, and then accept the best answer?  The 
best answer for a single problem might change depending on issues outside the 
actual problem area.  Why put all the eggs in one basket?  Is deduction the 
appropriate metaphor for all questions and all thinking?  Do you use only 
logical analysis, or fuzzy logic, for everything you think about?
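The idea of running several engines on one problem and keeping the best answer can be sketched in a few lines.  This is only a toy illustration; the engine names, the result format, and the cost-based scoring rule are my own assumptions, not part of any actual AGI design:

```python
# Toy sketch: run several independent "engines" on the same problem
# and keep whichever answer scores best under the current criterion.

def logical_engine(problem):
    # Exhaustive, exact approach: correct but expensive.
    return {"answer": sorted(problem), "cost": 10}

def fuzzy_engine(problem):
    # Cheap heuristic: may be good enough for many situations.
    return {"answer": list(problem), "cost": 1}

def best_answer(problem, engines, score):
    # Run every engine, then rank the results by the supplied score.
    results = [engine(problem) for engine in engines]
    return max(results, key=score)

# "Best" can change with context: here we prefer the cheapest answer,
# but another situation might weight correctness instead.
result = best_answer([3, 1, 2], [logical_engine, fuzzy_engine],
                     score=lambda r: -r["cost"])
```

The point is that the scoring function, not the engine, decides which answer wins, so the same architecture can prefer different engines as circumstances change.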

"It seems to be a good thing if we could design and write it just once and 
solve the whole AGI problem".

I would like to be a millionaire right now, but that isn't likely to happen.  It 
seems quite obvious to me that ANY one algorithmic engine or data 
representation can only solve a small subset of what an AGI should be able to 
do.  Suppose you start with an architecture that can include many of each, and 
it turns out that just one does an adequate job for everything; then just use 
the one that works.  If another potential algorithm doesn't work, just don't 
use it.  But if an architecture is designed so that it can only EVER have one 
algorithm or data representation, and it just so happens you made a mistake, 
then the game is over.

Many times people on this list have stated that no one knows for sure which 
exact direction will produce an AGI.  Many different techniques have failed in 
the past.  Why set yourself up at the start of the project to have the least 
chance of success?

"But what if the AGI faces a *new*, unseen problem? "

Why not have a module that handles "new and unseen" problems while having 
others that work well in domains you already know?  It might not be the most 
efficient at first, but it could be made to handle unknown problems.  The AGI 
could then build a better or more efficient module if the new problem warranted 
the effort.  In some ways, humans seem to do this now.
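That routing idea can be sketched as a simple dispatcher.  The module names and the registry below are invented for illustration; the only point is that known domains get an efficient specialist while anything unseen still gets handled, just less efficiently:

```python
# Toy sketch: dispatch known domains to specialised modules and fall
# back to a slower general-purpose module for anything unseen.

def arithmetic_module(task):
    # Efficient specialist for a domain we already understand.
    a, b = task
    return a + b

def general_module(task):
    # Generic fallback: stands in for a slow, broad search procedure.
    return ("solved-generally", task)

# Registry of specialists; anything not listed falls through.
specialists = {"arithmetic": arithmetic_module}

def solve(domain, task):
    module = specialists.get(domain, general_module)
    return module(task)

known = solve("arithmetic", (2, 3))      # handled by the specialist
unseen = solve("mystery-domain", "x")    # handled by the fallback
```

If a "mystery" domain keeps recurring, nothing stops the system from adding a purpose-built specialist for it to the registry later, which is the self-improvement step described above.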

"but when you give the vision module a long list of requirements -- reading 
fonts, playing boardgames, understanding drawings, etc -- you may find that the 
vision module needs to be more and more *general*, to the point that you're 
almost making the general cognitive engine again. "

If the vision module got more complicated, why not have the AGI split off the 
parts that make sense into a higher-level module and leave the lowest level in 
the format that best handles it?  If 10, 20 or more modules work on vision in 
different ways, then why not let it be so?
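Such a split might look like one shared low-level stage feeding several independent higher-level interpreters.  Again, every name here is hypothetical; the sketch only shows that font reading and drawing understanding can share low-level vision without being one monolithic module:

```python
# Toy sketch: one low-level vision stage, many higher-level
# interpreters, each working on the shared features in its own way.

def low_level_vision(image):
    # Stays in whatever representation suits raw pixels best;
    # here "features" is just a count of marked cells.
    return {"edges": image.count("#")}

def font_module(features):
    # A reading-oriented interpretation of the same features.
    return "glyph" if features["edges"] >= 4 else "blank"

def drawing_module(features):
    # A drawing-oriented interpretation of the same features.
    return {"strokes": features["edges"]}

def perceive(image, interpreters):
    features = low_level_vision(image)
    return {name, } if False else {name: fn(features)
                                   for name, fn in interpreters.items()}

result = perceive("#..#\n#..#",
                  {"font": font_module, "drawing": drawing_module})
```

Adding board-game or diagram understanding would then mean adding another interpreter, not making the low-level module more general.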

How can one monolithic program accomplish all that humans can do now and all 
that we will do in the future?  Our brains (at the highest level) don't seem to 
be monolithic either, so what evidence is there (biological or otherwise) that 
cognition can be had with one algorithm or data representation?

If there were a single quick method of creating an AGI, wouldn't someone have 
found it by now?

-- David Clark
  ----- Original Message ----- 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Thursday, March 15, 2007 6:38 AM
  Subject: Re: [agi] Logical representation

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
