Durk,

Absolutely right about the need for what is essentially an imaginative level of 
mind. But wrong in thinking:

"Vision may be classified under 'Narrow' AI"

You seem to be treating this extra "audiovisual perception layer" as a purely 
passive layer. The latest psychology and philosophy recognize that it is in 
fact a level of very active thought and intelligence, and our culture is only 
starting to understand imaginative thought generally.

Just to begin reorienting your thinking here, I suggest you consider how much 
time people spend on audiovisual information (especially TV) versus purely 
symbolic information (books), and allow for how much, and how rapidly, even 
academic thinking is going audiovisual.

Know of anyone trying to give computers that extra layer? I saw some vague 
reference to this recently, of which I have only a confused memory.


  Durk: Although I sympathize with some of Hawkins's general ideas about 
unsupervised learning, his current HTM framework is unimpressive in comparison 
with state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional 
nets, and the promising low-entropy coding variants.
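  For anyone on the list unfamiliar with the technique, here is a minimal 
sketch of a binary RBM trained with one-step contrastive divergence (CD-1) in 
plain NumPy. The toy patterns, network size, and learning rate are my own 
illustrative assumptions, not anything taken from Hinton's papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to the visible units.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # CD-1 approximation to the likelihood gradient.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error

# Toy data: two complementary binary patterns, repeated.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errs = [rbm.cd1_step(data) for _ in range(500)]
```

The reconstruction error should fall as the two hidden units come to code the 
two patterns; that unsupervised feature discovery is the part of Hinton's 
program I find more convincing than HTM.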

  But it should be quite clear that such methods could eventually be very handy 
for AGI. For example, many of you would agree that a reliable, computationally 
affordable solution to Vision is a crucial factor for AGI: much of the world's 
information, even on the internet, is encoded in audiovisual form. 
Extracting (sub)symbolic semantics from these sources would open a world of 
learning data to symbolic systems.

  An audiovisual perception layer generates semantic interpretation on the 
(sub)symbolic level. How could a symbolic engine ever reason about the real 
world without access to such information?

  Vision may be classified under "Narrow" AI, but I reckon that an AGI can 
never understand our physical world without a reliable perceptual system. 
Therefore, perception is essential for any AGI reasoning about physical 
entities!

  Greets, Durk


  On Sun, Mar 30, 2008 at 4:34 PM, Derek Zahn <[EMAIL PROTECTED]> wrote:


    It seems like a reasonable and not uncommon idea that an AI could be built 
as a mostly-hierarchical autoassociative memory.  As you point out, it's not so 
different from Hawkins's ideas.  Neighboring "pixels" will correlate in space 
and time; "features" such as edges should become principal components given 
enough data, and so on.  There is a good deal of work on self-organizing the 
early visual system along these lines.
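    To make the "edges become principal components" point concrete, here is a 
small sketch (my own toy construction, not something from the work referenced 
above) that extracts patches from synthetic step-edge images and computes their 
principal components with plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_edge_image(size=16):
    """Image containing one straight step edge at a random angle."""
    theta = rng.uniform(0, np.pi)
    offset = rng.uniform(-size / 4, size / 4)
    y, x = np.mgrid[0:size, 0:size] - size / 2
    img = (x * np.cos(theta) + y * np.sin(theta) > offset).astype(float)
    return img + rng.normal(0, 0.05, img.shape)  # mild pixel noise

def extract_patches(img, k=6, n=20):
    """Sample n random k-by-k patches, flattened to vectors."""
    size = img.shape[0]
    out = []
    for _ in range(n):
        r, c = rng.integers(0, size - k, size=2)
        out.append(img[r:r + k, c:c + k].ravel())
    return out

# Gather many patches from many edge images.
patches = []
for _ in range(500):
    patches.extend(extract_patches(random_edge_image()))
X = np.array(patches)
X -= X.mean(axis=0)  # center the data before PCA

# Principal components = eigenvectors of the patch covariance matrix.
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order].T  # rows = components, variance-sorted
```

Reshaping the first few rows of `components` to 6x6 should show oriented 
edge-like structure for this synthetic data -- which is exactly the "rather 
obvious" first stage of feature extraction; the hard part starts after it.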
     
    That overall concept doesn't get you very far, though; the trick is to make 
it work past the first few rather obvious feature-extraction stages of sensory 
data, and to account for things like episodic memory, language use, 
goal-directed behavior, and all the other cognitive activity that is not just 
statistical categorization.
     
    I sympathize with your approach and wish you luck.  If you think you have 
something that produces more than Hawkins has with his HTM, please explain it 
with enough precision that we can understand the details.
     


----------------------------------------------------------------------------
          agi | Archives  | Modify Your Subscription  






-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
