Although I sympathize with some of Hawkins's general ideas about
unsupervised learning, his current HTM framework is unimpressive in
comparison with state-of-the-art techniques such as Hinton's RBMs, LeCun's
convolutional nets and the promising low-entropy coding variants.

But it should be quite clear that such methods could eventually be very
handy for AGI. For example, many of you would agree that a reliable,
computationally affordable solution to Vision is a crucial factor for AGI:
much of the world's information, even on the internet, is encoded in
audiovisual form. Extracting (sub)symbolic semantics from these
sources would open up a world of learning data to symbolic systems.

An audiovisual perception layer generates semantic interpretations at the
(sub)symbolic level. How could a symbolic engine ever reason about the real
world without access to such information?

Vision may be classified under "Narrow" AI, but I reckon that an AGI can
never understand our physical world without a reliable perceptual system.
Therefore, perception is essential for any AGI reasoning about physical
entities!

Greets, Durk

On Sun, Mar 30, 2008 at 4:34 PM, Derek Zahn <[EMAIL PROTECTED]> wrote:

>
> It seems like a reasonable and not uncommon idea that an AI could be built
> as a mostly-hierarchical autoassociative memory.  As you point out, it's not
> so different from Hawkins's ideas.  Neighboring "pixels" will correlate in
> space and time; "features" such as edges should become principal components
> given enough data, and so on.  There is a bunch of such work on
> self-organizing the early visual system like this.
>
> That overall concept doesn't get you very far though; the trick is to make
> it work past the first few rather obvious feature extraction stages of
> sensory data, and to account for things like episodic memory, language use,
> goal-directed behavior, and all other cognitive activity that is not just
> statistical categorization.
>
> I sympathize with your approach and wish you luck.  If you think you have
> something that produces more than Hawkins has with his HTM, please explain it
> with enough precision that we can understand the details.
>
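
[Editor's note: as a minimal, hypothetical illustration of the first-stage
feature extraction mentioned above (principal components of pixel patches),
here is a sketch assuming scikit-learn is available. The bundled sample
image, the 8x8 patch size and the 16 components are arbitrary stand-ins,
not anything proposed in this thread.]

from sklearn.datasets import load_sample_image
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

# Stand-in for a large collection of natural scenes: one bundled sample
# image, converted to grayscale by averaging the color channels.
image = load_sample_image("china.jpg").mean(axis=2)

# Sample many small patches ("receptive fields") from the image.
patches = extract_patches_2d(image, (8, 8), max_patches=20000, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)  # remove per-patch brightness

# The leading principal components of the patch ensemble are the
# first-stage "features" learned purely from pixel statistics; the
# later, harder stages of the hierarchy are what the discussion is about.
pca = PCA(n_components=16)
pca.fit(X)
components = pca.components_.reshape(-1, 8, 8)
print(components.shape)  # (16, 8, 8) basis patches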
