Vladimir Nesov wrote:
On Wed, Dec 17, 2008 at 6:03 PM, Ben Goertzel <b...@goertzel.org> wrote:
I happened to use CopyCat in a university AI class I taught years ago, so I
got some experience with it.

It was **great** as a teaching tool, but I wouldn't say it shows anything
about what can or can't work for AGI, really...


CopyCat gives a general feel for "self-assembling" representations and
operations performed at a reflexive level. It captures intuitions about
high-level perception better than any other self-contained description
I've seen (which is rather sad, especially given that CopyCat only
touches on using hand-made shallow multilevel representations, without
inventing them, without learning). Some of the things happening in my
model of high-level representation (as descriptions of what is
happening, not as elements of the model itself) can be naturally
described using CopyCat's lexicon (slippages, temperature, salience,
structural analogy), even though the low-level algorithm is different.


I agree with your sentiments about CopyCat (and its cousins). It is not so much that it delivers specific performance by itself as that it offers a different way to think about how to do such things: an inspiration for a whole class of models. It is certainly part of the inspiration for my system.

It sounded to me like Ben's initial disparaging remarks about CopyCat were mostly the result of a BHDE (a Bad Hair Day Event). It *really* is not that useless.




Richard Loosemore



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com