No Mike. AGI must be able to discover regularities of all kinds in all
domains.

Must it be able to *discover* regularities or must it be able to be taught and subsequently effectively use regularities? I would argue the latter. (Can we get a show of hands of those who believe the former? I think that it's a small minority but . . . )

If you can find a single domain where your AGI fails, it is no AGI.

Failure is an interesting evaluation criterion. Ben has made it quite clear that advanced science is a domain that stupid (or even merely non-exceptional) humans fail at. Does that mean that most humans aren't general intelligences?

Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans solve problems in this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.

Chess is a good milestone precisely because of its difficulty. The reason humans learn chess so easily (and that is a relative term) is that they already have an excellent spatial domain model in place, a ton of strategy knowledge carried over from other learned domains, and the immense array of mental tools that we're going to need to bootstrap an AGI. Chess as a GI task (or via a GI approach) is emphatically NOT easily programmable.


----- Original Message ----- From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Friday, October 24, 2008 4:09 AM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



No Mike. AGI must be able to discover regularities of all kinds in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.

Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans solve problems in this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.

Of course it is not sufficient for AGI. But before you think about
sufficient features, necessary abilities are good milestones for verifying
that your roadmap towards AGI will not run into a dead end after a long
stretch of vague hope that future embodied experience will solve the
problems you cannot solve today.

- Matthias



Mike wrote
P.S. Matthias seems to be cheerfully cutting his own throat here. The idea
of a single-domain AGI or pre-AGI is a contradiction in terms every which
way - not just in terms of domains/subjects or fields, but also sensory
domains.




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


