On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> I do not think that who taught me to play chess is essential to the
> quality of my chess.
> I could have learned the rules from a book alone.
> Of course those rules are written in a language, but that is not
> important for the quality of my chess.
>
> If a system is in state x, then how x was generated is irrelevant to its
> future behaviour.
> Thus a programmer can hard-code the rules of chess into his AGI, and with
> respect to chess the AGI would then be in the same state as if someone had
> taught it the rules via language.
>
> The social aspect of learning chess is of no relevance.

Sigh.

Ok, let's say I grant you the stipulation that you can hard-code the
rules of chess somehow.  My next question is: in a goal-based AGI
system, what goal are you going to set, and how are you going to set
it?  You've ruled out language, so you're going to have to hard-code
the goal too.  So excuse my use of language:

"Play good chess"

Ohhhhh... that sounds implementable.  Maybe you'll give it a copy of
GNUChess and let it go at it... but I've known *humans* who learnt to
play chess that way, and they get trounced by the first human they play
against.  How are you going to go about making an AGI that can learn
chess in a completely different way from all the known ways of learning
chess?
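To make the objection concrete, here is a minimal sketch (every name in it is hypothetical, invented for illustration, not any real AGI or chess API) of what "hard-coding" that goal actually forces you to decide. Even the most direct translation of "play good chess" into code smuggles in design choices: what counts as reward, and good against *whom*.

```python
# Hypothetical sketch: operationalising the hard-coded goal "play good chess".
# All names here are illustrative assumptions, not a real system's interface.

def reward(outcome: str) -> float:
    """Map a game result to a scalar reward -- already a design choice."""
    return {"win": 1.0, "draw": 0.5, "loss": 0.0}[outcome]

def evaluate_goal(results: list[str]) -> float:
    """'Play good chess' operationalised as mean reward over some set of games.

    Games against whom?  A fixed engine?  Humans?  That choice *is* the
    goal, and nothing in the phrase "play good chess" pins it down.
    """
    return sum(reward(r) for r in results) / len(results)

# An agent trained only against one fixed engine may score well here...
vs_engine = ["win", "win", "draw", "loss"]
print(evaluate_goal(vs_engine))  # 0.625

# ...while the same policy could still lose to the first human it faces,
# because the hard-coded goal never mentioned human opponents at all.
```

The point of the sketch is only that the vagueness doesn't go away when you remove language: it just moves into whichever numbers and opponent distributions the programmer hard-codes instead.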

Or is the AGI supposed to figure that out?

I don't understand why so many people on this list seem to
think AGI = magic.

Trent


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
