Excellent question!

    Legg's paper does talk about an agent being able to "exploit any 
regularities in the environment"; about simple agents doing "very basic 
learning by building up a table of observation and action pairs and keeping 
statistics on the rewards that follow"; and it notes that "It is immediately 
clear that many environments, both complex and very simple, will have at least 
some structure that such an agent would take advantage of."
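
    Just to make concrete what that kind of agent looks like, here's a minimal 
sketch (my own illustration, not code from the paper) of a table-building 
learner that keeps reward statistics for (observation, action) pairs and 
exploits whatever regularities those statistics happen to pick up:

import random
from collections import defaultdict

class TabularAgent:
    """Builds a table of (observation, action) pairs with reward statistics."""
    def __init__(self, actions):
        self.actions = list(actions)
        self.total = defaultdict(float)   # (obs, action) -> summed reward
        self.count = defaultdict(int)     # (obs, action) -> number of trials

    def act(self, obs):
        # Try each action at least once for a given observation, then
        # greedily pick the action with the best average reward so far.
        untried = [a for a in self.actions if self.count[(obs, a)] == 0]
        if untried:
            return random.choice(untried)
        return max(self.actions,
                   key=lambda a: self.total[(obs, a)] / self.count[(obs, a)])

    def learn(self, obs, action, reward):
        self.total[(obs, action)] += reward
        self.count[(obs, action)] += 1

The only "model" such an agent has is the frequency table itself.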

    Knowledge of the structure of the environment is precisely what I mean when 
I use the words "model of the world".  I have restricted my definition further, 
however, by also saying that the system *must* be able to expand its model 
(I'd be tempted to say, in an intelligent way, but that then gets rather 
recursive :-)  I'd almost go so far as to say that a true intelligence has to 
be able to use the scientific method (but I'm still a bit timid about that :-).

    You could argue that my "model of the world" *IS* so trivial that every 
possible approach to AGI has one basically by definition . . . but it's the 
power (and structure -- and most importantly the expandability) of the model 
that I'm really arguing about.  And, as I've said before, my perception of 
Matt's argument is that he believes that if you throw enough simple statistics 
at something, then that "model" is powerful enough to get you to AGI.  He also 
keeps throwing that silly calculator into the mix.

    How can I tell if an agent has a model of the world?  If it can predict 
something that it hasn't seen before, then that's pretty much the definition of 
having a good model.
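
    As a rough sketch of the test I have in mind (purely illustrative -- the 
observe()/predict() interface is just an assumption for the example): train the 
agent on one set of cases, then score its predictions only on cases it has 
never encountered:

def prediction_score(agent, training_cases, novel_cases):
    # training_cases / novel_cases: lists of (observation, outcome) pairs,
    # with the novel observations deliberately disjoint from the training ones.
    # The observe()/predict() interface is assumed purely for this sketch.
    for obs, outcome in training_cases:
        agent.observe(obs, outcome)
    hits = sum(1 for obs, outcome in novel_cases
               if agent.predict(obs) == outcome)
    return hits / len(novel_cases)

An agent that only replays statistics over what it has already seen should 
score no better than chance on the novel cases.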

    One of the things that I think is *absolutely wrong* about Legg's paper is 
that he only uses more history as an example of generalization.  I think that 
predictive power is a test for intelligence (just as he states) but that it 
*must* include things that the agent has never seen before.  In this sense, I 
think that Legg's paper is off the mark to the extent of being nearly useless 
(since you can see how it has poisoned poor Matt's approach).


----- Original Message ----- 
From: "DEREK ZAHN" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, May 02, 2007 3:03 PM
Subject: Re: [agi] rule-based NL system


> 
> Mark Waser writes:
> 
>>Intelligence is only as good as your model of the world and what it allows 
>>you to do (which is pretty much a paraphrasing of Legg's definition as far 
>>as I'm concerned).
> 
> Since Legg's definition is quite explicitly careful not to say anything
> at all about the internal structure of an agent, this is an interesting
> statement, and I'm curious how you derive this equivalence.
> 
> I assume that you have something in mind for "model of the world"
> that isn't so trivial that every possible approach to AGI has to have one
> basically by definition... If so, it's not really worth talking about.  If not,
> how can you tell if an agent has a model of the world?
> 
> 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
