I think your definition of "understanding" is in agreement with what Hutter 
calls intelligence, although he stated it more formally in AIXI.  An agent and 
an environment are modeled as a pair of interactive Turing machines that pass 
symbols back and forth.  In addition, the environment passes a reward signal to 
the agent, and the agent has the goal of maximizing the accumulated reward.  
The agent does not, in general, have a model of the environment, but must learn 
it.  "Intelligence" is presumed to be correlated with a greater accumulated 
reward (perhaps averaged over a Solomonoff distribution of all environments).
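To make that interaction concrete, here is a rough Python sketch of the 
protocol; the toy echo environment, the trivial policy, and all of the names 
are invented for illustration, and of course Hutter's formulation quantifies 
over all computable environments rather than any one toy example:

# Minimal sketch of the agent/environment interaction described above.
# Everything here (class names, the echo environment, the echo policy)
# is illustrative, not part of AIXI itself.

import random

class Environment:
    """Toy environment: rewards the agent for echoing its last symbol."""
    def __init__(self):
        self.last_symbol = 0

    def step(self, action):
        reward = 1.0 if action == self.last_symbol else 0.0
        self.last_symbol = random.randint(0, 1)   # next observation symbol
        return self.last_symbol, reward

class Agent:
    """Toy policy: echo the most recent observation back to the environment.
    For this particular environment that happens to maximize reward; a
    general agent would have to learn such a policy, not be handed it."""
    def act(self, observation, reward):
        return observation

env, agent = Environment(), Agent()
observation, reward, total_reward = 0, 0.0, 0.0
for t in range(100):                      # symbols pass back and forth each step
    action = agent.act(observation, reward)
    observation, reward = env.step(action)
    total_reward += reward                # the agent's goal: maximize this sum
print("accumulated reward:", total_reward)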
 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: James Ratcliff <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, November 18, 2006 7:42:19 AM
Subject: Re: [agi] A question on the symbol-system hypothesis


Have to amend that to "acts or replies",
and it could react "unpredictably" depending on the human's level of 
understanding... if it sees a nice neat answer (like jumping through the 
window because the door was blocked) that the human wasn't aware of, or was 
surprised by, it would be equally good.

And this doesn't cover the other side: what other actions could be done, and what 
the consequences would be; that is also important.

And lastly, this is for a "situation" only; we also have the more general case 
of understanding a "thing".  Where, when it sees, or has, or is told about a 
thing, it understands it if it knows about the general properties, and the actions that 
can be done with, or using, the thing.

The main thing being that we can't, and aren't really, defining "understanding" but the 
effect of the understanding, either in action or in a language reply.

And it should be a level of understanding, not just a yes/no.

So if one AI saw an apple and said, "I can throw / cut / eat it," and weighted 
those ideas, and a second AI had the same list but weighted eating as more 
likely, and/or knew that people sometimes cut an apple before eating it, then the 
second AI would "understand" to a higher level.
Likewise, if instead one knew you could bake an apple pie, or that apples come from 
apple trees, it would understand more.

So it starts looking like a knowledge test then.

Maybe we could extract simple facts from wiki, and start creating a test there, 
then add in more complicated things.
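A toy version of such a scoring scheme might look like the following sketch; 
the apple facts, their weights, and the scoring rule are all made up for 
illustration, and a real test would presumably draw them from extracted facts 
like those mentioned above:

# Hedged sketch of the "level of understanding" idea: score an agent's
# beliefs about a thing against a weighted reference list of facts.
# The facts, weights, and scoring rule are invented for illustration.

reference_apple_facts = {
    "can be eaten": 1.0,
    "can be cut": 0.6,
    "can be thrown": 0.3,
    "can be baked into a pie": 0.5,
    "grows on apple trees": 0.7,
}

def understanding_score(agent_beliefs, reference):
    """Sum the reference weight of every fact the agent knows,
    normalized so a perfect score is 1.0."""
    known = sum(w for fact, w in reference.items() if fact in agent_beliefs)
    return known / sum(reference.values())

agent_a = {"can be thrown", "can be cut", "can be eaten"}
agent_b = {"can be thrown", "can be cut", "can be eaten",
           "can be baked into a pie", "grows on apple trees"}

print(understanding_score(agent_a, reference_apple_facts))  # lower level
print(understanding_score(agent_b, reference_apple_facts))  # higher level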

James

Charles D Hixson <[EMAIL PROTECTED]> wrote: Ben Goertzel wrote:
> ...
> On the other hand, the notions of "intelligence" and "understanding"
> and so forth being bandied about on this list obviously ARE intended
> to capture essential aspects of the commonsense notions that share the
> same word with them.
> ...
> Ben
Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably 
acts in such a way as to maximize the probability of achieving its 
goals in that situation.

I'll grant that it's a bit fuzzy, but I believe that it captures the 
essence of the visible evidence of understanding.  This doesn't say what 
understanding is, merely how you can recognize it.
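One way to read that operationally is to put the system in the same situation 
many times and estimate how often it achieves its goal. The sketch below is 
only an illustration of that recognition test; the toy blocked-door/window 
situation (borrowed from earlier in the thread), the two policies, and the 
0.95 threshold are assumptions, not anything taken from the definition itself:

# Sketch of "recognizing" understanding as defined above: observe an agent
# in the same situation repeatedly and estimate how reliably it achieves
# its goal. The situation, policies, and threshold are stand-ins.

import random

def run_trial(policy):
    """One encounter with a toy situation: the door is blocked with
    probability 0.5; the goal is to get outside."""
    door_blocked = random.random() < 0.5
    action = policy(door_blocked)
    if action == "use door":
        return not door_blocked          # goal achieved only if door is clear
    if action == "use window":
        return True                      # always works, if the agent thinks of it
    return False

def understands(policy, trials=1000, threshold=0.95):
    successes = sum(run_trial(policy) for _ in range(trials))
    return successes / trials >= threshold

naive    = lambda blocked: "use door"
flexible = lambda blocked: "use window" if blocked else "use door"

print(understands(naive))     # False: succeeds only about half the time
print(understands(flexible))  # True: reliably achieves the goal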




_______________________________________
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php 





-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
