Two problems unfortunately arise quickly there:
1. Internal world model.
  An intelligence must have some form of internal world model, because that is
what it operates on internally; it is its memory.
  People have a complex world model that includes everything we have built up
over years, but a reservation service has a world model as well: it knows about
1000+ airline routes and times, it talks to you, saves your preference for the
outgoing flight, and can use that to think and come up with a suggestion for a
return flight, and which airline to take (a toy sketch of what I mean follows
below).  If the system contains weather data as well, and can use it, then it
could be more intelligent.
  It has a world model built up there, not as complex, but definitely there,
and I would rate that as having some level of "intelligence", and an expert
system as having more intelligence due to its richer world model and greater
ability to give answers.
2. Learning.
  Probably a controversial point here, but do you think learning is a
requirement for understanding, or for intelligence?
  For an intelligence, I don't believe it is.  If we took a 10-year-old child
and stopped their ability to learn, they would still be able to do all the
things they did before: go to the store, play, fix breakfast, and so on.
  Now, for an AGI to grow and be able to do more and more things, it needs to
have the ability to learn.  But understanding itself carries no special
requirement to understand new things, only the things currently under
consideration.
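
  To make point 1 concrete, here is a minimal sketch of the kind of world model
I have in mind for the reservation service.  It is only an illustration with
made-up names and data (ROUTES, suggest_return, the airports and airlines),
not a description of any real system:

    # Hypothetical, simplified world model for a reservation service:
    # a table of known routes and times, plus the caller's saved preference
    # for the outgoing flight, used to suggest a return flight.
    ROUTES = {
        ("DFW", "ORD"): [("AA", "08:00"), ("UA", "13:30")],
        ("ORD", "DFW"): [("AA", "09:15"), ("AA", "17:45"), ("UA", "18:10")],
    }

    def suggest_return(outbound_pref, origin, destination):
        """Suggest a return flight, preferring the airline chosen outbound."""
        airline, _departure = outbound_pref
        candidates = ROUTES.get((destination, origin), [])
        same_airline = [f for f in candidates if f[0] == airline]
        return (same_airline or candidates or [None])[0]

    # The caller booked AA 08:00 from DFW to ORD; the service uses its model
    # to propose an AA flight back.
    print(suggest_return(("AA", "08:00"), "DFW", "ORD"))  # -> ('AA', '09:15')

A richer model (weather data, more routes, more saved preferences) would let it
give better suggestions, which is the sense in which I would call it "more
intelligent".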

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:
> What definition of intelligence would you like to use?

Legg's definition is perfectly fine for me.

> How about the "answering machine" test for intelligence?  A machine passes the
> test if people prefer talking to it over talking to a human.  For example, I
> prefer to buy airline tickets online rather than talk to a travel agent.  To
> pass the answering machine test, I would make the same preference given only
> voice communication, even if I know I won't be put on hold, charged a higher
> price, etc.  It does not require passing the Turing test.  I may be perfectly
> aware it is a machine.  You may substitute instant messages for voice if you
> wish.

What does "being preferred by humans" have to do with (almost any definition 
of) intelligence?  If you mean that it can solve any problem (i.e. tell a 
caller how to reach any goal -- or better yet even, assist them) then, sure, 
it works for me.  If it's only dealing with a limited domain, like being a 
travel agent, then I'd call it a narrow AI.  Intelligence is only as good as 
your model of the world and what it allows you to do (which is pretty much a 
paraphrasing of Legg's definition as far as I'm concerned).  And if you're 
not using an expandable model, as a calculator is not, then you're not 
intelligent.

> I claim that a system that can pass this test "understands" my words and knows
> what they mean, even if the words are not grounded in nonverbal sensorimotor
> experience.  Its world model will be different than that of a human, but so
> what?

And I'll claim that it doesn't understand a thing UNLESS it has a model of 
its world (which could be text-only for all I care, but which has the 
behavior necessary for it to accurately answer questions about the real 
world) that it is relating your words to.  If it has that and can add to 
its world as new things are introduced to it from the "real" world, then 
I'm very willing to say that it is intelligent and that it understands its 
world.  If not, you just have an unintelligent program.

> Its world model will be different than that of a human, but so what?

I've never claimed that an intelligence's world model has to be anything 
like that of a human.  All I require is that it be effective and expandable.


----- Original Message ----- 
From: "Matt Mahoney" 
To: 
Sent: Wednesday, May 02, 2007 12:50 PM
Subject: Re: [agi] rule-based NL system


> --- Mark Waser  wrote:
>
>> > OK, how about Legg's definition of universal intelligence as a measure of
>> > how a system "understands" its environment?
>>
>> OK.  What purpose do you wish to use Legg's definition for?  You immediately
>> discard it below . . . .
>
> What definition of intelligence would you like to use?
>
> How about the "answering machine" test for intelligence?  A machine passes the
> test if people prefer talking to it over talking to a human.  For example, I
> prefer to buy airline tickets online rather than talk to a travel agent.  To
> pass the answering machine test, I would make the same preference given only
> voice communication, even if I know I won't be put on hold, charged a higher
> price, etc.  It does not require passing the Turing test.  I may be perfectly
> aware it is a machine.  You may substitute instant messages for voice if you
> wish.
>
> I claim that a system that can pass this test "understands" my words and knows
> what they mean, even if the words are not grounded in nonverbal sensorimotor
> experience.  Its world model will be different than that of a human, but so
> what?
>
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>





_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
 

