Agreed, but I think as a first-level project I can accept the limitation of 
modeling the AI 'as' a human, since we are a long way off from turning it loose 
as its own robot, and this will allow it to act and reason more as we do.  
Currently I have PersonAI as a subclass of Person, so it will inherit most 
things from a Person but could have subtle differences later.
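A rough sketch of what I mean, in Python (the class and attribute names here are 
only placeholders, not the actual project code):

    class Person:
        """A human-modeled agent with its own belief system."""
        def __init__(self, name):
            self.name = name
            self.beliefs = {}   # this individual's own belief system

        def choose_action(self, obj):
            # default human-style reasoning about an object
            return "eat" if self.beliefs.get(obj + " is food") else "examine"

    class PersonAI(Person):
        """Inherits most behaviour from Person; subtle differences can be
        overridden here later without changing Person itself."""
        pass

    bob = PersonAI("bob")
    bob.beliefs["apple is food"] = True
    bob.choose_action("apple")   # "eat", inherited straight from Person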
  Your bell pepper example is a reasonable one, and we handle that by giving 
every individual a full belief system; each individual can also model others' 
belief systems internally.
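Roughly, that belief-system layout could be sketched like this (again just an 
illustrative sketch with made-up names, not the real data structures):

    class BeliefSystem:
        """One individual's beliefs plus its internal models of others' beliefs."""
        def __init__(self):
            self.beliefs = {}            # e.g. {"bell pepper is tasty": False}
            self.models_of_others = {}   # name -> BeliefSystem, as imagined by this individual

        def believe(self, statement, value=True):
            self.beliefs[statement] = value

        def model_of(self, other_name):
            # lazily create an internal model of someone else's belief system
            return self.models_of_others.setdefault(other_name, BeliefSystem())

    # e.g. one agent dislikes bell peppers but models another as liking them
    charles = BeliefSystem()
    charles.believe("bell pepper is tasty", False)
    charles.model_of("wife").believe("bell pepper is tasty", True)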

On the other topic, checking its goals as a measure of understanding: we simply 
need to have it tell or explain those goals to us.

James

Charles D Hixson <[EMAIL PROTECTED]> wrote:
James Ratcliff wrote:
> ...
> >
> > So if one AI saw an apple and said, I can throw / cut / eat it, and
> > weighted those ideas, and a second AI had the same list but weighted
> > eat as more likely, and/or knew people sometimes cut it before eating
> > it, then the second AI would "understand" to a higher level.
> > Likewise, if instead one knew you could bake an apple pie, or that
> > apples came from apple trees, it would understand more.
> No. That's what I'm challenging. You are relating the apple to the
> human world rather than to the goals of the AI.
>
> What "world" do you propose the AGI act in? Yes I posit that it should 
> act and reason according to any and all real world assumptions, and 
> that being centric to the human world.  IE, if an AGI is worried about 
> creating a daily schedule, or designing an optimal building desing, it 
> MUST take into account humans need of restrooms facilities, even 
> though that is not part of ITs requirements or concerns.
>   Likewise doors and physically interacting objects are important.
>   If you dont model this, you may hope for a AI that is solely 
> computer resident that you can ask hard questions of and receive 
> answers.... Which is good and fine until those questions have to model 
> anything in the real world, then you have the same problem.   You 
> really must wind up modeling the world and existence, to have a fully 
> useful AGI.
It must act in its own world, as we act, individually, in *our* own worlds.  
There is interaction between these worlds, but they definitely aren't 
identical.  I discover this anew whenever I try to explain to my wife 
why I did something, or she tries to explain the same to me.  Since our 
purposes aren't the same, and our perceptions aren't the same, our bases 
for reasoning are divergent.  Fortunately our conclusions are often 
equivalent, and so the exteriorizations are the same.  But I look at a 
bell pepper with distaste.  I only consider it as an attractive food 
when I'm modeling her model of the universe.

Similarly, to an AI an apple would not be a food.  An AI would only 
model an apple as a food object when it was trying to figure out how a 
person (or some other animal) would view it.  Note the extra level of 
indirection.
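(A toy sketch of that indirection, with made-up names, purely for illustration:)

    # the AI's own view of the apple
    ai_beliefs = {"apple is food": False}

    # the AI's internal model of how a person views the apple
    ai_model_of_person = {"apple is food": True}

    def is_food(beliefs, item):
        return beliefs.get(item + " is food", False)

    print(is_food(ai_beliefs, "apple"))           # False: not food to the AI itself
    print(is_food(ai_model_of_person, "apple"))   # True: food only one level of indirection in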

Is it safe to cross the street?  Possibly the rules for an AI would be 
drastically different from those for a person.  They might, or might not, 
be similar to the rules for a person in a wheelchair, but expecting them 
to be the same as yours would drastically limit its ability to persist 
in the physical world.

>
> >
> > So it starts looking like a knowledge test then.
> What you are proposing looks like a knowledge test. That's not what I 
> mean.
>
> Yes, I currently haven't seen any decent "definition" or explanation of 
> understanding that does not encompass intelligence or knowledge.
>
> A couple are here:
> # Understanding is a psychological state in relation to an object or 
> person whereby one is able to think about it and use concepts to be 
> able to deal adequately with that object.
> en.wikipedia.org/wiki/Understanding 
> 
> # means the ability to apply broad knowledge to situations likely to be 
> encountered, to recognize significant deviations, and to be able to 
> carry out the research necessary to arrive at reasonable solutions. 
> (250.01.3)
> www.indiana.edu/~iuaudit/glossary.html 
> 
>
> So on first pass, "understand" is a verb, which implies an actor and 
> an action.  One of the above definitions specifically uses knowledge; 
> the other implies it by "think about it and use concepts", and this 
> thinking and these concepts would seem to be stored in some knowledge 
> base, either AGI- or human-based.
>   This is very similar to the "intelligence" definitions that have 
> been floating around as well, which is why I pose that both of these 
> topics should be discussed together; the only possible real way to 
> see if something "understands" something else is either to witness 
> the interactions between them, or to ask it.
>   This could be posed in two ways.  The easiest is simply what we do 
> in schools: direct testing.  Unfortunately, in the real world, you can't 
> merely spit out the answers; you have to act and perform, and there 
> are small bits of interaction knowledge which are required to 
> accomplish many tasks, e.g. driving, or mixing chemicals in a chemistry 
> class.  These will be much, much harder to test in an AGI, and will 
> ultimately require a bot to physically perform them, though a Sim could 
> test it on a higher level first.
>
>   My thoughts are to create a complex Sim Environment, turn a few 
> AIs loose inside it, and give humans a nice, flexible, usable interface 
> to direct, teach, and interact with the bots.  The humans can 
> look at some of the behaviours, question, and correct the AGIs to 
> continuously build up a higher set of abilities.
>   In THEORY this sounds good, though I know it lacks in a few areas.  
> But any thoughts there, even devil's-advocate positions, would be great 
> to help me get this around in my head.  (Helpful ideas are good as well.)
>   Basically, I believe knowledge is the most important building block 
> for AI, and we have yet to have a good representation of knowledge, 
> or a good way of acquiring it.  Jumping forward to wanting to be 
> able to Use the knowledge, and to create and act with it, is somewhat 
> premature until these two issues are more formally defined and usable.
>
> James Ratcliff
>

_______________________________________
James Ratcliff - http://falazar.com