#7 - Understanding is learning a new language to the point of fluency.
(When the words in the new language activate your language-independent
concepts, and you have created sufficient behaviors so that you can
effortlessly generate expressions in the new language.)
~PM

Subject: Re: [agi] What is "understanding"?
From: [email protected]
Date: Wed, 3 Apr 2013 20:53:44 -0500
To: [email protected]

Thanks Aaron!  Computer people always want data structures and representations, because they would make the problem straightforward, but there is no reason that there have to be any.  People don't talk about neural networks having "representations".  Have you guys ever heard of Rodney Brooks?  It seems like at the linguistic level we have representations, but when you look closely, no: we don't have retrieval, we have dynamic re-creations.
As for arguing why propositions are not sufficient, how about all of non-explicit learning, motor learning for example?  Now, it's tricky, because representational systems can be made informationally complete, so you might get the idea that such a system could eventually be made to work.  But we have many years of constantly working at it.  Brittleness is one of the big problems, but there are plenty more.  One in particular that I am focusing on is how difficult incremental learning is for these systems: you have to attach a completely different mechanism (like weights) onto the proposition to deal with it.  Take that as a simple counterexample suggesting that propositional representations by themselves are insufficient.
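
To make that concrete, here is a minimal Python sketch (purely my own illustration; the class and proposition names are hypothetical, not anyone's actual system).  The point is that the proposition itself never changes; all of the learning lives in a separate, bolted-on weight:

class PropositionStore:
    # Propositions plus a completely separate weight mechanism.

    def __init__(self, learning_rate=0.1):
        self.weights = {}  # proposition -> bolted-on weight in [0, 1]
        self.learning_rate = learning_rate

    def observe(self, proposition, supported):
        # Nudge the weight toward 1.0 on support, toward 0.0 on conflict.
        w = self.weights.get(proposition, 0.5)
        target = 1.0 if supported else 0.0
        self.weights[proposition] = w + self.learning_rate * (target - w)

store = PropositionStore()
store.observe("grasp_requires(open_hand)", supported=True)
store.observe("grasp_requires(open_hand)", supported=False)
print(store.weights)  # the proposition is frozen; only the add-on weight moved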
I apologize for not having a proper positive proposal, as that is one of the stated reasons for posting to this forum, but that's a much harder problem.  There is still room for criticism of existing approaches, though.  So let me throw in that some of the dynamic network architectures do seem promising.  I would want them to address the question of epistemic emotional judgements more directly, but I would say they are capable of it.  I haven't looked into them enough, I guess.  Ben's system (sorry, I have already forgotten what he's calling it, and I think I even downloaded bits of it) and maybe Stan Franklin's LIDA.  Personally, as I have said, I'm not comfortable when there is a representational level in there anywhere.  I'd want to stay closer to pure sensori-motor data, with an openness to deep machine learning for features if you just have to have something like representations.  Ok, there.  I tried!

Andi


On Apr 3, 2013, at 12:02 PM, Aaron Hosford <[email protected]> wrote:

PM said:
One suggestion is that you compile language into a "database of facts" using a propositional representation.
In addition, you convert all sensory input to the AGI into the same propositional representation.
Then you do inferencing within, and generate behaviors from, the aforesaid representation.

If by "propositional representation" you mean a logical statement with a Boolean true/false value, this will not be sufficient. The reason is that "facts" are never certain, and you never know in advance which ones will later prove wrong. Facts have associated confidence levels, based on supporting and conflicting evidence. Boolean truth values are an idealization of this, throwing away the ongoing accumulation of evidence and giving us only whether a particular proposition is currently accepted as reliable or not. The failure to recognize this has held back many seemingly promising AI projects in the past.
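
To illustrate with a minimal sketch (my own, with hypothetical names, not code from any particular project): a fact that accumulates evidence, with the Boolean view reduced to a lossy threshold over that evidence.

class Fact:
    def __init__(self, statement):
        self.statement = statement
        self.supporting = 0
        self.conflicting = 0

    def add_evidence(self, supports):
        if supports:
            self.supporting += 1
        else:
            self.conflicting += 1

    @property
    def confidence(self):
        # One simple confidence rule, assumed for illustration only.
        total = self.supporting + self.conflicting
        return self.supporting / total if total else 0.5

    def as_boolean(self, threshold=0.9):
        # The Boolean truth value is just a snapshot that discards the evidence.
        return self.confidence >= threshold

f = Fact("swans_are_white")
f.add_evidence(True)
f.add_evidence(True)
f.add_evidence(False)  # a black swan turns up
print(f.confidence, f.as_boolean())  # ~0.67, False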

Rich said:
So, what the heck can we compile NL into that would support prospective AGI operation?

This is what I've been describing to you. Semantic networks, properly 
structured, are up to the task. Any proposition from PM's "propositional 
representation" can be represented in a semantic network. The advantage that a 
semantic network then conveys is that the relationships between elements 
contained within a proposition can themselves be given confidence levels; the 
analysis of evidential support is no longer limited only to the proposition as 
a whole. For example, suppose I am looking at the proposition ate(Billy, Nicky's_Popsicle). In a standard propositional representation like this, I can't analyze where the proposition goes wrong; I just have to accept that it is either right or wrong as a whole. If I use a semantic network-style representation, Billy<--SUBJECT--ate--OBJECT-->Nicky's_Popsicle, I now have two separate locations to which I can attribute the failure of the proposition as a whole: the SUBJECT and OBJECT links. Propositions come so close to doing this, but fail when we attempt to attribute failure to a particular substructure, because they aren't generalized enough to permit full analysis of the relationships of substructures to the parent structure.
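
Rendered as a minimal sketch (my own rendering, with a hypothetical structure), the two links each carry their own confidence, so blame for the failed proposition can land on one link instead of on the whole:

links = {
    # (event_node, role, target_node): confidence in that relationship alone
    ("ate_1", "SUBJECT", "Billy"): 0.95,
    ("ate_1", "OBJECT", "Nickys_Popsicle"): 0.40,  # maybe it was someone else's popsicle
}

def weakest_link(event):
    # Which relationship is the most likely point of failure?
    candidates = {key: conf for key, conf in links.items() if key[0] == event}
    return min(candidates, key=candidates.get)

print(weakest_link("ate_1"))  # ('ate_1', 'OBJECT', 'Nickys_Popsicle')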

Andi said:
I would go with Todor on this one.  More specifically, it's very clear to me that language cannot be the bottom or basis of representation.  A language system has to be a piece on top of the basic system.  It may be the most important piece to us, because a system will need language to interact with us and to use and contribute to our body of written knowledge.  But that need in no way implies that you could ever get any intelligent behavior if you just start at the level of language.  There are plenty of reasons to think otherwise.

The problem Steve and I both agree needs to be solved is: What, inside the mind, represents the meanings of natural language, and how do we go about designing an analogous structure programmatically? When you say someone understands a sentence, what happens in that person's mind? Is there not some sort of internal structure to which that sentence gets mapped through the act of understanding? In most AI/AGI projects to date, there have been three basic approaches: (1) use the sentence directly in text form, (2) pull out what you need and stash it in "frames", or (3) convert it to a parse tree. I think each of these is inadequate to the task. I think there is a more comprehensive data structure used in the human mind to represent what a sentence actually means, and this structure, the lingua franca of the mind, is what the mind operates on in the act of thinking. What would that data structure look like, were we to reverse-engineer it to work on a computer? Language is useful toward accomplishing this task, not because it is already in the proper form, but because its structure necessarily mirrors that form closely, due to its purpose of communicating knowledge in that form from one mind to another. Once we have a proper understanding of how meaning is represented in the mind, it should be possible to begin mapping sensory information to that format, just as can be done with natural language.
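
For concreteness, here is one sentence under each of the three approaches; these structures are generic stand-ins of my own, not what any specific project uses:

sentence = "Billy ate Nicky's popsicle."   # (1) direct text form

frame = {                                  # (2) a "frame": fixed slots
    "action": "ate",
    "agent": "Billy",
    "patient": "Nicky's popsicle",
}

parse_tree = (                             # (3) a parse tree: syntax, not meaning
    "S",
    ("NP", "Billy"),
    ("VP", ("V", "ate"), ("NP", "Nicky's popsicle")),
)

Each captures progressively more structure, yet none of them says what the sentence means to the mind that hears it.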


On Wed, Apr 3, 2013 at 10:48 AM, Piaget Modeler <[email protected]> 
wrote:

Steve Richfield: "So, what the heck can we compile NL into that would support 
prospective AGI operation?"

One suggestion is that you compile language into a "database of facts" using a 
propositional representation.
In addition, you convert all sensory input to the AGI into the same 
propositional representation.
Then you do inferencing within, and generate behaviors from, the aforesaid representation.

~PM
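
A minimal sketch of the pipeline suggested above (the stages are PM's; every function body is a trivial placeholder of my own, not a real implementation):

def compile_language(utterance):
    # Stage 1: compile language into a "database of facts" (propositions).
    return {("said", utterance)}

def encode_sensation(reading):
    # Stage 2: convert sensory input into the same propositional form.
    return {("sensed", key, value) for key, value in reading.items()}

def infer(facts):
    # Stage 3: inferencing within the shared representation...
    return {("derived",) + fact for fact in facts}

def generate_behavior(facts):
    # ...and generating behaviors from it.
    for fact in sorted(facts, key=str):
        print("acting on:", fact)

facts = compile_language("Billy ate Nicky's popsicle.")
facts |= encode_sensation({"vision": "popsicle_missing"})
facts |= infer(facts)
generate_behavior(facts)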
