I would be more inclined to assign nodes to concepts, classes, and specific 
instances (objects or events) rather than to words themselves, since a single 
word often represents multiple distinct concepts or classes as well as their 
instances. 

This means, however, that the system must be able to automatically identify 
distinct meanings of the same word using context cues surrounding its use, 
which can be a rather complex problem. Currently I'm focused on using fuzzy 
connection strengths for edges in the semantic net, along with rules for how 
those connection strengths interact based on local network topology. (This sort 
of approach has significant advantages in dealing with ambiguities and 
conflicting information.)

For example, an instance may have multiple edges indicating which nodes 
represent its class. A collection of local daemons would be triggered to help 
resolve this scenario by strengthening or weakening the connection strengths 
until a clear winner or winners had been found.
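To make the idea concrete, here is a minimal sketch of that resolution daemon in Python. The names, the sharpening rule, and the example data are all my own illustration, not a fixed design: an instance node holds fuzzy-weighted "is-a" edges to several candidate class nodes, and the daemon repeatedly strengthens strong edges and weakens weak ones until one candidate clearly dominates.

```python
# Toy semantic net fragment: an instance node with fuzzy "is-a" edges to
# candidate class nodes. The resolution daemon sharpens the connection
# strengths until a clear winner emerges, or gives up and leaves them fuzzy.
# All names and the update rule are illustrative assumptions.

def resolve_class(edges, sharpness=2.0, threshold=0.9, max_steps=50):
    """edges: dict mapping candidate class -> fuzzy connection strength in (0, 1]."""
    weights = dict(edges)
    for _ in range(max_steps):
        # Strengthen strong edges and weaken weak ones by raising each
        # strength to a power, then renormalize so they stay comparable.
        raised = {c: w ** sharpness for c, w in weights.items()}
        total = sum(raised.values())
        weights = {c: w / total for c, w in raised.items()}
        best = max(weights, key=weights.get)
        if weights[best] >= threshold:
            return best, weights  # a clear winner was found
    return None, weights  # ambiguity persists; leave the edges fuzzy

# Hypothetical instance with three competing class edges:
winner, strengths = resolve_class({"bank_institution": 0.6,
                                   "bank_riverside": 0.35,
                                   "bank_seating": 0.05})
```

Because the rule is competitive rather than a hard cutoff, near-ties stay unresolved until more evidence shifts the strengths, which matches the "winner or winners" phrasing above.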

I still have a ways to go in building the mechanisms to convert a semantic 
representation of a sentence in terms of word relationships into one that 
describes the meanings of those words. I look forward to actually testing my 
ideas out.
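As a rough stand-in for that word-to-meaning conversion step, here is a simplified Lesk-style disambiguator. This is my own illustration under assumed data, not the mechanism described above: each candidate concept node carries a bag of context cue words, and the concept whose cues best overlap the sentence's surrounding words is selected.

```python
# Simplified word-sense selection by context overlap (Lesk-style).
# word_senses maps each candidate concept node to cue words that tend to
# co-occur with it; the concept sharing the most words with the sentence
# wins. Data and names are purely illustrative.

def pick_sense(word_senses, sentence_words):
    context = set(sentence_words)
    scores = {sense: len(cues & context) for sense, cues in word_senses.items()}
    return max(scores, key=scores.get)

bank_senses = {
    "bank_institution": {"money", "loan", "deposit", "account"},
    "bank_riverside":   {"river", "water", "fishing", "shore"},
}
sense = pick_sense(bank_senses, "she opened an account at the bank".split())
```

In a fuzzy-edge net the overlap counts would presumably become graded connection strengths feeding the resolution process, rather than a hard argmax.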


On Aug 24, 2012 6:19 AM, Jim Bromer <[email protected]> wrote: 

If I were writing an AGI program I would start by designing a program that 
developed a semantic net as well.  Yes, as the program uses a concept, 
whether it is reacting to it or developing an idea around it, whether it is 
in the focus of the idea or closer to the periphery, once it is considered or 
used in a consideration it may change in response to other concepts that are 
used with it, and it can change the other concepts as well.  And I feel 
that this is how we use concepts.
 As a result the well-defined concept (or word in the case of a semantic 
net) becomes much more varied and conceptually fuzzy.  If you stress too 
much formality in your definitions of concepts you are going to run into some 
confusion, just because the meaning can be carried along by the way it is used 
and by the way it can be used. On the other hand, an AGI program has 
to have some way to "understand" the concept.  So the only way that this 
can be done is by looking at different usages, and becoming aware when a usage 
crosses the intuitive boundaries that you or others have held, to recognize 
that it could be understood in other ways as well.
 This sounds relativistic to me. Other aspects of relativism point to 
the lack of a fundamental base from which to work.  Even though 
there may be fundamental truths in an environment, that does not mean that we 
can truly and fully understand them.  So when someone starts talking about 
a program that uses Bayesian Reasoning, for example, without specifying the 
frame in which it would be used (whether that specification represents an 
instance or an abstraction meant to represent possible instances), then I become 
wary that the person is talking about an idea that would work in narrow AI but 
might not work in AGI, where important methods have to be stronger.  
Of course without any fundamental bases for reality one can argue that no 
definition is adequate, but that is not the point.  We have to do 
something to make rough representations of fundamentals, 
but these fundamentals are not invariant universal absolutes but more 
like specialized concepts that are useful.  
 Jim Bromer

On Thu, Aug 23, 2012 at 10:53 PM, Aaron Hosford <[email protected]> 
wrote:

Ah, that makes it a lot clearer. So really it's about experience and context 
(including other concepts and your own personal usage) modifying concepts on 
the fly. A dictionary always defines words in terms of other words, and so as 
the meaning of one word shifts, so do the meanings of all those connected to 
it. An AI can manage this sort of difficulty if it is built on the same 
principles. That's why I prefer semantic nets. They represent a data structure 
that corresponds directly to this sort of shifting web of concepts. Each 
concept receives its own node, and is defined entirely in terms of its 
connections to other nodes, which are updated constantly as new experiences and 
thoughts accrue.
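That shifting web can be sketched as a tiny data structure. This is a hypothetical illustration (names, learning rate, and update rule are my own assumptions): each concept node stores nothing but weighted links to other nodes, and every co-occurrence in experience nudges those weights, so "meanings" drift exactly as described above.

```python
# Each concept is defined only by its weighted connections to other concepts.
# Observing two concepts together strengthens their mutual links, so a
# concept's meaning shifts as experience accrues. Illustrative sketch only.

from collections import defaultdict

class SemanticNet:
    def __init__(self, learning_rate=0.1):
        # concept -> {neighbor concept -> connection strength in [0, 1)}
        self.links = defaultdict(lambda: defaultdict(float))
        self.rate = learning_rate

    def observe(self, concept_a, concept_b):
        # Nudge both directed link strengths toward 1.0 (Hebbian-style).
        for a, b in ((concept_a, concept_b), (concept_b, concept_a)):
            self.links[a][b] += self.rate * (1.0 - self.links[a][b])

    def definition(self, concept):
        # A concept's "meaning" is just its current connection profile.
        return dict(self.links[concept])

net = SemanticNet()
net.observe("dog", "animal")
net.observe("dog", "bark")
net.observe("dog", "animal")  # repetition strengthens the dog-animal link
```

Note there is no stored definition anywhere: querying a node returns only its connection profile, which is the point of the "defined entirely in terms of its connections" claim.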



On Thu, Aug 23, 2012 at 9:09 PM, Jim Bromer <[email protected]> wrote:


Aaron Hosford asked me if I could give an example of Conceptual 
Relativism. No. I mean that everything is an example, and conceptual 
relativization is so omnipresent that we are constantly adapting around it.


 Let's say that you want to understand what a word means.  You might 
start by using it in some sentences. But every time you use it in a particular 
sentence your sense of the meaning of the word naturally becomes more strongly 
associated with that situation. So you try to define it in more sentences (that 
is, you try to better understand the concept-word by using the concept in 
another situation).  It then takes on the characteristics of its 
specialized usage as it relates to that situation.  Furthermore, you 
realize that some language-concepts can only be defined by systems of words, and 
the meanings of many of the words have to be fitted to the specialized 
system of the sentence or phrase.  The same thing is true for most any 
concept that you want to think about.


 Ok here is an example: What does pattern mean?  Well, maybe 
someone might start with saying it is a symmetric image that is repeated. 
 Then you start to question what a pattern is and look it up in a 
dictionary.  It is not necessarily symmetric. By including the concept of 
a meta pattern in your definition you realize that a pattern is not necessarily 
graphic.  Then you realize that there can be different definitions of meta 
patterns.  You realize that there are different -types- of patterns and 
meta patterns.  And on and on. At some point you realize that your new 
definitions of a pattern may have taken you beyond the boundaries of what you 
would intuitively call a pattern.  So then you have to ask yourself if the 
new definitions are valid.  If you start by asking if it is useful or 
interesting then the new variations will become more acceptable because they 
are interesting.  If you find that there is a better term to apply to some 
of the new variations you can learn to accept the fact that even if they are 
related to the concept of a pattern it might be easier to get other people to 
know what you are talking about using the better term.  But then you 
realize that in some conversations the term 'pattern' can help people relate to 
the greater context of some particular situation that you are trying to 
describe.


 I think this is a pretty good example, although I did not present it as 
well as it could be.  It is not just a language thing, it is a 
subject-of-thought thing. In fact, you can sometimes rely on the social 
conventions of language to tone down linguistic relativism, but because 
conceptualization might tend to include more systems of related thought than 
you can describe using language, the social conventions may get in the way of 
the effort to better understand an idea.


 It is my opinion that our knowledge is permeated with conceptual 
relativization, and as a result the effort to limit the use of 
components-of-knowledge in thought becomes complicated. In fact, the 
components-of-thought model becomes very tangled, and when we use 
components-of-thought they become more like indexes into a range of variations 
on how the concept is used with other concepts.


 I think the most efficient computational methods are like 
combinatorial component systems.  I would want to use combinations of 
component concepts because that seems like the more efficient method for AGI. 
But, because of conceptual relativism this system cannot be logically 
constrained according to universal principles.  Of course the concept of 
'universal' in logic is relative.


 Jim Bromer

On Thu, Aug 23, 2012 at 8:16 PM, Aaron Hosford <[email protected]> wrote:

OK, that gives me a partial grasp. Can you give me an example?

On Thu, Aug 23, 2012 at 7:05 PM, Jim Bromer <[email protected]> wrote:

Conceptual relativism is the idea that concepts must be used to think 
about other concepts and when that happens the concepts that are used in an 
expression or study of the subject concept can often affect the "meaning" of 
the subject concept. So concepts are not only relative and relational they are 
also relativistic.





  
    
      
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
