> For example, there is the theory of perception where sensory inputs are 
> believed to be 'defined' (or something) or tagged by higher-level analysis, 
> which shows that there are solid foundations of non-hierarchical reasoning

And where do you think this tagging or defining came from in the first place? 
Feedback, by any chance? From previously generalized sensory data? Or maybe you 
think there was some direct NLP communication from god at the dawn of history?

This goes back to my musings about the radically impaired "inductive phase" in 
the dressed-up ape. Countless people with great raw intelligence & top-down 
focus seem to have trouble wrapping their minds around a functional definition 
of intelligence, & its importance for developing a GI algorithm.
I thought long & hard, & came up with a very unlikely combination of three 
critical conditions:

- This level of generalization likely requires a very sparse cortical 
architecture, which is not terribly practical otherwise.
- Given scarcity of related material, one must be emotionally detached & 
asocial enough to start a new field. Being detached & asocial is historically 
life-threatening, never mind working with nothing tangible in sight.
- One should start young, while one's brain is still plastic / unmyelinated, 
because meta-generalization is extremely (if subconsciously) data-intensive. Of 
course, working alone is hardest when one is young.



From: Jim Bromer 
Sent: Saturday, March 30, 2013 12:06 PM
To: AGI 
Subject: Re: [agi] Steve's placement/payload theory of language


No.  The idea that a referent is found by a kind of search and compare process 
does seem obvious.  However, the conclusion that sensory inputs are at the 
bottom of a hierarchy (of search and compare, or something) is not a valid 
conclusion.  For example, there is the theory of perception where sensory 
inputs are believed to be 'defined' (or something) or tagged by higher-level 
analysis, which shows that there are solid foundations of non-hierarchical 
reasoning.  
Furthermore, the idea that a system of hierarchy is both complete and sound is 
contraindicated by the evidence of the dismal results of AGI so far (which is 
also foundational to your conclusion that there is a snowball's chance in hell 
(scih) of the need for sensory inputs in order for a necessarily hierarchical 
search and compare process).  If a single hierarchy were sound and complete, 
then traditional logic would be sound and complete.  I think I was saying that 
an effort to create a linguistic theory able to include a method that 
determined some of the referents of a statement being analyzed was a search and 
compare method, which meant it would be complex.  Finally, a keyboard (for 
example) is a sensory device.  It would not make any sense to talk about an AI 
program that was not capable of reacting to input in some way (except maybe as 
a pretty far-flung theoretical mathematical thing where the "input" could be 
contrived to be derived from the initial input).  
Jim Bromer



On Sat, Mar 30, 2013 at 10:52 AM, Boris Kazachenko <[email protected]> wrote:


  > It is quite possible that the progressive discovery of referents is just a 
search and compare operation...

  Isn't this tautological? Isn't it also tautological that, to be selective, 
search / comparison must be hierarchical? And that at the bottom of this 
hierarchy of complexity are sensory inputs? And that if your algorithm can't 
start from these inputs, then it has a snowball's chance in hell of starting 
anywhere higher?
  Hello?  
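The hierarchical search-and-compare idea above can be put in a minimal Python sketch. Everything here (the functions, the tolerance thresholds, generalizing a group to its mean) is an illustrative assumption, not anyone's actual algorithm: raw "sensory" inputs are compared pairwise, similar neighbors are generalized into a higher-level pattern, and the process repeats one level up.

```python
# Toy hierarchical search-and-compare: raw inputs at the bottom,
# progressively generalized patterns at the higher levels.

def compare_level(inputs, tolerance):
    """Group adjacent inputs whose difference is within tolerance,
    then generalize each group to its mean."""
    patterns = []
    group = [inputs[0]]
    for x in inputs[1:]:
        if abs(x - group[-1]) <= tolerance:
            group.append(x)                    # match: extend current pattern
        else:
            patterns.append(sum(group) / len(group))
            group = [x]
    patterns.append(sum(group) / len(group))
    return patterns

def build_hierarchy(inputs, tolerance=1.0, max_levels=4):
    """Repeatedly compare and generalize, widening tolerance per level."""
    levels = [list(inputs)]
    while len(levels[-1]) > 1 and len(levels) < max_levels:
        levels.append(compare_level(levels[-1], tolerance * len(levels)))
    return levels

levels = build_hierarchy([1, 1, 2, 9, 9, 10, 3, 3, 4])
# levels[0] holds the 9 raw inputs; levels[1] holds 3 generalized patterns
```

The point of the sketch: nothing above the bottom level exists until the bottom-level comparisons have produced it, which is exactly the "can't start anywhere higher" claim.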



  From: Jim Bromer 
  Sent: Saturday, March 30, 2013 9:11 AM
  To: AGI 
  Subject: Re: [agi] Steve's placement/payload theory of language


  So anyway, I think that linguistics has to be involved with the progressive 
determination of referents and how these referents can be used to define the 
meaning of the other parts of an expression.  This is so open that formal 
linguistics may not be able to define this well but it can be defined in 
general ways or for general common meanings.  Because we can provide 
encodings, we can, in turn, use terms in a specialized way.  (Like when I 
said that I use the term referent to refer to a real world object or a real 
world event, or to a mental object or mental event). People in these groups 
sometimes become annoyed because we can't figure out what they are talking 
about even though they have talked about their ideas numerous times. Part of 
the problem is that we can't recall all the specialized definitions that 
individuals use.  I believe that this problem is aggravated because 
specialization is an important part of communicating and even the best of 
writers rely on this even when they are using conventional terminology.

  So I think that the major obstacle confronting AGI linguistics right now is 
the discovery of referents.  Yes this could sometimes be alleviated with 
multi-modal sensors, but there is no evidence that multi-modal sensory methods 
that would allow an object to be seen and heard or sensed in other ways would 
resolve this problem of deducing what is being referred to.  It is quite 
possible that the progressive discovery of referents is just a search and 
compare operation, which seems to be a major slowdown in computer science today.
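  The "search and compare" reading of referent discovery can be sketched in a few lines of Python. All the names and features below are hypothetical, and real anaphora resolution is far harder; the sketch only shows the shape of the operation. Each anaphoric mention is compared against every earlier mention, and the best-overlapping candidate is taken as its referent; the nested scan is the brute-force cost that makes this a slowdown at scale.

```python
# Toy referent discovery as naive search-and-compare.

def feature_overlap(mention, candidate):
    """Score a candidate referent by shared features (number, animacy, ...)."""
    return len(set(mention["features"]) & set(candidate["features"]))

def resolve_referents(mentions):
    resolved = {}
    for i, mention in enumerate(mentions):
        if not mention["is_anaphor"]:
            continue                    # full noun phrases introduce referents
        # search: every earlier mention; compare: feature overlap
        best = max(mentions[:i], key=lambda c: feature_overlap(mention, c))
        resolved[mention["text"]] = best["text"]
    return resolved

mentions = [
    {"text": "the dog",  "features": ["singular", "animate"],   "is_anaphor": False},
    {"text": "the bone", "features": ["singular", "inanimate"], "is_anaphor": False},
    {"text": "it",       "features": ["singular", "inanimate"], "is_anaphor": True},
]
print(resolve_referents(mentions))  # {'it': 'the bone'}
```

  With n mentions this is O(n^2) comparisons, and any richer comparison than set overlap only makes each step more expensive.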

  Jim Bromer



  On Fri, Mar 29, 2013 at 5:40 PM, Jim Bromer <[email protected]> wrote:

    It is a little difficult for me to answer this question so I will start 
with one part before I forget.  I had to look up sign (linguistics) and agent 
(linguistics) to get some idea about what he was talking about.  I would say 
that my approach is (or would be/will be) both.  In reading about signs I 
noticed that the idea of the referent is considered to be distinct from the 
idea of the signified.  When I refer to a referent I am referring to a real 
world thing or event or a mental idea.  I believe that the one thing that is 
missing in modern AI Linguistics is a way to follow what is being referred to.  
It probably is just too complicated for a computer program to figure out 
efficiently.  My gmail is malfunctioning so I will try to continue this later.
    Jim Bromer



    On Fri, Mar 29, 2013 at 1:09 PM, Piaget Modeler <[email protected]> 
wrote:

      Steve and Jim,  


      Kindly respond...



--------------------------------------------------------------------------
      Date: Thu, 28 Mar 2013 09:34:07 +0100
      Subject: Re: FW: [agi] Steve's placement/payload theory of language 

      From: Roland Hausser
      To: [email protected]



      Hello Michael,

      Thank you very much for your email.  I read 

      the comments by Jim Bromer and Steve Richfield

      with great interest.  They lead me to the 
      following questions:

      * Are their respective approaches sign-oriented
        or agent-oriented?

      * What do they think about defining basic concepts
        as types of the recognition and action procedures
        of an agent?

      * How about reusing these basic concepts as the 
        literal meanings of a language?

      Happy Easter to you!

      Looking forward to reading from you,

      Best regards,

      Roland Hausser


            AGI | Archives  | Modify Your Subscription   











-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
