Mike, 
When you type "Chair", what should happen is that the AGI's model activates the 
chair concept: first at a perceptual level, resolving the pixels into letters; 
then at a linguistic level, forming the letters into a word; then at a conceptual 
level; and then at a simulation level, where images of chair instances are evoked.  
This is just simple activation.  Semantic networks tied into perception and 
simulation would achieve the effect you seek.  Transformations on these 
perception-simulation-semantic networks are what much of Piaget's work was 
about.
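
For concreteness, here is a minimal sketch (in Python) of the kind of spreading 
activation I mean, assuming a hand-built toy graph.  The SemanticNetwork class, 
the node names and the weights are all made up for illustration, not taken from 
any existing system:

    from collections import defaultdict

    class SemanticNetwork:
        """A toy semantic network: nodes joined by weighted directed links."""
        def __init__(self):
            self.edges = defaultdict(list)   # node -> [(neighbour, weight)]

        def link(self, a, b, weight=1.0):
            self.edges[a].append((b, weight))

        def activate(self, start, energy=1.0, decay=0.5, threshold=0.05):
            """Spread activation outward from `start`, decaying at each hop."""
            activation = defaultdict(float)
            frontier = [(start, energy)]
            while frontier:
                node, e = frontier.pop()
                if e < threshold:
                    continue
                activation[node] += e
                for neighbour, w in self.edges[node]:
                    frontier.append((neighbour, e * w * decay))
            return dict(activation)

    net = SemanticNetwork()
    net.link("pixels:'chair'", "letters:c,h,a,i,r")   # perceptual level
    net.link("letters:c,h,a,i,r", "word:chair")       # linguistic level
    net.link("word:chair", "concept:CHAIR")           # conceptual level
    net.link("concept:CHAIR", "image:armchair", 0.9)  # simulation level
    net.link("concept:CHAIR", "image:office chair", 0.8)

    print(net.activate("pixels:'chair'"))

Running it from the pixel node activates the letter, word, concept and image 
nodes with decaying strength, which is all I mean by "simple activation".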
~PM.
From: tint...@blueyonder.co.uk
To: a...@listbox.com
Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
Date: Tue, 23 Oct 2012 15:09:30 +0100





CHAIR
 
...
 
It should be able to handle any transformation of the concept, as in
 
DRAW ME (or POINT TO/RECOGNIZE)  A CHAIR IN TWO PIECES –..
 
..SQUASHED
..IN PIECES
..HALF VISIBLE
..WITH AN ARM MISSING
..WITH NO SEAT
..IN POLKA DOTS
..WITH RED STRIPES
 
Concepts are designed for a world of ever-changing, ever-evolving, multiform 
objects (and actions).  Semantic networks have zero creativity or adaptability; 
they are applicable only to a uniform set of objects (basically a database) and, 
crucially, have zero ability to physically recognize or interact with the 
relevant objects.  I've been into this at length recently.  You're the one not 
paying attention.
 
The suggestion that networks or similar can handle concepts is completely 
absurd.
 
This is yet another form of the central problem of AGI, which you clearly do not 
understand (and I'm not trying to be abusive; I've been realising this again 
recently): people here are culturally punch-drunk with concepts like *concept* 
and *creativity*, and just don't understand them in terms of AGI.


 

From: Jim Bromer 
Sent: Tuesday, October 23, 2012 2:04 PM
To: AGI 

Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
 

Mike Tintner <tint...@blueyonder.co.uk> wrote:
AI doesn’t handle concepts.
 
Give me one example to prove that AI doesn't handle concepts.
Jim Bromer
 
 
 
On Tue, Oct 23, 2012 at 4:24 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:


  
  
  
Jim: Mike refuses to try to understand what I am saying because he would have to 
give up his sense of a superior point of view in order to understand it
 
Concepts have nothing to do with semantic networks.
AI doesn't handle concepts.
That is the challenge for AGI.
The form of concepts is graphics.
The referents of concepts are infinite realms..
 
What are you saying that is relevant to this, or that can challenge this – from 
any evidence?