Considering I just told you what the COMMON ELEMENTS and COMMON
RELATIONSHIPS OF THOSE ELEMENTS were, namely "not the shape, but the
combination of capability to support a behind and the potential inclination
of a person to take advantage of that capability (or intention of the
creator to provide such an artifact)", I'm going to conclude you either
weren't paying attention or didn't understand what I was saying.

The common elements are:
    1. the ability to support you while sitting
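
A minimal sketch of that definition as a predicate, in Python (the field
names here are hypothetical, not anyone's actual system):

    from dataclasses import dataclass

    @dataclass
    class Thing:
        can_bear_sitter: bool     # capability to support a behind
        made_for_sitting: bool    # intention of the creator
        someone_would_sit: bool   # potential inclination of a person

    def is_chair(t: Thing) -> bool:
        # Membership turns on affordance and use, not on shape.
        return t.can_bear_sitter and (t.someone_would_sit or t.made_for_sitting)

    print(is_chair(Thing(True, True, True)))   # True
    print(is_chair(Thing(False, True, True)))  # False: can't support anyone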



On Tue, Oct 23, 2012 at 1:04 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>   Aaron,
>
> I have just sent out a post to PM which applies equally to you.
>
> This is waffle.
>
> You have to identify -
>
> what are the COMMON ELEMENTS  - and COMMON RELATIONSHIPS OF THOSE ELEMENTS
> – that will enable you or your semantic net to identify these different
> figures as belonging to the same class of “chair”  and not “collages of
> wood” or “piles of assorted forms”  or “computer desk” or “collections of
> tools”?
>
> ARE there any common elements?
>
> You haven’t identified any.
>
> You have to provide a direct clue as to how you are going to solve this
> problem – the problem of AGI – and not just waffle.
>
>
>
>  *From:* hosfor...@gmail.com
> *Sent:* Tuesday, October 23, 2012 6:34 PM
>  *To:* AGI <a...@listbox.com>
> *Subject:* Re: [agi] Re: Superficiality Produces Misunderstanding - Not
> Good Enough
>
> The thing which typifies the category "chair" is not the shape, but the
> combination of capability to support a behind and the potential inclination
> of a person to take advantage of that capability (or intention of the
> creator to provide such an artifact). These are things that are easy to
> represent in semantic nets, and difficult to represent as rules about shape.
>
> If I have a representation of an object as a semantic net describing its
> parts and their physical relationships to each other, I can write a
> straightforward algorithm to analyze the transitive "supports" and "is
> connected to" relations in that description to determine whether the spot I
> intend to sit is supported. I can also determine whether or not my behind,
> when placed there, will itself be supported, or whether I'll slide off or
> topple over.
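>
> (A minimal sketch of such an algorithm in Python, with a toy parts net and
> made-up names: each entry lists the parts that directly support a given
> part, and the walk follows the transitive "supports" relation down to the
> floor.)
>
>     def is_supported(net, spot, ground="floor"):
>         # Depth-first walk of the transitive "supports" relation:
>         # is `spot` ultimately held up by the ground?
>         seen, stack = set(), [spot]
>         while stack:
>             part = stack.pop()
>             if part == ground:
>                 return True
>             if part in seen:
>                 continue
>             seen.add(part)
>             stack.extend(net.get(part, []))  # direct supporters of `part`
>         return False
>
>     # Toy description: seat on four legs, legs on the floor, back on seat.
>     chair = {
>         "seat": ["leg1", "leg2", "leg3", "leg4"],
>         "leg1": ["floor"], "leg2": ["floor"],
>         "leg3": ["floor"], "leg4": ["floor"],
>         "back": ["seat"],
>     }
>     print(is_supported(chair, "seat"))     # True
>     print(is_supported(chair, "cushion"))  # False: connected to nothing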
>
> The network generating algorithm can be designed to provide the
> information needed to perform this simulation (simulation being the reason
> you say images are necessary in the first place). Once the simulation has
> been performed the first time, the node representing the chair as a whole
> object can be labeled with a summary of the results, acting as a cache for
> relevant information so that the expensive operation of full physical
> simulation can be avoided next time the information is needed. It is this
> caching ability that gives hierarchical semantic nets their leg up over
> other ways of representing the problem.
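>
> (Again just a sketch, assuming a node with a free-form cache slot; the
> names are made up. The point is only that the expensive summary is
> computed once and thereafter looked up.)
>
>     class Node:
>         def __init__(self, name):
>             self.name = name
>             self.affordances = {}  # cached simulation summaries
>
>     def sittable(node, simulate):
>         # Run the full physical simulation at most once per node;
>         # afterwards the answer is a cheap dictionary lookup.
>         if "sittable" not in node.affordances:
>             node.affordances["sittable"] = simulate(node)
>         return node.affordances["sittable"]
>
>     chair = Node("chair-1")
>     sittable(chair, lambda n: True)  # full simulation runs here
>     sittable(chair, lambda n: True)  # cache hit: no simulation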
>
>
>
> -- Sent from my Palm Pre
>
> ------------------------------
> On Oct 23, 2012 11:30 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
>
>  PM & Aaron,
>
> You do realise that whatever semantic net system you use must apply to not
> just one chair, but chair after chair – image after image?
>
> Bearing that in mind, explain the elements of your semantic net which you
> will use to analyse these fairly simple figures as **chairs**:
>
>
> http://image.shutterstock.com/display_pic_with_logo/95781/95781,1218564477,2/stock-vector-modern-chair-vector-16059484.jpg
>
> Let’s label these chairs 1-25  (going L to R from the top down, row after
> row)
>
> Start with just 1. and 2. top left and explain how your net will recognize
> 2 as another example of 1.
>
> How, in other words, do you define a “chair” in terms of simple abstract forms?
>
> Then we can apply your system, successively, to 3. 4. etc.
>
> This is the problem that has defeated all AGI-ers and all psychologists
> and philosophers so far.
>
> But Aaron (and PM?) has a semantic net solution to it - if you can solve
> jungle scenes, this should be a piece of cake.
>
> I am saying, Aaron, you do not understand this problem – the problem of
> visual object recognition/conceptualisation/applicability of semantic nets.
>
> You are saying you do – and it’s me who is confused. Show me.
>
>
>
>
>
>  *From:* Piaget Modeler <piagetmode...@hotmail.com>
> *Sent:* Tuesday, October 23, 2012 4:41 PM
> *To:* AGI <a...@listbox.com>
> *Subject:* RE: [agi] Re: Superficiality Produces Misunderstanding - Not
> Good Enough
>
>  Mike,
>
> When you type "Chair", what should happen is that the AGI's model
> activates the chair concept: first at a perceptual level, to form the
> pixels into letters; then at a linguistic level, to form the letters into
> a word; then at a conceptual level; and then at a simulation level, where
> images of chair instances are evoked.
>
> This is just simple activation.  Semantic networks tied into perception
> and simulation would achieve the necessary effect you seek.
> Transformations on these perception-simulation-semantic networks are what
> much of Piaget's work was about.
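>
> (A minimal sketch of that cascade as spreading activation, with entirely
> hypothetical node names: each active node activates whatever it links to,
> from a pixel pattern up through letters and the word to the concept,
> which evokes stored instances for simulation.)
>
>     links = {
>         "pixels:CHAIR": ["c", "h", "a", "i", "r"],   # perceptual level
>         "c": ["word:chair"], "h": ["word:chair"],    # linguistic level
>         "a": ["word:chair"], "i": ["word:chair"],
>         "r": ["word:chair"],
>         "word:chair": ["concept:chair"],             # conceptual level
>         "concept:chair": ["instance:kitchen-chair",  # simulation level
>                           "instance:armchair"],
>     }
>
>     def activate(start, links):
>         # Simple spreading activation: collect everything reachable.
>         active, frontier = set(), [start]
>         while frontier:
>             node = frontier.pop()
>             if node not in active:
>                 active.add(node)
>                 frontier.extend(links.get(node, []))
>         return active
>
>     print(sorted(activate("pixels:CHAIR", links)))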
>
> ~PM.
>
>  ------------------------------
> From: tint...@blueyonder.co.uk
> To: a...@listbox.com
> Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good
> Enough
> Date: Tue, 23 Oct 2012 15:09:30 +0100
>
>  CHAIR
>
> ...
>
> It should be able to handle any transformation of the concept, as in
>
> DRAW ME (or POINT TO/RECOGNIZE) A CHAIR...
>
> ..IN TWO PIECES
> ..SQUASHED
> ..IN PIECES
> ..HALF VISIBLE
> ..WITH AN ARM MISSING
> ..WITH NO SEAT
> ..IN POLKA DOTS
> ..WITH RED STRIPES
>
> Concepts are designed for a world of ever-changing, ever-evolving multiform
> objects (and actions).  Semantic networks have zero creativity or
> adaptability – are applicable only to a uniform set of objects (basically
> a database) -  and also, crucially, have zero ability to physically
> recognize or interact with the relevant objects. I’ve been into it at
> length recently. You’re the one not paying attention.
>
> The suggestion that networks or similar can handle concepts is completely
> absurd.
>
> This is yet another form of the central problem of AGI, which you clearly
> do not understand – and I’m not trying to be abusive – I’ve been realising
> this again recently – people here are culturally punch-drunk with concepts
> like *concept* and *creativity*, and just don’t understand them in terms of
> AGI.
>
>  *From:* Jim Bromer <jimbro...@gmail.com>
> *Sent:* Tuesday, October 23, 2012 2:04 PM
> *To:* AGI <a...@listbox.com>
> *Subject:* Re: [agi] Re: Superficiality Produces Misunderstanding - Not
> Good Enough
>
>  Mike Tintner <tint...@blueyonder.co.uk> wrote:
> AI doesn’t handle concepts.
>
> Give me one example to prove that AI doesn't handle concepts.
> Jim Bromer
>
>
>
> On Tue, Oct 23, 2012 at 4:24 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
>
>   Jim: Mike refuses to try to understand what I am saying because he
> would have to give up his sense of a superior point of view in order to
> understand it
>
> Concepts have nothing to do with semantic networks.
> AI doesn’t handle concepts.
> That is the challenge for AGI.
> The form of concepts is graphics.
> The referents of concepts are infinite realms.
>
> What are you saying that is relevant to this, or that can challenge this –
> from any evidence?


