Ron,

Sounds like you're calling for something not a million miles from what I've been calling for.

[Obviously I'm a techno-idiot and so have only been very loosely, philosophically outlining what I mean, but nevertheless that can be useful, because it does point in a new direction - and calls for designing a new kind of machine - which upsets everyone and turns them abusive, because they don't want to have to think about that; they just want to work with the machines they've already got, even if those machines don't work.]

Essentially, you may be saying, like me, that creative analogy - which is the absolute heart of creativity, and what AGI has to achieve - works basically by what you might call "physical analogy."

How did you see, or come up with, the idea that "B" is like "13"?

And since those two figures are rather close, let's take some that are further apart.

How is it that when you look at what is actually a bicycle seat and handlebars:

http://cn.cl2000.com/history/beida/ysts/image18/jpg/02.jpg

you are able to see a bull's head, something like:

http://www.chu.cam.ac.uk/images/departments/classics_bulls_head_rhyton.jpg

How is it that you can look at an ink blot

http://www.bbc.co.uk/schools/victorians/images/school/learning/slideshow/ink_blot.jpg

and see a fish?

Or a cloud:

http://lintrups.dk/images/Diverse/hiroshima_mushroom_cloud.jpg

and see a mushroom?

What you're doing is what lies at the heart of arguably all creative analogy and metaphor. It's what enables you to understand the verbal metaphor "mushroom cloud"; to understand the words "the clouds cried", because you can see that raindrops are like tears; to see that someone is "bull-headed" from the way they set their head rigidly, like a bull does, and proceed in a headlong charge, like a bull does; or to see that someone "eats like a pig" from the way they stick their head into a plate and chomp away, compared with the way a pig sticks its head into a trough and eats.

What we can say with great confidence is that this cognitive process does not, and cannot, work by any digital analysis - because that relies on dissecting things into their PARTS. You can't dissect the *features/parts* of a cloud and the features/parts of a mushroom and observe their similarity. Ditto with all the other examples. You can't dissect the parts of that ink blot and dissect the parts of a fish, and observe a likeness.

[You can't, therefore, form two verbal networks designating their features/parts - pace Gentner, Minsky et al. - and recognize the similarity of the objects by comparing those feature networks.]

Why? Because THERE ARE NO SIMILARITIES BETWEEN THE PARTS of the objects.
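
To make that concrete in the crudest possible way, here's a toy Python sketch with feature lists I've simply invented for illustration (they're assumptions, not anyone's real representation of these objects). Compare the two objects part by part and the measured similarity is essentially nothing:

# Toy illustration only: the feature lists are invented for this example.
cloud_parts = {"water vapour", "debris", "fireball", "rising column", "fluffy edges"}
mushroom_parts = {"cap", "gills", "stalk", "spores", "flesh"}

def part_overlap(a, b):
    """Part-by-part similarity: shared parts divided by all parts (Jaccard)."""
    return len(a & b) / len(a | b)

print(part_overlap(cloud_parts, mushroom_parts))  # 0.0 - the parts share nothing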

If you look, the only similarities are between the wholes - or forms of the objects - and the LOOSE FORMS at that - and especially, though not exclusively, their LOOSE OUTLINES. It's only the loose outline of that H-bomb cloud that is like the loose outline of a mushroom, and ditto only the loose outline of that blot that is like the loose outline of a fish, or the loose outline of a bicycle seat that is like the loose outline of a bull's head.

Those outlines have to be loose, because if you look too closely at them again, the similarity vanishes. That cloud has fluffy edges to its outline; the mushroom has none. The bicycle seat is triangular-ish while the bull's head is oval-ish - quite a difference.

All this will bother the hell out of any digital, analytic program or machine trying to compare the parts/features of those objects. But it doesn't bother your brain at all. You can see the actually rather-distant similarities with great facility and speed because you're working with the wholes.

So how is it done mechanically?

"Physical analogy." Your brain, or any comparable machine if it's to be successful, has to work with the wholes rather than the parts. It has, in effect, to physically overlay the outlines of those objects. Literally, physically (and not metaphorically as per current AI terminology) map their maps onto each other and see if they loosely fit.

And the brain also has to make those maps/outlines FLUID and FLEXIBLE - it treats them, I suggest, as if they were outlines seen through water - highly squishy and squishable. And it's quite prepared to SCULPT those outlines and chop off bits here, and add bits there. It's only looking for a loose, sometimes extremely loose fit.
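
As a very rough computational caricature of this (a sketch under invented assumptions, not a claim about how the brain actually does it): rasterise the two shapes as whole silhouettes, blur them so that only their loose forms survive, then overlay them and score the fit. The shapes, blur width and scoring below are all made up for illustration:

import numpy as np
from scipy.ndimage import gaussian_filter

def silhouette(inside, size=128):
    """Rasterise a whole shape, given as a point-membership test on [-1, 1]^2."""
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    return inside(xs, ys).astype(float)

def loose_fit(a, b, blur=6.0):
    """Blur both silhouettes (loosening their outlines), overlay them, and
    score the fit with a normalised overlap between 0 and 1."""
    a_soft, b_soft = gaussian_filter(a, blur), gaussian_filter(b, blur)
    return float((a_soft * b_soft).sum()
                 / np.sqrt((a_soft ** 2).sum() * (b_soft ** 2).sum()))

# Two deliberately different shapes: a ragged, wobbly "cloud" blob and a
# smooth "mushroom cap" ellipse.
cloud = silhouette(lambda x, y: x**2 + (1.4 * y)**2
                   + 0.15 * np.sin(9 * np.arctan2(y, x)) < 0.6)
cap = silhouette(lambda x, y: x**2 + (1.6 * y)**2 < 0.55)

print(loose_fit(cloud, cap))            # loosened outlines: a close fit
print(loose_fit(cloud, cap, blur=0.5))  # looked at sharply: the score drops

The blur is the whole point - looked at sharply, the two silhouettes fit less well; loosened, they fit closely.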

Thus you can look at the highly intricate and serrated outline of the map of Italy and the relatively simple and smooth outline of a boot - which are actually radically different in many respects - and nevertheless see the rough similarity. Your brain has squished those shapes considerably to match each other.

Physical analogy. Free-form matching.

(We could also use the very common term for this kind of thinking - which here has a literal, physical truth - this is literally "figurative thinking": working with the figures, the outlines, of objects.)

Remember, the brain, neuroscience tells us, is full of flexible maps. They are fundamental to its workings. When you make any movement, for example - reach out your hand to grasp something and shape it like a cup - you form a flexible, fluid cup shape/outline, altering/shrinking/expanding it, maybe even losing or adding a finger or thumb, as you get nearer the object and adjust to its precise outlines. Fluid maps/outlines are essential to direct the operation of a robotic body in the real world, which has to continually and flexibly adjust/squish its shape.

But no computer is capable of this yet [AFAIK]. They can't 1. overlap forms/outlines directly. And they can't 2. "SQUISH" those shapes - alter them fluidly and flexibly, truly plastically. A computer can obviously achieve somewhat similar morphing effects by mathematical means - but it can only proceed formulaically, using formulas based on the *parts*. And this has to be free-form matching of the wholes/*outlines* - as if you were squishing plasticine.
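
For contrast, here is roughly what that formulaic morphing looks like - a minimal sketch (the circle and ellipse are invented examples) that resamples two closed outlines to the same number of points and blends them by a fixed point-wise formula. The deformation is driven entirely by arithmetic on the parts (the sampled points), not by any free-form squishing of the whole:

import numpy as np

def resample_outline(points, n=64):
    """Resample a closed outline to n points, evenly spaced by arc length."""
    pts = np.vstack([points, points[:1]])               # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
    t = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length
    target = np.linspace(0.0, t[-1], n, endpoint=False)
    return np.column_stack([np.interp(target, t, pts[:, 0]),
                            np.interp(target, t, pts[:, 1])])

def formulaic_morph(outline_a, outline_b, alpha):
    """Blend outline_a toward outline_b by the fixed formula (1-alpha)*A + alpha*B."""
    a, b = resample_outline(outline_a), resample_outline(outline_b)
    return (1.0 - alpha) * a + alpha * b

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = np.column_stack([1.6 * np.cos(theta), 0.6 * np.sin(theta)])

halfway = formulaic_morph(circle, ellipse, 0.5)  # an in-between outline
print(halfway.shape)                             # (64, 2)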

So we have to start designing a machine - possibly some new version of the current machine - that can do these things.

[Is this, Ron, anything like what you mean?]

P.S. Note that physical analogy - free-form matching - is, with great probability, not only at the heart of creative analogy and metaphor, but also of visual object recognition. The brain has to continually compare objects of radically different shapes to visually recognize them as being of the same kind - as being basically, say, the same form of squashed "ball", or squished "face", or very diversely squished "amoeba", or, well, endlessly squished "plasticine" itself.




----- Original Message ----- From: "Ronald C. Blue" <ronb...@u2ai.us>
To: <agi@v2.listbox.com>
Sent: Sunday, January 11, 2009 9:29 AM
Subject: Re: [agi] Identity & abstraction


I would agree that the ABC example is an analogy. Generally speaking, I am quickly successful in explaining how you can model the brain in electronics to people with backgrounds in analog electronics. The historical efforts in this direction, associationism and opponent process, go all the way back to Aristotle. Interesting observations reveal the opponent-process nature of color. For example, stare at the picture of the American flag
http://www.brainviews.com/abFiles/IntOpponent.htm
in a dimly lighted room for 45 seconds, then look at any gray area in the room and you will see the colors switch. The opponent-process pairs for color are blue-yellow, red-green, and black-white.
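
A minimal sketch of one common, simplified formulation of those three channels - purely illustrative, not the opponent-process circuitry itself - re-expresses an RGB value as black-white, red-green, and blue-yellow signals:

import numpy as np

def opponent_channels(rgb):
    """Map RGB values in [0, 1] to (black-white, red-green, blue-yellow) signals."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    black_white = (r + g + b) / 3.0  # luminance axis
    red_green = r - g                # positive toward red, negative toward green
    blue_yellow = b - (r + g) / 2.0  # positive toward blue, negative toward yellow
    return np.stack([black_white, red_green, blue_yellow], axis=-1)

# A green pixel sits at the "green" pole of the red-green channel; fatigue that
# pole and the after-image rebounds toward red, as in the flag demonstration.
print(opponent_channels(np.array([0.0, 1.0, 0.0])))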

The list of people who played with an opponent-process model of learning reads like a who's who of psychology, including Pavlov. They all dropped the model because it was not simple. Einstein said to make your theories as simple as necessary to explain the data. Simple does not mean simple enough for the average American to understand.

Illusions are clues to what the brain is doing. What the brain is doing can be modeled in an AGI machine. Even computers can be programmed to experience illusions, or violations of their programmed expectations. Example:
Marshall, J.A. & Alley, R.K. (1993, October). A Self-Organizing Neural Network that Learns to Detect and Represent Visual Depth from Occlusion Events. In Bowyer, K.W. & Hall, L. (Eds.), Proceedings of the AAAI Fall Symposium on Machine Learning and Computer Vision, Research Triangle, NC, pp. 70-74.

Your stated goal is the development of an AGI machine. I am telling you that, in my opinion, it cannot be done in a programming environment, but it can be done using opponent-process circuits. We cannot stop a child, open his head, and list his programs for our review and simple understanding. Sadly, this is also true for analogy phase state opponent processing machines. Children are not controllable, and neither are analogy phase state opponent processing machines. The current goal is developing a programming control system to interface with an analogy phase state opponent processing machine. After spending $200,000, we have been stuck at this problem level for 18 years. We had the AGI but no interface to traditional computations. At this time, the current progress suggests that the two procedures can be made to cooperate with each other.

You now have enough information to start your thinking.

Ron
http://u2ai.us







