Hi Mike, I'm going to make one last try and then punt again.
Did you look up Robert Fulghum? Did you get the fact that once you generalize your idea enough, we're all in complete agreement -- but that *a lot* of your specific facts are just plain wrong (to wit -- the phrase "vision isn't just saccade-ing. The retina does also register whole images, even if in varying degrees of fidelity" is nonsensical if you truly understand what saccading is and how the retina operates)?

You have an excellent insight with your comment "It's interesting that all languages equate "understanding" to a v. high degree with both "seeing" and "grasping" - and what you "grasp" are whole objects." -- but to a large majority of the list that is just a well-known Pinkerism (reference Steven Pinker).

I have previously declared you a troll and ignored you until your behavior moderated. Another list member is now threatening to do the same thing. Take the hint . . . . do some research (people have been giving you pointers and I've given you two) and make sure that *you* UNDERSTAND what you accuse us of "not getting" or "not doing" before making further accusations.

Mark

----- Original Message -----
From: Mike Tintner
To: agi@v2.listbox.com
Sent: Monday, March 31, 2008 11:33 AM
Subject: Re: [agi] Symbols

I was not and am not arguing that anything is impossible. By definition - for me - if the brain can do it, a computer or some kind of machine should be able to do it eventually. But you have to start by recognizing what neither you nor anyone else is doing - that an AGI must be able to see in wholes, because there is a simply vast amount of information on that level that is lost when you convert to pieces - not to mention whole realms of intelligence and creativity which work largely on that level and are not even touched upon by AI/AGI currently. And then you surely have to continue by asking: how can a machine do that? Does anyone have any technological ideas? And that is one thing I would like to see discussed.
Such discussions, regardless of results, would make people look at things from new perspectives, which has to be useful.

As for your general "been there, done that" response, that's frankly silly. We have barely begun to understand, for example, how mirror neurons work - how we process the images/shapes of other people engaged in actions, in order to understand and mirror what they're doing. It is, as Ramachandran points out, a whole new area of massive and central importance to intelligence (and AGI), with many major unsolved problems, yet you are in effect claiming to know all about it already.

Rudolf Arnheim argued: "Visual thinking calls for the ability to see visual shapes as images of the patterns of forces that underlie our existence - the functioning of minds, of bodies or machines, the structure of societies or ideas." He was ahead of his time, because this clearly relates in part to mirror neurons, and humans are clearly intuitively skilled at inferring a vast amount of info purely from shapes and forms (none of which AI or AGI have even dreamed about). I'm still trying to understand all this, as was Arnheim. If you think you've cracked it all, I'd be delighted to be enlightened.

On a lesser point, vision isn't just saccade-ing. The retina does also register whole images, even if in varying degrees of fidelity. And, certainly, the brain re-processes / "fakes" (as you put it) those images - but even then - as when it corrects all kinds of distortions that have been imposed by scientists' experimental lenses - it clearly does so in terms of wholes. It's interesting that all languages equate "understanding" to a v. high degree with both "seeing" and "grasping" - and what you "grasp" are whole objects.

MW:

>> You guys probably think this is all rather peripheral and unimportant - they don't teach this in AI courses, so it can't be important.

Please don't assume what I'm thinking. Your points are very important.
Unfortunately, they are important in the Robert Fulghum sense.

>> The problem is you don't have such a machine. A computer certainly doesn't process in the same way as the brain. It might conceivably be able to but it doesn't right now.

Right. I don't have one right now -- but the potential is there (eventually). You were arguing that things were impossible. I was arguing how to make them possible -- not about whether they currently exist.

>> Computers right now don't. They only get the jigsaw pieces.

See above point. That's why we're doing AGI.

>> Obvious as it may sound, it will be one of the most valuable things you will ever do.

Been there, done that, Robert Fulghum.

>> But if you can't see things whole, then you can't see or connect with the real world.

Disagree strenuously. This is the fundamental core of our disagreement.

>> And, in case you haven't noticed, no AGI can connect with the real world.

1) It's tough for non-existent things to do anything. 2) An AGI will connect with the world -- just not necessarily in the way that you expect.

>> In fact, there is no such thing as an AGI at the moment.

Duh.

>> And there never will be if machines can't do what the brain does - which is, first and last, and all the time, look at the world in images as wholes.

Wrong, wrong, wrong. At the lowest levels of hardware, current computers see images far more as wholes than do humans (look up saccade). The human brain then integrates the pieces into a seamless whole and FAKES your conscious mind into believing that the input came in that way. Why can't you believe that a computer can do the same thing? Why can't one input device of an AGI take a natural language description and convert it into a picture and send it to the main AGI consciousness while telling that consciousness that the picture is what it actually sees?
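[Editor's note: the "symbolic description rendered into a whole picture" move argued above can be sketched concretely with the thread's own Cafe Wall example. This is a minimal illustration, not anything posted to the list; the function name and every parameter value are assumptions chosen for the sketch.]

```python
# A minimal sketch (editor's illustration, not from the thread) of turning a
# purely symbolic description -- tile size, row count, row offset -- into a
# whole pixel image of the Cafe Wall pattern. All names/values are assumed.

def render_cafe_wall(tile=20, cols=10, rows=5, mortar=2, offset=10):
    """Render the Cafe Wall pattern from its symbolic parameters.

    Returns a 2D list of grayscale values: 0 = black tile, 255 = white
    tile, 128 = gray "mortar" line separating the rows of tiles.
    """
    width = cols * tile
    height = rows * (tile + mortar) - mortar
    img = [[128] * width for _ in range(height)]  # start as mortar gray
    for r in range(rows):
        top = r * (tile + mortar)          # first pixel row of this tile row
        shift = offset if r % 2 else 0     # alternate rows are offset
        for y in range(top, top + tile):
            for x in range(width):
                # tile color alternates black/white along the row
                tile_index = (x + shift) // tile
                img[y][x] = 0 if tile_index % 2 == 0 else 255
    return img

img = render_cafe_wall()
print(len(img), len(img[0]))  # image dimensions: 108 200
```

The illusion itself, of course, only appears when this array hits a visual system that misinterprets the offset tile edges; the description alone contains nothing but rectangles and offsets, which is exactly the point under dispute.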
----- Original Message -----
From: Mike Tintner
To: agi@v2.listbox.com
Cc: dan michaels
Sent: Monday, March 31, 2008 5:56 AM
Subject: Re: [agi] Symbols

You're saying "I can do it.." without explaining at all how. Sort of "a miracle happens here."

Crucially, you're quite right that if you have a machine that replicates the human eye and brain and how it processes the Cafe Wall illusion, then you will still see the illusion. The problem is you don't have such a machine. A computer certainly doesn't process in the same way as the brain. It might conceivably be able to, but it doesn't right now. At the moment, it can only process the world "in pieces" rather than "in wholes." The human brain has all these maps that do enable it to process the world in wholes and "get the picture" and "see the whole form." Computers right now don't. They only get the jigsaw pieces.

You're also simply ignoring the massively important point that a lot of the information expressed in a whole form isn't contained in any of the pieces. You don't actually explain how you're going to symbolically describe the Mona Lisa. Say "she has a smile on her lips"? But she doesn't! The smile only appears when your brain puts the pieces together and sees them whole - and computers can't do that, remember. And the same is true of every picture and every form. A billion computers reading trillions of webpages in seconds still won't be able to put any pieces together and see any wholes - or get any of that "missing information."

The only way for y'all to understand all this is to "see it" whole - give me any symbolic description[s] you like - a relevant chunk of program, say - that expresses, say, the eye of Derek Z in that photo, or the eye of the Mona Lisa, and let's put it beside the actual eyes. And then the difference will click. Really - try it. Obvious as it may sound, it will be one of the most valuable things you will ever do.
You guys probably think this is all rather peripheral and unimportant - they don't teach this in AI courses, so it can't be important. But if you can't see things whole, then you can't see or connect with the real world. And, in case you haven't noticed, no AGI can connect with the real world. In fact, there is no such thing as an AGI at the moment. And there never will be if machines can't do what the brain does - which is, first and last, and all the time, look at the world in images as wholes.

MW:

Mike Tintner:
> Well, guys, if the only difference between an image and, say, a symbolic - verbal or mathematical or programming - description is bandwidth, perhaps you'll be able to explain how you see the Cafe Wall illusion from a symbolic description:

Sure! The Cafe Wall illusion is a result of the interaction between (a) an image composed of four parallel horizontal lines dividing the image into five strips of alternating black and white bars, with the second and fourth strips slightly offset so as to trick the human eye into believing that the parallel lines aren't parallel, and (b) the optimizing algorithms of the human eye. I could go into enough detail to explain exactly how and why the trick works -- the fact that the eye is attempting to interpret a two-dimensional image as a three-dimensional scene -- but I think that I've made my point adequately.

> A symbolic description of the above will only describe a set of parallel lines and rectangles - and there will be no illusion.

Of course not; the illusion is a result of the image being implemented on the hardware of the human eye and brain. Unless you describe the human eye and brain, you don't get the illusion -- but you can do so easily, as I did above, and the illusion re-appears.

> Or you might try a symbolic description of the Mona Lisa, and explain to me how I will know from your description that she is smiling.
> You see if you take that image to pieces - as you must do in forming a symbolic description - there is no smile!

Huh? All I need to do is include the smile in the description. You can both take the image to pieces *AND* describe the whole at the same time.

> And perhaps you can explain to me how you will see the final picture on any fully-formed jigsaw puzzle from just the pieces at the very beginning. Take a picture to pieces - and you don't "get" the picture any more.

Wrong. Take a child's ten-piece puzzle apart and re-arrange all the pieces. It's simple enough that your mind can hold all of it at once and "get" the picture. It's only when you take it into too many pieces . . .

> Like I said, we are extremely ignorant about how images work. (I'll explain more another time - but in the meantime, maybe Vlad can explain to us how and where the information that is lost in the above examples is encoded.)

I would be extremely careful about throwing the word "we" around and assuming that everyone is just like you. Why does everyone else have to be ignorant about a subject just because you don't understand it yet? Do you understand general relativity? If not, does that suddenly mean that I don't understand it any more? How about biochemistry, physical chemistry, thermodynamics, evolution, simulated annealing, etc.?

------------------------------------------------------------------------
agi | Archives | Modify Your Subscription
------------------------------------------------------------------------
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com