[agi] Primal Sketching

2008-02-17 Thread Mike Tintner
Hope I'm not covering old ground, but I'm wondering whether any one is interested and would like to comment on Marr's idea of vision involving a primal sketch at a basic level. I'm interested, despite much ignorance here, because it links to the image schemas of Lakoff and co.. Here's an inter

Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mike Tintner
I believe I offered the beginning of a v. useful way to conceive of this whole area in an earlier post. The key concept is "inventory of the world." First of all, what is actually being talked about here is only a VERBAL/SYMBOLIC KB. One of the grand illusions of a literate culture is that

Re: [agi] would anyone want to use a commonsense KB?.. p.s.

2008-02-18 Thread Mike Tintner
I should add to the idea of our common sense knowledge inventory of the world - because my talk of objects and movements may make it all sound v. physical and external. That common sense inventory also includes a vast amount of non-verbal knowledge, paradoxically, about how we think and communi

Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mike Tintner
This raises another v. interesting dimension of KB's and why they are limited. The social dimension. You might, purely for argument's sake, be able to name a vast amount of unnamed parts of the world. But you would then have to secure social agreement for them to become practically useful. Not

[agi] A Follow-Up Question Re Vision

2008-02-20 Thread Mike Tintner
I got the impression from the recent interesting thread re vision that the surface of the retina may be especially designed so as to "re-distort" that distorted retinal image back into the proper shape we actually see. A robotics friend, however, who is very into the visual cortex, insisted th

[agi] A Follow-Up Question re Vision.. P.S.

2008-02-21 Thread Mike Tintner
This friend has now pointed out that the distortion of images handled by the visual cortex (and not just the retina) is even more marked than suggested: ".. you kind of left out what I thought was most important in my previous reply, but this page shows several links regarding this topic. Thing

Re: [agi] A Follow-Up Question re Vision.. P.S.

2008-02-21 Thread Mike Tintner
Richard, Thanks for the response. But it surely *is* still a puzzle how and indeed where that distorted image on the retina gets rectified, and it raises major questions about vision. No one, as I understand, has the answer. I am too ignorant to have a POV here - but my general experience is th

Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner
Ben: One advantage AGIs will have over humans is better methods for translating procedural to declarative knowledge, and vice versa. For us to translate "knowing how to do X" into "knowing how we do X" can be really difficult (I play piano improvisationally and by ear, and I have a hard time fi

Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner
Ben: Anyway, I agree with you that formal logical rules and inference are not the end-all of AGI and are not the right tool for handling visual imagination or motor learning. But I do think they have an important role to play even so. Just one thought here that is worth trying to express, althoug

Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner
king in principally body form not just about how to move our own body, but how other bodies do and will move - mirroring with those mirror neurons. Mike Tintner <[EMAIL PROTECTED]> wrote: The idea that an AGI can symbolically encode all the knowledge, and perform all the thinking, ne

Re: [agi] reasoning & knowledge

2008-02-26 Thread Mike Tintner
Richard: Mike Tintner wrote: No one in AGI is aiming for common sense consciousness, are they? Inasmuch as I understand what you mean by that, yes of course. Both common sense and consciousness. As Ben said, it's something like "multisensory integrative consciousness" - i

Re: [agi] reasoning & knowledge

2008-02-27 Thread Mike Tintner
Ben: MT:>> You guys seem to think this - true common sense consciousness - can all be cracked in a year or two. I think there's probably a lot of good reasons - and therefore major creative problems - why it took a billion years of evolution to achieve. Ben: I'm not trying to emulate th

Re: [agi] reasoning & knowledge

2008-02-27 Thread Mike Tintner
Ben: What evidence do you have that this [body thinking] is the "largest part" ... it does not feel at all that way to me, as a subjectively-experiencing human; and I know of no evidence in this regard. Like I said, I'm at the start here - and this is going against thousands of years of literat

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Mike Tintner
WP: > I'm going to try and elucidate my approach to building an intelligent system, in a round about fashion. This is the problem I am trying to solve. Marks for at least trying to identify an AGI problem. I can't recall anyone else doing so - which, to repeat, I think is appalling. But I d

Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-28 Thread Mike Tintner
Er, just to clarify. You guys have, or know of, AI systems which run continuous movies of the world, analysing and responding to those movies with all the relevant senses, as discussed below, and then to the world beyond those movies, in real time (or any time, for that matter)? Mike

Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-28 Thread Mike Tintner
is that in discussing this whole area, both in AI, philosophy & cog sci/psych, people tend to forget that consciousness is a continuously moving picture with the other senses continuous too, and tend to think, even if only implicitly, in terms of stills. Mike Tintner wrote: Er, just t

Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Mike Tintner
ent visual systems might make that connection. And I would assert - and am increasingly confident - that the grammar of language - how we put words together in whatever form - is based on cutting together internal *movies* in our head - not still images, but movies. They don't teach movie

Re: [agi] reasoning & knowledge

2008-02-29 Thread Mike Tintner
Robert: I think it would be more accurate to say that technological meme evolution was caused by the biological evolution, rather than being the extension of it, since they are in fact two quite different evolutionary systems, with different kinds of populations/survival conditions. I would sa

Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-29 Thread Mike Tintner
Vlad:> Don't you know about change blindness and the like? You don't actually see all these details, it's delusional. You only get the gist of the scene, according to current context that forms the focus of your attention. Amount of information you extract from watching a movie is not dramatica

Re: [agi] Why do fools fall in love? [WAS Re: Common Sense Consciousness ]

2008-02-29 Thread Mike Tintner
ies, and presumably all the visual arts. Strange too that the brain should spend most of the night then creating its own movies and insists on seeing events when according to you guys, it could just much more quickly and less effortfully look at their symbolic forms. You're right, this is fun.

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread Mike Tintner
detailed, impressive timetable for how long it'll take them to produce such an idea, they just will never produce one. Frankly, they're too scared). Mike Tintner <[EMAIL PROTECTED]> wrote: You must first define its existing skills, then define the new challenge with some d

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread Mike Tintner
it doesn't work this way. Because it can't work that way. An AGI working in unknown territory will have to make mistakes. Andi Quoting Mike Tintner <[EMAIL PROTECTED]>: Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar

Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner
on this list who dream of creating an AGI, no? -Original Message- From: Mike Tintner [mailto:[EMAIL PROTECTED] Sent: March-02-08 5:36 AM To: agi@v2.listbox.com Subject: Re: [agi] Thought experiment on informationally limited systems Jeez, Will, the point of Artificial General Intelligence i

Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Mike Tintner
YKY: the way our language builds up new ideas seems to be very complex, and it makes natural language a bad knowledge representation for AGI. An even more complex example: "spread the jam with a knife" "draw a circle with a knife" "cut the cake with a knife" "rape the girl with a knife" "stop

Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Mike Tintner
You seem to grasp my point, and then let it slip through your fingers - although perhaps I need to spell it out. Human concepts (of which language is the most prominent example) are fundamentally open-ended - and MEANT to be. Philosophers and rational thinkers of all kinds have fought against t

Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner
David: >I was specifically referring to your comment ending in "BY ITSELF". Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar situation and domain BY ITSELF. I believe this statement is just plain incorrect. David, I find that extraordi

Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner
Will: Is generalising a skill logically the first thing that you need to make an AGI? Nope, the means and sufficient architecture to acquire skills and competencies are more useful early on in an agi development. Ah, you see, that's where I absolutely disagree, and a good part of why I'm hammering

Re: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread Mike Tintner
Dead right (in an ambiguous way :) ) Basically an AGI without open-ended concepts will never live in the real world. I should add that I don't believe early, true AGI's *will* be anywhere near capable of natural language. All they will need is one or more systems of open-ended concepts. Emoti

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mike Tintner
Vlad: How to survive a zombie attack? I really like that thought :). You're right: we should seriously consider that possibility. But personally, I don't think we need to be afraid ... I'm sure they will be friendly zombies...

Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Mike Tintner
Pei: ScienceDaily (Mar. 4, 2008) — iCub, a one metre-high baby robot which will be used to study how a robot could quickly pick up language skills, will be available next year. http://www.sciencedaily.com/releases/2008/02/080229141032.htm Thanks - but it looks like here we go again: "now, within

Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Mike Tintner
Bob: http://streebgreebling.blogspot.com/2008/03/running-before-you-can-walk.html The "running before you walk" analogy is interesting. It gives rise to the seed of an idea. Basically, a vast amount of what is happening and has happened in AGI and robotics is ridiculous - a whole series o

[agi] Flies & Neural Networks

2008-03-15 Thread Mike Tintner
[Someone please explain this to me]: http://www.eurekalert.org/pub_releases/2008-03/danl-loa030708.php Language of a fly proves surprising Insect's sensory data tells a new story about neural networks LOS ALAMOS, New Mexico, March 10, 2008-A group of researchers has developed a novel way to view t

Re: [agi] Flies & Neural Networks

2008-03-16 Thread Mike Tintner
William, This is v. helpful. It sounds like you're saying that neural networks have treated what could actually be different kinds of music, with many of the features of music, such as length of note and rhythm, purely quantitatively for just, say, the number of beats. In which case we could b

Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner
Charles: >> I don't think a General Intelligence could be built entirely out of narrow AI components, but it might well be a relatively trivial add-on. Just consider how much of human intelligence is demonstrably "narrow AI" (well, not artificial, but you know what I mean). Object recognition, e

Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner
John, I'm developing this argument more fully elsewhere, so I'll just give a partial gist. What I'm saying - and I stand to be corrected - is that I suspect that literally no one in AI and AGI (and perhaps philosophy) present or past understands the nature of the tools they are using. All th

Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner
Ben: It's not just that we can CHOOSE the meanings of concepts from a fixed menu of possibilities ... we CREATE the meanings of concepts as we use them ... this is how and why concept-meanings continually change over time in individual minds and in cultures... Yes. Good point. Generality/open-end

Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner
d agent - hence the use of the word "handle" in this construction. -Steve Stephen L. Reed Artificial Intelligence Researcher http://texai.org/blog http://texai.org 3008 Oak Crest Ave. Austin, Texas, USA 78704 512.791.7860 - Original Message From: Mike Ti

Re: [agi] Microsoft Launches Singularity

2008-03-28 Thread Mike Tintner
ecise geometry/ geography) of thought - i.e. a system of mental image schemas - becomes apparent. - Original Message - From: Stephen Reed To: agi@v2.listbox.com Sent: Friday, March 28, 2008 4:30 AM Subject: Re: [agi] Microsoft Launches Singularity - Original Message

Re: [agi] Instead of an AGI Textbook

2008-03-29 Thread Mike Tintner
Robert/Ben: In fact, I would suggest that AGI researchers start to distinguish themselves from narrow AI by replacing the over-ambiguous concepts from AI, one by one. For example: knowledge representation = world model; learning = world model creation; reasoning = world model simulation; goal

Re: [agi] Intelligence: a pattern discovery algorithm of scalable complexity.

2008-03-30 Thread Mike Tintner
Durk, Absolutely right about the need for what is essentially an imaginative level of mind. But wrong in thinking: "Vision may be classified under "Narrow" AI" You seem to be treating this extra "audiovisual perception layer" as a purely passive layer. The latest psychology & philosophy recogn

Re: [agi] Symbols

2008-03-30 Thread Mike Tintner
In this & surrounding discussions, everyone seems deeply confused - & it's nothing personal, so is our entire culture - about the difference between SYMBOLS: 1. "Derek Zahn" "curly hair" "big jaw" "intelligent eyes" etc., etc. and IMAGES: 2. http://robot-club.com/teamtoad/nerc/h2-derek

Re: [agi] Symbols

2008-03-30 Thread Mike Tintner
MW: MT:>> Why are images almost always more powerful than the corresponding symbols? Why do they communicate so much faster? Um . . . . dude . . . . it's just a bandwidth thing. Vlad: Because of higher bandwidth? Well, guys, if the only difference between an image and, say, a symbol

Re: [agi] Symbols

2008-03-31 Thread Mike Tintner
e important. But if you can't see things whole, then you can't see or connect with the real world. And, in case you haven't noticed, no AGI can connect with the real world. In fact, there is no such thing as an AGI at the moment. And there never will be if machines can't do w

Re: [agi] Symbols

2008-03-31 Thread Mike Tintner
FAKES your conscious mind into believing that the input came in that way. Why can't you believe that a computer can do the same thing? Why can't one input device of an AGI take a natural language description and convert it into a picture and send it to the main AGI consciousness whi

Re: [agi] Symbols

2008-03-31 Thread Mike Tintner
Richard: What *exactly* do you mean by "an AGI must be able to see in wholes"? My point is that you cannot make criticisms at that level of vagueness. I'll give the detailed explanation that I think you're looking for, within a few days. P.S. Maybe then you'll be able to return the favour,

Re: [agi] Symbols

2008-03-31 Thread Mike Tintner
Richard: I already did publish a paper doing exactly that ... haven't you read it? Yep. And I'm still mystified. I should have added that I have a vague idea of what you mean by complex system and its newness, but no idea of why it will solve any unsolved problem of AGI, and absolutely no id

Re: [agi] The resource allocation problem

2008-04-01 Thread Mike Tintner
Charles H: Due to this, the resource management should not be algorithmic, but free to adapt to the amount of resources at hand. I'm intent on an economic solution to the problem, where each activity is an economic actor. The idea of economics is v. interesting & important. I think - & I'm
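
A minimal sketch, in Python, of the market-style idea Charles describes - each activity acting as an economic actor that bids for a share of a fixed resource budget. The Activity class, the urgency/expected-value bid heuristic, and the proportional-allocation rule are illustrative assumptions, not anything specified in the thread.

    # Toy market-based allocator: activities bid for a fixed budget of cycles
    # and receive shares in proportion to their bids (all names hypothetical).
    class Activity:
        def __init__(self, name, urgency, expected_value):
            self.name = name
            self.urgency = urgency                # how pressing the activity is (0..1)
            self.expected_value = expected_value  # estimated payoff of running it

        def bid(self):
            # Each actor decides for itself how much it is willing to "pay".
            return self.urgency * self.expected_value

    def allocate(activities, total_cycles):
        """Split total_cycles among activities in proportion to their bids."""
        bids = {a.name: a.bid() for a in activities}
        total_bid = sum(bids.values()) or 1.0
        return {name: total_cycles * b / total_bid for name, b in bids.items()}

    acts = [Activity("vision", 0.9, 0.6),
            Activity("planning", 0.4, 0.9),
            Activity("housekeeping", 0.1, 0.2)]
    print(allocate(acts, 1000))  # vision gets the largest share here

Nothing central decides what matters; the split simply follows whatever the actors currently bid, and re-adapts whenever the budget or the bids change - which seems to be the "free to adapt" property Charles is after.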

Re: [agi] Symbols

2008-04-02 Thread Mike Tintner
ideas. P.S. An advertising/marketing no-no - never use s.o. else's title for your site. It doesn't bespeak originality. Mike Tintner wrote: Richard: I already did publish a paper doing exactly that ... haven't you read it? Yep. And I'm still mystified. I should have added t

Re: [agi] Symbols

2008-04-02 Thread Mike Tintner
meone saying: "What's my idea? I'll tell you - I'm going to get the best brains [or computers] that money can buy - loads & loads of them. And get them to come up with an idea. Pretty original, huh?" That's not an idea, Richard. *You* have to come up with that..

Re: [agi] Symbols

2008-04-02 Thread Mike Tintner
"What's my idea? I'll tell you - I'm going to get the best brains [or computers] that money can buy - loads & loads of them. And get them to come up with an idea. Pretty original, huh?" That's not an idea, Richard. *You* have to come up with that.. P.S. Yes I did

[agi] Real and Virtual Puppies

2008-04-02 Thread Mike Tintner
http://www.sciencedaily.com/releases/2008/03/080329122121.htm http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89632 New Breed Of Cognitive Robot Is A Lot Like A Puppy ScienceDaily (Mar. 31, 2008) - Designers of artificial cognitive systems have te

Re: [agi] Symbols

2008-04-03 Thread Mike Tintner
t made to his audience to have a clear demo of his system. If you want to send me something, I'll gladly look at it & reply offline - although I'm real busy at the mo. answering *your* last question! Best Mike Tintner wrote: Richard, I can't swear that I did read it

[agi] How Bodies of Knowledge Grow

2008-04-09 Thread Mike Tintner
I want to return to what seems to me the high-school-naive idea of how an AGI's or any body of knowledge can and/or does grow - i.e. linearly, mathematically and logically. Correct me, but I haven't seen any awareness in AI of the huge difficulties that result from the problem of: how do you t

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Mike Tintner
MW/MT: Correct me, but I haven't seen any awareness in AI of the huge difficulties that result from the problem of: how do you test acquired knowledge? MW: You're missing seeing it. It's generally phrased as "converting data to knowledge" or "concept formulation" and it's currently generally

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Mike Tintner
My broad point is that there is only one way to test knowledge ultimately - physically. Science demands physical evidence for everything. It then has in effect a graded system of veracity (although there is no formalised system). The truest knowledge comes from direct physical observation an

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Mike Tintner
Richard: the idea that "perception is [the] fairly passive reception of impressions..." is so old and out of date that if you pick up a textbook on cognitive psychology printed 30 years ago you will find it dismissed as wrong. This is the issue of top-down vs bottom-up processing. No it isn't. You'

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Mike Tintner
Richard: Now, if what you *meant* to talk about was links between action and perception, all well and good, but I was just addressing the above comment of yours. I'm certainly not reiterating an ancient debate. This has been from the start an exploratory thread. Prinz summarises fairly well wha

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Mike Tintner
Richard: Personally, I think that embodiment makes the development process vastly easier, but this black and white declaration of IMPOSSIBLE! that you shout seems to go too far. Well, that's the point of discussing this - yes, the culture still allows your position. But the new cog sci developme

Re: [agi] How Bodies of Knowledge Grow...P.S.

2008-04-10 Thread Mike Tintner
Richard, Just an addendum to my question - I'm quite happy to take just one disembodied subject area. But - and this is an interesting point - since we're talking A*General*I - there should really be at least two.

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Mike Tintner
MW: I believe that I was also quite clear with my follow-on comment of "a cart before the horse problem. Once we know how to acquire and store knowledge, then we can develop metrics for testing it -- but, for now, it's too early to go after the problem." as well. You're basically agreeing wit

Re: [agi] Big Dog

2008-04-10 Thread Mike Tintner
Impressive. Especially their Rhex robot - v. resilient in v. different terrains: http://www.youtube.com/watch?v=wIuRVr8z_WE&feature=related Peruse the video: http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related Of course, they are only showing the best stuff. And I am sure there is ple

Re: [agi] Big Dog

2008-04-11 Thread Mike Tintner
Brad: What's really impressive is how "natural" the leg movements are. So natural, I wondered whether it wasn't a hoax with real people in there.

Re: [agi] Comments from a lurker...

2008-04-12 Thread Mike Tintner
Steve: If you've got a messy real-world problem, you know little; if you have an algorithm giving the solution, you know all. This is the bit where, like most, you skip over the nature of AGI - messy real-world problems. What you're saying is: "hey if you've got a messy problem, it's great, nay

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Mike Tintner
Pei: I believe AGI is basically a theoretical problem, which will be solved by a single person or a small group, with little funding. How do you define that problem?

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Mike Tintner
Pei: I don't really want a big gang at now (that will only waste the time of mine and the others), but a small-but-good gang, plus more time for myself --- which means less group debates, I guess. ;-) Alternatively, you could open your problems for group discussion & think-tanking... I'm surp

[agi] Science 2.0

2008-04-20 Thread Mike Tintner
[Sci Am] May 08 The first generation of World Wide Web capabilities rapidly transformed retailing and information search. More recent attributes such as blogging, tagging and social networking, dubbed Web 2.0, have just as quickly expanded people's ability not just to consume online information b

[agi] Concepts - Cog Sci/AI vs Cog Neurosci

2008-04-20 Thread Mike Tintner
Current Directions in Psychological Science - April 2008 - In Press http://www.psychologicalscience.org/journals/cd/17_2_inpress/Barsalou_completed.pdf THE DOMINANT THEORY IN COGNITIVE SCIENCE Across diverse areas of psychology, computer science, linguistics, and philosophy, the dominant accou

Re: [agi] Concepts - Cog Sci/AI vs Cog Neurosci

2008-04-20 Thread Mike Tintner
gence by processing holistic images. Stephen L. Reed Artificial Intelligence Researcher http://texai.org/blog http://texai.org 3008 Oak Crest Ave. Austin, Texas, USA 78704 512.791.7860 - Original Message From: Mike Tintner <[EMAIL PROTECTED]> To: agi@v2.listbox

Re: [agi] Concepts - Cog Sci/AI vs Cog Neurosci

2008-04-20 Thread Mike Tintner
orth. The brain does not achieve its powerful forms of intelligence by processing holistic images. Stephen L. Reed Artificial Intelligence Researcher http://texai.org/blog http://texai.org 3008 Oak Crest Ave. Austin, Texas, USA 78704 512.791.7860 - Original Message From:

Re: [agi] Other AGI-like communities

2008-04-23 Thread Mike Tintner
Ben/Joshua: How do you think the AI and AGI fields relate to the embodied & grounded cognition movements in cog. sci? My impression is that the majority of people here (excluding you) still have only limited awareness of them - & are still operating in total & totally doomed defiance of their

[agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-23 Thread Mike Tintner
I think one can now present a convincing case why any symbolic/linguistic approach to AGI that is not backed by imaginative simulation simply will not work. For example, any attempt to build an AGI with a purely symbolic database of knowledge mined from the Net or other texts is doomed. Th

Re: [agi] Why Symbolic Representation without Imaginative Simulation Won't Work

2008-04-23 Thread Mike Tintner
Abram, Both to-the-point responses. One: how much, you're asking, are statements about movement central to language? Extremely central. That's precisely why we have this core "general activity/movement language" that we all share - all those very basic movement words - we use them so often. H

Re: [agi] Why Symbolic Representation P.S.

2008-04-23 Thread Mike Tintner
Abram, Just to illustrate further, here are the opening lines of today's Times sports report on a football match [Liverpool v Chelsea]. How on earth could this be understood without massive imaginative simulation? [Stephen?] And without mainly imaginative memories of football matches? "John Arn

Re: [agi] Why Symbolic Representation P.S.

2008-04-23 Thread Mike Tintner
in the same way Cyc did. -Steve Stephen L. Reed Artificial Intelligence Researcher http://texai.org/blog http://texai.org 3008 Oak Crest Ave. Austin, Texas, USA 78704 512.791.7860 - Original Message From: Mike Tintner <[EMAIL PROTECTED]> To: agi@v2.list

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Mike Tintner
Stephen: Mike, have you given any thought to how deaf and blind humans become mentally competent? Certainly. By using their touch, smell, kinaesthetic and the other sensorimotor sensations of their own body to get to know the world. Blind people can draw - they can draw outlines of object

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Mike Tintner
ing back where we might claim to "understand" text w/o sensory experience of its subject matter. So my question is: what's the difference? (Unless it really is just holding back those claims of text understanding w/o real-world sensory data-- which is a fine point.) On Wed, Apr

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Mike Tintner
Vlad: I agree that some kind of simulation is necessary, probably something equivalent on high level to a 3D vector sketch of the events developing in time, containing actors, where necessary structural schemes of their bodies interacting with structure of the scene, etc. The recurrent, but u
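
A minimal sketch, assuming only what Vlad describes, of how such a time-indexed scene simulation might be held as data: a sequence of frames, each containing actors with rough positions and a crude structural scheme of their bodies. Every class name and field below is a hypothetical illustration, not a design from the thread.

    # A bare-bones "3D vector sketch of events developing in time":
    # timestamped frames holding actors with coarse positions and a crude
    # structural scheme of named parts.  All names are illustrative.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Actor:
        name: str
        position: Vec3
        parts: Dict[str, Vec3] = field(default_factory=dict)  # e.g. {"hand": offset}

    @dataclass
    class Frame:
        time: float                                   # seconds into the episode
        actors: List[Actor] = field(default_factory=list)

    @dataclass
    class SceneSketch:
        frames: List[Frame] = field(default_factory=list)

        def at(self, t: float) -> Frame:
            """Return the latest frame at or before time t."""
            eligible = [f for f in self.frames if f.time <= t]
            return max(eligible, key=lambda f: f.time) if eligible else self.frames[0]

    sketch = SceneSketch(frames=[
        Frame(0.0, [Actor("person", (0, 0, 0), {"hand": (0.3, 1.2, 0)}),
                    Actor("ball", (2, 0, 0))]),
        Frame(1.0, [Actor("person", (1, 0, 0), {"hand": (0.4, 1.1, 0)}),
                    Actor("ball", (1.2, 0.5, 0))]),
    ])
    print(sketch.at(0.5).actors[1].position)  # where the ball was early in the event

Replaying the frames in order gives the "movie"; inspecting actors' parts within a frame gives the structural interaction with the scene that Vlad mentions.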

Re: [agi] Random Thoughts on Thinking...

2008-04-24 Thread Mike Tintner
Steve: What is a novel solution?! Since THIS question seems to be driving much of the current AGI efforts, I think that this should be completely wrung out. My program will identify the parts of the problem that ARE known and direct effort to the "missing pieces". You're right that creativity, smal

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Mike Tintner
MW: I see all your references are reinforcing the need for grounding and some showing how grounding *can* be accomplished by images (among many other methods :-), but I have yet to find any of your references clearly saying all meanings must be grounded BY IMAGES. That was the basis for my last

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Mike Tintner
between "metric" and "semantic" approaches to vision processing. Comments? Bob: Mike Tintner >> The recurrent, but underlying question in many related discussions here is whether you, (& Bob & a & linas), think a visual scene - let's say some people

Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Mike Tintner
MT: http://honolulu.hawaii.edu/distance/sci122/Programs/p3/Rorschach.gif (Oh - and a, linas, Bob, Mark, et al - can we agree that there is no way for maths to process that image, period?) Mark: No. I strongly disagree with your assertion. What you believe you are processing (w)holistically can

Re: [agi] Random Thoughts on Thinking...

2008-04-26 Thread Mike Tintner
Just getting a basically adaptive program that pace Ben's could develop something like hide-and-seek independently, after learning to fetch, is hard enough - or a maze-running creature that could, say, learn to climb over maze walls and not just run round them.. Mike, On 4/24/08,

Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Mike Tintner
BillK: MT:>> So what you must tell me is how your or any geometrical system of analysis is going to be able to take a rorschach and come up similarly with a recognizable object or creature. Bear in mind, your system will be given no initial clues as to what objects or creatures are suitable as

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Mike Tintner
Matthias: a state description could be: ...I am in a kitchen. The door is open. It has two windows. There is a sink. And three cupboards. Two chairs. A fly is on the right window. The sun is shining. The color of the chair is... etc. etc. ..
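
A minimal sketch of how a state description like Matthias's kitchen example might be written down symbolically, as a flat set of predicate-style propositions. The predicate vocabulary and tuple format are assumptions for illustration only.

    # Matthias's kitchen state as (predicate, args...) propositions.
    # The vocabulary here is a hypothetical illustration.
    kitchen_state = {
        ("in", "self", "kitchen"),
        ("open", "door"),
        ("count", "window", 2),
        ("present", "sink"),
        ("count", "cupboard", 3),
        ("count", "chair", 2),
        ("on", "fly", "right_window"),
        ("shining", "sun"),
    }

    def holds(state, *proposition):
        """Check whether a proposition is part of the state description."""
        return tuple(proposition) in state

    print(holds(kitchen_state, "open", "door"))        # True
    print(holds(kitchen_state, "on", "fly", "table"))  # False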

Re: [agi] An interesting project on embodied AGI

2008-04-28 Thread Mike Tintner
Bob: I'm not totally convinced that having a high number of degrees of freedom is actually necessary for the development of intelligence. Of greater importance is the sensory capability, and the ways in which that data is processed. A bird's beak is a far less elaborate tool than a human hand or

Re: [agi] An interesting project on embodied AGI

2008-04-29 Thread Mike Tintner
Bob: Particularly I'd be interested in having the robot learn a model of its own body kinematics - the beginnings of a sense of self - based on data mining its sensory data and also using experimental movements to confirm or refute hypotheses, which might to a naive observer look like "play". Cor

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread Mike Tintner
Russell, This is a definite start and I'm just trying to put together a reasoned thesis on this area. You're absolutely right that this is essential to understanding AGI - General Intelligence - and literally no one has more than tiny fragments of understanding here, either in AI/AGI or

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread Mike Tintner
Moving on from my previous post, the key distinction between the literate mentality and the new multimediate mentality is between PRE-SEMIOTIC and SEMIOTIC. The presemiotic person starts from the POV of his specialist sign system and medium, when thinking about solving particular problems,

Re: [agi] How general can be and should be AGI?

2008-04-29 Thread Mike Tintner
igence is viewed without distinguishing "mature" from fledgling. I'm interested in the "minimal" system. I consider it my good fortune to have a good seat to observe historic events - I appreciate the project, this list, and its contributors. Mike Tintner

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread Mike Tintner
ds work is the cross-modal interaction, and understanding the details of how the heuristics arise in the first place from the pressures of real-time processing constraints and deliberative modelling. Josh On Tuesday 29 April 2008 11:12:28 am, Mike Tintner wrote: Josh:You can't d

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Mike Tintner
JAR/Russell: This seems to be an example of what I was talking about in the other thread - AI-ers starting with the set of sign systems and tools - and here the kinds of intelligence - they know of personally and professionally, and assuming that they are the only kind, and encompass all types of

Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Mike Tintner
So what are the principles that enable animated characters and materials here to react/move in individual continually different ways, where previous characters reacted typically and consistently? Ben: Now this looks like a fairly AGI-friendly approach to controlling animated characters ... unfo

Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Mike Tintner
..or is it just that these figures respond differently to the slightest difference in angle and force of impact?

Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Mike Tintner
Charles: as far as I can tell ALL modes of human thought only operate within restricted domains. I literally can't conceive where you got this idea from :). Writing an essay - about, say, the French Revolution, future of AGI, flaws in Hamlet, what you did in the zoo, or any of the other many

Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Mike Tintner
The link from Lukas seems to suggest that applying this technology is something of an art (is that right?): "As a side note, the fickle nature of the evolutionary approach is the primary reason why euphoria isn't middleware; the team at NaturalMotion helps you integrate it. Most often, you hav

Re: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Mike Tintner
Charles: Flaws in Hamlet: I don't think of this as involving general intelligence. Specialized intelligence, yes, but if you see general intelligence at work there you'll need to be more explicit for me to understand what you mean. Now determining whether a particular deviation from iambic

Re: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Mike Tintner
Charles, We're still a few million miles apart :). But perhaps we can focus on something constructive here. On the one hand, while, yes, I'm talking about extremely sophisticated behaviour in essaywriting, it has generalizable features that characterise all life. (And I think BTW that a dog is

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Mike Tintner
Matthias, Your remarks stimulated some interesting thoughts for me re concept organisation. I agree with what you seem to be implying: that every concept must be a cluster of different POV images, and/or image schemas in the brain. But that cluster must have a normal organisation. The bigges

Re: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Mike Tintner
MH: Since we cannot explain qualia we can also never answer the question whether qualia is necessary for AGI. Well, clearly you do need emotions, continually evaluating the worthwhileness of your current activity and its goals/risks and costs - as set against the other goals of your psychoeco
