Mike Tintner wrote:

Richard:Now, if what you *meant* to talk about was links between action and
perception, all well and good, but I was just addressing the above
comment of yours.

I'm certainly not reiterating an ancient debate. This has been, from the start, an exploratory thread. Prinz summarises fairly well what is happening:

"Up until recently, cognitive scientists were happy to parcel up the mind-machine into neat parts. We had perception on one side, which is in the business of representing inputs from the external world. Then we had action, on the other side, which controls an organism’s outputs, or behavior. Nestled between these “peripheral systems” when had central systems, which were presumed be the main engines of “cognition” or “thinking.” Each of these systems was supposed to work independently, like separate committees in a great corporation, only vaguely away of what the others are up to. In cogsci lingo, each system was supposed to use proprietary rules and representations. Oh, how times have changed. We are now living in an era of border disputes. The orthodox divisions of the mind are being attacked. I have tried to join the front lines myself on occasion. I think the border between perception and cognition needs to be renegotiated: thinking does not use a proprietary code; it redeploys representations used to perceive the world (Prinz 2002). I’m inclined to think that cognition also avails itself of representations used for action"

Yes, I noticed that passage in the Prinz paper. I think he is being a little self-serving here: it is *not* true that cogsci has agreed that there is a modularity of mind of the sort that he describes. That may be HIS view of the picture, but it is not everyone's. Not by a long way.

I wonder why there are so many spelling mistakes in this paper? Seems the guy was in a bit of a rush on this one.


Now what I was reaching for at the beginning was that all the talk of developing bodies of knowledge in AI/AGI, that I'm seeing, seems to belong to the "old days" of "separate committees." Mark's comment, for example, seemed to me reasonably typical - essentially: "we can leave testing till later - that's a separate department."

I think you may want to check with Mark on that one: you seemed to be talking about different things. You: the system testing its expectations in real time. Him: the idea of validating a knowledge base offline. At least, that is the way it looked on a cursory glance.


What is emerging, it seems to me, is the start of a cog. sci. synthesis which sees perception, action, thought, problem-solving, knowledge-gathering, embodied "mirroring" and possibly still more, as interdependent and, in the final analysis, inseparable.

I do not see this "emerging". The parallel distributed processing folks were saying something similar 20 years ago. And even now I don't think people see a grand interdependent synthesis. And if they DID claim this, I would be on their backs.




Perception/vision, for example, isn't interwoven with action simply because we have to turn our heads hither and thither, but because we want to consider grasping, or otherwise moving to respond to, what we see.

No disagreement there.


And more formal modes of knowledge-gathering are an extension of these processes.

Doesn't necessarily follow.


Our gathering of bodies of knowledge about the world is evidence- and experiment-based - IOW it depends on our being able to see AND grasp things physically. And while we certainly take a great deal on varying degrees of trust, it is all built on the basis of physically seeing and touching some of the world.

Then an almost-paralyzed person should be incapable of doing theoretical physics, because their brain would be unable to function properly without the ability to feel and grasp things?

Can you think of any teeny little counterexamples to this? Sitting in Newton's old rooms in Trinity College Cambridge at this very moment, perhaps?


There is no such thing, IOW, as pure knowledge and knowledge-gathering, contained only in texts and webpages, and tested only logically "in the head" - and AI/AGI seem to rather depend on that.

See previous.


Any AGI system, to take one small example, would have a hard time understanding the considerable amount of science that is involved in discussions of the reliability of different kinds of evidence and how they are gathered.

Suppose that Stephen Hawking had become ill when he was barely out of diapers. Are you completely convinced that he would never have been able to learn how to read books, then learn mathematics, then get a degree, then get where he is now?

You are utterly and completely certain he could not have done so, with the right people around him?

Everything, but everything, rests on your saying "He could not have done so", and being able to come up with convincing reasons why not.


Personally, I think that embodiment makes the development process vastly easier, but this black-and-white declaration of IMPOSSIBLE! that you shout seems to go too far.



Richard Loosemore



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
