Mike Tintner wrote:
Richard: science does too know a good deal about brain architecture!

I *know* cognitive science. Cognitive science is a friend of mine. Mike, you are no cognitive scientist.... :-)

Thanks, Richard, for keeping it friendly - but - are you saying cog sci knows the:

*'engram' - how info is encoded
*any precise cognitive form or level of the hierarchical processing vaguely defined by Hawkins et al
*how ideas are compared at any level -
*how analogies are produced
*whether templates or similar are/are not used in visual object processing

etc., etc.?

Well, you are crossing over between levels here in a way that confuses me.

Did you mean "brain architecture" when you said brain architecture? That is, are you talking about brain-level stuff, or cognitive-level stuff? I took you to be talking quite literally about the neural level.

More generally, we understand a lot, though the picture is of course extremely incomplete. But an incomplete picture does not mean that cognitive science knows almost nothing.

My position is that cog sci has a *huge* amount of information stashed away, but it is in a format that makes it very hard for someone trying to build an intelligent system to actually use. AI people make very little use of this information at all.

My goal is to deconstruct cog sci in such a way as to make it usable in AI. That is what I am doing now.


Obviously, if science can't answer the engram question, it can hardly answer anything else.

You are indeed a cognitive scientist, but you don't seem to have a very good overall scientific/philosophical perspective on what that entails - and the status of cog. sci. is a fascinating one, philosophically. You see, I utterly believe in the cog. sci. approach of applying computational models to the brain and human thinking. But what that has produced is *not* hard knowledge. It has made us aware of the complexities of what is probably involved, and got us to the point where we are, so to speak, v. "warm" / close to the truth. But no - as, I think, Ben asserted - what we actually *know* for sure about the brain's information processing is v. v. little. (Just look at our previous dispute, where clearly there is no definite knowledge at all about how much parallel computation is involved in the brain's processing of any idea [like a sentence].)

Those cog. sci. models are more like analogies than true theoretical models. And anyway, most of the time - though by no means all - cognitive scientists are like you & Minsky: much more interested in the AI applications of their models than in their literal scientific truth.

If you disagree, point to the hard knowledge re items like those listed above, which surely must be the basis of any AI system that can legitimately claim to be based on the brain's architecture.

Well, it is difficult to know where to start. What about the word priming results? There is an enormous corpus of data concerning the time course of activation of words as a result of seeing/hearing other words. I can use some of that data to constrain my models of activation.
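To make the point about priming data concrete, here is a minimal sketch of the kind of spreading-activation model such data can constrain. Everything in it - the word pairs, the link weights, the decay constant, the reaction-time parameters - is invented for illustration; a real model would be fit to the measured time course of activation.

```python
# Minimal sketch: semantic priming as spreading activation with decay.
# All weights and constants are hypothetical, not fitted values.

import math

# Hypothetical associative links between words (treated as symmetric).
links = {
    ("doctor", "nurse"): 0.6,
    ("bread", "butter"): 0.5,
}

def weight(a, b):
    """Associative strength between two words, 0.0 if unrelated."""
    return links.get((a, b)) or links.get((b, a)) or 0.0

def activation(prime, target, soa_ms, decay=0.005):
    """Residual activation sent from prime to target after an SOA
    (stimulus-onset asynchrony) of soa_ms milliseconds."""
    return weight(prime, target) * math.exp(-decay * soa_ms)

def recognition_time(prime, target, soa_ms, base_ms=600.0, gain=200.0):
    """Recognition time shrinks with the activation the target received."""
    return base_ms - gain * activation(prime, target, soa_ms)

# Related primes speed recognition relative to unrelated ones,
# and the benefit fades as the SOA grows - exactly the kind of
# pattern that priming experiments measure.
print(recognition_time("doctor", "nurse", soa_ms=250))  # primed (faster)
print(recognition_time("bread", "nurse", soa_ms=250))   # unprimed (slower)
```

The point is not that this toy is right, but that the shape of the real reaction-time curves pins down the decay and gain terms that any such model is allowed to have.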

Then there are studies of speech errors that show what kinds of events occur during attempts to articulate sentences: that data can be used to say a great deal about the processes involved in going from an intention to articulation.

On and on the list goes: I could spend all day just writing down examples of cognitive data and how it relates to models of intelligence.

Did you know, for example, that certain kinds of brain damage can leave a person able to name a visually presented object, but unable to pick the object up and move it through space in a way that is consistent with the object's normal use? And that another type of brain damage can produce exactly the opposite problem: the person can look at an object and say "I have no idea what that is", and yet, when you ask them to pick the thing up and do what they would typically do with it, they pick it up and show every sign of knowing exactly what it is for. (E.g. the object is a key: they say they don't know what it is, but then they pick it up and put it straight into a nearby lock.)

Now, interpreting that result is not easy, but it does seem to tell us that there are two almost independent systems in the brain that handle vision-for-identification and vision-for-action. Why? I don't know, but I have some ideas, and those ideas are helping to constrain my framework.
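The double dissociation above can be caricatured in a few lines of code: two routes from visual input, each of which can be knocked out independently while the other keeps working. The object names and affordances here are invented placeholders, and the "ventral"/"dorsal" labels are just the conventional shorthand for the two pathways, not a claim about mechanism.

```python
# Toy sketch of two dissociable visual routes, as suggested by the
# lesion data: one for identification (naming), one for action (use).
# Objects and their affordances are made up for illustration.

objects = {
    "key":    {"name": "key",    "action": "insert into lock"},
    "hammer": {"name": "hammer", "action": "swing at nail"},
}

def vision_for_identification(obj, lesioned=False):
    """'Ventral'-style route: visual input -> spoken name."""
    if lesioned:
        return "I have no idea what that is"
    return objects[obj]["name"]

def vision_for_action(obj, lesioned=False):
    """'Dorsal'-style route: visual input -> appropriate motor use."""
    if lesioned:
        return "fumbles with the object"
    return objects[obj]["action"]

# Damage one route and the other still works - the patient who
# cannot name the key nevertheless puts it straight into the lock.
print(vision_for_identification("key", lesioned=True))
print(vision_for_action("key"))
```

The value of the caricature is architectural: any framework claiming to be brain-inspired has to allow these two functions to fail independently, which rules out designs where naming and use share a single recognition stage.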



Another example of where you are not so hot on the *philosophy* of cog. sci. is our v. first dispute. I claimed, and still claim, that it is fundamental to cog. sci. to treat the brain/mind as rational. And I'm right - I produced, and can continue endlessly producing, evidence. (It is fundamental to all the social sciences to treat humans as rational decision-making agents.) Oh no it doesn't, you said, in effect - scientific psychology is obsessed with the irrationalities of the human mind. And that is true, too. If you hadn't gone off in high dudgeon, we could have resolved the apparent contradiction.

Sci. psych. does indeed love to study and point out all kinds of illusions and mistakes of the human mind. But to cog. sci. these are all so many *bugs* in an otherwise rational system. The system as a whole is still rational, as far as cog. sci. is concerned, but some of its parts - its heuristics, attitudes etc. - are not. They, however, can be fixed.

So what I have been personally asserting elsewhere - namely, that the brain is fundamentally irrational or "crazy"; that the human mind can't follow a logical, "joined-up" train of reflective thought for more than a few seconds at a time; and that it is positively designed to be like that, and can't be, and isn't meant to be, fixed - does indeed represent a fundamental challenge to cog. sci.'s current rational paradigm of mind. (The flip side of that craziness is that it is a fundamentally *creative* mind - and this is utterly central to AGI.)

I don't remember that argument, but I admit that I am confused right now: in the paragraphs above you say that your position is that the human mind is 'rational', and then later that it is 'irrational' - was the first one of those a typo?

I can't resolve the confusions in the above two paragraphs. I can't figure out what you are saying.




Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72398784-2d12af