Ben Goertzel wrote:
Richard,

I re-read your paper and I'm afraid I really don't grok why you think it solves Chalmers' hard problem of consciousness...

It really seems to me like what you're suggesting is a "cognitive correlate of consciousness", to morph the common phrase "neural correlate of consciousness" ...

You seem to be stating that when X is an unanalyzable, pure atomic sensation from the perspective of cognitive system C, then C will perceive X as a raw quale ... unanalyzable and not explicable by ordinary methods of explication, yet, still subjectively real...

But, I don't see how the hypothesis

"Conscious experience is **identified with** unanalyzable mind-atoms"

could be distinguished empirically from

"Conscious experience is **correlated with** unanalyzable mind-atoms"

I think finding cognitive correlates of consciousness is interesting, but I don't think it constitutes solving the hard problem in Chalmers' sense...

I grok that you're saying "consciousness feels inexplicable because it has to do with atoms that the system can't explain, due to their role as its primitive atoms" ... and this is a good idea, but, I don't see how it bridges the gap between subjective experience and empirical data ...

What it does is explain why, even if there *were* no hard problem, cognitive systems might feel like there is one, in regard to their unanalyzable atoms

Another worry I have is: I feel like I can be conscious of my son, even though he is not an unanalyzable atom. I feel like I can be conscious of the unique impression he makes ... in the same way that I'm conscious of redness ... and, yeah, I feel like I can't fully explain the conscious impression he makes on me, even though I can explain a lot of things about him...

So I'm not convinced that atomic sensor input is the only source of raw, unanalyzable consciousness...

My first response to this is that you still don't seem to have taken account of what was said in the second part of the paper - and, at the same time, I can find many places where you make statements that are undermined by that second part.

To take the most significant example: when you say:

> But, I don't see how the hypothesis
>
> "Conscious experience is **identified with** unanalyzable mind-atoms"
>
> could be distinguished empirically from
>
> "Conscious experience is **correlated with** unanalyzable mind-atoms"

... there are several concepts buried in there, such as [identified with], [distinguished empirically from] and [correlated with], that are theory-laden. In other words, when you use those terms you are implicitly applying some standards that have to do with semantics and ontology, and it is precisely those standards that I attacked in part 2 of the paper.

However, there is another thing I can say about this statement, based on the argument in part 1 of the paper.

It looks like you are also falling victim to the argument in part 1, at the same time that you are questioning its validity: one of the consequences of that initial argument was that *because* those concept-atoms are unanalyzable, you can never meaningfully talk about their being "only correlated with a particular cognitive event" versus "actually identified with that cognitive event"!

So when you point out that the above distinction seems impossible to make, I say: "Yes, of course: the theory itself just *said* that!".

So far, all of the serious questions that people have laid at the door of this theory have proved susceptible to that argument.

That was essentially what I did when talking to Chalmers. He came up with an objection very like the one you gave above, so I said: "Okay, the answer is that the theory itself predicts that you *must* find that question to be a stumbling block ... AND, more importantly, you should be able to see that the strategy I am using here is a strategy that I can flexibly deploy to wipe out a whole class of objections, so the only way around that strategy (if you want to bring down this theory) is to come up with a counter-strategy that demonstrably has the structure to undermine my strategy ... and I don't believe you can do that."

His only response, IIRC, was "Huh! This looks like it might be new. Send me a copy."

To make further progress in this discussion it is important, I think, both to understand that I have that strategy, and to appreciate that the second part of the paper went far beyond it.


Lastly, about your question re. consciousness of extended objects that are not concept-atoms.

I think there is some confusion here about what I was trying to say (my fault perhaps). It is not just the fact of those concept-atoms being at the end of the line; it is actually about what happens to the analysis mechanism. So what I did was point to the clearest cases where people feel that a subjective experience is in need of explanation - the qualia - and I showed that in that case the explanation is that the analysis mechanism fails because it bottoms out.

However, just because I picked that example for the sake of clarity, that does not mean that the *only* place where the analysis mechanism can get into trouble is when it bumps into those peripheral atoms. I tried to explain this in a previous reply to someone (perhaps it was you): it would be entirely possible for higher-level atoms to get built to represent [a sum of all the qualia-atoms that are part of one object], and if that happened we might find that this higher-level atom was partly analyzable (it is composed of lower-level qualia) and partly not (any analysis hits a brick wall after one successful unpacking step).

So when you raise the example of being conscious of your son, part of that can simply be the consciousness that comes from being conscious of his parts.

But there are other things that could be at work in this case, too. How much is that "consciousness" of a whole object an awareness of an internal visual image? How much is it due to the fact that we can represent the concept of [myself having a concept of object x] ... in which case the unanalyzability derives not from the large object, but from the fact that [self having a concept of ...] is a representation of something your *self* is doing ... and we already know that that is a bottoming-out concept.

Overall, you can see that there are multiple ways to get the analysis mechanism to bottom out, and it may bottom out partially rather than completely. Just because I used a particular example of bottoming-out does not mean that I claimed this was the only way it could happen.

And, of course, all those other claims of "conscious experiences" are widely agreed to be more dilute (less mysterious) than such things as qualia.




Richard Loosemore