Ed Porter wrote:

  Richard,

/(The second half of this post, the part starting with the all-capitalized heading, is the most important.)/

I agree with your extreme cognitive semantics discussion. I agree with your statement that one criterion for “realness” is the directness and immediacy of something’s phenomenology.

I agree with your statement that, based on this criterion for “realness,” many conscious phenomena, such as qualia, which have traditionally fallen under the hard problem of consciousness, seem to be “real.”

But I have problems with some of the conclusions you draw from these things, particularly in your “Implications” section at the top of the second column on Page 5 of your paper.

There you state

“…the correct explanation for consciousness is that all of its various phenomenological facets deserve to be called as “real” as any other concept we have, because there are no meaningful /objective/ standards that we could apply to judge them otherwise.”

That aspects of consciousness seem real does not provide much of an “explanation for consciousness.” It says something, but not much. It adds little to Descartes’ “I think, therefore I am.” I don’t think it provides much of an answer to any of the multiple questions Wikipedia associates with Chalmers’s hard problem of consciousness.

I would respond as follows. When I make statements about consciousness deserving to be called "real", I am only saying this as a summary of a long argument that has gone before. So it would not really be fair to declare that this statement of mine "says something, but not much" without taking account of the reasons that have been building up toward that statement earlier in the paper. I am arguing that when we probe the meaning of "real" we find that the best criterion of realness is the way that the system builds a population of concept-atoms that are (a) mutually consistent with one another, and (b) strongly supported by sensory evidence (there are other criteria, but those are the main ones). If you think hard enough about these criteria, you notice that the qualia-atoms (those concept-atoms that cause the analysis mechanism to bottom out) score very high indeed. This is in dramatic contrast to other concept-atoms like hallucinations, which we consider 'artifacts' precisely because they score so low. The difference between these two is so dramatic that I think we need to allow the qualia-atoms to be called "real" by all our usual criteria, BUT with the added feature that they cannot be understood in any more basic terms.

Now, all of that (and more) lies behind the simple statement that they should be called real. It wouldn't make much sense to judge that statement by itself. Only judge the argument behind it.
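
To make that scoring idea concrete, here is a minimal toy sketch (my illustration only, not code from the paper; the atom names and numbers are invented) of how a system might rate concept-atoms for realness by the two main criteria, mutual consistency and sensory support:

# Toy sketch: rating concept-atoms for "realness" by (a) mutual
# consistency with the rest of the population and (b) strength of
# sensory support. All names and numbers are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ConceptAtom:
    name: str
    sensory_support: float          # 0..1: how strongly sensory evidence anchors it
    consistent_with: set = field(default_factory=set)   # atoms it coheres with
    analyzable: bool = True         # False for qualia-atoms: analysis bottoms out

def realness(atom: ConceptAtom, population: dict) -> float:
    """Score = (fraction of the population the atom coheres with) * sensory anchoring."""
    consistency = len(atom.consistent_with) / len(population)
    return consistency * atom.sensory_support

atoms = {
    "red-quale":     ConceptAtom("red-quale", 0.99, {"edge", "surface", "tomato"}, analyzable=False),
    "tomato":        ConceptAtom("tomato", 0.80, {"red-quale", "surface"}),
    "hallucination": ConceptAtom("hallucination", 0.10),   # low support, no coherence: an artifact
}

for name, atom in atoms.items():
    note = "" if atom.analyzable else "  (real, but understood in no more basic terms)"
    print(f"{name}: realness = {realness(atom, atoms):.2f}{note}")

On this toy scoring the qualia-atom comes out at the top and the hallucination at the bottom, which is exactly the dramatic contrast I am pointing to.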


You further state that some aspects of consciousness have a unique status of being beyond the reach of scientific inquiry and give a purported reason why they are beyond such a reach. Similarly you say:

“…although we can never say exactly what the phenomena of consciousness are, in the way that we give scientific explanations for other things, we can nevertheless say exactly why we cannot say anything: so in the end, we can explain it.”

First, I would point out, as I have in my prior papers, that given the advances that are expected to be made in AGI, brain scanning, and brain science in the next fifty years, it is not clear that consciousness is necessarily any less explainable than many other aspects of physical reality. You admit there are easy problems of consciousness that can be explained, just as there are easy parts of physical reality that can be explained. But it is not clear that the percentage of consciousness that will remain a mystery in fifty years is any larger than the percentage of basic physical reality that will remain a mystery in that time frame.


The paper gives a clear argument for *why* it cannot be explained.

So to contradict that argument (to say "it is not clear that consciousness is necessarily any less explainable than are many other aspects of physical reality") you have to say why the argument does not work. It would make no sense for a person simply to assert the opposite of the argument's conclusion, without justification.

The argument goes into plenty of specific details, so there are many kinds of attack that you could make.


But even if we accept as true your statement that certain phenomena of consciousness are beyond analysis, that does little to explain consciousness. In fact, it does not appear to answer any of the hard problems of consciousness. For example, just because (a) we are conscious of the distinction used in our own mind’s internal representation between the sensations of the colors red and blue, (b) we allegedly cannot analyze that difference further, and (c) that distinction seems subjectively real to us -- that does not shed much light on whether or not a p-zombie would be capable of acting just like a human without having consciousness of red and blue color qualia.

I think that the actual argument has not been summarized correctly here. At that point in the paper, the claim is that WE CAN UNDERSTAND WHY THINKING SYSTEMS *MUST* MAKE STATEMENTS ABOUT HOW THERE IS THIS THING CALLED "CONSCIOUSNESS" THAT SEEMS INEXPLICABLE. One of the things that we can explain is that when someone (e.g. Ed Porter) looks at a theory of consciousness, he will *have* to make the statement that the theory does not address the hard problem of consciousness.

So the truth is that the argument (at that stage of the paper) is all about WHY people have trouble being specific about what consciousness is and what its explanation would be. It does this by an "in principle" argument: in principle, if the analysis mechanism hits a dead end when it tries to analyze certain concepts, the net result will be that the system comes to conclusions like "There is something real here, but we can say nothing about it". Notice that the argument is not (at this stage) saying that "consciousness is a bunch of dead-end concept-atoms"; it is simply saying that those concept-atoms cause the system to make a whole variety of statements that exactly coincide with all the statements that we see philosophers (amateur and professional) making about consciousness.
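
If it helps, here is a minimal sketch of that "in principle" point (my illustration only; the concept table is invented, and the real analysis mechanism is of course far richer): a recursive analyzer that is not allowed to return "nothing", and so reports "real but unanalyzable" whenever it hits a primitive.

# Toy sketch: an analysis mechanism that unpacks concepts into
# constituents and, on hitting a primitive it cannot unpack, is forced
# to return *some* answer rather than none. The table is invented.
DEFINITIONS = {
    "my-son":    ["face", "voice", "red-quale"],  # partly analyzable
    "face":      ["edge", "red-quale"],
    "edge":      [],          # primitive: analysis bottoms out here
    "voice":     [],
    "red-quale": [],          # primitive qualia-atom
}

def analyze(concept: str) -> str:
    parts = DEFINITIONS.get(concept, [])
    if not parts:
        # Returning no answer would make red the same as every other
        # color, so the system emits the characteristic verdict instead:
        return f"<{concept}: something real here, but nothing can be said about it>"
    return f"{concept} -> [" + ", ".join(analyze(p) for p in parts) + "]"

print(analyze("my-son"))

One or two successful unpacking steps, then every branch hits the brick wall; the system's resulting reports coincide with exactly the statements philosophers make about consciousness.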



It is not even clear to me that your paper shows consciousness is not an “artifact,” as your abstract implies. Just because something is “real” does not mean it is not an “artifact” in many senses of the word, such as an unintended, secondary, or unessential aspect of something.


Artifacts are explainable as due to something else that has a physical explanation: they are a malfunction. That is not the case with the situation I am proposing.


THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON THE PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE SUCH BOTTOMING OUT -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO CONSCIOUSNESS.

I could say more. But why is this a weakness? Does it break down, become inconsistent or leave something out? I think you have to be more specific about why this is a weak point.

Every part of the paper could do with some expansion. Alas, the limit is six pages.....

It is my belief that if you want to understand consciousness in the context of the types of things discussed in your paper, you should focus on the part of the molecular framework (which you imply is largely in the foreground) that prevents the system from returning with no answer, even when trying to analyze a node such as a lowest-level input node for the color red in a given portion of the visual field. This is the part of your molecular framework about which you say:

“…because of the nature of the representations used in the foreground, there is no way for the analysis mechanism to fail to return some kind of answer, because a non-existent answer would be the same as representing the color of red as “nothing,” and in that case all colors would be the same.” (Page 3, Col. 2, first full paragraph.)

It is also presumably the part of your molecular framework that you say will

“…report that ‘There is definitely something that it is like to be experiencing the subjective essence of red, but that thing is ineffable and inexplicable.’ ” (Page 3, Col. 2, 2nd full paragraph.)

This is the part of your system that is providing the subjective experience that you say provides the “realness” of your conscious experience. This is where your papers should focus. How does it provide this sense of realness?

Well, no, this is the feature of the system that explains why we end up convinced that there is something, and that it is inexplicable.

Remember: this is a hypothesis in which I say "IF this mechanism exists, then we can see that all the things people say about consciousness would be said by an intelligent system that had this mechanism." Then I am inviting you to conclude, along with me: "It seems that this mechanism is so basically plausible, and it also captures the expected locutions of philosophers so well, that we should conclude (Ockham's Razor) that this mechanism is very likely to both exist and be the culprit."

The "realness" issue is separate.

Concepts are judged "real" by the system to the extent that they play a very strongly anchored and consistent role in the foreground. Color concepts are anchored more strongly than any other, hence they are very real.

I could say more about this, for sure, but most of the philosophers I have talked to have gotten this point fairly quickly.


Unfortunately, your description of the molecular framework provides some, but very little, insight into what might be providing this subjective sense of experience, which is so key to the conclusions of your paper.

In multiple prior posts on this thread I have said I believe the real source of consciousness appears to lie in such a molecular framework, but that to have anything approaching a human level of such consciousness, this framework, and the computations in it that give rise to consciousness, have to be extremely complex. I have also emphasized that brain scientists who have already done research on the neural correlates of consciousness tend to indicate that humans usually only report consciousness of things associated with fairly broadly spread neural activation, which would normally involve many billions or trillions of inter-neuron messages per second.

The data produced by neuroscience, at this point, is extremely confusing. It is also obscured by people who are themselves confused about the distinction between the Hard and Easy problems. I do not believe you can deduce anything meaningful from the neural research yet. See Loosemore and Harley (forthcoming).





I have posited that widespread activation of the nodes directly and indirectly associated with a given “conscious” node provides dynamic grounding for the meaning of the conscious node.

As I have pointed out, we know of nothing about physical reality that is anything other than computation (if you consider representation to be part of computation). Similarly, there is nothing our subjective experience can tell us about our own consciousnesses that is other than computation. One of the key words we humans use to describe our consciousnesses is “awareness.” Awareness is created by computation. It is my belief that this awareness comes from the complex, dynamically focused, and meaningful way in which our thought processes compute in interaction with themselves.
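
To make the widespread-activation idea concrete, here is a minimal sketch (my illustration; the association graph and decay factor are invented, and a human-level system would involve vastly more nodes and messages) of activation spreading from a "conscious" node through its direct and indirect associations:

# Toy sketch: breadth-first spreading activation with geometric decay
# per hop. The graph and the decay factor are invented for illustration.
ASSOCIATIONS = {
    "red":       ["tomato", "stop-sign", "warmth"],
    "tomato":    ["food", "garden"],
    "stop-sign": ["driving"],
    "warmth": [], "food": [], "garden": [], "driving": [],
}

def spread(start: str, decay: float = 0.5) -> dict:
    """Inject activation at `start` and fan it out through associations."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in ASSOCIATIONS.get(node, []):
                if neighbor not in activation:      # visit each node once
                    activation[neighbor] = activation[node] * decay
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

print(spread("red"))
# {'red': 1.0, 'tomato': 0.5, 'stop-sign': 0.5, 'warmth': 0.5,
#  'food': 0.25, 'garden': 0.25, 'driving': 0.25}

On this picture it is the breadth of the resulting activation, not any single node, that I posit grounds the meaning of the "conscious" node.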

Ed Porter

P.S. /(With regard to the alleged bottoming out reported in your paper: as I have pointed out in previous threads, even the lowest-level nodes in any system would normally have associations that would give them a type and degree of grounding and, thus, further meaning. So spreading activation would normally not bottom out when it reaches the lowest-level nodes. But it would be subject to circularity, or to a lack of information about the lowest nodes other than what could be learned from their associations with other nodes in the system.)/


The spreading activation you are talking about is not the same as the operation of the analysis mechanism. You are talking about things other than the analysis mechanism that I have posited. Hence not relevant.



Regards




Richard Loosemore



-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 1:57 PM
To: agi@v2.listbox.com
Subject: Re: [agi] A paper that actually does solve the problem of consciousness

Ben Goertzel wrote:

 Richard,

 I re-read your paper and I'm afraid I really don't grok why you think it solves Chalmers' hard problem of consciousness...

 It really seems to me like what you're suggesting is a "cognitive correlate of consciousness", to morph the common phrase "neural correlate of consciousness" ...

 You seem to be stating that when X is an unanalyzable, pure atomic sensation from the perspective of cognitive system C, then C will perceive X as a raw quale ... unanalyzable and not explicable by ordinary methods of explication, yet, still subjectively real...

 But, I don't see how the hypothesis

 "Conscious experience is **identified with** unanalyzable mind-atoms"

 could be distinguished empirically from

 "Conscious experience is **correlated with** unanalyzable mind-atoms"

 I think finding cognitive correlates of consciousness is interesting, but I don't think it constitutes solving the hard problem in Chalmers' sense...

 I grok that you're saying "consciousness feels inexplicable because it has to do with atoms that the system can't explain, due to their role as its primitive atoms" ... and this is a good idea, but, I don't see how it bridges the gap btw subjective experience and empirical data ..

 What it does is explain why, even if there *were* no hard problem, cognitive systems might feel like there is one, in regard to their unanalyzable atoms

 Another worry I have is: I feel like I can be conscious of my son, even though he is not an unanalyzable atom. I feel like I can be conscious of the unique impression he makes ... in the same way that I'm conscious of redness ... and, yeah, I feel like I can't fully explain the conscious impression he makes on me, even though I can explain a lot of things about him...

 So I'm not convinced that atomic sensor input is the only source of raw, unanalyzable consciousness...

My first response to this is that you still don't seem to have taken account of what was said in the second part of the paper - and, at the same time, I can find many places where you make statements that are undermined by that second part.

To take the most significant example: when you say:

 > But, I don't see how the hypothesis
 >
 > "Conscious experience is **identified with** unanalyzable mind-atoms"
 >
 > could be distinguished empirically from
 >
 > "Conscious experience is **correlated with** unanalyzable mind-atoms"

... there are several concepts buried in there, like [identified with], [distinguished empirically from] and [correlated with], that are theory-laden. In other words, when you use those terms you are implicitly applying some standards that have to do with semantics and ontology, and it is precisely those standards that I attacked in part 2 of the paper.

However, there is also another thing I can say about this statement, based on the argument in part one of the paper.

It looks like you are also falling victim to the argument in part 1, at the same time that you are questioning its validity: one of the consequences of that initial argument was that *because* those concept-atoms are unanalyzable, you can never do any such thing as talk about their being "only correlated with a particular cognitive event" versus "actually being identified with that cognitive event"!

So when you point out that the above distinction seems impossible to make, I say: "Yes, of course: the theory itself just *said* that!"

So far, all of the serious questions that people have placed at the door of this theory have proved susceptible to that argument.

That was essentially what I did when talking to Chalmers. He came up with an objection very like the one you gave above, so I said: "Okay, the answer is that the theory itself predicts that you *must* find that question to be a stumbling block ..... AND, more importantly, you should be able to see that the strategy I am using here is a strategy that I can flexibly deploy to wipe out a whole class of objections, so the only way around that strategy (if you want to bring down this theory) is to come up with a counter-strategy that demonstrably has the structure to undermine my strategy.... and I don't believe you can do that."

His only response, IIRC, was "Huh! This looks like it might be new. Send me a copy."

To make further progress in this discussion it is important, I think, to understand both the fact that I have that strategy, and also to appreciate that the second part of the paper went far beyond that.

Lastly, about your question re. consciousness of extended objects that are not concept-atoms.

I think there is some confusion here about what I was trying to say (my fault perhaps). It is not just the fact of those concept-atoms being at the end of the line, it is actually about what happens to the analysis mechanism. So, what I did was point to the clearest cases where people feel that a subjective experience is in need of explanation - the qualia - and I showed that in that case the explanation is a failure of the analysis mechanism because it bottoms out.

However, just because I picked that example for the sake of clarity, that does not mean that the *only* place where the analysis mechanism can get into trouble must be just when it bumps into those peripheral atoms. I tried to explain this in a previous reply to someone (perhaps it was you): it would be entirely possible that higher-level atoms could get built to represent [a sum of all the qualia-atoms that are part of one object], and if that happened we might find that this higher-level atom was partly analyzable (it is composed of lower-level qualia) and partly not (any analysis hits the brick wall after one successful unpacking step).

So when you raise the example of being conscious of your son, it can be partly a matter of the consciousness that comes from just consciousness of his parts.

But there are other things that could be at work in this case, too. How much is that "consciousness" of a whole object an awareness of an internal visual image? How much is it due to the fact that we can represent the concept of [myself having a concept of object x] ... in which case the unanalyzability is deriving not from the large object, but from the fact that [self having a concept of...] is a representation of something your *self* is doing .... and we know already that that is a bottoming-out concept.

Overall, you can see that there are multiple ways to get the analysis mechanism to bottom out, and it may be able to bottom out partially rather than completely. Just because I used a particular example of bottoming-out does not mean that I claimed this was the only way it could happen.

And, of course, all those other claims of "conscious experiences" are widely agreed to be more dilute (less mysterious) than such things as qualia.

Richard Loosemore

