Ed,

I'd be curious to hear your reaction to

http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

which explores the limits of scientific and linguistic explanation in
a way that is different from, but possibly related to, Richard's argument.

Science and language are powerful tools for explanation, but there is
no reason to assume they are all-powerful.  We should push them as far
as we can, but no further...

I agree with Richard that, according to standard scientific notions of
explanation, consciousness and its relation to the physical world are
inexplicable.  My intuition and reasoning are probably not exactly the
same as his, but there seems to be some similarity between our views...

-- Ben G


On Wed, Nov 19, 2008 at 5:27 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Richard,
>
>
>
> (the second half of this post, the part starting with the all-capitalized
> heading, is the most important)
>
>
>
> I agree with your extreme cognitive semantics discussion.
>
>
>
> I agree with your statement that one criterion for "realness" is the
> directness and immediacy of something's phenomenology.
>
>
>
> I agree with your statement that, based on this criterion for "realness,"
> many conscious phenomena, such as qualia, which have traditionally fallen
> under the hard problem of consciousness seem to be "real."
>
>
>
> But I have problems with some of the conclusions you draw from these things,
> particularly in your "Implications" section at the top of the second column
> on Page 5 of your paper.
>
>
>
> There you state
>
>
>
> "…the correct explanation for consciousness is that all of its various
> phenomenological facets deserve to be called as "real" as any other concept
> we have, because there are no meaningful objective standards that we could
> apply to judge them otherwise."
>
>
>
> That aspects of consciousness seem real does not provide much of an
> "explanation for consciousness."  It says something, but not much.  It adds
> little to Descartes' "I think therefore I am."  I don't think it provides
> much of an answer to any of the multiple questions Wikipedia associates with
> Chalmers's hard problem of consciousness.
>
>
>
> You further state that some aspects of consciousness have the unique status
> of being beyond the reach of scientific inquiry, and you give a purported
> reason why they are beyond such reach.  Along those lines, you say:
>
>
>
> "…although we can never say exactly what the phenomena of consciousness are,
> in the way that we give scientific explanations for other things, we can
> nevertheless say exactly why we cannot say anything: so in the end, we can
> explain it."
>
>
>
> First, I would point out, as I have in my prior papers, that given the
> advances that are expected to be made in AGI, brain scanning, and brain
> science in the next fifty years, it is not clear that consciousness is
> necessarily any less explainable than many other aspects of physical
> reality.  You admit there are easy problems of consciousness that can be
> explained, just as there are easy parts of physical reality that can be
> explained.  But it is not clear that the percentage of consciousness that
> will remain a mystery in fifty years is any larger than the percentage of
> basic physical reality that will remain a mystery in that time frame.
>
>
>
> But even if we accept as true your statement that certain phenomena of
> consciousness are beyond analysis, that does little to explain
> consciousness.  In fact, it does not appear to answer any of the hard
> problems of consciousness.  For example, just because (a) we are conscious
> of the distinction, in our own mind's internal representation, between the
> sensations of the colors red and blue, (b) we allegedly cannot analyze that
> difference further, and (c) that distinction seems subjectively real to us
> --- that does not shed much light on whether or not a p-zombie would be
> capable of acting just like a human without having consciousness of red and
> blue color qualia.
>
>
>
> It is not even clear to me that your paper shows consciousness is not an
> "artifact," as your abstract implies.  Just because something is "real"
> does not mean it is not an "artifact" in many senses of the word, such as
> an unintended, secondary, or unessential aspect of something.
>
>
>
>
>
> THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON THE
> PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT ENOUGH
> ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE
> SUCH BOTTOMING OUT -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO
> CONSCIOUSNESS.
>
>
>
> It is my belief that if you want to understand consciousness in the context
> of the types of things discussed in your paper, you should focus on the part
> of the molecular framework (which you imply is largely in the foreground)
> that prevents the system from returning no answer, even when trying to
> analyze a node such as a lowest-level input node for the color red in a
> given portion of the visual field.
>
>
>
> This is the part of your molecular framework that
>
>
>
> "…because of the nature of the representations used in the foreground, there
> is no way for the analysis mechanism to fail to return some kind of answer,
> because a non-existent answer would be the same as representing the color of
> red as "nothing," and in that case all colors would be the same." (Page 3,
> Col. 2, first full paragraph.)
>
>
>
> It is also presumably the part of your molecular framework that
>
>
>
> "…report that 'There is definitely something that it is like to be
> experiencing the subjective essence of red, but that thing is ineffable and
> inexplicable.' " (Page 3, Col. 2, 2nd full paragraph.)
>
>
>
> This is the part of your system that provides the subjective experience
> which, you say, gives your conscious experience its "realness."  This is
> where your paper should focus: how does it provide this sense of realness?
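>
> To make this concrete, here is a minimal toy sketch (in Python; the node
> structure and every name in it are my own invented illustration, not
> anything taken from your paper) of an analysis mechanism that can never
> return "nothing," even for a lowest-level node:
>
>     class Node:
>         """A toy concept-atom; 'parts' are the sub-atoms it unpacks into."""
>         def __init__(self, name, parts=()):
>             self.name = name
>             self.parts = list(parts)
>
>     def analyze(node):
>         # The mechanism is forbidden to return None: a non-answer would
>         # represent the color red as "nothing," and then all colors
>         # would be the same.
>         if not node.parts:
>             # Bottoming out: still return *something* -- an answer that
>             # is real but cannot be unpacked any further.
>             return node.name + ": real, but ineffable"
>         return {node.name: [analyze(p) for p in node.parts]}
>
>     red = Node("red-input")        # a lowest-level input node
>     print(analyze(red))            # -> "red-input: real, but ineffable"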
>
>
>
> Unfortunately, your description of the molecular framework provides some,
> but very little, insight into what might be providing this subjective sense
> of experience, which is so key to the conclusions of your paper.
>
>
>
> In multiple prior posts on this thread I have said that I believe the real
> source of consciousness lies in such a molecular framework, but that to
> have anything approaching a human level of consciousness, this framework,
> and the computations within it that give rise to consciousness, have to be
> extremely complex.  I have also emphasized that brain scientists who have
> done research on the neural correlates of consciousness tend to find that
> humans usually report consciousness only of things associated with fairly
> widespread neural activation, which would normally involve many billions or
> trillions of inter-neuron messages per second.  I have posited that
> widespread activation of the nodes directly and indirectly associated with
> a given "conscious" node provides dynamic grounding for the meaning of the
> conscious node.
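>
> As a crude illustration of the sort of dynamic grounding I have in mind,
> here is a toy spreading-activation sketch (the graph, decay factor, and
> threshold are all invented for illustration; a brain-scale version would
> involve billions of messages per second):
>
>     from collections import defaultdict
>
>     links = defaultdict(list)            # node -> its associated nodes
>
>     def associate(a, b):
>         links[a].append(b)
>         links[b].append(a)
>
>     associate("red", "apple"); associate("red", "fire")
>     associate("apple", "fruit"); associate("fire", "heat")
>
>     def spread(seed, decay=0.5, threshold=0.1):
>         """Spread activation outward from one 'conscious' node."""
>         activation, frontier = {seed: 1.0}, [seed]
>         while frontier:
>             node = frontier.pop()
>             for nbr in links[node]:
>                 a = activation[node] * decay
>                 if a >= threshold and a > activation.get(nbr, 0.0):
>                     activation[nbr] = a
>                     frontier.append(nbr)
>         return activation
>
>     # The halo of directly and indirectly activated nodes is what
>     # dynamically grounds the meaning of the "conscious" node:
>     print(spread("red"))   # {'red': 1.0, 'apple': 0.5, 'fire': 0.5, ...}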
>
>
>
> As I have pointed out, we know of nothing about physical reality that is
> anything other than computation (if you consider representation to be part
> of computation).  Similarly, there is nothing our subjective experience can
> tell us about our own consciousness that is other than computation.  One of
> the key words we humans use to describe our consciousness is "awareness,"
> and awareness is created by computation.  It is my belief that this
> awareness comes from the complex, dynamically focused, and meaningful way
> in which our thought processes compute in interaction with themselves.
>
>
>
> Ed Porter
>
>
>
> P.S. (With regard to the alleged bottoming out reported in your paper: as I
> have pointed out in previous threads, even the lowest-level nodes in any
> system would normally have associations that would give them a type and
> degree of grounding and, thus, further meaning.  So spreading activation
> would normally not bottom out when it reaches the lowest-level nodes.  But
> it would be subject to circularity, or to a lack of information about the
> lowest nodes other than what could be learned from their associations with
> other nodes in the system.)
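>
> A tiny variant of the earlier sketch shows what I mean here: assuming the
> same invented toy graph as above, a lowest-level node can still be
> "analyzed" by listing its associations, so the answer is circular rather
> than empty:
>
>     def analyze_by_association(node):
>         # Describe a lowest-level node through its associates --
>         # circular grounding, but not an empty answer.
>         if links[node]:
>             return {node: sorted(links[node])}
>         return node + ": truly bottomed out (no associations at all)"
>
>     print(analyze_by_association("red"))   # {'red': ['apple', 'fire']}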
>
>
>
>
>
>
>
> -----Original Message-----
> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2008 1:57 PM
>
> To: agi@v2.listbox.com
> Subject: Re: [agi] A paper that actually does solve the problem of
> consciousness
>
>
>
> Ben Goertzel wrote:
>> Richard,
>>
>> I re-read your paper and I'm afraid I really don't grok why you think it
>> solves Chalmers' hard problem of consciousness...
>>
>> It really seems to me like what you're suggesting is a "cognitive
>> correlate of consciousness", to morph the common phrase "neural
>> correlate of consciousness" ...
>>
>> You seem to be stating that when X is an unanalyzable, pure atomic
>> sensation from the perspective of cognitive system C, then C will
>> perceive X as a raw quale ... unanalyzable and not explicable by
>> ordinary methods of explication, yet, still subjectively real...
>>
>> But, I don't see how the hypothesis
>>
>> "Conscious experience is **identified with** unanalyzable mind-atoms"
>>
>> could be distinguished empirically from
>>
>> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>>
>> I think finding cognitive correlates of consciousness is interesting,
>> but I don't think it constitutes solving the hard problem in Chalmers'
>> sense...
>>
>> I grok that you're saying "consciousness feels inexplicable because it
>> has to do with atoms that the system can't explain, due to their role as
>> its primitive atoms" ... and this is a good idea, but, I don't see how
>> it bridges the gap between subjective experience and empirical data ...
>>
>> What it does is explain why, even if there *were* no hard problem,
>> cognitive systems might feel like there is one, in regard to their
>> unanalyzable atoms.
>>
>> Another worry I have is: I feel like I can be conscious of my son, even
>> though he is not an unanalyzable atom.  I feel like I can be conscious
>> of the unique impression he makes ... in the same way that I'm conscious
>> of redness ... and, yeah, I feel like I can't fully explain the
>> conscious impression he makes on me, even though I can explain a lot of
>> things about him...
>>
>> So I'm not convinced that atomic sensor input is the only source of raw,
>> unanalyzable consciousness...
>
>
>
> My first response to this is that you still don't seem to have taken
> account of what was said in the second part of the paper - and, at the
> same time, I can find many places where you make statements that are
> undermined by that second part.
>
> To take the most significant example:  when you say:
>
>  > But, I don't see how the hypothesis
>  >
>  > "Conscious experience is **identified with** unanalyzable mind-atoms"
>  >
>  > could be distinguished empirically from
>  >
>  > "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
> ... there are several concepts buried in there, like [identified with],
> [distinguished empirically from] and [correlated with], that are
> theory-laden.  In other words, when you use those terms you are
> implicitly applying some standards that have to do with semantics and
> ontology, and it is precisely those standards that I attacked in part 2
> of the paper.
>
> However, there is also another thing I can say about this statement,
> based on the argument in part one of the paper.
>
> It looks like you are also falling victim to the argument in part 1, at
> the same time that you are questioning its validity:  one of the
> consequences of that initial argument was that *because* those
> concept-atoms are unanalyzable, you can never do any such thing as talk
> about their being "only correlated with a particular cognitive event"
> versus "actually being identified with that cognitive event"!
>
> So when you point out that the above distinction seems impossible to
> make, I say:  "Yes, of course:  the theory itself just *said* that!".
>
> So far, all of the serious questions that people have placed at the door
> of this theory have proved susceptible to that argument.
>
> That was essentially what I did when talking to Chalmers.  He came up
> with an objection very like the one you gave above, so I said: "Okay,
> the answer is that the theory itself predicts that you *must* find that
> question to be a stumbling block ..... AND, more importantly, you should
> be able to see that the strategy I am using here is a strategy that I
> can flexibly deploy to wipe out a whole class of objections, so the only
> way around that strategy (if you want to bring down this theory) is to
> come up with a counter-strategy that demonstrably has the structure to
> undermine my strategy.... and I don't believe you can do that."
>
> His only response, IIRC, was "Huh!  This looks like it might be new.
> Send me a copy."
>
> To make further progress in this discussion it is important, I think, to
> understand both the fact that I have that strategy, and also to
> appreciate that the second part of the paper went far beyond that.
>
> Lastly, about your question re. consciousness of extended objects that
> are not concept-atoms.
>
> I think there is some confusion here about what I was trying to say (my
> fault perhaps).  It is not just the fact of those concept-atoms being at
> the end of the line, it is actually about what happens to the analysis
> mechanism.  So, what I did was point to the clearest cases where people
> feel that a subjective experience is in need of explanation - the qualia
> - and I showed that in that case the explanation is a failure of the
> analysis mechanism because it bottoms out.
>
> However, just because I picked that example for the sake of clarity,
> that does not mean that the *only* place where the analysis mechanism
> can get into trouble must be just when it bumps into those peripheral
> atoms.  I tried to explain this in a previous reply to someone (perhaps
> it was you):  it would be entirely possible that higher-level atoms
> could get built to represent [a sum of all the qualia-atoms that are
> part of one object], and if that happened we might find that this
> higher-level atom was partly analyzable (it is composed of lower-level
> qualia) and partly not (any analysis hits the brick wall after one
> successful unpacking step).
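>
> In toy code terms (my own invented illustration only, not the actual
> model): such a higher-level atom unpacks exactly one level, and then
> every branch of the analysis hits the wall:
>
>     # A composite atom built as [a sum of the qualia-atoms of one object].
>     composite = {"whole-object": ["red-quale", "round-quale", "smooth-quale"]}
>
>     def analyze(atom):
>         if isinstance(atom, dict):               # partly analyzable...
>             name, parts = next(iter(atom.items()))
>             return {name: [analyze(p) for p in parts]}
>         return atom + ": unanalyzable"           # ...and partly not
>
>     print(analyze(composite))
>     # {'whole-object': ['red-quale: unanalyzable',
>     #                   'round-quale: unanalyzable',
>     #                   'smooth-quale: unanalyzable']}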
>
>
>
> So when you raise the example of being conscious of your son, it can be
> partly a matter of the consciousness that comes from just consciousness
> of his parts.
>
> But there are other things that could be at work in this case, too.  How
> much is that "consciousness" of a whole object an awareness of an
> internal visual image?  How much is it due to the fact that we can
> represent the concept of [myself having a concept of object x] ... in
> which case the unanalyzability is deriving not from the large object,
> but from the fact that [self having a concept of...] is a representation
> of something your *self* is doing .... and we know already that that is
> a bottoming-out concept.
>
> Overall, you can see that there are multiple ways to get the analysis
> mechanism to bottom out, and it may be able to bottom out partially
> rather than completely.  Just because I used a particular example of
> bottoming-out does not mean that I claimed this was the only way it
> could happen.
>
> And, of course, all those other claims of "conscious experiences" are
> widely agreed to be more dilute (less mysterious) than such things as
> qualia.
>
> Richard Loosemore



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects."  -- Robert
Heinlein

