Ed,

Unfortunately, replying to your message in detail would absorb a lot of
time, because there are two issues mixed up here:

1) you don't know much about computability theory, and educating you
on it would take a lot of time (and is not best done on an email list)

2) I may not have expressed some of my weird philosophical ideas about
computability and mind and reality clearly ... though Abram, at least,
seemed to "get" them ;)  [but he has a lot of background in the area]

Just to clarify some simple things though: Pi is a computable number,
because there's a program that would generate it if allowed to run
long enough....  Also, pi has been proved irrational; and quantum
theory really has nothing directly to do with uncomputability...
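To make "there's a program that would generate it" concrete, here is a minimal sketch of such a program; the choice of Machin's formula and the function name are mine, purely for illustration:

```python
def pi_digits(n):
    """Return the first n decimal digits of pi as a string ("31415...").

    Uses Machin's formula  pi/4 = 4*arctan(1/5) - arctan(1/239),
    evaluated entirely in scaled-integer arithmetic so the digits
    are exact (no floating point involved).
    """
    guard = 10                       # extra digits to absorb truncation error
    scale = 10 ** (n + guard)

    def arctan_inv(x):
        # arctan(1/x) * scale, via the alternating Taylor series
        # 1/x - 1/(3x^3) + 1/(5x^5) - ...
        term = total = scale // x
        xsq = x * x
        k, sign = 3, -1
        while term:
            term //= xsq
            total += sign * (term // k)
            sign, k = -sign, k + 2
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi_scaled // 10 ** guard)[:n]

print(pi_digits(30))   # -> 314159265358979323846264338327
```

Ask for a larger n and it produces more digits; that open-ended generability is all that "computable number" means here.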

About

>How can several pounds of matter that is the human brain model
> the true complexity of an infinity of infinitely complex things?

it is certainly thinkable that the brain is infinite not finite in its
information content, or that it's a sort of "antenna" that receives
information from some infinite-information-content source.  I'm not
saying I believe this, just saying it's a logical possibility, and not
really ruled out by available data...

Your reply seems to assume that the brain is a finite computational
system and that other alternatives don't make sense.  I think this is
an OK working assumption for AGI engineers but it's not proved by any
means.

My main point in that post was, simply, that science and language seem
intrinsically unable to distinguish computable from uncomputable
realities.  That doesn't necessarily mean the latter don't "exist" but
it means they're not really scientifically useful entities.  But, my
detailed argument in favor of this point requires some basic
understanding of computability math to appreciate, and I can't review
those basics in an email, it's too much...

ben g

On Sun, Nov 30, 2008 at 4:20 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Ben,
>
>
>
> On November 19, 2008 5:39 you wrote the following under the above titled
> thread:
>
>
>
> ----------------------
>
> Ed,
>
>
>
> I'd be curious for your reaction to
>
>
>
> http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html
>
>
>
> which explores the limits of scientific and linguistic explanation, in
>
> a different but possibly related way to Richard's argument.
>
>
>
> ----------------------
>
>
>
> In the below email I asked you some questions about your article, which
> capture my major problem in understanding it, and I don't think I ever
> received a reply.
>
>
>
> The questions were at the bottom of such a long post that you may well never have
> even seen them.  I know you are busy, but if you have time I would be
> interested in hearing your answers to the following questions about the
> following five quoted parts (shown in red if you are seeing this in rich
> text) from your article.  If you are too busy to respond just say so, either
> on or off list.
>
>
>
> ---------------------
>
>
>
> (1) "In the simplest case, A2 may represent U directly in the language,
> using a single expression"
>
>
>
> How can "U" be directly represented in the language if it is uncomputable?
>
>
>
> I assume you consider any irrational number, such as pi, to be uncomputable
> (although pi at least has a formula that with enough computation can
> approach it as a limit – I assume that for most real numbers, if there is such
> a formula, we do not know it.)  (By the way, do we know for a fact that pi is
> irrational, and if so how do we know, other than that we have calculated it to
> millions of places and not yet found an exact solution?)
>
>
>
> Merely communicating the symbol pi only represents the number if the agent
> receiving the communication has a more detailed definition, but any
> definition, such as a formula for iteratively approaching pi, which
> presumably is what you mean by "R_U" would only be an approximation.
>
>
>
> So U could never be fully represented unless one had infinite time --- and I
> generally consider it a waste of time to think about infinite time unless
> there is something valuable about such considerations that has a use in much
> more human-sized chunks of time.
>
>
>
> In fact, it seems the major message of quantum mechanics is that even
> physical reality doesn't have the time or machinery to compute uncomputable
> things, like a space constructed of dimensions each corresponding to all the
> real numbers within some astronomical range.  So the real number line is
> not really real.  It is at best a construct of the human mind that can at
> best only be approximated in part.
>
>
>
> (2) "complexity(U) < complexity(R_U)"
>
>
>
> Because I did not understand how U could be represented, and how R_U could
> be anything other than an approximation for any practical purposes, I didn't
> understand the meaning of the above line from your article.
>
>
>
> If U and R_U have the meaning I guessed in my discussion of quote (1), then
> U could not be meaningfully representable in the language, other than by a
> symbol that references some definition (presumably R_U), which, in order
> even to be able to approximate U's uncomputable complexity, would have to be
> more complex than U itself.
>
>
>
> Thus, according to this understanding, wouldn't quote (2) always be true?
>
>
>
>
>
> (3) "complexity(real number line R) <>"
>
>
>
> I didn't understand this formula because I don't know what the "<>" symbol
> means and I don't know if some text was supposed to follow after it.
>
>
>
> (4) "If NO, then it means the mind is better off using the axioms for R than
> using R directly. And, I suggest, that is what we actually do when using R
> in calculus. We don't use R as an "actual entity" in any strong sense, we
> use R as an abstract set of axioms."
>
>
>
> As I stated regarding quote (3), I don't understand what you are saying "NO"
> to.  But it seems pretty obvious that our minds, and even our computers, do
> not use R directly (after all the percent of R that is uncomputable would
> appear to approach 100% as a limit --- even worse it contains an infinity of
> infinitely complex things), but we have a set of axioms and models about it
> that are quite useful.
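Ed's "approaches 100% as a limit" remark can be made precise; the following is a standard measure-theory fact, added here for the record:

```latex
% The computable reals are countable: each one is generated by some
% finite program, and there are only countably many finite programs.
% Any countable set \{x_1, x_2, \dots\} \subset [0,1] has measure zero:
% cover x_n with an interval of length \varepsilon 2^{-n}, giving
\mu\bigl(\{x \in [0,1] : x \text{ is computable}\}\bigr)
   \le \sum_{n=1}^{\infty} \varepsilon\, 2^{-n} = \varepsilon
   \quad \text{for every } \varepsilon > 0,
% so the computable reals have measure 0 and the uncomputable reals
% have measure 1: "100\%" of the line, though computable points are dense.
```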
>
>
>
>
>
> (5) "What would YES mean? It would mean that somehow we, as uncomputable
> beings, used R as an internal source of intuition about continuity ... not
> thus deriving any conclusions beyond the ones obtainable using the axioms
> about R, but deriving conclusions in a way that we found subjectively
> simpler."
>
>
>
> Again, from my discussion of Quote (3), I don't know what "YES" means. But
> if by using R as an internal source of intuition about continuity you mean
> that we actually model the true complexity of R, I think that is absurd on
> its face.  How can several pounds of matter that is the human brain model
> the true complexity of an infinity of infinitely complex things?
>
>
>
> ---------------------
>
>
>
> I don't understand what your paper on uncomputability has to do with my
> questions and comments about Richard's paper, other than to highlight that
> many things are uncomputable, some in theory and many more in practice, and
> that instead of dealing with many things, imagined or real, in their true
> complexity our minds deal with simplifications of them.
>
>
>
> But such simplifications, particularly since they often let us apply more
> complex analysis where it is most needed, can be very valuable.
>
>
>
> Furthermore, it is not clear to me that consciousness is not computable.  I
> think it is, in fact, computed.  But I have always felt that a given
> computation can never fully model or understand itself.
>
>
>
> Perhaps you are saying that we can never communicate the true complexity of
> our consciousness to someone else, except, to some extent, by reference to
> their own consciousnesses --- that when we use words to describe our
> consciousness we are sending symbols, somewhat like "U" in your article,
> which is defined by reference to the actual sense of consciousness in
> someone else that functions, very roughly, somewhat like R_U in your paper.
>
>
>
> Ed Porter
>
>
>
>
>
> -----Original Message-----
> From: Ed Porter [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2008 9:38 PM
> To: agi@v2.listbox.com
> Subject: RE: [agi] A paper that actually does solve the problem of
> consciousness
>
>
>
> Ben,
>
>
>
> I have never assumed language is all-powerful; in fact I have always assumed,
> at least since boarding school, if not years before, that there are severe
> limits to human understanding.
>
>
>
> I certainly agree there are limits to what we can understand about
> consciousness.  A consciousness cannot completely model itself, because that
> would require the model created in the mind to be as complex as the
> computation that is modeling it, which seems inherently impossible.
>
>
>
> But many aspects of reality can be meaningfully represented by models that
> are substantial simplifications of what they are modeling.  Since any aspect
> of physical reality that we can see or touch without the aid of instruments
> involves at least 10^20 atoms, each vibrating trillions of times a second,
> and each having electrons whose Schrödinger equations vibrate something like
> 10^19 times a second --- we humans naturally perceive, understand, and
> navigate the world only at the level of extremely gross generalizations.
> But through the tools of science, including computers, we have been able to
> create and test models that operate at much finer, or much more complex
> levels.
>
>
>
> I totally disagree with the notion that consciousness and its relation to
> the physical world are inexplicable.  Clearly certain aspects of
> consciousness can be explained in terms of meaningful generalizations.
> Psychology and brain science have already created many such meaningful
> generalizations.  Richard seems to admit as much, when he dismissed all the
> examples I have given in this thread of scientific knowledge about
> consciousness as merely examples of the easy problems of consciousness.
> Easy or not, they are meaningful explanations about consciousness.
>
>
>
> Our understanding of the human mind has grown tremendously in the last
> decade and the rate of our learning on the subject is rapidly
> accelerating.  This includes our understanding of the physical correlates of
> consciousness.
>
>
>
> So, do I think we will ever understand everything about consciousness --- of
> course not.  But do I think that within fifty years we will know much, much
> more about it – of course.
>
>
>
> In fact, I think we will come to understand the sense of awareness that we
> experience in our own consciousnesses as a natural result of a certain type
> of computation, one which has an extremely rich, but somewhat coherently
> controlled feedback loop with its own extremely complex internal state.
>
>
>
> Why do our bodies sense reality?  Because they are located within it,
> and have systems for sensing and affecting reality.
>
>
>
> Why do we experience consciousness?  Because computation in our mind is
> located within the mind and has systems for sensing and affecting its
> states.
>
>
>
> It's not quite that simple, but that is a central part of the puzzle.  It is
> sort of a Zen thing.  But I hope people on this list open their minds to the
> concept that a human consciousness is a special type of computation.  It is
> a computation that includes the generation of a sense of experiencing and
> understanding a sequence of conscious concepts by simultaneous activation of
> prior experiences related to each such concept, being projected into a mind
> previously activated by the grounding of previously selected concepts, so as
> to provide a sense of grounding for those concepts that is appropriate to
> the sequence of prior activations.
>
>
>
> The complexity of the brain, just in terms of neurons (10^11), is equal to a
> large football stadium (10^5 seats) in which every single seat is itself
> 10 large football stadiums in which each seat corresponds to an agent with
> connections to 100 to 10,000 other such agents, having memory at each of
> those connections; and there are mechanisms for communicating information to
> the whole stadium of stadiums at once, and there are lots of local channels
> and screens.
>
>
>
> Imagine what a complexly dynamic crowd that could be.
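For what it's worth, the numbers in that analogy check out as plain arithmetic; a quick sketch using Ed's figures (the variable names are mine):

```python
# Sanity-checking the stadium-of-stadiums analogy with Ed's figures.
neurons = 10 ** 11          # neurons in a human brain
stadium_seats = 10 ** 5     # seats in one large football stadium
stadiums_per_seat = 10      # each seat "is itself 10 large football stadiums"

# one agent per seat of the inner stadiums = one agent per neuron
agents = stadium_seats * stadiums_per_seat * stadium_seats
assert agents == neurons

# 100 to 10,000 connections per agent, with memory at each connection
low, high = agents * 100, agents * 10_000
print(f"total connections: {low:.0e} to {high:.0e}")   # 1e+13 to 1e+15
```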
>
>
>
> The more you think about it the more it makes sense.  I read a book called
> the "Minds of Robots" in 1964, which said that consciousness was
> computation, but it was not until I was well into my '69-'70
> independent study on AI with a comprehensive reading list from Minsky,
> understood roughly the numbers associated with the computation of the brain,
> understood experiential computing based on Minsky's K-line theory, and had
> thought about it on a couple of acid trips that I truly started to
> understand how straightforward such a statement is.
>
>
>
> (Please note I have not taken any acid in over three decades and I am not
> advocating its use outside of the care of a responsible psychiatrist.)
>
>
>
> ----------------
>
>
>
> With regard to your paper, I read it, but I did not spend the time that
> probably would be required for me to understand it.
>
>
>
> Unlike you, I was only briefly bothered by the fact that most of the real
> number line was full of irrational numbers.  I had very little trouble
> understanding the concept of a limit in calculus.  If some solution can be
> reasonably shown to have an error smaller than any you would ever be
> concerned with, that's good enough for me.
>
>
>
> I guess this is because I have never been one for theoretical purity.  In
> fact, I tend to instinctually distrust it.  In fact, I think all kids should
> be taught in school to distrust, at least to some degree, all theories (as
> well as their own senses and memories).  I was originally in favor of
> spending one day a year in high school science to discuss intelligent design
> if it could be part of an honest discussion about why, when, and to
> what degree we should trust scientific theory.  Once I found that most of
> the intelligent design texts were total, closed minded propaganda, I changed
> my mind.
>
>
>
> There were some parts of your paper I particularly did not understand.  Let
> me quote them and then ask you about them.
>
>
>
> ---------------------
>
>
>
> (1) "In the simplest case, A2 may represent U directly in the language, using
> a single expression"
>
>
>
> How can U be directly represented in the language if it is uncomputable?
>
>
>
> I assume you consider any irrational number, such as pi, to be uncomputable
> (although pi at least has a formula that with enough computation can
> approach it as a limit – I assume that for most real numbers, if there is
> such a formula, we do not know it.)  Merely communicating the symbol pi only
> represents the number if the agent receiving the communication has a more
> detailed definition, but any definition, such as a formula for iteratively
> approaching pi, which presumably is what you mean by R_U would only be an
> approximation.
>
>
>
> So U could never be fully represented unless one had infinite time --- and I
> generally consider it a waste of time to think about infinite time unless
> there is something valuable about such considerations that has a use in much
> more human-sized chunks of time.
>
>
>
> In fact, it seems the major message of quantum mechanics is that even
> physical reality doesn't have the time or machinery to compute uncomputable
> things.  So the real number line is not really real.  It is at best a
> construct of the human mind that can at best only be approximated in part.
>
>
>
> (2) complexity(U) < complexity(R_U)
>
>
>
> Because I did not understand how U could be represented, and how R_U could
> be anything other than an approximation for any practical purposes, I didn't
> understand the meaning of the above line from your article.
>
>
>
> If U and R_U have the meaning I guessed in my discussion of text quote (1),
> above, U could not be meaningfully representable in the language, other than
> by a symbol that references some definition (presumably R_U), which, in order
> even to be able to approximate U's uncomputable complexity, would have to be
> more complex than U itself.
>
>
>
> So why wouldn't this inequality always be true?
>
>
>
> (3) complexity(real number line R) <>
>
>
>
> I didn't understand this formula because I don't know what the "<>" symbol
> means and I don't know if some text was supposed to follow after it.
>
>
>
> (4) If NO, then it means the mind is better off using the axioms for R than
> using R directly. And, I suggest, that is what we actually do when using R
> in calculus. We don't use R as an "actual entity" in any strong sense, we
> use R as an abstract set of axioms.
>
>
>
> From quote (4) above it is clear I don't understand what you are saying "NO"
> to.  But it seems pretty obvious that our minds, and even our computers, do
> not use R directly (after all the percent of it that is uncomputable would
> appear to approach 100% as a limit --- even worse it contains an infinity of
> infinitely complex things), but we have a set of axioms and models about it
> that are quite useful.
>
>
>
>
>
> (5) What would YES mean? It would mean that somehow we, as uncomputable
> beings, used R as an internal source of intuition about continuity ... not
> thus deriving any conclusions beyond the ones obtainable using the axioms
> about R, but deriving conclusions in a way that we found subjectively
> simpler.
>
>
>
> Again, from Quote (4), I don't know what "YES" means. But if by using R as an
> internal source of intuition about continuity you mean that we actually
> model the true complexity of R, I think that is absurd on its face.  How can
> several pounds of matter that is the human brain model an infinity of
> infinitely complex things?
>
>
>
> ---------------------
>
>
>
> I don't understand what your paper on uncomputability has to do with my
> questions and comments about Richard's paper, other than to highlight
> profoundly that many things are uncomputable.  But at least since my
> 1969-1970 study of AI I have always felt the true complexity of any one
> human consciousness would be far beyond human comprehension.  After all I am
> the one attacking Richard's paper for not discussing depth and complexity of
> computation as a source of our perception of consciousnesses richness.
>
>
>
> As I said above, it is clearly impossible for the brain to be able to
> understand itself completely, or even anything close to completely.  I have
> never doubted that.  But that does not necessarily mean that we cannot come
> to know the brain and mind, through the use of computers and models, with as
> much specificity as we can understand most aspects of physical reality that
> are anywhere nearly as complex.
>
>
>
> I do not think the human mind is all irreducible complexity.  Remember that
> chaos theory is the study of systems that are a mixture of randomness and
> regularity. Despite its complexity, I think human consciousness has enough
> regularities that in fifty years we will have a surprising and
> philosophically transformative degree of understanding about it – although
> not total understanding.
>
>
>
> Remember most people in the know are predicting human-level AI in 20 years,
> so that would mean the level of understanding we would have of the human
> mind in 50 years, barring a major collapse of civilization, would benefit
> from a full 30 years of superhuman intelligence and immeasurably better
> brain scanning and interfacing technology.
>
>
>
> Ed Porter
>
>
>
>
>
>
>
>
>
>
>
> -----Original Message-----
> From: Ben Goertzel [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 19, 2008 5:39 PM
>
> To: agi@v2.listbox.com
> Subject: Re: [agi] A paper that actually does solve the problem of
> consciousness
>
>
>
> Ed,
>
>
>
> I'd be curious for your reaction to
>
>
>
> http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html
>
>
>
> which explores the limits of scientific and linguistic explanation, in
>
> a different but possibly related way to Richard's argument.
>
>
>
> Science and language are powerful tools for explanation but there is
>
> no reason to assume they are all-powerful.  We should push them as far
>
> as we can, but no further...
>
>
>
> I agree with Richard that according to standard scientific notions of
>
> explanation, consciousness and its relation to the physical world are
>
> inexplicable.  My intuition and reasoning are probably not exactly the
>
> same as his, but there seems some similarity btw our views...
>
>
>
> -- Ben G
>
>
>
>
>
> On Wed, Nov 19, 2008 at 5:27 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
>
>> Richard,
>
>>
>
>>
>
>>
>
>> (the second half of this post, the part starting with the all-capitalized
>
>> heading, is the most important)
>
>>
>
>>
>
>>
>
>> I agree with your extreme cognitive semantics discussion.
>
>>
>
>>
>
>>
>
>> I agree with your statement that one criterion for "realness" is the
>
>> directness and immediateness of something's phenomenology.
>
>>
>
>>
>
>>
>
>> I agree with your statement that, based on this criterion for "realness,"
>
>> many conscious phenomena, such as qualia, which have traditionally fallen
>
>> under the hard problem of consciousness seem to be "real."
>
>>
>
>>
>
>>
>
>> But I have problems with some of the conclusions you draw from these
>> things,
>
>> particularly in your "Implications" section at the top of the second
>> column
>
>> on Page 5 of your paper.
>
>>
>
>>
>
>>
>
>> There you state
>
>>
>
>>
>
>>
>
>> "…the correct explanation for consciousness is that all of its various
>
>> phenomenological facets deserve to be called as "real" as any other
>> concept
>
>> we have, because there are no meaningful objective standards that we could
>
>> apply to judge them otherwise."
>
>>
>
>>
>
>>
>
>> That aspects of consciousness seem real does not provide much of an
>
>> "explanation for consciousness."  It says something, but not much.  It
>> adds
>
>> little to Descartes' "I think therefore I am."  I don't think it provides
>
>> much of an answer to any of the multiple questions Wikipedia associates
>> with
>
>> Chalmer's hard problem of consciousness.
>
>>
>
>>
>
>>
>
>> You further state that some aspects of consciousness have a unique status
>> of
>
>> being beyond the reach of scientific inquiry and give a purported reason
>> why
>
>> they are beyond such a reach. Similarly you say:
>
>>
>
>>
>
>>
>
>> "…although we can never say exactly what the phenomena of consciousness
>> are,
>
>> in the way that we give scientific explanations for other things, we can
>
>> nevertheless say exactly why we cannot say anything: so in the end, we can
>
>> explain it."
>
>>
>
>>
>
>>
>
>> First, I would point out as I have in my prior papers that, given the
>
>> advances that are expected to be made in AGI, brain scanning and brain
>
>> science in the next fifty years, it is not clear that consciousness is
>
>> necessarily any less explainable than are many other aspects of physical
>
>> reality.  You admit there are easy problems of consciousness that can be
>
>> explained, just as there are easy parts of physical reality that can be
>
>> explained. But it is not clear that the percent of consciousness that will
>
>> remain a mystery in fifty years is any larger than the percent of basic
>
>> physical reality that will remain a mystery in that time frame.
>
>>
>
>>
>
>>
>
>> But even if we accept as true your statement that certain phenomena of
>
>> consciousness are beyond analysis, that does little to explain
>
>> consciousness.  In fact, it does not appear to answer any of the hard
>
>> problems of consciousness.  For example, just because (a) we are conscious
>
>> of the distinction used in our own mind's internal representation between
>
>> sensation of the colors red and blue, (b) we allegedly cannot analyze that
>
>> difference further, and (c) that distinction seems subjectively real to us
>
>> --- that does not shed much light on whether or not a p-zombie would be
>
>> capable of acting just like a human without having consciousness of red
>> and
>
>> blue color qualia.
>
>>
>
>>
>
>>
>
>> It is not even clear to me that your paper shows consciousness is not an
>
>> "artifact," as your abstract implies.  Just because something is "real"
>
>> does not mean it is not an "artifact", in many senses of the word, such as
>
>> an unintended, secondary, or unessential, aspect of something.
>
>>
>
>>
>
>>
>
>>
>
>>
>
>> THE REAL WEAKNESS OF YOUR PAPER IS THAT IT PUTS WAY TOO MUCH EMPHASIS ON
>> THE
>
>> PART OF YOUR MOLECULAR FRAMEWORK THAT ALLEGEDLY BOTTOMS OUT, AND NOT
>> ENOUGH
>
>> ON THE PART OF THE FRAMEWORK YOU SAY REPORTS A SENSE OF REALNESS DESPITE
>
>> SUCH BOTTOMING OUT  -- THE SENSE OF REALNESS THAT IS MOST ESSENTIAL TO
>
>> CONSCIOUSNESS.
>
>>
>
>>
>
>>
>
>> It is my belief that if you want to understand consciousness in the
>> context
>
>> of the types of things discussed in your paper, you should focus on the part
>> of
>
>> the molecular framework, which you imply is largely in the foreground,
>
>> that prevents the system from returning with no answer, even when trying
>> to
>
>> analyze a node such as a lowest level input node for the color red in a
>
>> given portion of the visual field.
>
>>
>
>>
>
>>
>
>> This is the part of your molecular framework that
>
>>
>
>>
>
>>
>
>> "…because of the nature of the representations used in the foreground,
>> there
>
>> is no way for the analysis mechanism to fail to return some kind of
>> answer,
>
>> because a non-existent answer would be the same as representing the color
>> of
>
>> red as "nothing," and in that case all colors would be the same." (Page 3,
>
>> Col.2, first full paragraph.)
>
>>
>
>>
>
>>
>
>> It is also presumably the part of your molecular framework that
>
>>
>
>>
>
>>
>
>> "…report that 'There is definitely something that it is like to be
>
>> experiencing the subjective essence of red, but that thing is ineffable
>> and
>
>> inexplicable.' " (Page 3, Col. 2, 2nd full paragraph.)
>
>>
>
>>
>
>>
>
>> This is the part of your system that is providing the subjective
>> experience
>
>> that you say is providing the "realness" to your conscious experience.
>> This
>
>> is where your paper should focus.  How does it provide this sense of
>
>> realness?
>
>>
>
>>
>
>>
>
>> Unfortunately, your description of the molecular framework provides some,
>
>> but very little, insight into what might be providing this subjective
>> sense
>
>> of experience, that is so key to the conclusions of your paper.
>
>>
>
>>
>
>>
>
>> In multiple prior posts on this thread I have said I believe the real
>> source
>
>> of consciousness appears to lie in such a molecular framework, but that to
>
>> have anything approaching a human level of such consciousness this
>
>> framework, and its computations that give rise to consciousness, have to
>> be
>
>> extremely complex.  I have also emphasized that brain scientists who have
>
>> already done research on the neural correlates of consciousness, tend to
>
>> indicate humans usually only report consciousness of things associated
>> with
>
>> fairly broadly spread neural activation, which would normally involve many
>
>> billions or trillions of inter-neuron messages per second.  I have posited
>
>> that widespread activation of the nodes directly and indirectly associated
>
>> with a given "conscious" node, provides dynamic grounding for the meaning
>> of
>
>> the conscious node.
>
>>
>
>>
>
>>
>
>> As I have pointed out, we know of nothing about physical reality that is
>
>> anything other than computation (if you consider representation to be part
>
>> of computation).  Similarly there is nothing our subjective experience can
>
>> tell us about our own consciousnesses that is other than computation.  One
>
>> of the key words we humans use to describe our consciousnesses is
>
>> "awareness."  Awareness is created by computation.  It is my belief that
>
>> this awareness comes from the complex, dynamically focused, and meaningful
>
>> way in which our thought processes compute in interaction with themselves.
>
>>
>
>>
>
>>
>
>> Ed Porter
>
>>
>
>>
>
>>
>
>> P.S. (With regard to the alleged bottoming out reported in your paper: as
>> I
>
>> have pointed out in previous threads, even the lowest level nodes in any
>
>> system would normally have associations that would give them a type and
>
>> degree of grounding and, thus, further meaning.  So that spreading
>> activation
>
>> would normally not bottom out when it reaches the lowest level nodes.  But
>
>> it would be subject to circularity, or a lack of information about lowest
>
>> nodes other than what could be learned from their associations with other
>
>> nodes in the system.)
>
>>
>
>>
>
>>
>
>>
>
>>
>
>>
>
>>
>
>> -----Original Message-----
>
>> From: Richard Loosemore [mailto:[EMAIL PROTECTED]
>
>> Sent: Wednesday, November 19, 2008 1:57 PM
>
>>
>
>> To: agi@v2.listbox.com
>
>> Subject: Re: [agi] A paper that actually does solve the problem of
>
>> consciousness
>
>>
>
>>
>
>>
>
>> Ben Goertzel wrote:
>
>>
>
>>> Richard,
>
>>
>
>>>
>
>>
>
>>> I re-read your paper and I'm afraid I really don't grok why you think it
>
>>
>
>>> solves Chalmers' hard problem of consciousness...
>
>>
>
>>>
>
>>
>
>>> It really seems to me like what you're suggesting is a "cognitive
>
>>
>
>>> correlate of consciousness", to morph the common phrase "neural
>
>>
>
>>> correlate of consciousness" ...
>
>>
>
>>>
>
>>
>
>>> You seem to be stating that when X is an unanalyzable, pure atomic
>
>>
>
>>> sensation from the perspective of cognitive system C, then C will
>
>>
>
>>> perceive X as a raw quale ... unanalyzable and not explicable by
>
>>
>
>>> ordinary methods of explication, yet, still subjectively real...
>
>>
>
>>>
>
>>
>
>>> But, I don't see how the hypothesis
>
>>
>
>>>
>
>>
>
>>> "Conscious experience is **identified with** unanalyzable mind-atoms"
>
>>
>
>>>
>
>>
>
>>> could be distinguished empirically from
>
>>
>
>>>
>
>>
>
>>> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
>>
>
>>>
>
>>
>
>>> I think finding cognitive correlates of consciousness is interesting,
>
>>
>
>>> but I don't think it constitutes solving the hard problem in Chalmers'
>
>>
>
>>> sense...
>
>>
>
>>>
>
>>
>
>>> I grok that you're saying "consciousness feels inexplicable because it
>
>>
>
>>> has to do with atoms that the system can't explain, due to their role as
>
>>
>
>>> its primitive atoms" ... and this is a good idea, but, I don't see how
>
>>
>
>>> it bridges the gap btw subjective experience and empirical data ..
>
>>
>
>>>
>
>>
>
>>> What it does is explain why, even if there *were* no hard problem,
>
>>
>
>>> cognitive systems might feel like there is one, in regard to their
>
>>
>
>>> unanalyzable atoms
>
>>
>
>>>
>
>>
>
>>> Another worry I have is: I feel like I can be conscious of my son, even
>
>>
>
>>> though he is not an unanalyzable atom.  I feel like I can be conscious
>
>>
>
>>> of the unique impression he makes ... in the same way that I'm conscious
>
>>
>
>>> of redness ... and, yeah, I feel like I can't fully explain the
>
>>
>
>>> conscious impression he makes on me, even though I can explain a lot of
>
>>
>
>>> things about him...
>
>>
>
>>>
>
>>
>
>>> So I'm not convinced that atomic sensor input is the only source of raw,
>
>>
>
>>> unanalyzable consciousness...
>
>>
>
>>
>
>>
>
>> My first response to this is that you still don't seem to have taken
>
>>
>
>> account of what was said in the second part of the paper  -  and, at the
>
>>
>
>> same time, I can find many places where you make statements that are
>
>>
>
>> undermined by that second part.
>
>>
>
>>
>
>>
>
>> To take the most significant example:  when you say:
>
>>
>
>>
>
>>
>
>>  > But, I don't see how the hypothesis
>
>>
>
>>  >
>
>>
>
>>  > "Conscious experience is **identified with** unanalyzable mind-atoms"
>
>>
>
>>  >
>
>>
>
>>  > could be distinguished empirically from
>
>>
>
>>  >
>
>>
>
>>  > "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
>>
>
>>
>
>>
>
>> ... there are several concepts buried in there, like [identified with],
>
>>
>
>> [distinguished empirically from] and [correlated with] that are
>
>>
>
>> theory-laden.  In other words, when you use those terms you are
>
>>
>
>> implicitly applying some standards that have to do with semantics and
>
>>
>
>> ontology, and it is precisely those standards that I attacked in part 2
>
>>
>
>> of the paper.
>
>>
>
>>
>
>>
>
>> However, there is also another thing I can say about this statement,
>
>>
>
>> based on the argument in part one of the paper.
>
>>
>
>>
>
>>
>
>> It looks like you are also falling victim to the argument in part 1, at
>
>>
>
>> the same time that you are questioning its validity:  one of the
>
>>
>
>> consequences of that initial argument was that *because* those
>
>>
>
>> concept-atoms are unanalyzable, you can never do any such thing as talk
>
>>
>
>> about their being "only correlated with a particular cognitive event"
>
>>
>
>> versus "actually being identified with that cognitive event"!
>
>>
>
>>
>
>>
>
>> So when you point out that the above distinction seems impossible to
>
>>
>
>> make, I say:  "Yes, of course:  the theory itself just *said* that!".
>
>>
>
>>
>
>>
>
>> So far, all of the serious questions that people have placed at the door
>
>>
>
>> of this theory have proved susceptible to that argument.
>
>>
>
>>
>
>>
>
>> That was essentially what I did when talking to Chalmers.  He came up
>
>>
>
>> with an objection very like the one you gave above, so I said: "Okay,
>
>>
>
>> the answer is that the theory itself predicts that you *must* find that
>
>>
>
>> question to be a stumbling block ..... AND, more importantly, you should
>
>>
>
>> be able to see that the strategy I am using here is a strategy that I
>
>>
>
>> can flexibly d
>
> ...
>
> [Message clipped]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"I intend to live forever, or die trying."
-- Groucho Marx


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com
