Richard,

So are you saying that: "According to the ordinary scientific standards of 'explanation', the subjective experience of consciousness cannot be explained ... and as a consequence, the relationship between subjective consciousness and physical data (as required to be elucidated by any solution to Chalmers' 'hard problem' as normally conceived) also cannot be explained."
If so, then: according to the ordinary scientific standards of explanation, you are not explaining consciousness, nor explaining the relation between consciousness and the physical ... but are rather **explaining why, due to the particular nature of consciousness and its relationship to the ordinary scientific standards of explanation, this kind of explanation is not possible**?

ben g

On Wed, Nov 19, 2008 at 4:05 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Ben Goertzel wrote:
>
>> Richard,
>>
>> My first response to this is that you still don't seem to have taken
>> account of what was said in the second part of the paper - and, at the
>> same time, I can find many places where you make statements that are
>> undermined by that second part.
>>
>> To take the most significant example: when you say:
>>
>>> But, I don't see how the hypothesis
>>>
>>> "Conscious experience is **identified with** unanalyzable mind-atoms"
>>>
>>> could be distinguished empirically from
>>>
>>> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>>
>> ... there are several concepts buried in there, like [identified with],
>> [distinguished empirically from] and [correlated with], that are
>> theory-laden. In other words, when you use those terms you are
>> implicitly applying some standards that have to do with semantics and
>> ontology, and it is precisely those standards that I attacked in part 2
>> of the paper.
>>
>> However, there is also another thing I can say about this statement,
>> based on the argument in part one of the paper.
>> It looks like you are also falling victim to the argument in part 1, at
>> the same time that you are questioning its validity: one of the
>> consequences of that initial argument was that *because* those
>> concept-atoms are unanalyzable, you can never do any such thing as talk
>> about their being "only correlated with a particular cognitive event"
>> versus "actually being identified with that cognitive event"!
>>
>> So when you point out that the above distinction seems impossible to
>> make, I say: "Yes, of course: the theory itself just *said* that!".
>>
>> So far, all of the serious questions that people have placed at the
>> door of this theory have proved susceptible to that argument.
>>
>> Well, suppose I am studying your brain with a super-advanced
>> brain-monitoring device ...
>>
>> Then, suppose that I, using the brain-monitoring device, identify the
>> brain response pattern that uniquely occurs when you look at something
>> red ...
>>
>> I can then pose the question: Is your experience of red *identical* to
>> this brain-response pattern ... or is it correlated with this
>> brain-response pattern?
>>
>> I can pose this question even though the "cognitive atoms" corresponding
>> to this brain-response pattern are unanalyzable from your perspective...
>>
>> Next, note that I can also turn the same brain-monitoring device on
>> myself...
>>
>> So I don't see why the question is unaskable ... it seems askable,
>> because these concept-atoms in question are experience-able even if not
>> analyzable... that is, they still form mental content even though they
>> aren't susceptible to explanation as you describe it...
>> I agree that, subjectively or empirically, there is no way to
>> distinguish
>>
>> "Conscious experience is **identified with** unanalyzable mind-atoms"
>>
>> from
>>
>> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>>
>> and it seems to me that this indicates you have NOT solved the hard
>> problem, but only restated it in a different (possibly useful) way
>
> There are several different approaches and comments that I could take
> with what you just wrote, but let me focus on just one: the last one.
>
> When you make a statement such as "... it seems to me that ... you have
> NOT solved the hard problem, but only restated it", you are implicitly
> bringing to the table a set of ideas about what it means to "solve" this
> problem, or "explain" consciousness.
>
> Fine so far: everyone uses the rules of explanation that they have
> acquired over a lifetime - and of course in science we all roughly agree
> on a set of ideas about what it means to explain things.
>
> But what I am trying to point out in this paper is that because of the
> nature of intelligent systems and how they must do their job, the very
> concept of *explanation* is undermined by the topic that in this case we
> are trying to explain. You cannot just go right ahead and apply a
> standard of explanation right out of the box (so to speak) because,
> unlike explaining atoms and explaining stars, in this case you are
> trying to explain something that interferes with the notion of
> "explanation".
>
> So when you imply that the theory I propose is weak *because* it
> provides no way to distinguish:
>
> "Conscious experience is **identified with** unanalyzable mind-atoms"
>
> from
>
> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
> you are missing the main claim that the theory tries to make: that such
> distinctions are broken precisely *because* of what is going on with the
> explanandum.
> You have got to get this point to be able to understand the paper.
>
> I mean, it is okay to disagree with the point and say why (to talk about
> what it means to "explain things", to talk about the connection between
> the explanandum and the methods and basic terms of the thing that we
> call "explaining things"). That would be fine.
>
> But at the moment it seems to me that you have been through several
> passes at simply re-stating your position that you do not think the
> theory succeeds in explaining the subject, whereas I cannot bring you
> round to talking about what is the most important idea in the paper:
> that such simple statements as the ones you are making are just using a
> concept of explanation without examining it.
>
> So we still have not addressed the content of part 2 of the paper. I
> did try to say all of the above in the last post, but you didn't mention
> that bit in your reply ;-)
>
> Richard Loosemore
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects."
-- Robert Heinlein