On Oct 15, 2012, at 4:10 AM, Craig Weinberg <whatsons...@gmail.com> wrote:


>> >> But since you misunderstand the first assumption you misunderstand the 
>> >> whole argument. 
>> > 
>> > 
>> > Nope. You misunderstand my argument completely. 
>> 
>> Perhaps I do, but you specifically misunderstand that the argument 
>> depends on the assumption that computers don't have consciousness.
> 
> No, I do understand that.

Good.

>> You 
>> also misunderstand (or pretend to) the idea that a brain or computer 
>> does not have to know the entire future history of the universe and 
>> how it will respond to every situation it may encounter in order to 
>> function.
> 
> Do you have to know the entire history of how you learned English to read 
> these words? It depends on what you mean by 'know'. You don't have to consciously 
> recall learning English, but without that experience, you wouldn't be able to 
> read this. If you had a module implanted in your brain which would allow you 
> to read Chinese, it might give you an acceptable capacity to translate 
> Chinese phonemes and characters, but it would be a generic understanding, not 
> one rooted in decades of human interaction. Do you see the difference? Do you 
> see how words are not only functional data but also names which carry 
> personal significance?

The atoms in my brain don't have to know how to read Chinese. They only need to 
know how to be carbon, nitrogen, oxygen etc. atoms. The complex behaviour which 
is reading Chinese comes from the interaction of billions of these atoms doing 
their simple thing. If the atoms in my brain were put into a Chinese-reading 
configuration, either through a lot of work learning the language or through 
direct manipulation, then I would be able to understand Chinese.
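
To make the "simple parts, complex whole" point concrete, here is a toy 
sketch (Python, purely illustrative -- the rule and the setup are my own 
choice, not anything from Chalmers). Each cell follows one fixed 
three-neighbour rule and knows nothing about the global pattern, yet the 
pattern that emerges from many cells doing their simple thing is complex 
enough to be Turing complete:

# Rule 110: each cell's next state depends only on itself and its two
# neighbours. The local rule is trivial; the global behaviour is not.
RULE = 110  # 0b01101110 encodes the new state for each 3-cell pattern

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40  # a single live cell in a row of 81
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)

The cells no more "know" the pattern than my atoms know Chinese; the 
complexity lives in the configuration, not in the parts.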

>> What are some equivalently simple, uncontroversial things in 
>> what you say that I misunderstand? 
> 
> You think that I don't get that Fading Qualia is a story about a world in 
> which the brain cannot be substituted, but I do. Chalmers is saying 'OK let's 
> say that's true - how would that be? Would your blue be less and less blue? 
> How could you act normally if you...blah, blah, blah'. I get that. It's 
> crystal clear.
> 
> What you don't understand is that this carries a priori assumptions about the 
> nature of consciousness, that it is an end result of a distributed process 
> which is monolithic. I am saying NO, THAT IS NOT HOW IT IS.
> 
> Imagine that we had one eye in the front of our heads and one ear in the 
> back, and that the whole of human history has been a debate over whether 
> walking forward means that objects are moving toward you or whether it means 
> changes in relative volume of sounds.
> 
> Chalmers is saying, 'if we gradually replaced the eye with parts of the ear, 
> how would our sight gradually change to sound, or would it suddenly switch 
> over?' Since both options seem absurd, he concludes that anything that 
> is in the front of the head is an eye and everything on the back is an ear, 
> or that everything has both ear and eye potentials.
> 
> The MR model holds that these two views are not merely substance 
> dual or property dual, they are involuted juxtapositions of each other. The 
> difference between front and back is not merely irreconcilable, it is 
> mutually exclusive by definition in experience. I am not throwing up my hands 
> and saying 'ears can't be eyes because eyes are special', I am positively 
> asserting that there is a way of modeling the eye-ear relation based on an 
> understanding of what time, space, matter, energy, entropy, significance, 
> perception, and participation actually are and how they relate to each other.
> 
> The idea that the newly discovered ear-based model out of the back of our 
> head is eventually going to explain the eye view out of the front is not 
> scientific, it's an ideological faith that I understand to be critically 
> flawed. The evidence is all around us; we have only to interpret it that 
> rather than to keep updating our description of reality to match the 
> narrowness of our fundamental theory. The theory only works for the back view 
> of the world...it says *nothing* useful about the front view. To the True 
> Disbeliever, this is a sign that we need to double down on the back end view 
> because it's the best chance we have. The thinking is that any other position 
> implies that we throw out the back end view entirely and go back to the dark 
> ages of front end fanaticism. I am not suggesting a compromise, I propose a 
> complete overhaul in which we start not from the front and move back or back 
> and move front, but start from the split and see how it can be understood as 
> a double knot - a fold of folds.

I'm sorry, but this whole passage is a non sequitur as far as the fading qualia 
thought experiment goes. You have to explain what you think would happen if 
part of your brain were replaced with a functional equivalent. A functional 
equivalent would stimulate the remaining neurons in the same way as the part 
it replaces. The original paper makes the replacement a computer chip, but 
that is not necessary to the point: it could be any device at all other than 
the normal biological neurons. If consciousness is substrate-dependent (as you 
claim) then the device could do its job of stimulating the neurons normally 
while lacking or differing in consciousness. Since it stimulates the neurons 
normally, you would behave normally. If you didn't, that would be a miracle, 
since your muscles would be receiving normal signals and would therefore have 
to contract normally. Do you at least see this point, or do you think that 
your muscles would do something different?
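
If it helps, the point can be put in code. A minimal sketch (Python; the 
names and the threshold rule are invented for illustration, not a model of 
real neurons): the rest of the system sees only a component's outputs, so 
two components with the same input/output mapping produce identical 
downstream behaviour, whatever they are made of.

# Illustrative only: a stand-in for the replaced part and its functional
# equivalent. Both map the same inputs to the same outputs by construction.

def biological_part(inputs):
    # the original tissue: fire iff the summed input exceeds a threshold
    return sum(inputs) > 1.0

def replacement_device(inputs):
    # any substrate at all, engineered to reproduce the same mapping
    return sum(inputs) > 1.0

def muscle(spike):
    # downstream neurons and muscles see only the output signal
    return "contract" if spike else "relax"

for inputs in ([0.6, 0.7], [0.2, 0.3], [1.5], []):
    assert muscle(biological_part(inputs)) == muscle(replacement_device(inputs))
# The assertion cannot fail: downstream behaviour depends only on the I/O
# mapping, which is what "functional equivalent" means.

Whether the replacement is conscious is a further question; the argument is 
only that your outward behaviour could not change.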


-- Stathis Papaioannou
