On 01 May 2013, at 20:09, Craig Weinberg wrote:



On Wednesday, May 1, 2013 10:49:11 AM UTC-4, Bruno Marchal wrote:

On 30 Apr 2013, at 20:58, Craig Weinberg wrote:



On Wednesday, April 24, 2013 10:31:44 AM UTC-4, Bruno Marchal wrote:

On 24 Apr 2013, at 15:40, Craig Weinberg wrote:



On Wednesday, April 24, 2013 8:50:07 AM UTC-4, Bruno Marchal wrote:

On 23 Apr 2013, at 22:26, Craig Weinberg wrote:



On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:



On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg <whats...@gmail.com> wrote:


"If you think about your own vision, you can see millions of pixels constantly, you are aware of the full picture, but a computer can't do that, the cpu can only know about 32 or 64 pixels, eventually multiplied by number of kernels, but it see them as single bit's so in reality the can't be conscious of a full picture, not even of the full color at a single pixel.



He is making the same mistake Searle did regarding the Chinese Room. He is conflating what the CPU can see at one time (analogous to the rule follower in the Chinese Room) with what the program can know. Consider the program of a neural network: it can be run by a sequentially operating CPU that processes one connection at a time, yet the simulated network itself can see any arbitrary number of inputs at once.
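
For concreteness, here is a minimal Python sketch of that point (illustrative only; the function name and the numbers are made up). The loop touches one connection at a time, the way a CPU does, yet the value it computes depends on every pixel of the input at once:

def neuron_output(weights, pixels, bias=0.0):
    # Strictly sequential: one weight/pixel pair per iteration (the CPU's-eye view).
    total = bias
    for w, p in zip(weights, pixels):
        total += w * p
    # The returned value nonetheless reflects all of the pixels together.
    return 1.0 if total > 0 else 0.0

# A toy "neuron" that fires only if enough of a million pixels are lit.
pixels = [1] * 600_000 + [0] * 400_000
weights = [1e-6] * 1_000_000
print(neuron_output(weights, pixels, bias=-0.5))  # prints 1.0; no single step ever "saw" the whole image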

How does he propose that OCR software can recognize letters if it can only see a single pixel at a time?

Who says OCR software can recognize letters? All that it needs to do is execute some algorithm sequentially and blindly against a table of expected values. There need not be any recognition of the character as a character at all, let alone any "seeing". A program could convert a Word document into an input file for an OCR program without there ever being any optical activity: no camera, no screen captures, no monitor or printer at all. Completely in the dark, the bits of the Word file could be converted into the bits of an emulated optical scan, and presto, invisible optics.
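
A minimal Python sketch of that kind of "blind" OCR (illustrative only; the tiny 3x3 "font" and all names are made up). The input bits come from stored glyph tables rather than from any camera, and "recognition" is just counting bit agreements against a table of expected values:

GLYPHS = {                      # the table of expected values: a 3x3 bitmap per letter
    "T": ["111",
          "010",
          "010"],
    "L": ["100",
          "100",
          "111"],
}

def emulate_scan(char):
    # Produce the bits an optical scan "would have" produced, with no optics at all.
    return [int(b) for row in GLYPHS[char] for b in row]

def recognize(bits):
    # Pick the glyph whose stored bits agree with the input in the most positions.
    best, best_score = None, -1
    for name, rows in GLYPHS.items():
        template = [int(b) for row in rows for b in row]
        score = sum(1 for a, b in zip(bits, template) if a == b)
        if score > best_score:
            best, best_score = name, score
    return best

print(recognize(emulate_scan("L")))  # prints "L"; nothing was ever seen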

Searle wasn't wrong. The whole point of the Chinese Room is to point out that computation is a disconnected, anesthetic function which is accomplished with no need for understanding of larger contexts.

Searle might be right about non-comp, but his argument has been shown to be invalid by many.

I'm surprised that you would try to pass that off as truth, Bruno. You have so much tolerance for doubt and uncertainty, yet you claim that it "has been shown invalid". In whose opinion?

It is not an opinion; it is a fact that you can verify if you are patient enough. The refutation is already in Dennett and Hofstadter's book "The Mind's I". Searle concludes that the man in the room does not understand Chinese, and that is right, but that cannot refute comp, as the man in the room plays the role of a CPU, not of the high-level program on which the consciousness of the Chinese guy supervenes. It is a simple confusion of levels.

The high-level program is just a case-by-case syntactic handler, though. It's not high level; it's just a big lookup table. There is no confusion of levels. Neither the Chinese Room as a whole, nor the book, nor the guy passing messages and reading the book understands Chinese at all. The person who understood Chinese and wrote the book is dead.
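
A minimal Python sketch of such a case-by-case handler (illustrative only; the table entries are made up). Every input is looked up in a fixed table and the stored reply is copied out; nothing in the program represents what the symbols mean:

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我没有名字。",   # "What is your name?" -> "I have no name."
}

def room(message):
    # Copy out the stored reply for a recognized string, or a stock evasion otherwise.
    return RULE_BOOK.get(message, "请再说一遍。")   # "Please say that again."

print(room("你好吗?"))  # a "correct" Chinese answer produced by pure string matching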

The kind of reasoning that you (and Dennett and Hofstadter) are using would say that someone who is color blind is not impaired if they memorize the answers to a color vision test. If I can retake the test as many times as I want, and I can know which answers I get wrong, I don't even need to cheat or get lucky. I can compute the correct answers as if I could see color in spite of my complete color blindness.
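
A minimal Python sketch of that retake strategy (illustrative only; the test data are made up). The solver never inspects a color; it just cycles each wrong answer to the next option until the grader stops flagging it:

TRUE_ANSWERS = ["12", "8", "29"]          # what a color-sighted reader would report
CHOICES = ["12", "8", "29", "nothing"]    # the options available on the answer sheet

def grade(answers):
    # The tester reports which questions were wrong, never the right answers.
    return [i for i, (a, t) in enumerate(zip(answers, TRUE_ANSWERS)) if a != t]

def pass_by_retaking():
    answers = [CHOICES[0]] * len(TRUE_ANSWERS)
    while True:
        wrong = grade(answers)
        if not wrong:
            return answers                # a perfect score, with no color vision involved
        for i in wrong:                   # rotate each wrong answer to the next option
            answers[i] = CHOICES[(CHOICES.index(answers[i]) + 1) % len(CHOICES)]

print(pass_by_retaking())  # prints ['12', '8', '29']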

What you are saying is circular. You assume that the Chinese guy who wrote the book is running on a program, but if you knew that was the case, then there would be no point in the thought experiment. You don't know that at all, though, and the Chinese Room shows why computation need only be performed on one level and never leads to understanding on any other.

I am not sure I can help you. You confuse the levels. You don't really try to understand the point, which means you talk as if you knew that comp is false.

I don't expect you to help me, I'm trying to help you.

Of course. But what helps me is reasoning, not personal conviction.



I don't know that comp is false, but I know that if it isn't, it won't be because of the reasons you are suggesting. Comp may be true in theory, but none of the replies to the Chinese Room are adequate, or even mildly compelling, to me.

Searle confuses a program with the universal program running that program.

This page http://plato.stanford.edu/entries/chinese-room/ is quite thorough, and lists the most well-known replies, yet it concludes:

"There continues to be significant disagreement about what processes create meaning, understanding, and consciousness, as well as what can be proven a priori by thought experiments."

Thought experiments are like proofs in math. Some are valid, some are not, some are fatally invalid, and some can be corrected or made more precise. The debate often focuses on the truth of comp versus non-comp, and that sometimes involves opinion. I don't really play that game.

Game? All it's saying is that there is no such consensus as you claim. The fact that you claim a consensus smells to me like a major insecurity. Very much a 'pay no attention to the man behind the curtain' response.

Without that consensus, there would be no scientific research or beliefs. The consensus is not on truth in general, but on the means of communication. Your answer betrays that you have more of a pseudo-religious agenda than an inquiry into what could possibly be true or false.

My agenda is to understand consciousness as it actually is, rather than as a theory would like it to be.


Understanding is always in the frame of some assumption. You confuse the experience with the possible explanation for the existence of that experience (the experience is indeed more direct, but that can be due to the existence of the brain).

The replies listed are not at all impressive to me, and are all really variations on the same sophistry. Obviously there is a difference between understanding a conversation and simply copying a conversation in another language. There is a difference between painting a masterpiece and doing a paint-by-numbers or spray-painting through a stencil. This is what computers and machines are for: to free us from having to work and think ourselves. If the machine had to think and feel that it was working, like a person does, then it would want servants too. Machines don't want servants, though, because they don't know that they are working, and they function without having to think or exert effort.

And this is begging the question.

Only if you are assuming from the start that comp is true.

Not at all. It is rare that I do not assume comp, but here I was not assuming it. Our positions are not symmetrical. I suggest a theory and reason from there. You pretend to know a truth, and use this as a pretext for not looking at a theory. I doubt the conditions for a dialogue are possible.

I'm not pretending to know a truth; I am stating that I understand the point that Searle and Leibniz made, which the replies to that point do not. They underestimate the depth of consciousness, and mistake copying and pasting Shakespeare for being Shakespeare.

But here you betray that you are again begging the question. What you say is just "no doctor". So you introduce either an infinitely low level of comp (= non-comp), or something non-Turing-emulable in the brain or the body.

Bruno




Craig


Bruno







Craig


Bruno





Craig


Bruno




Craig


Jason
