Re: Rationals vs Reals in Comp

2013-05-02 Thread Bruno Marchal


On 01 May 2013, at 20:09, Craig Weinberg wrote:




On Wednesday, May 1, 2013 10:49:11 AM UTC-4, Bruno Marchal wrote:

On 30 Apr 2013, at 20:58, Craig Weinberg wrote:




On Wednesday, April 24, 2013 10:31:44 AM UTC-4, Bruno Marchal wrote:

On 24 Apr 2013, at 15:40, Craig Weinberg wrote:




On Wednesday, April 24, 2013 8:50:07 AM UTC-4, Bruno Marchal wrote:

On 23 Apr 2013, at 22:26, Craig Weinberg wrote:




On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:



On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:

If you think about your own vision, you can see millions of pixels  
constantly; you are aware of the full picture. But a computer can't  
do that: the CPU can only know about 32 or 64 pixels, possibly  
multiplied by the number of kernels, and it sees them as single  
bits, so in reality it can't be conscious of a full picture, not  
even of the full color at a single pixel.




He is making the same mistake Searle did regarding the Chinese  
room.  He is conflating what the CPU can see at one time  
(analogous to rule follower in Chinese room) with what the  
program can know.  Consider the program of a neural network: it  
can be processed by a sequentially operating CPU processing one  
connection at a time, but the simulated network itself can see  
any arbitrary number of inputs at once.
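Jason's level distinction can be sketched in a few lines of Python (a toy example with made-up inputs and weights, standing in for no particular network): the loop below plays the role of the sequential CPU and only ever touches one connection at a time, yet the value it accumulates is a function of every input at once.

```python
# Toy illustration: a strictly sequential "CPU" loop computing one
# simulated unit of a neural network. Inputs and weights are made up.
inputs = [0.2, 0.9, 0.4, 0.7]
weights = [0.5, -1.0, 0.25, 0.8]

activation = 0.0
for x, w in zip(inputs, weights):   # one connection at a time
    activation += x * w             # the loop only ever "sees" one pair

# Yet the simulated unit's response depends on all inputs simultaneously.
fires = activation > 0.0
```

The loop is the rule-follower; the function it computes is defined over the whole input vector, which is the level the CPU-centered objection overlooks.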


How does he propose OCR software can recognize letters if it can  
only see a single pixel at a time?


Who says OCR software can recognize letters? All it needs to do is  
execute some algorithm sequentially and blindly against a table of  
expected values. There need not be any recognition of the character  
as a character at all, let alone any seeing. A program could convert  
a Word document into an input file for an OCR program without there  
ever being any optical activity - no camera, no screen caps, no  
monitor or printer at all. Completely in the dark, the bits of the  
Word file could be converted into the bits of an emulated optical  
scan, and presto: invisible optics.
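That "blind table" description can be made concrete. A minimal sketch under stated assumptions: the 3x3 glyph templates below are hypothetical, and real OCR is far more elaborate, but nothing in the code can distinguish bits that came from a camera from bits that came from a converted file.

```python
# Hypothetical 3x3 bitmap templates; a real OCR table would be far larger.
TEMPLATES = {
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
}

def match(bitmap):
    # Blind sequential comparison: pick the template with the fewest
    # mismatched bits (Hamming distance). No "seeing" is involved.
    return min(TEMPLATES,
               key=lambda ch: sum(a != b
                                  for a, b in zip(TEMPLATES[ch], bitmap)))

# These bits could equally be an emulated scan generated from a file.
scan = (1, 1, 1, 0, 1, 0, 0, 1, 0)
```

`match(scan)` returns "T" here purely by table lookup, whatever the provenance of the bits.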


Searle wasn't wrong. The whole point of the Chinese Room is to  
point out that computation is a disconnected, anesthetic function  
which is accomplished with no need for understanding of larger  
contexts.


Searle might be right on non-comp, but his argument has been shown  
invalid by many.


I'm surprised that you would try to pass that off as truth, Bruno.  
You have so much tolerance for doubt and uncertainty, yet you  
claim that it has been shown invalid. In whose opinion?


It is not an opinion; it is a fact that you can verify if patient  
enough. The refutation is already in Dennett and Hofstadter's "The  
Mind's I". Searle concludes that the man in the room does not  
understand Chinese, and that is right, but that cannot refute  
comp, as the man in the room plays the role of a CPU, and not of  
the high-level program on which the consciousness of the Chinese  
guy supervenes. It is a simple confusion of levels.


The high-level program is just a case-by-case syntactic handler  
though. It's not high level; it's just a big lookup table. There is  
no confusion of levels. Neither the Chinese Room as a whole, nor the  
book, nor the guy passing messages and reading the book understands  
Chinese at all. The person who understood Chinese and wrote the  
book is dead.


The kind of reasoning that you (and Dennett and Hofstadter) are  
using would say that someone who is color blind is not impaired if  
they memorize the answers to a color vision test. If I can retake  
the test as many times as I want, and I can know which answers I  
get wrong, I don't even need to cheat or get lucky. I can compute  
the correct answers as if I could see color in spite of my complete  
color blindness.


What you are saying is circular. You assume that the Chinese guy  
who wrote the book is running on a program, but if you knew that  
was the case, then there would be no point in the thought  
experiment. You don't know that at all though, and the Chinese Room  
shows why computation need only be performed on one level and never  
leads to understanding on any others.


I am not sure I can help you. You confuse the levels. You don't  
really try to understand the point, and you talk as if you knew  
that comp is false.


I don't expect you to help me, I'm trying to help you.


Of course. But what helps me is reasoning, not personal conviction.



I don't know that comp is false, but I know that if it isn't it  
won't be because of the reasons you are suggesting. Comp may be true  
in theory, but none of the replies to the Chinese room are adequate,  
or even mildly compelling to me.


Searle confuses a program with the universal program running that program.
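A toy sketch of that distinction (the rule table and the `step` function below are both made up for illustration): the interpreter is a universal mechanism that blindly looks up and applies whatever rules it is handed, while the rule table is the program at issue; the man in the room corresponds to the former, not the latter.

```python
# Made-up rule table: this is the "high-level program" (the book in the room).
RULES = {
    ("start", "ni hao"): "ni hao ma?",
    ("start", "zai jian"): "zai jian!",
}

def step(state, symbol):
    # The universal program (the man in the room / the CPU): it only
    # looks rules up and applies them, with no grasp of their meaning.
    return RULES.get((state, symbol), "???")
```

`step` works identically with any rule table you swap in; nothing about the executor changes when the program does, which is the level confusion being pointed at.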


This page http://plato.stanford.edu/entries/chinese-room/ is quite  
thorough, and lists the most well-known Replies, yet it concludes:


There continues to be significant disagreement about what  
processes create meaning, understanding, and consciousness, as well  
as what can be proven a priori by thought experiments.

Re: Rationals vs Reals in Comp

2013-05-02 Thread Craig Weinberg


On Thursday, May 2, 2013 4:39:43 AM UTC-4, Bruno Marchal wrote:


 Of course. But what helps me is reasoning, not personal conviction. 


Consciousness cannot be accessed by reasoning, since reason is an 
experience within human consciousness.
 




 I don't know that comp is false, but I know that if it isn't it won't be 
 because of the reasons you are suggesting. Comp may be true in theory, but 
 none of the replies to the Chinese room are adequate, or even mildly 
 compelling to me.


 Searle confuses a program with the universal program running that program.


Aren't universal programs 

Re: Rationals vs Reals in Comp

2013-05-02 Thread Bruno Marchal


On 02 May 2013, at 17:35, Craig Weinberg wrote:




Consciousness cannot be accessed by reasoning, since reason is an  
experience within human consciousness.


You are entirely right on this.

But to communicate with others, even about consciousness, or lines  
and points, or galaxies or gods, we can only agree on principles  
and reason from them.


Re: Rationals vs Reals in Comp

2013-05-02 Thread Craig Weinberg


On Thursday, May 2, 2013 11:54:34 AM UTC-4, Bruno Marchal wrote:


 Consciousness cannot be accessed by reasoning, since reason is an 
 experience within human consciousness.


 You are entirely right on this. 

 But to communicate with others, even about consciousness, or lines and 
 points, or galaxies or gods, we can only agree on principles and reason 
 from them. 


Sure, but we have to 

Re: Rationals vs Reals in Comp

2013-05-01 Thread Bruno Marchal


On 30 Apr 2013, at 20:58, Craig Weinberg wrote:




What you are saying is circular. You assume that the Chinese guy who  
wrote the book is running on a program, but if you knew that was the  
case, then there would be no point in the thought experiment. You  
don't know that at all though, and the Chinese Room shows why  
computation need only be performed on one level and never leads to  
understanding on any others.


I am not sure I can help you. You confuse the levels. You don't really  
try to understand the point, and you talk as if you knew that comp  
is false.


This page http://plato.stanford.edu/entries/chinese-room/ is quite  
thorough, and lists the most well-known Replies, yet it concludes:


There continues to be significant disagreement about what  
processes create meaning, understanding, and consciousness, as well  
as what can be proven a priori by thought experiments.


Thought experiments are like proofs in math. Some are valid, some  
are not valid, some are fatally invalid, and some can be corrected  
or made more precise. The debate often focuses on the truth of comp  
and non-comp, and that sometimes involves opinion. I don't really  
play that game.


Game? All it's saying is that there is no such consensus as you  
claim. The fact that you claim a consensus smells to me like major  
insecurity. Very much a 'pay no 

Re: Rationals vs Reals in Comp

2013-05-01 Thread Bruno Marchal


On 30 Apr 2013, at 22:10, Craig Weinberg wrote:



It seems like there's nothing to bet on though. Comp is not really  
giving any guidance as to whether Comp itself is valid - it only  
shows that some machines believe it isn't, and that suggests that it  
is, and some machines see through that belief, and that somehow  
suggests that it is also. It's an unfalsifiable ideology.



That shows you miss the main point. I have tried to explain it more  
than once, but you repeat your simple negative affirmation over and  
over, without ever giving a clue why you think so, or answering the  
comments.

Some of your other comments contain rhetorical traps. I would waste  
both my time and yours in answering them.

I will wait for a theory, if you ever try to provide one. Words are  
not enough.


Bruno

http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: Rationals vs Reals in Comp

2013-05-01 Thread Craig Weinberg


On Wednesday, May 1, 2013 10:49:11 AM UTC-4, Bruno Marchal wrote:


 I am not sure I can help you. You confuse the levels. You don't really 
 try to understand the point, and you talk as if you knew that comp is 
 false. 


I don't expect you to help me, I'm trying to help you. I don't know that 
comp is false, but I know that if it isn't it won't be because of the 
reasons you are suggesting. Comp may be true in theory, but none of the 
replies to the Chinese room are adequate, or even mildly compelling to me.
 

Re: Rationals vs Reals in Comp

2013-04-30 Thread Craig Weinberg


On Wednesday, April 24, 2013 10:31:44 AM UTC-4, Bruno Marchal wrote:


 On 24 Apr 2013, at 15:40, Craig Weinberg wrote:



 On Wednesday, April 24, 2013 8:50:07 AM UTC-4, Bruno Marchal wrote:


 On 23 Apr 2013, at 22:26, Craig Weinberg wrote:



 On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels 
 constantly, you are aware of the full picture, but a computer can't do 
 that, the cpu can only know about 32 or 64 pixels, eventually multiplied 
 by number of kernels, but it see them as single bit's so in reality the 
 can't be conscious of a full picture, not even of the full color at a 
 single pixel.


   


 He is making the same mistake Searle did regarding the Chinese room.  He 
 is conflating what the CPU can see at one time (analogous to rule follower 
 in Chinese room) with what the program can know.  Consider the program of a 
 neural network: it can be processed by a sequentially operating CPU 
 processing one connection at a time, but the simulated network itself can 
 see any arbitrary number of inputs at once.

 How does he propose OCR software can recognize letters if it can only see 
 a single pixel at a time?


 Who says OCR software can recognize letters? All that it needs to do is 
 execute some algorithm sequentially and blindly against a table of expected 
 values. There need not be any recognition of the character as a character 
 at all, let alone any seeing. A program could convert a Word document 
 into an input file for an OCR program without there ever being any optical 
 activity - no camera, no screen caps, no monitor or printer at all. 
 Completely in the dark, the bits of the Word file could be converted into 
 the bits of an emulated optical scan, and presto, invisible optics.

 Searle wasn't wrong. The whole point of the Chinese Room is to point out 
 that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts. 


 Searle might be right on non-comp, but his argument has been shown 
 invalid by many.


 I'm surprised that you would try to pass that off as truth Bruno. You have 
 so much tolerance for doubt and uncertainty, yet you claim that it has 
 been shown invalid. In whose opinion?


 It is not an opinion, it is a fact that you can verify if you are patient 
 enough. The refutation is already in Dennett and Hofstadter's The Mind's I. 
 Searle concludes that the man in the room does not understand Chinese, and 
 that is right, but that cannot refute comp, as the man in the room plays 
 the role of a CPU, and not of the high-level program on which the 
 consciousness of the Chinese guy supervenes. It is a simple confusion of 
 levels.


The high-level program is just a case-by-case syntactic handler though. 
It's not high level, it's just a big lookup table. There is no confusion of 
levels. Neither the Chinese Room as a whole, the book, nor the guy passing 
messages and reading the book understands Chinese at all. The person who 
understood Chinese and wrote the book is dead. 

The kind of reasoning that you (and Dennett and Hofstadter) are using would 
say that someone who is color blind is not impaired if they memorize the 
answers to a color vision test. If I can retake the test as many times as I 
want, and I can know which answers I get wrong, I don't even need to cheat 
or get lucky. I can compute the correct answers as if I could see color in 
spite of my complete color blindness.

What you are saying is circular. You assume that the Chinese guy who wrote 
the book is running on a program, but if you knew that was the case, then 
there would be no point in the thought experiment. You don't know that at 
all though, and the Chinese Room shows why computation need only be 
performed on one level and never leads to understanding on any others.
 





 This page http://plato.stanford.edu/entries/chinese-room/ is quite 
 thorough, and lists the most well known Replies, yet it concludes:

 There continues to be significant disagreement about what processes 
 create meaning, understanding, and consciousness, as well as what can be 
 proven a priori by thought experiments.


 Thought experiments are like proofs in math. Some are valid, some are not 
 valid, some are fatally invalid, some can be corrected or made more 
 precise. The debate often focuses on the truth of comp and non-comp, and 
 that sometimes involves opinion. I don't really play that game. 


Game? All it's saying is that there is no consensus as you claim. The fact 
that you claim a consensus to me smells like a major insecurity. Very much 
a 'pay no attention to the man behind the curtain' response.
 





 The replies listed are not at all impressive to me, and are all really 
 variations on the same sophistry. Obviously there is a difference between 
 understanding a conversation and 

Re: Rationals vs Reals in Comp

2013-04-28 Thread Bruno Marchal


On 27 Apr 2013, at 17:10, John Mikes wrote:


Dear Stathis and Bruno,
Stathis' reply is commendable, with one excessive word:
 r e a l .  I asked Bruno several times to 'identify' the term  
'number' in common-sense language.
So far I did not understand such (my mistake?) I still hold  
'numbers' as the product of human thinking which cannot be  
retrospect to the basis of brain-function.
(Unless we consider BRAIN as the tissue-organ in our skull,  
executing technical steps for our mentality - whatever that may be.



Well, we usually consider the brain to be the tissue-organ, and in the  
comp theory, we assume its function can be replaced by a suitable  
universal machine, that is, a computer.


I am not sure what it is that you don't understand in the notion of  
number. Usually it means natural numbers, but mathematicians have  
thousands of generalizations of that concept (integers, rational numbers,  
real numbers, complex numbers, quaternions, octonions, and many others).


In common sense language, natural numbers are related to the words  
zero, one, two, three, etc. I am not sure what problem you  
have with them.






My remark to Bruno: in my (agnostic?) mind 'machine' means a  
functioning contraption composed of finite parts,


OK. And we can be neutral at the start on whether those parts are  
physically realized or not. The machines I talk about have been defined  
precisely in math, and can be assumed to be approximated in the physical  
world (primitive or not).



an ascertainable inventory, while 'universal machine' - as I  
understand(?) the term includes lots of infinite connotations  
(references).


That's right, but they are themselves composed of a finite number of  
finite parts.






So I would be happy to name them something different from 'machine'.


On the contrary, the alluring fact about universal machines is that  
they are machines. They are finite. General purpose computers and  
programming language interpreters are examples of such (physical,  
virtual) universal machines.





I accept 'computation' as not restricted to numerical (math?)  
calculations although our (embryonic, binary) Turing machine is  
based on such.


With the Church Turing thesis, all computers are equivalent for the  
computations they can execute. They will differ in the unboundable  
range of provability, knowability, observability and sensibility though.





I am still at a loss to see in extended practice a 'quantum', or a  
'molecularly based' computer so often referred to in fictional lit.


Yes, we will see, but we already believe (with respect to all current  
facts and  theories of course) that they do not violate the Church  
Turing thesis. It is a theorem that a quantum computer does not  
compute more functions than a Turing machine, or a Babbage machine.


I know you like Robert Rosen, who asserted that Church Turing thesis  
is false, but he has not convinced me at all on this.






The Universal Computer (Loeb?)


It is Turing who discovered it explicitly, but Babbage, Post, Church  
and others made equivalent discoveries. Gödel's and Löb's discoveries  
concern notions like truth and provability, which quite typically have  
no corresponding Church thesis, and there is no notion of universality  
related to them.


On the contrary, we know that provability is constructively NOT  
universal. We can build a machine contradicting any attempt to find a  
universal provability predicate. Some machines (the Löbian one) can  
prove that about themselves.





requires better descriptions as to it's qualia to include domains  
beyond our present knowledge and the infinities. (Maybe humanmind?  
which is also unidentified).


That can depend on the theory that you will assume. With comp, our  
brains are equivalent to Turing machines with respect to computations,  
but not with respect to provability, knowability, sensibility, etc.


Bruno





JohnM





On Sat, Apr 27, 2013 at 5:40 AM, Stathis Papaioannou stath...@gmail.com 
 wrote:
On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg  
whatsons...@gmail.com wrote:

 A quote from someone on Facebook. Any comments?

 Computers can only do computations for rational numbers, not for real
 numbers. Every number in a computer is represented as rational. No computer
 can represent pi or any other real number... So even when consciousness can
 be explained by computations, no computer can actually simulate it.


If it is true that you need real numbers to simulate a brain then
since real numbers are not computable the brain is not computable, and
hence consciousness is not necessarily computable (although it may
still be contingently computable). But what evidence is there that
real numbers are needed to simulate the brain?


--
Stathis Papaioannou

--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to 

Re: Rationals vs Reals in Comp

2013-04-28 Thread Russell Standish
On Sun, Apr 28, 2013 at 02:15:31PM +0200, Bruno Marchal wrote:
 
 
 I know you like Robert Rosen, who asserted that Church Turing thesis
 is false, but he has not convinced me at all on this.
 

Where did he assert this? Admittedly, I haven't read all his works,
mainly just What is life?, but I thought his main thesis was that
living systems could be distinguished from computation by virtue of it
being closed under efficient causation (which computations aren't).


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au






Re: Rationals vs Reals in Comp

2013-04-27 Thread Stathis Papaioannou
On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg whatsons...@gmail.com wrote:
 A quote from someone on Facebook. Any comments?

 Computers can only do computations for rational numbers, not for real
 numbers. Every number in a computer is represented as rational. No computer
 can represent pi or any other real number... So even when consciousness can
 be explained by computations, no computer can actually simulate it.

If it is true that you need real numbers to simulate a brain then
since real numbers are not computable the brain is not computable, and
hence consciousness is not necessarily computable (although it may
still be contingently computable). But what evidence is there that
real numbers are needed to simulate the brain?


-- 
Stathis Papaioannou





Re: Rationals vs Reals in Comp

2013-04-27 Thread Bruno Marchal


On 27 Apr 2013, at 11:40, Stathis Papaioannou wrote:

On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg  
whatsons...@gmail.com wrote:

A quote from someone on Facebook. Any comments?

Computers can only do computations for rational numbers, not for real
numbers. Every number in a computer is represented as rational. No computer
can represent pi or any other real number... So even when consciousness can
be explained by computations, no computer can actually simulate it.


If it is true that you need real numbers to simulate a brain then
since real numbers are not computable the brain is not computable, and
hence consciousness is not necessarily computable (although it may
still be contingently computable). But what evidence is there that
real numbers are needed to simulate the brain?



Actually there exist notions of computable real numbers, and of  
computable functions from R to R.


For example the function y = sin(2*PI*x) is intuitively computable,  
as you can approximate as precisely as you want the input (2 * PI * x)  
and the corresponding output sin(2 * PI * x).


But there is no Church thesis for such a notion, and there are many  
non-equivalent definitions of computability on the reals.

(I could add some nuance, here, but that's for later perhaps).
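Bruno's sin example can be sketched concretely: a computable real function is one a program can approximate to any requested precision using rationals only. The following Python sketch is my own illustration (not from the thread); for simplicity it takes a rational argument directly rather than first approximating 2*PI*x.

```python
from fractions import Fraction

def sin_approx(x: Fraction, n: int) -> Fraction:
    """Rational approximation of sin(x) within 10**-n, for |x| <= 1.

    Uses the alternating Taylor series x - x^3/3! + x^5/5! - ...;
    for |x| <= 1 the terms shrink monotonically, so the error is
    bounded by the first omitted term.
    """
    eps = Fraction(1, 10 ** n)
    total = Fraction(0)
    term = x          # current series term, starting with x^1/1!
    k = 1             # odd power of the current term
    while abs(term) > eps:
        total += term
        term *= -x * x / ((k + 1) * (k + 2))  # next odd-power term
        k += 2
    return total

# sin(1/2) to 12 decimal places, using exact rational arithmetic only:
approx = sin_approx(Fraction(1, 2), 12)
```

The same scheme extends to sin(2*PI*x) by first producing a sufficiently precise rational approximation of PI, which is exactly what makes the function computable in the sense above.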

Yet, all analog machines known today are emulable by digital  
machines. There would be a problem only if some non-computable real  
number were used in extenso by some machine. That exists ...  
mathematically. Some computable functions of the reals can have  
non-computable derivatives. But in those cases, the recursion  
theory is the same as for Turing machines with oracles, and this does  
not change the logic and the conceptual consequences. Nor is there any  
evidence that a brain uses such an oracle, although it can be said that  
evolution uses the halting oracle, by selecting out the stopping  
machines (death). But that is just long-term behavior of machines. It  
does not make us locally non-emulable by computer. We already do  
that selection ourselves for computers, by buying new ones and throwing  
out old ones ...


Bruno






--
Stathis Papaioannou





http://iridia.ulb.ac.be/~marchal/







Re: Rationals vs Reals in Comp

2013-04-27 Thread John Mikes
Dear Stathis and Bruno,
Stathis' reply is commendable, with one excessive word:
 r e a l .  I asked Bruno several times to 'identify' the term
'number' in common-sense language. So far I did not understand such (my
mistake?) I still hold *'numbers'* as the product of human thinking which
cannot be retrospect to the basis of brain-function. (Unless we consider
BRAIN as the tissue-organ in our skull, executing technical steps for
our *mentality
- *whatever that may be.
My remark to Bruno: in my (agnostic?) mind 'machine' means a functioning
contraption composed of finite parts,
an ascertainable inventory, while 'universal machine' - as I understand(?)
the term includes lots of infinite connotations (references). So I would be
happy to name them something different from 'machine'.
I accept 'computation' as not restricted to numerical (math?) calculations
although our (embryonic, binary) Turing machine is based on such. I am
still at a loss to see in extended practice a 'quantum', or a 'molecularly
based' computer so often referred to in fictional lit. The Universal
Computer (Loeb?) requires better descriptions as to its qualia to include
domains beyond our present knowledge and the infinities. (Maybe
humanmind? which is also unidentified).
JohnM





On Sat, Apr 27, 2013 at 5:40 AM, Stathis Papaioannou stath...@gmail.com wrote:

 On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg whatsons...@gmail.com
 wrote:
  A quote from someone on Facebook. Any comments?
 
  Computers can only do computations for rational numbers, not for real
  numbers. Every number in a computer is represented as rational. No computer
  can represent pi or any other real number... So even when consciousness can
  be explained by computations, no computer can actually simulate it.

 If it is true that you need real numbers to simulate a brain then
 since real numbers are not computable the brain is not computable, and
 hence consciousness is not necessarily computable (although it may
 still be contingently computable). But what evidence is there that
 real numbers are needed to simulate the brain?


 --
 Stathis Papaioannou









Re: Rationals vs Reals in Comp

2013-04-27 Thread Craig Weinberg


On Saturday, April 27, 2013 5:40:18 AM UTC-4, stathisp wrote:

 On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg 
 whats...@gmail.com 
 wrote: 
  A quote from someone on Facebook. Any comments? 
  
  Computers can only do computations for rational numbers, not for real 
  numbers. Every number in a computer is represented as rational. No computer 
  can represent pi or any other real number... So even when consciousness can 
  be explained by computations, no computer can actually simulate it. 

 If it is true that you need real numbers to simulate a brain then 
 since real numbers are not computable the brain is not computable, and 
 hence consciousness is not necessarily computable (although it may 
 still be contingently computable). But what evidence is there that 
 real numbers are needed to simulate the brain? 


Since we ourselves can easily conceive of real numbers without converting 
them from floating point decimals in our conscious mind, and since we are 
talking as if the mind supervenes on the brain locally, then we would have 
to explain where this faculty comes from. Whether it is the brain or the 
mind which we are talking about emulating with Comp, the final result must 
include a capacity to conceive of real numbers directly, which we have no 
reason to assume will ever be possible with a Turing based digital machine.

Besides that, it should be pretty clear that the world of classical physics 
is quite enamored with real-number type relations rather than decimal. Even 
at the microcosmic levels, where we find discrete states rather than 
continuous, it is not at all clear that this is a true reflection of nature 
or a local reflection of our instrumental approach. The digital approach is 
always an amputation and an approximation. Not a bad thing when we are 
talking about sending videos and text across the world, but not necessarily 
a good thing for building a working brain from scratch.

Craig
 



 -- 
 Stathis Papaioannou 






Re: Rationals vs Reals in Comp

2013-04-27 Thread Stathis Papaioannou


On 28/04/2013, at 3:31 AM, Craig Weinberg whatsons...@gmail.com wrote:

 
 
 On Saturday, April 27, 2013 5:40:18 AM UTC-4, stathisp wrote:
 
 On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg whats...@gmail.com wrote: 
  A quote from someone on Facebook. Any comments? 
  
  Computers can only do computations for rational numbers, not for real 
  numbers. Every number in a computer is represented as rational. No computer 
  can represent pi or any other real number... So even when consciousness can 
  be explained by computations, no computer can actually simulate it. 
 
 If it is true that you need real numbers to simulate a brain then 
 since real numbers are not computable the brain is not computable, and 
 hence consciousness is not necessarily computable (although it may 
 still be contingently computable). But what evidence is there that 
 real numbers are needed to simulate the brain?
 
 Since we ourselves can easily conceive of real numbers without converting 
 them from floating point decimals in our conscious mind, and since we are 
 talking as if the mind supervenes on the brain locally, then we would have to 
 explain where this faculty comes from. Whether it is the brain or the mind 
 which we are talking about emulating with Comp, the final result must include 
 a capacity to conceive of real numbers directly, which we have no reason to 
 assume will ever be possible with a Turing based digital machine.

Can you conceive of a real number? I can't. It's like conceiving of infinity - 
you can say it but I don't think you can really do it. But that is beside the 
point: if you can conceive of something why should that mean that it is true 
or, even worse, that there is a little bit of that something in your brain?

 Besides that, it should be pretty clear that the world of classical physics 
 is quite enamored with real-number type relations rather than decimal. Even 
 at the microcosmic levels, where we find discrete states rather than 
 continuous, it is not at all clear that this is a true reflection of nature 
 or a local reflection of our instrumental approach. The digital approach is 
 always an amputation and an approximation. Not a bad thing when we are 
 talking about sending videos and text across the world, but not necessarily a 
 good thing for building a working brain from scratch.

We can simulate any classical system with discrete arithmetic. If we could not 
then computers would be useless for many of the things they are actually used 
for.
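Stathis's point can be illustrated with a small sketch (mine, not from the thread): a classical harmonic oscillator simulated entirely with exact rational arithmetic, i.e. with numbers a computer can represent exactly. The integrator and step size are arbitrary illustrative choices.

```python
from fractions import Fraction

def oscillate(steps: int, dt: Fraction):
    """Integrate the harmonic oscillator x'' = -x with semi-implicit
    (symplectic) Euler, using only exact rational arithmetic."""
    x, v = Fraction(1), Fraction(0)   # x(0) = 1, v(0) = 0, so x(t) = cos(t)
    for _ in range(steps):
        v -= x * dt                   # update velocity first ...
        x += v * dt                   # ... then position (keeps it stable)
    return x, v

# 100 rational steps of size 1/100 approximate the state at t = 1,
# where the exact solution is x = cos(1) ~= 0.5403.
x_final, v_final = oscillate(100, Fraction(1, 100))
```

The point is only that discreteness of the arithmetic is no obstacle: shrinking dt tightens the approximation as much as desired, all while every intermediate value remains an exact rational.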


--
Stathis Papaioannou





Re: Rationals vs Reals in Comp

2013-04-27 Thread Craig Weinberg


On Saturday, April 27, 2013 2:20:20 PM UTC-4, stathisp wrote:



 On 28/04/2013, at 3:31 AM, Craig Weinberg whats...@gmail.com 
 wrote:



 On Saturday, April 27, 2013 5:40:18 AM UTC-4, stathisp wrote:

 On Tue, Apr 23, 2013 at 3:14 AM, Craig Weinberg whats...@gmail.com 
 wrote: 
  A quote from someone on Facebook. Any comments? 
  
  Computers can only do computations for rational numbers, not for real 
  numbers. Every number in a computer is represented as rational. No computer 
  can represent pi or any other real number... So even when consciousness can 
  be explained by computations, no computer can actually simulate it. 

 If it is true that you need real numbers to simulate a brain then 
 since real numbers are not computable the brain is not computable, and 
 hence consciousness is not necessarily computable (although it may 
 still be contingently computable). But what evidence is there that 
 real numbers are needed to simulate the brain? 


 Since we ourselves can easily conceive of real numbers without converting 
 them from floating point decimals in our conscious mind, and since we are 
 talking as if the mind supervenes on the brain locally, then we would have 
 to explain where this faculty comes from. Whether it is the brain or the 
 mind which we are talking about emulating with Comp, the final result must 
 include a capacity to conceive of real numbers directly, which we have no 
 reason to assume will ever be possible with a Turing based digital machine.


 Can you conceive of a real number? I can't. It's like conceiving of 
 infinity - you can say it but I don't think you can really do it. 


Sure I can. It's easy because I'm not trying to conceive of it literally 
like a computer, but figuratively as an idea. Pi, as the ratio of the 
circumference of a circle to its diameter, can be understood in radians or 
just geometrically by visual feel. Pi falls out of the aesthetics of 
circularity itself, and it need not be enumerated abstractly. 
 

 But that is beside the point: if you can conceive of something why should 
 that mean that it is true or, even worse, that there is a little bit of 
 that something in your brain?


You can either say that it is in your brain or that it isn't, but either 
way, the thing that Comp claims to be able to emulate does something which 
Comp cannot do now, and gives us no reason to expect that it ever will.
 


 Besides that, it should be pretty clear that the world of classical 
 physics is quite enamored with real-number type relations rather than 
 decimal. Even at the microcosmic levels, where we find discrete states 
 rather than continuous, it is not at all clear that this is a true 
 reflection of nature or a local reflection of our instrumental approach. 
 The digital approach is always an amputation and an approximation. Not a 
 bad thing when we are talking about sending videos and text across the 
 world, but not necessarily a good thing for building a working brain from 
 scratch.


 We can simulate any classical system with discrete arithmetic. If we could 
 not then computers would be useless for many of the things they are 
 actually used for.


Inspecting a classical system from some arbitrary level of substitution is 
different from being a proprietary system which is by definition unique. 
The very kinds of things which machines fail at are the things which are 
most essential to consciousness.

Craig
 



 --
 Stathis Papaioannou






Re: Rationals vs Reals in Comp

2013-04-25 Thread Bruno Marchal


On 24 Apr 2013, at 23:54, smi...@zonnet.nl wrote:

Perhaps one should define things such that it can be implemented by  
any arbitrary finite state machine, no matter how large. Then, while  
there may not be a limit to the capacity of finite state machines,  
each such machine has a finite capacity, and therefore in none of  
these machines can one implement the Peano axiom that every integer  
has a successor.


number(0).
number(s(X)) :- number(X).

This implements (in PROLOG) the Peano axiom that every number has a  
successor


What you say is that the existential query ?- number(X). will lead the  
PROLOG machine into a non-terminating computation. It will generate  
0, s(0), s(s(0)), s(s(s(0))), s(s(s(s(0)))), ...


Similarly, you can implement a universal machine in a finite code. But  
then the machine will ask sometimes for more memory space, like us.
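The non-terminating enumeration described above can be mimicked in any language with lazy evaluation; here is a small Python sketch (my illustration of the idea, not anything from the thread):

```python
from itertools import islice

def numbers():
    """Enumerate the Peano numerals 0, s(0), s(s(0)), ... forever,
    like the PROLOG existential query would."""
    term = "0"
    while True:
        yield term
        term = f"s({term})"

# The enumeration never terminates; we only ever take a finite prefix:
first_four = list(islice(numbers(), 4))
```

The finite program text implements the infinite successor axiom; only running it "all the way" is impossible, which is exactly the distinction being made.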






But some other properties of integers are valid if they are valid in  
every finite state machine that implements arithmetic modulo prime  
numbers.


Not the fundamental recursion properties. If you fix the prime number,  
you will stay in an ultrafinitist setting, without recursion, without  
universal machine, without any of the fertile theorems of computer  
science, which make sense even if it means that the machines, when  
implemented in a limited environment, will complain, write on the walls,  
or build a rocket to explore space and expand their memory by themselves.
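The contrast can be made concrete: once the modulus is fixed, the successor function wraps around instead of climbing forever, so the Peano picture of an unending succession fails. A Python sketch (p = 7 is an arbitrary illustrative choice):

```python
P = 7  # a fixed prime: the whole "number universe" is {0, ..., P-1}

def succ(n: int) -> int:
    """Successor in arithmetic modulo the fixed prime P."""
    return (n + 1) % P

# Iterating the successor from 0 visits every residue, then wraps:
orbit = [0]
while succ(orbit[-1]) != 0:
    orbit.append(succ(orbit[-1]))
```

After P steps the successor returns to 0, so no query of the `?- number(X).` kind can run unboundedly in this setting: the recursion is cut off by the modulus.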






I'm not into the foundations of math, I'll leave that to Bruno :) .  
But since we are machines with a finite brain capacity,


In the long run, it is a growing one. And we have infinite capacities  
relative to our neighborhood. We never stop expanding ourselves.




and even the entire visible universe has only a finite information  
content,


If the physical universe is finite, but very big, we are still  
universal machines. But doomed in the long run. No worry if comp is  
true, as comp precludes a finite physical universe.



we should be able to replace real analysis with discrete analysis as  
explained by Doron.


That can make sense for some applications, but would contradict comp  
in its theoretical consequences.


Bruno






Saibal


Quoting Brian Tenneson tenn...@gmail.com:


Interesting read.

The problem I have with this is that in set theory, there are several
examples of sets that owe their existence to axioms alone. In other words,
there is an axiom that states there is a set X such that (blah, blah,
blah). How are we to know which sets/notions are meaningless concepts?
Because to me, it sounds like Doron's personal opinion that some concepts
are meaningless while other concepts like huge, unknowable, and tiny are
not meaningless. Is there anything that would remove the opinion portion
of this?

How is the second axiom an improvement while containing words like huge,
unknowable, and tiny??

quote
So I deny even the existence of the Peano axiom that every integer has a
successor. Eventually we would get an overflow error in the big computer
in the sky, and the sum and product of any two integers is well-defined
only if the result is less than p, or if one wishes, one can compute them
modulo p. Since p is so large, this is not a practical problem, since the
overflow in our earthly computers comes so much sooner than the overflow
errors in the big computer in the sky.
end quote

What if the big computer in the sky is infinite? Or if all computers are
finite in capacity yet there is no largest computer?

What if NO computer activity is relevant to the set of numbers that exist
mathematically?


On Monday, April 22, 2013 11:28:46 AM UTC-7, smi...@zonnet.nl wrote:


See here:

http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf

Saibal










--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything- 
l...@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en 
.

For more options, visit https://groups.google.com/groups/opt_out.










http://iridia.ulb.ac.be/~marchal/




Re: Rationals vs Reals in Comp

2013-04-25 Thread Bruno Marchal


On 25 Apr 2013, at 00:47, Craig Weinberg wrote:




On Wednesday, April 24, 2013 8:49:00 AM UTC-4, Bruno Marchal wrote:

On 23 Apr 2013, at 22:07, Craig Weinberg wrote:




On Tuesday, April 23, 2013 5:11:06 AM UTC-4, Bruno Marchal wrote:

On 22 Apr 2013, at 19:14, Craig Weinberg wrote:


A quote from someone on Facebook. Any comments?

Computers can only do computations for rational numbers, not for  
real numbers. Every number in a computer is represented as  
rational. No computer can represent pi or any other real number...  
So even when consciousness can be explained by computations, no  
computer can actually simulate it.
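The quote's first claim is at least literally true of exact computer arithmetic: standard rational types really do store every value as a ratio of two integers, and pi admits no such exact representation. A small illustration using Python's fractions module:

```python
from fractions import Fraction

# Rational arithmetic in a computer is exact: 1/3 is stored as the
# ratio of the integers 1 and 3, with no rounding error.
third = Fraction(1, 3)
print(third + third + third)  # 1

# pi, by contrast, is irrational: any Fraction (or float) can only
# approximate it. 355/113 is a classic close rational approximation.
approx_pi = Fraction(355, 113)
print(float(approx_pi))  # about 3.1415929, within 3e-7 of pi
```

Whether this limitation matters for simulating consciousness is exactly what the thread goes on to dispute.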



You can represent many real numbers by the program computing their  
approximation. You can fan constructively on all real numbers (like  
the UD does notably).
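Bruno's point, that a computable real is represented by the finite program which approximates it to any desired accuracy, can be made concrete. The following is a sketch of Gibbons' unbounded spigot algorithm, a short program that streams the decimal digits of pi one at a time, without end:

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot: yields the decimal digits of pi forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # this digit is now settled and will never change
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

In this sense the finite program itself is an exact representation of pi; only its output at any finite moment is an approximation.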


Only if a brain uses some non-computable real number as an oracle,  
with all decimals given in one strike, would we be unable to simulate  
it with a Turing machine, but this would require the mind to be  
actually infinite.


If the mind is what is real, then there are no decimals.


But there are decimals, and so if you are correct, the mind is not  
real. But the mind is real, so you are not correct.


How do you know that the mind uses decimals?


I just said that decimals exist. And the minds of mathematicians use  
decimals because they are handy.




It seems that our natural understanding is primarily in ratios and  
real number type concepts.


Real numbers can be seen as a terrible simplification of reality.




Decimals could be a notion derived from stepping down experience  
through the body, but the native experiential fabric of all has no  
decimal content.



I can agree. With comp you don't need to put real numbers and decimals  
in the ontology.










The brain is the public representation of the history, and as such,  
it can only be observed from the reduced 3p set of qualia. The 3p  
reduction may rationalize the appearance. From an absolute  
perspective, all phenomena are temporary partitions within the one  
strike of eternity.


OK.






So the statement above is just a statement of non-comp, not an  
argument for non comp, as it fails to give us what is that non  
computable real playing a role in cognition.


What does the machine say when we ask it why it can't understand pi  
without approximating it?


One machine can answer It seems that I can understand PI without  
approximating it. PI is the ratio of a circle's circumference to its  
diameter, and a circle is the locus of the points in a plane which  
share the same distance with respect to some point. Then the machine  
drew a circle on the ground and said, look, it seems PI is a tiny bit  
bigger than 3.
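The machine's crude measurement can even be simulated. In this hypothetical sketch, a machine that has never been told the digits of pi estimates it by throwing random points at a square and counting how many land inside the inscribed circle (the standard Monte Carlo estimate):

```python
import random

def measure_pi(samples=100_000, seed=42):
    """Estimate pi as 4 times the fraction of random points in the
    unit square that fall inside the quarter unit circle."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4 * inside / samples

print(measure_pi())  # roughly 3.14: indeed a tiny bit bigger than 3
```

The estimate sharpens with more samples, which is just the machine's version of drawing a bigger, more careful circle on the ground.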


Are there any machines that do as we do, and say 'pi is the  
unchanging ratio between the distance around the circle and the  
distance across it, and a circle is a self-evident pattern which  
manifests literally as [circle shape] and figuratively as any  
pattern of returning to the starting point repeatedly'?


Yes. You.
(I *assume* comp).
For man-made machines, it is far too early. I would say that PA could  
say that, but it might be long and tedious to prove, and you would be  
able to say she does not really mean what she says, so you would not  
be convinced. Your argument will conflate knowledge and knowledge  
theory, so I will not try.












But there is something correct. Neither a computer nor a brain can  
simulate consciousness. Nor can a computer simulate the number  
one, or the number two. It has to borrow them from arithmetical  
truth.


Then why would your son in law's computer brain provide him with  
consciousness?


It is not the computer brain which provides him consciousness. The  
computer brain provides him a way to manifest his consciousness in  
your restaurant, and to get pleasant qualia from some good food (I  
hope). What provides the consciousness is God, or (arithmetical)  
truth. Nobody can program that, in the same sense that nobody can  
program the number one. But we can write programs making it possible  
to manifest the number one, or to make some consciousness manifest  
relatively to you.


Ok, but why assume that it is arithmetical truth which is God rather  
than feeling?



To avoid solipsism, and be able to believe in other people's feeling.




Feeling and being are an Art. Doing and knowing are a science.  
Science makes sense as a derivative of art,


Hmm... Why not. It is a bit vague. My agreement is by default.





but art makes no sense as a function of science.


Why? Without some amount of science, you have no art.





It isn't necessary, and arithmetic truth is about the necessary.


Arithmetic truth is beyond the necessary. Far beyond. And its internal  
views define necessities and contingencies.





Even if we say that arithmetic truth is art, it is certainly only  
one kind of art among many.


If I'm right, and I think I have every reason to guess that I am,  
then 

Re: Rationals vs Reals in Comp

2013-04-25 Thread Craig Weinberg


On Thursday, April 25, 2013 6:04:55 AM UTC-4, Bruno Marchal wrote:


 On 25 Apr 2013, at 00:47, Craig Weinberg wrote:



 On Wednesday, April 24, 2013 8:49:00 AM UTC-4, Bruno Marchal wrote:


 On 23 Apr 2013, at 22:07, Craig Weinberg wrote:



 On Tuesday, April 23, 2013 5:11:06 AM UTC-4, Bruno Marchal wrote:


 On 22 Apr 2013, at 19:14, Craig Weinberg wrote:

 A quote from someone on Facebook. Any comments?

 Computers can only do computations for rational numbers, not for real 
 numbers. Every number in a computer is represented as rational. No 
 computer 
 can represent pi or any other real number... So even when consciousness 
 can 
 be explained by computations, no computer can actually simulate it.



 You can represent many real numbers by the program computing their 
 approximation. You can fan constructively on all real numbers (like the UD 
 does notably).

 Only if a brain uses some non computable real number as an oracle, with 
 all decimals given in one strike, then we cannot simulate it with Turing 
 machine, but this needs to make the mind actually infinite.


 If the mind is what is real, then there are no decimals. 


 But there are decimals, and so if you are correct, the mind is not real. 
 But the mind is real, so you are not correct.


 How do you know that the mind uses decimals? 


 I just said that decimals exist. And the minds of mathematicians use 
 decimals because they are handy.


Right, but that doesn't mean that beneath their conscious threshold, their 
mind actually runs on decimal computations.
 




 It seems that our natural understanding is primarily in ratios and real 
 number type concepts. 


 Real numbers can be seen as a terrible simplification of reality.


Why is an immediate understanding of a conceptual ratio more terrible than 
an infinite computation of approximate figures?
 





 Decimals could be a notion derived from stepping down experience through 
 the body, but the native experiential fabric of all has no decimal content.



 I can agree. With comp you don't need to put real numbers and decimals in 
 the ontology.


Interesting. Do you see both reals and decimals as 
distortions/reductions/masks of the universal numbers? If so, that leaves 
us with arithmetic truth as a pure abstract essence with only potential 
forms and functions. Meta-Platonic? Even so, to me it's still sensory-motor 
experience. There is no urge or expectation except for one which is 
experienced.









 The brain is the public representation of the history, and as such, it 
 can only be observed from the reduced 3p set of qualia. The 3p reduction 
 may rationalize the appearance. From an absolute perspective, all phenomena 
 are temporary partitions within the one strike of eternity.


 OK.





 So the statement above is just a statement of non-comp, not an argument 
 for non comp, as it fails to give us what is that non computable real 
 playing a role in cognition.


 What does the machine say when we ask it why it can't understand pi 
 without approximating it?


 One machine can answer It seems that I can understand PI without 
 approximating it. PI is the ratio of a circle's circumference to its 
 diameter, and a circle is the locus of the points in a plane which share 
 the same distance with respect to some point. Then the machine drew a 
 circle on the ground and said, look, it seems PI is a tiny bit bigger 
 than 3.


 Are there any machines that do as we do, and say 'pi is the unchanging 
 ratio between the distance around the circle and the distance across 
 it, and a circle is a self-evident pattern which manifests literally 
 as [circle shape] and figuratively as any pattern of returning to the 
 starting point repeatedly'?


 Yes. You.
 (I *assume* comp).
 For man-made machines, it is far too early. I would say that PA could say 
 that, but it might be long and tedious to prove, and you would be able to 
 say she does not really mean what she says, so you would not be 
 convinced. Your argument will conflate knowledge and knowledge theory, so I 
 will not try.


All that would be required is to walk a person off of their brain onto a 
machine and back. If that works, then we could assume that comp is correct 
enough to rely on. What if it turns out never to work though? Is comp 
falsifiable? How many centuries of failure until we can begin to doubt the 
underpinnings of comp?

I think that the reals vs rationals issue is another obvious clue, along with 
the geometry issue, the hard problem, the explanatory gap, and the metaphorical 
residue in language (is there any language in the world where machines are 
associated with warmth and love rather than unfeeling or unconsciousness?), 
that Comp is a very hard sell to match with the universe we actually live 
in. It's a great theory, with a great vantage point provided by the kind of 
anti-world perspective of mathematics on top, but if we really want to 
understand the nature of experience and awareness, 

Re: Rationals vs Reals in Comp

2013-04-24 Thread Brian Tenneson
On Tue, Apr 23, 2013 at 8:53 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Tuesday, April 23, 2013 11:37:14 PM UTC-4, Brian Tenneson wrote:

 You keep claiming that we understand this and that or know this and
 that.  And, yes, saying something along the lines of we know we understand
 because we care about what we understand *is* circular.


 No, it's not. I'm saying that it is impossible to doubt we understand.
 It's just playing with words. My point about caring is that it makes it
 clear that we intuitively make a distinction between merely being aware of
 something and understanding it.

I'll try to explain how we know we understand because we care about what
we understand is circular. Note the use of the word understand toward
the left edge of the statement in quotes, followed by another instance of
the word understand. This is analogous to saying We are Unicorns because
we care about Unicorns. It doesn't prove unicorns exist; it doesn't prove
understanding exists (i.e., that any human understands anything). If this
is all sophistry then it should be easily dismissible. And yes, playing
with words is what people normally do, wittingly or unwittingly, and that
lends more evidence to the notion that we are processors in a Chinese
room.



 Still doesn't rule out the possibility that we are in a Chinese room
 right now, manipulating symbols without really understanding what's going
 on but able to adeptly shuffle the symbols around fast enough to appear
 functional.


 Why not? If we were manipulating symbols, why would we care about them?
 What you're saying doesn't even make sense. We are having a conversation.
 We care about the conversation because we understand it. If I was being
 dictated to write in another language instead, I would not care about the
 conversation. Are you claiming that there is no difference between having a
 conversation in English and dictating text in a language you don't
 understand?

We care about the symbols because working through the symbols in our brains
is what leads to food, shelter, sex, and all the things animals want.  Or
we care about the symbols because they further enrich our lives.  The
symbols in this corner of the internet (barring my contributions of course)
are examples of that.  Regarding the world, would you say there is more
that we (i.e., at least one human) understand or more that we don't?  I
would vote 'don't' and that leads me also to suspect we are in a Chinese
room right now.  Your coupling of caring and understanding is somewhat
arbitrary.  You seem to be saying we care because we understand and we
understand because we care.  But it is the case that even if we do
understand something, we don't have to care about it.  And understanding
because we care doesn't follow either: I care a great deal about science,
20th-21st century stuff mainly, but I understand almost nothing of it.  Would you say
we live in a world where we are confronted daily with numerous events; are
you claiming you understand most or all of these events? The less you
understand the greater the chances of being in a Chinese room.

We know that we're not the center of the universe or even the solar
system.  We know that space is almost unfathomably vast.  We know humans
are fallible, even when it comes time to do some math and science.  So why
be so shocked that we are in a Chinese room, lacking understanding of the
texts?









Re: Rationals vs Reals in Comp

2013-04-24 Thread Craig Weinberg


On Wednesday, April 24, 2013 4:31:55 AM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 8:53 PM, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, April 23, 2013 11:37:14 PM UTC-4, Brian Tenneson wrote:

 You keep claiming that we understand this and that or know this and 
 that.  And, yes, saying something along the lines of we know we understand 
 because we care about what we understand *is* circular.  


 No, it's not. I'm saying that it is impossible to doubt we understand. 
 It's just playing with words. My point about caring is that it makes it 
 clear that we intuitively make a distinction between merely being aware of 
 something and understanding it.

 I'll try to explain how  we know we understand because we care about what 
 we understand is circular.  Note the use of the word understand towards 
 the left edge of the statement in quotes followed by another instance of 
 the word understand. 


You should read it as we know we understand because we care about X. My 
only intention in repeating the word was to make it clear that the thing 
that we care about is the thing that we understand. It is the caring which 
is a symptom of understanding. The absence of that symptom of caring in a 
machine indicates to me that there is a lack of understanding. Things which 
understand can care, but things that cannot care cannot understand.


This is analogous to saying We are Unicorns because we care about Unicorns. 


No, this is analogous  to you not understanding what I mean and 
unintentionally making a straw man of my argument. 

Doesn't prove unicorns exist; doesn't prove understanding exists (i.e., 
 that any human understands anything). If this is all sophistry then it 
 should be easily dismissible. And yes, playing with words is what people 
 normally do, wittingly or unwittingly, and that lends more evidence to the 
 notion that we are processors in a Chinese room.  


The position that we only think we understand or that consciousness is an 
illusion is, in my view, the desperate act of a stubborn mind. Truly, you 
are sawing off the branch that you are sitting on to suggest that we are 
incapable of understanding the very conversation that we are having. 


  

 Still doesn't rule out the possibility that we are in a Chinese room 
 right now, manipulating symbols without really understanding what's going 
 on but able to adeptly shuffle the symbols around fast enough to appear 
 functional. 


 Why not? If we were manipulating symbols, why would we care about them. 
 What you're saying doesn't even make sense. We are having a conversation. 
 We care about the conversation because we understand it. If I was being 
 dictated to write in another language instead, I would not care about the 
 conversation. Are you claiming that there is no difference between having a 
 conversation in English and dictating text in a language you don't 
 understand?

  

 We care about the symbols because working through the symbols in our 
 brains is what leads to food, shelter, sex, and all the things animals 
 want. 


First of all, there are no symbols in our brains, unless you think that 
serotonin or ATP is a symbol. Secondly, the fact that species have needs 
does not imply any sort of caring at all. A car needs fuel and oil but it 
doesn't care about them. When the fuel light comes up on your dashboard, 
that is for you to care about your car, not a sign that the car is anxious. 
Instead of a light on the dashboard, a more intelligently designed car 
could proceed to the filling station and dock at a smart pump, or it could 
use geological measurements and drill out its own petroleum to refine...all 
without the slightest bit of caring or understanding. 
 

 Or we care about the symbols because they further enrich our lives. 


That's circular. Why do we care about enriching our lives? Because we care 
about our lives and richness. We don't have to though in theory, and a 
machine never can.
 

 The symbols in this corner of the internet (barring my contributions of 
 course) are examples of that.  Regarding the world, would you say there is 
 more that we (i.e., at least one human) understand or more that we don't?  
 I would vote 'don't' and that leads me also to suspect we are in a chinese 
 room right now.  


I don't know where we are in the extent of our understanding, but there is 
some understanding, while the man in the Chinese room has no understanding.
 

 Your coupling of caring and understanding is somewhat arbitrary.  


No, it is supported by the English language: 
http://dictionary.reverso.net/english-synonyms/understanding

 accepting, compassionate, considerate, discerning, forbearing, forgiving, 
kind, kindly, patient, perceptive, responsive, sensitive, sympathetic, 
tolerant

Your discoupling of caring and understanding is intentionally fabricated 
and incorrect.

 

 You seem to be saying we care because we understand and we understand 
 because we care. 

Re: Rationals vs Reals in Comp

2013-04-24 Thread Bruno Marchal


On 23 Apr 2013, at 22:07, Craig Weinberg wrote:




On Tuesday, April 23, 2013 5:11:06 AM UTC-4, Bruno Marchal wrote:

On 22 Apr 2013, at 19:14, Craig Weinberg wrote:


A quote from someone on Facebook. Any comments?

Computers can only do computations for rational numbers, not for  
real numbers. Every number in a computer is represented as  
rational. No computer can represent pi or any other real number...  
So even when consciousness can be explained by computations, no  
computer can actually simulate it.



You can represent many real numbers by the program computing their  
approximation. You can fan constructively on all real numbers (like  
the UD does notably).


Only if a brain uses some non computable real number as an oracle,  
with all decimals given in one strike, then we cannot simulate it  
with Turing machine, but this needs to make the mind actually  
infinite.


If the mind is what is real, then there are no decimals.


But there are decimals, and so if you are correct, the mind is not  
real. But the mind is real, so you are not correct.





The brain is the public representation of the history, and as such,  
it can only be observed from the reduced 3p set of qualia. The 3p  
reduction may rationalize the appearance. From an absolute  
perspective, all phenomena are temporary partitions within the one  
strike of eternity.


OK.






So the statement above is just a statement of non-comp, not an  
argument for non comp, as it fails to give us what is that non  
computable real playing a role in cognition.


What does the machine say when we ask it why it can't understand pi  
without approximating it?


One machine can answer It seems that I can understand PI without  
approximating it. PI is the ratio of a circle's circumference to its  
diameter, and a circle is the locus of the points in a plane which  
share the same distance with respect to some point. Then the machine  
drew a circle on the ground and said, look, it seems PI is a tiny bit  
bigger than 3.







But there is something correct. Neither a computer nor a brain can  
simulate consciousness. Nor can a computer simulate the number one,  
or the number two. It has to borrow them from arithmetical truth.


Then why would your son in law's computer brain provide him with  
consciousness?


It is not the computer brain which provides him consciousness. The  
computer brain provides him a way to manifest his consciousness in  
your restaurant, and to get pleasant qualia from some good food (I  
hope). What provides the consciousness is God, or (arithmetical)  
truth. Nobody can program that, in the same sense that nobody can  
program the number one. But we can write programs making it possible  
to manifest the number one, or to make some consciousness manifest  
relatively to you.


Bruno






Craig

Bruno











http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: Rationals vs Reals in Comp

2013-04-24 Thread Bruno Marchal


On 23 Apr 2013, at 22:26, Craig Weinberg wrote:




On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:



On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com  
wrote:



If you think about your own vision, you can see millions of pixels  
constantly, you are aware of the full picture, but a computer can't  
do that: the CPU can only know about 32 or 64 pixels at a time,  
possibly multiplied by the number of cores, but it sees them as  
single bits, so in reality it can't be conscious of a full picture,  
not even of the full color at a single pixel.




He is making the same mistake Searle did regarding the Chinese  
room.  He is conflating what the CPU can see at one time (analogous  
to rule follower in Chinese room) with what the program can know.   
Consider the program of a neural network: it can be processed by a  
sequentially operating CPU processing one connection at a time, but  
the simulated network itself can see any arbitrary number of inputs  
at once.
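Jason's distinction can be shown directly: the loop below touches exactly one connection per step, yet the network's output depends on all of its inputs at once. A minimal sketch (the one-layer network, its weights, and the ReLU choice are all made up for illustration):

```python
def network_output(weights, biases, inputs):
    """Evaluate a one-layer network strictly one connection at a time."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        activation = bias
        for w, x in zip(neuron_weights, inputs):
            activation += w * x          # the CPU "sees" one connection here
        outputs.append(max(0.0, activation))  # ReLU nonlinearity
    return outputs

# Every input influences the result, though the CPU never held more
# than one weight-input pair at a time.
print(network_output([[1.0, 2.0], [0.5, -1.0]], [0.0, 0.0], [1.0, 1.0]))
# [3.0, 0.0]
```

The sequential CPU plays the role of the rule follower; the simulated network is the level at which all inputs are integrated.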


How does he propose OCR software can recognize letters if it can only  
see a single pixel at a time?


Who says OCR software can recognize letters? All that it needs to do  
is execute some algorithm sequentially and blindly against a table  
of expected values. There need not be any recognition of the  
character as a character at all, let alone any seeing. A  
program could convert a Word document into an input file for an OCR  
program without there ever being any optical activity - no camera,  
no screen caps, no monitor or printer at all. Completely in the  
dark, the bits of the Word file could be converted into the bits of  
an emulated optical scan, and presto, invisible optics.
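Craig's description of OCR as blind sequential matching against a table of expected values is easy to realize literally. In this hypothetical sketch (the 3x5 bitmaps and the template table are invented for illustration), recognition is nothing but a pixel-by-pixel tally:

```python
# Hypothetical 3x5 letter bitmaps, flattened row by row ('1' = ink).
TEMPLATES = {
    "I": "111" "010" "010" "010" "111",
    "L": "100" "100" "100" "100" "111",
    "T": "111" "010" "010" "010" "010",
}

def recognize(pixels):
    """Compare the input one pixel at a time against each stored template
    and return the letter with the most matching pixels."""
    best, best_score = None, -1
    for letter, template in TEMPLATES.items():
        score = sum(p == t for p, t in zip(pixels, template))
        if score > best_score:
            best, best_score = letter, score
    return best

print(recognize("111" "010" "010" "010" "111"))  # I
```

At no point does the loop see a letter; it only counts coincidences, which is Craig's point. Whether that observation generalizes to all computation is exactly what the rest of the thread disputes.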


Searle wasn't wrong. The whole point of the Chinese Room is to point  
out that computation is a disconnected, anesthetic function which is  
accomplished with no need for understanding of larger contexts.


Searle might be right on non-comp, but his argument has been shown  
invalid by many.


Bruno





Craig


Jason





http://iridia.ulb.ac.be/~marchal/







Re: Rationals vs Reals in Comp

2013-04-24 Thread Craig Weinberg


On Wednesday, April 24, 2013 8:50:07 AM UTC-4, Bruno Marchal wrote:


 On 23 Apr 2013, at 22:26, Craig Weinberg wrote:



 On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels 
 constantly, you are aware of the full picture, but a computer can't do 
 that: the CPU can only know about 32 or 64 pixels at a time, possibly 
 multiplied by the number of cores, but it sees them as single bits, so 
 in reality it can't be conscious of a full picture, not even of the 
 full color at a single pixel.


   


 He is making the same mistake Searle did regarding the Chinese room.  He 
 is conflating what the CPU can see at one time (analogous to rule follower 
 in Chinese room) with what the program can know.  Consider the program of a 
 neural network: it can be processed by a sequentially operating CPU 
 processing one connection at a time, but the simulated network itself can 
 see any arbitrary number of inputs at once.

 How does he propose OCR software can recognize letters if it can only see a 
 single pixel at a time?


 Who says OCR software can recognize letters? All that it needs to do is 
 execute some algorithm sequentially and blindly against a table of expected 
 values. There need not be any recognition of the character as a character 
 at all, let alone any seeing. A program could convert a Word document 
 into an input file for an OCR program without there ever being any optical 
 activity - no camera, no screen caps, no monitor or printer at all. 
 Completely in the dark, the bits of the Word file could be converted into 
 the bits of an emulated optical scan, and presto, invisible optics.

 Searle wasn't wrong. The whole point of the Chinese Room is to point out 
 that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts. 


 Searle might be right on non-comp, but his argument has been shown invalid 
 by many.


I'm surprised that you would try to pass that off as truth Bruno. You have 
so much tolerance for doubt and uncertainty, yet you claim that it has 
been shown invalid. In whose opinion?

This page http://plato.stanford.edu/entries/chinese-room/ is quite 
thorough, and lists the most well known Replies, yet it concludes:

There continues to be significant disagreement about what processes create 
meaning, understanding, and consciousness, as well as what can be proven a 
priori by thought experiments.

The replies listed are not at all impressive to me, and are all really 
variations on the same sophistry. Obviously there is a difference between 
understanding a conversation and simply copying a conversation in another 
language. There is a difference between painting a masterpiece and doing a 
paint by numbers or spraypainting through a stencil. This is what computers 
and machines are for - to free us from having to work and think ourselves. 
If the machine had to think and feel that it was working like a person 
does, then it would want servants also. Machines don't want servants 
though, because they don't know that they are working, and they function 
without having to think or exert effort.

Craig


 Bruno




 Craig

  

 Jason


  
  


 http://iridia.ulb.ac.be/~marchal/









Re: Rationals vs Reals in Comp

2013-04-24 Thread Brian Tenneson
On Wed, Apr 24, 2013 at 4:46 AM, Craig Weinberg whatsons...@gmail.com wrote:



 On Wednesday, April 24, 2013 4:31:55 AM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 8:53 PM, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, April 23, 2013 11:37:14 PM UTC-4, Brian Tenneson wrote:

 You keep claiming that we understand this and that or know this and
 that.  And, yes, saying something along the lines of "we know we understand
 because we care about what we understand" *is* circular.


 No, it's not. I'm saying that it is impossible to doubt we understand.
 It's just playing with words. My point about caring is that it makes it
 clear that we intuitively make a distinction between merely being aware of
 something and understanding it.

 I'll try to explain how "we know we understand because we care about
 what we understand" is circular.  Note the use of the word "understand"
 towards the left edge of the statement in quotes, followed by another
 instance of the word "understand".


 You should read it as "we know we understand because we care about X". My
 only intention in repeating the word was to make it clear that the thing
 that we care about is the thing that we understand. It is the caring which
 is a symptom of understanding. The absence of that symptom of caring in a
 machine indicates to me that there is a lack of understanding. Things which
 understand can care, but things that cannot care cannot understand.

 Now that isn't circular but that's a poor sign of understanding.  I care
very much for women but I can't say that I understand them.  I understand
the rules of English grammar and punctuation but care little for it.  I'm
sure you can think of examples.  So the two are not correlated, caring and
understanding.  Caring is not something that can really be measured in
humans while caring can be measured in machines/computers.  For example,
one might define "caring about something" to mean thinking a lot about
it, where "a lot" means some threshold, like over 50% of resources dedicated
to thinking about something for a while (a nonzero, finite span of time).
These days, we can multitask and look up the resource monitor to see what
the CPU cares about, if anything.  If it doesn't care about anything, it
uses close to 0% and is called idle.  But if I am running an intensive
computation while typing this and look at my resource monitor, I can see
measurements indicating that my CPU cares much more about the intensive
computation rather than what I am typing.  Does that mean the CPU
understands what it is doing?  No.  Likewise with human brains: we can care
a lot about something but have little to no understanding of it.
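Brian's operational definition above — a processor "cares" about whatever consumes more than some threshold of its resources over a measurement window — can be toy-modeled as follows (the function name, task names, and the 50% threshold are illustrative, not from the thread):

```python
# Toy "resource monitor": a process "cares" about whichever tasks
# consume more than a threshold share of its cycles in a window.

def caring(task_cycles, threshold=0.5):
    """Return the set of tasks receiving more than `threshold`
    of the total cycles in this measurement window."""
    total = sum(task_cycles.values())
    if total == 0:
        return set()  # ~0% usage: the CPU is "idle" and cares about nothing
    return {t for t, c in task_cycles.items() if c / total > threshold}

window = {"intensive computation": 870, "typing": 90, "os housekeeping": 40}
print(caring(window))  # the intensive computation dominates the window
```

On this definition the intensive computation is the only thing "cared about" — which is exactly the sense in which, as Brian notes, such caring implies no understanding.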



 This is analogous to saying "We are Unicorns because we care about Unicorns."


 No, this is analogous  to you not understanding what I mean and
 unintentionally making a straw man of my argument.


Well, be honest here, you changed a phrasing.  You went from
(paraphrasing) "we know we understand because we care that we understand"
to "we know we understand because we care about X". Correct me if I'm
wrong.  The first phrasing is meaningless because of the second use of the
word "understand" (so you might as well be talking about unicorns).  The
first phrasing gives no insight into what understanding is and why we have
it but computers can't.  The problem with your new and improved phrasing is
that it's a doctored definition of caring; you pick a definition related to
understanding such that it (the definition of 'caring') will
*automatically*fail for anything other than a non-apathetic human, in
essence, assuming
computers don't care about anything when, in fact, doing what they are
programmed to do (much like a human, I might add) is the machine-equivalent
of them caring about what they are told to do.



 Doesn't prove unicorns exist; doesn't prove understanding exists (i.e.,
 that any human understands anything). If this is all sophistry then it
 should be easily dismissible. And yes, playing with words is what people
 normally do, wittingly or unwittingly, and that lends more evidence to the
 notion that we are processors in a Chinese room.


 The position that we only think we understand or that consciousness is an
 illusion is, in my view, the desperate act of a stubborn mind. Truly, you
 are sawing off the branch that you are sitting on to suggest that we are
 incapable of understanding the very conversation that we are having.


Well, calling a conclusion the desperate act of a stubborn mind, rather than
supplying some decent rejoinder, is also the desperate act of a stubborn mind,
wouldn't you say?  While sawing off the branch you are sitting on is a
very clever arrangement of letters (can I use it in a future poem?), it
falls short of being an argument at all or even persuasive. We can get
along just fine by thinking that we understand this conversation.  But
knowing that we understand this conversation?  I'd like to see that
proved.  Until then, I will continue to think that 

Re: Rationals vs Reals in Comp

2013-04-24 Thread Bruno Marchal


On 23 Apr 2013, at 21:29, Brian Tenneson wrote:


Interesting read.

The problem I have with this is that in set theory, there are
several examples of sets that owe their existence to axioms alone. In
other words, there is an axiom that states there is a set X such
that (blah, blah, blah). How are we to know which sets/notions are
meaningless concepts?  Because to me, it sounds like Doron's
personal opinion that some concepts are meaningless while other
concepts, like huge, unknowable, and tiny, are not meaningless.  Is
there anything that would remove the opinion portion of this?


How is the second axiom an improvement while containing words like
huge, unknowable, and tiny?


quote
So I deny even the existence of the Peano axiom that every integer  
has a successor.


I guess the author means that he denies the truth of the Peano axiom.






Eventually we would get an overflow error in the big computer in the sky,
and the sum and product of any two integers is well-defined only if the
result is less than p, or, if one wishes, one can compute them modulo p.
Since p is so large, this is not a practical problem, since the overflow
in our earthly computers comes so much sooner than the overflow errors in
the big computer in the sky.

end quote

What if the big computer in the sky is infinite?


Indeed.





Or if all computers are finite in capacity yet there is no largest  
computer?


Indeed.





What if NO computer activity is relevant to the set of numbers that  
exist mathematically?


Indeed.

Eventually it depends on the theory we start from. But to start the
reasoning in comp, we have to assume at least one universal system (in
the Church-Turing sense). If not, we don't get it. It remains a
logical possibility to use some physicalist ultrafinitism, but that is
a heavy price to pay just to drop an explanation of the origin of the
consciousness/physical-realities coupling. And by MGA + Occam, unless
there is a flaw, this cannot work with comp.


Bruno






On Monday, April 22, 2013 11:28:46 AM UTC-7, smi...@zonnet.nl wrote:
See here:

http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf

Saibal












http://iridia.ulb.ac.be/~marchal/







Re: Rationals vs Reals in Comp

2013-04-24 Thread Bruno Marchal


On 24 Apr 2013, at 15:40, Craig Weinberg wrote:




On Wednesday, April 24, 2013 8:50:07 AM UTC-4, Bruno Marchal wrote:

On 23 Apr 2013, at 22:26, Craig Weinberg wrote:




On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:



On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg  
whats...@gmail.com wrote:



If you think about your own vision, you can see millions of pixels  
constantly, you are aware of the full picture, but a computer can't  
do that, the cpu can only know about 32 or 64 pixels, eventually  
multiplied by number of kernels, but it see them as single bit's so  
in reality the can't be conscious of a full picture, not even of  
the full color at a single pixel.




He is making the same mistake Searle did regarding the Chinese  
room.  He is conflating what the CPU can see at one time (analogous  
to rule follower in Chinese room) with what the program can know.   
Consider the program of a neural network: it can be processed by a  
sequentially operating CPU processing one connection at a time, but  
the simulated network itself can see any arbitrary number of inputs  
at once.


How does he propose OCR software can recognize letters if it can only
see a single pixel at a time?
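Jason's point about sequential processing can be sketched in miniature: the loop below (standing in for the CPU) only ever touches one weight and one input per step, yet every output of the simulated network ends up depending on all inputs at once. The network and weights are invented for illustration:

```python
# A toy one-layer network evaluated strictly one connection at a time.
# The inner loop (the "CPU") sees only a single weight and input per step,
# but each output unit aggregates every input simultaneously.

def forward(weights, inputs):
    """weights[i][j] is the connection from input j to output i."""
    outputs = [0.0] * len(weights)
    for i, row in enumerate(weights):
        for j, w in enumerate(row):      # one connection per step
            outputs[i] += w * inputs[j]
    return outputs

w = [[1.0, -1.0, 0.5],
     [0.0,  2.0, 1.0]]
print(forward(w, [1.0, 1.0, 2.0]))  # [1.0, 4.0]
```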


Who says OCR software can recognize letters? All that it needs to  
do is execute some algorithm sequentially and blindly against a  
table of expected values. There need not be any recognition of the  
character as a character at at all, let alone any seeing. A  
program could convert a Word document into an input file for an OCR  
program without there ever being any optical activity - no camera,  
no screen caps, no monitor or printer at all. Completely in the  
dark, the bits of the Word file could be converted into the bits of  
an emulated optical scan, and presto, invisible optics.
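The picture of executing "some algorithm sequentially and blindly against a table of expected values" corresponds roughly to naive template matching; here is a toy sketch (the 3x3 glyph table and all names are invented for illustration):

```python
# Naive template-matching "OCR": compare a pixel grid against a table
# of expected glyphs and return the closest match. No camera or screen
# is involved; the input could come from converted document bits.

TEMPLATES = {
    "I": ["010", "010", "010"],
    "L": ["100", "100", "111"],
}

def mismatches(a, b):
    """Count differing pixels between two glyph grids."""
    return sum(p != q for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def recognize(glyph):
    """Pick the template with the fewest mismatched pixels."""
    return min(TEMPLATES, key=lambda ch: mismatches(TEMPLATES[ch], glyph))

noisy_L = ["100", "110", "111"]  # an "L" with one pixel flipped
print(recognize(noisy_L))        # prints L
```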


Searle wasn't wrong. The whole point of the Chinese Room is to  
point out that computation is a disconnected, anesthetic function  
which is accomplished with no need for understanding of larger  
contexts.


Searle might be right on non-comp, but his argument has been shown  
invalid by many.


I'm surprised that you would try to pass that off as truth Bruno.  
You have so much tolerance for doubt and uncertainty, yet you claim  
that it has been shown invalid. In whose opinion?


It is not an opinion, it is a fact that you can verify if patient
enough. The refutation is already in Dennett and Hofstadter's Mind's I
book. Searle concludes that the man in the room does not understand
Chinese, and that is right, but that cannot refute comp, as the man
in the room plays the role of a CPU, and not of the high-level program
on which the consciousness of the Chinese speaker supervenes. It is a
simple confusion of levels.






This page http://plato.stanford.edu/entries/chinese-room/ is quite  
thorough, and lists the most well known Replies, yet it concludes:


There continues to be significant disagreement about what processes  
create meaning, understanding, and consciousness, as well as what  
can be proven a priori by thought experiments.


Thought experiments are like proofs in math. Some are valid, some are
not, some are fatally invalid, and some can be corrected or made more
precise. The debate often focuses on the truth of comp and non-comp,
and that sometimes involves opinion. I don't really play that game.






The replies listed are not at all impressive to me, and are all  
really variations on the same sophistry. Obviously there is a  
difference between understanding a conversation and simply copying a  
conversation in another language. There is a difference between  
painting a masterpiece and doing a paint by numbers or spraypainting  
through a stencil. This is what computers and machines are for - to  
free us from having to work and think ourselves. If the machine had  
to think and feel that it was working like a person does, then it  
would want servants also. Machines don't want servants though,  
because they don't know that they are working, and they function  
without having to think or exert effort.


And this is begging the question.

Bruno






Craig


Bruno





Craig


Jason





http://iridia.ulb.ac.be/~marchal/





Re: Rationals vs Reals in Comp

2013-04-24 Thread Craig Weinberg


On Wednesday, April 24, 2013 10:09:44 AM UTC-4, Brian Tenneson wrote:



 On Wed, Apr 24, 2013 at 4:46 AM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, April 24, 2013 4:31:55 AM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 8:53 PM, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, April 23, 2013 11:37:14 PM UTC-4, Brian Tenneson wrote:

 You keep claiming that we understand this and that or know this and 
 that.  And, yes, saying something along the lines of we know we 
 understand 
 because we care about what we understand *is* circular.  


 No, it's not. I'm saying that it is impossible to doubt we understand. 
 It's just playing with words. My point about caring is that it makes it 
 clear that we intuitively make a distinction between merely being aware of 
 something and understanding it.

 I'll try to explain how  we know we understand because we care about 
 what we understand is circular.  Note the use of the word understand 
 towards the left edge of the statement in quotes followed by another 
 instance of the word understand. 


 You should read it as we know we understand because we care about X. My 
 only intention in repeating the word was to make it clear that the thing 
 that we care about is the thing that we understand. It is the caring which 
 is a symptom of understanding. The absence of that symptom of caring in a 
 machine indicates to me that there is a lack of understanding. Things which 
 understand can care, but things that cannot care cannot understand.

 Now that isn't circular but that's a poor sign of understanding.  I care 
 very much for women but I can't say that I understand them.


That's a cliche. You may not be able to understand women completely, but 
you are not likely to confuse them with a sack of potatoes in a dress. With 
a computer, the dress might be all that a security camera search engine 
might look for, and may very well categorize a sack of potatoes as a woman 
if it happens to be wearing a dress.
 

   I understand the rules of English grammar and punctuation but care 
 little of it.  


Yes, you don't have to care about it, but you can care about it if you want 
to. A machine does not have that option. It can't try harder to follow 
proper grammar, it can only assign a priority to the task. It has no 
feeling for which tasks are assigned which priority, which is the entire 
utility of machines.
 

 I'm sure you can think of examples.  So the two are not correlated, caring 
 and understanding. 


Can you explain why the word understanding is a synonym for kindness and 
caring? A coincidence? 
 

 Caring is not something that can really be measured in humans while caring 
 can be measured in machines/computers.


Give me a break.
 

   For example, one might define caring about something means it is 
 thinking a lot about it


You might define warm feelings by the onset of influenza but that is a 
false equivalence.
 

 , where a lot means some threshold like over 50% resources are dedicated 
 to think about something for a while (a nonzero, finite span of time).  
 These days, we can multitask and look up the resource monitor to see what 
 the CPU cares about, if anything.


That has nothing whatsoever to do with caring. Does the amount of money in
your wallet tell you how much your wallet values money?
 

 If it doesn't care about anything, it uses close to 0% and is called idle. 


Next you are going to tell me that when a stuffed animal doesn't eat 
anything it must be because it is full - but we have no way of knowing if 
we are hungry ourselves.
 

 But if I am running an intensive computation while typing this and look at 
 my resource monitor, I can see measurements indicating that my CPU cares 
 much more about the intensive computation rather than what I am typing.  
 Does that mean the CPU understands what it is doing?  No.  Likewise with 
 human brains: we can care a lot about something but have little to no 
 understanding of it.


Your entire argument is a defense of the Pathetic fallacy. Nothing you have 
said could not apply to any inanimate object, cartoon, abstract concept 
etc. Anyone can say 'you can't prove ice cream isn't melting because it's 
sad'. It's ridiculous. Find the universe. It is more interesting than 
making up stories about CPUs' cares, kindnesses, and understanding.

 


  This is analogous to saying "We are Unicorns because we care about Unicorns."


 No, this is analogous  to you not understanding what I mean and 
 unintentionally making a straw man of my argument. 


 Well, be honest here, you changed a phrasing.  You went from 
 (paraphrasing)  we know we understand because we care that we understand 
 to You know we understand because we care about X. Correct me if I'm 
 wrong.  


Correcting you. You're wrong. What I said was "Because we care about what
we understand, and we identify with it personally."

You misinterpreted it, then accused me of meaning what you said, 

Re: Rationals vs Reals in Comp

2013-04-24 Thread Brian Tenneson
I probably shouldn't be talking to someone who thinks distinguishing a sack
of potatoes from a woman means understanding women.

News flash: "understand" tacitly implies "understand completely".


Re: Rationals vs Reals in Comp

2013-04-24 Thread Craig Weinberg


On Wednesday, April 24, 2013 11:58:08 AM UTC-4, Brian Tenneson wrote:

 I probably shouldn't be talking to someone who thinks distinguishing a 
 sack of potatoes from a woman means understanding women.  

 News flash: "understand" tacitly implies "understand completely".


If you define complete understanding as impossible a priori, and you insist 
that understanding must be complete, then you have just removed the word 
from the English language.

 



Re: Rationals vs Reals in Comp

2013-04-24 Thread smitra
Perhaps one should define things such that they can be implemented by
any arbitrary finite state machine, no matter how large. Then, while
there may be no limit to the capacity of finite state machines, each
such machine has a finite capacity, and therefore in none of these
machines can one implement the Peano axiom that every integer has a
successor. But some other properties of integers are valid if they are
valid in every finite state machine that implements arithmetic modulo
a prime number.
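Saibal's suggestion — do all arithmetic modulo a prime, so that every result stays representable in a finite machine — can be sketched like this (p here is a small illustrative prime, not the huge one Doron imagines):

```python
# Integers inside a finite machine: all arithmetic is done modulo a
# fixed prime P, so no "overflow" ever occurs; large results wrap.

P = 101  # small illustrative prime standing in for Doron's huge p

def add(a, b):
    return (a + b) % P

def mul(a, b):
    return (a * b) % P

# Results agree with ordinary arithmetic whenever they stay below P:
print(add(40, 50))   # 90, same as the true sum
print(mul(20, 30))   # 600 wraps around: 600 % 101 = 95
```

As the thread notes, in such a system "every integer has a successor" fails in the ordinary sense: the successor of P - 1 wraps back to 0.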


I'm not into the foundations of math; I'll leave that to Bruno :) . But
since we are machines with a finite brain capacity, and even the entire
visible universe has only a finite information content, we should be
able to replace real analysis with discrete analysis, as explained by
Doron.


Saibal


Quoting Brian Tenneson tenn...@gmail.com:


Interesting read.

The problem I have with this is that in set theory, there are several
examples of sets who owe their existence to axioms alone. In other words,
there is an axiom that states there is a set X such that (blah, blah,
blah). How are we to know which sets/notions are meaningless concepts?
Because to me, it sounds like Doron's personal opinion that some concepts
are meaningless while other concepts like huge, unknowable, and tiny are
not meaningless.  Is there anything that would remove the opinion portion
of this?

How is the second axiom an improvement while containing words like huge,
unknowable, and tiny??

quote
So I deny even the existence of the Peano axiom that every integer has a
successor. Eventually
we would get an overflow error in the big computer in the sky, and the sum
and product of any
two integers is well-defined only if the result is less than p, or if one
wishes, one can compute them
modulo p. Since p is so large, this is not a practical problem, since the
overflow in our earthly
computers comes so much sooner than the overflow errors in the big computer
in the sky.
end quote

What if the big computer in the sky is infinite? Or if all computers are
finite in capacity yet there is no largest computer?

What if NO computer activity is relevant to the set of numbers that exist
mathematically?


On Monday, April 22, 2013 11:28:46 AM UTC-7, smi...@zonnet.nl wrote:


See here:

http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf

Saibal




















Re: Rationals vs Reals in Comp

2013-04-24 Thread Craig Weinberg


On Wednesday, April 24, 2013 8:49:00 AM UTC-4, Bruno Marchal wrote:


 On 23 Apr 2013, at 22:07, Craig Weinberg wrote:



 On Tuesday, April 23, 2013 5:11:06 AM UTC-4, Bruno Marchal wrote:


 On 22 Apr 2013, at 19:14, Craig Weinberg wrote:

 A quote from someone on Facebook. Any comments?

 Computers can only do computations for rational numbers, not for real 
 numbers. Every number in a computer is represented as rational. No computer 
 can represent pi or any other real number... So even when consciousness can 
 be explained by computations, no computer can actually simulate it.



 You can represent many real numbers by the program computing their 
 approximation. You can fan constructively on all real numbers (like the UD 
 does notably).

 Only if a brain uses some non computable real number as an oracle, with 
 all decimals given in one strike, then we cannot simulate it with Turing 
 machine, but this needs to make the mind actually infinite.


 If the mind is what is real, then there are no decimals. 


 But there are decimals, and so if you are correct, the mind is not real. 
 But the mind is real, so you are not correct.


How do you know that the mind uses decimals? It seems that our natural 
understanding is primarily in ratios and real number type concepts. 
Decimals could be a notion derived from stepping down experience through 
the body, but the native experiential fabric of all has no decimal content.





 The brain is the public representation of the history, and as such, it can 
 only be observed from the reduced 3p set of qualia. The 3p reduction may 
 rationalize the appearance. From an absolute perspective, all phenomena are 
 temporary partitions within the one strike of eternity.


 OK.





 So the statement above is just a statement of non-comp, not an argument 
 for non comp, as it fails to give us what is that non computable real 
 playing a role in cognition.


 What does the machine say when we ask it why it can't understand pi 
 without approximating it?


 One machine can answer: It seems that I can understand PI without 
 approximating it. PI is the ratio of the length (circumference) of a circle 
 to its diameter, and a circle is the locus of the points in a plane which 
 share the same distance with respect to some fixed point. Then the machine 
 drew a circle on the ground and said: look, it seems PI is a tiny bit 
 bigger than 3.


Are there any machines that do as we do, and say 'pi is the unchanging 
ratio between the distance across the circle compared to the distance 
around it, and a circle is a self-evident pattern which manifests literally 
as [circle shape] and figuratively as any pattern of returning to the 
starting point repeatedly'?
  




  


 But there is something correct. Neither a computer nor a brain can simulate 
 consciousness. Nor can a computer simulate the number one, or the number 
 two. It has to borrow them from arithmetical truth.


 Then why would your son in law's computer brain provide him with 
 consciousness? 


 It is not the computer brain which provides him consciousness. The 
 computer brain provides him a way to manifest his consciousness in your 
 restaurant, and to get pleasant qualia of some good food (I hope). What 
 provides the consciousness is God, or (arithmetical) truth. Nobody can 
 program that, in the same sense that nobody can program the number one. But 
 we can write programs making it possible to manifest the number one, or to 
 make some consciousness manifest relatively to you.


Ok, but why assume that it is arithmetical truth which is God rather than 
feeling? Feeling and being are an Art. Doing and knowing are a science. 
Science makes sense as a derivative of art, but art makes no sense as a 
function of science. It isn't necessary, and arithmetic truth is about the 
necessary. Even if we say that arithmetic truth is art, it is certainly 
only one kind of art among many.

If I'm right, and I think I have every reason to guess that I am, then 
arithmetic is a feeling about doing which is one step removed from both 
feeling and moving - a step which can provide a clarity and universality 
that is unavailable in any other form of understanding, but it is precisely 
that precision, that clarity and universality which comes at the cost of 
intimacy with all that feels and does. Arithmetic is detachment from 
physics and psyche, not the source. Multisense realism is the idea that 
your view, the Platonic view, which places arithmetic at the top, or the 
Idealist view which places psyche at the top, or the Materialist view are 
all three valid almost entirely, and that through each of them, a 
self-consistent truthful view of the universe can be validated. Any of 
these three views can be used to explain the other two, but only the view 
which explains all three in terms of sensory-motor participation, aka 
being-doing or sense can explain all three at once without over-signifying 
one and under-signifying the other. God cannot be a 

Re: Rationals vs Reals in Comp

2013-04-23 Thread Bruno Marchal


On 22 Apr 2013, at 19:14, Craig Weinberg wrote:


A quote from someone on Facebook. Any comments?

Computers can only do computations for rational numbers, not for  
real numbers. Every number in a computer is represented as rational.  
No computer can represent pi or any other real number... So even  
when consciousness can be explained by computations, no computer can  
actually simulate it.



You can represent many real numbers by the program computing their  
approximation. You can fan constructively on all real numbers (like  
the UD does notably).


Only if a brain uses some non computable real number as an oracle,  
with all decimals given in one strike, then we cannot simulate it  
with Turing machine, but this needs to make the mind actually infinite.


So the statement above is just a statement of non-comp, not an  
argument for non comp, as it fails to give us what is that non  
computable real playing a role in cognition.


But there is something correct. Neither a computer nor a brain can simulate  
consciousness. Nor can a computer simulate the number one, or the  
number two. It has to borrow them from arithmetical truth.
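
The idea of representing a real number by the program that computes its 
approximations can be sketched in Python (an illustrative aside, not part of 
the original post; Machin's formula is just one convenient choice):

```python
from fractions import Fraction

def pi_approx(n):
    """Return a rational within 10**-n of pi, via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239).

    The finite function itself is the "representation" of pi: no
    infinite decimal is ever stored; digits are produced on demand.
    """
    def atan_inv(x, terms):
        # arctan(1/x) = sum_{k>=0} (-1)**k / ((2k+1) * x**(2k+1))
        return sum(Fraction((-1) ** k, (2 * k + 1) * x ** (2 * k + 1))
                   for k in range(terms))
    terms = n + 5  # crude but sufficient: both series converge geometrically
    return 16 * atan_inv(5, terms) - 4 * atan_inv(239, terms)

print(float(pi_approx(10)))  # 3.141592653589793
```

In this sense a computer handles pi exactly: it stores the rule, and any 
particular decimal expansion is only ever requested to finite depth.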


Bruno







--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en 
.

For more options, visit https://groups.google.com/groups/opt_out.




http://iridia.ulb.ac.be/~marchal/







Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg

On Monday, April 22, 2013 10:23:04 PM UTC-4, Russell Standish wrote:

 On Mon, Apr 22, 2013 at 08:06:29PM +0200, Telmo Menezes wrote: 
  
  
  On 22 Apr 2013, at 19:14, Craig Weinberg whats...@gmail.com wrote: 
  
   A quote from someone on Facebook. Any comments? 
   
   Computers can only do computations for rational numbers, not for real 
 numbers. Every number in a computer is represented as rational. No computer 
 can represent pi or any other real number... So even when consciousness can 
 be explained by computations, no computer can actually simulate it. 
  
  Of course it can, the same way it represents the letter A, as some 
 sequence of bits. And it can perform symbolic computations with it. It can 
  calculate pi/2 + pi/2 = pi and so on. 
  
  

 To expand a bit on Telmo's comment, the computer represents pi, e, 
 sqrt(2) and so on as a set of properties, or algorithms. Computers can 
 happily compute exactly with any computable number (which are of 
 measure zero in the reals). They cannot represent nondescribable 
 numbers, and cannot compute with noncomputable numbers (such as 
 Chaitin's Omega). 

 Also, computers do not compute with rational numbers, they compute 
 with integers (often of fixed word size, but that restriction can 
 easily be lifted, at the cost of performance). Rational numbers can 
 obviously be represented as a pair of integers. What are called real 
 numbers in some computer languages, or more accurately float numbers 
 in other computer languages, are actually integers that have been 
 mapped in a non-uniform way onto subsets of the real number 
 line. Their properties are such that they efficiently generate 
 adequate approximations to continuous mathematical models. There is a 
 whole branch of mathematics devoted to determining what adequate 
 means in this context. 
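
Both of Russell's observations are easy to exhibit directly (a small Python 
illustration, not part of the original message):

```python
from fractions import Fraction
import struct

# A rational number is literally a pair of integers, and arithmetic
# on rationals is exact:
r = Fraction(1, 3)
print(r.numerator, r.denominator)        # 1 3
assert Fraction(1, 3) + Fraction(2, 3) == 1

# An IEEE-754 "double" is a 64-bit integer mapped non-uniformly onto
# the real line; reinterpreting the bytes shows the underlying integer:
bits = struct.unpack('<Q', struct.pack('<d', 0.1))[0]
print(hex(bits))                         # 0x3fb999999999999a

# ...which is why 0.1 is not the rational 1/10 but a nearby dyadic rational:
print(Fraction(0.1))                     # 3602879701896397/36028797018963968
```

The mapping is non-uniform in the sense Russell describes: the spacing 
between representable floats doubles with each power of two.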


I think there are some clues there as to why computation can never generate 
awareness. While a computer can approximate the reals to an arbitrary 
degree of precision, we must delimit that degree programmatically.  A 
machine has no preference about what is adequate, and can compute decimal 
places for a thousand years without coming any closer to conceiving of the 
particular significance of pi to circle geometry.  

I'll paste the next comment from the OP of the first. I think it's 
interesting that he also has noticed the connection between biological 
origins in the single cell and non-computability, but he is looking at it 
from QM perspective. My view is to focus on the single cell origin as a 
single autopoietic event origin...an event which lasts an entire lifetime.

If you think about your own vision, you can see millions of pixels 
 constantly; you are aware of the full picture. But a computer can't do 
 that: the cpu can only know about 32 or 64 pixels at a time, eventually 
 multiplied by the number of kernels, and it sees them as single bits, so in 
 reality it can't be conscious of a full picture, not even of the full color 
 at a single pixel.

 This is simply a HW problem you can't get around with the current 
 technology. With Quantum Computing it may be possible to make large models 
 where all pixels are part of one structure built on entanglement.

 Man comes from a single cell and that means that entanglement could bind 
 the cells together, including our cells dedicated to building the internal 
 cinema. But it is still not enough to create the necessary understanding of 
 the picture.

 Gödel's theorem states that there are problems that are unsolvable within 
 the system, and that you need something from outside the system. Computers 
 are fully within the system, and as man can solve these problems he must 
 have something from outside this system. This understanding you wouldn't 
 get if you don't use Gödel's theorem, so you put fences up around you, 
 hindering the expansion of your understanding.

 BTW I am a computer scientist educated at Datalogical Institute at the 
 University of Copenhagen, and have worked with Artificial Intelligence, 
 Numerical Analysis and Combinatorial Optimization, all ways to bring pseudo 
 intelligence to computers.


Craig
 


 Cheers 

 -- 

  

 Prof Russell Standish  Phone 0425 253119 (mobile) 
 Principal, High Performance Coders 
 Visiting Professor of Mathematics  hpc...@hpcoders.com.au 
 University of New South Wales  http://www.hpcoders.com.au 
  




Re: Rationals vs Reals in Comp

2013-04-23 Thread Brian Tenneson
Interesting read.

The problem I have with this is that in set theory, there are several 
examples of sets that owe their existence to axioms alone. In other words, 
there is an axiom that states there is a set X such that (blah, blah, 
blah). How are we to know which sets/notions are meaningless concepts?  
Because to me, it sounds like Doron's personal opinion that some concepts 
are meaningless while other concepts like huge, unknowable, and tiny are 
not meaningless.  Is there anything that would remove the opinion portion 
of this?

How is the second axiom an improvement while containing words like "huge", 
"unknowable", and "tiny"?

quote
So I deny even the existence of the Peano axiom that every integer has a 
successor. Eventually
we would get an overflow error in the big computer in the sky, and the sum 
and product of any
two integers is well-defined only if the result is less than p, or if one 
wishes, one can compute them
modulo p. Since p is so large, this is not a practical problem, since the 
overflow in our earthly
computers comes so much sooner than the overflow errors in the big computer 
in the sky.
end quote
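
The finitist arithmetic described in the quote is easy to model (a toy 
sketch; the p below is tiny compared to Zeilberger's, and purely 
illustrative):

```python
P = 2**61 - 1  # a stand-in for the "huge p"; any large prime works here

def add_mod(a, b, p=P):
    # The sum is well-defined only up to p: larger results wrap around,
    # like an overflow in the "big computer in the sky".
    return (a + b) % p

def mul_mod(a, b, p=P):
    return (a * b) % p

# Even the "successor" eventually wraps, which is exactly the denial of
# the Peano axiom that every integer has one:
print(add_mod(P - 1, 5))  # 4
```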

What if the big computer in the sky is infinite? Or if all computers are 
finite in capacity yet there is no largest computer?

What if NO computer activity is relevant to the set of numbers that exist 
mathematically? 


On Monday, April 22, 2013 11:28:46 AM UTC-7, smi...@zonnet.nl wrote:

 See here: 

 http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf 

 Saibal 










Re: Rationals vs Reals in Comp

2013-04-23 Thread Jason Resch
On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whatsons...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels
 constantly; you are aware of the full picture. But a computer can't do
 that: the cpu can only know about 32 or 64 pixels at a time, eventually
 multiplied by the number of kernels, and it sees them as single bits, so in
 reality it can't be conscious of a full picture, not even of the full color
 at a single pixel.




He is making the same mistake Searle did regarding the Chinese room.  He is
conflating what the CPU can see at one time (analogous to rule follower in
Chinese room) with what the program can know.  Consider the program of a
neural network: it can be processed by a sequentially operating CPU
processing one connection at a time, but the simulated network itself can
see any arbitrary number of inputs at once.
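
The point can be made with a toy example (hypothetical weights, purely 
illustrative): the loop below is strictly sequential, one connection at a 
time, yet the simulated neuron's output is a function of the whole input at 
once.

```python
# A single simulated neuron over a flattened 3x3 "image".
inputs  = [1, 0, 1, 0, 1, 0, 1, 0, 1]                    # the X pattern
weights = [0.5, -0.25, 0.5, -0.25, 1.0, -0.25, 0.5, -0.25, 0.5]

activation = 0.0
for x, w in zip(inputs, weights):   # the CPU "sees" one pixel-weight pair per step
    activation += x * w

fires = activation > 1.0            # but the threshold tests the *global* sum
print(activation, fires)            # 3.0 True
```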

How does he propose OCR software can recognize letters if it can only see a
single pixel at a time?

Jason





Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 5:11:06 AM UTC-4, Bruno Marchal wrote:


 On 22 Apr 2013, at 19:14, Craig Weinberg wrote:

 A quote from someone on Facebook. Any comments?

 Computers can only do computations for rational numbers, not for real 
 numbers. Every number in a computer is represented as rational. No computer 
 can represent pi or any other real number... So even when consciousness can 
 be explained by computations, no computer can actually simulate it.



 You can represent many real numbers by the program computing their 
 approximation. You can fan constructively on all real numbers (like the UD 
 does notably).

 Only if a brain uses some non computable real number as an oracle, with 
 all decimals given in one strike, then we cannot simulate it with Turing 
 machine, but this needs to make the mind actually infinite.


If the mind is what is real, then there are no decimals. The brain is the 
public representation of the history, and as such, it can only be observed 
from the reduced 3p set of qualia. The 3p reduction may rationalize the 
appearance. From an absolute perspective, all phenomena are temporary 
partitions within the one strike of eternity.


 So the statement above is just a statement of non-comp, not an argument 
 for non comp, as it fails to give us what is that non computable real 
 playing a role in cognition.


What does the machine say when we ask it why it can't understand pi without 
approximating it?
 


 But there is something correct. Neither a computer nor a brain can simulate 
  consciousness. Nor can a computer simulate the number one, or the number 
  two. It has to borrow them from arithmetical truth.


Then why would your son in law's computer brain provide him with 
consciousness? 

Craig


 Bruno








 http://iridia.ulb.ac.be/~marchal/









Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels 
  constantly; you are aware of the full picture. But a computer can't do 
  that: the cpu can only know about 32 or 64 pixels at a time, eventually 
  multiplied by the number of kernels, and it sees them as single bits, so 
  in reality it can't be conscious of a full picture, not even of the full 
  color at a single pixel.

   


 He is making the same mistake Searle did regarding the Chinese room.  He 
 is conflating what the CPU can see at one time (analogous to rule follower 
 in Chinese room) with what the program can know.  Consider the program of a 
 neural network: it can be processed by a sequentially operating CPU 
 processing one connection at a time, but the simulated network itself can 
 see any arbitrary number of inputs at once.

 How does he propose OCR software can recognize letters if it can only see a 
  single pixel at a time?


Who says OCR software can recognize letters? All that it needs to do is 
execute some algorithm sequentially and blindly against a table of expected 
values. There need not be any recognition of the character as a character 
at all, let alone any seeing. A program could convert a Word document 
into an input file for an OCR program without there ever being any optical 
activity - no camera, no screen caps, no monitor or printer at all. 
Completely in the dark, the bits of the Word file could be converted into 
the bits of an emulated optical scan, and presto, invisible optics.

Searle wasn't wrong. The whole point of the Chinese Room is to point out 
that computation is a disconnected, anesthetic function which is 
accomplished with no need for understanding of larger contexts. 

Craig

 

 Jason






Re: Rationals vs Reals in Comp

2013-04-23 Thread Brian Tenneson
On Tue, Apr 23, 2013 at 1:26 PM, Craig Weinberg whatsons...@gmail.com wrote:



 Searle wasn't wrong. The whole point of the Chinese Room is to point out
 that computation is a disconnected, anesthetic function which is
 accomplished with no need for understanding of larger contexts.



How do we know that what humans do is understand things rather than just
compute things?





Re: Rationals vs Reals in Comp

2013-04-23 Thread Jason Resch
On Tue, Apr 23, 2013 at 3:26 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels
 constantly; you are aware of the full picture. But a computer can't do
 that: the cpu can only know about 32 or 64 pixels at a time, eventually
 multiplied by the number of kernels, and it sees them as single bits, so in
 reality it can't be conscious of a full picture, not even of the full color
 at a single pixel.




 He is making the same mistake Searle did regarding the Chinese room.  He
 is conflating what the CPU can see at one time (analogous to rule follower
 in Chinese room) with what the program can know.  Consider the program of a
 neural network: it can be processed by a sequentially operating CPU
 processing one connection at a time, but the simulated network itself can
 see any arbitrary number of inputs at once.

 How does he propose OCR software can recognize letters if it can only see a
 single pixel at a time?


 Who says OCR software can recognize letters?


The people who buy such software and don't return it.


 All that it needs to do is execute some algorithm sequentially and blindly
 against a table of expected values.


It's a little more sophisticated than that.  There are CAPTCHA-defeating
OCR programs that recognize letters distorted in ways they have never
seen before:
http://www.slideshare.net/rachelshadoan/machine-learning-methods-for-captcha-recognition

You need more than a simple look up table for that capability.


 There need not be any recognition of the character as a character at
 all, let alone any seeing. A program could convert a Word document into
 an input file for an OCR program without there ever being any optical
 activity - no camera, no screen caps, no monitor or printer at all.
 Completely in the dark, the bits of the Word file could be converted into
 the bits of an emulated optical scan, and presto, invisible optics.


Sounds like what goes on when someone dreams in the dark.



 Searle wasn't wrong. The whole point of the Chinese Room is to point out
 that computation is a disconnected, anesthetic function which is
 accomplished with no need for understanding of larger contexts.


It doesn't point out anything, it is an intuition pump (
http://en.wikipedia.org/wiki/Intuition_pump ) that succeeds in swaying
people to an apparently obvious conclusion (if they don't think too deeply
about it).

Jason





Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 4:31:05 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 1:26 PM, Craig Weinberg whats...@gmail.com wrote:



 Searle wasn't wrong. The whole point of the Chinese Room is to point out 
 that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts. 



 How do we know that what humans do is understand things rather than just 
 compute things? 

 

Because we care about what we understand, and we identify with it 
personally.  Understanding is used also to mean compassion. When someone 
demonstrates a lack of human understanding, we say that they are behaving 
robotically, like a machine, etc. Questions like "How do you know you are 
conscious?" or "How do you know that you feel?" are sophistry. How do you 
know that you can ask that question?

Craig






Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 4:46:52 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 3:26 PM, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels 
  constantly; you are aware of the full picture. But a computer can't do 
  that: the cpu can only know about 32 or 64 pixels at a time, eventually 
  multiplied by the number of kernels, and it sees them as single bits, so 
  in reality it can't be conscious of a full picture, not even of the full 
  color at a single pixel.

   


 He is making the same mistake Searle did regarding the Chinese room.  He 
 is conflating what the CPU can see at one time (analogous to rule follower 
 in Chinese room) with what the program can know.  Consider the program of a 
 neural network: it can be processed by a sequentially operating CPU 
 processing one connection at a time, but the simulated network itself can 
 see any arbitrary number of inputs at once.

 How does he propose OCR software can recognize letters if it can only see 
  a single pixel at a time?


 Who says OCR software can recognize letters?


 The people who buy such software and don't return it.
  

 All that it needs to do is execute some algorithm sequentially and 
 blindly against a table of expected values.


 It's a little more sophisticated than that.  There are CAPTCHA-defeating 
  OCR programs that recognize letters distorted in ways they have never 
  seen before:

 http://www.slideshare.net/rachelshadoan/machine-learning-methods-for-captcha-recognition

 You need more than a simple look up table for that capability.


I don't deny that, but you still only need a more sophisticated algorithm, 
you don't need to 'see' anything or understand characters. 
 

  

 There need not be any recognition of the character as a character at 
  all, let alone any seeing. A program could convert a Word document into 
 an input file for an OCR program without there ever being any optical 
 activity - no camera, no screen caps, no monitor or printer at all. 
 Completely in the dark, the bits of the Word file could be converted into 
 the bits of an emulated optical scan, and presto, invisible optics.


 Sounds like what goes on when someone dreams in the dark.


If that were the case then we would not need a video screen, we could 
simply look at the part of the computer where the chip is showing videos to 
itself and put a big magnifying glass on it.
 

  


 Searle wasn't wrong. The whole point of the Chinese Room is to point out 
 that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts.  

  
 It doesn't point out anything, it is an intuition pump ( 
 http://en.wikipedia.org/wiki/Intuition_pump ) that succeeds in swaying 
 people to an apparently obvious conclusion (if they don't think too deeply 
 about it).


Intuition pumps are exactly what are needed to understand consciousness. 
The conclusion is obvious because the alternative is absurd, and the 
absurdity stems from trying to project public physics into the realm of 
private physics. It is a category error and the Chinese Room demonstrates 
that. What makes you so sure that intuition is not the only way to find 
consciousness?

Craig
 


 Jason






Re: Rationals vs Reals in Comp

2013-04-23 Thread Jason Resch
On Tue, Apr 23, 2013 at 5:19 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Tuesday, April 23, 2013 4:46:52 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 3:26 PM, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.com wrote:



 If you think about your own vision, you can see millions of pixels
  constantly; you are aware of the full picture. But a computer can't do
  that: the cpu can only know about 32 or 64 pixels at a time, eventually
  multiplied by the number of kernels, and it sees them as single bits, so
  in reality it can't be conscious of a full picture, not even of the full
  color at a single pixel.




 He is making the same mistake Searle did regarding the Chinese room.
 He is conflating what the CPU can see at one time (analogous to rule
 follower in Chinese room) with what the program can know.  Consider the
 program of a neural network: it can be processed by a sequentially
 operating CPU processing one connection at a time, but the simulated
 network itself can see any arbitrary number of inputs at once.

 How does he propose OCR software can recognize letters if it can only see
  a single pixel at a time?


 Who says OCR software can recognize letters?


 The people who buy such software and don't return it.


 All that it needs to do is execute some algorithm sequentially and
 blindly against a table of expected values.


 It's a little more sophisticated than that.  There are CAPTCHA defeating
 OCR programs that recognize letters distorted in ways they have never
 previously seen before:
 http://www.slideshare.net/rachelshadoan/machine-learning-methods-for-captcha-recognition

 You need more than a simple look up table for that capability.


 I don't deny that, but you still only need a more sophisticated algorithm,
 you don't need to 'see' anything or understand characters.


To recognize a character, most algorithms that do so must consider the
values of multiple pixels at once, which was the whole point of my
bringing up this example.
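The point about a sequential CPU still implementing a many-pixel decision can be sketched in a few lines. This is a toy illustration only, not any real OCR engine: the templates, scoring, and `classify` function are invented here for the sake of the argument.

```python
# Toy sketch (invented for illustration, not a real OCR algorithm):
# a sequential loop scores a 3x3 bitmap against stored letter templates
# one pixel at a time -- yet the final decision depends on all pixels.
TEMPLATES = {
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
}

def classify(bitmap):
    best, best_score = None, -1
    for letter, template in TEMPLATES.items():
        score = 0
        for i in range(9):           # the CPU visits one pixel per step...
            score += bitmap[i] == template[i]
        if score > best_score:       # ...but the match uses all nine
            best, best_score = letter, score
    return best

print(classify((1, 1, 1, 0, 1, 0, 0, 1, 0)))  # -> T
```

The "rule follower" here only ever touches one pixel per instruction, but the program as a whole recognizes the glyph, which is exactly the CPU/program distinction being argued.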





 There need not be any recognition of the character as a character at
 all, let alone any seeing. A program could convert a Word document into
 an input file for an OCR program without there ever being any optical
 activity - no camera, no screen caps, no monitor or printer at all.
 Completely in the dark, the bits of the Word file could be converted into
 the bits of an emulated optical scan, and presto, invisible optics.


 Sounds like what goes on when someone dreams in the dark.


 If that were the case then we would not need a video screen, we could
 simply look at the part of the computer where the chip is showing videos to
 itself and put a big magnifying glass on it.



You could plug the electronics of the computer up to your optic nerve in a
way that let you see the screen without any photons having to enter your
eyes at all.






 Searle wasn't wrong. The whole point of the Chinese Room is to point out
 that computation is a disconnected, anesthetic function which is
 accomplished with no need for understanding of larger contexts.


 It doesn't point out anything, it is an intuition pump (
 http://en.wikipedia.org/wiki/Intuition_pump)
  that succeeds in swaying people to an apparently obvious conclusion (if
 they don't think too deeply about it).


 Intuition pumps are exactly what are needed to understand consciousness.


They can be used and misused.


 The conclusion is obvious because the alternative is absurd, and the
 absurdity stems from trying to project public physics into the realm of
 private physics. It is a category error and the Chinese Room demonstrates
 that.

What makes you so sure that intuition is not the only way to find
 consciousness?


Our intuitions were evolved to suit our survival and propagation, why
should we expect them to be better at locating consciousness than reasoned
thought?

Jason





Re: Rationals vs Reals in Comp

2013-04-23 Thread Brian Tenneson
On Tue, Apr 23, 2013 at 3:13 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Tuesday, April 23, 2013 4:31:05 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 1:26 PM, Craig Weinberg whats...@gmail.comwrote:



 Searle wasn't wrong. The whole point of the Chinese Room is to point out
 that computation is a disconnected, anesthetic function which is
 accomplished with no need for understanding of larger contexts.



 How do we know that what humans do is understand things rather than just
 compute things?



 Because we care about what we understand, and we identify with it
 personally.  "Understanding" is also used to mean compassion. When someone
 demonstrates a lack of human understanding, we say that they are behaving
 robotically, like a machine, etc. Questions like "How do you know you are
 conscious?" or "How do you know that you feel?" are sophistry. How do you
 know that you can ask that question?


Sounds circular: we understand things because we care about what we
understand.  The type of understanding I was referring to was not about
compassion.  Why is it so strange to think that we are stuck in a big
Chinese room, without really understanding anything but being adept at
pushing symbols around?





Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 7:59:26 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 3:13 PM, Craig Weinberg 
  whats...@gmail.com
  wrote:



 On Tuesday, April 23, 2013 4:31:05 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 1:26 PM, Craig Weinberg whats...@gmail.comwrote:



 Searle wasn't wrong. The whole point of the Chinese Room is to point 
 out that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts. 



 How do we know that what humans do is understand things rather than just 
 compute things? 

  

 Because we care about what we understand, and we identify with it 
 personally.  Understanding is used also to mean compassion. When someone 
 demonstrates a lack of human understanding, we say that they are behaving 
 robotically, like a machine, etc. Questions like, How do you know you are 
 conscious?, or How do you know that you feel? are sophistry. How do you 
 know that you can ask that question?


 Sounds circular. we do understand things because we care about what we 
 understand.  The type of understanding I was referring to was not about 
 compassion.  Why is it so strange to think that we are stuck in a big 
 Chinese room, without really understanding anything but being adept at 
 pushing symbols around? 


It's not circular; I was trying to be clear about the difference between
computation and understanding. Computation is variations on the theme of
counting, but counting does not help us understand. A dog might be able to
count how many times we speak a command, and we can train it to respond
to the third instance we speak it, but we can use any command to associate
with the action of sitting or begging. We are not in a Chinese room because 
we know what kinds of things the word 'sit' actually might refer to. We 
know what kind of context it relates to, and we understand what our options 
for interpretation and participation are. The dog has no options. It can 
follow the conditioned response and get the reward, or it can fail to do 
that. It doesn't know what else to do. 

Craig





Re: Rationals vs Reals in Comp

2013-04-23 Thread Brian Tenneson
You keep claiming that we understand this and that or know this and that.
And, yes, saying something along the lines of "we know we understand
because we care about what we understand" *is* circular.  Still doesn't
rule out the possibility that we are in a Chinese room right now,
manipulating symbols without really understanding what's going on but able
to adeptly shuffle the symbols around fast enough to appear functional.  If
that is the case, AI might be able to replicate human behavior if human
behavior is all computation-based.

On Tue, Apr 23, 2013 at 8:25 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Tuesday, April 23, 2013 7:59:26 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 3:13 PM, Craig Weinberg whats...@gmail.comwrote:



 On Tuesday, April 23, 2013 4:31:05 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 1:26 PM, Craig Weinberg whats...@gmail.comwrote:



 Searle wasn't wrong. The whole point of the Chinese Room is to point
 out that computation is a disconnected, anesthetic function which is
 accomplished with no need for understanding of larger contexts.



 How do we know that what humans do is understand things rather than
 just compute things?



 Because we care about what we understand, and we identify with it
 personally.  Understanding is used also to mean compassion. When someone
 demonstrates a lack of human understanding, we say that they are behaving
 robotically, like a machine, etc. Questions like, How do you know you are
 conscious?, or How do you know that you feel? are sophistry. How do you
 know that you can ask that question?


 Sounds circular. we do understand things because we care about what we
 understand.  The type of understanding I was referring to was not about
 compassion.  Why is it so strange to think that we are stuck in a big
 Chinese room, without really understanding anything but being adept at
 pushing symbols around?


 It's not circular, I was trying to be clear about the difference between
 computation and understanding. Computation is variations on the theme of
 counting, but counting does not help us understand. A dog might be able to
 count how many times we speak a command, and we can train them to respond
 to the third instance we speak it, but we can use any command to associate
 with the action of sitting or begging. We are not in a Chinese room because
 we know what kinds of things the word 'sit' actually might refer to. We
 know what kind of context it relates to, and we understand what our options
 for interpretation and participation are. The dog has no options. It can
 follow the conditioned response and get the reward, or it can fail to do
 that. It doesn't know what else to do.

 Craig









Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 7:09:42 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 5:19 PM, Craig Weinberg 
  whats...@gmail.com
  wrote:



 On Tuesday, April 23, 2013 4:46:52 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 3:26 PM, Craig Weinberg whats...@gmail.comwrote:



 On Tuesday, April 23, 2013 3:58:33 PM UTC-4, Jason wrote:




 On Tue, Apr 23, 2013 at 6:53 AM, Craig Weinberg whats...@gmail.comwrote:



 If you think about your own vision, you can see millions of pixels 
 constantly, you are aware of the full picture, but a computer can't do 
 that, the cpu can only know about 32 or 64 pixels, eventually 
 multiplied by 
 number of kernels, but it see them as single bit's so in reality the 
 can't 
 be conscious of a full picture, not even of the full color at a single 
 pixel.

   


 He is making the same mistake Searle did regarding the Chinese room.  
 He is conflating what the CPU can see at one time (analogous to rule 
 follower in Chinese room) with what the program can know.  Consider the 
 program of a neural network: it can be processed by a sequentially 
 operating CPU processing one connection at a time, but the simulated 
 network itself can see any arbitrary number of inputs at once.

 How do he propose OCR software can recognize letters if it can only 
 see a single pixel at a time?


 Who says OCR software can recognize letters?


 The people who buy such software and don't return it.
  

 All that it needs to do is execute some algorithm sequentially and 
 blindly against a table of expected values.


 It's a little more sophisticated than that.  There are CAPTCHA defeating 
 OCR programs that recognize letters distorted in ways they have never 
 previously seen before:
  http://www.slideshare.net/rachelshadoan/machine-learning-methods-for-captcha-recognition

 You need more than a simple look up table for that capability.


 I don't deny that, but you still only need a more sophisticated 
 algorithm, you don't need to 'see' anything or understand characters. 


 To recognize a character, most algorithms that do so must consider the
 values of multiple pixels at once, which was the whole point of my
 bringing up this example.


Multiple values of pixels aren't characters though. No pixels are even 
necessary - which is why I brought up the OCR file emulator. The OCR will 
interpolate just as well from hexadecimal code as it would from adjacent
pixels in a bitmap. This is relevant because if we can see that computation
can only offer us approximations of real numbers, or real circles, then we 
could only expect that it could offer an approximation of sense - which 
doesn't work for sense, because it is that which cannot be approximated or 
generalized. It is 100% proprietary because it is the principle through 
which privacy itself is defined.
 


  

  

 There need not be any recognition of the character as a character at
 all, let alone any seeing. A program could convert a Word document into 
 an input file for an OCR program without there ever being any optical 
 activity - no camera, no screen caps, no monitor or printer at all. 
 Completely in the dark, the bits of the Word file could be converted into 
 the bits of an emulated optical scan, and presto, invisible optics.


 Sounds like what goes on when someone dreams in the dark.


 If that were the case then we would not need a video screen, we could 
 simply look at the part of the computer where the chip is showing videos to 
 itself and put a big magnifying glass on it.
  


 You could plug the electronics of the computer up to your optic nerve in a 
 way that let you see the screen without any photons having to enter your 
 eyes at all.


Not without a driver to convert the meaningless patterns of bits into 
something that your visual cortex expects to see. If you used that same 
driver on a person who had been blind since birth, they would not be able 
to see, and what they would feel would not likely have the same meaning. 
Blindsight tells us that information processing can occur without any 
personal aesthetic experience, so there is no reason at all to give the 
benefit of the doubt to a CPU that its processing is clothed in any sensory 
qualia, let alone some specific human qualia.
 


  

  


 Searle wasn't wrong. The whole point of the Chinese Room is to point 
 out that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts.  

  
 It doesn't point out anything, it is an intuition pump ( 
  http://en.wikipedia.org/wiki/Intuition_pump)
  that succeeds in swaying people to an apparently obvious conclusion (if 
 they don't think too deeply about it).


 Intuition pumps are exactly what are needed to understand consciousness. 


 They can be used and misused.


I agree.
 

  


Re: Rationals vs Reals in Comp

2013-04-23 Thread Craig Weinberg


On Tuesday, April 23, 2013 11:37:14 PM UTC-4, Brian Tenneson wrote:

 You keep claiming that we understand this and that or know this and that.  
 And, yes, saying something along the lines of we know we understand 
 because we care about what we understand *is* circular.  


No, it's not. I'm saying that it is impossible to doubt we understand. It's 
just playing with words. My point about caring is that it makes it clear 
that we intuitively make a distinction between merely being aware of 
something and understanding it.
 

 Still doesn't rule out the possibility that we are in a Chinese room right 
 now, manipulating symbols without really understanding what's going on but 
 able to adeptly shuffle the symbols around fast enough to appear 
 functional. 


Why not? If we were manipulating symbols, why would we care about them?
What you're saying doesn't even make sense. We are having a conversation.
We care about the conversation because we understand it. If I were instead
being dictated text to write in another language, I would not care about the
conversation. Are you claiming that there is no difference between having a
conversation in English and transcribing dictated text in a language you don't
understand?
 

 If that is the case, AI might be able to replicate human behavior if human 
 behavior is all computation-based.


Yes and no. Human behavior can never be generic. The more generic it is, 
the more inhuman it is. AI could imitate a particular person's behavior and 
fool X% of a given audience, but because human behavior is ultimately 
driven by proprietary preferences, there will probably always be some ratio 
of audience size to duration of exposure which will wind up with a positive 
detection of simulation. The threshold may be much lower than it seems. 
Judging from existing simulation, it may not always be possible to 
determine absolutely that something is a simulation, but I would be willing 
to bet that some part of the brain lights up differently when presented 
with a simulated presentation vs a genuine one.

Craig
 


 On Tue, Apr 23, 2013 at 8:25 PM, Craig Weinberg 
  whats...@gmail.com
  wrote:



 On Tuesday, April 23, 2013 7:59:26 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 3:13 PM, Craig Weinberg whats...@gmail.comwrote:



 On Tuesday, April 23, 2013 4:31:05 PM UTC-4, Brian Tenneson wrote:



 On Tue, Apr 23, 2013 at 1:26 PM, Craig Weinberg whats...@gmail.comwrote:



 Searle wasn't wrong. The whole point of the Chinese Room is to point 
 out that computation is a disconnected, anesthetic function which is 
 accomplished with no need for understanding of larger contexts. 



 How do we know that what humans do is understand things rather than 
 just compute things? 

  

 Because we care about what we understand, and we identify with it 
 personally.  Understanding is used also to mean compassion. When someone 
 demonstrates a lack of human understanding, we say that they are behaving 
 robotically, like a machine, etc. Questions like, How do you know you are 
 conscious?, or How do you know that you feel? are sophistry. How do you 
 know that you can ask that question?


 Sounds circular. we do understand things because we care about what we 
 understand.  The type of understanding I was referring to was not about 
 compassion.  Why is it so strange to think that we are stuck in a big 
 Chinese room, without really understanding anything but being adept at 
 pushing symbols around? 


 It's not circular, I was trying to be clear about the difference between 
 computation and understanding. Computation is variations on the theme of 
 counting, but counting does not help us understand. A dog might be able to 
 count how many times we speak a command, and we can train them to respond 
 to the third instance we speak it, but we can use any command to associate 
 with the action of sitting or begging. We are not in a Chinese room because 
 we know what kinds of things the word 'sit' actually might refer to. We 
 know what kind of context it relates to, and we understand what our options 
 for interpretation and participation are. The dog has no options. It can 
 follow the conditioned response and get the reward, or it can fail to do 
 that. It doesn't know what else to do. 

 Craig

  
  





Re: Rationals vs Reals in Comp

2013-04-22 Thread Telmo Menezes


On 22 Apr 2013, at 19:14, Craig Weinberg whatsons...@gmail.com wrote:

 A quote from someone on Facebook. Any comments?
 
 Computers can only do computations for rational numbers, not for real 
 numbers. Every number in a computer is represented as rational. No computer 
 can represent pi or any other real number... So even when consciousness can 
 be explained by computations, no computer can actually simulate it.

Of course it can, the same way it represents the letter A, as some sequence of 
bits. And it can perform symbolic computations with it. It can  calculate pi/2 
+ pi/2 = pi and so on.
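Telmo's point can be made concrete with a toy symbolic type. This is an invented minimal sketch, not a real computer algebra system: pi is stored as a symbol with an exact rational coefficient, so pi/2 + pi/2 = pi holds with no rounding anywhere.

```python
# Minimal sketch (invented representation, not a real CAS): a multiple
# of pi is stored exactly as a rational coefficient, the same way 'A'
# is just an agreed-upon bit pattern.
from fractions import Fraction

class PiMultiple:
    def __init__(self, coeff):
        self.coeff = Fraction(coeff)     # exact rational multiple of pi

    def __add__(self, other):
        return PiMultiple(self.coeff + other.coeff)

    def __truediv__(self, n):
        return PiMultiple(self.coeff / n)

    def __eq__(self, other):
        return self.coeff == other.coeff

pi = PiMultiple(1)
print(pi / 2 + pi / 2 == pi)  # -> True, exactly, with no rounding
```

Real systems (Mathematica, sympy, etc.) generalize this idea to arbitrary symbolic expressions.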







Re: Rationals vs Reals in Comp

2013-04-22 Thread smitra

See here:

http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf

Saibal

Quoting Craig Weinberg whatsons...@gmail.com:


A quote from someone on Facebook. Any comments?

Computers can only do computations for rational numbers, not for real

numbers. Every number in a computer is represented as rational. No computer
can represent pi or any other real number... So even when consciousness can
be explained by computations, no computer can actually simulate it.












Re: Rationals vs Reals in Comp

2013-04-22 Thread Craig Weinberg


On Monday, April 22, 2013 2:06:29 PM UTC-4, telmo_menezes wrote:



 On 22 Apr 2013, at 19:14, Craig Weinberg whats...@gmail.com wrote:

 A quote from someone on Facebook. Any comments?

 Computers can only do computations for rational numbers, not for real 
 numbers. Every number in a computer is represented as rational. No computer 
 can represent pi or any other real number... So even when consciousness can 
 be explained by computations, no computer can actually simulate it.


 Of course it can, the same way it represents the letter A, as some 
 sequence of bits. And it can perform symbolic computations with it. It can 
  calculate pi/2 + pi/2 = pi and so on.


It's not representing pi with A though; it's representing a digital
sequence which is arbitrarily truncated or rounded off at some point. It is
not pi, but pi-ish.
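The truncation is easy to exhibit with Python's standard library (a small demonstration, not part of the original exchange): the 64-bit double behind `math.pi` is itself an exact rational number, just not pi.

```python
# "pi-ish" in concrete terms: a 64-bit float keeps only ~15-17
# significant digits of pi, so the stored value is a nearby rational.
import math
from fractions import Fraction

approx = Fraction(math.pi)        # the exact rational that float pi stores
print(approx)                     # -> 884279719003555/281474976710656
print(float(approx) == math.pi)   # -> True: the float *is* this rational
```

Note the denominator is 2**48: every finite float is some integer divided by a power of two, which is why it can only ever be pi-ish.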
 










Re: Rationals vs Reals in Comp

2013-04-22 Thread Craig Weinberg


On Monday, April 22, 2013 2:28:46 PM UTC-4, smi...@zonnet.nl wrote:

 See here: 

 http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/real.pdf 


Ah yes, we come full circle...

Develop math to help understand reality -> realize that math is different
from reality -> build instruments using math which prove that math can only
see the mathematical aspects of reality -> decide that reality can't be real
and make plans to replace it with math.

Craig


 Saibal 

 Quoting Craig Weinberg whats...@gmail.com:

  A quote from someone on Facebook. Any comments? 
  
  Computers can only do computations for rational numbers, not for real 
  numbers. Every number in a computer is represented as rational. No 
 computer 
  can represent pi or any other real number... So even when consciousness 
 can 
  be explained by computations, no computer can actually simulate it. 
  








Re: Rationals vs Reals in Comp

2013-04-22 Thread Russell Standish
On Mon, Apr 22, 2013 at 08:06:29PM +0200, Telmo Menezes wrote:
 
 
 On 22 Apr 2013, at 19:14, Craig Weinberg whatsons...@gmail.com wrote:
 
  A quote from someone on Facebook. Any comments?
  
  Computers can only do computations for rational numbers, not for real 
  numbers. Every number in a computer is represented as rational. No computer 
  can represent pi or any other real number... So even when consciousness can 
  be explained by computations, no computer can actually simulate it.
 
 Of course it can, the same way it represents the letter A, as some sequence 
 of bits. And it can perform symbolic computations with it. It can  calculate 
 pi/2 + pi/2 = pi and so on.
 
 

To expand a bit on Telmo's comment, the computer represents pi, e,
sqrt(2) and so on as a set of properties, or algorithms. Computers can
happily compute exactly with any computable number (which are of
measure zero in the reals). They cannot represent nondescribable
numbers, and cannot compute with noncomputable numbers (such as
Chaitin's Omega).

Also, computers do not compute with rational numbers, they compute
with integers (often of fixed word size, but that restriction can
easily be lifted, at the cost of performance). Rational numbers can
obviously be represented as a pair of integers. What are called real
numbers in some computer languages, or more accurately float numbers
in other computer languages, are actually integers that have been
mapped in a non-uniform way onto subsets of the real number
line. Their properties are such that they efficiently generate
adequate approximations to continuous mathematical models. There is a
whole branch of mathematics devoted to determining what "adequate"
means in this context.
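Both of Russell's observations can be demonstrated with Python's standard library (an added illustration, not part of the original post): rationals are exactly a pair of integers, so arithmetic on them is exact, while a float is an integer pair (mantissa, exponent) mapped non-uniformly onto the real line.

```python
# (1) Rationals as integer pairs: exact arithmetic, no rounding.
from fractions import Fraction
import math

print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # -> True
print(0.1 + 0.2 == 0.3)                                      # -> False

# (2) Every finite float is mantissa * 2**exponent for integers m, e.
m, e = math.frexp(0.1)            # 0.1 == m * 2**e, with 0.5 <= m < 1
print(0.1 == m * 2 ** e)          # -> True
print((0.1).as_integer_ratio())   # the underlying exact integer pair
```

The non-uniform mapping is visible in the exponent: the same number of mantissa values is spread over [0.5, 1) as over [2**99, 2**100), so absolute spacing between adjacent floats grows with magnitude.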

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

