Re: Minds, Machines and Gödel

2013-12-26 Thread Craig Weinberg


On Wednesday, December 25, 2013 2:09:07 PM UTC-5, Bruno Marchal wrote:


 On 25 Dec 2013, at 16:18, Craig Weinberg wrote:



 On Wednesday, December 25, 2013 5:07:22 AM UTC-5, Bruno Marchal wrote:


 On 24 Dec 2013, at 17:31, Craig Weinberg wrote:


 It's straightforward, I think. What you are saying is that this semantic 
 trick prevents us from seeing that the truth does not agree with the 
 theory.


 ? (Sorry, but I still fail to see the connection.) I am just saying that 
 the discovery of the many non-computable attributes of machines invalidates 
 any reasoning against comp that invokes non-computable aspects of the human mind.


 What I'm saying is that the reference to non-computable phenomena means 
 that they are not likely to be attributes of machines. 


 Yes, that is what you were saying, and my point is that this is not valid.

 Most of a machine's or a number's attributes are not computable. 



Then how do you know that they are attributes of the number? If I count 
that I have five fingers, I don't assume that the fingers are attributes of 
the number five.

 


 In fact, it is the price of the consistency of Church's thesis, as I have 
 often explained in detail. If interested, I could show it to you.


The consistency may come at the expense of reality.
 





 Comp has no right to ever mention non computable attributes of anything 
 and still be comp.



 ?
 Comp is "I am a machine" (3-I). This does not entail that everything is 
 computable.


Then how do you know that what you are has anything to do with machines? If 
some things are not computable, what are they, and why would they have 
anything to do with computation?

 

 Worse, the price of universality entails that many things *about* machines 
 will necessarily be non-computable.
 A large part of computability theory is really incomputability theory: the 
 study of the complex hierarchies of non-computability and non-solvability 
 in arithmetic and computer science.


Have you considered that they might be non-computable and non-solvable because 
they aren't directly related to mathematics?
 




 It would have to explain how non-computable phenomena are derived from 
 computation and what that can even mean. 


 I can do that. I can prove that if a universal number exists, then non-computable 
 relations between numbers exist.
 Löbian numbers can actually already prove that about themselves. 


How do you know that the numbers aren't just the computable relations 
between experiences instead?
 




 For comp to be consistent, it can only ask 'what do you mean by 
 non-computable?'.


 For "finite" to be consistent, it can only ask "what do you mean by 
 infinite?" Well, OK. But we can do that.


We can ask, but if we say that something is infinite, then our theory of 
finite can't be complete.
 


 Even with the intuitive definition, we can do that. 
 A function (from N to N) is computable iff you can explain, in a finite 
 number of words, in a non-ambiguous grammar, to a reasonably dumb fellow, 
 how to compute it, in a finite time, for each of its finite arguments.

 Now, a function is not computable if you cannot do that, even assuming 
 you are immortal.

 Church's thesis says that the number LAMBDA is a universal number. This 
 simplifies non-computability. A function is not computable if you cannot 
 program it in LAMBDA. The universal number LAMBDA cannot simulate that 
 function.
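
[Editorial aside, not from the original posts: a minimal sketch in Python, standing in for the universal system, of what the intuitive definition above amounts to. The function name is ours.]

    # A computable function from N to N: a finite, unambiguous recipe that
    # halts on every argument.
    def double(n: int) -> int:
        return n + n

    # Church's thesis, informally: anything computable in the above intuitive
    # sense can be programmed in one fixed universal system (lambda calculus,
    # Turing machines, Python, ...). "Not computable" then means that no
    # program in that system computes the function.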



If LAMBDA is all that you have, how do you know that what it can't program 
is a number at all?
 





 If I had a theory of autovehicularism in which cars drive themselves, I 
 can't then claim that these soft things that sit behind the wheel inside 
 the car are non-vehicular attributes of cars. If there can be 
 non-vehicular attributes of cars then any autovehicular theory of cars is 
 false.
  







  

 It also means that most propositions *about* machines cannot be found in 
 a mechanical way.
 The simplest examples are that no machine can decide whether an arbitrary 
 machine will stop or not, and no machine can decide whether two arbitrary 
 machines compute the same function, etc.
 If there are no complete theories for machines and/or numbers, it becomes 
 harder to defend non-comp, etc.
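
[Editorial aside, not from the original posts: the classic diagonal argument behind the first example, sketched in Python. 'halts' stands for any candidate total decision procedure for halting; the point is that every candidate must be wrong somewhere.]

    def paradox(halts, program_source: str) -> None:
        # halts(p, x) is a candidate procedure claiming to decide whether
        # program p halts on input x.
        if halts(program_source, program_source):
            while True:       # candidate says "halts": loop forever instead
                pass
        else:
            return            # candidate says "loops": halt immediately

    # Bake any candidate 'halts' into paradox and feed the resulting program
    # its own source: the candidate's answer is then wrong, so no correct,
    # always-terminating halting decider can exist.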



 How can computationalism support the idea of there being a 
 non-mechanical way though? What other way is there?


 Computation with an oracle for some non-computable arithmetical truth, or just 
 some non-computable arithmetical truth. Arithmetic is full of them.



 You are telling me that arithmetic is full of non-arithmetic, 


 No. Full of non-computable relations between numbers.


 If they are not computable, how do you know they are part of arithmetic 
 rather than physics or sense?



 Because I work in arithmetic. I use Gödel's arithmetization of 
 meta-arithmetic. In AUDA, I never leave arithmetic.


Then how do you know that you aren't suffering from the fallacy of the 
instrument?
 


 Most of arithmetic is not computable. Truth escapes proof, and many 
 computations do not stop, 

Re: Minds, Machines and Gödel

2013-12-25 Thread LizR
On 25 December 2013 16:51, Craig Weinberg whatsons...@gmail.com wrote:



 On Saturday, December 21, 2013 5:28:29 PM UTC-5, Edgar L. Owen wrote:

 Craig,

 Sorry, but I don't really understand what you are trying to get at. Your
 terminology is not giving me any clarity of what you are really trying to
 say...

 Edgar


 The condensed version of what I'm trying to say is that computation is
 less than real, reality combines experience and computation, and experience
 is greater than reality and does not depend on computation.

 Computation is less than real?  - how so?

And what is experience, in your view?



Re: Minds, Machines and Gödel

2013-12-25 Thread Bruno Marchal


On 21 Dec 2013, at 21:52, Edgar Owen wrote:


Liz,

No, that doesn't make Reality subject to the halting problem. The  
halting problem is when a computer program is trying to reach some  
independently postulated result and may or may not be able to reach  
it.


Reality doesn't have any problem like this. It just computes the  
logical results of the evolution of the current information state of  
the universe. There are no independently postulated states that  
aren't directly computed by reality which reality then attempts to  
reach (prove).


This contradicts both comp and QM.

Bruno




Edgar



On Dec 21, 2013, at 3:26 PM, LizR wrote:

Reality is analogous to a running software program. Godel's Theorem  
does not apply. A human could speculate as to whether any  
particular state of Reality could ever arise computationally and it  
might be impossible to determine that, but again that has nothing  
to do with the actual operation of Reality,since it is only a  
particular internal mental model of that reality.


Wouldn't that make reality susceptible to the halting problem?

...hello, is anybody there? Why have all the stars gone out?




http://iridia.ulb.ac.be/~marchal/





Re: Minds, Machines and Gödel

2013-12-25 Thread Bruno Marchal


On 25 Dec 2013, at 16:18, Craig Weinberg wrote:




On Wednesday, December 25, 2013 5:07:22 AM UTC-5, Bruno Marchal wrote:

On 24 Dec 2013, at 17:31, Craig Weinberg wrote:



It's straightforward, I think. What you are saying is that this  
semantic trick prevents us from seeing that the truth does not  
agree with the theory.


? (Sorry, but I still fail to see the connection.) I am just saying  
that the discovery of the many non-computable attributes of machines  
invalidates any reasoning against comp that invokes non-computable  
aspects of the human mind.


What I'm saying is that the reference to non-computable phenomena  
means that they are not likely to be attributes of machines.


Yes, that is what you were saying, and my point is that this is not  
valid.


Most of a machine's or a number's attributes are not computable.

In fact, it is the price of the consistency of Church's thesis, as I  
have often explained in detail. If interested, I could show it to you.





Comp has no right to ever mention non computable attributes of  
anything and still be comp.



?
Comp is "I am a machine" (3-I). This does not entail that everything  
is computable. Worse, the price of universality entails that many  
things *about* machines will necessarily be non-computable.
A large part of computability theory is really incomputability theory:  
the study of the complex hierarchies of non-computability and  
non-solvability in arithmetic and computer science.




It would have to explain how non-computable phenomena are derived  
from computation and what that can even mean.


I can do that. I can prove that if a universal number exists, then  
non-computable relations between numbers exist.

Löbian numbers can actually already prove that about themselves.



For comp to be consistent, it can only ask 'what do you mean by  
non-computable?'.


For "finite" to be consistent, it can only ask "what do you mean by  
infinite?" Well, OK. But we can do that.


Even with the intuitive definition, we can do that.
A function (from N to N) is computable iff you can explain, in a finite  
number of words, in a non-ambiguous grammar, to a reasonably dumb  
fellow, how to compute it, in a finite time, for each of its finite  
arguments.


Now, a function is not computable if you cannot do that, even  
assuming you are immortal.


Church's thesis says that the number LAMBDA is a universal number. This  
simplifies non-computability. A function is not computable if you  
cannot program it in LAMBDA. The universal number LAMBDA cannot  
simulate that function.








If I had a theory of autovehicularism in which cars drive  
themselves, I can't then claim that these soft things that sit  
behind the wheel inside the car are non-vehicular attributes of  
cars. If there can be non-vehicular attributes of cars then any  
autovehicular theory of cars is false.











It also means that most propositions *about* machines cannot be  
found in a mechanical way.
The simplest examples are that no machine can decide whether an  
arbitrary machine will stop or not, and no machine can decide whether  
two arbitrary machines compute the same function, etc.
If there are no complete theories for machines and/or numbers, it  
becomes harder to defend non-comp, etc.




How can computationalism support the idea of there being a non- 
mechanical way though? What other way is there?


Computation with an oracle for some non-computable arithmetical truth, or  
just some non-computable arithmetical truth. Arithmetic is full of  
them.



You are telling me that arithmetic is full of non-arithmetic,


No. Full of non-computable relations between numbers.

If they are not computable, how do you know they are part of  
arithmetic rather than physics or sense?



Because I work in arithmetic. I use Gödel's arithmetization of meta- 
arithmetic. In AUDA, I never leave arithmetic.


Most of arithmetic is not computable. Truth escapes proof, and many  
computations do not stop, without us being able to prove this in  
advance in any specific way.
I'm afraid you are unaware of computer science. I told you to be  
cautious with machines and numbers, because since Gödel we know that  
we know next to nothing about them.









so therefore your computationalism - the idea that consciousness  
and physics develop from unconscious computation, includes  
(unspecified, unknowable) non-computationalism too.


I don't see what you mean by "includes non-computationalism".
I can try to make sense of it. Yes, the arithmetical reality is 99.999...%  
non-computable. But computationalism is not the thesis that  
everything is computable. It is the thesis that the working of my  
brain can be imitated closely enough by a digital machine that my  
first-person experience will not see any difference.


If only 0.000...1% of arithmetic truth is computable, why would a  
digital computation be enough to imitate anything other than another  
digital computation?


It can't, indeed. Computation and imitation or simulation, or  

Re: Minds, Machines and Gödel

2013-12-24 Thread Edgar Owen
Liz,

No, that doesn't make Reality subject to the halting problem. The halting 
problem is when a computer program is trying to reach some independently 
postulated result and may or may not be able to reach it. 

Reality doesn't have any problem like this. It just computes the logical 
results of the evolution of the current information state of the universe. 
There are no independently postulated states that aren't directly computed by 
reality which reality then attempts to reach (prove).

Edgar



On Dec 21, 2013, at 3:26 PM, LizR wrote:

 Reality is analogous to a running software program. Godel's Theorem does not 
 apply. A human could speculate as to whether any particular state of Reality 
 could ever arise computationally and it might be impossible to determine 
 that, but again that has nothing to do with the actual operation of 
 Reality,since it is only a particular internal mental model of that reality.
 
 Wouldn't that make reality susceptible to the halting problem?
 
 ...hello, is anybody there? Why have all the stars gone out?
 
 


Re: Minds, Machines and Gödel

2013-12-24 Thread LizR
I have probably missed this - I don't have time to engage as much as I
would like with this list (or any others) - but where or how are these
computations taking place?



Re: Minds, Machines and Gödel

2013-12-22 Thread Bruno Marchal


On 21 Dec 2013, at 17:32, Craig Weinberg wrote:




On Thursday, December 19, 2013 10:13:25 AM UTC-5, Bruno Marchal wrote:

On 19 Dec 2013, at 15:07, Craig Weinberg wrote:




On Thursday, December 19, 2013 5:23:20 AM UTC-5, Bruno Marchal wrote:
Hello Craig,


That is the very well known attempt by Lucas to use Gödel's theorem  
to refute mechanism. He was not the only one.


Most people thinking about this have found the argument, and  
usually found the mistakes in it.


To my knowledge Emil Post was the first both to develop that  
argument and to understand that not only does the argument not  
work, but the machines can already refute it, due  
to the mechanizability of the diagonalization, made very general by  
Church's thesis.


In fact either the argument is presented in an effective way, and  
then machine can refute it precisely, or the argument is based on  
some fuzziness, and then it proves nothing.


If 'proof' is an inappropriate concept for first person physics,  
then I would expect that fuzziness would be the only symptom we can  
expect. The criticism of Lucas seems to not really understand the  
spirit of Gödel's theorem, but only focus on the letter of its  
application...which in the case of Gödel's theorem is precisely the  
opposite of its meaning.


The link that Stathis provided demonstrates that Gödel himself  
understood this:


So the following disjunctive conclusion is inevitable: Either  
mathematics is incompletable in this sense, that its evident axioms  
can never be comprised in a finite rule, that is to say, the human  
mind (even within the realm of pure mathematics) infinitely  
surpasses the powers of any finite machine, or else there exist  
absolutely unsolvable diophantine problems of the type  
specified . . . (Gödel 1995: 310).


To me it's clear that Gödel means that incompleteness reveals that  
mathematics is not completable


OK. Even arithmetic.



in the sense that it is not enough to contain the reality of human  
experience,


?

He says the 'human mind', but I say human experience.


Mathematics is not enough for the mind and experience of ... the  
machines.









not that it proves that mathematics or arithmetic truth is  
omniscient and omnipotent beyond our wildest dreams.


Arithmetical truth is by definition arithmetically omniscient, but  
certainly not omniscient in general. Indeed, to get the whole  
arithmetical Noùs, arithmetical truth is still too weak. All  
that Gödel showed is that arithmetical truth (or any richer notion  
of truth, like set-theoretical, group-theoretical, etc.) cannot be  
enumerated by machines or effective sound theories.


The issue though is whether that non-enumerability is a symptom of  
the inadequacy of Noùs to contain Psyche, or a symptom of Noùs being  
so undefinable that it can easily contain Psyche as well as Physics.


The Noùs is the intelligible reality. It is not computable, but it is  
definable. Unlike truth and knowledge or first person experience.




I think that Gödel interpreted his own work in the former and you  
are interpreting it in the latter - doesn't mean you're wrong, but I  
agree with him if he thought the former, because Psyche doesn't make  
sense as a part of Noùs.


That is too ambiguous. The psyche is not really a part of the  
Noùs, which is still purely 3p.




I see Psyche and Physics as the personal and impersonal  
presentations of sense,


Machines think the same, with sense replaced by arithmetical truth.  
Except that the machine has to be confused, and for her that truth is  
beyond definability, like sense.




and Noùs is the re-presentation of physics (meaning physics is re- 
personalized as abstract digital concepts).


The Noùs has nothing to do with physics a priori. It is the world of  
the eternal platonic ideas, or God's ideas.


keep in mind the 8 hypostases:

-  p  (truth, not definable in arithmetic, but emulable in some  
trivial sense)
-  Bp (provable, believable, assumable, communicable). It splits into  
a communicable and a non-communicable part (some facts about  
communication are not communicable)
-  Bp & p (the soul, the knower, ... the psyche is here). It does not  
split.


-  Bp & Dt (the intelligible matter, ... matter and physics are here).  
It splits in two.
-  Bp & Dt & p (the sensible matter; the physical experiences (pain,  
pleasure, qualia) are here). It splits also in two parts.
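
[Editorial aside, not in the original post, restating the count: B is Gödel's provability box for the machine, Dp abbreviates ~B~p (consistency), and t is some fixed true sentence. Of the five modes p, Bp, Bp & p, Bp & Dt, Bp & Dt & p, the first and third do not split, while the other three each split into a G part (what the machine proves about itself) and a G* part (what is true about the machine), giving 1 + 2 + 1 + 2 + 2 = 8 hypostases.]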




Physics is the commercialization of sense. Psyche is residential  
sense. Noùs is the hotel...commercialized residence.









An excellent book has been written on that subject by Judson Webb  
(mechanism, mentalism and metamathematics, reference in the  
bibliographies in my URL, or in any of my papers).


In "Conscience and Mechanism", I show all the details of why the  
argument of Lucas is already refuted by Löbian machines, and Lucas'  
main error is reduced to a confusion between Bp and Bp & p. It is  
an implicit assumption, in the 

Re: Minds, Machines and Gödel

2013-12-22 Thread Bruno Marchal


On 21 Dec 2013, at 19:06, Edgar Owen wrote:


Craig,

Godel's Theorem applies only to human mathematical systems.


Provably so, assuming that humans are arithmetically sound machines (which  
is a rather strong assumption).




It doesn't apply to the logico-mathematical system of reality, of  
which the computational systems of biological organisms including  
humans are a part.


I agree.




Why? The answer is straightforward. Because Reality's logico- 
mathematical system is entirely computational in the sense that  
every state at every present moment is directly computed from the  
prior state.


Only in the third-person perspective. With computationalism, the  
accessible realities are not computations, nor the result of one  
computation; they are the result of infinitely many computations  
mixed with the first-person indeterminacies.





Godel's Theorem does not apply to this.


Right. Gödel's theorem applies to finite or enumerable machines or  
theories, not to their models, even in arithmetic.




What Godel's Theorem says is that given some mathematical system it  
is possible to formulate a correct statement


It is correct if we already know that the theory is correct, which is  
doubtful for rich theories like us, in case of comp.




which is not computable from the axioms. But Reality doesn't work  
that way. It simply computes the next state of itself which is  
always possible.


Reality does not compute. That claim is the digital physics thesis, which  
makes no sense. Indeed, as often explained here:
if digital physics is correct then comp is correct, BUT if comp is  
correct then digital physics is incorrect. Thus digital physics  
entails the negation of digital physics, and this makes digital  
physics incorrect (for a TOE) in all cases (with comp or with non-comp).
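
[Editorial aside, not in the original post: the propositional step spelled out. Writing DP for "digital physics is correct" and C for "comp is correct", the premises are DP -> C and C -> ~DP; chaining them gives DP -> ~DP, which can only hold if DP is false, hence ~DP.]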






The implication is that the logico-mathematical system of reality IS  
AND IN FACT MUST NECESSARILY BE logically consistent and logically  
complete in every detail. If it wasn't Reality would tear itself  
apart at the inconsistencies and pause at the incompletenesses and  
could not exist. But Reality does exist.


OK, but we don't *know* that. We hope that. We know only that we are  
conscious here-and-now. We don't *know* if there are planets and  
galaxies. We bet on that. Those are theoretical assumptions.






Reality is analogous to a running software program.


Read the UDA. Apparent realities have to be much bigger than anything  
we could emulate on a computer. That is already the case for  
arithmetic itself. You might confuse proof and computation.




Godel's Theorem does not apply. A human could speculate as to  
whether any particular state of Reality could ever arise  
computationally and it might be impossible to determine that, but  
again that has nothing to do with the actual operation of  
Reality,since it is only a particular internal mental model of that  
reality.


The universal dovetailer gets all states of mind, but no states of  
physical reality at all; those need the non-computable First Person  
Indeterminacy on all (relative) computations. Then the theological  
(true) reality is bigger still.


Bruno

http://iridia.ulb.ac.be/~marchal/





Re: Minds, Machines and Gödel

2013-12-22 Thread Craig Weinberg


On Sunday, December 22, 2013 7:21:05 AM UTC-5, Bruno Marchal wrote:


 On 21 Dec 2013, at 17:32, Craig Weinberg wrote: 

  
  
  On Thursday, December 19, 2013 10:13:25 AM UTC-5, Bruno Marchal wrote: 
  
  On 19 Dec 2013, at 15:07, Craig Weinberg wrote: 
  
  
  
  On Thursday, December 19, 2013 5:23:20 AM UTC-5, Bruno Marchal wrote: 
  Hello Craig, 
  
  
  That is the very well known attempt by Lucas to use Gödel's theorem   
  to refute mechanism. He was not the only one. 
  
  Most people thinking about this have found the argument, and   
  usually found the mistakes in it. 
  
  To my knowledge Emil Post is the first to develop both that   
  argument, and to understand that not only that argument does not   
  work, but that the machines can already refute that argument, due   
  to the mechanizability of the diagonalization, made very general by   
  Church thesis. 
  
  In fact either the argument is presented in an effective way, and   
  then machine can refute it precisely, or the argument is based on   
  some fuzziness, and then it proves nothing. 
  
  If 'proof' is an inappropriate concept for first person physics,   
  then I would expect that fuzziness would be the only symptom we can   
  expect. The criticism of Lucas seems to not really understand the   
  spirit of Gödel's theorem, but only focus on the letter of its   
  application...which in the case of Gödel's theorem is precisely the   
  opposite of its meaning. 
  
  The link that Stathis provided demonstrates that Gödel himself   
  understood this: 
  
  So the following disjunctive conclusion is inevitable: Either   
  mathematics is incompletable in this sense, that its evident axioms   
  can never be comprised in a finite rule, that is to say, the human   
  mind (even within the realm of pure mathematics) infinitely   
  surpasses the powers of any finite machine, or else there exist   
  absolutely unsolvable diophantine problems of the type   
  specified . . . (Gödel 1995: 310). 
  
  To me it's clear that Gödel means that incompleteness reveals that   
  mathematics is not completable 
  
  OK. Even arithmetic. 
  
  
  
  in the sense that it is not enough to contain the reality of human   
  experience, 
  
  ? 
  
  He says the 'human mind', but I say human experience. 

 Mathematics is not enough for the mind and experience of ... the   
 machines. 



I agree, of course, but how is that view compatible with computationalism?
 




  
  
  
  
  not that it proves that mathematics or arithmetic truth is   
  omniscient and omnipotent beyond our wildest dreams. 
  
  Arithmetical truth is by definition arithmetically omniscient, but   
  certainly not omniscient in general. Indeed to get the whole   
  arithmetical Noùs, Arithmetical truth is still too much weak. All   
  what Gödel showed is that arithmetical truth (or any richer notion   
  of truth, like set theoretical, group theoretical, etc.) cannot be   
  enumerated by machines or effective sound theories. 
  
  The issue though is whether that non-enumerablity is a symptom of   
  the inadequacy of Noùs to contain Psyche, or a symptom of Noùs being   
  so undefinable that it can easily contain Psyche as well as Physics. 

 The Noùs is the intelligible reality. It is not computable, but it is   
 definable. Unlike truth and knowledge or first person experience. 


 The Noùs is intelligible, but why is it necessarily reality?




  I think that Gödel interpreted his own work in the former and you   
  are interpreting it in the latter - doesn't mean you're wrong, but I   
  agree with him if he thought the former, because Psyche doesn't make   
  sense as a part of Noùs. 

 That is too much ambiguous. The psyche is not really a part of the   
 Noùs, which is still purely 3p. 


Cool, we agree.
 




  I see Psyche and Physics as the personal and impersonal   
  presentations of sense, 

 Machine think the same, with sense replaced by arithmetical truth.   
 Except that the machine has to be confused and for her that truth is   
 beyond definability, like sense. 


I don't think that Psyche can be strongly related to arithmetic truth. 
There are thematic associations, but I would say that they are by way of 
reflected Noùs. First person arithmetic truth is intuition of Noùs, and 
Noùs is alienated sense. The idea that confusion of truth would be 
necessary to transform quantitative rules into qualitative experiences 
seems to be a shaky premise at best. It smells like hasty reverse 
engineering to plug a major hole in comp. It creates an unacknowledged 
dualism between arithmetic truth/definitions and colorful/magic confusion 
of definition.
 




  and Noùs is the re-presentation of physics (meaning physics is re- 
  personalized as abstract digital concepts). 

 The Noùs has nothing to do with physics a priori. It is the world of   
 the eternal platonic ideas, or God's ideas. 


I understand, yes. I place it here on the upper left (West) side: 


Re: Minds, Machines and Gödel

2013-12-22 Thread Bruno Marchal


On 22 Dec 2013, at 14:56, Craig Weinberg wrote:




On Sunday, December 22, 2013 7:21:05 AM UTC-5, Bruno Marchal wrote:


Mathematics is not enough for the mind and experience of ... the
machines.


i agree, of course, but how is that view compatible with  
computationalism?



It prevents the idea that mathematics is not enough to circumscribe  
the human mind from being applied against mechanism.
It also means that most propositions *about* machines cannot be found  
in a mechanical way.
The simplest examples are that no machine can decide whether an arbitrary  
machine will stop or not, and no machine can decide whether two  
arbitrary machines compute the same function, etc.
If there are no complete theories for machines and/or numbers, it becomes  
harder to defend non-comp, etc.









 The issue though is whether that non-enumerablity is a symptom of
 the inadequacy of Noùs to contain Psyche, or a symptom of Noùs  
being

 so undefinable that it can easily contain Psyche as well as Physics.

The Noùs is the intelligible reality. It is not computable, but it is
definable. Unlike truth and knowledge or first person experience.

 The Noùs is intelligible, but why is it necessarily reality?


It is the world of ideas, and with comp it is the world of the universal  
numbers' ideas, which rise up as a consequence of addition and  
multiplication. It splits into G and G* (but you need to study a bit  
of math for this).










 I think that Gödel interpreted his own work in the former and you
 are interpreting it in the latter - doesn't mean you're wrong, but I
 agree with him if he thought the former, because Psyche doesn't make
 sense as a part of Noùs.

That is too much ambiguous. The psyche is not really a part of the
Noùs, which is still purely 3p.

Cool, we agree.




 I see Psyche and Physics as the personal and impersonal
 presentations of sense,

Machine think the same, with sense replaced by arithmetical truth.
Except that the machine has to be confused and for her that truth is
beyond definability, like sense.

I don't think that Psyche can be strongly related to arithmetic  
truth. There are thematic associations, but I would say that they  
are by way of reflected Noùs. First person arithmetic truth is  
intuition of Noùs, and Noùs is alienated sense.


No problem. The intuition of truth comes from the fact that sometimes  
our beliefs are true. The Noùs is alienating us, as anything which is  
not personal consciousness. The Noùs is a gate to the others.




The idea that confusion of truth would be necessary to transform  
quantitative rules into qualitative experiences seems to be a shaky  
premise at best. It smells like hasty reverse engineering to plug a  
major hole in comp. It creates an unacknowledged dualism between  
arithmetic truth/definitions and colorful/magic confusion of  
definition.


The idea comes from Plato, and notably the Theaetetus' idea of defining  
knowledge as true belief. It works well. Socrates refuted the idea, but  
Gödel's incompleteness refutes Socrates' refutation of Theaetetus.
Also, it is the only definition of knowledge which is coherent with  
the dream metaphysical argument, and thus with comp. This would take  
long to develop. All this is fully developed in "Conscience et  
Mécanisme".










 and Noùs is the re-presentation of physics (meaning physics is re-
 personalized as abstract digital concepts).

The Noùs has nothing to do with physics a priori. It is the world of
the eternal platonic ideas, or God's ideas.

I understand, yes. I place it here on the upper left (West) side:




keep in mind the 8 hypostases:

-  p  (truth, not definable in arithmetic, but emulable in some
trivial sense)

Instead of p being truth,


p is just a symbolic way to represent truth. p alone means "p is  
true", when asserted by a machine which is supposed to be correct by  
definition and choice.






I see truth as a narrow intellectual sensitivity, not primordial.


Truth encompasses everything. It is provably beyond anything  
intellectual. In the Plotinus/arithmetic lexicon: arithmetical truth  
plays the role of the non-nameable God of the machine.





The primordial capacity to experience, from which comparisons and  
discernments can self-diverge, *must* be more primitive than the  
notion of right and wrong or is-ness and may-not-be-ness. Before  
anything can 'be', there must be the potential for a difference  
between being and non-being to be experienced. That difference is a  
quality, not a logic. The logic of the discernment I think must be  
second order - the primary quality of discernment is a sense of  
obstruction, a fork in the road which interrupts peace/solitude.


Perhaps.





-  Bp (provable, believable, assumable, communicable). It splits into
a communicable and non communicable part (some fact about
communication are not communicable)

Instead of belief or proof being primitive or ontological,


Belief or proof are not primitive. They are 

Re: Minds, Machines and Gödel

2013-12-21 Thread Craig Weinberg


On Thursday, December 19, 2013 10:13:25 AM UTC-5, Bruno Marchal wrote:


 On 19 Dec 2013, at 15:07, Craig Weinberg wrote:



 On Thursday, December 19, 2013 5:23:20 AM UTC-5, Bruno Marchal wrote:

 Hello Craig,


 That is the very well known attempt by Lucas to use Gödel's theorem to 
 refute mechanism. He was not the only one.

 Most people thinking about this have found the argument, and usually 
 found the mistakes in it. 

 To my knowledge Emil Post is the first to develop both that argument, and 
 to understand that not only that argument does not work, but that the 
 machines can already refute that argument, due to the mechanizability of 
 the diagonalization, made very general by Church thesis.

 In fact either the argument is presented in an effective way, and then 
 machine can refute it precisely, or the argument is based on some 
 fuzziness, and then it proves nothing.


 If 'proof' is an inappropriate concept for first person physics, then I 
 would expect that fuzziness would be the only symptom we can expect. The 
 criticism of Lucas seems to not really understand the spirit of Gödel's 
 theorem, but only focus on the letter of its application...which in the 
 case of Gödel's theorem is precisely the opposite of its meaning.

 The link that Stathis provided demonstrates that Gödel himself understood 
 this:

 So the following disjunctive conclusion is inevitable: Either mathematics 
 is incompletable in this sense, that its evident axioms can never be 
 comprised in a finite rule, that is to say, the human mind (even within the 
 realm of pure mathematics) infinitely surpasses the powers of any finite 
 machine, or else there exist absolutely unsolvable diophantine problems of 
 the type specified . . . (Gödel 1995: 310).

  
 To me it's clear that Gödel means that incompleteness reveals that 
 mathematics is not completable 


 OK. Even arithmetic.



 in the sense that it is not enough to contain the reality of human 
 experience, 


 ?


He says the 'human mind', but I say human experience.
 




 not that it proves that mathematics or arithmetic truth is omniscient and 
 omnipotent beyond our wildest dreams.


 Arithmetical truth is by definition arithmetically omniscient, but 
 certainly not omniscient in general. Indeed to get the whole arithmetical 
 Noùs, Arithmetical truth is still too much weak. All what Gödel showed is 
 that arithmetical truth (or any richer notion of truth, like set 
 theoretical, group theoretical, etc.) cannot be enumerated by machines or 
 effective sound theories.


The issue though is whether that non-enumerability is a symptom of the 
inadequacy of Noùs to contain Psyche, or a symptom of Noùs being so 
undefinable that it can easily contain Psyche as well as Physics. I think 
that Gödel interpreted his own work in the former and you are interpreting 
it in the latter - doesn't mean you're wrong, but I agree with him if he 
thought the former, because Psyche doesn't make sense as a part of Noùs. I 
see Psyche and Physics as the personal and impersonal presentations of 
sense, and Noùs is the re-presentation of physics (meaning physics is 
re-personalized as abstract digital concepts). Physics is the 
commercialization of sense. Psyche is residential sense. Noùs is the 
hotel...commercialized residence.







 An excellent book has been written on that subject by Judson Webb 
 (mechanism, mentalism and metamathematics, reference in the bibliographies 
 in my URL, or in any of my papers).

 In conscience and mechanism, I show all the details of why the argument 
 of Lucas is already refuted by Löbian machines, and Lucas main error is 
  reduced to a confusion between Bp and Bp & p. It is an implicit assumption, 
 in the mind of Lucas and Penrose, of self-correctness, or self-consistency. 
 To be sure, I found 49 errors of logic in Lucas' paper, but the main 
 conceptual one is in that self-correctness assertion.

 Penrose corrected his argument, and understood that it proves only that 
 if we are machine, we cannot know which machine we are, and that gives the 
 math of the 1-indeterminacy, exploited in the arithmetical hypostases. 
 Unfortunately, Penrose did not take that correction into account.

 Gödel's theorem and Quantum Mechanics could not have been more pleasing 
 for the comp aficionado. 
 Gödel's theorem (+UDA) shows that machine have a rich non trivial 
 theology including physics, and QM confirms the most startling points of 
 the comp physics.


 As far as QM goes, it would not surprise me in the least that a formal 
 system based on formal measurements is only able to consider itself and 
 fails to locate the sensory experience or the motive 'power on' required to 
 formalize them in the first place.


 They don't address that question.
 Formal systems are seen as mathematical objects, even numbers, and they 
 exist independently of us, if you still accept arithmetical realism.


I accept the realism of arithmetic representation, and that they 

Re: Minds, Machines and Gödel

2013-12-21 Thread LizR
Reality is analogous to a running software program. Godel's Theorem does
not apply. A human could speculate as to whether any particular state of
Reality could ever arise computationally and it might be impossible to
determine that, but again that has nothing to do with the actual operation
of Reality,since it is only a particular internal mental model of that
reality.

Wouldn't that make reality susceptible to the halting problem?

...hello, is anybody there? Why have all the stars gone out?



Re: Minds, Machines and Gödel

2013-12-20 Thread Bruno Marchal


On 20 Dec 2013, at 01:01, LizR wrote:


On 20 December 2013 11:40, meekerdb meeke...@verizon.net wrote:
On 12/19/2013 1:30 PM, Jesse Mazer wrote:
To me it seems like "thinking something is true" is much more of a  
fuzzy category than "asserting something is true"


Maybe.  But note that Bruno's MGA is couched in terms of a dream,  
just to avoid any input/output.  That seems like a suspicious move  
to me; one that may lead intuition astray.


I seem to recall that Bruno claimed this is a legal move because  
any possible input/output can be encoded as data within the  
computation (or something along those lines.


Yes. Eventually it comes to decide what is your generalized brain.  
If you need the entire physical universe, with 10^100 decimals,  that  
will change nothing in the reasoning, because in step seven, your  
state will still be accessed.


Of course, the entire physical universe also has no input nor output  
(by definition of entire).


For the first six steps, it is easier to assume some high substitution  
level (neuronal) for the thought experiment. Then in step 7, this  
high-level assumption is eliminated.






No doubt Bruno will be able to explain much better than me).


I have tried to talk in English. Now, the fact that we can put the  
input in the code is a fundamental theorem for any universal system,  
known as the SMN theorem. In terms of the phi_i, it means that there is  
a function S of two arguments with

phi_i(4) = phi_S(i, 4)()              (S10)

or

phi_i(4, y, z) = phi_S(i, 4)(y, z)    (S32)

The meta-program S takes the input (4), puts it in the code, and  
suppresses one variable.


For example, S(4, "READ x, READ y, OUTPUT x + y") = "READ y, OUTPUT 4 +  
y".


S is really a substitution.

S is a program, so there exists a number s such that S = phi_s. You can  
use this to see that we can state the SMN theorem quantifying  
only over numbers.
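
[Editorial aside, not in the original post: the same idea sketched in Python, using a closure in place of genuine program-text substitution. The names S and add are ours.]

    # S takes a two-argument program and a fixed first input, and returns a
    # one-argument program with that input "pushed into the code".
    def S(program, fixed):
        def specialized(*rest):
            return program(fixed, *rest)
        return specialized

    def add(x, y):                  # plays the role of phi_i
        return x + y

    add4 = S(add, 4)                # plays the role of phi_S(i, 4)
    assert add4(10) == add(4, 10)   # phi_S(i, 4)(y) == phi_i(4, y)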


The whole of recursion theory can be based axiomatically on two  
axioms:


- the SMN theorem (here an axiom, provable for all reasonable  
programming languages, i.e. universal systems)
- there exists u such that phi_u(i, x) = phi_i(x) (existence of a  
universal number)   (again provable for each individual programming  
language). The universal function u computes phi_i(x), for any program  
i and any data x.
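
[Editorial aside, not in the original post: a sketch of a universal program u with phi_u(i, x) = phi_i(x), taking Python source text in place of the number i for readability.]

    def u(program_text: str, x):
        # Interpret the program text, assumed to define a one-argument
        # function named f, then apply it to the data x.
        env = {}
        exec(program_text, env)
        return env["f"](x)

    assert u("def f(n): return n * n", 5) == 25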


But I guess that here I do not explain better than you, as I use  
notation, which frightens beginners and non-mathematicians.


Yet, we need the SMN theorem to explain the Dx = xx method (to  
define self-reference in arithmetic) in terms of the phi_i and the w_i  
(which I promised to do for you!)
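
[Editorial aside, not in the original post: the Dx = xx idea in miniature, as a classic Python quine built on the same substitution trick of inserting a quoted copy of a text into itself.]

    s = 's = %r\nprint(s %% s)'
    print(s % s)    # prints exactly its own two-line source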


But we might need to revise a bit those phi_i and w_i, perhaps, but  
then I don't want to annoy you with too many technicalities either. What  
do you think? Also, we started this on the FOAR list; would you like to  
continue this, and on which list? Take it easy. I know we are in an  
end-of-the-year feast period :)


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: Minds, Machines and Gödel

2013-12-19 Thread meekerdb
A nice exposition, Jesse.  But it bothers me that it seems to rely on the idea of output 
and a kind of isolation like invoking a meta-level.  What if instead of "Craig Weinberg 
will never in his lifetime assert that this statement is true" we considered "Craig 
Weinberg will never in his lifetime think that this statement is true"?  Then it seems 
that one invokes a kind of paraconsistent logic in which one just refuses to draw any 
inferences from this sentence that one cannot think either true or false.


Brent


On 12/19/2013 8:08 AM, Jesse Mazer wrote:
The argument only works if you assume from the beginning that an A.I. is unconscious or 
doesn't have the same sort of mind as a human (and given your views you probably do 
presuppose these things--but if the conclusion *requires* such presuppositions, then 
it's an exercise in circular reasoning). If you are instead willing to consider that an 
A.I. mind works basically like a human mind (including things like being able to make 
mistakes, and being able to understand things it doesn't say out loud), and are 
willing to put yourself in the place of an A.I. being faced with its own Godel 
statement, then you can see it's like a more formal equivalent of me asking you to 
evaluate the statement Craig Weinberg will never in his lifetime assert that this 
statement is true. You can understand that if you *did* assert that it's true, that 
would of course make it false, but you can likewise understand that as long as you try 
to refrain from uttering any false statements including that one, it *will* end up being 
true.


Similarly, an A.I. who is capable of making erroneous statements, and of understanding 
things distinct from its output to the world outside the program, might well 
 understand that its own Godel statement is true--provided it never outputs a formal 
 judgment that the statement is true, which would mean it's false! So if the A.I. in fact 
 avoided ever giving as output a judgment that the statement is true, it need not 
 be because it lacks an understanding of what's going on, but rather just because it's 
 caught in a bind similar to the one you're caught in with "Craig Weinberg will never in 
 his lifetime assert that this statement is true".
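
[Editorial aside, not in the original posts: the formal counterpart of this informal statement is the Gödel sentence given by the diagonal lemma. For a consistent, effectively axiomatized theory T containing enough arithmetic, with provability predicate Prov_T, there is a sentence G with

    T |- G <-> ~Prov_T(#G)

where #G is the code of G. If T is consistent, T never proves G; so ~Prov_T(#G) holds, which is exactly what G says, and G is true but unprovable in T.]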


To flesh this out a bit, imagine a community of human-like A.I. mathematicians (mind 
uploads, say), living in a self-contained simulated world with no input from the 
outside, who have the ability to reflect on various arithmetical propositions. Once 
there is a consensus in this community that a proposition has been proven true or false, 
they can go to a special terminal (call it the output terminal) and enter it on the 
list of proven statements, which will constitute the simulation's output to those of 
us watching it run in the real world. Suppose also that the simulated world is 
constantly growing, and that they have an internal simulated supercomputer within their 
world to help with their mathematical investigations, and this supercomputer is 
constantly growing in memory too. So if we imagine a string encoding the *initial* state 
of the simulation along with the rules determining its evolution, although this string 
may be very large, after some time has passed the memory of the simulated supercomputer 
will be much larger than that, so it's feasible to have this string appear within the 
supercomputer's memory (and it's part of the rules of the simulation that the string 
automatically appears in the supercomputer's memory after some finite time T within the 
simulation, and all the A.I. mathematicians knew that this was scheduled to happen).


Once the A.I. mathematicians have the program's initial conditions and the rules 
governing subsequent evolution, they can construct their own Godel statement. Of course 
they can never really be sure that the string they are given correctly describes the 
true initial conditions of their own simulated universe, but let's say they have a high 
degree of trust that it is--for example, they might be mind uploads of the humans who 
designed the original simulation, and they remember having designed it to ensure that 
the string that would appear in the supercomputer's memory is the correct one. They 
could even use the growing supercomputer to run a simulation-within-the-simulation of 
their own history, starting from those initial conditions--the sub-simulation would 
always lag behind what they were experiencing, but they could continually verify that 
the events in the sub-simulation matched their historical records and memories up to 
some point in the past.


So, they have a high degree of confidence that the Godel statement they've constructed 
actually is the correct one for their own simulated universe. They can therefore 
 interpret the conceptual meaning of the statement as something like "you guys living in 
 the simulation will never enter into your output terminal a judgment that this statement 
 is true". So they could understand perfectly 

Re: Minds, Machines and Gödel

2013-12-19 Thread Jesse Mazer
To me it seems like "thinking something is true" is much more of a fuzzy
category than "asserting something is true" (even assertions can be
ambiguous when stated in natural language, but they can be made non-fuzzy
by requiring that each assertion be framed in terms of some formal language
and entered into a computer, as in my thought-experiment). Is there any
exact point where you cross between categories like "being completely
unsure whether it's true" and "having a strong hunch it's true" and "having
an argument in mind that it's true but not feeling completely sure there
isn't a flaw in the reasoning" and "being as confident as you can possibly
be that it's true"? I never really feel *absolute* certainty that anything
I think is true, even basic arithmetical statements like 1+1=2, because I'm
aware of how I've sometimes made sloppy mistakes in thinking in the past,
and because I know intelligent people can seem to come to incorrect
conclusions about basic ideas when hypnotized, or when dreaming (like the
logic of various characters in Alice in Wonderland). I think of certain
truth as being like an asymptote that an individual or community of
thinkers can continually get closer to but never quite reach.

If I consider the statement "Jesse Mazer will never think this statement is
true", I may imagine the perspective of someone else and see that from
their perspective it must be true if Jesse's thinking is trustworthy, but
then I'll catch myself and see that this imaginary perspective is really
just a thought in Jesse's head--at that point, have I had the thought that
it's true? And at some point in considering it I can't really help thinking
some words along the lines of "oh, so then it *is* true" (it's hard to
avoid thinking something you know you are forbidden to think, like when
someone tells you "don't think of an elephant"), but is merely thinking the
magic words enough to count as having thought it's true, and therefore
having made it false once and for all?

Jesse


On Thu, Dec 19, 2013 at 3:46 PM, meekerdb meeke...@verizon.net wrote:

 A nice exposition, Jesse.  But it bothers me that it seems to rely on the
 idea of output and a kind of isolation like invoking a meta-level.  What
 if instead of Craig Weinberg will never in his lifetime assert that this
 statement is true we considered Craig Weinberg will never in his lifetime
 think that this statement is true?  Then it seems that one invokes a kind
 of paraconsistent logic in which one just refuses to draw any inferences
 from this sentence that one cannot think either true or false.

 Brent



 On 12/19/2013 8:08 AM, Jesse Mazer wrote:

 The argument only works if you assume from the beginning that an A.I. is
 unconscious or doesn't have the same sort of mind as a human (and given
 your views you probably do presuppose these things--but if the conclusion
 *requires* such presuppositions, then it's an exercise in circular
 reasoning). If you are instead willing to consider that an A.I. mind works
 basically like a human mind (including things like being able to make
 mistakes, and being able to understand things it doesn't say out loud),
 and are willing to put yourself in the place of an A.I. being faced with
 its own Godel statement, then you can see it's like a more formal
 equivalent of me asking you to evaluate the statement Craig Weinberg will
 never in his lifetime assert that this statement is true. You can
 understand that if you *did* assert that it's true, that would of course
 make it false, but you can likewise understand that as long as you try to
 refrain from uttering any false statements including that one, it *will*
 end up being true.

 Similarly, an A.I. who is capable of making erroneous statements, and of
 understanding things distinct from its output to the world outside the
 program, might well understand that its own Godel statement is
 true--provided it never outputs a formal judgment that the statement is
 true, which would mean it's false! So if the A.I. in fact avoided ever
 giving as output a judgment about that the statement is true, it need not
 be because it lacks an understanding of what's going on, but rather just
 because it's caught in a bind similar to the one you're caught in with
 Craig Weinberg will never in his lifetime assert that this statement is
 true.

 To flesh this out a bit, imagine a community of human-like A.I.
 mathematicians (mind uploads, say), living in a self-contained simulated
 world with no input from the outside, who have the ability to reflect on
 various arithmetical propositions. Once there is a consensus in this
 community that a proposition has been proven true or false, they can go to
 a special terminal (call it the output terminal) and enter it on the list
 of proven statements, which will constitute the simulation's output to
 those of us watching it run in the real world. Suppose also that the
 simulated world is constantly growing, and that they have an internal
 simulated supercomputer 

Re: Minds, Machines and Gödel

2013-12-19 Thread meekerdb

On 12/19/2013 1:30 PM, Jesse Mazer wrote:
To me it seems like "thinking something is true" is much more of a fuzzy category than 
"asserting something is true"


Maybe.  But note that Bruno's MGA is couched in terms of a dream, just to avoid any 
input/output.  That seems like a suspicious move to me; one that may lead intuition astray.


Brent


(even assertions can be ambiguous when stated in natural language, but they can be made 
non-fuzzy by requiring that each assertion be framed in terms of some formal language 
and entered into a computer, as in my thought-experiment). Is there any exact point 
where you cross between categories like being completely unsure whether it's true and 
having a strong hunch it's true and having an argument in mind that it's true but not 
feeling completely sure there isn't a flaw in the reasoning and being as confident as 
you can possibly be that it's true? I never really feel *absolute* certainty that 
anything I think is true, even basic arithmetical statements like 1+1=2, because I'm 
aware of how I've sometimes made sloppy mistakes in thinking in the past, and because I 
know intelligent people can seem to come to incorrect conclusions about basic ideas when 
hypnotized, or when dreaming (like the logic of various characters in Alice in 
Wonderland). I think of certain truth as being like an asymptote that an individual or 
community of thinkers can continually get closer to but never quite reach.


If I consider the statement Jesse Mazer will never think this statement is true, I may 
imagine the perspective of someone else and see that from their perspective it must be 
true if Jesse's thinking is trustworthy, but then I'll catch myself and see that this 
imaginary perspective is really just a thought in Jesse's head--at that point, have I 
had the thought that it's true? And at some point in considering it I can't really help 
thinking some words along the lines of oh, so then it *is* true (it's hard to avoid 
thinking something you know you are forbidden to think, like when someone tells you 
don't think of an elephant), but is merely thinking the magic words enough to count as 
having thought it's true, and therefore having made it false once and for all?


Jesse


On Thu, Dec 19, 2013 at 3:46 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


A nice exposition, Jesse.  But it bothers me that it seems to rely on the 
idea of
output and a kind of isolation like invoking a meta-level.  What if 
instead of
Craig Weinberg will never in his lifetime assert that this statement is 
true we
considered Craig Weinberg will never in his lifetime think that this 
statement is
true?  Then it seems that one invokes a kind of paraconsistent logic in 
which one
just refuses to draw any inferences from this sentence that one cannot 
think either
true or false.

Brent



On 12/19/2013 8:08 AM, Jesse Mazer wrote:

The argument only works if you assume from the beginning that an A.I. is
unconscious or doesn't have the same sort of mind as a human (and 
given your
views you probably do presuppose these things--but if the conclusion 
*requires*
such presuppositions, then it's an exercise in circular reasoning). If 
you are
instead willing to consider that an A.I. mind works basically like a 
human mind
(including things like being able to make mistakes, and being able to 
understand
things it doesn't say out loud), and are willing to put yourself in 
the
place of an A.I. being faced with its own Godel statement, then you 
can see
it's like a more formal equivalent of me asking you to evaluate the 
statement
Craig Weinberg will never in his lifetime assert that this statement is 
true.
You can understand that if you *did* assert that it's true, that would 
of course
make it false, but you can likewise understand that as long as you try 
to
refrain from uttering any false statements including that one, it 
*will* end up
being true.

Similarly, an A.I. who is capable of making erroneous statements, and of
understanding things distinct from its output to the world outside the
program, might well understand that its own Godel statement is 
true--provided it
never outputs a formal judgment that the statement is true, which would 
mean
it's false! So if the A.I. in fact avoided ever giving as output a 
judgment
about that the statement is true, it need not be because it lacks an
understanding of what's going on, but rather just because it's caught 
in a bind
similar to the one you're caught in with Craig Weinberg will never in 
his
lifetime assert that this statement is true.

To flesh this out a bit, imagine a community of human-like A.I. 
mathematicians
(mind uploads, say), living in a self-contained simulated world with no 
input
   

Re: Minds, Machines and Gödel

2013-12-19 Thread LizR
On 20 December 2013 11:40, meekerdb meeke...@verizon.net wrote:

  On 12/19/2013 1:30 PM, Jesse Mazer wrote:

 To me it seems like thinking something is true is much more of a fuzzy
 category that asserting something is true


 Maybe.  But note that Bruno's MGA is couched in terms of a dream, just to
 avoid any input/output.  That seems like a suspicious move to me; one that
 may lead intuition astray.


I seem to recall that Bruno claimed this is a legal move because any
possible input/output can be encoded as data within the computation (or
something along those lines. No doubt Bruno will be able to explain much
better than me).



Re: Minds, Machines and Gödel

2013-12-18 Thread LizR
If this is a proof of the falsity of mechanism, is there any chance of a
precis? :-)



Re: Minds, Machines and Gödel

2013-12-18 Thread Stathis Papaioannou
On 19 December 2013 08:32, LizR lizj...@gmail.com wrote:
 If this is a proof of the falsity of mechanism, is there any chance of a
 precis? :-)

The argument has been restated with elaboration by Penrose, and has
been extensively criticised.

http://www.iep.utm.edu/lp-argue/


-- 
Stathis Papaioannou
