Re: How would a computer know if it were conscious?

2007-06-28 Thread Bruno Marchal

David,


On 17 June 2007, at 18:28, David Nyman wrote:

 IMHO this semantic model gives you a knock-down argument against
 'computationalism', *unless* one identifies (I'm hoping to hear from
 Bruno on this) the 'primitive' entities and operators with those of
 the number realm - i.e. you make numbers and their relationships the
 'primitive base'.  But crucially, you must still take these entities
 and their relationships to be the *real* basis of personal-world
 'grasp'.  If you continue to adopt a 'somethingist' view, then no
 'program' (i.e. one of the arbitrarily large set that could be imputed
 to any 'something') could coherently be responsible for its personal-
 world grasp (such as it may be).  This is the substance of the UDA
 argument.  All personal-worlds must emerge internally via recursive
 levels of relationship inherited from primitive grasp: in a
 'somethingist' view, such grasp must reside with a primitive
 'something', as we have seen, and in a computationalist view, it must
 reside in the number realm.  But the fundamental insight applies.



I agree completely, but I am not yet convinced that you appreciate my 
methodological way of proceeding. I have to ask you questions, but I 
see you have been prolific during the Siena congress, which is not 
gentle on my mailbox :). Anyway, I will take some time to read your 
posts and the others' before asking questions that others have perhaps 
already asked and that you have perhaps already answered.

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: Asifism

2007-06-28 Thread Bruno Marchal

On 19 June 2007, at 10:55, Mohsen Ravanbakhsh wrote (to Torgny Tholerus):

 TT: The subjective experience is just some sort of behaviour.  You 
 can make computers show the same sort of behaviour, if the computers 
 are complicated enough.

 But we're not talking about the 3rd person point of view. I cannot see 
 how you reduce the subjective experience of the first person to 
 behaviour that a third-person view can evaluate! The whole problem is 
 this first-person experience.


Of course, in this context I do agree with Mohsen Ravanbakhsh's answer. 
But eventually I could say, perhaps with David, that the first person 
experience is not so much the problem. On the contrary, the third 
person discourse and its apparent sharability (first person plural, 
with the comp hyp) is the really difficult problem. It just happens 
that we are used to taking that problem for granted.
Also, for Torgny, I doubt there is a problem with first person notions, 
given that for him (if that means something) there is no first person!
Torgny's self-zombiness is irrefutable, like solipsism (but more original 
than solipsism). Of course each of us capable of knowing anything knows 
that Torgny is wrong about us, and I guess Torgny is not a zombie, so 
that I guess (and can do nothing more than guess) that he is also wrong 
about himself. But this nobody can know for sure. OK?


Bruno




http://iridia.ulb.ac.be/~marchal/




Re: How would a computer know if it were conscious?

2007-06-28 Thread David Nyman
On 28/06/07, Bruno Marchal [EMAIL PROTECTED] wrote:

BM:  I agree completely.

DN:  A good beginning!

BM:  ...but I am not yet convinced that you appreciate my
methodological way of proceeding.

DN: That may well be so.  In that case it's interesting that we reached the
same conclusion.

BM:  Anyway, I will take some time to read your posts and the others'
before asking questions that others have perhaps already asked and that
you have perhaps already answered.

DN:  I'm at your disposal.

David






Re: Asifism

2007-06-28 Thread Bruno Marchal


On 19 June 2007, at 21:27, Brent Meeker wrote to Quentin:


 Quentin Anciaux wrote:
  On Tuesday 19 June 2007 20:16:57 Brent Meeker wrote:
   Quentin Anciaux wrote:
    On Tuesday 19 June 2007 11:37:09 Torgny Tholerus wrote:
     Mohsen Ravanbakhsh wrote:
      The subjective experience is just some sort of behaviour.  You 
      can make computers show the same sort of behaviour, if the 
      computers are complicated enough.

      But we're not talking about the 3rd person point of view. I 
      cannot see how you reduce the subjective experience of the 
      first person to behaviour that a third-person view can 
      evaluate! The whole problem is this first-person experience.

     What you call the subjective experience of the first person is 
     just some sort of behaviour.  When you claim that you have the 
     subjective experience of the first person, I can see that you 
     are just showing a special kind of behaviour.  You behave as if 
     you have the subjective experience of the first person.  And it 
     is possible for a sufficiently complicated computer to show 
     exactly the same behaviour.  But in the case of the computer, 
     you can see that there is no subjective experience; there are 
     just a lot of electrical phenomena interacting with each other.

     There is no first-person-experience problem, because there is 
     no first-person experience.

     -- Torgny Tholerus

    Like I said earlier, this is pure nonsense, as I have proof that 
    I have inner experience... I can't prove it to you because this 
    is what this is all about: you can't prove the 1st person pov to 
    others.  And I don't see why the fact that a computer is made of 
    wire can't give it consciousness... there is no implication at 
    all.

    Again, denying the phenomenon does not make it disappear... it's 
    no explanation at all.

    Quentin

   I think the point is that after all the behavior is explained, 
   including brain processes, we will just say, "See, that's the 
   consciousness there."  Just as after explaining metabolism and 
   growth and reproduction we said, "See, that's life."  Some people 
   still wanted to know where the life (i.e. the elan vital) was, but 
   it seemed to be an uninteresting question of semantics.

   Brent Meeker

  I don't think the comparison between 'elan vital' and consciousness 
  is fair.

 I think it is fair.  Remember that in prospect people argued that 
 chemistry and physics could never explain life no matter how 
 completely they described the physical processes in a living thing.  
 All those cells and molecules and atoms were inanimate; none of them 
 had life - so they couldn't possibly explain the difference between 
 alive and dead.


I think you miss the point.  To define life/death can only be a useless 
semantic game. But nobody really doubts his own consciousness 
(especially going to the dentist), even though we cannot define it or 
explain it completely. Like Quentin, I do think it is unfair to compare 
the elan vital and consciousness. The elan vital is a poor theory which 
has been overthrown by a better one. Consciousness is a fact, albeit a 
peculiar, personal one in need of an explanation; and there is a 
quasi-consensus among workers in that field that we don't see how to 
explain consciousness from something simpler (a bit like the numbers, 
by the way...).




  I don't think consciousness is just a semantic question.

 I didn't mean to imply that.  I meant that the residual question, 
 after all the behavior and processes are explained (answering very 
 substantive questions), will seem to be a matter of making semantic 
 distinctions, like the question, "Is a virus alive?"

  As I don't believe that you could pinpoint consciousness... until 
  proved otherwise.

 No, it won't be pinpointed.  It will be diffuse, an interaction of 
 multiple sensory and action processes, and you won't be able to point 
 to a single location.  But, if we do succeed with our explanation, 
 maybe we'll be able to say, "This being is conscious of this now and 
 not conscious of that," or "This being does not have self-awareness 
 and this one does."



Well, now, I can prove that if the comp hyp is true then those 
brave-new-world-like assertions are provably wrong. If comp is true, 
nobody (I should perhaps say no soul) will ever be able to decide 
whether any other entity is conscious or not. Actually, comp could be 
false, because it is not even clear that some entity can be completely 
sure of his/her/its own consciousness ...





 And "conscious" and "aware" will have well-defined operational (3rd 
 person) meanings.

 Or maybe we'll discover that we have to talk in some other terms not 
 yet invented, just as our predecessors had to stop talking about 
 "animate" and "inanimate" and instead talk about metabolism and 
 replication.

Terms by themselves will not sort out the difficulty. Even just our 
beliefs or bets in numbers present big conceptual difficulties.


Bruno




Re: How would a computer know if it were conscious?

2007-06-28 Thread Bruno Marchal


On 21 June 2007, at 01:07, David Nyman wrote:


 On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:

 Personally I don't think we can be *personally* mistaken about our own
 consciousness even if we can be mistaken about anything that
 consciousness could be about.

 I agree with this, but I would prefer to stop using the term
 'consciousness' at all.


Why?



 To make a decision (to whatever degree of
 certainty) about whether a machine possessed a 1-person pov analogous
 to a human one, we would surely ask it the same sort of questions one
 would ask a human.  That is: questions about its personal 'world' -
 what it sees, hears, tastes (and perhaps extended non-human
 modalities); what its intentions are, and how it carries them into
 practice.  From the machine's point-of-view, we would expect it to
 report such features of its personal world as being immediately
 present (as ours are), and that it be 'blind' to whatever 'rendering
 mechanisms' may underlie this (as we are).

 If it passed these tests, it would be making similar claims on a
 personal world as we do, and deploying this to achieve similar ends.
 Since in this case it could ask itself the same questions that we can,
 it would have the same grounds for reaching the same conclusion.

 However, I've argued in the other bit of this thread against the
 possibility of a computer in practice being able to instantiate such a
 1-person world merely in virtue of 'soft' behaviour (i.e.
 programming).  I suppose I would therefore have to conclude that no
 machine could actually pass the tests I describe above - whether self-
 administered or not - purely in virtue of running some AI program,
 however complex.  This is an empirical prediction, and will have to
 await an empirical outcome.


Now I have big problems understanding this post. I must think ... (and 
go).

Bye,

Bruno





 On Jun 5, 3:12 pm, Bruno Marchal [EMAIL PROTECTED] wrote:
 On 3 June 2007, at 21:52, Hal Finney wrote:



 Part of what I wanted to get at in my thought experiment is the
 bafflement and confusion an AI should feel when exposed to human 
 ideas
 about consciousness.  Various people here have proffered their own
 ideas, and we might assume that the AI would read these suggestions,
 along with many other ideas that contradict the ones offered here.
 It seems hard to escape the conclusion that the only logical response
 is for the AI to figuratively throw up its hands and say that it is
 impossible to know if it is conscious, because even humans cannot 
 agree
 on what consciousness is.

 Augustine said about (subjective) *time* that he knows perfectly well 
 what it is, but that if you ask him to say what it is, then he admits 
 being unable to say anything. I think that this applies to 
 consciousness. We know what it is, although only in some personal and 
 incommunicable way.
 Now this happens to be true also for many mathematical concepts. 
 Strictly speaking we don't know how to define the natural numbers, and 
 we know today that indeed we cannot define them in a communicable way, 
 that is, without assuming the auditor already knows what they are.

 So what can we do?  We can do what mathematicians do all the time. We 
 can abandon the very idea of *defining* what consciousness is, and try 
 instead to focus on principles or statements which we can agree apply 
 to consciousness. Then we can search for (mathematical) objects obeying 
 such or similar principles. This can be made easier by admitting some 
 theory or realm for consciousness, like the idea that consciousness 
 could apply to *some* machines or to some *computational events*, etc.

 We could agree for example that:
 1) each one of us knows what consciousness is, but nobody can prove 
 he/she/it is conscious;
 2) consciousness is related to an inner, personal or self-referential 
 modality;
 etc.

 This is how I proceed in Conscience et Mécanisme.  ('Conscience' is 
 the French for 'consciousness'; 'conscience morale' is the French for 
 the English 'conscience'.)



 In particular I don't think an AI could be expected to claim that it
 knows that it is conscious, that consciousness is a deep and 
 intrinsic
 part of itself, that whatever else it might be mistaken about it 
 could
 not be mistaken about being conscious.  I don't see any logical way 
 it
 could reach this conclusion by studying the corpus of writings on the
 topic.  If anyone disagrees, I'd like to hear how it could happen.

 As far as the machine is correct, when she introspects herself she 
 cannot fail to discover a gap between truth (p) and provability (Bp). 
 The machine can discover correctly (but not necessarily in a completely 
 communicable way) a gap between provability (which can potentially lead 
 to falsities, despite correctness) and the incorrigible knowability or 
 knowledgeability (Bp & p), and then the gap between those notions and 
 observability (Bp & Dp) and sensibility (Bp & Dp & p). Even without 
 using the conventional name of 
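For reference, the nested modalities just listed can be collected in 
one display (a restatement in the notation of the post itself, where B 
is the machine's provability operator and Dp abbreviates \neg B \neg p):

\[
\begin{array}{ll}
\text{truth:} & p\\
\text{provability:} & Bp\\
\text{knowability:} & Bp \land p\\
\text{observability:} & Bp \land Dp\\
\text{sensibility:} & Bp \land Dp \land p
\end{array}
\]

By incompleteness, these notions obey different modal logics for any 
correct machine; that divergence is the 'gap' the post describes.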

Re: Asifism

2007-06-28 Thread Torgny Tholerus

Bruno Marchal wrote:

 But nobody really doubts his own consciousness 
 (especially going to the dentist), even though we cannot define it or 
 explain it completely.

That sentence is wrong.  There is at least one person (me...) who 
really doubts his own consciousness.  I am conscious that I am not 
conscious.  I know that I do not know anything.  When I go to the 
dentist I behave as if I am feeling strong pain, because my pain center 
is directly stimulated by the dentist, and that is what causes my 
behaviour.

Conscious-like behaviour is good for a species' survival.  Therefore 
human beings show that type of behaviour.

-- 
Torgny Tholerus





Re: How would a computer know if it were conscious?

2007-06-28 Thread David Nyman
On 28/06/07, Bruno Marchal [EMAIL PROTECTED] wrote:

Hi Bruno

The remarks you comment on are certainly not the best-considered or most
cogently expressed of my recent posts.  However, I'll try to clarify if
you have specific questions.  As to why I said I'd rather not use the
term 'consciousness', it's because of some recent confusion and circular
disputes (e.g. with Torgny, or about whether hydrogen atoms are
'conscious').  Some of the senses that get conflated (not by you, I
hasten to add!) seem to be:

1) The fact of possessing awareness
2) The fact of being aware of one's awareness
3) The fact of being aware of some content of one's awareness

So now I would prefer to talk about self-relating to a 1-personal 
'world', where previously I might have said 'I am conscious', and to 
say that such a world mediates or instantiates 3-personal content.  
I've tried to root this (in various posts) in a logically or 
semantically primitive notion of self-relation that could underlie 0-, 
1-, or 3-person narratives, and to suggest that such self-relation 
might be intuited as 'sense' or 'action' depending on the narrative 
selected.  But crucially such nuances would merely be partial takes on 
the underlying self-relation, a 'grasp' which is not decomposable.

So ISTM that questions should attempt to elicit the machine's 
self-relation to such a world and its contents: i.e. its 'grasp' of a 
reality analogous to our own.  And ISTM the machine could also ask 
itself such questions, just as we can, if indeed such a world existed 
for it.

I realise of course that it's fruitless to try to impose my jargon on anyone
else, but I've just been trying to see whether I could become less confused
by expressing things in this way.  Of course, a reciprocal effect might just
be to make others more confused!

David




Re: Asifism

2007-06-28 Thread Quentin Anciaux

On Thursday 28 June 2007 16:52:12 Torgny Tholerus wrote:
 Bruno Marchal wrote:
  But nobody really doubts his own consciousness
  (especially going to the dentist), even though we cannot define it or
  explain it completely.

 That sentence is wrong.  

Don't think so...

 There is at least one person (me...) who
 really doubts his own consciousness.  I am conscious that I
 am not conscious.  I know that I do not know anything.  When I go to
 the dentist I behave as if I am feeling strong pain, because my pain
 center is directly stimulated by the dentist, and that is what causes
 my behaviour.

What is behaving?  (I can't ask 'who', since obviously you're insisting 
that there isn't one.)

 Conscious-like behaviour is good for a species' survival.  Therefore
 human beings show that type of behaviour.

I don't know what conscious-like behaviour is without consciousness in 
the first place.

Quentin




Re: Asifism

2007-06-28 Thread Torgny Tholerus





Quentin Anciaux wrote:

 On Thursday 28 June 2007 16:52:12 Torgny Tholerus wrote:

  Conscious-like behaviour is good for a species' survival.  Therefore
  human beings show that type of behaviour.

 I don't know what conscious-like behaviour is without consciousness in
 the first place.

An animal can show conscious-like behaviour.  When a dog sees a rabbit, 
the dog behaves as if he is conscious that there is food in front of 
him.  He starts running after the rabbit as quickly as he can.

-- 
Torgny Tholerus








Re: Asifism

2007-06-28 Thread Quentin Anciaux

On Thursday 28 June 2007 19:22:35 Torgny Tholerus wrote:
 Quentin Anciaux wrote:
  On Thursday 28 June 2007 16:52:12 Torgny Tholerus wrote:

   Conscious-like behaviour is good for a species' survival.  Therefore
   human beings show that type of behaviour.

  I don't know what conscious-like behaviour is without consciousness
  in the first place.

 An animal can show conscious-like behaviour.  When a dog sees a rabbit,
 the dog behaves as if he is conscious that there is food in front of
 him.  He starts running after the rabbit as quickly as he can.

 --
 Torgny Tholerus

It doesn't mean anything... what does 'as if' mean if the thing you are 
comparing to does not exist (here, consciousness)?  You can't act as if 
you are conscious if consciousness is something which does not exist; 
it simply doesn't mean anything.  By the way, I'm sure dogs are 
conscious (they have an inner personal world).

Quentin




Re: Penrose and algorithms

2007-06-28 Thread LauLuna


This is not fair to Penrose. He has convincingly argued in 'Shadows of
the Mind' that human mathematical intelligence cannot be a knowably
sound algorithm.

Assume X is an algorithm representing human mathematical intelligence. 
The point is not that man cannot recognize X as representing his own 
intelligence; it is rather that human intelligence cannot know X to be 
sound (independently of whether X is recognized as what it is). And 
this is strange, because humans could exhaustively inspect X, and they 
should find it correct, since it contains the same principles of 
reasoning that human intelligence employs!
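In outline, the Gödelian step being invoked can be written in two lines 
(a standard reconstruction, not a quotation of Penrose; Know(...) is 
used informally for what human intelligence can know):

\[
\mathrm{Know}(\mathrm{Sound}(X)) \Rightarrow \mathrm{Know}(\mathrm{Con}(X)),
\qquad
X \nvdash \mathrm{Con}(X) \quad (\text{Gödel II, for consistent } X).
\]

So if we could know X to be sound, we would know Con(X), a truth X 
itself cannot prove, and X would then fail to capture all of our 
mathematical knowledge. The conclusion is only that X cannot be 
*knowably* sound, which is exactly the restriction stated above.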

One way out is to claim that human intelligence is inconsistent. 
Another is that such a thing as 'human intelligence' could not exist, 
since it is not well defined. The latter seems to me the more serious 
objection. So I consider Penrose's argument inconclusive.

Anyway, the use Lucas and Penrose make of Gödel's theorem makes it seem 
less likely that human reason can be reproduced by machines. This much 
must be granted.

Regards


On 9 jun, 18:40, Bruno Marchal [EMAIL PROTECTED] wrote:
 Hi Chris,

 On 9 June 2007, at 13:03, chris peck wrote:







  Hello

  The time has come again when I need to seek advice from the
  everything-list
  and its contributors.

  Penrose, I believe, has argued that the inability to algorithmically 
  solve the halting problem, together with the ability of humans, or at 
  least Kurt Gödel, to understand that formal systems are incomplete, 
  demonstrates that human reason is not algorithmic in nature - and 
  therefore that the AI project is fundamentally flawed.

  What is the general consensus here on that score?  I know that there 
  are many perspectives here, including those who agree with Penrose. 
  Are there any decent threads I could look at that deal with this 
  issue?

  All the best

  Chris.

 This is a fundamental issue, even though things have been clear for 
 the logicians since 1921 ...
 But apparently it is still very cloudy for the physicists (except 
 Hofstadter!).

 I have no time to explain, but let me quote the first paragraph of my 
 Siena paper (your question is at the heart of the interview of the 
 lobian machine and the arithmetical interpretation of Plotinus).

 But you can find many more explanations on my web pages (in French and 
 in English). In a nutshell Penrose, though quite courageous and more 
 lucid on the mind-body problem than the average physicist, is deeply 
 mistaken about Gödel. Gödel's theorems are a very lucky event for 
 mechanism: eventually they lead to the machines' theologies ...

 The book by Franzen on the misuse of Gödel is quite good. A deep book 
 is also the one by Judson Webb (ref. in my thesis). We will have the 
 opportunity to come back to this deep issue, which illustrates a gap 
 between logicians and physicists.

 Best,

 Bruno

 -- (excerpt of 'A Purely Arithmetical, yet Empirically Falsifiable, 
 Interpretation of Plotinus' Theory of Matter', CiE 2007)

 1) Incompleteness and Mechanism
 There is a vast literature where Gödel's first and second 
 incompleteness theorems are used to argue that human beings are 
 different from, if not superior to, any machine. The most famous 
 attempts have been given by J. Lucas in the early sixties and by R. 
 Penrose in two famous books [53, 54]. Such types of argument are not 
 well supported. See for example the recent book by T. Franzen [21]. 
 There is also a less well known tradition where Gödel's theorems are 
 used in favor of the mechanist thesis. Emil Post, in a remarkable 
 anticipation written about ten years before Gödel published his 
 incompleteness theorems, already discovered both the main 'Gödelian 
 motivation' against mechanism, and the main pitfall of such 
 argumentations [17, 55]. Post is the first discoverer of Church's 
 Thesis, or Church-Turing Thesis, and Post is the first one to prove 
 the first incompleteness theorem from a statement equivalent to 
 Church's thesis, i.e. the existence of a universal (Post said 
 'complete') normal (production) system. In his anticipation, Post 
 concluded at first that the mathematician's mind, or the logical 
 process, is essentially creative. He adds: 'It makes of the 
 mathematician much more than a clever being who can do quickly what a 
 machine could do ultimately. We see that a machine would never give a 
 complete logic; for once the machine is made we could prove a theorem 
 it does not prove' (Post's emphasis). But Post quickly realized that a 
 machine could do the same deduction for its own mental acts, and 
 admits that: 'The conclusion that man is not a machine is invalid. All 
 we can say is that man cannot construct a machine which can do all the 
 thinking he can. To illustrate this point we may note that a kind of 
 machine-man could be constructed who would prove a similar theorem for 
 his mental acts.'
 This has probably constituted his motivation for lifting the term 
 'creative' to his set-theoretical formulation of mechanical 
 universality [56]. To be sure, an 

Re: Asifism

2007-06-28 Thread Brent Meeker

Quentin Anciaux wrote:
 On Thursday 28 June 2007 16:52:12 Torgny Tholerus wrote:
  Bruno Marchal wrote:
   But nobody really doubts his own consciousness
   (especially going to the dentist), even though we cannot define it
   or explain it completely.

  That sentence is wrong.

 Don't think so...

  There is at least one person (me...) who
  really doubts his own consciousness.  I am conscious that I
  am not conscious.  I know that I do not know anything.  When I go to
  the dentist I behave as if I am feeling strong pain, because my pain
  center is directly stimulated by the dentist, and that is what causes
  my behaviour.

 What is behaving?  (I can't ask 'who', since obviously you're
 insisting that there isn't one.)

  Conscious-like behaviour is good for a species' survival.  Therefore
  human beings show that type of behaviour.

 I don't know what conscious-like behaviour is without consciousness in
 the first place.

 Quentin

But if consciousness is implied by conscious-like behavior, then it may 
be explained by the same things that explain behavior, i.e. physics and 
chemistry.

Brent Meeker




Re: Penrose and algorithms

2007-06-28 Thread Jesse Mazer

LauLuna wrote:




This is not fair to Penrose. He has convincingly argued in 'Shadows of
the Mind' that human mathematical intelligence cannot be a knowably
sound algorithm.

Assume X is an algorithm representing human mathematical intelligence. 
The point is not that man cannot recognize X as representing his own 
intelligence; it is rather that human intelligence cannot know X to be 
sound (independently of whether X is recognized as what it is). And 
this is strange, because humans could exhaustively inspect X, and they 
should find it correct, since it contains the same principles of 
reasoning that human intelligence employs!

But why do you think human mathematical intelligence should be based on 
nothing more than logical deductions from certain principles of 
reasoning, like an axiomatic system? It seems to me this is the basic 
flaw in the argument--for an axiomatic system we can look at each axiom 
individually, and if we think they're all true statements about 
mathematics, we can feel confident that any theorems derived logically 
from these axioms should be true as well. But if someone gives you a 
detailed simulation of the brain of a human mathematician, there's 
nothing analogous you can do to feel 100% certain that the simulated 
brain will never give you a false statement. It helps if you actually 
imagine such a simulation being performed, and then think about what 
Gödel's theorem would tell you about this simulation, as I did in this 
post:

http://groups.google.com/group/everything-list/browse_thread/thread/f97ba8b290f7/5627eb66017304f2?lnk=gstrnum=1#5627eb66017304f2

Jesse

_
Make every IM count. Download Messenger and join the i'm Initiative now. 
It's free. http://im.live.com/messenger/im/home/?source=TAGHM_June07





Re: Asifism

2007-06-28 Thread Quentin Anciaux

On Thursday 28 June 2007 21:59:40 Brent Meeker wrote:
 Quentin Anciaux wrote:
  On Thursday 28 June 2007 16:52:12 Torgny Tholerus wrote:
   Bruno Marchal wrote:
    But nobody really doubts his own consciousness
    (especially going to the dentist), even though we cannot define it
    or explain it completely.

   That sentence is wrong.

  Don't think so...

   There is at least one person (me...) who
   really doubts his own consciousness.  I am conscious that I
   am not conscious.  I know that I do not know anything.  When I go
   to the dentist I behave as if I am feeling strong pain, because my
   pain center is directly stimulated by the dentist, and that is what
   causes my behaviour.

  What is behaving?  (I can't ask 'who', since obviously you're
  insisting that there isn't one.)

   Conscious-like behaviour is good for a species' survival.  Therefore
   human beings show that type of behaviour.

  I don't know what conscious-like behaviour is without consciousness
  in the first place.

  Quentin

 But if consciousness is implied by conscious-like behavior, then it may
 be explained by the same things that explain behavior, i.e. physics and
 chemistry.

 Brent Meeker

Well, I don't see how that denies consciousness... On the other hand, 
currently, physics and chemistry don't explain everything... and maybe 
Bruno's hypothesis is what underlies all this... still, that does not 
deny the phenomenon of consciousness. And still I *can't* accept any 
(so-called) proof that consciousness does not exist, given that *I*, at 
least, am conscious for sure.

Regards,
Quentin




Re: Penrose and algorithms

2007-06-28 Thread LauLuna

For any Turing machine there is an equivalent axiomatic system; whether 
we could construct it or not is of no significance here.

Reading your link I was impressed by Russell Standish's sentence:

'I cannot prove this statement'

and how he said he could not prove it true and then proved it true.

Isn't it more likely that the sentence is paradoxical and therefore 
non-propositional?  This is what could make a difference between humans 
and computers: the corresponding sentence for a computer (when 'I' is 
replaced with the description of the computer) could not be 
non-propositional: it would be a Gödelian sentence.
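Spelling out that last claim (standard Gödel, with Prov_F the 
provability predicate of the formal system F associated with the 
computer): the diagonal lemma supplies an arithmetical sentence G_F with

\[
F \vdash \bigl( G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr).
\]

Unlike the informal 'I cannot prove this statement', G_F is an ordinary 
arithmetical proposition, true (and unprovable in F) whenever F is 
consistent; nothing non-propositional remains once 'prove' is made 
precise.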

Regards

On Jun 28, 10:05 pm, Jesse Mazer [EMAIL PROTECTED] wrote:
 LauLuna wrote:

 This is not fair to Penrose. He has convincingly argued in 'Shadows of
 the Mind' that human mathematical intelligence cannot be a knowably
 sound algorithm.

 Assume X is an algorithm representing human mathematical intelligence. 
 The point is not that man cannot recognize X as representing his own 
 intelligence; it is rather that human intelligence cannot know X to be 
 sound (independently of whether X is recognized as what it is). And 
 this is strange, because humans could exhaustively inspect X, and they 
 should find it correct, since it contains the same principles of 
 reasoning that human intelligence employs!

 But why do you think human mathematical intelligence should be based
 on nothing more than logical deductions from certain principles of
 reasoning, like an axiomatic system? It seems to me this is the basic
 flaw in the argument--for an axiomatic system we can look at each axiom
 individually, and if we think they're all true statements about
 mathematics, we can feel confident that any theorems derived logically
 from these axioms should be true as well. But if someone gives you a
 detailed simulation of the brain of a human mathematician, there's
 nothing analogous you can do to feel 100% certain that the simulated
 brain will never give you a false statement. It helps if you actually
 imagine such a simulation being performed, and then think about what
 Gödel's theorem would tell you about this simulation, as I did in this
 post:

 http://groups.google.com/group/everything-list/browse_thread/thread/f...

 Jesse

 _
 Make every IM count. Download Messenger and join the i'm Initiative now.
 It's free.http://im.live.com/messenger/im/home/?source=TAGHM_June07





Re: Penrose and algorithms

2007-06-28 Thread Jesse Mazer

LauLuna wrote:



For any Turing machine there is an equivalent axiomatic system;
whether we could construct it or not, is of no significance here.

But for a simulation of a mathematician's brain, the axioms wouldn't be 
statements about arithmetic that we could inspect individually and 
judge true or false; they'd just be statements about the initial state 
and behavior of the simulated brain. So again, there'd be no way to 
inspect the system and feel perfectly confident that it would never 
output a false statement about arithmetic, unlike in the case of the 
axiomatic systems used by mathematicians to prove theorems.


Reading your link I was impressed by Russell Standish's sentence:

'I cannot prove this statement'

and how he said he could not prove it true and then proved it true.

But 'prove' does not have any precisely defined meaning here. If you 
wanted to make it closer to Gödel's theorem, then again you'd have to 
take a detailed simulation of a human mind which can output various 
statements, and then look at the statement 'The simulation will never 
output this statement'--certainly the simulated mind can see that if he 
doesn't make a mistake he *will* never output that statement, but he 
can't be 100% sure he'll never make a mistake, and the statement itself 
is only about the well-defined notion of what output the simulation 
gives, not about more ill-defined notions of what the simulation knows 
or can prove in its own mind.
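That output-based version can be made fully precise (a sketch, with 
Out(s) abbreviating 'the simulation at some point outputs s', and S 
obtained by the usual diagonal construction):

\[
S \;\equiv\; \neg\,\mathrm{Out}(\ulcorner S \urcorner).
\]

If the simulation outputs only true statements, it never outputs S, and 
S is therefore true. The simulated mind can see this conditional just 
as we can; what it cannot do, as noted above, is be certain that it 
will never slip and output S.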

Jesse

_
Get a preview of Live Earth, the hottest event this summer - only on MSN 
http://liveearth.msn.com?source=msntaglineliveearthhm

