On Fri, Jun 10, 2011 at 09:45:56AM +0200, Bruno Marchal wrote:
> 
> I realize I have been clear on this in some FOR list post, perhaps
> not here. I don't think I have varied on this. To be conscious, you
> need only to be universal. 

I have heard universality argued to be necessary (to which I have
some sympathy, though not beyond doubt), but not sufficient, before.
Why do you say that universality is sufficient for consciousness?
Surely the machine needs to actually be running the right program in
order to be conscious.

> To be self-conscious, and have free-will,
> you need to be Löbian. 
> I have no doubts that planaria and other
> worms are conscious, but they have no notion of self and of others
> (or only a very crude one). Löbianity is more sophisticated. Löbian
> entities can infer propositions about themselves and about others.
> They can attribute consciousness to others.

OK - I understand this. But effectively Loeb's axiom gives rise to
self-referential discourse, so saying something is Loebian doesn't
really add anything over saying it is self-aware. It would be nice if
the approach gave us some new tests of self-awareness, or, even
better, ways of quantifying the level of thinking.
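For readers following along, the axiom in question (Löb's axiom, the
characteristic axiom of the provability logic G, also written GL) can
be stated in plain modal notation, reading []p as "p is provable":

    []([]p -> p) -> []p

i.e. if the machine can prove "if I can prove p, then p", then it can
already prove p. Solovay's theorem shows that G captures exactly the
propositional provability logic of Peano Arithmetic, which is the
sense in which G and G* are claimed to be complete at the
propositional level.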

> 
> In the arithmetical term, consciousness appears with Robinson
> Arithmetic (= Peano Arithmetic without the induction axioms), and
> Löbianity (and self-consciousness) appears with Peano Arithmetic.
> Löbian entities have the same rich theology (captured *completely* by
> G and G* at the propositional level).
> 

Sufficient or necessary? I find it hard to believe that all RA
theorems are conscious, but I can accept that some might be, given that
a universal machine must appear within RA.
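To spell out the difference being quoted: Robinson Arithmetic (Q) keeps
the basic axioms for successor, addition and multiplication, but drops
the induction schema that Peano Arithmetic adds, one instance per
formula phi:

    (phi(0) & forall n (phi(n) -> phi(n+1))) -> forall n phi(n)

Even without induction, RA represents all computable functions and is
essentially undecidable, which is presumably the sense in which a
universal machine "appears within RA".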

> So to be conscious, all you need is a brain, or a computer. All
> animals with a centralized nervous system are probably conscious. 

Again, I think you can only say it is necessary to have a brain, or a
computer, but not sufficient. Unless I'm missing something.


> 
> 
> >Assuming you're
> >identifying free will (or Loebianity) with consciousness, then only
> >selectively granting species might get around the anthropic ant
> >argument.
> 
> My criticism of that argument was more about its use of a form of
> Absolute Self-Sampling Assumption. It makes no sense to me to ask
> what is the probability of being a bacterium, or a human, or an
> alien.
>  The only probability is the probability to have some
> conscious state starting from having some conscious state (cf. RSSA
> versus ASSA).
> 

I take it that you find the original doomsday argument absurd. I
don't, so I'm keen to hear other people's rationalisations of why it
doesn't work. Even better would be an empirical test that it fails abysmally.

Only the ASSA do I find absurd :). But I don't use this.

-- 

----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      hpco...@hpcoders.com.au
University of New South Wales          http://www.hpcoders.com.au
----------------------------------------------------------------------------

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
