On Tue, Sep 16, 2014 at 06:21:04PM +0200, Bruno Marchal wrote:
> Hi Russell, Hi Others,
> 
> Sorry for the delay. Some comments on your (Russell) MGA paper
> appear below.
> 
> 
> 
> Incidentally, when you see the complexity of the interaction between
> the roots of trees and the soil, through chemicals and bacteria,
> and when you believe, as some experiments suggest, that trees and
> plants communicate, I am not so sure that trees and forests,
> perhaps on a different time scale, do not have some awareness, and
> a self-awareness of some sort. (I take awareness as synonymous with
> consciousness, although I change my mind below!)
> 

Intra-plant communication appears to be too simple to support
consciousness, but rhizozone networks are indeed a different
story. We can leave it as an open problem whether the rhizozone of a
forest could be conscious, just as we're prepared to consider ant
colonies as conscious.

> 
> 
> OK. I will make a try. Awareness in its most basic forms comes from
> the ability to distinguish a good feeling from a bad feeling. The
> amoeba, like us, knows (in a weak sense) that eating some paramecium
> is good, but that places which are too hot or too cold are bad, and
> this makes it react accordingly, with a high degree of relative
> self-referential correctness.

This definition would grant consciousness to thermostats. I don't
believe it is enough - it really evacuates the concept of
consciousness. But until there is some agreement on what
"consciousness" means, this will be a sterile debate.

...

> 
> I agree that to have awareness, you need a self, a third person
> self. But that is well played by the relative body (actually bodies,
> incarnate through the UD).
> 
> Maybe we should define consciousness by self-awareness, and then
> self-consciousness would be the higher form of self-self-awareness?
> That makes one "self" per reflexive loop.
> 

What's the distinction?

> 
> 
> 
> 
> >
> >Attacks on anthropic reasoning will work better by choosing a
> >reference class which is indisputably a subset of the full reference
> >class, such as all human beings, and then demonstrating a
> >contradiction. I thought I had come up with such an example with my
> >"Chinese paradox", but it turned out anthropic reasoning was rescued
> >from that by the peculiar distribution of country population sizes
> >that happens to hold in reality. AR has proved remarkably resilient
> >to empirical tests.
> 
> I am still a bit agnostic about its use in the fundamentals, as the
> probabilities, with computationalism, are always relative. It is the
> same in quantum mechanics, where the probabilities are not on
> states, but on relative states: they always have the form |<a|b>|^2,
> the probability of finding b when in the state a.
> But we can extract useful information from the Anthropic principle,
> and even from the more general Turing-thropic one. I am just saying
> that the laws of physics should be a calculus of relative
> probabilities.

But the AP is applied relatively anyway. Indeed, there is evidence
that the absolute measure is not positive real-valued, so the only
meaningful probabilities are relative.
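The relative-probability form mentioned above, |<a|b>|^2, can be
sketched in a few lines. This is only an illustration; the two-level
state vectors below are invented for the example:

```python
# Minimal sketch of a relative (Born-rule style) probability |<a|b>|^2.
# States are lists of complex (or real) amplitudes; the example vectors
# are hypothetical.
import math

def inner(a, b):
    """Inner product <a|b> of two state vectors."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def relative_prob(a, b):
    """Probability of finding b when in the (normalized) state a."""
    return abs(inner(a, b)) ** 2

a = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # equal superposition
b = [1, 0]                                  # basis state
print(relative_prob(a, b))                  # approximately 0.5
```

Note that the probability depends only on the pair (a, b), never on a
state in isolation, which is the sense in which such probabilities are
always relative.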


> 
> 
> PS I have printed your MGA paper, and so read it and comment it
> despite being in a busy period.
> 
> Let me say here, as we are in the good thread, two main points,
> where we might have vocabulary issue, or perhaps disagree on
> something? So you might think about this and be prepared :)
> 
> The first point concerns the relation between counterfactualness and
> modal realism, which you link in a way that makes me a bit uneasy. I
> do believe in some links between them, though they might not
> correspond to yours. Examples will follow later.
> 
> The second point is the one we have already discussed, and concerns
> the definition of supervenience. We both agree on the Stanford
> definition, but I still think you are misusing it when applying it
> to the Alice and Bob classroom situation.
> 
> You agree that
>   C supervenes on B if to change C it is necessary to change B.
> For example, consciousness C supervenes on a brain activity B,
> because to change that consciousness you need to change that brain
> activity.
> 

I address this in the paper. What you go on to say is that
consciousness C (i.e. the consciousness attached to body C, which is
in B) supervenes on B+A, which is correct. But my point is that
consciousness itself (not necessarily attached to a particular body or
person) does not supervene on B+A in this case, as the consciousness
could be a C or a D (where D supervenes on A).

Where this matters is that one cannot say consciousness supervenes on
the universal dovetailer.
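The definition quoted above ("C supervenes on B if to change C it is
necessary to change B") is equivalent to saying that situations which
agree on B must agree on C. A toy check, just to pin the definition
down; the observation records are invented:

```python
# Toy illustration of supervenience: C supervenes on B iff any two
# observations that agree on B also agree on C (no C-difference
# without a B-difference).  The example records are hypothetical.

def supervenes(pairs):
    """pairs: list of (B_state, C_state) observations."""
    seen = {}
    for b, c in pairs:
        if b in seen and seen[b] != c:
            return False  # same B, different C: supervenience fails
        seen[b] = c
    return True

# Same brain state always paired with the same conscious state: holds.
print(supervenes([("b1", "c1"), ("b2", "c2"), ("b1", "c1")]))  # True
# Same brain state paired with two different conscious states: fails.
print(supervenes([("b1", "c1"), ("b1", "c2")]))                # False
```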


-- 

----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      hpco...@hpcoders.com.au
University of New South Wales          http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
         (http://www.hpcoders.com.au/AmoebasSecret.html)
----------------------------------------------------------------------------
