Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 11:14 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning program of some sort, that 
will have to learn in much the same way as an infant human learns. I doubt that we 
will ever be able to create an AI that is essentially an intelligent adult human when 
it is first turned on.


I agree with that, but once an AI is realized it will be possible to copy it.  And if 
it's digital it will be possible to implement it using different hardware.  If it's not 
digital, it will (in principle) be possible to implement it arbitrarily closely with a 
digital device.  And we will have the same question - what is it that makes that hardware 
device conscious?  I don't see any plausible answer except "Running the program it 
instantiates."


But that does not imply that consciousness is itself a computation. 


I didn't draw that conclusion.  That's the conclusion Bruno wants to draw - or close to it 
(he talks about consciousness supervening on an infinite number of computational threads).


There is not some subroutine in your AI that is labelled "this subroutine computes 
consciousness". Consciousness is a function of the whole functioning system, not of some 
particular feature. 


Right.  And the system includes the environment that the consciousness refers 
to.

That is why I think identifying consciousness with computation is in fact adding some 
additional magic to the machine.


I don't see that it's adding anything, magic or otherwise.  As you say, any process can 
be regarded as a computation.




Consciousness arose in nature by a process of natural evolution. Proto-consciousness 
gave some evolutionary advantage, so it grew and developed. 


Yes, it seems like a natural extension of modeling one's surroundings as part of decision 
making, to add yourself into the model in order to imagine yourself making different choices.


Nature did not at some point add the fact that it was a computation, and then it 
suddenly became conscious. Consciousness is a computation only in the trivial sense that 
any physical process can be regarded as a computation, or mapping taking some input to 
some output. There is not some special, magical class of computations that are unique to 
consciousness. Consciousness is an evolved bulk property, not just one specific feature 
of that bulk.


But computation is also not just one specific feature of a process, it's a holistic 
concept.  So I disagree that there is not some special class of programs that implement 
consciousness; specifically those that model the device as part of its own decision 
processes.
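
As an aside, that "special class" can be made concrete with a toy sketch, purely 
illustrative and assuming nothing about how a real brain or AI does it (all names here 
are hypothetical): an agent whose decision loop runs its own hypothetical choices 
through a model that includes itself.

# Toy sketch of an agent that models itself as part of its own decision
# process. Illustrative only; the dynamics and names are made up.

class WorldModel:
    # Crude forward model: predicts the next state given a state and an action.
    def predict(self, state, action):
        return state + action  # placeholder dynamics

class SelfModelingAgent:
    def __init__(self):
        self.world = WorldModel()

    def imagine(self, state, action):
        # The agent puts *itself* into the model: it simulates the outcome of
        # its own hypothetical choice before committing to it.
        return self.world.predict(state, action)

    def decide(self, state, actions, utility):
        # Choose the action whose imagined consequence it values most.
        return max(actions, key=lambda a: utility(self.imagine(state, a)))

agent = SelfModelingAgent()
best = agent.decide(state=0, actions=[-1, 0, 1], utility=lambda s: -abs(s - 1))
print(best)  # 1: the imagined "self" that picks 1 ends up closest to the goal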


Brent



Bruce





Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning program of 
some sort, that will have to learn in much the same way as an infant 
human learns. I doubt that we will ever be able to create an AI that 
is essentially an intelligent adult human when it is first turned on.


I agree with that, but once an AI is realized it will be possible to 
copy it.  And if it's digital it will be possible to implement it using 
different hardware.  If it's not digital, it will (in principle) be 
possible to implement it arbitrarily closely with a digital device.  And we will 
have the same question - what is it that makes that hardware device 
conscious?  I don't see any plausible answer except "Running the program 
it instantiates."


But that does not imply that consciousness is itself a computation. 
There is not some subroutine in your AI that is labelled "this subroutine 
computes consciousness". Consciousness is a function of the whole 
functioning system, not of some particular feature. That is why I think 
identifying consciousness with computation is in fact adding some 
additional magic to the machine.


Consciousness arose in nature by a process of natural evolution. 
Proto-consciousness gave some evolutionary advantage, so it grew and 
developed. Nature did not at some point add the fact that it was a 
computation, and then it suddenly became conscious. Consciousness is a 
computation only in the trivial sense that any physical process can be 
regarded as a computation, or mapping taking some input to some output. 
There is not some special, magical class of computations that are unique 
to consciousness. Consciousness is an evolved bulk property, not just 
one specific feature of that bulk.


Bruce



Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 10:29 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 9:31 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary sequence/recording and say, 
"Well it would be the right program to be conscious in SOME circumstance, therefore 
it's conscious."


I think it goes without saying that it is a recording of brain activity of a 
conscious person -- not a film of your dog chasing a ball. We have to assume a 
modicum of common sense.


Fine.  But then what is it about the recording of the brain activity of a conscious 
person that makes it conscious?  Why is it a property of just that sequence, when in 
general we would attribute consciousness only to an entity that responded 
intelligently/differently to different circumstances?  We wouldn't attribute 
consciousness based on just a short sequence of behavior such as might be evinced by 
one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes him conscious? 
Whatever made the person conscious in the first instance is what makes the recording 
recreate that conscious moment. 


Unless we know what it is about the brain processes that makes it conscious, we can't 
know what it is necessary to record.


I thought the idea was that we recorded everything that was going on.


The point here is that consciousness supervenes on the brain activity. This makes no 
ontological claims -- simply an epistemological claim. This brain activity is 
associated with the phenomenon we call consciousness.


So are you assuming that only a brain can instantiate consciousness?


No. That all functional brains are conscious does not entail that all consciousness 
comes with functioning goo in a skull.



Do you not believe that consciousness is a matter of what information processing the 
brain is doing, but that it requires wetware?  Bruno's idea is that he may solve the 
mind-body problem; but you seem not to see any problem.


No, I don't see any particular problem. In fact, if there is a difference between brain 
activity and consciousness, you are introducing some weird dualist Cartesian theatre -- 
the brain activity is only conscious when it is enlivened by some extra computational 
magic stuff.



 Of course consciousness supervenes on brain activity - but maybe not just any brain 
activity (cf. anesthesia).  The question is whether it can supervene on something else 
and if so, what?


I don't see any problem here -- see above: brain goo activity -> consciousness does not 
mean that consciousness -> brain goo activity.




How we determine whether a person is conscious in the first place is a 
different matter.


But that completely avoids the question of creating a conscious AI program, whether 
it's possible, and whether it's identical with making an intelligent AI program.


I didn't think we were trying to create a conscious AI in this discussion. I think this 
is probably possible, and that the means by which it is done will probably be quite 
different from programs written to control robots. I suspect that the difference might 
well be in the provision of language skills -- so that an internal narrative can be 
developed.


The AI that I envisage will probably be based on a learning program of some sort, that 
will have to learn in much the same way as an infant human learns. I doubt that we will 
ever be able to create an AI that is essentially an intelligent adult human when it is 
first turned on.


I agree with that, but once an AI is realized it will be possible to copy it.  And if it's 
digital it will be possible to implement it using different hardware.  If it's not 
digital, it will (in principle) be possible to implement it arbitrarily closely with a digital 
device.  And we will have the same question - what is it that makes that hardware device 
conscious?  I don't see any plausible answer except "Running the program it instantiates."


Brent



Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 9:31 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary 
sequence/recording and say, "Well it would be the right program to 
be conscious in SOME circumstance, therefore it's conscious."


I think it goes without saying that it is a recording of brain 
activity of a conscious person -- not a film of your dog chasing a 
ball. We have to assume a modicum of common sense.


Fine.  But then what is it about the recording of the brain activity 
of a conscious person that makes it conscious?  Why is it a property 
of just that sequence, when in general we would attribute 
consciousness only to an entity that responded 
intelligently/differently to different circumstances?  We wouldn't 
attribute consciousness based on just a short sequence of behavior 
such as might be evinced by one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes 
him conscious? Whatever made the person conscious in the first 
instance is what makes the recording recreate that conscious moment. 


Unless we know what it is about the brain processes that makes it 
conscious, we can't know what it is necessary to record.


I thought the idea was that we recorded everything that was going on.


The point here is that consciousness supervenes on the brain activity. 
This makes no ontological claims -- simply an epistemological claim. 
This brain activity is associated with the phenomenon we call 
consciousness.


So are you assuming that only a brain can instantiate consciousness?


No. That all functional brains are conscious does not entail that all 
consciousness comes with functioning goo in a skull.



Do you not believe that consciousness is a matter of what information 
processing the brain is doing, but that it requires wetware?  Bruno's 
idea is that he may solve the mind-body problem; but you seem not to see 
any problem.


No, I don't see any particular problem. In fact, if there is a 
difference between brain activity and consciousness, you are introducing 
some weird dualist Cartesian theatre -- the brain activity is only 
conscious when it is enlivened by some extra computational magic stuff.



 Of course consciousness supervenes on brain activity - but 
maybe not just any brain activity (cf. anesthesia).  The question is 
whether it can supervene on something else and if so, what?


I don't see any problem here -- see above: brain goo activity -> 
consciousness does not mean that consciousness -> brain goo activity.



How we determine whether a person is conscious in the first place is a 
different matter.


But that completely avoids the question of creating a conscious AI 
program, whether it's possible, and whether it's identical with making 
an intelligent AI program.


I didn't think we were trying to create a conscious AI in this 
discussion. I think this is probably possible, and that the means by 
which it is done will probably be quite different from programs written 
to control robots. I suspect that the difference might well be in the 
provision of language skills -- so that an internal narrative can be 
developed.


The AI that I envisage will probably be based on a learning program of 
some sort, that will have to learn in much the same way as an infant 
human learns. I doubt that we will ever be able to create an AI that is 
essentially an intelligent adult human when it is first turned on.


Bruce



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-15 Thread meekerdb

On 5/15/2015 10:10 PM, Stathis Papaioannou wrote:





On 13 May 2015, at 11:59 am, Jason Resch wrote:


Chalmers' fading qualia argument shows that if 
replacing a biological neuron with a functionally equivalent silicon neuron changed 
conscious perception, then it would lead to an absurdity, either:
1. qualia fade/change as silicon neurons gradually replace the biological ones, leading 
to a case where the qualia are completely out of touch with the functional state 
of the brain.

or
2. the replacement eventually leads to a sudden and complete loss of all qualia, but 
this suggests that a single neuron, or even a few molecules of that neuron, when 
substituted, somehow completely determines the presence of qualia


His argument is convincing, but what happens when we replace neurons not with 
functionally identical ones, but with neurons that fire according to an RNG? In all but 
one case, the random firings of the neurons will result in completely different 
behaviors, but what about that one (immensely rare) case where the random neuron firings 
(by chance) equal the firing patterns of the neurons they replaced?


In this case, behavior as observed from the outside is identical. Brain patterns and 
activity are similar, but according to computationalism the consciousness is different, 
or perhaps a zombie (if all neurons are replaced with random firing neurons). Presume 
that the activity of neurons in the visual cortex is required for visual qualia, and 
that all neurons in the visual cortex are replaced with random firing neurons, which, by 
chance, mimic the behavior of neurons when viewing an apple.


Is this not an example of fading qualia, or qualia desynchronized from the brain state? 
Would this person feel that they are blind, or lack visual qualia, all the while not 
being able to express their deficiency? I used to think, when Searle argued that this 
exact same thing would occur when biological neurons were substituted with functionally 
identical artificial neurons, that it was completely ridiculous, for there would be no 
room in the functionally equivalent brain to support thoughts such as "help! I can't 
see, I am blind!" for the information content in the brain is identical when the 
neurons are functionally identical.


But then how does this reconcile with fading qualia as the result of substituting 
randomly firing neurons? The computations are not the same, so presumably the 
consciousness is not the same. But also, the information content does not support 
knowing/believing/expressing/thinking something is wrong. If anything, the information 
content of this random brain is much less, but it seems the result is something where 
the qualia are out of sync with the global state of the brain. Can anyone here shed 
some clarity on what they think happens, and how to explain it in the rare case of 
luckily working randomly firing neurons, when only a partial substitution of the 
neurons in a brain is performed?


So Jason, are you still convinced that the random neurons would not be conscious? If you 
are, you are putting the cart before the horse. The fading qualia argument makes the 
case that any process preserving function also preserves consciousness. Any process; 
that computations are one such process is fortuitous.


But that seems unlikely. How much function, for how long, in what circumstances?  For a 
millisecond?  Does "function" include the internal functions such as neurons firing?  
Hormones diffusing?  Or just sending output electrical pulses to muscles?


You write "any process", apparently including random/accidental ones. So how are we to 
decide whether my car is conscious?  It executes processes and functions.
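
As a rough, purely illustrative sketch of the scenario under discussion (toy dynamics 
and hypothetical names only, not a model of real neurons): the random replacement 
ignores its inputs, so even in the lucky case where it emits the same firing sequence, 
nothing in it depends on what the rest of the brain is doing.

# Toy sketch of the thought experiment: a "functional" neuron whose output
# is determined by its input, versus a random one that ignores its input.

import random

def functional_neuron(inputs, threshold=1):
    # Fires whenever its input crosses a threshold - a caricature of function.
    return [1 if x >= threshold else 0 for x in inputs]

def random_neuron(inputs, rng):
    # Ignores its inputs entirely and fires at random.
    return [rng.randint(0, 1) for _ in inputs]

rng = random.Random(0)
inputs = [rng.randint(0, 2) for _ in range(30)]
target = functional_neuron(inputs)

# Chance that one random neuron reproduces a length-n firing sequence: 2**-n.
n = len(target)
print(f"chance of a lucky match over {n} steps: {2.0 ** -n:.3g}")

# The rare "lucky" case the argument turns on: stipulate a random run that
# happens to equal the functional one. Behaviour is then identical from the
# outside, yet nothing in the random neuron responds to its inputs.
lucky_run = list(target)
print(lucky_run == functional_neuron(inputs))  # True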


Brent



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-15 Thread Stathis Papaioannou




> On 13 May 2015, at 11:59 am, Jason Resch wrote:
> 
> Chalmers' fading qualia argument shows that if replacing a biological neuron 
> with a functionally equivalent silicon neuron changed conscious perception, 
> then it would lead to an absurdity, either:
> 1. qualia fade/change as silicon neurons gradually replace the biological 
> ones, leading to a case where the qualia are completely out of touch 
> with the functional state of the brain.
> or
> 2. the replacement eventually leads to a sudden and complete loss of all 
> qualia, but this suggests that a single neuron, or even a few molecules of 
> that neuron, when substituted, somehow completely determines the presence of 
> qualia
> 
> His argument is convincing, but what happens when we replace neurons not with 
> functionally identical ones, but with neurons that fire according to an RNG? 
> In all but one case, the random firings of the neurons will result in 
> completely different behaviors, but what about that one (immensely rare) case 
> where the random neuron firings (by chance) equal the firing patterns of the 
> neurons they replaced?
> 
> In this case, behavior as observed from the outside is identical. Brain 
> patterns and activity are similar, but according to computationalism the 
> consciousness is different, or perhaps a zombie (if all neurons are replaced 
> with random firing neurons). Presume that the activity of neurons in the 
> visual cortex is required for visual qualia, and that all neurons in the 
> visual cortex are replaced with random firing neurons, which, by chance, 
> mimic the behavior of neurons when viewing an apple.
> 
> Is this not an example of fading qualia, or qualia desynchronized from the 
> brain state? Would this person feel that they are blind, or lack visual 
> qualia, all the while not being able to express their deficiency? I used to 
> think, when Searle argued that this exact same thing would occur when 
> biological neurons were substituted with functionally identical artificial 
> neurons, that it was completely ridiculous, for there would be no room in the 
> functionally equivalent brain to support thoughts such as "help! I can't see, 
> I am blind!" for the information content in the brain is identical when the 
> neurons are functionally identical.
> 
> But then how does this reconcile with fading qualia as the result of 
> substituting randomly firing neurons? The computations are not the same, so 
> presumably the consciousness is not the same. But also, the information 
> content does not support knowing/believing/expressing/thinking something is 
> wrong. If anything, the information content of this random brain is much 
> less, but it seems the result is something where the qualia are out of sync 
> with the global state of the brain. Can anyone here shed some clarity on what 
> they think happens, and how to explain it in the rare case of luckily working 
> randomly firing neurons, when only a partial substitution of the neurons in a 
> brain is performed?

So Jason, are you still convinced that the random neurons would not be 
conscious? If you are, you are putting the cart before the horse. The fading 
qualia argument makes the case that any process preserving function also 
preserves consciousness. Any process; that computations are one such process is 
fortuitous.



Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 9:31 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary sequence/recording and say, 
"Well it would be the right program to be conscious in SOME circumstance, therefore 
it's conscious."


I think it goes without saying that it is a recording of brain activity of a conscious 
person -- not a film of your dog chasing a ball. We have to assume a modicum of common 
sense.


Fine.  But then what is it about the recording of the brain activity of a conscious 
person that makes it conscious?  Why is it a property of just that sequence, when in 
general we would attribute consciousness only to an entity that responded 
intelligently/differently to different circumstances?  We wouldn't attribute 
consciousness based on just a short sequence of behavior such as might be evinced by 
one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes him conscious? 
Whatever made the person conscious in the first instance is what makes the recording 
recreate that conscious moment. 


Unless we know what it is about the brain processes that makes it conscious, we can't know 
what it is necessary to record.


The point here is that consciousness supervenes on the brain activity. This makes no 
ontological claims -- simply an epistemological claim. This brain activity is associated 
with the phenomenon we call consciousness.


So are you assuming that only a brain can instantiate consciousness? Do you not believe 
that consciousness is a matter of what information processing the brain is doing, but that 
it requires wetware?  Bruno's idea is that he may solve the mind-body problem; but you 
seem not to see any problem.  Of course consciousness supervenes on brain activity - but 
maybe not just any brain activity (cf. anesthesia).  The question is whether it can 
supervene on something else and if so, what?




How we determine whether a person is conscious in the first place is a 
different matter.


But that completely avoids the question of creating a conscious AI program, whether it's 
possible, and whether it's identical with making an intelligent AI program.


Brent



Bruce

I think Bruno is right that it makes more sense to attribute consciousness, like 
intelligence, to a program that can respond differently and effectively to a wide range 
of inputs.  And, maybe unlike Bruno, I think intelligence and consciousness are only 
possible relative to an environment, one with extent in time as well as space.


Brent






Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary 
sequence/recording and say, "Well it would be the right program to be 
conscious in SOME circumstance, therefore it's conscious."


I think it goes without saying that it is a recording of brain 
activity of a conscious person -- not a film of your dog chasing a 
ball. We have to assume a modicum of common sense.


Fine.  But then what is it about the recording of the brain activity of 
a conscious person that makes it conscious?  Why is it a property of 
just that sequence, when in general we would attribute consciousness 
only to an entity that responded intelligently/differently to different 
circumstances?  We wouldn't attribute consciousness based on just a 
short sequence of behavior such as might be evinced by one of Disney's 
animatronics.


What is it about the brain activity of a conscious person that makes him 
conscious? Whatever made the person conscious in the first instance is 
what makes the recording recreate that conscious moment. The point here 
is that consciousness supervenes on the brain activity. This makes no 
ontological claims -- simply an epistemological claim. This brain 
activity is associated with the phenomenon we call consciousness.


How we determine whether a person is conscious in the first place is a 
different matter.


Bruce

I think Bruno is right that it makes more sense to attribute 
consciousness, like intelligence, to a program that can respond 
differently and effectively to a wide range of inputs.  And, maybe 
unlike Bruno, I think intelligence and consciousness are only possible 
relative to an environment, one with extent in time as well as space.


Brent




Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. 
So Bruno's referring to the branching 'if A then B else C' construction of a 
program is not really a counterfactual at all, since to be a counterfactual A 
*must* be false. So the counterfactual construction is 'A then C', where A happens 
to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  
In the beginning when one is asked to accept a digital prosthesis for a brain part, 
Bruno says almost everyone agrees that consciousness is realized by a certain class 
of computations.  The alternative, as suggested by Searle for example, that 
consciousness depends not only on the activity of the brain but also what the 
physical material is, seems like invoking magic.  So we agree that consciousness 
depends on the program that's running, not the hardware it's running on.  And 
implicit in this is that this program implements intelligence, the ability to 
respond differently to different external signals/environment. Bruno says that's 
what is meant by "computation", but whether that's entailed by the word or not 
seems like a semantic quibble.  Whatever you call it, it's implicit in the idea of 
digital brain prosthesis and in the idea of strong AI that the program 
instantiating consciousness must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or to all 
logically possible inputs.  It only needs to be able to respond to inputs within 
some range as might occur in its environment - whether that environment is a whole 
world or just the other parts of the brain.  So the digital prosthesis needs to do 
this with that same functionality over the same domain as the brain parts it 
replaced.  In which case it is "counterfactually correct". Right?  It's a concept 
relative to a limited domain.


That is probably right. But that just means that the prosthesis is functionally 
equivalent over the required domain. To call this 'counterfactual correctness' seems 
to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right kind of program 
being run - which seems fairly uncontroversial.  And part of being the right kind is 
that it is "counterfactually correct" = "functionally equivalent at the software 
level".  Of course this also means it correctly interfaces physically with the rest 
of the world of which it is conscious.  But Bruno minimizes this by two moves. First, 
he considers the brain as dreaming so it is not interacting via perceptions.  I 
objected to this as missing the essential fact that the processes in the brain refer 
to perceptions and other concepts learned in its waking state and this is what gives 
them meaning.  Second, Bruno notes that one can just expand the digital prosthesis to 
include a digital artificial world, including even a simulation of a whole universe. 
To which my attitude is that this makes the concept of "prosthesis" and "artificial" 
moot.


I don't think you would consider just *any* piece of software running to be conscious 
and I do think you would consider some, sufficiently intelligent behaving software, 
plus perhaps certain I/O, to be conscious.  So what would be the crucial difference 
between these two software packages? I'd say having the ability to produce 
intelligent looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording of the sequence 
of brain states of a conscious person could not be conscious because it was not 
counterfactually correct. This charge has always seemed to me to be misguided, since 
the recording does not pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the recording was made. 
It has never been proposed that the film could be used as a prosthesis for all 
situations. So this argument against the replayed recording recreating the original 
conscious moments must fail -- on the basis of total irrelevance.


But you could turn this around and pick some arbitrary sequence/recording and say, 
"Well it would be the right program to be conscious in SOME circumstance, therefore 
it's conscious."


I think it goes without saying that it is a recording of brain activity of a conscious 
person -- not a film of your dog chasing a ball. We have to assume a modicum of common 
sense.



Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the 
antecedent is false. So Bruno's referring to the branching 'if A 
then B else C' construction of a program is not really a 
counterfactual at all, since to be a counterfactual A *must* be 
false. So the counterfactual construction is 'A then C', where A 
happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never 
made explicit.  In the beginning when one is asked to accept a 
digital prosthesis for a brain part, Bruno says almost everyone 
agrees that consciousness is realized by a certain class of 
computations.  The alternative, as suggested by Searle for example, 
that consciousness depends not only on the activity of the brain 
but also what the physical material is, seems like invoking magic.  
So we agree that consciousness depends on the program that's 
running, not the hardware it's running on.  And implicit in this is 
that this program implements intelligence, the ability to respond 
differently to different external signals/environment.  Bruno says 
that's what is meant by "computation", but whether that's entailed 
by the word or not seems like a semantic quibble.  Whatever you 
call it, it's implicit in the idea of digital brain prosthesis and 
in the idea of strong AI that the program instantiating 
consciousness must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or 
to all logically possible inputs.  It only needs to be able to 
respond to inputs within some range as might occur in its 
environment - whether that environment is a whole world or just the 
other parts of the brain.  So the digital prosthesis needs to do 
this with that same functionality over the same domain as the brain 
parts it replaced.  In which case it is "counterfactually correct". 
Right?  It's a concept relative to a limited domain.


That is probably right. But that just means that the prosthesis is 
functionally equivalent over the required domain. To call this 
'counterfactual correctness' seems to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right 
kind of program being run - which seems fairly uncontroversial.  And 
part of being the right kind is that it is "counterfactually correct" 
= "functionally equivalent at the software level".  Of course this 
also means it correctly interfaces physically with the rest of the 
world of which it is conscious.  But Bruno minimizes this by two 
moves. First, he considers the brain as dreaming so it is not 
interacting via perceptions.  I objected to this as missing the 
essential fact that the processes in the brain refer to perceptions 
and other concepts learned in its waking state and this is what gives 
them meaning.  Second, Bruno notes that one can just expand the 
digital prosthesis to include a digital artificial world, including 
even a simulation of a whole universe. To which my attitude is that 
this makes the concept of "prosthesis" and "artificial" moot.


I don't think you would consider just *any* piece of software running 
to be conscious and I do think you would consider some, sufficiently 
intelligent behaving software, plus perhaps certain I/O, to be 
conscious.  So what would be the crucial difference between these two 
software packages?  I'd say having the ability to produce intelligent 
looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording 
of the sequence of brain states of a conscious person could not be 
conscious because it was not counterfactually correct. This charge has 
always seemed to me to be misguided, since the recording does not 
pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the 
recording was made. It has never been proposed that the film could be 
used as a prosthesis for all situations. So this argument against the 
replayed recording recreating the original conscious moments must fail 
-- on the basis of total irrelevance.


But you could turn this around and pick some arbitrary 
sequence/recording and say, "Well it would be the right program to be 
conscious in SOME circumstance, therefore it's conscious."


I think it goes without saying that it is a recording of brain activity 
of a conscious person -- not a film of your dog chasing a ball. We have 
to assume a modicum of common sense.


Bruce


Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. 
So Bruno's referring to the branching 'if A then B else C' construction of a program 
is not really a counterfactual at all, since to be a counterfactual A *must* be 
false. So the counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  In 
the beginning when one is asked to accept a digital prosthesis for a brain part, 
Bruno says almost everyone agrees that consciousness is realized by a certain class 
of computations.  The alternative, as suggested by Searle for example, that 
consciousness depends not only on the activity of the brain but also what the 
physical material is, seems like invoking magic.  So we agree that consciousness 
depends on the program that's running, not the hardware it's running on.  And 
implicit in this is that this program implements intelligence, the ability to respond 
differently to different external signals/environment.  Bruno says that's what is 
meant by "computation", but whether that's entailed by the word or not seems like a 
semantic quibble.  Whatever you call it, it's implicit in the idea of digital brain 
prosthesis and in the idea of strong AI that the program instantiating consciousness 
must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or to all logically 
possible inputs.  It only needs to be able to respond to inputs within some range as 
might occur in its environment - whether that environment is a whole world or just 
the other parts of the brain.  So the digital prosthesis needs to do this with that 
same functionality over the same domain as the brain parts it replaced.  In which 
case it is "counterfactually correct". Right?  It's a concept relative to a limited 
domain.


That is probably right. But that just means that the prosthesis is functionally 
equivalent over the required domain. To call this 'counterfactual correctness' seems 
to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right kind of program 
being run - which seems fairly uncontroversial.  And part of being the right kind is 
that it is "counterfactually correct" = "functionally equivalent at the software 
level".  Of course this also means it correctly interfaces physically with the rest of 
the world of which it is conscious.  But Bruno minimizes this by two moves. First, he 
considers the brain as dreaming so it is not interacting via perceptions.  I objected 
to this as missing the essential fact that the processes in the brain refer to 
perceptions and other concepts learned in its waking state and this is what gives them 
meaning.  Second, Bruno notes that one can just expand the digital prosthesis to 
include a digital artificial world, including even a simulation of a whole universe. To 
which my attitude is that this makes the concept of "prosthesis" and "artificial" moot.


I don't think you would consider just *any* piece of software running to be conscious 
and I do think you would consider some, sufficiently intelligent behaving software, 
plus perhaps certain I/O, to be conscious.  So what would be the crucial difference 
between these two software packages?  I'd say having the ability to produce intelligent 
looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording of the sequence of 
brain states of a conscious person could not be conscious because it was not 
counterfactually correct. This charge has always seemed to me to be misguided, since the 
recording does not pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the recording was made. It 
has never been proposed that the film could be used as a prosthesis for all situations. 
So this argument against the replayed recording recreating the original conscious 
moments must fail -- on the basis of total irrelevance.


But you could turn this around and pick some arbitrary sequence/recording and say, "Well 
it would be the right program to be conscious in SOME circumstance, therefore it's conscious."


Brent


Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the 
antecedent is false. So Bruno's referring to the branching 'if A 
then B else C' construction of a program is not really a 
counterfactual at all, since to be a counterfactual A *must* be 
false. So the counterfactual construction is 'A then C', where A 
happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never 
made explicit.  In the beginning when one is asked to accept a 
digital prosthesis for a brain part, Bruno says almost everyone 
agrees that consciousness is realized by a certain class of 
computations.  The alternative, as suggested by Searle for example, 
that consciousness depends not only on the activity of the brain but 
also what the physical material is, seems like invoking magic.  So we 
agree that consciousness depends on the program that's running, not 
the hardware it's running on.  And implicit in this is that this 
program implements intelligence, the ability to respond differently 
to different external signals/environment.  Bruno says that's what 
is meant by "computation", but whether that's entailed by the word or 
not seems like a semantic quibble.  Whatever you call it, it's 
implicit in the idea of digital brain prosthesis and in the idea of 
strong AI that the program instantiating consciousness must be able 
to respond differently to different inputs.


But it doesn't have to respond differently to every different input or 
to all logically possible inputs.  It only needs to be able to 
respond to inputs within some range as might occur in its environment 
- whether that environment is a whole world or just the other parts 
of the brain.  So the digital prosthesis needs to do this with that 
same functionality over the same domain as the brain parts it 
replaced.  In which case it is "counterfactually correct". Right?  
It's a concept relative to a limited domain.


That is probably right. But that just means that the prosthesis is 
functionally equivalent over the required domain. To call this 
'counterfactual correctness' seems to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right 
kind of program being run - which seems fairly uncontroversial.  And 
part of being the right kind is that it is "counterfactually correct" = 
"functionally equivalent at the software level".  Of course this also 
means it correctly interfaces physically with the rest of the world of 
which it is conscious.  But Bruno minimizes this by two moves. First, he 
considers the brain as dreaming so it is not interacting via 
perceptions.  I objected to this as missing the essential fact that the 
processes in the brain refer to perceptions and other concepts learned 
in its waking state and this is what gives them meaning.  Second, Bruno 
notes that one can just expand the digital prosthesis to include a 
digital artificial world, including even a simulation of a whole 
universe. To which my attitude is that this makes the concept of 
"prosthesis" and "artificial" moot.


I don't think you would consider just *any* piece of software running to 
be conscious and I do think you would consider some, sufficiently 
intelligent behaving software, plus perhaps certain I/O, to be 
conscious.  So what would be the crucial difference between these two 
software packages?  I'd say having the ability to produce intelligent 
looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording of 
the sequence of brain states of a conscious person could not be 
conscious because it was not counterfactually correct. This charge has 
always seemed to me to be misguided, since the recording does not 
pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the 
recording was made. It has never been proposed that the film could be 
used as a prosthesis for all situations. So this argument against the 
replayed recording recreating the original conscious moments must fail 
-- on the basis of total irrelevance.


Bruce



Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. So 
Bruno's referring to the branching 'if A then B else C' construction of a program is 
not really a counterfactual at all, since to be a counterfactual A *must* be false. So 
the counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  In 
the beginning when one is asked to accept a digital prosthesis for a brain part, Bruno 
says almost everyone agrees that consciousness is realized by a certain class of 
computations.  The alternative, as suggested by Searle for example, that consciousness 
depends not only on the activity of the brain but also what the physical material is, 
seems like invoking magic.  So we agree that consciousness depends on the program 
that's running, not the hardware it's running on.  And implicit in this is that this 
program implements intelligence, the ability to respond differently to different 
external signals/environment.  Bruno says that's what is meant by "computation", but 
whether that's entailed by the word or not seems like a semantic quibble.  Whatever you 
call it, it's implicit in the idea of digital brain prosthesis and in the idea of 
strong AI that the program instantiating consciousness must be able to respond 
differently to different inputs.


But it doesn't have to respond differently to every different input or to all logically 
possible inputs.  It only needs to be able to respond to inputs within some range as 
might occur in its environment - whether that environment is a whole world or just the 
other parts of the brain.  So the digital prosthesis needs to do this with that same 
functionality over the same domain as the brain parts it replaced.  In which case it is 
"counterfactually correct". Right?  It's a concept relative to a limited domain.


That is probably right. But that just means that the prosthesis is functionally 
equivalent over the required domain. To call this 'counterfactual correctness' seems to 
me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right kind of program 
being run - which seems fairly uncontroversial.  And part of being the right kind is that 
it is "counterfactually correct" = "functionally equivalent at the software level".  Of 
course this also means it correctly interfaces physically with the rest of the world of 
which it is conscious.  But Bruno minimizes this by two moves. First, he considers the 
brain as dreaming so it is not interacting via perceptions.  I objected to this as missing 
the essential fact that the processes in the brain refer to perceptions and other concepts 
learned in its waking state and this is what gives them meaning.  Second, Bruno notes that 
one can just expand the digital prosthesis to include a digital artificial world, 
including even a simulation of a whole universe. To which my attitude is that this makes 
the concept of "prosthesis" and "artificial" moot.


I don't think you would consider just *any* piece of software running to be conscious and 
I do think you would consider some, sufficiently intelligent behaving software, plus 
perhaps certain I/O, to be conscious.  So what would be the crucial difference between 
these two software packages?  I'd say having the ability to produce intelligent looking 
responses to a large range of inputs would be a minimum.
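
For concreteness, here is a minimal sketch of the modal difference that "counterfactual 
correctness" is being used to name, with illustrative names only (it is not a claim 
about how consciousness works): a program that responds over a whole domain of inputs 
versus a recording that replays one fixed history.

# Toy contrast: a responsive program versus a replay of a fixed recording.

def responsive_program(stimulus):
    # Responds differently (if crudely) across a range of inputs.
    return {"apple": "reach", "loud noise": "startle"}.get(stimulus, "ignore")

class Recording:
    # A fixed sequence of outputs captured from one particular run.
    def __init__(self, frames):
        self.frames = list(frames)

    def replay(self, stimuli):
        # The stimuli are ignored; playback is the same whatever happens.
        return self.frames

history = ["apple", "loud noise", "apple"]
tape = Recording(responsive_program(s) for s in history)

# On the history it was made from, the replay matches the program exactly...
print(tape.replay(history) == [responsive_program(s) for s in history])  # True
# ...but on inputs that did not in fact occur, it does not.
print(tape.replay(["fire alarm"] * 3) == [responsive_program("fire alarm")] * 3)  # False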


Brent



Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the 
antecedent is false. So Bruno's referring to the branching 'if A then 
B else C' construction of a program is not really a counterfactual at 
all, since to be a counterfactual A *must* be false. So the 
counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made 
explicit.  In the beginning when one is asked to accept a digital 
prosthesis for a brain part, Bruno says almost everyone agrees that 
consciousness is realized by a certain class of computations.  The 
alternative, as suggested by Searle for example, that consciousness 
depends not only on the activity of the brain but also what the physical 
material is, seems like invoking magic.  So we agree that consciousness 
depends on the program that's running, not the hardware it's running 
on.  And implicit in this is that this program implements intelligence, 
the ability to respond differently to different external 
signals/environment.  Bruno says that's what is meant by "computation", 
but whether that's entailed by the word or not seems like a semantic 
quibble.  Whatever you call it, it's implicit in the idea of digital 
brain prosthesis and in the idea of strong AI that the program 
instantiating consciousness must be able to respond differently to 
different inputs.


But it doesn't have to respond differently to every different input or to 
all logically possible inputs.  It only needs to be able to respond to 
inputs within some range as might occur in its environment - whether 
that environment is a whole world or just the other parts of the brain.  
So the digital prosthesis needs to do this with that same functionality 
over the same domain as the brain parts it replaced.  In which case it 
is "counterfactually correct". Right?  It's a concept relative to a 
limited domain.


That is probably right. But that just means that the prosthesis is 
functionally equivalent over the required domain. To call this 
'counterfactual correctness' seems to me to be just confused.


Bruce







Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread spudboy100 via Everything List
I trust Lomborg far more than I trust academics who hold fanatically, not to 
reason, but to this red-green ideology of theirs. On addressing the problem, 
even Brent and I are seemingly on the same side in that we both want a massive 
switch to solar as the best solution. But, alas, I am but a serf with zero 
influence. I would make billions and billions available for energy storage 
engineering, and not a cent to pay crony greens' salaries, so they can donate 
to the mafia inc, elitist parties in the US.

Sent from AOL Mobile Mail


-Original Message-
From: Telmo Menezes 
To: everything-list 
Sent: Fri, May 15, 2015 05:52 PM
Subject: Re: Michael Shermer becomes sceptical about scepticism!





 
Thanks.

The story, as told by him, sounds quite appalling.

I googled him to see other sides of the story and found this:

http://www.theguardian.com/environment/planet-oz/2015/may/15/how-conservatives-lost-the-plot-over-the-rejection-of-bjorn-lomborg

Which is just political drivel... Conservatives blah blah blah, shark jump, 
losing the plot yada yada. These people really like their clichés.

However, there's some evidence of cherrypicking on the part of Lomborg:

http://www.theguardian.com/environment/planet-oz/2015/apr/23/australia-paying-4-million-for-bjrn-lomborgs-flawed-methods-that-downgrade-climate-change

On one hand, Lomborg looks a bit shady to me. On the other, the increasing 
tendency for suppression of dissent in academia is quite troubling (not just on 
climate issues).

Oh well. I guess there's nothing good in this world that politics won't turn 
into shit.

On Fri, May 15, 2015 at 8:58 PM, spudboy100 via Everything List 
<everything-list@googlegroups.com> wrote:

here is the article, Telmo, linked by Lomborg's own site. Good reading.

http://www.lomborg.com/news/the-honor-of-being-mugged-by-climate-censors

Sent from AOL Mobile Mail

-Original Message-
From: spudboy100 via Everything List <everything-list@googlegroups.com>
To: everything-list <everything-list@googlegroups.com>
Sent: Fri, May 15, 2015 02:53 PM
Subject: Re: Michael Shermer becomes sceptical about scepticism!

Crap Telmo, because it's WSJ, it's a paywall for cut and pastes. Basically 
Lomborg got dogged by some Aussie academics, because he went against their holy 
conclusions. I am an admirer of John Kennedy, even though he made nearly lethal 
mistakes for the world in foreign policy. A quote: "don't get mad, get even." I 
hope Lomborg does. He might just have with this unseen WSJ article. I will send 
a site link which will show the full article.

Sent from AOL Mobile Mail

-Original Message-
From: Telmo Menezes <te...@telmomenezes.com>
To: everything-list <everything-list@googlegroups.com>
Sent: Fri, May 15, 2015 11:22 AM
Subject: Re: Michael Shermer becomes sceptical about scepticism!

Hi!

Most of the article is behind a paywall for me...

Cheers
Telmo.

On Fri, May 15, 2015 at 3:43 PM, spudboy100 via Everything List 
<everything-list@googlegroups.com> wrote:

Hello from the US. Here is an article by the WSJ, by Bjorn Lomborg, speaking 
to the climate cult ideology that pervades academia. Like Lomborg, I have to 
believe in GW, but it ain't climate catastrophe, as the red-greens now choose 
to label it. Like Lomborg, I believe there are things we can do to mitigate it. 
In any case, here is a link to Lomborg's article (hoping it works, sans fee).

http://www.wsj.com/articles/the-honor-of-being-mugged-by-climate-censors-1431558936
 

 

Re: Theories that explain everything explain nothing

2015-05-15 Thread Colin Hales
You've done it again.
There could be 1000 mathematical abstractions (not simple) that, as a
depiction of reality, may reveal a process called scientific observation.

You think that abstraction is an instance of scientific observation.

I say that this entire comp argument is about confusing the two things.

All you ever do is write ^*^$^%#$12324op][][][][ descriptions and endlessly
discuss that confusion.

0) Reality.
1) Descriptions of how it appears (observations)
2) Descriptions of what it is made of (including how observation works).

What you endlessly reiterate is one of a 1000 item 2). In order to talk to
you I have to make the same mistakes as you and I won't do that.

Look. I am so over this. Just forget I ever said anything.





On Sat, May 16, 2015 at 3:41 AM, Bruno Marchal  wrote:

>
> On 15 May 2015, at 00:44, colin hales wrote:
>
> Your suggestion presupposes  a limit to reach that we don't necessarily
> have to assume.
>
> Theories of  everything but the scientific observer  (what tends to be
> called a TOE historically)
>
>
> In the aristotelian picture. In the beginning it meant only unification of
> the known force and objects.
> In this list, we take into account consciousness and the first person
> points of view.
>
>
>
> and
> Theories of everything including the scientific observer.( what I am
> suggesting as a real TOE)
>
>
> Like Comp and Everett already.
> Arithmetic or any universal system (in the CT sense) allows that, and much
> more, by the closure for the diagonalization.
>
>
>
>
>  Can be different categories of scientific account.
>
>
> The arithmetical hypostases. The same sigma_1 reality, viewed by 8 points
> of viewed, multiplied by aleph_0, if not aleph_1 in the first-person
> delay-amnesia limit.
>
>
> Find the way that this can be the case and you have solved the problem.
>
>
> You have begun, only. Comp makes it mathematical, and it is not simple.
>
>
>
> Confuse the two and become part of the problem. Fail to realize there are
> two theory categories and you are also part of the problem.
>
> This dual-'theory' state is a Comp-agnostic position and forms a place
> from which arguments about COMP get  clarity. The magic of COMP being true
> occurs when the two kinds are identities. Under what conditions might that
> be?
>
>
> Hmm... You seem to intuit, or understood the key things:
>
> - that the universal numbers can prove p -> []p, for the p in the sigma_1
> reality,
>
> - but that they cannot yet identify p and []p, because the other side: []p
> -> p is still what they can only pray for, or work for.
>
>
>
>
> Rhetorical question intended to provoke a bit of thought.
>
>
> The first post which I can understand!
>
> G1 proves p -> []p
> G1* \ G1 proves []p -> p.
>
> Best,
>
> Bruno
>
>
>
>
> Cheers
> Colin
>
>
>
>
> --
> From: John Mikes 
> Sent: ‎15/‎05/‎2015 7:32 AM
> To: everything-list@googlegroups.com
> Subject: Re: Theories that explain everything explain nothing
>
> Colin: wouldn't it fit to call "TOE"  -  Theory of Everything WE KNOW
> ABOUT?  or: Everything in our reach?
> I mentioned my agnostic views.
> Greetings
> John Mikes
>
> On Wed, May 13, 2015 at 8:40 PM, colin hales  wrote:
>
>> 
>>
>> Perhaps better
>>
>> All posited (so far) scientific TOE are actually wrongly named. They
>> would be correctly named:
>>
>> "Theories predicting how the universe appears to an assumed scientific
>> observer inside it"
>>
>> Or maybe
>>
>> "Theories of everything except the scientific observer"
>>
>> By Scientific observer I mean consciousness... What scientific
>> observation uses/is.
>>
>> From here you might ask yourself what a scientist would be doing if they
>> _were_ explaining the scientific observer (consciousness). For whatever
>> that is, it's not a member of the set of  the kind of science outcomes in
>> which these so-called TOE sit, smugly claiming everything while actually
>> failing without realizing.
>>
>> Cheers
>> Colin
>> --
>> From: Bruce Kellett 
>> Sent: ‎14/‎05/‎2015 9:15 AM
>> To: everything-list@googlegroups.com
>> Subject: Theories that explain everything explain nothing
>>
>> As an aside to recent discussions, it is interesting to point out that
>> physics has some of the problems associated with over-confidence in
>> ideas coming from pure intuition too.
>>
>> http://aeon.co/magazine/science/has-cosmology-run-into-a-creative-crisis
>>
>> This article by Ross Anderson in Aeon Magazine surveys some of the
>> recent history of press announcements by leading cosmologists. Believing
>> too strongly in your own pet theory can be a dangerous pastime.
>>
>> Bruce
>>

Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread meekerdb

On 5/15/2015 2:37 PM, Telmo Menezes wrote:



On Fri, May 15, 2015 at 11:27 PM, meekerdb wrote:


On 5/15/2015 2:38 AM, Telmo Menezes wrote:



On Thu, May 14, 2015 at 3:07 AM, LizR <lizj...@gmail.com> wrote:

On 13 May 2015 at 21:30, Telmo Menezes <te...@telmomenezes.com> wrote:


Clouds, especially high clouds have some effect. They reflect 
visible
bands back to space and they also absorb and reemit IR.  Low 
clouds
tend to increase heat load because they reflect in the day, but 
they
insulate day and night.  It's not magic, it's just calculation.


Of course, I am not suggesting it's anything else.
My question is about complex interactions between these several 
phenomena.
Does a change in the composition of the atmosphere affect cloud 
formation?
In what ways? Does temperature?

Is the idea that we shouldn't do anything because we haven't got a 
perfect
model of the atmosphere?


Is it unreasonable to ask for evidence and serious risk analysis before 
messing
with the energy supply chain that keeps 7 billion people alive?


How is replacing one energy supply with a different energy supply 
endangering those
people.


If the new energy supply was more efficient than fossil, then you would not need 
incentives or regulation. Fossil would not be able to compete. Since this is not the 
case, I have to assume that the new energy supply is less efficient, which means that 
there will be less energy resources.


That depends on whether "efficiency" counts the harm done by global warming.  As it is now 
that is not paid by the emitters of CO2.




The loophole in my argument might be fossil fuel subsidising, which sounds like an 
appallingly bad idea. I am 100% in favor of stopping that.



I assume that isn't the point - after all, if we followed that logic 
we'd still
be living in caves.


If progress depended on planet-wide collective action and consensus, we 
would
surely still be living in caves. We are not living in caves because people 
look for
realistic solutions to the problems they are faced with. There is no 
planetary
"we", and I think that's a good thing. In some dystopian scenarios, 
survival may
not be worth it.

But then what is the point?

The point is to do risk analysis and treat the problem as a trade-off, 
because
cutting CO2 emissions is far from not having potentially catastrophic 
consequences too.


Nobody is relying on having CO2 to breathe.  So replacing the energy has no
downsides except economic ones.


Which is the same to say that it has no downsides except for human suffering. The 
economy is just resource allocation.


And the climate is one of those resources.

Brent



Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread Telmo Menezes
Thanks.

The story, as told by him, sounds quite appalling.

I googled him to see other sides of the story and found this:
http://www.theguardian.com/environment/planet-oz/2015/may/15/how-conservatives-lost-the-plot-over-the-rejection-of-bjorn-lomborg

Which is just political drivel... Conservatives blah blah blah, shark jump,
losing the plot yada yada. These people really like their clichés.

However, there's some evidence of cherrypicking on the part of Lomborg:
http://www.theguardian.com/environment/planet-oz/2015/apr/23/australia-paying-4-million-for-bjrn-lomborgs-flawed-methods-that-downgrade-climate-change

On one hand, Lomborg looks a bit shady to me. On the other, the increasing
tendency for suppression of dissent in academia is quite troubling (not
just on climate issues).

Oh well. I guess there's nothing good in this world that politics won't
turn into shit.


On Fri, May 15, 2015 at 8:58 PM, spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> here is the article, Telmo, linked by Lomborg's own site. Good reading.
>
> http://www.lomborg.com/news/the-honor-of-being-mugged-by-climate-censors
>
> Sent from AOL Mobile Mail
>
>
> -Original Message-
> From: spudboy100 via Everything List 
> To: everything-list 
> Sent: Fri, May 15, 2015 02:53 PM
> Subject: Re: Michael Shermer becomes sceptical about scepticism!
>
>
>  Crap Telmo, because it's WSJ, it's a paywall for cut and pastes. Basically
> Lomborg got dogged by some Aussie academics, because he went
> against their holy conclusions. I am an admirer of John Kennedy, even
> though he made nearly lethal mistakes for the world, in foreign policy. A
> quote, "don't get mad, get even." I hope Lomborg does. He might just have
> with this unseen WSJ article. I will send a site link which will show the
> full article.
>
> Sent from AOL Mobile Mail
>
>
> -Original Message-
> From: Telmo Menezes 
> To: everything-list 
> Sent: Fri, May 15, 2015 11:22 AM
> Subject: Re: Michael Shermer becomes sceptical about scepticism!
>
>
>  Hi!
> Most of the article is behind a paywall for me...
>
>  Cheers
>  Telmo.
>
>  On Fri, May 15, 2015 at 3:43 PM, spudboy100 via Everything List <
> everything-list@googlegroups.com> wrote:
>
> Hello from the US. Here is an article by the WSJ, by Bjorn Lomborg,
> speaking to the climate cult ideology, that pervades academia. Like
> Lomborg, I have to believe in GW, but it ain't climate catastrophe, as the
> red-greens now choose to label it. Like Lomborg, I believe there are things
> we can do to mitigate it. In any case, here is a link to Lomborg's article
> (hoping it works, sans fee).
>
>
> http://www.wsj.com/articles/the-honor-of-being-mugged-by-climate-censors-1431558936
>
> Sent from AOL Mobile Mail
>
>
> -Original Message-
> From: Telmo Menezes 
> To: everything-list 
> Sent: Fri, May 15, 2015 07:52 AM
> Subject: Re: Michael Shermer becomes sceptical about scepticism!
>
>
>
>
>  On Fri, May 15, 2015 at 12:21 PM, LizR  wrote:
>
>   On 15 May 2015 at 21:38, Telmo Menezes  wrote:
>
>   On Thu, May 14, 2015 at 3:07 AM, LizR  wrote:
>
>   On 13 May 2015 at 21:30, Telmo Menezes  wrote:
>
>
> Clouds, especially high clouds have some effect.  They reflect visible
> bands back to space and they also absorb and reemit IR.  Low clouds tend to
> increase heat load because they reflect in the day, but they insulate day
> and night.  It's not magic, it's just calculation.
>
>
>  Of course, I am not suggesting it's anything else.
>  My question is about complex interactions between these several
> phenomena. Does a change in the composition of the atmosphere affect cloud
> formation? In what ways? Does temperature?
>
>Is the idea that we shouldn't do anything because we haven't got a
> perfect model of the atmosphere?
>
>
>  Is it unreasonable to ask for evidence and serious risk analysis before
> messing with the energy supply chain that keeps 7 billion people alive?
>
>
>  Of course it isn't. Such risk analysis has been done, and it appears
> around 97% of the most competent people available in the field think the
> risks caused by the rising CO2 levels are more dangerous than the risks of
> doing nothing about them.
>
>
>  Ok, I wouldn't be surprised if you are right. I only claim ignorance,
> and ask questions when something looks fishy. I also care about science
> more than anything else, so arguments around what "97% of the most
> competent people" think mean nothing to me. For me, that is politician
> speak. Consensus are easy to manufacture, even in science. I care about
> correct predictions and a good understanding of the mechanisms. What makes
> these people so competent? Have they created models that led to correct
> predictions?
>
>  This is all just intellectual curiosity anyway. My opinion on the matter
> has no importance whatsoever. I don't even vote.
>
>
>
>

Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread Telmo Menezes
On Fri, May 15, 2015 at 11:27 PM, meekerdb  wrote:

>  On 5/15/2015 2:38 AM, Telmo Menezes wrote:
>
>
>
> On Thu, May 14, 2015 at 3:07 AM, LizR  wrote:
>
>>  On 13 May 2015 at 21:30, Telmo Menezes  wrote:
>>
>>>
  Clouds, especially high clouds have some effect.  They reflect visible
 bands back to space and they also absorb and reemit IR.  Low clouds tend to
 increase heat load because they reflect in the day, but they insulate day
 and night.  It's not magic, it's just calculation.

>>>
>>>  Of course, I am not suggesting it's anything else.
>>> My question is about complex interactions between these several
>>> phenomena. Does a change in the composition of the atmosphere affect cloud
>>> formation? In what ways? Does temperature?
>>>
>>>Is the idea that we shouldn't do anything because we haven't got a
>> perfect model of the atmosphere?
>>
>
>  Is it unreasonable to ask for evidence and serious risk analysis before
> messing with the energy supply chain that keeps 7 billion people alive?
>
>
> How is replacing one energy supply with a different energy supply
> endangering those people.
>

If the new energy supply was more efficient than fossil, then you would not
need incentives or regulation. Fossil would not be able to compete. Since
this is not the case, I have to assume that the new energy supply is less
efficient, which means that there will be less energy resources.

The loophole in my argument might be fossil fuel subsidising, which sounds
like an appallingly bad idea. I am 100% in favor of stopping that.

> I assume that isn't the point - after all, if we followed that logic we'd
>> still be living in caves.
>>
>
>  If progress depended on planet-wide collective action and consensus, we
> would surely still be living in caves. We are not living in caves because
> people look for realistic solutions to the problems they are faced with.
> There is no planetary "we", and I think that's a good thing. In some
> dystopian scenarios, survival may not be worth it.
>
>
>>   But then what is the point?
>>
>
> The point is to do risk analysis and treat the problem as a trade-off,
> because cutting CO2 emissions is far from not having potentially
> catastrophic consequences too.
>
>
> Nobody is relying on having CO2 to breathe.  So replacing the energy has
> no downsides except economic ones.
>

Which is the same to say that it has no downsides except for human
suffering. The economy is just resource allocation.


>
> Brent
>
>



Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread meekerdb

On 5/15/2015 2:38 AM, Telmo Menezes wrote:



On Thu, May 14, 2015 at 3:07 AM, LizR <lizj...@gmail.com> wrote:

On 13 May 2015 at 21:30, Telmo Menezes <te...@telmomenezes.com> wrote:


Clouds, especially high clouds have some effect.  They reflect 
visible bands
back to space and they also absorb and reemit IR. Low clouds tend to
increase heat load because they reflect in the day, but they 
insulate day
and night.  It's not magic, it's just calculation.


Of course, I am not suggesting it's anything else.
My question is about complex interactions between these several 
phenomena. Does
a change in the composition of the atmosphere affect cloud formation? 
In what
ways? Does temperature?

Is the idea that we shouldn't do anything because we haven't got a perfect 
model of
the atmosphere?


Is it unreasonable to ask for evidence and serious risk analysis before messing with the 
energy supply chain that keeps 7 billion people alive?


How is replacing one energy supply with a different energy supply endangering 
those people.


I assume that isn't the point - after all, if we followed that logic we'd 
still be
living in caves.


If progress depended on planet-wide collective action and consensus, we would surely 
still be living in caves. We are not living in caves because people look for realistic 
solutions to the problems they are faced with. There is no planetary "we", and I think 
that's a good thing. In some dystopian scenarios, survival may not be worth it.


But then what is the point?

The point is to do risk analysis and treat the problem as a trade-off, because cutting 
CO2 emissions is far from not having potentially catastrophic consequences too.


Nobody is relying on having CO2 to breathe.  So replacing the energy has no downsides 
except economic ones.


Brent



Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb <meeke...@verizon.net> wrote:


I'm trying to understand what "counterfactual correctness" means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. So 
Bruno's referring to the branching 'if A then B else C' construction of a program is not 
really a counterfactual at all, since to be a counterfactual A *must* be false. So the 
counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  In the 
beginning when one is asked to accept a digital prosthesis for a brain part, Bruno says 
almost everyone agrees that consciousness is realized by a certain class of computations.  
The alternative, as suggested by Searle for example, that consciousness depends not only 
of the activity of the brain but also what the physical material is, seems like invoking 
magic.  So we agree that consciousness depends on the program that's running, not the 
hardware it's running on.  And implicit in this is that this program implements 
intelligence, the ability to respond differently to different external 
signals/environment.  Bruno says that's what is meant by "computation", but whether that's 
entailed by the word or not seems like a semantic quibble.  Whatever you call it, it's 
implicit in the idea of digital brain prosthesis and in the idea of strong AI that the 
program instantiating consciousness must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or to all logically 
possible inputs.  It only needs to be able to respond to inputs within some range as might 
occur in its environment - whether that environment is a whole world or just the other 
parts of the brain.  So the digital prosthesis needs to do this with that same 
functionality over the same domain as the brain parts it replaced.  In which case it is 
"counterfactually correct". Right?  It's a concept relative to a limited domain.


Brent



Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread John Mikes
JohnC:
forget about "statistical"!
Statistics is 'counting WITHIN arbitrary (and that can mean: presently
knowable) limitations. Exceed those and your statistics is hogwash.

Try the infinite (I know you cannot) and you find 'equal' %-s for
everything.
Random, not random. Chaotic - ordered. Entropic or not. Emergent, or fully
already known.

Such thinking may not be too flattering to our ego, but that is what we are:
stupid ingredients in a fraction of an unfathomable Everything.
I call it (partially) agnostic - a nicer word for ignorant.

JohnM

On Fri, May 15, 2015 at 11:47 AM, John Clark  wrote:

> On Thu, May 14, 2015  John Mikes  wrote:
>
> > How come we observe physical laws exempt from random occurrences?
>
>
> That's easy if the physical laws are statistical. For example a law might
> say that under circumstance X outcome Y will happen 80% of the time and
> outcome Z 20%. And even if the outcome is produced by completely random
> variables (events without causes) they will still tend to form a
> predictable bell shaped curve, and the more outcomes there are the closer
> the graph will resemble that precisely defined bell shaped curve.
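(As a hedged aside, the point about the bell curve is easy to see numerically; the 80/20 law, the 100 events per trial and the 10,000 trials below are arbitrary choices for the illustration, written as a small Python sketch.)

import random
from collections import Counter

random.seed(0)

def trial(n=100):
    # count how many of n independent random events come out Y,
    # where each event is Y with probability 0.8 and Z otherwise
    return sum(1 for _ in range(n) if random.random() < 0.8)

counts = Counter(trial() for _ in range(10_000))

# Crude text histogram: the counts pile up into a bell shape centred
# near 80; with more trials the shape gets smoother and more predictable.
for k in range(70, 91):
    print(f"{k:3d} {'#' * (counts[k] // 20)}")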
>
>  John K Clark
>
>



Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread John Clark
On Fri, May 15, 2015  Bruno Marchal  wrote:
>
> >>>   Why would Turing machine obeys the laws of physics?
>>
>
> >> Because a Turing Machine like all machines involves change.
>
> > The change are injection in N.
>

So what? An injection is a function and a function is a machine that MOVES
from one element in a set to another and ASSOCIATES it with elements of
another set. All these things involve change.


> >> and determine if it is white or black, and a clockwork must determine
>> if it should change the color of that cell or not, and a clockwork must
>> determine if it should move the tape one space to the right or one space to
>> the left or just stop. And nobody knows how to make clockwork without using
>> matter that obeys the laws of physics. Nobody, absolutely nobody.
>
>
> > Oh, so you do assume primitive matter.
>

I assume nothing, I know for a fact that NOBODY knows how to make a
clockwork without using matter and the laws of physics, maybe someday
somebody will figure it out but as of today NOBODY has the slightest idea
of how to do it.


> >>> You can implement Turing machine in Lambda calculus
>
>
> >> No you can not!
>


> Then not only Turing and Church were wrong, as they will both proves this.
>

Don't tell me show me. I think you're talking Bullshit but it would be easy
to prove me wrong, just make a Lambda Machine that makes no use of matter
that obeys the laws of physics and make a calculation, any calculation with
it. Do that and you will have not only won the argument but I will
personally buy your airline ticket to Stockholm for your Nobel Prize
ceremony.
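(For what it is worth, the construction Church and Turing used can at least be written down. Below is a hedged sketch using Python lambdas purely as notation for the lambda terms: it computes 2+2 by function application alone. It does not settle the dispute, since the demonstration itself still runs on physical hardware, which is exactly the point at issue here.)

# Church numerals: the standard lambda-calculus encoding of arithmetic.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

two  = succ(succ(zero))
four = add(two)(two)

# Convert a Church numeral back to an ordinary int so we can inspect it.
to_int = lambda n: n(lambda k: k + 1)(0)

assert to_int(four) == 4   # 2 + 2 = 4, obtained by function application alone
print(to_int(four))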


> > You confuse [...]
>

Enough with the "you confuse" crap! Every post of yours contains a "you
confuse", put a little variety into your phrases.

> if you agree that 2+2=4, and if you use the standard definition, then you
> can prove that a tiny part of the standard model of arithmetic run all
> computations.
>

The word " run" involves changes in physical quantities  like position and
time. And what sort of thing are you running these calculations on?

>> Nothing can divide all arithmetical truth from all arithmetical
>> falsehoods. Nothing can do it including arithmetic.
>
>
> > Sorry Arithmetical truth does it, trivially.
>

What a steaming pile of Bullshit.

 > You just dismiss a whole branch of math.
>

I dismiss junk science.


> > you are a comp1 believer, and "comp" is comp1. Then it implies comp2
>

Oh for christ sake! As if "comp" wasn't bad enough now we have "comp1" and
"comp2" and I'm not even going to ask what the hell this new form of
babytalk is supposed to mean, assuming it means anything at all.


> > you invoke a God for which we have no evidence.
>

Science is about evidence and what we have ZERO evidence of is anybody
making one single calculation without using matter that obeys the laws of
physics. Let me repeat that, we have zero evidence, zip nada zilch goose
egg.


> > Amen (you are a good Aristotelian Theologian).
>

Be creative think of a new insult, the one about me being secretly
religious and being an admirer of Aristotle is getting old.


> > the set of true sentences is well defined,
>

The set of all true statements is contained within the set of all
statements, the trick is to separate the true from the false. I agree that
the set of all true statements and no false statements has a definition
that is not gibberish, but we know that nothing can produce such a set. The
integer that is equal to 2+2 but is not equal to 4 is also well defined,
but nothing can produce that integer either.

>> in general there is no way to determine which statements it is possible
>> to prove to be right or wrong and which statements you can not.
>
>
> > Gödel and Post provided a constructive way to do exactly this.
>

So you think Turing was wrong when he claimed that he proved the
Entscheidungsproblem had no general solution??

  John K Clark


>



RE: Climate scientists find warming in higher atmosphere: Elusive tropospheric hot spot located

2015-05-15 Thread 'Chris de Morsella' via Everything List

http://www.sciencedaily.com/releases/2015/05/150514095741.htm

Climate scientists find warming in higher atmosphere: Elusive tropospheric hot 
spot located
Date: May 14, 2015
Source: University of New South Wales
Summary: Updated data 
and better analysis methods have found clear indications of warming in the 
upper troposphere and a 10 percent increase in winds over the Southern Ocean. 
The inability to detect this hotspot previously has been used by those who 
doubt human-made global warming to suggest climate change is not occurring as a 
result of increasing carbon dioxide emissions.
Researchers have published results in Environmental Research Letters confirming 
strong warming in the upper troposphere, known colloquially as the tropospheric 
hotspot. The hotspot has long been expected as part of global warming theory 
and appears in many global climate models.
The inability to detect this hotspot previously has been used by those who 
doubt human-made global warming to suggest climate change is not occurring as a 
result of increasing carbon dioxide emissions.
"Using more recent data and better analysis methods we have been able to 
re-examine the global weather balloon network, known as radiosondes, and have 
found clear indications of warming in the upper troposphere," said lead author 
ARC Centre of Excellence for Climate System Science Chief Investigator Prof 
Steve Sherwood.
"We were able to do this by producing a publicly available temperature and wind 
data set of the upper troposphere extending from 1958-2012, so it is there for 
anyone to see."
The new dataset was the result of extending an existing data record and then 
removing artefacts caused by station moves and instrument changes. This 
revealed real changes in temperature as opposed to the artificial changes 
generated by alterations to the way the data was collected.
No climate models were used in the process that revealed the tropospheric 
hotspot. The researchers instead used observations and combined two well-known 
techniques -- linear regression and Kriging.
"We deduced from the data what natural weather and climate variations look 
like, then found anomalies in the data that looked more like sudden one-off 
shifts from these natural variations and removed them," said Prof Sherwood.
"All of this was done using a well established procedure developed by 
statisticians in 1977."
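(To give a rough feel for what "finding and removing sudden one-off shifts" can mean, here is a hedged sketch in Python on synthetic data; the trend, noise level, break size and window lengths are all invented for the illustration, and this is emphatically not the iterative homogenization procedure Sherwood and Nishant actually used.)

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1958, 2013)
trend = 0.02 * (years - 1958)                  # slow warming signal
series = trend + rng.normal(0.0, 0.05, years.size)
series[30:] -= 0.8                             # artificial one-off shift (e.g. an instrument change)

# Locate the break as the largest single year-to-year jump.
jumps = np.abs(np.diff(series))
brk = int(np.argmax(jumps)) + 1

# Estimate the shift from short windows either side and undo it,
# a crude stand-in for a proper homogenization adjustment.
shift = series[brk:brk + 5].mean() - series[brk - 5:brk].mean()
adjusted = series.copy()
adjusted[brk:] -= shift

print("detected break at", years[brk], "estimated shift", round(shift, 2))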
The results show that even though there has been a slowdown in the warming of 
the global average temperatures on the surface of Earth, the warming has 
continued strongly throughout the troposphere except for a very thin layer at 
around 14-15km above the surface of Earth where it has warmed slightly less.
As well as confirming the tropospheric hotspot, the researchers also found a 
10% increase in winds over the Southern Ocean. The character of this increase 
suggests it may be the result of ozone depletion.
"I am very interested in these wind speed increases and whether they may have 
also played some role in slowing down the warming at the surface of the ocean," 
said Prof Sherwood.
"However, one thing this improved data set shows us is that we should no longer 
accept the claim that there is warming missing higher in the atmosphere. That 
warming is now clearly seen."
Story Source:
The above story is based on materials provided by University of New South Wales.
Journal Reference:
Steven C Sherwood, Nidhi Nishant. Atmospheric changes through 2012 as shown by 
iteratively homogenized radiosonde temperature and wind data 
(IUKv2). Environmental Research Letters, 2015; 10 (5): 054007. DOI: 
10.1088/1748-9326/10/5/054007



Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread spudboy100 via Everything List
here is the article, Telmo, linked by Lomborg's own site. Good reading.

http://www.lomborg.com/news/the-honor-of-being-mugged-by-climate-censors


Sent from AOL Mobile Mail


Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread spudboy100 via Everything List
Crap Telmo, because it's WSJ, it's a paywall for cut and pastes. Basically 
Lomborg got dogged by some Aussie academics, because he went against 
their holy conclusions. I am an admirer of John Kennedy, even though he made 
nearly lethal mistakes for the world, in foreign policy. A quote, "don't get 
mad, get even." I hope Lomborg does. He might just have with this unseen WSJ 
article. I will send a site link which will show the full article. 

Sent from AOL Mobile Mail



Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-15 Thread Bruno Marchal


On 15 May 2015, at 03:09, Stathis Papaioannou wrote:


On 14 May 2015 at 00:29, Bruno Marchal  wrote:

[Jason]
Chalmers' fading qualia argument shows that if replacing a biological
neuron with a functionally equivalent silicon neuron changed conscious
perception, then it would lead to an absurdity, either:
1. qualia fade/change as silicon neurons gradually replace the biological
ones, leading to a case where the qualia are being completely out of touch
with the functional state of the brain.
or
2. the replacement eventually leads to a sudden and complete loss of all
qualia, but this suggests a single neuron, or even a few molecules of that
neuron, when substituted, somehow completely determine the presence of
qualia.

His argument is convincing, but what happens when we replace neurons not
with functionally identical ones, but with neurons that fire according to a
RNG. In all but 1 case, the random firings of the neurons will result in
completely different behaviors, but what about that 1 (immensely rare) case
where the random neuron firings (by chance) equal the firing patterns of the
substituted neurons.

In this case, behavior as observed from the outside is identical. Brain
patterns and activity are similar, but according to computationalism the
consciousness is different, or perhaps a zombie (if all neurons are replaced
with random firing neurons). Presume that the activity of neurons in the
visual cortex is required for visual qualia, and that all neurons in the
visual cortex are replaced with random firing neurons, which by chance,
mimic the behavior of neurons when viewing an apple.

Is this not an example of fading qualia, or qualia desynchronized from the
brain state? Would this person feel that they are blind, or lack visual
qualia, all the while not being able to express their deficiency? I used to
think when Searle argued this exact same thing would occur when substituting
functionally identical biological neurons with artificial neurons that it
was completely ridiculous, for there would be no room in the functionally
equivalent brain to support thoughts such as "help! I can't see, I am
blind!" for the information content in the brain is identical when the
neurons are functionally identical.

But then how does this reconcile with fading qualia as the result of
substituting randomly firing neurons? The computations are not the same, so
presumably the consciousness is not the same. But also, the information
content does not support knowing/believing/expressing/thinking something is
wrong. If anything, the information content of this random brain is much
less, but it seems the result is something where the qualia is out of sync
with the global state of the brain. Can anyone else here shed some clarity
on what they think happens, and how to explain it in the rare case of
luckily working randomly firing neurons, when only partial substitutions of
the neurons in a brain is performed?


[Bruno]
Nice idea, which leads again to the absurdity to link consciousness to the
right "physical activity", instead of the abstract computation (at the right
level).


Yes, all these arguments - MGA, Maudlin, Putnam's rock - converge on
the idea that consciousness cannot be dependent on physical activity.


OK.






Only one problem, to use "Chalmers' strategy", you need to change a neuron
one at a time, but then a little change will quickly spread abnormal
behavior in the other neurons (which do not yet fire randomly). So you have
to change all neurons at once, in this case. This might at first mean going
from consciousness to 0 consciousness, except that we already know (by MGA,
normally) that consciousness is just not associated to *any* physical
activity, not even computations.

In fact the people that we can see are sort of p-zombies, in some sense, but
this is because we see only the 3p-body, and the 3-p bodies are not
conscious: they are only "pointer" to the person, which is in Platonia, and
is conscious, in Platonia. (Note that this mean that we are, in some sense,
in Platonia, at the limit of all computations).


I think of it like 3 physical objects implementing the number 3: the
number 3 was there already.


It has to be, I think too.
Not necessarily in any metaphysical sense though. We need numbers to  
define what computations are.
It is not that the number 3 is there already, it is that any machine  
can remind of itself of 3 in platonia, once enough cognitive ability,  
and the luck to be able to think and other default hypotheses.


(We need just hope they don't put a patent on it, so that you have to  
pay a tax each time you use the number 3. Note that this is debated,  
not with 3, but with more complex programs. That can make sense).


Bruno


http://iridia.ulb.ac.be/~marchal/




Re: Reconciling Random Neuron Firings and Fading Qualia

2015-05-15 Thread Bruno Marchal


On 15 May 2015, at 01:48, Jason Resch wrote:




On Wed, May 13, 2015 at 9:29 AM, Bruno Marchal   
wrote:


On 13 May 2015, at 03:59, Jason Resch wrote:

Chalmers' fading qualia argument shows that if replacing a  
biological neuron with a functionally equivalent silicon neuron  
changed conscious perception, then it would lead to an absurdity,  
either:
1. qualia fade/change as silicon neurons gradually replace the  
biological ones, leading to a case where the qualia are being  
completely out of touch with the functional state of the brain.

or
2. the replacement eventually leads to a sudden and complete loss  
of all qualia, but this suggests a single neuron, or even a few  
molecules of that neuron, when substituted, somehow completely  
determine the presence of qualia


His argument is convincing, but what happens when we replace  
neurons not with functionally identical ones, but with neurons that  
fire according to a RNG. In all but 1 case, the random firings of  
the neurons will result in completely different behaviors, but what  
about that 1 (immensely rare) case where the random neuron firings  
(by chance) equal the firing patterns of the substituted neurons.


In this case, behavior as observed from the outside is identical.  
Brain patterns and activity are similar, but according to  
computationalism the consciousness is different, or perhaps a  
zombie (if all neurons are replaced with random firing neurons).  
Presume that the activity of neurons in the visual cortex is  
required for visual qualia, and that all neurons in the visual  
cortex are replaced with random firing neurons, which by chance,  
mimic the behavior of neurons when viewing an apple.


Is this not an example of fading qualia, or qualia desynchronized  
from the brain state? Would this person feel that they are blind,  
or lack visual qualia, all the while not being able to express  
their deficiency? I used to think when Searle argued this exact  
same thing would occur when substituted functionally identical  
biological neurons with artificial neurons that it was completely  
ridiculous, for there would be no room in the functionally  
equivalent brain to support thoughts such as "help! I can't see, I  
am blind!" for the information content in the brain is identical  
when the neurons are functionally identical.


But then how does this reconcile with fading qualia as the result  
of substituting randomly firing neurons? The computations are not  
the same, so presumably the consciousness is not the same. But  
also, the information content does not support knowing/believing/ 
expressing/thinking something is wrong. If anything, the  
information content of this random brain is much less, but it seems  
the result is something where the qualia is out of sync with the  
global state of the brain. Can anyone else here shed some clarity  
on what they think happens, and how to explain it in the rare case  
of luckily working randomly firing neurons, when only partial  
substitutions of the neurons in a brain is performed?


Nice idea, which leads again to the absurdity to link consciousness  
to the right "physical activity", instead of the abstract  
computation (at the right level).


So would such person having fading / diminishing qualia?


The person is in platonia (i.e. distributed on infinitely many true  
sigma_1 sentences), and survives where it is self-referentially  
correct relatively to an infinity of computations.
The person is not in the running of a computer in front of you, which  
is part of your (stable) illusion (there is only 0, s(0), s(s(0)), and  
their add/plus relations).
The mystery is in the fact that such illusion looks computable, which  
would contradict comp (here comp is saved by QM, which shows that there  
is something non computable but observable).







Only one problem, to use "Chalmers' strategy", you need to change a  
neuron one at a time, but then a little change will quickly spread  
abnormal behavior in the other neurons (which do not yet fire  
randomly). So you have to change all neurons at once, in this case.


It is possible in theory, if you're running a simulated brain, and  
indicate at time T some subset of the neurons stop executing their  
regular neuron simulation code and instead follow random neuron code.
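(A toy version of that setup, as a hedged sketch in Python; the "brain" here is just a small random boolean network invented for the illustration, with a chosen subset of nodes switching to random firing at step T while the rest keep applying their update rule.)

import random

random.seed(42)
N, T, STEPS = 20, 5, 10
random_subset = set(range(5))          # the neurons that go random at time T

# Each node's next state is a fixed XOR of two other nodes (the "real" rule).
inputs = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

for t in range(STEPS):
    nxt = []
    for i in range(N):
        if t >= T and i in random_subset:
            nxt.append(random.randint(0, 1))   # random firing, ignores inputs
        else:
            a, b = inputs[i]
            nxt.append(state[a] ^ state[b])    # regular simulation code
    state = nxt
    print(t, state)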


Let us say that we already know that consciousness is not dependent on  
the low level implementation. (Below the substitution level, there is  
an infinity of them.)


MGA does show that at some level, attributing the consciousness to the  
physical activity is like saying that it is not Deep Blue who won the  
chess tournament, but Z8000, the processor used (supposedly) that day.


If from one o'clock to five o'clock, your neurons run randomly, but  
that by the ultra-incredible chance you get the right physical  
activity, well, you, in platonia, are lucky that with some luck, the  
right determinacy comes back, and if it did, at five o'clock,  
obviou

Re: Theories that explain everything explain nothing

2015-05-15 Thread Bruno Marchal


On 15 May 2015, at 00:44, colin hales wrote:

Your suggestion presupposes  a limit to reach that we don't  
necessarily have to assume.


Theories of  everything but the scientific observer  (what tends to  
be called a TOE historically)


In the aristotelian picture. In the beginning it meant only  
unification of the known force and objects.
In this list, we take into account consciousness and the first person  
points of view.





and
Theories of everything including the scientific observer.( what I am  
suggesting as a real TOE)


Like Comp and Everett already.
Arithmetic or any universal system (in the CT sense) allows that, and  
much more, by the closure for the diagonalization.






 Can be different categories of scientific account.


The arithmetical hypostases. The same sigma_1 reality, viewed by 8  
points of viewed, multiplied by aleph_0, if not aleph_1 in the first- 
person delay-amnesia limit.



Find the way that this can be the case and you have solved the  
problem.


You have begun, only. Comp makes it mathematical, and it is not simple.



Confuse the two and become part of the problem. Fail to realize  
there are two theory categories and you are also part of the problem.


This dual-'theory' state is a Comp-agnostic position and forms a  
place  from which arguments about COMP get  clarity. The magic of  
COMP being true occurs when the two kinds are identities. Under what  
conditions might that be?


Hmm... You seem to intuit, or understood the key things:

- that the universal numbers can prove p -> []p, for the p in the  
sigma_1 reality,


- but that they cannot yet identify p and []p, because the other side:  
[]p -> p is still what they can only pray for, or work for.






Rhetorical question intended to provoke a bit of thought.


The first post which I can understand!

G1 proves p -> []p
G1* \ G1 proves []p -> p.

Best,

Bruno





Cheers
Colin




From: John Mikes
Sent: ‎15/‎05/‎2015 7:32 AM
To: everything-list@googlegroups.com
Subject: Re: Theories that explain everything explain nothing

Colin: wouldn't it fit to call "TOE"  -  Theory of Everything WE  
KNOW ABOUT?  or: Everything in our reach?

I mentioned my agnostic views.
Greetings
John Mikes

On Wed, May 13, 2015 at 8:40 PM, colin hales   
wrote:


Perhaps better

All posited (so far) scientific TOE are actually wrongly named. They  
would be correctly named:


"Theories predicting how the universe appears to an assumed  
scientific observer inside it"


Or maybe

"Theories of everything except the scientific observer"

By Scientific observer I mean consciousness... What scientific  
observation uses/is.


From here you might ask yourself what a scientist would be doing if  
they _were_ explaining the scientific observer (consciousness). For  
whatever that is, it's not a member of the set of  the kind of  
science outcomes in which these so-called TOE sit, smugly claiming  
everything while actually failing without realizing.


Cheers
Colin
From: Bruce Kellett
Sent: ‎14/‎05/‎2015 9:15 AM
To: everything-list@googlegroups.com
Subject: Theories that explain everything explain nothing

As an aside to recent discussions, it is interesting to point out that
physics has some of the problems associated with over-confidence in
ideas coming from pure intuition too.

http://aeon.co/magazine/science/has-cosmology-run-into-a-creative-crisis

This article by Ross Anderson in Aeon Magazine surveys some of the
recent history of press announcements by leading cosmologists.  
Believing

too strongly in your own pet theory can be a dangerous pastime.

Bruce


Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 23:49, LizR wrote:


On 15 May 2015 at 02:03, Bruno Marchal  wrote:
On 14 May 2015, at 03:20, LizR wrote:
On 14 May 2015 at 12:01, Russell Standish   
wrote:

On Wed, May 13, 2015 at 01:46:49PM -0400, John Clark wrote:
> On Tue, May 12, 2015  Russell Standish   
wrote:

>
> > Free will is the ability to do something stupid. Nonrational.
> >
>
> OK fine free will is non-rational, in other words an event  
performed for NO
> REASON, in other words an event without a cause, in other words  
random. So

> a radioactive atom has free will when it decays.

A radioactive atom isn't a person, consequently does not have
will. At least not when I last checked.

But a person choosing what to do as a result of an atom decaying  
does have free will, I assume?
Only if you define free will as randomness, but frankly, it seems that
random is the complete opposite of free will. If you choose randomly, it
means you abandon your will to chance. It means you let chance make the
decision in your place.


Imagine that I give you the liberty to go either in Hell or in Paradise,
with your own free will. Then, as you tell me that free will = random, I
can throw the coin for you, and ... Hell. Would you say that in that case
you are going to Hell by your own free will?


I think free will requires determinacy, at least some amount, so as to be
able to do some planning.


I was being a bit flippant. I think the definition that makes sense  
is being unable to predict someone's actions, possibly for deep  
"Halting-problem-type" reasons.


I made all that precise in "Conscience et Mécanisme". The basic idea is
already in Popper, and was used by Good to explain the relation between
free will and relative computation speed.


The basic idea is that if you can predict in advance what you will do,  
you can as well change your mind.




But I am wary of talk about free will and responsibility because of  
their political use.


Then I am wary of talking about anything: health, climate, energy,
religion, etc.





As in a cartoon I have on my wall (though not the one in front of me  
at the moment, unfortunately, so this is from memory)...


A business-suited arm points accusingly at a baby. A speech bubble  
from the arm's owner, out of frame:


"YOU are going to make some bad decisions in your life. You will  
choose the wrong parents, the wrong socio-economic group, the wrong  
foster home. As a result you will be abused, drop out, become a  
delinquent, become drug-addicted, end up in prison...


...WHEN ARE YOU GOING TO TAKE SOME RESPONSIBILITY FOR YOUR LIFE?"



Simple. I will take responsibility when you give me my responsibility back.


Bruno












http://iridia.ulb.ac.be/~marchal/





Re: Theories that explain everything explain nothing

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 23:32, John Mikes wrote:

Colin: wouldn't it fit to call "TOE"  -  Theory of Everything WE  
KNOW ABOUT?  or: Everything in our reach?

I mentioned my agnostic views.



Everything in our reach would be []p, and its many intensional  
variants. You need to bet on the reach, and that there is something to  
reach, which you never know.


As far as I can understand Colin, he confuses "& p" and "& <>p".  
Consciousness has a relation with both the first person, and the  
observers.


By definition, the mystic tries to intuit what is beyond the reach.
He is *that* curious.


Then some theories can give pictures, which can be true or false, and  
some can be tested, and if refuted, we can progress, *toward* the  
truth beyond the reach.



Bruno





Greetings
John Mikes

On Wed, May 13, 2015 at 8:40 PM, colin hales   
wrote:


Perhaps better

All posited (so far) scientific TOE are actually wrongly named. They  
would be correctly named:


"Theories predicting how the universe appears to an assumed  
scientific observer inside it"


Or maybe

"Theories of everything except the scientific observer"

By Scientific observer I mean consciousness... What scientific  
observation uses/is.


From here you might ask yourself what a scientist would be doing if  
they _were_ explaining the scientific observer (consciousness). For  
whatever that is, it's not a member of the set of  the kind of  
science outcomes in which these so-called TOE sit, smugly claiming  
everything while actually failing without realizing.


Cheers
Colin
From: Bruce Kellett
Sent: ‎14/‎05/‎2015 9:15 AM
To: everything-list@googlegroups.com
Subject: Theories that explain everything explain nothing

As an aside to recent discussions, it is interesting to point out that
physics has some of the problems associated with over-confidence in
ideas coming from pure intuition too.

http://aeon.co/magazine/science/has-cosmology-run-into-a-creative-crisis

This article by Ross Anderson in Aeon Magazine surveys some of the
recent history of press announcements by leading cosmologists.  
Believing

too strongly in your own pet theory can be a dangerous pastime.

Bruce



http://iridia.ulb.ac.be/~marchal/





Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 23:16, John Mikes wrote:


Bruno concluded:

Only if you define free-will by random, but frankly, it seems that  
random is the complete opposite of free-will. If you choose  
randomly, it means you abandon your will to chance. It means you let  
chance doing the decision at your place.


Imagine that I give you the liberty to go either in Hell or in  
Paradise, with your own free will. Then, as you tell me that free  
will = random, I can throw the coin for you, and ... Hell. Would say  
that in that case you are going to hell by your own free will?


I think free will require determinacy, at least some amount so as  
being able to do some planning.


IMO neither 'free will' nor 'random' makes common sense. Both (and
CHAOS as well) are deterministic products of infinitely many factors
beyond our mental limits - and control. I tried to address such
views to Russell, but he rejected my post as 'not having followed my
"rabbitting" at all'.

I wonder why 'decision making' would be preferred to be called FREE will?


Yes, "will" is a better term.
Free-will is will in the situation that you are not in jail, or stuck  
in a lift.




In the world of infinite complexities in more-or-less unfollowable  
relations the determining factor of the composite 'pressure' to  
influence our decision is indeed a composite of Everything affecting  
our flexible 'mind' into some WILL.


About 'random'? I asked many times what ruling exempts the arithmetical
2 + 2 = 4 from randomness when in Nature ANYTHING(?) can go random?


Because 2+2=4 is independent of nature. It does not assume nature. On
the SK-planet, they learn the numbers as a curiosity, and some take
pleasure in proving 2+2=4 from Kxy = x and Sxyz = xz(yz), with
reasonable definitions.
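
For concreteness, here is a small Python sketch (a toy illustration of my
own, not the SK-planet's actual construction): the K and S rules written as
functions, plus Church numerals, just to show that "2+2=4" can be checked
from purely symbolic definitions, with no appeal to nature.

# Kxy = x and Sxyz = xz(yz), as curried Python functions.
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))
I = S(K)(K)                      # SKK behaves as the identity combinator
assert I(42) == 42

# Church numerals: the numeral n applies a function f n times to x.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

two  = succ(succ(zero))
four = succ(succ(two))
as_int = lambda n: n(lambda k: k + 1)(0)   # read a numeral back as an integer

assert as_int(plus(two)(two)) == as_int(four) == 4   # 2 + 2 = 4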





How come we observe physical laws exempt from random occurrences?


That is the question that the hypothesis of computationalism can make  
precise.



What happened to CHAOS when enlightenment disclosed some origins and  
procedures explaining 'chaotic' unknowables of the past?


All good questions!

Best,

Bruno





On Thu, May 14, 2015 at 10:03 AM, Bruno Marchal   
wrote:


On 14 May 2015, at 03:20, LizR wrote:

On 14 May 2015 at 12:01, Russell Standish   
wrote:

On Wed, May 13, 2015 at 01:46:49PM -0400, John Clark wrote:
> On Tue, May 12, 2015  Russell Standish   
wrote:

>
> > Free will is the ability to do something stupid. Nonrational.
> >
>
> OK fine free will is non-rational, in other words an event  
performed for NO
> REASON, in other words an event without a cause, in other words  
random. So

> a radioactive atom has free will when it decays.

A radioactive atom isn't a person, consequently does not have
will. At least not when I last checked.

But a person choosing what to do as a result of an atom decaying  
does have free will, I assume?


Only if you define free-will by random, but frankly, it seems that  
random is the complete opposite of free-will. If you choose  
randomly, it means you abandon your will to chance. It means you let  
chance doing the decision at your place.


Imagine that I give you the liberty to go either in Hell or in  
Paradise, with your own free will. Then, as you tell me that free  
will = random, I can throw the coin for you, and ... Hell. Would say  
that in that case you are going to hell by your own free will?


I think free will require determinacy, at least some amount so as  
being able to do some planning.






(Perhaps the atom was inside their brain, and its decay just  
happened to tip the balance of brain chemicals enough that the  
final decision was in favour of tea rather than coffee... or  
perhaps the person decided to decide which drink to have on the  
basis of a reading from a Geiger counter... either way, in this  
particular case human FW puts them in a bit of a Schroedinger's cat  
siutation...)


Which in my opinion illustrate well that both self-duplication and  
self-superposition have no role in free -will.


Bruno






http://iridia.ulb.ac.be/~marchal/





Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 22:04, John Clark wrote:


On Wed, May 13, 2015  Bruno Marchal  wrote:

> Why would Turing machine obeys the laws of physics?

Because a Turing Machine like all machines involves change.



The changes are injections in N.




A clockwork must read a cell on a tape made of matter


Buy the Davis Book 1964, in the cheap paperback from Dover.





and determine if it is white or black, and a clockwork must  
determine if it should change the color of that cell or not, and a  
clockwork must determine if it should move the tape one space to the  
right or one space to the left or just stop. And nobody knows how to  
make clockwork without using matter that obeys the laws of physics.  
Nobody, absolutely nobody.


Oh, so you do assume primitive matter.

But you are just wrong about what a Turing machine is. Turing makes it
look material for reasons of pedagogy.







> You can implement Turing machine in Lambda calculus

No you can not!


Then not only were Turing and Church wrong, as they both proved this.
You can find the proofs, or similar ones, in any textbook in computer
science.
All known universal systems have been proved to implement all known
universal systems. And with Church's thesis (CT) you can suppress the "known".





The word "implement" means to put a plan into effect


Not in computer science. It means you can write a translator from one
universal system to another, or you can write an interpreter of one
language in another.
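
For what it is worth, here is a minimal Python sketch of what "writing an
interpreter of one universal system in another" means in practice: a few
lines that run any single-tape Turing machine description. The little
machine at the end is my own toy example, purely for illustration; nothing
in the definition appeals to a particular physics.

def run_tm(delta, state, tape, halt_states, max_steps=10_000):
    """delta maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))       # sparse tape; missing cells read as blank '_'
    head = 0
    for _ in range(max_steps):
        if state in halt_states:
            return state, tape
        sym = tape.get(head, '_')
        state, tape[head], move = delta[(state, sym)]
        head += 1 if move == 'R' else -1
    raise RuntimeError("step budget exhausted (halting is undecidable in general)")

# Toy example: flip 0s and 1s until the first blank, then halt.
delta = {
    ('s', '0'): ('s', '1', 'R'),
    ('s', '1'): ('s', '0', 'R'),
    ('s', '_'): ('halt', '_', 'R'),
}
_, tape = run_tm(delta, 's', "0110", {'halt'})
print(''.join(tape.get(i, '_') for i in range(4)))   # prints 1001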





and Lambda calculus or any other type of ink on paper can not do that.


Lambda calculus, like number theory, has no relationship with ink and  
paper.




You can find books about Lambda calculus that describe how Turing  
Machines operate but it's just a description,


No. It is either compilation or interpretation, as universal entities
can do.  You confuse computer science and physical computer science.
Those are different, and the second uses the basic definitions of the
first, up to now.




to actually make a Turing Machine as opposed to just talking about  
one, you'll need matter and the laws of physics. A book about Lambda  
calculus or about anything else can't calculate diddly squat.


> You can implement them in Fortran, in Algol,

Not unless you have a computer made of matter that obeys the laws of  
physics to run those Fortran or Algol programs on.


No, if you agree that 2+2=4, and if you use the standard definitions,
then you can prove that a tiny part of the standard model of
arithmetic runs all computations.






>> nearly all numbers are non computable

> I told you that by numbers I mean integers, what you call number  
here are non computable functions.


And what you call non computable functions Turing himself called non  
computable numbers in the very 1936 paper that introduced the  
concept that would later be called a "Turing Machine".


OK, Turing made two pedagogical mistakes, relative to the question
treated here.
Read any of his other papers; in fact that basic paper shows the
vice-versa implementation of the lambda calculus and his "machine"
formalism, each in the other.
Note that his definition of computable real numbers was wrong (as he
himself realized, and changed later). There is no Church thesis for the
notion of computable real number.






> If we are machine, reality is not a machine, and with comp physics  
is an important part of that reality


> If by mathematics you mean tha arithmetical truth, then  
mathematics knows the arithmetical truth.


Nothing can divide all arithmetical truth from all arithmetical  
falsehoods. Nothing can do it including arithmetic.


Sorry, arithmetical truth does it, trivially. Then that reality can
have its complexity measured, and there are degrees of unsolvability.


You just dismiss a whole branch of math.





> At this stage, a plea for intuitionism is inadequate. It implies  
non-comp (strictly speaking).


I don't care, I'm not interested in "comp".


But you are a comp1 believer, and "comp" is comp1. Then it implies
comp2, which you pretend not to understand, or you are just playing
devil's advocate.





>> Ink on paper is in those textbooks, there is no evidence that any  
book has ever been able to calculate anything, not even 1+1.  You  
want to fly across the Pacific Ocean on the blueprints of a 747 and  
it just doesn't work.


> Grave confusion of level.

Maybe on some level our entire universe is just a simulation program  
written in Fortran, but if it is as far as we know that program is  
running on a computer made of matter that obeys the laws of physics,


How do you know that?

Especially knowing that the sigma_1 tiny part of the arithmetical
truth realizes all computations, even all quantum computations.


We don't know that; we have no evidence, and we even have clues that the
physical universe might be the border of something else.






>>   In other words those computer textbooks provide simplified and  
approximated descriptions of how real com

Re: The dovetailer disassembled

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 21:17, meekerdb wrote:


On 5/14/2015 12:04 PM, Bruno Marchal wrote:


On 13 May 2015, at 20:42, meekerdb wrote:


On 5/13/2015 3:26 AM, Bruno Marchal wrote:


On 13 May 2015, at 00:49, Russell Standish wrote:


On Tue, May 12, 2015 at 02:53:18PM +0200, Bruno Marchal wrote:


The recording is a distinctly different computation, because  
they do

not behave identically on all counterfactuals.


And that is all what is needed in the MGA to proceed.

Bruno



Only if it is assumed to be absurd that the counterfactually  
incorrect

recording  instantiates a conscious moment.


It is absurd from the notion of computation. The recording, if we
insist on seeing it as a computation, is a simple sequence of
unrelated constant projections. It is a movie, not a computation
made by a computer.





Not only is that not obvious, but
also a number of people, including you IIRC, say that the issue of
counterfactual correctness is a side issue, not really relevant.

ISTM it is critical - without resolving that issue, the MGA  
doesn't

proceed, nor is it clear what it even means if it were to.



It means that consciousness is an abstract feature of the  
universal machine in arithmetic, and that it makes sense only  
through the differentiation and specialization with respect to  
infinitely many universal numbers.


It means that we have to abandon the idea that consciousness is
associated with any particular implementation in one universal
number (physical or not),


I doubt that anyone on this list ever had the idea that
consciousness could only be associated with a particular
implementation.  Certainly everyone is willing to entertain the
hypothetical that consciousness, human-level consciousness, could  
be realized by a digital computer with suitable program and I/O.   
It just muddles things to make complicated arguments for this  
starting from "comp".


But then, you easily get the reversal. Only the physicalist insists
on one particular computation, called physical.
The point is just that this physical assumption is useless in
metaphysics, when we assume comp. It is not that the physical
disappears (which would be annoying for any sense of "yes doctor").
It is just that the physical emerges mathematically, for a
(measure) reason, from all computations (seen from some points of
view).
And indeed, the simplest and most direct description of "true
prediction" is the case where []p & <>t is true, with p sigma_1.
And it gives the quantization needed at the modal level, to get a
quantum logic at the arithmetical level.






The question is whether such consciousness can be abstracted away  
from ALL implementation and exist in platonia;


I have no clue what that could mean with comp.
With comp, consciousness is associated with all machines having enough
control structure, or beliefs, or whatever makes them able to look
inward deeply enough.


You need the implementations.



which only makes sense if one already believes that "exist in  
platonia" is the same as "exists".


"exist in Platonia" is a nickname for the model N satisfied this  
relation.


You are the one wanting to define existence by physical existence,
but "physical existence" is what I try to explain from simpler
notions, and this without eliminating consciousness.






I think that "exists" is relative to a world.


Yes, one that we assume at the start. But with comp, you know that  
such a world is the realm of number.


OK, in UDA we are neutral and show that reifying the (primitive)  
physical existence just can't work to eliminate the measure problem.





So a digital AI consciousness can exist relative to a virtual  
world in which it is emulated, or it can exist in this world given  
sufficient I/O to relate it this world as its environment.  An  
abstract AI can exist in platonia relative to an abstract  
environment in platonia.


It is more interesting than that. Comp says that there is a
physical reality, and that it sums over all virtual realities at some
low level.






What I'm interested in is what makes the program/AI conscious.  
Bruno has an answer, i.e. it can do mathematical induction.  But  
it's not clear to me how this squares with my dog being aware of  
his name - since I don't think he can understand mathematical  
induction.


Liz answered this.

There is a difference between doing induction,


I know my dog can do induction. Doing induction isn't the same as  
doing mathematical induction.


Doing the dog's kind of induction is more than doing mathematical induction.
Mathematical induction is to that induction what the platonic line is
to the horizon.




So which is it that you think is sufficient (necessary?) for  
consciousness?


None, as consciousness starts with sub-universality (less than just  
universal). Like RA. But it is a *quite* altered state of  
consciousness, most plausibly.




And is that consciousness awareness, self-awareness, or what?


RA is already conscious,

Re: Theories that explain everything explain nothing

2015-05-15 Thread John Clark
On Wed, May 13, 2015  colin hales  wrote:

> "Theories of everything except the scientific observer"
>

The only reason I think Everett's Many Worlds interpretation of Quantum
Mechanics is the best one is that unlike the competition Everett doesn't
have to explain what an observation or an observer or consciousness is
because none of that has anything to do with it.


> > By Scientific observer I mean consciousness
>

In science you explain A in terms of B and explain B in terms of C and so
on, but do you think the chain of "explain this" questions goes on forever
or does it eventually terminate with a brute fact? If it does terminate I
think a likely place for it to do so is "consciousness is the way data
feels like when it is being processed", and after that there is nothing
more that can be usefully said about it.

  John K Clark



Re: What does the MGA accomplish?

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 20:44, meekerdb wrote:


On 5/14/2015 10:33 AM, Bruno Marchal wrote:
Then the math confirms this, as proved by the Löbian universal  
machine itself, as on the p sigma_1, the first person variant of  
the 3p G ([]p), that is []p & p, []p & <>t, and []p & <>t & p,  
provides a quantization (namely [i]p, with [i] being the  
corresponding modality of the variants).


Went over my head.  Can you expand on that?


What can an ideally correct machine prove about itself, at the right
level by construction? This can be handled using the self
provided by the second recursion theorem, or by the little song: if D'x'
gives 'x'x'', what gives DD? The song is sung by some universal
system understanding the notation.
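
To see the little song actually sung, here is a small Python sketch of my
own (with Python's eval standing in for "some universal system understanding
the notation"): D takes the quotation 'x' of an expression and returns the
quotation of x applied to its own quotation, so D applied to its own
quotation gives a fixed point.

# D'x' = 'x'x'' : given the text of a one-argument expression x,
# return the text of x applied to its own quotation.
src = "lambda t: f'({t})({t!r})'"    # the text of D itself
D = eval(src)

expr = D(src)                        # this is "DD": D applied to its own quotation
assert eval(expr) == expr            # evaluating DD gives back DD -- a fixed point
print(expr)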


The math here gives the Beweisbar (provability) predicate of Gödel, for the
finite entities believing in enough induction axioms, which I call the Löbian
numbers or the Löbian combinators (depending on whether there is an "r" in the
month).


If you interpret the propositional variables (p, q, r ...) by
arithmetical propositions, and the modal box []p by beweisbar(code of
p in the language of the machine), then Gödel's and Löb's theorems prove the
soundness of the formulas ~[]f -> ~[]~[]f and []([]A->A)->[]A, etc.
Indeed Solovay's theorem proves that G characterizes what
the machine can prove about its provability and consistency abilities,
and G* characterizes what is true about them.


If you define generously a mystic by any entity interested in its  
self, then G is the abstract mystical science, and G* is the abstract  
mystical truth.


Gödel already saw that G, well, it is not a logic of knowledge, like T  
([]p->p), or S4 ([]p->p, []p -> [][]p).


This means that, contrary to the intuition of some mathematicians
and scientists, formal provability is of the type belief, not
knowledge. But this gives the opportunity to define knowledge by using
the Theaetetus idea: [k]p = [g]p & p, with [g] being the usual beweisbar [] of
Gödel. This does fit with Tarski's mathematical analysis of truth, where
"il pleut" ("it is raining") is true when it rains.


This leads to a (meta) definition of a knower, indeed axiomatized
soundly and completely by the modal logic S4Grz. Grz is for Grzegorczyk,
who discovered an equivalent formula in the context of the
topological interpretation of intuitionistic logic. Indeed S4Grz provides an
arithmetical (self-referential) interpretation of intuitionistic logic
(which the first person will be, from her own perspective).


But we want a probability measure of those things accessible by the UD.

G has a Kripke semantics. In particular, there are cul-de-sac
worlds everywhere, and they do contain sorts of white rabbits, as []f
is true in those cul-de-sac worlds. To get a probability, we need to
have the D axiom ([]p -> <>p).
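
To make the cul-de-sac point concrete, here is a tiny Python sketch (my own
toy frame, not the arithmetical interpretation itself): []p is read as "p
holds in every accessible world" and <>p as "p holds in some accessible
world", so at a dead-end world []f comes out vacuously true and <>t false,
which is exactly why the D axiom has to be added before one can talk of
probability.

def box(frame, val, w, p):
    # []p holds at w iff p holds at every world accessible from w.
    return all(val[p](v) for v in frame.get(w, []))

def diamond(frame, val, w, p):
    # <>p holds at w iff p holds at some world accessible from w.
    return any(val[p](v) for v in frame.get(w, []))

frame = {'w0': ['w1', 'w2'], 'w1': [], 'w2': ['w2']}     # w1 is a cul-de-sac world
val = {'t': lambda w: True, 'f': lambda w: False}

print(box(frame, val, 'w1', 'f'))       # True : []f holds at the dead end
print(diamond(frame, val, 'w1', 't'))   # False: <>t fails there
print(box(frame, val, 'w2', 'f'), diamond(frame, val, 'w2', 't'))   # False True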


What about the measure one? It is simpler to extract than a measure
different from one. Recall what I asked John Clark ten times: you
are in Helsinki (so you are PA + "I am in Helsinki", say), and you will
be duplicated and reconstituted in Washington and Moscow, and you are
told that both reconstitutions will be offered a cup of coffee. We want
to say that []A would do, as by completeness it entails truth in all
models, in particular truth in all consistent extensions (PA + "I was
in Helsinki" + "I am in Moscow" + "I got a cup of coffee"), and (PA + "I
was in Helsinki" + "I am in Washington" + "I got a cup of coffee").


But [] does not intensionally act like that, and D is false, so to get
it you have to add explicitly that there is a consistent extension:
[b]p = []p & <>t.  (In Kripke semantics, "<>t" means that there is a
"world" in which t holds; as t is true in all worlds, this amounts to
saying that there is a world: it is a default hypothesis (an instinct)
made explicit.)  The "b" of [b]p is for bet.


To get physics, through comp, you have to restrict the local  
continuations to the UD's work, or to the sigma_1 reality.


So the propositional logic of physics (the logic of yes-no experiments)
must be given by the logic of [b]p, with p a sigma_1 sentence.


Then it happens that quantum logicians already have a nice
representation of quantum logic in modal logic: roughly, the modal
logic known as B (with main axioms []p -> p and p -> []<>p) interprets
quantum logic through a quantization of the classical propositions
([]<>p; that is, B proves []<>p for an atomic proposition p when and
only when quantum logic proves p).


Now, the three logics S4Grz1 (= S4Grz restricted to the sigma_1 sentences),
Z1* (the part of Z, restricted to sigma_1, and proved by G* when translated),
and X1* provide such a quantization, and correspondingly different
quantum logics.


Nobody complained that I got three quantum logics. But then, as van
Fraassen said, there is a labyrinth of quantum logics, and here comp
provides a sort of étalon (a reference standard).


All these logics can be proved to be emulated/represented by the decidable
logic G, so they are all decidable

Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread John Clark
On Thu, May 14, 2015  John Mikes  wrote:

> How come we observe physical laws exempt from random occurrences?


That's easy if the physical laws are statistical. For example a law might
say that under circumstance X outcome Y will happen 80% of the time and
outcome Z 20%. And even if the outcome is produced by completely random
variables (events without causes) they will still tend to form a
predictable bell shaped curve, and the more outcomes there are the closer
the graph will resemble that precisely defined bell shaped curve.
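
A few lines of Python make the point visible (the 80/20 law below is just a
made-up example of such a statistical law): each individual outcome is
random, but the batch totals pile up into a stable, roughly bell-shaped
histogram.

import random
from collections import Counter

def one_outcome():
    return 1 if random.random() < 0.8 else 0      # outcome Y happens 80% of the time

# Repeat the experiment in batches of 100 and count the Ys in each batch.
batch_totals = [sum(one_outcome() for _ in range(100)) for _ in range(10_000)]
histogram = Counter(batch_totals)

for total in sorted(histogram):
    print(f"{total:3d} {'#' * (histogram[total] // 50)}")
# The bars cluster tightly around 80, approximating a Gaussian curve.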

 John K Clark



Re: My comments on "The Movie Graph Argument Revisited" by Russell Standish

2015-05-15 Thread John Clark
On Thu, May 14, 2015 at 6:46 PM, Russell Standish 
wrote:

>
>> Laplace didn't know that calculation takes energy and produces entropy,
>
>
> > Sure, so we now know the daemon cannot be physical.


If the daemon isn't physical then, at least as far as we know, the daemon
can't do anything including calculate or observe or think.

>
>>  he thought deterministic was the same as predictable and it isn't. How
> on earth do you expect the poor daemon to know if a program to find the
> first even integer greater than 2 that is not the sum of two prime numbers
> and then stop will ever stop if Goldbach is true but has no proof??
>
> > Deterministic means you can predict what will happen at some given time
> t_1 after the origin.


Only if the prediction itself does not change the state of the system as
would happen if you told Og what fork in the road ahead you thought he
would end up going down. And you might be making your "prediction" a long
time after t_1 already occurred because there might not be a shortcut so
the quickest way to find out what the system will do is to just watch it
and see. And you'd still have no idea what will happen at time t_2.


> > So you can just run the program for t_1 seconds, and it will tell you
> whether the program has halted by that time or not. If you want to
> actually predict the outcome, use a 10x faster computer.
>

And if you find out that the program is still running at time t_10 that
information will be of no help whatsoever in answering my question, will
the program ever stop?
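
For concreteness, here is the program in question, sketched in Python (a
naive brute-force version of my own): it halts exactly if Goldbach's
conjecture is false, so watching it run for any finite time, on however
fast a computer, settles nothing.

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

def is_sum_of_two_primes(n):
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while True:
    if not is_sum_of_two_primes(n):
        print(n)          # a counterexample to Goldbach -- never reached so far
        break
    n += 2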

> To determine the outcome of the program in the above circumstance,
> you need a more powerful beast than Laplace's daemon - it would need to
> be a Halting Oracle - ie someone who knows the decimal expansion
> of Chaitin's Omega to say a few thousand decimal places.


And both mathematics and physics agree that there is no way that anyone or
ANYTHING can ever know what Chaitin's Omega is because its digits are truly
random.  And even if you were familiar with the number through pure chance
there is no way you'd know that it was Chaitin's Omega, to you it would
just seem to be a string of random digits like so many others. Fantasising
what would happen if you knew the value of Chaitin's Omega and knew it was
Chaitin's Omega is like asking what things would be like if 2+2=5.


> > But a Halting Oracle can never predict the outcome of FPI,


Because even a Halting Oracle can not answer a gibberish question.

   The daemon must keep his prediction of Og's behavior secret from Og
>> or lie about what he really  thinks Og will do.  If Og is DETERMINED to do
>> the opposite of whatever the daemon predicts he will do and Og is told what
>> the prediction is then the daemon's prediction will never be correct.
>
>

>>>   What does DETERMINED mean here?


> >>  Deterministic clockwork.
>
> > Right - so you're setting up a logical contradiction.


Yes.


> > You haven't really proved anything by it, other than Laplace daemons
> cannot influence the system they study.


The daemon would have no difficulty influencing the system, but if he does
his predictions will be wrong.

  John K Clark





Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread Telmo Menezes
Hi!
Most of the article is behind a paywall for me...

Cheers
Telmo.

On Fri, May 15, 2015 at 3:43 PM, spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> Hello from the US. Here is an article by the WSJ, by Bjorn Lomborg,
> speaking to the climate cult ideology that pervades academia. Like
> Lomborg, I have to believe in GW, but it ain't climate catastrophe, as the
> red-greens now choose to label it. Like Lomborg, I believe there are things
> we can do to mitigate it. In any case, here is a link to Lomborg's article
> (hoping it works, sans fee).
>
>
> http://www.wsj.com/articles/the-honor-of-being-mugged-by-climate-censors-1431558936
>
> Sent from AOL Mobile Mail
>
>
> -Original Message-
> From: Telmo Menezes 
> To: everything-list 
> Sent: Fri, May 15, 2015 07:52 AM
> Subject: Re: Michael Shermer becomes sceptical about scepticism!
>
>
>
>
>  On Fri, May 15, 2015 at 12:21 PM, LizR  wrote:
>
>   On 15 May 2015 at 21:38, Telmo Menezes  wrote:
>
>   On Thu, May 14, 2015 at 3:07 AM, LizR  wrote:
>
>   On 13 May 2015 at 21:30, Telmo Menezes  wrote:
>
>
> Clouds, especially high clouds have some effect.  They reflect visible
> bands back to space and they also absorb and reemit IR.  Low clouds tend to
> increase heat load because they reflect in the day, but they insulate day
> and night.  It's not magic, it's just calculation.
>
>
>  Of course, I am not suggesting it's anything else.
>  My question is about complex interactions between these several
> phenomena. Does a change in the composition of the atmosphere affect cloud
> formation? In what ways? Does temperature?
>
>Is the idea that we shouldn't do anything because we haven't got a
> perfect model of the atmosphere?
>
>
>  Is it unreasonable to ask for evidence and serious risk analysis before
> messing with the energy supply chain that keeps 7 billion people alive?
>
>
>  Of course it isn't. Such risk analysis has been done, and it appears
> around 97% of the most competent people available in the field think the
> risks caused by the rising CO2 levels are more dangerous than the risks of
> doing nothing about them.
>
>
>  Ok, I wouldn't be surprised if you are right. I only claim ignorance,
> and ask questions when something looks fishy. I also care about science
> more than anything else, so arguments around what "97% of the most
> competent people" think mean nothing to me. For me, that is politician
> speak. Consensus are easy to manufacture, even in science. I care about
> correct predictions and a good understanding of the mechanisms. What makes
> these people so competent? Have they created models that led to correct
> predictions?
>
>  This is all just intellectual curiosity anyway. My opinion on the matter
> has no importance whatsoever. I don't even vote.
>
>
>
>


Re: What does the MGA accomplish?

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 20:34, meekerdb wrote:


On 5/14/2015 9:41 AM, Bruno Marchal wrote:


On 14 May 2015, at 07:13, meekerdb wrote:


On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:
On 13 May 2015 at 18:20, Russell Standish  
 wrote:


For a robust ontology, counterfactuals are physically  
instantiated,

therefore the MGA is invalid.

Can you elaborate on this? ISTM that counterfactuals aren't, and  
indeed
can't, be physically instantiated. (Isn't that what being  
counterfactual

means?!)
No - counterfactual just means not in this universe. If its not  
in any
universe, then its not just counterfactual, but actually  
illogical, or

impossible, or something.


If "not in any universe" is meant in the Kripke sense, then  
something not in any universe is something that is logically  
impossible.  But if "not in any universe" is meant in the MWI  
sense, then counterfactuals are only those outcomes consistent  
with QM but which don't happen.


OK.


I think it is only the latter kind of counterfactual that need be  
considered in computations.


Not OK. You beg the question of justifying why the physical  
computation wins. You then miss the comp promise of explaining the  
physical form from something simpler like the combinatorial, or the  
arithmetical, or the sigma_1 complete set, etc.


OK, I appreciate that.  But then what does it mean that the brain  
prosthesis the doctor installs must be counterfactually correct?


It means that, after the substitution is done, even in the case where you
are *not* hungry, it remains true that if you were hungry, you would eat.
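
A toy contrast in Python (my own illustration, not anything from Maudlin or
the MGA papers) may help: a counterfactually correct component is a function
of its input, answering correctly even for inputs that never actually occur,
while a recording merely replays the outputs of the one run that did occur.

def brain_component(hungry):
    # Correct on all inputs, including the ones never actually presented.
    return "eat" if hungry else "wait"

class Recording:
    # Replays the outputs of one particular run, ignoring the input.
    def __init__(self, trace):
        self.trace = iter(trace)
    def __call__(self, hungry):
        return next(self.trace)

actual_run = [brain_component(h) for h in [False, False, False]]   # never hungry
replay = Recording(actual_run)

print(brain_component(True))   # "eat"  -- the counterfactual is handled correctly
print(replay(True))            # "wait" -- the recording gets the counterfactual wrong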






Is there no restriction except consistency of the possible inputs?


?
"Consistent" applies to any system of beliefs producing (believed)
propositions. In classical systems, the consistent systems are those not
believing in some proposition A and in ~A, or, equivalently, not believing
the constant propositional falsity f:  ~[]f, or <>t.








Unless you talk as if UDA is understood, and suggest a way to
explain physical counterfactualness in terms of the physics
extracted from comp, which you assume is QM. In that case, I can
make sense of your sentence.


I'm trying to understand what "counterfactual correctness" means in  
the physical thought experiments.


It means that in the physically correct mimicking of the computation, like
the MOVIE, we would get the right output, or the relevant circuit
behavior, had we made some change in the system.


Maudlin, in MGA terms, adds the "Klaras", physically inactive devices
which would be triggered only to "restore" the counterfactual
correctness in case a change is introduced.
But, of course, restoring the counterfactualness at the right moment
makes you counterfactually correct, by definition, so if we accept the
physical supervenience (of consciousness on the physical activity of
the computation) then we have to accept the consciousness of MOVIE +
KLARA, which are identical during the experience, as the Klaras are
inactive.


So physical supervenience makes computationalism spurious, and it is
simpler NOT to assume a physical reality at the start, and to relate
consciousness to the semantics of the abstract program/person, which
happens to be supported, accidentally or not, by this or that universal system.


This extends Everett's formulation of the "Universal Wave/Multiverse"
to the sigma_1 arithmetic. That one is better seen as a web of dream
emulations, and the obligatory exercise now consists in justifying a
probability measure on them (cf. the FPI).


I say more on this in an answer to another post.

Bruno





Brent


Are you defending physicalism? Or are you trying to justify the  
appearance of physicalism in comp?


Sometimes, out of context, those two things can't avoid to look  
similar. At some point peole should perhaps make clear all what  
they assume.


Bruno





Brent



As I mentioned, a simple example is my decision between tea and  
coffee. In

the MWI (or an infinite universe) there are separate branches (or
locations) in which I have both - but in the branch where I had  
tea, I
didn't have coffee, and vice versa. And because those branches  
can't
communicate, the road not taken remains counterfactual and non- 
physical
within each branch. Isn't that enough for the MGA to not need to  
worry

about counterfactuals, even in the MWI/Level whatever multiverse?


Why is communication needed?






http://iridia.ulb.ac.be/~marchal/






Re: What does the MGA accomplish?

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 20:22, meekerdb wrote:


On 5/14/2015 9:19 AM, Bruno Marchal wrote:


On 14 May 2015, at 02:50, Russell Standish wrote:


On Wed, May 13, 2015 at 02:33:06PM +0200, Bruno Marchal wrote:





3. A recording of (2) supra being played back.


Nobody would call that a computation, except to evade comp's
consequences.



I do, because it is a computation, albeit a rather trivial one.


Yes, like a rock, in some theory of rock. It is not relevant for  
the argument.





It is
not to evade comp's consequences, however, which I already accept  
from

UDA1-7.


OK.





I insist on the point, because the MGA is about driving an
inconsistency between computational and physical supervenience,  
which

requires care and rigour to demonstrate, not careless mislabelling.


If you agree with the consequences of UDA1-7, then you don't need
step 8 (MGA) to understand the epistemological inconsistency
between computational supervenience and primitive-physical
supervenience (often assumed implicitly by the Aristotelians).


So I see where your problem comes from: you might believe that
step 8 shows that physical supervenience is wrong (not just the
primitive one). But that would be astonishing, because that physical
supervenience seems to me to be contained in the definition of
comp, which refers to a doctor with a physical body, who will
reinstall my "mind" in a digital and physical machine.


Step 8 just shows an epistemological contradiction between comp and a
primitive or physicalist notion of matter.


How can it do that when it never mentions a "physicalist notion of  
matter".  It only invokes ordinary experience and ideas of matter -  
without assuming anything about whether they are fundamental?




The contradiction is epistemological. It dissociates what we
observe from that primitive matter. Unless you attribute a magic
clairvoyance ability to Olympia (feeling the inactive Klaras nearby).









Whether (3) preserving consciousness is absurd or not (and I agree
with Russell that's too much of a stretch of intuition to judge);


There is no stretch of intuition; the MGA shows that you need to put
magic in the primitive matter to make it play a role in the
consciousness of physical events.



Where does the MGA show this? I don't believe you use the word  
"magic"

in any of your papers on the MGA.


Good point (if true; no time to verify, but it seems the idea
is there). I will use "magic" in some future publication. At some
point, when we apply logic to reality, we have to invoke the magic,
as with magic you can always suggest a theory is wrong: the Earth is
flat, it is just the photons that have very weird trajectories ...


I agree that I take for granted that in science we don't do any  
ontological commitment, so there is no proof at all about reality.


Then why do you say that supervenience of consciousness on physics  
has something to do with assuming physics is based on ur-stuff?


Just that comp1 -> comp2; that is, physics, assuming comp1, is not
the fundamental science, as it makes consciousness supervene on
all computations in the sigma_1 reality, but with the FPI, not
through particular possible emulations, although these have to be
justified above the substitution level.


It was also clearly intended that primitive-physical supervenience
entails that the movie will support the same conscious
experience as the one supported by the boolean graph. Indeed, the
point of Maudlin is that we can eliminate almost all physical
activity while keeping the counterfactual correctness (by the inert
Klaras), making the primitive-supervenience thesis (Aristotelianism,
physicalism) more absurd.






Sorry, but this does seem a rhetorical comment.


Who would have thought that?

I think you might underestimate the easiness of step 8, which
addresses only those who believe that there is a substantially real
*primitive* physical universe (whose existence we have to assume as an
axiom in the fundamental TOE), and that it is the explanation
of why we exist, have minds and are conscious.


That consciousness supervenes on the physical that we might extract
from comp is indeed what would follow if the physical does what
it has to do: give the "right" measure on the relative
computational histories.


It is for those who, like Peter Jones, and perhaps Brent and Bruce,
say at step 7 that the UD needs to be executed in a primitive
physical universe (to get the measure problem), with the intent to
save physicalism.


I don't see anything in the MGA that makes it specific to a  
*primitive* physics.  It just refers to ordinary physical  
realizations of computations, and so whatever it concludes applies  
to ordinary physics.  And ordinary physics doesn't depend on some  
assumption of primitive materialism - as evidenced by physicists like
Wheeler and Tegmark who speculate about what makes the equations work.



There

Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread spudboy100 via Everything List
Hello from the US. Here is an article in the WSJ by Bjorn Lomborg, speaking to
the climate cult ideology that pervades academia. Like Lomborg, I have to
believe in GW, but it ain't climate catastrophe, as the red-greens now choose
to label it. Like Lomborg, I believe there are things we can do to mitigate it.
In any case, here is a link to Lomborg's article (hoping it works, sans fee).

http://www.wsj.com/articles/the-honor-of-being-mugged-by-climate-censors-1431558936

Sent from AOL Mobile Mail



Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread Telmo Menezes
On Fri, May 15, 2015 at 12:21 PM, LizR  wrote:

> On 15 May 2015 at 21:38, Telmo Menezes  wrote:
>
>> On Thu, May 14, 2015 at 3:07 AM, LizR  wrote:
>>
>>> On 13 May 2015 at 21:30, Telmo Menezes  wrote:
>>>

> Clouds, especially high clouds have some effect.  They reflect visible
> bands back to space and they also absorb and reemit IR.  Low clouds tend 
> to
> increase heat load because they reflect in the day, but they insulate day
> and night.  It's not magic, it's just calculation.
>

 Of course, I am not suggesting it's anything else.
 My question is about complex interactions between these several
 phenomena. Does a change in the composition of the atmosphere affect cloud
 formation? In what ways? Does temperature?

 Is the idea that we shouldn't do anything because we haven't got a
>>> perfect model of the atmosphere?
>>>
>>
>> Is it unreasonable to ask for evidence and serious risk analysis before
>> messing with the energy supply chain that keeps 7 billion people alive?
>>
>
> Of course it isn't. Such risk analysis has been done, and it appears
> around 97% of the most competent people available in the field think the
> risks caused by the rising CO2 levels are more dangerous than the risks of
> doing nothing about them.
>

Ok, I wouldn't be surprised if you are right. I only claim ignorance, and
ask questions when something looks fishy. I also care about science more
than anything else, so arguments around what "97% of the most competent
people" think mean nothing to me. For me, that is politician speak.
Consensus is easy to manufacture, even in science. I care about correct
predictions and a good understanding of the mechanisms. What makes these
people so competent? Have they created models that led to correct
predictions?

This is all just intellectual curiosity anyway. My opinion on the matter
has no importance whatsoever. I don't even vote.





Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread LizR
On 15 May 2015 at 21:38, Telmo Menezes  wrote:

> On Thu, May 14, 2015 at 3:07 AM, LizR  wrote:
>
>> On 13 May 2015 at 21:30, Telmo Menezes  wrote:
>>
>>>
 Clouds, especially high clouds have some effect.  They reflect visible
 bands back to space and they also absorb and reemit IR.  Low clouds tend to
 increase heat load because they reflect in the day, but they insulate day
 and night.  It's not magic, it's just calculation.

>>>
>>> Of course, I am not suggesting it's anything else.
>>> My question is about complex interactions between these several
>>> phenomena. Does a change in the composition of the atmosphere affect cloud
>>> formation? In what ways? Does temperature?
>>>
>>> Is the idea that we shouldn't do anything because we haven't got a
>> perfect model of the atmosphere?
>>
>
> Is it unreasonable to ask for evidence and serious risk analysis before
> messing with the energy supply chain that keeps 7 billion people alive?
>

Of course it isn't. Such risk analysis has been done, and it appears around
97% of the most competent people available in the field think the risks
caused by the rising CO2 levels are more dangerous than the risks of doing
nothing about them.



Re: Michael Shermer becomes sceptical about scepticism!

2015-05-15 Thread Telmo Menezes
On Thu, May 14, 2015 at 3:07 AM, LizR  wrote:

> On 13 May 2015 at 21:30, Telmo Menezes  wrote:
>
>>
>>> Clouds, especially high clouds have some effect.  They reflect visible
>>> bands back to space and they also absorb and reemit IR.  Low clouds tend to
>>> increase heat load because they reflect in the day, but they insulate day
>>> and night.  It's not magic, it's just calculation.
>>>
>>
>> Of course, I am not suggesting it's anything else.
>> My question is about complex interactions between these several
>> phenomena. Does a change in the composition of the atmosphere affect cloud
>> formation? In what ways? Does temperature?
>>
>> Is the idea that we shouldn't do anything because we haven't got a
> perfect model of the atmosphere?
>

Is it unreasonable to ask for evidence and serious risk analysis before
messing with the energy supply chain that keeps 7 billion people alive?


> I assume that isn't the point - after all, if we followed that logic we'd
> still be living in caves.
>

If progress depended on planet-wide collective action and consensus, we
would surely still be living in caves. We are not living in caves because
people look for realistic solutions to the problems they are faced with.
There is no planetary "we", and I think that's a good thing. In some
dystopian scenarios, survival may not be worth it.


> But then what is the point?
>

The point is to do risk analysis and treat the problem as a trade-off,
because cutting CO2 emissions could itself have potentially
catastrophic consequences too.
