What happens to old entanglements?

2019-03-12 Thread Pierz
A question for the physicists. I understand that entanglement is 
monogamous, which is really just a way of saying that a system's 
correlations with other systems cannot exceed +-1. Thus a maximally 
entangled system has no room for entanglement with any other system. The 
question is what happens to previous entanglements when a particle 
interacts with another particle, such that it becomes maximally entangled 
with it. Are prior entanglements completely obliterated, or are they just 
obliterated FAPP, meaning that maximal entanglement is also only FAPP? ISTM 
that some remote trace of entanglement - a kind of micro-entanglement - 
must remain?
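A small numerical illustration of monogamy may help here; this is a sketch added for illustration (numpy, the Wootters concurrence as the pairwise entanglement measure, and two standard three-qubit states are my own choices, not anything from the post). Once qubit A is maximally entangled with B, its concurrence with a third qubit C is exactly zero, while in a W state the entanglement is shared and no pair is maximal.

import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix rho (4x4).
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def pair_state(psi, pair):
    # Reduced density matrix of two of the three qubits of a pure state psi (length 8).
    rho = np.einsum('abc,def->abcdef', psi.reshape(2, 2, 2), psi.conj().reshape(2, 2, 2))
    if pair == (0, 1):
        r = np.einsum('abcdec->abde', rho)   # trace out qubit 2
    elif pair == (0, 2):
        r = np.einsum('abcdbf->acdf', rho)   # trace out qubit 1
    else:
        r = np.einsum('abcaef->bcef', rho)   # trace out qubit 0
    return r.reshape(4, 4)

# Qubit A maximally entangled with B (a Bell pair), with C uncorrelated.
bell = np.zeros(8, dtype=complex)
bell[0b000] = bell[0b110] = 1 / np.sqrt(2)

# W state: the entanglement is shared out among the pairs.
w = np.zeros(8, dtype=complex)
w[0b001] = w[0b010] = w[0b100] = 1 / np.sqrt(3)

for name, psi in [("Bell_AB x C", bell), ("W state", w)]:
    print(name,
          "C(A,B) = %.3f" % concurrence(pair_state(psi, (0, 1))),
          "C(A,C) = %.3f" % concurrence(pair_state(psi, (0, 2))))

For the Bell pair the printout gives C(A,B) = 1 and C(A,C) = 0; for the W state both pairwise concurrences come out at 2/3, so the budget is shared but never maxed out for any pair. Whether a faint "micro-entanglement" survives in practice is then exactly the FAPP question: it depends on how close the new pair really gets to maximal.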



Re: Black holes and the information paradox

2019-03-12 Thread John Clark
On Tue, Mar 12, 2019 at 8:41 AM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> The time it takes a black hole (BH) to quantum decay completely is
> proportional to the cube of the mass, which means the black hole has
> emitted half its mass in 7/8ths of its expected duration. This means that
> when a black hole is reduced to half of its original mass the bipartite
> entangled photons with the BH emitted a long time ago, for a solar mass
> black hole some 10^{67} years, are now entangled with not only the BH, but
> with newly emitted photons. This is a big problem. This is telling us there
> is a difficulty in making entanglement entropy fit with the Bekenstein
> bound and that bipartite entanglements are transformed into tripartite
> entanglements. This means quantum unitarity fails. This is not something
> people are willing to abandon so easily, so what AMPS [A. Almheiri, D. Marolf,
> J. Polchinski, J. Sully, "Black holes: complementarity or firewalls?",
> JHEP 02 (2013), arXiv:1207.3123] proposed was that instead of losing
> quantum unitarity maybe the equivalence principle of general relativity
> fails. This means the BH becomes a sort of naked singularity at the
> horizon, called the firewall, where anything that enters is just demolished
> or "burned up" as it would in the interior of a BH.
>

First of all thanks a lot for taking the time to write a very interesting
post. I'm trying to understand why the firewall is hot. I understand that
if I were hovering just above the event horizon in a super powerful rocket,
time would slow down so much that I'd be able to observe the Black Hole
evaporate; even if it took 10^67 years for an observer far from the Black
Hole, to me it would only take a few seconds, and that means the Hawking
Radiation would burn me to a crisp. And I understand that until very
recently everybody said that if, rather than hovering, I were freely falling
I wouldn't even notice I'd passed the Event Horizon, but if the Equivalence
Principle breaks down at that point perhaps I would notice it after all. Is
that a productive way to think about the Firewall? I've heard some say it's
the breaking of entanglement needed to avoid tripartite entanglements and
preserve quantum unitarity that causes the Firewall, but the connection
between heat and broken entanglement is not intuitively obvious to me.


> > This provides me with the motivation at least to think that spacetime
> and quantum information are much the same.
>

I think if that could be shown to be less wrong than current ideas it would
be one of the greatest triumphs in the history of science.


> > It also relates to quantum error correction codes and the Hamming
> distance. If you have a library where books are not reshelved regularly
> then when about half the books become irregularly stacked off their duly
> appointed shelves it becomes much harder to reshelve them. This is a limit
> on error correction, and the Page time or firewall is related to this.
>
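To see that error-correction limit in miniature, here is a toy sketch (a 3-bit repetition code in Python; the example and numbers are mine, not anything from the post): a code of Hamming distance d corrects fewer than d/2 flipped bits, and majority voting visibly fails once about half the "books" are off their shelves.

from collections import Counter

def decode(bits):
    # Majority vote: pick whichever bit value most of the copies still agree on.
    return Counter(bits).most_common(1)[0][0]

codeword = [1, 1, 1]          # 3-bit repetition code (Hamming distance 3) for the bit 1
print(decode(codeword))       # 1: no errors
print(decode([1, 0, 1]))      # 1: a single flip (fewer than d/2 errors) is corrected
print(decode([0, 0, 1]))      # 0: two flips cross the d/2 limit and decode to the wrong bit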

I can see how that might cause a big jump in entropy when the Black Hole
reaches the Page time, but the universe isn't old enough for any Black Hole
to have reached the Page time, so I don't see the connection to an ultra-hot
firewall. What causes the heat?

John K Clark



Re: Black holes and the information paradox

2019-03-12 Thread 'Chris de Morsella' via Everything List

 
 
  On Mon, Mar 11, 2019 at 7:04 PM, Bruce Kellett wrote:  
 On Tue, Mar 12, 2019 at 12:43 PM John Clark  wrote:

On Mon, Mar 11, 2019 at 8:42 PM Lawrence Crowell 
 wrote:


> all the radiation emitted is entangled with the black hole, which would then 
> mean the entanglement entropy increases beyond the Bekenstein bound. 


Could nature be trying to tell us that the Bekenstein bound is simply wrong and 
spacetime is continuous and can store information at scales even smaller than 
the Planck area? After all, as far as I know there is no experimental evidence 
the Bekenstein bound exists or that spacetime ends when things get smaller than 
10^-35 meters.

Points that I have made many times, here and elsewhere. No one is listening, it 
would appear. Actually, though, Penrose has worked this out for himself. See 
"The Road to Reality".
Speaking to this, there exists some tantalizing indirect measured evidence for 
the scale of any structure of spacetime. An ESA satellite luckily caught a very 
powerful and also very distant gamma-ray burst as it happened, and was able to 
measure it across multiple different frequencies -- capturing signal data from 
gamma rays to X-rays, ultraviolet, visible light, infrared and various radio 
frequencies -- and using this data it was possible to establish experimentally 
that spacetime is in fact smooth -- i.e. not pixelated -- down to scales far 
smaller than the Planck scale. 
Even though we cannot directly measure anything at this exceedingly small scale 
(it would require an atom smasher as big as our galaxy), this elegant experiment 
leveraged the more than 9 billion light years that light from this event 
travelled through spacetime before reaching us, 9 billion years later, to infer 
these conclusions, excluding the possibility of spacetime being pixelated at the 
Planck scale and even at scales far smaller than the Planck scale. 
The 9 billion light years these various-frequency photons travelled was itself 
used as a kind of lever to deduce that which we cannot measure directly. 
Basically, if I recall, it was based on the assumption that measurable 
properties of the photons (I forget which one exactly) would over vast distances 
become subtly affected by repeatedly crossing pixel boundaries, at many 
different candidate pixelation scales of spacetime, which were one by one 
excluded down to some incredibly small scale (if I recall, something like a 
trillion times smaller than the Planck scale). So far I have not heard of any 
falsification of the results of these experimental measurements. 
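To put a rough number on that lever: the following is an order-of-magnitude sketch added for illustration (the linear, Planck-suppressed dispersion model and the figures below are my assumptions, not details of the actual measurement). If the photon speed differed from c by a factor of order E/E_Planck, nine billion years of flight would stretch that tiny factor into a measurable spread in arrival times across the burst's spectrum.

# delta_t ~ (delta_E / E_Planck) * (light travel time), for a linear energy dependence.
E_PLANCK_EV = 1.22e28                     # Planck energy in eV
DELTA_E_EV = 1.0e6                        # ~MeV gamma rays compared with ~eV optical photons
SECONDS_PER_YEAR = 3.156e7
travel_time_s = 9.0e9 * SECONDS_PER_YEAR  # ~9 billion years of light travel time

delta_t_s = (DELTA_E_EV / E_PLANCK_EV) * travel_time_s
print(delta_t_s)                          # ~2e-5 s: tens of microseconds

Burst light curves can be timed far more finely than that, so the absence of any such energy-dependent smearing pushes the allowed granularity scale well below the naive Planck value; that is the sense in which the long baseline acts as a lever.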
Chris de Morsella 
Bruce 

John K Clark




  



Re: Questions about the Equivalence Principle (EP) and GR

2019-03-12 Thread agrayson2000


On Thursday, March 7, 2019 at 3:19:39 AM UTC-7, agrays...@gmail.com wrote:
>
>
>
> On Wednesday, March 6, 2019 at 11:42:33 AM UTC-7, Brent wrote:
>>
>>
>>
>> On 3/6/2019 1:27 AM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Wednesday, March 6, 2019 at 1:03:16 AM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 3/5/2019 10:02 PM, agrays...@gmail.com wrote:
>>>
>>>
>>>
>>> On Saturday, March 2, 2019 at 2:29:50 AM UTC-7, agrays...@gmail.com 
>>> wrote: 



 On Friday, March 1, 2019 at 10:14:02 PM UTC-7, agray...@gmail.com 
 wrote: 
>
>
>
> On Thursday, February 28, 2019 at 12:09:27 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/28/2019 4:07 AM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Wednesday, February 27, 2019 at 8:10:16 PM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/27/2019 4:58 PM, agrays...@gmail.com wrote:
>>>
>>> *Are you assuming uniqueness to tensors; that only tensors can 
>>> produce covariance in 4-space? Is that established or a mathematical 
>>> speculation? TIA, AG *
>>>
>>>
>>> That's looking at it the wrong way around.  Anything that transforms 
>>> as an object in space, must be representable by tensors. The informal 
>>> definition of a tensor is something that transforms like an object, 
>>> i.e. in 
>>> three space it's something that has a location and an orientation and 
>>> three 
>>> extensions.  Something that doesn't transform as a tensor under 
>>> coordinate 
>>> system changes is something that depends on the arbitrary choice of 
>>> coordinate system and so cannot be a fundamental physical object.
>>>
>>> Brent
>>>
>>
>> 1) Is it correct to say that tensors in E's field equations can be 
>> represented as 4x4 matrices which have different representations 
>> depending 
>> on the coordinate system being used, but represent the same object? 
>>
>>
>> That's right as far as it goes.   Tensors can be of any order.  The 
>> curvature tensor is 4x4x4x4.
>>
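As a concrete version of "same object, different 4x4 component matrix", here is a small numpy sketch added for illustration (the boost speed and the sample 4-vector are arbitrary choices, not anything from the thread): the components of a rank-2 tensor change under a Lorentz boost as T' = Λ T Λ^T, while scalars built from it, and the metric itself, are unchanged.

import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],   # boost along x, with c = 1
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])              # Minkowski metric, signature (-,+,+,+)

p = np.array([2.0, 0.5, 0.3, 0.0])                # an arbitrary 4-vector
T = np.outer(p, p)                                # a rank-2 tensor T^{mu nu} = p^mu p^nu

p2 = L @ p                                        # vector components in the boosted frame
T2 = L @ T @ L.T                                  # rank-2 transformation rule

print(np.allclose(T, T2))                         # False: the 4x4 component matrix changed
print(p @ eta @ p, p2 @ eta @ p2)                 # same value (up to rounding): the invariant p.p
print(np.allclose(L.T @ eta @ L, eta))            # True: a Lorentz boost leaves the metric unchanged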
>> 2) In SR we use the LT to transform from one* non-accelerating* 
>> frame to another. In GR, what is the transformation for going from one 
>> *accelerating* frame to another? 
>>
>>
>> The Lorentz transform, but only in a local patch.
>>
>
> *That's what I thought you would say. But how does this advance 
> Einstein's presumed project of finding how the laws of physics are 
> invariant for accelerating frames? How did it morph into a theory of 
> gravity? TIA, AG *
>

 *Or suppose, using GR, that two frames are NOT within the same local 
 patch.  If we can't use the LT, how can we transform from one frame to the 
 other? TIA, AG *

 *Or suppose we have two arbitrary accelerating frames, again NOT within 
 the same local patch, is it true that Maxwell's Equations are covariant 
 under some transformation, and what is that transformation? TIA, AG*

>>>
>>>
>>> *I think I can simplify my issue here, if indeed there is an issue: did 
>>> Einstein, or anyone, ever prove what I will call the General Principle of 
>>> Relativity, namely that the laws of physics are invariant for accelerating 
>>> frames? If the answer is affirmative, is there a transformation equation 
>>> for Maxwell's Equations which leaves them unchanged for arbitrary 
>>> accelerating frames? TIA, AG *
>>>
>>>
>>> Your question isn't clear.  If you're simply asking about the equations 
>>> describing physics* as expressed* in an accelerating (e.g. rotating) 
>>> reference frame, that's pretty trivial.  You write the equations in 
>>> whatever reference frame is convenient (usually an inertial one) and then 
>>> transform the coordinates to the accelerated frame coordinates.   But if 
>>> you're asking about what equations describe some physical system while it 
>>> is being accelerated as compared to it not being accelerated, that's more 
>>> complicated. 
>>>
>>
>> *Thanks, but I wasn't referring to either of those cases; rather, the 
>> case of transforming from one accelerating frame to another accelerating 
>> frame, and whether the laws of physics are invariant. *
>>
>>
>> For simplicity consider just flat Minkowski spacetime.  If you know the 
>> motion of a particle in a reference frame, whether the reference frame is 
>> accelerated or not, you can determine its motion in any other reference 
>> frame.  As for the particle path through spacetime, that's just some 
>> geometric path and you're changing from describing it in one coordinate 
>> system to describing it in another system...no physics is changing, just 
>> the description.  If the reference frames are accelerated you get extra 
>> terms in this description, like "centrifugal acceleration" which are just 
>> artifacts of the frame choice. This is the same as in Newtonian mechanics.  
>>
>> But if the particle is actually accelerate

Re: Black holes and the information paradox

2019-03-12 Thread agrayson2000


On Tuesday, March 12, 2019 at 12:18:51 PM UTC-6, Bruno Marchal wrote:
>
>
> On 11 Mar 2019, at 03:16, agrays...@gmail.com  wrote:
>
> They say if information is lost, determination is toast. 
>
>
> That is not correct. If information is lost, reversibility is toast, but 
> determination can be conserved.
>

*If reversibility is lost, how can determinism be preserved? It can't, and 
this is the position Hawking took IIUC. What's your definition of 
determinism? Doesn't it require the laws of physics to be time reversible? 
AG *

>
> Typically the Kestrel bird K is irreversible, as it eliminates information: 
> Kxy = x. From KSI you get S, but from S, even knowing it comes from the 
> application of K, you cannot retrieve I. Similarly with addition and 
> multiplication in arithmetic. From 18 you can’t guess it came from 7 and 
> 11. Erasing information is common.
>
> Some do not tolerate that, so Church works in the base {I, B, W, C}, 
> where I is [x]x, B is [x][y][z] x(yz), etc. 
>
> That base is not combinatorially complete, but is still Turing complete, 
> illustrating that we can do computation without eliminating any 
> information. (None of I, B, C and W eliminates information.)
>
> But the quantum eliminates even the combinator W (Wxy = xyy), or the lambda 
> expression [x][y]. xyy. That is, we cannot eliminate information, but we 
> cannot duplicate it either!
>
> Now, the problem is that the BCI combinator algebra is not 
> Turing-complete. It is the core of the physical reality, and Turing 
> universality needs the addition of modal “combinators”.
>

*I have no idea what you're referring to. AG *

>
>
>
>
> But doesn't QM inherently affirm information loss? I mean, although, say, 
> the SWE can be run backward in time to reconstruct any wf it describes, we 
> can never reconstruct or play backward Born's rule, in the sense of knowing 
> what original particular state gave a particular outcome. That is, there is 
> no rule in QM to predict a particular outcome, so how can we expect that, 
> given some outcome, we can know from whence it arose? AG
>
>
>
> You can run backward by discarding information. The Born rule, or the 
> projection inherent in the measurement, discards information when you 
> abandon the collapse postulate. That is why “fusing” histories can be done 
> by relative amnesia, and also that is how Church emulates “local kestrels” 
> capable of “apparently eliminating information”, but only with selected 
> objects, like the numbers: K *n* *m* = *n* 
>
> A quantum computer (essentially reversible during the processing) is 
> Turing complete, and so can simulate all classical computers discarding 
> information all the time, but in the details, everything is locally 
> deterministic and reversible.
>
> Bruno
>
>
>
>
>
>



Re: Black holes and the information paradox

2019-03-12 Thread agrayson2000


On Tuesday, March 12, 2019 at 12:18:50 PM UTC-6, Bruno Marchal wrote:
>
>
> On 11 Mar 2019, at 09:54, agrays...@gmail.com  wrote:
>
>
>
> On Monday, March 11, 2019 at 1:43:05 AM UTC-6, Liz R wrote:
>>
>> I thought QM was deterministic, at least mathematically - and I guess in 
>> the MWI?
>>
>
> *QM is deterministic, but only as far as reconstructing wf's as time is 
> reversed; it can't reconstruct individual events, which are without 
> ostensible cause. As for the MWI, I don't think it's deterministic since 
> the different branches are never in causal contact. AG *
>
>
> It has to be. 
>

*So if I am in one world of many, how can I time-reverse my outcome to 
reconstruct something from another world, the one that gave rise to the 
many worlds? AG*
 

> Without wave collapse the evolution is “just” a unitary transformation. It 
> is a vector rotating in some (Hilbert) space. Only the wave collapse 
> postulate brings 3p-indeterminacy. In Everett the indeterminacy is explained 
> like in arithmetic, or the combinators, with the digital mechanistic 
> hypothesis (in the cognitive science, not in physics). 
>

*Can't we keep your theory out of this? AG *

>
> Bruno
>
>
>
>
>> I mean everyone can't have forgotten quantum indeterminacy when 
>> discussing the BHIP, surely?
>>
>
>  
>



Re: Recommend this article, Even just for the Wheeler quote near the end

2019-03-12 Thread Lawrence Crowell
On Sunday, March 10, 2019 at 4:19:16 PM UTC-6, John Clark wrote:
>
> On Sun, Mar 10, 2019 at 5:29 PM Lawrence Crowell  > wrote:
>
> *> in the biological world certain problems that are NP are figured out. 
>> This runs from ants finding the minimal distance for their trails or even 
>> protistans negotiating some space. Ants are good at approximately solving 
>> the traveling salesman problem, the classic NP algorithm.*
>
>
I read the post by Aaronson on his blog about this. It is the case that 
nature does find approximate solutions to NP problems in P time. The 
protein folding problem is another good example. The fact it is not perfect 
is seen with transmissible spongiform encephalopathies (TSEs), where 
polypeptides are misfolded in inappropriate ways. TSEs are running 
rampant in deer populations in the US, which might mean before long they are 
in cattle. With protein folding, however, the success rate is amazingly high, 
and that is where natural selection is at play. There are chaperone proteins 
that adjust the folding of proteins, and clearly evolution has honed those 
that shape proteins in an optimal way. Again, this tends to go with the 
point I was making. Repeated trials and correction serve the role of closed 
timelike curves. Aaronson, Bavarian and Gueltrini wrote a paper 
[https://arxiv.org/abs/1609.05507] on how closed timelike curves that 
interact with a Turing machine that is not closed timelike will solve NP in 
P. The closed timelike curves, as paths in a path integral, constructively 
and destructively interfere to give the optimal solution. More prosaically, 
in our time a repeated effort will approximate this in the way an ensemble 
can approximate a quantum system.
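A toy illustration of "repeated trials and correction" closing in on an NP-hard optimum (a Python sketch; the greedy-restart heuristic below is my own stand-in for the ant behaviour, not a model from the paper cited):

import itertools, math, random

random.seed(0)
n = 8
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(a, b):
    return math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])

def tour_len(t):
    return sum(dist(t[i], t[(i + 1) % n]) for i in range(n))

# Exact optimum by brute force: only feasible for tiny n, since TSP is NP-hard.
exact = min(tour_len((0,) + p) for p in itertools.permutations(range(1, n)))

# "Ant-like" repeated greedy trials from different starting cities: cheap and
# usually close, but with no guarantee of hitting the true optimum.
def greedy(start):
    left, tour = set(range(n)) - {start}, [start]
    while left:
        nxt = min(left, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        left.remove(nxt)
    return tour_len(tour)

approx = min(greedy(s) for s in range(n))
print(exact, approx)   # approx >= exact; for small instances it is typically within a few percent

The point is only that cheap repeated stochastic trials land close to the optimum without ever certifying it, which is the sense in which nature "solves" these problems.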

LC


 

arXiv:1609.05507 [quant-ph]

Computability Theory of Closed Timelike Curves

Authors: Scott Aaronson, Mohammad Bavarian, Giulio Gueltrini

Abstract: We ask, and answer, the question of what's computable by Turing 
machines equipped with time travel into the past: that is, closed timelike 
curves or CTCs (with no bound on their size). We focus on a model for CTCs 
due to Deutsch, which imposes a probabilistic consistency condition to 
avoid grandfather paradoxes. Our main result is that computers with CTCs 
can solve exactly the problems that are Tu…

Submitted 18 September, 2016; originally announced September 2016.
 

>
> It's easy to solve the traveling salesman problem if the number of cities 
> involved is small, but I see no evidence that nature can in general solve 
> NP problems in polynomial time. Of course there are many claims to the 
> contrary so Quantum Computer expert Scott Aaronson decided to put the 
> matter to a simple experimental test, this is what he reported:
>
>
> *"taking two glass plates with pegs between them, and dipping the 
> resulting contraption into a tub of soapy water. The idea is that the soap 
> bubbles that form between the pegs should trace out the minimum Steiner 
> tree — that is, the minimum total length of line segments connecting the 
> pegs, where the segments can meet at points other than the pegs themselves. 
> Now, this is known to be an NP-hard optimization problem. So, it looks like 
> Nature is solving NP-hard problems in polynomial time!*
>
> *Long story short, I went to the hardware store, bought some glass plates, 
> liquid soap, etc., and found that, while Nature does often find a minimum 
> Steiner tree with 4 or 5 pegs, it tends to get stuck at local optima with 
> larger numbers of pegs. Indeed, often the soap bubbles settle down to 
> a configuration which is not even a tree (i.e. contains “cycles of soap”), 
> and thus provably can’t be optimal.*
>
> *The situation is similar for protein folding. Again, people have said 
> that Nature seems to be solving an NP-hard optimization problem in every 
> cell of your body, by letting the proteins fold into their minimum-energy 
> configurations. But there are two problems with this claim. The first 
> problem is that proteins, just like soap bubbles, sometimes get stuck in 
> suboptimal configurations — indeed, it’s believed that’s exactly what 
> happens with Mad Cow Disease. The second problem is that, to the 
> extent that proteins do usually fold into their optimal configurations, 
> there’s an obvious reason why they would: natural selection! If  there were 
> a protein that could only be folded by proving the Riemann Hypothesis, the 
> gene that coded for it would quickly get weeded out of the gene pool." *
>  
>
>> *> The ants crawl all over the place and the trails with the largest 
>> pheremone density tend to be those tha

Re: Recommend this article, Even just for the Wheeler quote near the end

2019-03-12 Thread Bruno Marchal

> On 10 Mar 2019, at 21:16, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 3/10/2019 6:45 AM, Bruno Marchal wrote:
>> 
>>> On 9 Mar 2019, at 01:16, 'Brent Meeker' via Everything List 
>>> >> > wrote:
>>> 
>>> 
>>> 
>>> On 3/8/2019 2:28 AM, Bruno Marchal wrote:
> Why is the probability not 1.0.  Why is there any effect at all in any 
> continuation?  Why is experience dependent on physics, if it is just a 
> matter of timeless arithmetical relations.
 Because to get physics you need to be able to make prediction.
>>> 
>>> But why do you need to "get physics".  You seem to be arguing backwards 
>>> from the conclusion you want.  You know you need to get physics to make a 
>>> prediction, otherwise your theory is useless.  So then you argue that 
>>> therefore substituting for brain parts is necessary because that makes 
>>> "getting physics" necessary.
>> 
>> I don’t understand. What do you mean by “substituting brain parts is 
>> necessary”. It is my working hypothesis.
> 
> But then you reach a contradiction that brain parts don't exist and are 
> irrelevant to thought.

You confuse a brain made of matter with a brain made of primary matter. That 
confusion is correct in the theology/metaphysics of Aristotle, but it is invalid 
if the brain/body is Turing emulable. 



> 
>> It is exactly the same hypothesis made by Darwin, and most scientists since. 
>> That physics has to be recovered from arithmetic is shown to be a 
>> consequence of that theory. And the proofs I have given are constructive, so 
>> it explains how to recover physics from arithmetic. Most of the weirdness of 
>> quantum physics becomes indisputable arithmetic facts. In fact, the 
>> classical, or quasi classical part of physics is far more difficult to be 
>> derived, but it has still to be derivable, unless Mechanism is false (in 
>> which case we are back at the start).
>> 
>> Keep in mind that with Mechanism, physicalism is already refuted.
> 
> A tautology: With Communism, capitalism is already refuted.


?

Mechanism is only the idea that there is no magic operating in the brain. Many 
people confuse Mechanism and Materialism, and strong-atheists tend to employ 
Mechanism, in the (weakly) materialist frame, to avoid dualism and to sweep the 
mind-body problem under the rug.

So it is an important point that when we look closer, we see that Mechanism is 
incompatible with materialism and physicalism.





> 
>> With physicalism, you need a god to select a computation, or a collection of 
>> computations, to make a prediction. But if that God exists, you cannot say 
>> that you survive a digital substitution of the brain “qua computation”. You 
>> can still say yes to a doctor, invoking the strangest magical abilities of 
>> your god or another.
>> 
>> If you doubt this, just tell me how a Nature, or a Primary Matter, or any 
>> God, selects the computations, which all occur (are executed) in the 
>> (sigma_1) arithmetical reality.
>> 
>> If you argue that the computations in arithmetic are not real, you again 
>> invoke your god. The word “real” has to be avoided in science, especially in 
>> theology when done with the scientific method.
> 
> But you invoke your god to justify your argument for your god: In fact, the 
> classical, or quasi classical part of physics is far more difficult to be 
> derived, but it has still to be derivable, unless Mechanism is false (in 
> which case we are back at the start).

Mechanism uses a God, which is just the sigma_1 truth, in which everybody 
believes already. Then I prove that if Mechanism is true, physics becomes a 
branch of arithmetic “seen from inside” (using the Gödel-Kleene recursion 
theorem to define precisely that arithmetical “inside-view”).

Are you able to doubt the existence of a PRIMARY physical universe? Do you see 
what that means?

Bruno




> 
> Brent
> 



Re: Black holes and the information paradox

2019-03-12 Thread Bruno Marchal

> On 11 Mar 2019, at 03:16, agrayson2...@gmail.com wrote:
> 
> They say if information is lost, determination is toast.

That is not correct. If information is lost, reversibility is toast, but 
determination can be conserved.

Typically the Kestrel bird K is irreversible, as it eliminates information: Kxy 
= x. From KSI you get S, but from S, even knowing it comes from the application 
of K, you cannot retrieve I. Similarly with addition and multiplication in 
arithmetic. From 18 you can’t guess it came from 7 and 11. Erasing information 
is common.

Some do not tolerate that, so Church works in the base {I, B, W, C}, where I 
is [x]x, B is [x][y][z] x(yz), etc. 

That base is not combinatorially complete, but is still Turing complete, 
illustrating that we can do computation without eliminating any information. 
(None of I, B, C and W eliminates information.)

But the quantum eliminates even the combinator W (Wxy = xyy), or the lambda 
expression [x][y]. xyy. That is, we cannot eliminate information, but we cannot 
duplicate it either!

Now, the problem is that the BCI combinator algebra is not Turing-complete. It 
is the core of the physical reality, and Turing universality needs the addition 
of modal “combinators”.
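For readers who have not met the combinators, here is a tiny sketch in curried Python lambdas (the Python rendering is my own; Bruno's [x]... notation is the usual lambda abstraction):

# K discards its second argument, which is the "information erasing" step described above.
K = lambda x: lambda y: x                       # Kxy = x
I = lambda x: x                                 # Ix  = x
B = lambda x: lambda y: lambda z: x(y(z))       # Bxyz = x(yz)   (composition)
C = lambda x: lambda y: lambda z: x(z)(y)       # Cxyz = xzy     (argument swap)
W = lambda x: lambda y: x(y)(y)                 # Wxy  = xyy     (duplication / cloning)
S = lambda x: lambda y: lambda z: x(z)(y(z))

# "From KSI you get S": the I is thrown away, and nothing in the result lets you recover it.
print(K(S)(I) is S)                             # True

B, C and I neither erase nor duplicate their arguments, which is why the BCI fragment can neither delete nor clone, the flavour pointed to in the quantum case, while K erases and W clones.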




> But doesn't QM inherently affirm information loss? I mean, although, say, the 
> SWE can be run backward in time to reconstruct any wf it describes, we can 
> never reconstruct or play backward Born's rule, in the sense of knowing what 
> original particular state gave a particular outcome. That is, there is no 
> rule in QM to predict a particular outcome, so how can we expect, that given 
> some outcome, we can know from whence it arose? AG


You can run backward by discarding information. The Born rule, or the projection 
inherent in the measurement, discards information when you abandon the collapse 
postulate. That is why “fusing” histories can be done by relative amnesia, and 
also that is how Church emulates “local kestrels” capable of “apparently 
eliminating information”, but only with selected objects, like the numbers: K n m 
= n 

A quantum computer (essentially reversible during the processing) is Turing 
complete, and so can simulate all classical computers discarding information 
all the time, but in the details, everything is locally deterministic and 
reversible.
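A small numerical sketch of that distinction, added for illustration (the random Hamiltonian and state are arbitrary, and "collapse" is modelled as a bare projection): unitary Schrödinger evolution can be undone exactly by applying its adjoint, while the post-measurement state keeps no record of the amplitudes it came from.

import numpy as np

rng = np.random.default_rng(1)

# A random qubit state and a random Hermitian "Hamiltonian".
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2

# Unitary evolution U = exp(-iH), built from the spectral decomposition of H.
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals)) @ V.conj().T

# Reversible: running U backward (its adjoint) recovers psi exactly.
print(np.allclose(U.conj().T @ (U @ psi), psi))      # True

# Not reversible: projecting onto |0> ("collapse") gives the same post-measurement
# state for every psi with a nonzero |0> component, so psi cannot be reconstructed.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
collapsed = P0 @ psi
collapsed /= np.linalg.norm(collapsed)
print(collapsed)                                     # |0> up to an overall phase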

Bruno





> 



Re: My son the mathematician

2019-03-12 Thread Bruno Marchal

> On 11 Mar 2019, at 08:46, Liz R  wrote:
> 
> Here is his first co-authored paper (at the age of 20).
> 
> Topology and its Applications, Volume 254, 1 March 2019, Pages 85-100
> 
> Extending bonding functions in generalized inverse sequences
> 
> Iztok Banič, Simon Goodwin and Michael Lockyer
> 
> (he's the one in the middle)


Congratulations Liz! Nice to hear from you,

Bruno




> 
> https://www.sciencedirect.com/science/article/abs/pii/S0166864118304449 
> 
> 
> 
> 



Re: Black holes and the information paradox

2019-03-12 Thread Bruno Marchal

> On 11 Mar 2019, at 09:54, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Monday, March 11, 2019 at 1:43:05 AM UTC-6, Liz R wrote:
> I thought QM was deterministic, at least mathematically - and I guess in the 
> MWI?
> 
> QM is deterministic, but only as far as reconstructing wf's as time is 
> reversed; it can't reconstruct individual events, which are without 
> ostensible cause. As for the MWI, I don't think it's deterministic since the 
> different branches are never in causal contact. AG 

It has to be. Without wave collapse the evolution is “just” a unitary 
transformation. It is a vector rotating in some (Hilbert) space. Only the wave 
collapse postulate brings 3p-indeterminacy. In Everett the indeterminacy is 
explained like in arithmetic, or the combinators, with the digital mechanistic 
hypothesis (in the cognitive science, not in physics). 

Bruno



> 
> I mean everyone can't have forgotten quantum indeterminacy when discussing 
> the BHIP, surely?
> 
>  
> 



Re: Black holes and the information paradox

2019-03-12 Thread Lawrence Crowell
On Monday, March 11, 2019 at 8:04:57 PM UTC-6, Bruce wrote:
>
> On Tue, Mar 12, 2019 at 12:43 PM John Clark  > wrote:
>
>> On Mon, Mar 11, 2019 at 8:42 PM Lawrence Crowell <
>> goldenfield...@gmail.com > wrote:
>>
>> > all the radiation emitted is entangled with the black hole, which 
>>> would then mean the entanglement entropy increases beyond the Bekenstein 
>>> bound. 
>>
>>
>>
>> Could nature be trying to tell us that the Bekenstein bound is simply 
>> wrong and spacetime is continuous and can store information at scales 
>> even smaller than the Planck area? After all as far as I know there is no 
>> experimental evidence the Bekenstein bound exists or that spacetime ends 
>> when things get smaller than 10^-35 meters.
>>
>
> Points that I have made many times, here and elsewhere. No one is 
> listening, it would appear. Actually, though, Penrose has worked this out 
> for himself. See "The Road to Reality".
>
> Bruce 
>

I have of course read Penrose's *The Road to Reality*. Towards the end he 
makes a pitch for his R-process that he introduced in the 1980s and made a 
central feature of *The Emperor's New Mind*. The problem is that it is most 
likely a sort of semi-classical phenomenology or effective theory. It is a 
result of ignoring how spacetime and quantum fields transform by the same 
rules. Sure, if you do that you will get the R-process, or the previous idea 
of the super $-matrix by Hawking. 

Quantum information is fundamentally unitless, and as a result it is 
probably the best quantity to focus on as fundamental. Issues of the Planck 
scale, mass units and even the scale-invariant breaking of inflation are 
challenges, for if quantum information is unitless it should then be 
absolutely conformal. So questions are open. Penrose just throws in the 
towel and says this violation just happens. There is no proof against this, 
but in spite of Hossenfelder's admonition against invoking beauty I find 
the R-process to be less than elegant, and if nature were fundamentally this 
way, rather than it being some effective theory, it would be rather 
disappointing.

LC
 

>
> John K Clark
>>
>



Re: Black holes and the information paradox

2019-03-12 Thread Lawrence Crowell
On Monday, March 11, 2019 at 7:43:54 PM UTC-6, John Clark wrote:
>
>
> On Mon, Mar 11, 2019 at 8:42 PM Lawrence Crowell  > wrote:
>
> > all the radiation emitted is entangled with the black hole, which would 
>> then mean the entanglement entropy increases beyond the Bekenstein bound. 
>
>
>
> Could nature be trying to tell us that the Bekenstein bound is simply 
> wrong and spacetime is continuous and can store information at scales 
> even smaller than the Planck area? After all as far as I know there is no 
> experimental evidence the Bekenstein bound exists or that spacetime ends 
> when things get smaller than 10^-35 meters.
>
> John K Clark
>

Warning, this is a bit long, but I hope informative and interesting. John's 
question pertains to the Planck scale and Bekenstein bound. Really the 
issue of quantum information and the firewall is on scales considerably 
larger. I do address some conundrums with the Planck scale towards the end.

As with the analogue of the thermal cavity the entanglement of radiation 
emitted shifts from radiation entangled with the cavity or photon emitting 
hot atoms, to entanglement between photons. Photons previously emitted and 
entangled with atoms, then become entangled with subsequent photons emitted 
by these atoms. It is interesting how entanglement is really all around us, 
but it is mostly not controlled and is an aspect of thermodynamics. Anyway 
this occurrence happens at a time called the Page time, after Don Page who 
first identified this. As this happens when around half the photons are 
emitted, the same happens with black holes: the Page time is reached when 
about half of the black hole's initial mass has been emitted as Hawking 
radiation. The time it takes a black hole (BH) to quantum decay completely is 
proportional to the cube of the mass, which means the black hole has 
emitted half its mass in 7/8ths of its expected duration.
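A back-of-the-envelope check on that 7/8 figure (an illustrative calculation, using only the M^3 lifetime scaling and the ~10^{67}-year solar-mass lifetime quoted below):

# Lifetime scales as M^3, so the time left when the mass has dropped to M/2
# is (1/2)^3 = 1/8 of the total; the Page time is therefore at 7/8 of the lifetime.
page_fraction = 1.0 - 0.5**3
print(page_fraction)                       # 0.875

# For a solar-mass black hole with a ~1e67-year lifetime (the figure used in this post):
lifetime_years = 1.0e67
print(page_fraction * lifetime_years)      # ~8.75e66 years until the Page time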

This means that when a black hole is reduced to half of its original mass 
the bipartite entangled photons with the BH emitted a long time ago, for a 
solar mass black hole some 10^{67} years, are now entangled with not only 
the BH, but with newly emitted photons. This is a big problem. This is 
telling us there is a difficulty in making entanglement entropy fit with 
the Bekenstein bound and that bipartite entanglements are transformed into 
tripartite entanglements. This means quantum unitarity fails. This is not 
something people are willing to abandon so easily, so what AMPS [A. Almheiri, 
D. Marolf, J. Polchinski, J. Sully, "Black holes: complementarity or 
firewalls?", JHEP 02 (2013), arXiv:1207.3123] proposed was that 
instead of losing quantum unitarity maybe the equivalence principle of 
general relativity fails. This means the BH becomes a sort of naked 
singularity at the horizon, called the firewall, where anything that enters 
is just demolished or "burned up" as it would in the interior of a BH.

If quantum mechanics builds up spacetime as entanglements, or equivalently 
if spacetime is an emergent phenomenon of quantum mechanics (QM), then the 
unitarity of QM and the equivalence principle (EP) of general relativity 
(GR) may either be equivalent in some way or share a duality. If 
we think about it, the Einstein field equation 

R_{μν} - ½ R g_{μν} = (8πG/c^4) T_{μν}

tells us that weak gravitation on the left side of the equal sign is equal 
to strongly interacting stuff on the right. In a quantum mechanical setting 
the left hand side is quantum mechanical at extreme energy or the UV, while 
the right hand side is all around us at low or moderate energy or the IR. 
There is then a duality between quantum gravitation at extreme energy vs 
quantum field theory at lower energy. 

The holographic principle of black holes indicates that any system that 
approaches a black hole becomes less localized as seen by an asymptotic 
observer. The optical lensing of spacetime spreads any wave function or for 
that matter a local field amplitude across the near-horizon region. Quantum 
field theory, with its assumption of the Wightman conditions to remove quantum 
nonlocality, may no longer be applicable. These conditions were imposed in part 
to separate nonlocal quantum physics, which at high energy is on a very small 
scale, from the physics one observes with detectors on a larger scale. 

The best thing to come out of superstring theory is Maldacena's 
correspondence between the anti-de Sitter spacetime of dimension N and the 
conformal field theory on the boundary in N - 1 dimensions. This gives me a 
sense that superstring theory has maybe far less to do with TeV scale 
physics and a lot more to do with quantum cosmology. In effect this 
connects a global physics of cosmology in the bulk of an AdS spacetime with 
the local conformal field theory on the boundary with one dimension less. 
This is a quantum spacetime version of the Gauss-Bonnet theorem! If one 
expands the AdS action S = ∫d^4x\sqrt{-g}R with R_{abcd}R^{abcd} as