On 11-08-2019 01:19, Bruce Kellett wrote:
On Sun, Aug 11, 2019 at 6:54 AM smitra <smi...@zonnet.nl> wrote:

On 10-08-2019 10:20, Bruce Kellett wrote:
On Sat, Aug 10, 2019 at 6:16 PM smitra <smi...@zonnet.nl> wrote:

On 10-08-2019 09:49, Bruce Kellett wrote:

But when you cannot reach, or ignore, some of this larger number of degrees of freedom, you end up with a mixed state. That is how decoherence reduces the pure state to a mixture on measurement -- there are always degrees of freedom that are not recoverable -- those infamous IR photons, for example. The brain does not take all this entanglement with the environment into account, so it is a classical object.


And that step of tracing out the environmental degrees of freedom is where we make a mathematical approximation in order to be able to do practical calculations. But as you have said in this thread, the mathematics we use to describe a system is not necessarily a good physical representation of the system. It's not up to the brain to decide not to take entanglement into account.

No, the brain has no choice. It simply cannot take these environmental dof into account. So on its own reckoning, it is a classical object.

It cannot be "classical" in the way we conventionally define it, as that's a concept that cannot exist in the known universe.

That will be news to my table and chairs.....

What we want to do is extract an object out of an entangled state in a physically correct way, instead of a way that yields negligible errors when computing expectation values of generic macroscopic observables, but is physically incorrect.

OK. What we want is to extract the observed classical universe from a
quantum substrate. Simply denying that this is possible is not an
option. If FAPP is the best that can be achieved, so be it. There is
no point hankering after unobtainable perfection. Progress might be
possible, but simply denying that the classical world exists is silly.

We may describe the brain as a classical object, but that doesn't make it so.

Tell me one practical way in which this makes a difference.

There is no practical difference between a collapse interpretation and the MWI; neither is there a practical difference between a theory that says that all planets beyond the cosmological horizon are made out of green cheese and the standard astrophysical models.

There is, actually. That would be to deny the cosmological hypothesis
that the universe is uniform and isotropic on the large scale. Denying
that would have local consequences.

I do think that sticking to the relevant physics one can learn a great deal more than by invoking irrelevant models. E.g., in thermodynamics we ignore the correlations between the molecules that make the physical state a specially prepared state w.r.t. inverse time evolution. So, ignoring the correlations and pretending that everything is random is good enough if we focus on being able to predict measurement outcomes, but the fact that the state isn't just any random state follows from the fact that entropy would go down under time reversal, while it would increase if the state were truly random.

That depends on whether you think of time reversal as "running the
film backwards", or as reversing the sign of t in your equations.

So, the well-known paradoxes in statistical physics go away when we take into account the way we've oversimplified the physics. In the case of QM exactly the same thing happens when using the density matrix formalism and tracing out the environment: you lose the information needed to describe the inverse time evolution correctly. But unlike in statistical physics, that's not the topic under discussion. The ignored correlations between the degrees of freedom in the brain and the environment do, however, solve a lot of other paradoxes invoked by people who argue that AI can never generate consciousness.
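The information loss under tracing out can be seen in a minimal numpy sketch (my own illustration; the dimensions are arbitrary toy choices): take a small pure system-plus-environment state and trace out the environment. The reduced density matrix is mixed, and by itself it no longer determines the global pure state you would need in order to run the unitary evolution backwards.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random entangled pure state of a 2-level "system" and a 4-level environment,
# stored as a matrix: psi[j, k] = amplitude of |system j>|environment k>.
psi = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
psi /= np.linalg.norm(psi)

# Partial trace over the environment gives the reduced density matrix.
rho_sys = psi @ psi.conj().T

purity = np.trace(rho_sys @ rho_sys).real
print(round(np.trace(rho_sys).real, 6))   # 1.0: a valid state
print(purity)                             # < 1: mixed, although the global state is pure
```

A purity below one means many different global pure states are compatible with the same reduced density matrix, which is exactly the discarded information.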

Let's consider a robot with an electronic brain that runs a well-defined algorithm. Then there exists a notion of what algorithm the brain is running, and we may call this a classical description of the electronic brain. We include in the algorithm the exact computational state. The exact description of the physical state involves all the entanglements of all the atoms in the electronic brain and all the other local degrees of freedom in the environment. If we then extract the computational state, represented as a bitstring, out of this state, then the exact physical state can be written as:

|psi> = |b1>|e1> + |b2>|e2> + |b3>|e3> + ...

where the |bj> are normalized computational states and the |ej> are the unnormalized "environmental" states that include everything except the computational state.
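This decomposition can be written out numerically. A minimal sketch (my own; the dimensions and amplitudes are arbitrary toy choices): store the joint state as a matrix whose row j is the unnormalized environment vector |ej> attached to the computational basis state |bj>; then <ej|ej> gives the weight of each term, and the weights sum to one.

```python
import numpy as np

rng = np.random.default_rng(2)

n_b, n_e = 3, 5   # three computational states |bj>, a 5-dimensional environment
psi = rng.normal(size=(n_b, n_e)) + 1j * rng.normal(size=(n_b, n_e))
psi /= np.linalg.norm(psi)   # |psi> = |b1>|e1> + |b2>|e2> + |b3>|e3>

# Row j of psi is the unnormalized environment state |ej>.
weights = [np.vdot(psi[j], psi[j]).real for j in range(n_b)]   # <ej|ej>

print(weights)                  # weight of each |bj>|ej> term
print(round(sum(weights), 6))   # 1.0: the terms exhaust the full state
```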

But you set up the problem with a well-defined algorithm, so there can
be only one computational state.

Then <ej|ej> is the probability for the system to be in the state |bj>|ej>. So, it also includes the state of the atoms in the brain given whatever computational state the brain is in. Now suppose that the robot is conscious. Then what it will know/feel about itself and its local environment will be contained in the bitstring describing its computational state, but the mapping from computational states to awareness cannot be one-to-one.

Why not? There is only one computational state for the well-defined
algorithm at any particular moment -- so only one "awareness".

Awareness does not specify the exact algorithm, and there is a vast number of different algorithms in the superposition.

Saibal


Whatever we are aware of won't precisely specify the exact computational state defined by what all the neurons are doing at some time. This means that there exists a large number of different |bj>'s that generate the exact same awareness for the robot.

The conclusion does not follow.

Suppose that the robot is subjectively aware that it prepared the spin of an electron to be polarized in the positive x-direction, and knows that I measured the spin. Then, before I let the robot know the result of the measurement, the robot will find itself in the state:

|psi> = |up> + |down>

Not if the robot knows that you measure in the x-direction. Otherwise,
it is just classical ignorance.

where

|up> = the sum of those |bj>|ej> whose |ej> contains Saibal finding spin up,

|down> = the sum of those |bj>|ej> whose |ej> contains Saibal finding spin down.

The robot will thus be in a superposition of two classes of worlds where the result of the spin measurement is different.
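The grouping into two classes of worlds can be sketched in a toy model (my own; the six branches and their records are arbitrary, and I assume the |ej> are mutually orthogonal so each branch contributes its squared amplitude to its class):

```python
import numpy as np

rng = np.random.default_rng(3)

# Six branches |bj>|ej> with mutually orthogonal |ej>: each branch is then
# characterised by an amplitude and by what its environment records.
amps = rng.normal(size=6) + 1j * rng.normal(size=6)
amps /= np.linalg.norm(amps)
record = ['up', 'down', 'up', 'up', 'down', 'down']

# |up> = (unnormalized) sum of branches whose environment records spin up;
# its squared norm is the total weight of that class of worlds.
p_up = sum(abs(a) ** 2 for a, r in zip(amps, record) if r == 'up')
p_down = sum(abs(a) ** 2 for a, r in zip(amps, record) if r == 'down')

print(p_up, p_down)
print(round(p_up + p_down, 6))   # 1.0: the two classes partition the branches
```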

This conclusion does not follow. Since there is only one computational
state consistent with the algorithm, if the robot does not know the
result of your y-direction measurement, there is nothing in its brain
or computational state corresponding to your result. So it is in the
same computational state as previously (or the temporal development of
that). When it learns your result, or becomes entangled in some way
with your result, it then enters the single computational state
representing inclusion of knowledge of your result. You are looking at
the "relative state" of your result from the wrong direction.

Bruce

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAFxXSLRqLHqChvLetkyvM1WXBOJ0qzSJdOwu451Q7Utmsrrn%2Bg%40mail.gmail.com.



To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/5ccda246e9dbb1c96b8da71433989638%40zonnet.nl.
