Quoting meekerdb <meeke...@verizon.net>:
On 9/29/2013 6:26 AM, smi...@zonnet.nl wrote:
Quoting meekerdb <meeke...@verizon.net>:
On 9/28/2013 7:20 PM, Russell Standish wrote:
On Sun, Sep 29, 2013 at 12:47:28PM +1300, LizR wrote:
On 23 September 2013 13:16, Russell Standish
<li...@hpcoders.com.au> wrote:
For me, my stopping point is step 8. I do mean to summarise the
intense discussion we had earlier this year on this topic, but that
will require an uninterrupted period of a day or two, just to
pull it all
into a comprehensible document.
I'm just now reading a very long paper (more of a short
book, actually) by Scott Aaronson, on the subject of free will, which
is one of those rare works in that topic that is not
gibberish. Suffice it to say, that if he is ultimately convincing, he
would get me to stop at step 0 (ie COMP is false), but more on that
later when I finish it.
I am still reading this, but I am a little disappointed that, as far as I
can see, he hasn't mentioned Huw Price and John Bell's alternative
take on Bell's Inequality, namely that its violations can be explained
using microscopic time-symmetry. (This is despite his mentioning Huw
Price in the acknowledgements.) Maybe I will come across a mention as I
continue, but I've been reading the section on Bell's Inequality and
this potentially highly fruitful explanation - all the more so in that
it requires no new physics, or even any new interpretation of existing
physics - doesn't seem to merit a mention. That is a shame, because
without taking account of that potential explanation, any subsequent
reasoning that relies on Bell's Inequality is potentially flawed.
I have just now finished Aaronson's paper. I would thoroughly
recommend the read, and it is definitely a challenge to John Clark's
assertion that only rubbish has ever been written about free will.
However it is a long paper (more of a short book), so for those of us
for whom it is TL;DR, I'll try to summarise the paper, where I agree
with it, and more importantly where I depart from it.
Aaronson argues that lack of predictability is a necessary part of
free will (though not sufficient), much as I do in my book (where I go
so far as to define FW as "the ability to do something stupid"). He
does so far more eloquently, and with better contact to philosophical
literature than I do.
Where he starts to differ from my approach is that he draws a
distinction between ordinary "statistical" uncertainty and what he
calls Knightian uncertainty. To use the concepts of the great philosopher
of our time, Donald Rumsfeld :v), Knightian uncertainty corresponds to the
"unknown unknowns", as compared to the "known unknowns" of
"statistical" uncertainty. Nassim Taleb's "black swan" is a similar
sort of concept.
Aaronson accepts the criticism that ordinary "statistical" uncertainty
is not enough for free will. If I have a choice of three paths to
drive to work, with a certain probability of choosing each one, then
choosing one of the paths on any given morning is not an exercise in
free will. However, ringing work and chucking a sickie that day is an
example of Knightian uncertainty, and is an exercise in free will.
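The distinction can be caricatured in a few lines of Python (my own toy
sketch, not Aaronson's formalism): statistical uncertainty is a known
distribution over an enumerated option set, while a Knightian surprise is
an action the model never assigned a probability to at all.

```python
import random

# Statistical ("known unknowns"): a fixed, known distribution
# over an enumerated set of options.
paths = {"highway": 0.5, "back roads": 0.3, "scenic route": 0.2}

def statistical_choice(options, rng):
    """Sample one option according to its known probability."""
    names = list(options)
    weights = [options[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
choice = statistical_choice(paths, rng)
assert choice in paths  # whatever comes out was already in the model

# Knightian ("unknown unknowns"): the realised action need not appear
# in the model at all, so no probability was ever assigned to it.
actual_action = "chuck a sickie"   # the example action from the text
assert actual_action not in paths  # the model cannot even score it
```

The point of the sketch is only that a probability model is closed over
its own option set; an action from outside that set is not "improbable",
it is simply unrepresented.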
I accept this distinction between Knightian uncertainty and
statistical uncertainty, but fail to see why this distinction is
relevant to free will. I was never particularly convinced by those who
argue that subjecting your will to a random generator does not make it
free (that is quite true, but irrelevant, as it is the will which is
random, not deterministic and subject to an external
generator). Aaronson accepts the criticism without much comment or
explanation of why, alas, even though he gives a perfect example in the
form of a "gerbil-powered AI" that cannot have free will.
I agree. I also agree with JC that the 'free' is so ill defined
that 'free will' is virtually meaningless. I think the idea that
randomness is incompatible with will arises because people think of
random as meaning 'anything possible'. It's clear to me that
randomness can be very useful in selecting actions and, since it's
hard to get rid of anyway, evolution has undoubtedly kept some.
Whether it's inherent quantum randomness or just FAPP randomness
from the environment doesn't really matter.
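As a toy illustration of randomness being useful in selecting actions (my
example, not Brent's): an epsilon-greedy rule, a standard trick from
reinforcement learning, deliberately keeps a little randomness so that
apparently inferior actions still get re-examined.

```python
import random

def epsilon_greedy(values, epsilon, rng):
    """With probability epsilon explore a random action,
    otherwise exploit the best-known one."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))                    # explore
    return max(range(len(values)), key=values.__getitem__)   # exploit

# Estimated payoffs of three actions; without the occasional random
# pick, actions 0 and 1 would never be tried again.
estimates = [0.1, 0.4, 0.9]
rng = random.Random(0)
picks = [epsilon_greedy(estimates, 0.1, rng) for _ in range(1000)]

assert picks.count(2) > 800      # mostly exploits the best action
assert set(picks) == {0, 1, 2}   # but randomness still samples the rest
```

Whether the randomness is quantum or merely FAPP pseudo-randomness makes
no difference to this selection mechanism, which is Brent's point.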
Accepting Knightian uncertainty as necessary, he goes looking for
sources of Knightian uncertainty in the physical universe, and
identifies the initial conditions of the big bang as a source of
"freebits" - carriers of Knightian information.
Aaronson seems hung up on the criterion of predictability. That's
why he wants 'freebits' to underwrite his Knightian uncertainty.
But I don't see that this "unpredictable even by God" standard adds
anything to unpredictable because of QM, because of deterministic
chaos, because of event horizons, Holevo's theorem,... There are
plenty of barriers to perfect predictability.
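Deterministic chaos, one of the barriers just listed, is easy to
demonstrate with the standard logistic map (my example, not from the
post): a perfectly deterministic rule plus an immeasurably small error in
the initial condition destroys prediction within a few dozen steps.

```python
# The logistic map x -> r*x*(1-x) at r=4 is fully deterministic,
# yet errors in the initial condition roughly double every iteration.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12   # two "copies" differing immeasurably
max_gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# A 1e-12 discrepancy has grown to order one: the "copy" is
# useless as a predictor of the original.
assert max_gap > 0.5
```

A 1e-12 error doubling each step exceeds 0.5 after roughly 40 iterations,
so no finite-precision copy can track the original for long.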
He also argues that the requirement for Knightian uncertainty rules out
copying a consciousness. As I understand it, the objection is along the
lines of: if I can copy you, then I can use the copy to make perfect
predictions of what you will do.
Makes no sense to me, both because there are so many obstacles to
prediction, as noted above, but also because "you" interact with the
environment; that's why "you" are a quasi-classical object. And
that means as soon as you are duplicated you and your duplicate
will start to diverge and in a very short time one will not be a
good predictor of the other.
Brent
It might actually be a good predictor for quite a long period of
time (minutes or even longer). Using functional MRI one can predict
what seems to be a totally random choice ten seconds in advance.
Also, you can run the copy inside a virtual environment and then the
copies will never diverge.
?? I don't think so. Insofar as they are classical objects they
depend on decoherence to remain classical and deterministic. But
that means they interact with the environment. So the environments
would also have to be identical at the quantum level. But then how
would you interact with them in order to measure their behavior? So I
think the virtual environment can only give a small extension in
predictability.
Brent
It doesn't follow that the environments have to be identical at the
quantum level, as that would imply that one cannot build a reliable
classical computer. The whole point of computation is that you are able
to take as input only what is relevant from the environment, so that the
outcome of the computation is independent of other factors.
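That point can be sketched concretely (a toy model of my own, not from
the thread): a digital gate thresholds its noisy analog input, so two
machines sitting in different noise environments still compute
bit-for-bit identical results.

```python
import random

def to_bit(voltage, threshold=1.5):
    """Digital abstraction: only 'above or below threshold' is taken
    as input; the exact analog value is discarded as irrelevant."""
    return 1 if voltage > threshold else 0

def noisy_voltages(bits, rng, noise=0.3):
    """Ideal levels 0V / 3V, perturbed by environment-specific noise."""
    return [3.0 * b + rng.uniform(-noise, noise) for b in bits]

message = [1, 0, 1, 1, 0, 0, 1]
env_a = random.Random(1)   # two different environments,
env_b = random.Random(2)   # hence different noise histories

bits_a = [to_bit(v) for v in noisy_voltages(message, env_a)]
bits_b = [to_bit(v) for v in noisy_voltages(message, env_b)]

# The analog inputs differ in every sample, yet the computed bits --
# the only relevant part of the input -- are identical.
assert bits_a == bits_b == message
```

The noise margin (1.2 V here) is what makes the computation independent
of environmental detail, which is why reliable classical computers do
not need quantum-identical surroundings.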
The brain is actually very good at this; e.g., the visual system
computes what things would look like under standard lighting conditions
even if the actual conditions are totally different. So, two copies in
two almost
identical rooms looking at a white sheet of paper but illuminated by
different lights may not even perceive the white papers differently.
They would have to take a picture with their digital cameras set to the
same white-balance setting before they would start to diverge: one
picture would look yellow, say, while the other would look white.
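Saibal's white-paper example can be caricatured computationally (a
deliberately crude "discount the illuminant" model of colour constancy;
the RGB values are made up): the two copies' percepts agree, while
cameras locked to one white-balance setting record different colours.

```python
def perceive(raw_rgb, illuminant_rgb):
    """Toy colour constancy: divide out the estimated illuminant,
    so the percept depends on surface reflectance only."""
    return tuple(round(c / i, 3) for c, i in zip(raw_rgb, illuminant_rgb))

white_paper = (1.0, 1.0, 1.0)   # reflectance of the sheet
daylight = (1.0, 1.0, 1.0)      # neutral illuminant
tungsten = (1.0, 0.8, 0.5)      # yellowish illuminant (invented values)

raw_day = tuple(r * l for r, l in zip(white_paper, daylight))
raw_tung = tuple(r * l for r, l in zip(white_paper, tungsten))

# Each copy's visual system discounts its own illuminant, so both
# perceive the same white paper despite different raw inputs...
assert perceive(raw_day, daylight) == perceive(raw_tung, tungsten)

# ...but cameras fixed to a single white-balance setting diverge:
fixed_wb = daylight
assert perceive(raw_day, fixed_wb) != perceive(raw_tung, fixed_wb)
```

So the copies' percepts stay synchronised precisely because the relevant
computed quantity (reflectance) is insensitive to the environmental
difference, while the raw data are not.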
Saibal
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.