On 9/28/2013 7:20 PM, Russell Standish wrote:
On Sun, Sep 29, 2013 at 12:47:28PM +1300, LizR wrote:
On 23 September 2013 13:16, Russell Standish <li...@hpcoders.com.au> wrote:

For me, my stopping point is step 8. I do mean to summarise the
intense discussion we had earlier this year on this topic, but that
will require an uninterrupted period of a day or two, just to pull it all
into a comprehensible document.

I'm just now reading a very long paper (more of a short
book, actually) by Scott Aaronson, on the subject of free will, which
is one of those rare works on that topic that is not
gibberish. Suffice it to say that if he is ultimately convincing, he
would get me to stop at step 0 (i.e. COMP is false), but more on that
later when I finish it.

I am still reading this, but I am a little disappointed that, as far as I
can see, he hasn't mentioned Huw Price and John Bell's alternative
formulation of Bell's Inequality, namely that it can be explained using
microscopic time-symmetry. (This is despite mentioning Huw Price in the
acknowledgements.) Maybe I will come across a mention somewhere as I
continue, but I've been reading the section on Bell's Inequality and this
potentially highly fruitful explanation - all the more so in that it
requires no new physics, nor even any new interpretation of existing
physics - doesn't seem to merit a mention. That is a shame, because
without taking account of that potential explanation, any subsequent
reasoning that relies on Bell's Inequality is potentially flawed.

I have just now finished Aaronson's paper. I would thoroughly
recommend the read, and it is definitely a challenge to John Clark's
assertion that only rubbish has ever been written about free will.

However, it is a long paper (more of a short book), so for those for whom
it is TL;DR, I'll try to summarise the paper, where I agree with it,
and, more importantly, where I depart from it.

Aaronson argues that lack of predictability is a necessary part of
free will (though not sufficient), much as I do in my book (where I go
so far as to define FW as "the ability to do something stupid"). He
does so far more eloquently, and with better contact with the
philosophical literature, than I do.

Where he starts to differ from my approach is that he draws a
distinction between ordinary "statistical" uncertainty and what he
calls Knightian uncertainty. To use the concepts of that great philosopher
of our time, Donald Rumsfeld :v), Knightian uncertainty corresponds to the
"unknown unknowns", as compared to the "known unknowns" of
"statistical" uncertainty. Nassim Taleb's "black swan" is a similar
sort of concept.

Aaronson accepts the criticism that ordinary "statistical" uncertainty
is not enough for free will. If I have a choice of three paths to
drive to work, with a certain probability of choosing each one, then
choosing one of the paths on any given morning is not an exercise in
free will. However, ringing work and chucking a sickie that day is an
example of Knightian uncertainty, and is an exercise in free will.
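To make the contrast concrete, here is a toy sketch of my own (the route
names and the numbers are invented for illustration, not taken from
Aaronson's paper). Statistical uncertainty is a single known distribution
you can sample and compute with; Knightian uncertainty is commonly
formalised as a whole set of distributions, for which only bounds can be
stated.

# Toy contrast between the two kinds of uncertainty (illustrative only;
# the routes and numbers are made up, not from Aaronson's paper).
import random

# "Statistical" uncertainty (known unknowns): the distribution over the
# three routes is known, so an observer can compute expectations and odds.
routes = ["highway", "back roads", "coast road"]
known_probs = [0.5, 0.3, 0.2]
todays_route = random.choices(routes, weights=known_probs)[0]

# "Knightian" uncertainty (unknown unknowns): no single distribution is
# available.  One common formalisation is a *set* of distributions, so only
# bounds can be given -- here, for the chance of chucking a sickie instead.
sickie_prob_bounds = (0.0, 0.3)   # all anyone can honestly say
lo, hi = sickie_prob_bounds
print(f"Predicted route: {todays_route}; "
      f"P(sickie) only known to lie in [{lo}, {hi}]")

The point of the sketch is only that in the second case no unique
expectation exists, which is the stronger sense of unpredictability at
issue.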

I accept this distinction between Knightian uncertainty and
statistical uncertainty, but fail to see why the distinction is
relevant to free will. I was never particularly convinced by those who
argue that subjecting your will to a random generator does not make it
free (that is quite true, but irrelevant, as it is the will itself which is
random, not deterministic and subject to an external
generator). Aaronson accepts the criticism without much comment or
explanation why, alas, even though he gives a perfect example in the
form of a "gerbil-powered AI" that cannot have free will.

I agree. I also agree with JC that 'free' is so ill-defined that 'free will' is virtually meaningless. I think the idea that randomness is incompatible with will arises because people think of random as meaning 'anything is possible'. It's clear to me that randomness can be very useful in selecting actions and, since it's hard to get rid of anyway, evolution has undoubtedly kept some. Whether it's inherent quantum randomness or just FAPP randomness from the environment doesn't really matter.


Accepting Knightian uncertainty as necessary, he goes looking for
sources of Knightian uncertainty in the physical universe, and
identifies the initial conditions of the big bang as a source of
"freebits", carriers of Knightian information.

Aaronson seems hung up on the criterion of predictability. That's why he wants 'freebits' to underwrite his Knightian uncertainty. But I don't see that this "unpredictable even by God" standard adds anything to unpredictability due to QM, deterministic chaos, event horizons, Holevo's theorem, ... There are plenty of barriers to perfect predictability.
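For what it's worth, Holevo's theorem is the easiest of those barriers to
state precisely (this is just the standard bound, nothing specific to
Aaronson): the classical information an observer can extract about a
variable X from measuring an ensemble \rho = \sum_x p_x \rho_x of n-qubit
states is bounded by

    I(X:Y) \le S(\rho) - \sum_x p_x S(\rho_x) \le n,

where S is the von Neumann entropy. So even perfect quantum access to a
system does not translate into unlimited classical information about it.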

He also argues that the requirement for Knightian uncertainty rules out
copying a consciousness. As I understand it, the
objection is along the lines of: if I can copy you, then I can use the
copy to make perfect predictions of what you do,

Makes no sense to me, both because there are so many obstacles to prediction, as noted above, and because "you" interact with the environment; that's why "you" are a quasi-classical object. And that means that as soon as you are duplicated, you and your duplicate will start to diverge, and in a very short time one will not be a good predictor of the other.

Brent


thus negating any
free will you might have. He then points to the no-cloning theorem of
quantum mechanics as supporting his freebits picture, in which
consciousnesses cannot be cloned. This, then, would be the basis of
Aaronson rejecting COMP, right at step 0 of Bruno's UDA.
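For completeness, the no-cloning theorem itself is the standard two-line
consequence of unitarity (textbook argument, nothing specific to Aaronson):
suppose a single unitary U cloned arbitrary unknown states,

    U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle
    for all |\psi\rangle.

Since unitaries preserve inner products, applying this to two states
|\psi\rangle and |\phi\rangle gives

    \langle\phi|\psi\rangle = \langle\phi|\psi\rangle^2,

so \langle\phi|\psi\rangle must be 0 or 1: the states are either orthogonal
or identical. No device can clone a genuinely unknown quantum state, which
is the property Aaronson leans on.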

Personally, I'm not convinced. I could believe that someone makes a
very good physical copy of me, one that looks exactly like me, behaves
like I do statistically, and that I would believe to be just as conscious
as me, yet when it comes down to a free choice simply chooses, by random
happenstance, a different course of action than I do. Over time, these
differences cause a divergence such that the two copies are quite
distinct people. Having a copy of me does not make me predictable, and
this consideration is quite independent of whether you think the
no-cloning theorem has anything to do with consciousness.

A final point is that of tracing Knightian uncertainty back to
the big bang. I also think this is unnecessary. As I point out in my
book, the key concept is emergence: that there is more than one
incommensurate level of description of a given system or
situation. At the higher, or semantic, levels there will appear
phenomena that simply have no referents at the lower, syntactic
level. The very appearance of these emergent phenomena is a major source of
Knightian uncertainty. Having full knowledge of the syntactic layer
does not in any way afford the ability to predict the emergence of
these higher-level phenomena (if it did, the phenomena in question would
not be emergent, by definition).
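A standard toy illustration of the two levels (my own example here, not
anything taken from the book or the paper) is Conway's Game of Life: the
syntactic level is the cell-update rule, while "glider" is a semantic-level
concept with no referent in that rule.

# Conway's Game of Life as a two-level toy (illustrative sketch only).
from collections import Counter

def step(live):
    """One Game of Life update on a set of live (x, y) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):          # after 4 updates the same five-cell shape
    state = step(state)     # reappears shifted by (1, 1): the "glider"
                            # has moved, though no individual cell moved
assert state == {(x + 1, y + 1) for x, y in glider}

Nothing in the update rule mentions motion or gliders; the pattern and its
propagation only exist at the higher level of description.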

So all in all, a very interesting and thought provoking paper, but one
that ultimately, I think, will be found wanting.

