Re: Compatibilism

2010-11-25 Thread Rex Allen
On Mon, Nov 22, 2010 at 11:40 AM, Bruno Marchal marc...@ulb.ac.be wrote:
 On 21 Nov 2010, at 19:47, Rex Allen wrote:
 On Fri, Nov 19, 2010 at 8:32 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 But your reasoning does not apply to free will in the sense I gave: the
 ability to choose among alternatives that *I* cannot predict in advance
 (so that *from my personal perspective* it is not entirely due to reason
 nor due to randomness).

 So that is a good description of the subjective feeling of free will.

 I was not describing the subjective feeling of free will, which is another
 matter, and which may or may not accompany the experience of free will.
 Free-will is the ability to choose among alternatives that *I* cannot
 genuinely predict in advance, so that reason fails, and yet it is not random.

The ability to choose among unpredictable alternatives?  What???

In no way does “ability to choose from unpredictable alternatives”
match my conception of free will.  Nor would you find much agreement
amongst the general populace.

You’re just redefining “free will” in a way that allows you to claim
that it exists but which bears little relation to the original
conception.

In a deterministic universe, there are no alternatives.  Things can
only unfold one way.  Our being unable to predict that unfolding is
neither here nor there.

Again, ignorance is not free will.  Ignorance is just ignorance.
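Rex's point here, that an observer's inability to predict a deterministic process does not create alternatives, can be illustrated with a small Python sketch (a toy illustration only; the `unfolding` function and seed are hypothetical, not anything from the thread):

```python
import random

def unfolding(seed, steps=5):
    """A fully deterministic process: its entire history is fixed by the seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(steps)]

# There are no alternatives: replaying the same seed unfolds the same way.
assert unfolding(seed=42) == unfolding(seed=42)

# An observer who does not know the seed cannot predict the sequence,
# but that ignorance changes nothing about how the process unfolds.
```

The sequence looks unpredictable to anyone ignorant of the seed, yet it could never have come out any other way.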


 But if you question most people closely, this isn't what they mean by
 “free will”.

 You have interpreted what I was describing too quickly. Free-will as I
 define it is not the subjective feeling of having free-will. It is really
 due to the fact that the choice I will make is not based on reason, nor on
 randomness from my (real) perspective (which exists).

I didn’t say that the options were choices based on “reason or randomness”.

I said:

“Either there is *a reason* for what I choose to do, or there isn't.”

By “a reason” I mean “a cause”.

I don’t mean “reason” in the sense of rationality.


 Subjective does not mean nonexistent. Free-will is subjective, or better,
 subject-related, but it exists and has observable consequences, like
 purposeful murdering, existence of jails, etc. It is the root of moral
 consciousness, or conscience.

How does my inability to predict my choices or alternatives in advance
serve as the root for moral conscience?



 They mean the ability to make choices that aren't random, but which
 also aren't caused.

 And this becomes, with the approach I gave: the ability to make choices
 that aren't random, but for which they are ignorant of the cause. And I
 insist: they might even be ignorant that they are ignorant of the cause.
 They will say “because I want to do that” or things like that.

The vast majority of the populace certainly does not equate free will
with ignorance of causes.


 I disagree that many people would accept your definition, because it would
 entail (even for religious rationalist believers) that free-will does not
 exist, and the debate would have been closed long ago.

If you ask “most people”, they will not agree that human choice is
random, and they will not agree that human choice can be explained by
causal forces.

Rather, they claim that human choice is something not random *and* not
caused.  Though they can’t get any more specific than that.

The debate isn’t settled because they won’t admit that there is no
third option.  They feel free, therefore they *believe* that they must
actually be free.  Free from randomness and free from causal forces.

“I feel free, therefore I must be free.”

That reasoning is what keeps the free will debate alive.


 They have the further belief that since the choices aren't random or
 caused, the chooser bears ultimate responsibility for them.

 They are right. That is what the materialist eliminativist will deny, and
 eventually that is why they will deny any meaning to notions like person,
 free-will, responsibility, or even consciousness.

How does ignorance of what choice you will make lead to ultimate
responsibility for that choice?

I deny the possibility of ultimate responsibility, and I’m not an
eliminative materialist.

But I also deny that mechanism can account for consciousness (except
by fiat declaration that it does).

As to “person”, I take a deflationary view of the term.  There’s less
to it than meets the eye.


 This further belief doesn't seem to follow from any particular chain
 of reasoning.  It's just another belief that this kind of person has.

 Because as a person she is conscious and feels a reasonable sense of
 responsibility, which is genuine and legitimate from her first-person
 perspective (and from the perspective of a machine having a similar level of
 complexity).

This comes back to my earlier point.  She “feels” a sense of
responsibility and therefore believes that she is genuinely and
legitimately responsible.

But the fact that she feels responsibility in no way means that she
actually is.

Re: Compatibilism

2010-11-25 Thread Jason Resch
On Thu, Nov 25, 2010 at 3:38 PM, Rex Allen rexallen31...@gmail.com wrote:


 But I also deny that mechanism can account for consciousness (except
 by fiat declaration that it does).


Rex,

I am interested in your reasoning against mechanism.  Assume there were a
mechanical brain composed of mechanical neurons, which contained the same
information as a human brain and processed it in the same way.  The
behavior of the two brains is identical in all respects, since the
mechanical neurons react identically to their biological counterparts.
 However, for some unknown reason, the mechanical brain has no inner life or
conscious experience.

If you were to ask this mind if it is conscious it would have to say yes,
but since it is not conscious, this would be a lie.  However, the mechanical
mind would not believe itself to be lying.  Its neural activity would match
the activity of a biological brain telling the truth.  It not only is lying
in its claim of consciousness, but would also be wrong in its belief that it
is conscious.  It would be wrong in believing it sees red when you hold a
ripe tomato in front of it.  My question is what could possibly make the
mechanical mind wrong in these beliefs when the biological mind is right?

The mechanical mind contains all the same information as the biological one;
the information received from the red-sensitive cones in its eyes can be
physically traced as it moves through the mechanical mind and leads to it
uttering that it sees a tomato.  How could this identical informational
content be invalid, wrong, or false in one representation of a mind, but true
in another?

Information can take many physical forms.  The same digital photograph can
be stored as differently reflective areas in a CD or DVD, as charges of
electrons in Flash memory, as a magnetic encoding on a hard drive, as holes
in a punch card, and yet the file will look the same regardless of how it is
physically stored.  Likewise the file can be sent in an e-mail, which may be
transmitted as fields over an electrical wire, laser pulses in a glass fiber,
or radio waves in the air; the physical implementation is irrelevant.  Is the
same not true for information contained within a conscious mind?
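Jason's substrate-independence point can be made concrete with a short Python sketch (the byte string and variable names here are hypothetical, chosen only for illustration): the same bytes survive several different encodings, so the content, and its hash, is unchanged by the physical representation.

```python
import base64
import binascii
import hashlib

# Stand-in for the photograph's raw bytes (a hypothetical placeholder).
photo = b"\x89PNG...pretend-image-bytes"

# Three different "physical" representations of the same file:
as_base64 = base64.b64encode(photo)           # e-mail attachment
as_hex = binascii.hexlify(photo)              # hex dump on a disk
as_bits = "".join(f"{b:08b}" for b in photo)  # holes in a punch card

# Decoding each representation recovers byte-identical content...
assert base64.b64decode(as_base64) == photo
assert binascii.unhexlify(as_hex) == photo
assert bytes(int(as_bits[i:i + 8], 2) for i in range(0, len(as_bits), 8)) == photo

# ...so the content hash, the file's "identity", is the same throughout.
digest = hashlib.sha256(photo).hexdigest()
```

Whatever medium carries the bytes, decoding yields the same information and therefore the same digest.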

Jason

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: advice needed for Star Trek talk

2010-11-25 Thread Jason Resch
On Wed, Nov 24, 2010 at 1:50 PM, ronaldheld ronaldh...@gmail.com wrote:

 Jason:
  I see what you are saying at our level of understanding; I do not
 know how to present that in a technically convincing manner.
  Ronald


Which message in particular do you think is difficult to
present convincingly?  Tegmark's idea that everything is real, or the
suggestion that computer simulation might be a legitimate tool for
exploration?

Jason




Re: Compatibilism

2010-11-25 Thread Rex Allen
On Tue, Nov 23, 2010 at 4:12 PM, 1Z peterdjo...@yahoo.com wrote:
 On Nov 21, 6:43 pm, Rex Allen rexallen31...@gmail.com wrote:
 On Fri, Nov 19, 2010 at 7:36 AM, 1Z peterdjo...@yahoo.com wrote:

 No-one is. They are just valid descriptions. There is no argument
 to the effect that logic is causal or it is nothing. It is not
 the case that causal explanation is the only form of explanation.

 “Valid descriptions” don’t account for why things are this way rather
 than some other way.


 If a higher level description is a  valid description of
 some microphysics, then it will be an explanation of
 why the result happened given the initial conditions

 It won't solve the trilemma, but neither will
 microphysical causality

So Agrippa's Trilemma revolves around the question of how we can
justify our beliefs.

It seems to me that an entirely acceptable solution is just to accept
that we can't justify our beliefs.


 As I said before, materialism could conceivably explain human ability
 and behavior, but in my opinion runs aground at human consciousness.
 Therefore, I doubt that humans are a complex sort of robot.

 Is human consciousness causally effective?

I don't believe so, no.

And claiming that consciousness is itself caused just runs into
infinite regress, as you then need to explain what causes the cause of
conscious experience, and so on.

Therefore, taking the same approach as with Agrippa's Trilemma, it
seems best to just accept that there is no cause for conscious
experience either.

Is it a useful answer?  Maybe not.  But where does it say that all
answers have to be useful?

Besides, what causes you to care about usefulness?  Evolution.

What causes evolution?  Initial conditions and causal laws.

What causes initial conditions and causal laws?

And so on.  We've been through this before I think.




Re: Compatibilism

2010-11-25 Thread Rex Allen
On Tue, Nov 23, 2010 at 4:20 PM, 1Z peterdjo...@yahoo.com wrote:


 On Nov 21, 6:35 pm, Rex Allen rexallen31...@gmail.com wrote:
 On Fri, Nov 19, 2010 at 7:28 AM, 1Z peterdjo...@yahoo.com wrote:
 On Nov 18, 6:31 am, Rex Allen rexallen31...@gmail.com wrote:
 If there is a reason, then the reason determined the choice.  No free will.

 Unless you determined the reason.

 How would you do that?  By what means?  According to what rule?  Using
 what process?

 If you determined the reason, what determined you?  Why are you in the
 particular state you're in?

 If there exists some rule that translates your specific state into
 some particular choice, then there's still no free will.  The rule
 determined the choice.

 And if there isn't...you have an action that is reasoned yet
 undetermined, as required

If there is no rule that translates your specific state into some
particular choice, then what is it that connects the state to the choice?

The state occurs.  Then the choice occurs.  But nothing connects them?
That is accidentalism, isn't it?
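The dilemma Rex is pressing, either a rule determines the choice or nothing connects the state to the choice, can be sketched in Python (both function names and the toy "tea" example are hypothetical, not from the thread):

```python
import random

def ruled_choice(state):
    # Horn 1: a rule translates the state into the choice.
    # The rule determines the outcome: same state, same choice.
    return "tea" if state["thirsty"] else "nothing"

def unruled_choice(state):
    # Horn 2: no rule connects state to choice; the outcome
    # ignores the state entirely, which is accidentalism.
    return random.choice(["tea", "nothing"])

state = {"thirsty": True}
assert ruled_choice(state) == ruled_choice(state)  # fixed by the rule
# unruled_choice(state) may differ on every call; the state does not fix it.
```

Neither horn leaves room for a choice that is both non-random and undetermined.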



 I.1.v Libertarianism — A Prima Facie case for free will

 As for the rest of it, I read it, but didn't find it convincing on any level.

 RIG + SIS = Free Will

 A random process coupled to a deterministic process isn't free will.
 It's just a random process coupled to a deterministic process.

 If you insist that FW is a Tertium Datur that is fundamentally
 different from both determinism and randomness, then you
 won't accept a mixture. However, I don't think Tertium Datur
 is a good definition of FW since it is too question-begging.

It seems to me that when people discuss free will, they are always
really interested in ultimate responsibility for actions.

Any defense of free will must allow for ultimate responsibility for actions.

I say that ultimate responsibility is impossible, because neither
caused actions nor random actions nor any combination of cause and
randomness seems to result in ultimate responsibility.

Ultimate responsibility means that reward and punishment are justified
for acts *even after* setting aside any utilitarian considerations.

So *if* it were possible to be ultimately responsible for a bad act,
we wouldn't need to justify the offender's punishment in terms of
deterring future bad behavior by the offender or others.

We wouldn't need to justify the offender's punishment in terms of
rehabilitating the offender so that they don't commit similar bad acts
in the future.

We wouldn't need to justify the offender's punishment in terms of
motivating better behavior by them or others in the future.

We wouldn't need to justify the offender's punishment in terms of
compensating their victims or ensuring social stability.

Instead, we could justify the offender's punishment purely in terms of
their ultimate responsibility for it.

Using their free will, they chose to commit the bad act, and therefore
they deserve the punishment.  End of story.

So, given that the punishment would no longer need to be justified in
terms of anything other than ultimate responsibility, how would one
justify limits on the punishment's severity?
