On Oct 5, 12:23 am, meekerdb <meeke...@verizon.net> wrote:
> On 10/4/2011 8:14 PM, Craig Weinberg wrote:
>
> > On Oct 4, 8:46 pm, meekerdb<meeke...@verizon.net>  wrote:
> >> On 10/4/2011 5:15 PM, Craig Weinberg wrote:
>
> >>> On Oct 4, 2:59 pm, meekerdb<meeke...@verizon.net>   wrote:
> >>>> This goes by the name "causal completeness"; the idea that the 3-p 
> >>>> observable state at t
> >>>> is sufficient to predict the state at t+dt.  Craig wants to add to this 
> >>>> that there is
> >>>> additional information which is not 3-p observable and which makes a 
> >>>> difference, so that
> >>>> the state at t+dt depends not just on the 3-p observables at t, but also 
> >>>> on some
> >>>> additional "sensorimotive" variables.  If you assume these variables are 
> >>>> not independent
> >>>> of the 3-p observables, then this is just a panpsychic version of 
> >>>> consciousness supervening
> >>>> on the 3-p states.  They are redundant in the informational sense.   If 
> >>>> you assume they
> >>>> are independent of the 3-p variables and yet make a difference in the 
> >>>> time evolution of
> >>>> the state then it means the predictions based on the 3-p observables 
> >>>> will fail, i.e. the
> >>>> laws of physics and chemistry will be violated.
> >>> Why would they have to be either completely dependent or independent?
> >> Did I use the word "completely"?
> > You're reducing the possibilities to two mutually exclusive impossible
> > options, so if 'completely' is not implied then you aren't really
> > saying anything.
>
> I wrote "not independent" and "independent".  Those are mutually exclusive in 
> any logic I
> know of.  But "not independent" is not the same as "completely dependent".  
> Try reading
> what is written.

I did read what you wrote. You said we only have two options, either
1p and 3p are independent or not independent. I'm countering that by
saying that they are neither completely independent nor completely dependent, so
there is no reason to go forward with the assumption that you have to
pick one of your two impossible conclusions.
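
To put the same point in something closer to your notation (the symbols
below are only illustrative; neither of us has actually defined them):
let o_t be the 3-p observable state at time t, m_t the hypothesized
sensorimotive variables, and s the full state.

% illustrative sketch only: o_t = 3-p observables, m_t = hypothesized
% sensorimotive (1-p) variables, s = full state, I = mutual information,
% H = entropy
\begin{align*}
\text{causal completeness:}\quad & s_{t+dt} = F(o_t)\\
\text{full dependence } (m_t = g(o_t)):\quad & s_{t+dt} = F\bigl(o_t, g(o_t)\bigr) = \tilde F(o_t)\\
\text{full independence:}\quad & s_{t+dt} = F(o_t, m_t), \qquad I(o_t; m_t) = 0\\
\text{partial dependence:}\quad & s_{t+dt} = F(o_t, m_t), \qquad 0 < I(o_t; m_t) < H(m_t)
\end{align*}

In the second line m_t is informationally redundant (your supervenience
case); in the third, prediction from o_t alone fails (your violation
case); in the fourth, o_t constrains the next state without fixing it.
That fourth case, correlated with but not a function of the 3-p
observables, is the middle ground I mean.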

>
>
>
> >>> I've given several examples demonstrating how we routinely exercise
> >>> voluntary control over parts of our minds, bodies, and environment
> >>> while being involuntarily controlled by those same influences, often
> >>> at the same time. This isn't a theory, this is the
> >>> raw data set.
> >> No it's not.  In your examples of voluntary control you don't know what 
> >> your brain is
> >> doing.  So you can't know whether your "voluntary" action was entirely 
> >> caused by physical
> >> precursors or whether there was some effect from libertarian free-will.
> > What difference does it make what your brain is doing to be able to
> > say that you are voluntarily controlling the words that you type here?
>
> >>> If it were the case that the 3p and 1p were completely independent,
> >>> then you would have ghosts jumping around into aluminum cans and
> >>> walking around singing, and if they were completely dependent then
> >>> there would be no point in being able to differentiate between
> >>> voluntary and involuntary control of our mind, body, and environment.
> >> Exactly the point of compatibilist free-will.
> > What does that label add to this conversation?
>
> It makes the discussion precise, instead of wandering around in analogies 
> and metaphors.

I think that metaphors reveal the truth by letting the thinker make
sense of it for themselves, while labels are intended to intimidate and
prejudice the thinker so as to conceal the truth.

>
>
>
> >>> Such an illusory distinction would not only be redundant but it would
> >>> have no ontological basis to even be able to come into being or be
> >>> conceivable. It would be like an elephant growing a TV set out of its
> >>> trunk to distract it from being an elephant.
> >> Or pulling another meaningless example out of the nether regions.
> > Why meaningless? I'm pointing out that the illusion of free will in a
> > deterministic universe would be not merely puzzling but fantastically
> > absurd. Your criticism is arbitrary.
>
> You're "pointing out" the very thing that is in dispute.  Your assertion that 
> is absurd is
> not a substitute for saying how it could be tested and found false.

I'm stating that, logically, to think that awareness would or could
exist in a deterministic universe is absurd. Since we know for a fact
that awareness exists but we don't know that the universe is
deterministic, why do you find my position to be the unfalsifiable
one?

>
>
>
> >>> Since neither of those two cases is possible, I propose, as I have
> >>> repeatedly proposed, that the 3p and 1p are in fact part of the same
> >>> essential reality in which they overlap, but that they each extend in
> >>> different topological directions;
> >> What's a topological direction?
> > matter elaborates discretely across space, energy elaborates
> > cumulatively through time.
>
> A creative use of "elaborates"....does not parse.

ok, matter and energy 'appear to us as being involved in a consistent
range and variety of persistent forms and repeating and novel
processes'

>
>
>
> >>> specifically, 3p into matter, public
> >>> space, electromagnetism, entropy, and relativity, and 1p into energy,
> >>> private time, sensorimotive, significance, and perception.
> >> "3p overlaps into entropy"!?  Reads like gibberish to me.
> > 3-p doesn't overlap entropy, 3-p is entropic. 1-p is syntropic. The
> > overlap is the 'here and now'. I'm not sure that it matters what I say
> > though, you're mainly just auditing my responses for technicalities so
> > that you can get a feeling of 'winning' a debate. It's a sensorimotive
> > circuit. A feeling that you are seeking which requires a particular
> > kind of experience to satisfy it. If I could offer you a drug instead
> > that would stimulate the precise neural pathways involved in feeling
> > that you had proved me wrong in an objective way, would that be
> > satisfying to you? Would there be no difference in being right versus
> > having your physical precursors to feeling right get tweaked? Isn't
> > that what you are saying, that in fact this discussion is nothing but
> > brain drugs with no free will determining our opinions? Isn't being
> > right or wrong just a matter of biochemistry?
>
> No, it's a matter of passing an empirical test.

How is an empirical test not a matter of biochemistry? Can I not
induce the feeling that something has passed an empirical test in any
person or group of people with the right neurological agents?

>
>
>
> >>> No laws of physics are broken by consciousness, but it is very
> >>> confusing because our only example of consciousness is human
> >>> consciousness, which is a multi-trillion cell awareness.
> >> Exactly what I said. In fact one's only example of consciousness is their 
> >> own.  The
> >> consciousness of other humans is an inference.
> > I agree. Although I would qualify the inference. It's more of an
> > educated inference. I'm making a different point with it though. I'm
> > saying there is a problem with our default assumptions about micro
> > brain mechanisms correlating with macro psychological experiences.
>
> Fine.  Think of a test that would prove the competing theory wrong.

What's the competing theory? "Someday we will find a connection?"

>
>
>
> >>> The trick is
> >>> to realize that you cannot directly correlate our experience of
> >>> consciousness with the 3-p cellular phenomenology, but to only
> >>> correlate it with the 3-p behavior of the brain as a whole.
> >> That's the experimental question, and you don't know the answer.
> > I don't claim to have the answer, but I have a hypothesis, which has
> > to be understood using this way of looking at the mind and brain.
>
> >>> That's the
> >>> starting point. If you are going to try to understand what a movie is
> >>> about, you have to look at the whole images of the movie, and not
> >>> focus on the pixels of the screen or the mechanics of pixel
> >>> illumination to guide your interpretation. There is no human
> >>> consciousness at that low level. There may be sensorimotive 1-p
> >>> phenomenology there, and I think that there is, but we can't prove it
> >>> now. What we can prove to be there in 3-p would only relate to that
> >>> low-level 1-p, which is unknown to us.
> >>> My proposition is that our 1-p consciousness builds from lower level 1-
> >>> p awareness and higher level 1-p semantic environmental influences,
> >>> like cultural ideas, family traditions, etc.
> >> But that is entirely untestable since we have no access to those 1-p 
> >> consciousnesses.
> >> Cultural ideas, family traditions are 3-p observables.
> > We have access to our own 1-p consciousness. What else do we need?
>
> We need to show that it is not entirely determined by the physical 
> evolution of the brain.

Wouldn't we first need a plausible mechanism by which physical
evolution in the brain leads to 1-p awareness? It's not like growing
sharper teeth; there's nothing that can just be quantitatively
augmented or diminished to suddenly make consciousness happen in
something that has no possibility of being conscious. The possibility
of consciousness in the first place is the mystery that materialism
and determinism have to address; it's not that the fact of
consciousness needs to account for itself in physical terms.

>
> > Cultural ideas and family traditions are not 3-p observable - they
> > have no melting point or specific gravity, they occupy no location -
> > they must be inferred by 1-p interpretation/participation/consensus.
>
> Everything is inferred from 1-p experiences.  But cultural ideas and 
> traditions are
> public; they can be observed by more than one person and they can reach 
> intersubjective
> agreement just like any other facts about the world.
>

They are public to the members of the particular culture only. That's
not 3-p, it's 1-p plural.

>
>
> >>> It is not predictable
> >>> from 3-p appearances alone, but not because it breaks the laws of
> >>> physics. Physics has nothing to say about what particular patterns
> >>> occur in the brain as a whole.
> >> Sure it does - unless magic happens.
> > Consciousness happens. Physics has nothing to say about what the
> > content of any particular brain's thoughts should be. If I give you a
> > book about Marxism then you will have thoughts about Marxism - not
> > about whatever physical modeling of a brain of your genetic makeup
> > would suggest.
>
> Do you think a book about Marxism is not physical and reading it is not a 
> physical
> process?  What is your evidence for this?

Because if the book is written in Russian then you won't (I'm
assuming) be able to read it. If you learn to read Russian then it
becomes a book about Marxism to you, but with no changes to the ink or
pages in the book. A book is a physical thing, but 'about Marxism' is
a 1-p subjective experience.

> That's the whole question: Is thinking a purely
> physical process or does it include some extra-physical part?

That's your question, not mine. I see the physical and experiential in
a clear and specific relationship of mutual interdependence. Thinking
is not extra-physical; it is entero-physical.

>
>
>
> >>> There is no relevant biochemical
> >>> difference between one thought and another that could make it
> >>> impossible physically,
> >> So you say.   But I think there is.  If you think of an elephant there is 
> >> something
> >> biochemical happening that makes it not a thought about a giraffe.  So 
> >> when you read
> >> "elephant" it is impossible to think of a giraffe at that moment.
> > Nah, you can easily be hypnotized to think of a giraffe whenever you
> > see the word elephant. I don't understand what it would prove anyways.
> > Each person reading the word for elephant in their own language will
> > have different biochemical happenings which could not be proactively
> > tied to elephantness or giraffeness if you didn't already have a
> > correlation established beforehand from first hand anecdotal reports
> > of subjective content. There is no predictive route from the
> > biochemistry to zoological linguistic complexes and no role for any
> > such complexes to play in the observed biochemistry.
>
> >>> just as there is no sequence of illuminated
> >>> pixels that is preferred by a TV screen, or electronics, or physics.
> >>>> Of course this violation may be hard to detect in something very 
> >>>> complicated like a brain;
> >>>> but Craig's theory doesn't seem to assume the brain is special in that 
> >>>> respect and even a
> >>>> single electron supposedly has these extra, unobservable variables, i.e. 
> >>>> a mind of its
> >>>> own.
> >>> No. I have never said that a particle has a mind of its own, I only
> >>> say that it may have a sensorimotive quality which is primitive like
> >>> charge or spin, but that this quality scales up in a different way
> >>> than quantitative properties.
> >> Scales up how?
> > Qualitatively. Richer, deeper, more meaningful qualia. Where else does
> > it come from? A metaphysical dimension?
>
> > How is this sensormotive quality detected or measured?
>
> > It is felt. It is experienced first hand as qualia.
>
> >> What's its
> >> operational definition?
> > What form do you want it in? Defined in terms of what?
>
> An operational definition is in terms of operations that will detect or 
> measure something.

A sensorimotive phenomenon is the ability to privately perceive,
experience, and intentionally participate in that experience.

>
> > Sensorimotive
> > phenomena is a universal primitive. It is the capacity for
> > participatory being - to detect and respond to changing interior and
> > exterior conditions.
>
> >> How is it different from connective complexity of processes -
> >> which is the quality that most people think gives a brain its special 
> >> quality?
> > Without sensorimotive qualities, those processes cannot be experienced
> > by anything. What knows the difference between simplicity and
> > complexity if you have no awareness to distinguish it?
>
> If you have no awareness then you don't know anything.  It doesn't follow 
> that everything
> depends on your awareness of it.

No, but it doesn't follow that anything can exist independently of
awareness in general either. If I have no awareness then I don't know
anything, but if the universe has no awareness then it doesn't exist
(in what form could it be said to exist?).

>
>
>
> >>> The brain is very special *to us* and I
> >>> suspect that it is pretty special relatively speaking as far as
> >>> processes in the Cosmos. It's not special because it has awareness
> >>> though, it's just the degree to which that awareness is elaborated and
> >>> concentrated.
> >>>> The problem with electrons or other simple systems is that while we have 
> >>>> complete
> >>>> access to their 3-p variables, we don't have access to their 
> >>>> hypothetical other variables;
> >>>> the ones we call 1-p when referring to humans.  So when all the silver 
> >>>> atoms in a
> >>>> Stern-Gerlach do just as we predict, it can be claimed that they all had 
> >>>> the same 1-p
> >>>> variables and that's why the 3-p variables were sufficient to predict 
> >>>> their behavior.
> >>> Why is that a problem?
> >> It's a problem because it makes your theory untestable for anything except 
> >> a human brain.
> > Why would you need more than a human brain? You just have to turn it
> > into a laboratory. Figure out how conjoined twins who share the same
> > brain do that, and then conjoin your brain with other kinds of brains,
> > tissues, cells, molecules. It's a lot easier than trying to copy
> > someone's brain by duplicating the position of every atom in their
> > neurons.
>
> >>>> So the only way I see to test this theory, even in principle, would be 
> >>>> to observe Craig's
> >>>> brain at a very low level while having him report his experiences (at 
> >>>> least to himself)
> >>>> and show that his experiences and his brain states were not one-to-one.
> >>> No, I'm not saying that 1-p and 3-p are not synchronized, they are
> >>> synchronized, but that doesn't mean that voluntary choices supervene
> >>> on default neurological processes. Look at how our diaphragm works. We
> >>> can voluntarily control our breathing to a certain extent, but there
> >>> are involuntary default behaviors as well. This does not mean that we
> >>> can't decide to hold our breath or that it can only be our body which
> >>> is doing the deciding. How do you explain the appearance of voluntary
> >>> control of our body?
> >> It appears voluntary because we can't perceive the brain processes that 
> >> produce the
> >> action.  So when the action comports with the brain's usual pathways we 
> >> feel "we did it
> >> *voluntarily*".
> > That doesn't explain the appearance at all. You're just acknowledging
> > that there is a feeling despite your not knowing (or caring) why it's
> > necessary.
>
> >> Which is the point of David Eagleman's experiment with shifting a person's
> >> time calibration.  If he shifted it so that the result appeared earlier 
> >> (in subjective
> >> time) than the voluntary act then the person no longer felt that they had 
> >> done it.  It
> >> happened without them.
> > There is no question that our feeling of free will as a unified
> > phenomenon is limited to a particular scale of time, but so what? We
> > know that our consciousness is multi-threaded so that many awarenesses
> > compete for attention. That takes time. The threads that are involved
> > with tying the perceptions together are going to lag behind the flow
> > of sensations because you are slicing the time frame too thin to
> > reveal the minimum thickness of human consciousness. That doesn't mean
> > that our voluntary actions are not voluntary.
>
> What's the operational definition of "voluntary"?  Does it exclude 
> "determined by physics"?

Voluntary means that we perceive a coherent and consistent qualitative
difference between those actions and actions that are involuntary. We
feel that we are doing something consciously, as opposed to digesting
something automatically and unconsciously.

>
> > It just means that our
> > psyche is very complex and arriving at a consensus can only happen so
> > fast. Measurements faster than that are going to look strange, just as
> > freezing a movie mid frame is going to give you some strange artifacts
> > and blurs that defy ordinary expectations of what a movie should look
> > like.
>
> >>>> Of course this is
> >>>> probably impossible with current technology.  Observing the brain at a 
> >>>> coarse-grained
> >>>> level leaves open the possibility that one is just missing the 3-p 
> >>>> variables that would show
> >>>> the relationship to be one-to-one.
> >>>> So I'd say that until someone thinks of an empirical test for this "soul 
> >>>> theory",
> >>>> discussing it is a waste of bandwidth.
> >>> Way to argue from authority. "Your thoughts are a waste of everyone's
> >>> time unless I think that they can be proved to my satisfaction".
> >> I didn't say anything about which outcome would satisfy me.  I said it's a 
> >> waste of time
> >> to argue a theory that cannot be tested.
> > It can be tested, just maybe not with the technology we are using. You
> > could build instruments which use living tissue to test these ideas.
> > Replace someone's eye with a petri-dish retina that can serve as a
> > laboratory for different types of cells to see if vision can be
> > recreated out of other kinds of tissue, see if you get new colors,
> > etc. There's all kinds of ways this theory could be tested,
>
> How would you know if it perceived new colors?  You couldn't ask it, and you 
> have no
> access to its qualia (if it has any).

That's why I said "REPLACE SOMEONE'S EYE with a petri-dish retina".
Don't you read the words I write? ;)
Then the patient has access to the qualia of whatever we can
successfully connect to their optic nerve.

Craig
