On 28 February 2014 16:44, Craig Weinberg <whatsons...@gmail.com> wrote:

>
>
> On Friday, February 28, 2014 8:29:52 AM UTC-5, David Nyman wrote:
>
>> On 27 February 2014 16:43, Craig Weinberg <whats...@gmail.com> wrote:
>>
>>>
>>>
>>> On Thursday, February 27, 2014 9:47:33 AM UTC-5, David Nyman wrote:
>>>
>>>> On 27 February 2014 14:02, Craig Weinberg <whats...@gmail.com> wrote:
>>>>
>>>> In other words, why, in a functionalist/materialist world would we need
>>>>> a breakable program to keep telling us that our hand is not Alien? When
>>>>> you start by assuming that I'm always wrong, then it becomes very easy
>>>>> to justify that with ad hoc straw man accusations.
>>>>
>>>>
>>>> I do not in fact start with that assumption and, if you believe that I
>>>> do, I suggest you should question it. I do find however that I am unable to
>>>> draw the same conclusion as you from the examples you give. They simply
>>>> seem like false inferences to me (and to Stathis, based on his comment).
>>>>
>>>
>>> You are unable to draw the same conclusion because you aren't
>>> considering any part of what I have laid out. I'm looking at CTM as if it
>>> were true, and then proceeding from there to question whether what we
>>> observe (AHS, blindsight, etc) would be consistent with the idea of
>>> consciousness as a function.
>>>
>>
>> Yes, but functionalism doesn't necessarily force the claim that
>> consciousness *just is* a function: that is the eliminativist version. More
>> usually it is understood as the claim that consciousness *supervenes on*
>> (or co-varies with) function (i.e. the epiphenomenalist or crypto-dualist
>> versions).
>>
>
> That's even more eliminativist IMO.
>

I wouldn't disagree. You should have read on a bit further.


> To say that consciousness is identical to the function of a machine at
> least acknowledges that phenomenology is causally efficacious.
>

Not really. Only in a crypto-eliminativist sense, which is to say no sense
at all.


> To add in supervenience to non-computational epiphenomena is not really
> functionalism or digital functionalism or computationalism. What is
> overlooked is that supervenience and emergence both depend themselves on
> consciousness to provide a perspective in which some phenomena appear to
> 'emerge' from the supervening substrate. From the point of view of
> computation, surely computationalism cannot allow that consciousness comes
> as a surprise. From any comp perspective, we humans can define
> consciousness as emergent or supervenient, but surely arithmetic itself
> would not define its own conscious functionality as non-computational.
>

Slipping "surely" into a sentence doesn't make a contention any the more
plausible. It is certainly not obvious how one can begin from arithmetic
and arrive at consciousness. I have already argued that the assumption of a
first-personal reality, transcending any third-personal description of it,
is necessitated from the outset in any theory that purports to take
consciousness seriously (and that includes comp, by definition). The theory
must then show how this reality comes to be "discoverable" under the
appropriate conditions, but it doesn't thereby pull it out of a hat by
magic. I think it would be foolish to expect that the consequences of any
theory dealing with such fundamental questions would be obvious, and
therefore criticisms on the grounds of its failure to meet uninformed
expectations are beside the point.

>
>
>> But, if we take this latter view, the conundrum is more peculiar even
>> than you seem to imply by these piecemeal pot-shots. Rather, the *entire
>> story* of awareness / intention now figures only as a causally-irrelevant
>> "inside" interpretation of a complete and self-sufficient functional
>> history that neither knows nor cares about it.
>>
>
> What self-sufficient functional history do you mean?
>

The physical history of the systems in question, for example.


> When I use history I'm generally talking about a collection of aesthetic
> resources which have been accumulated through direct experience and remain
> present implicitly locally and explicitly in the absolute sense.
>

You could hardly call that a functional history though.


>
>
>> You remember my analogy of the dramatis personae instantiated by the
>> pixels of the LCD screen?
>>
>
> Semi remember.
>

Well, the analogy was that the fact that the pixels are an adequate
infrastructure for the portrayal of any possible drama that will fit within
their confines doesn't mean that this provides a sufficient account of
those dramas. Analogously, the fact that we can give a functional account
of the brain doesn't mean that this provides a sufficient account of
consciousness. Since we can't appeal to an external source of
interpretation as we can in the analogy, we must look for a schema that can
make sense of "internal interpretation". If comp is correct, that
interpretation requires us to cast our net pretty wide.


>
>>
>> However it would be self-defeating if our response to such bafflement
>> resulted in our misrepresenting its patent successes because it cannot
>> explain everything. We should rather seek a resolution of the dichotomy
>> between apparently disparate accounts in a more powerful explanatory
>> framework; one that could, for example, explain how *just this kind of
>> infrastructure* might emerge as the mise-en-scène for *just these kinds of
>> dramatis personae*. Comp is a candidate for that framework if one accepts
>> at the outset that there is some functional level of substitution for the
>> brain. If one doesn't, there is certainly space for alternatives, but it is
>> fair to demand a similar reconciliatory account in all cases, rather than a
>> distortion of particular facts to suit one's preference.
>>
>
> I agree. Who is calling for facts to be distorted?
>

You are, or at least you consistently give that impression. You keep coming
up with attempts at counter-examples to functional or physical accounts
that seem immediately to fall on their faces, such as the argument that
functional deficiencies in the brain are insufficient to account for AHS.
What you keep forgetting is that, per functionalism, they only need to
account for the functional armature of AHS, which they do perfectly
adequately. Is this then an adequate account of the correlation of such
functions with the conscious phenomena of AHS? Absolutely not. That's why
we need a better theory than functionalism, in either its eliminativist or
epiphenomenalist guises.

> Once you have pansensitivity as the primordial identity, then computation
> becomes explainable as the skeletal reflection of sense through
> insensitivity (pan-entropy (pan-negentropy)). This replaces UDA and places
> a limit on computation to the context of public facing
> communication/encapsulation and leaves some aspect of privacy
> trans-measurable and locally omnipotent.
>

Those are big claims. I'd need to see something rather more closely argued,
on the lines of Bruno's papers, before I'd be swayed, or even be in a
position to compare the two theories.


>
>>> What I conclude is that since the function of the limb is not
>>> interrupted, there is no plausible basis for the program which models the
>>> limb to add in any extra alarm for a condition of 'functional but not 'my'
>>> function'. AHS is the same as a philosophical zombie, except that it is at
>>> the level where physiological behavior is exhibited rather than
>>> psychological behavior.
>>>
>>>
>>>>  If you have a compelling argument to the contrary, I wish you would
>>>> find a way to give it in a clearer form.
>>>>
>>>
>>> See above. Hopefully that is clearer.
>>>
>>>
>>>>  I can't see that what you say above fits the bill.
>>>>
>>>
>>> I don't see that criticism without any details or rebuttals fits the bill
>>> either. Whenever the criticism is "It seems to me that your argument
>>> fails", it only makes me more suspicious that there is no legitimate
>>> objection. I can't relate to it, since as far as I know, my objections are
>>> always in the form of an explanation - what specifically seems wrong to me,
>>> and how to see it differently so that what I'm objecting to is not
>>> overlooked.
>>>
>>>
>>>>  You seem to regard rhetorical questions beginning "why would we
>>>> need..?" as compelling arguments against a functional account, but they
>>>> seem to me to be beside the point.
>>>>
>>>
>>> That's because you are only considering the modus ponens view where
>>> since functionalism implies that a malfunctioning brain would produce
>>> anomalies in conscious experience, it would make sense that AHS affirms
>>> functionalism being true. I'm looking at the modus tollens view where since
>>> functionalism implies that brain function requires no additional ingredient
>>> to make the function of conscious machines seem conscious, some extra,
>>> non-functional ingredient is required to explain why AHS is alarming to
>>> those who suffer from it. Since the distress of AHS is observed to be real,
>>> and that is logically inconsistent with the expectations of functionalism,
>>> I conclude that the AHS example adds to the list of counterfactuals to
>>> CTM/Functionalism. It should not matter whether a limb feels like it's
>>> 'yours', *functionalism implies that the fact of being able to use a
>>> limb makes it feel like 'yours' by definition*. This is the entire
>>> premise of computationalist accounts of qualia; that the mathematical
>>> relations simply taste like raspberries or feel like pain because that is
>>> the implicit expression of those relations.
>>>
>>
>> I think if you consider my comments above you may concur that the
>> problems with functionalism are even deeper than this and necessitate a
>> fundamental paradigm shift if they are to be successfully resolved. But we
>> can't be content if such a shift results in the distortion or wholesale
>> junking of the functional account per se.
>>
>
> If functionalism asserts that consciousness is fundamentally functional,
> then I think it must be junked.
>

I agree. But I said the functional *account* - that is to say our ability
to give a consistent functional or physical accounting of things - not the
theory that this account exhausts the explanation of consciousness.


> Function is a sensible expectation, but sense is not a plausible
> expectation of function alone.
>
>
>> (which can only be self-defeating).
>>
>
> Why?
>

Because it is self-defeating to set out to mangle the functional account
just because it doesn't fit one's pet theory. What any candidate theory in
this topic needs to do, at least in the first instance, is to argue for
specific principles that could correlate the functional account
satisfactorily with the sensible one.


>
>
>> Rather we want to be able to explain how it is possible for a
>> functionally-complete account to emerge as an apparently discrete level of
>> reality and
>>
>
> It's possible because the functionally complete account is the one which
> excludes the non-functional aesthetic context from which that account
> arises. If you get rid of yourself, then in your estimation, what remains
> appears to be functional. It works the same way with the visual blind-spot,
> or any number of neurological disorders where sense compensates for what is
> missing by eliding the awareness that something is missing and allowing the
> underlying context of expectation to fill in the hole.
>
>
>> , moreover, in a manner that makes it possible to understand how the
>> functional manifestation of sensation
>>
>
> That a manifestation is functional is already only a quality of sense. We
> could dream of being in an Escher painting without ever suspecting any
> unusual functional mismatches.
>
>
>> might coincide lawfully with its inner expression. It is an explication
>> of that "coincidence" - the conjunction of true belief with the truth to
>> which it refers - that, for me, will be more convincing than any intuition
>> of the ultimate primacy of sensation; an intuition that surely begs the
>> very questions it seeks to resolve.
>>
>
> I think it is the intuition of the primacy of sense-independent truth
> which is begging the very questions that it seeks to resolve. Truth about
> what? What 'refers' to truth?
>

It's funny you should ask that because it's your theory that leads to the
idea of dolls that can "truthfully" produce the appearance, for example, of
being in pain without it being true that they are, in fact, in pain. ISTM
that the only way to get any adequate theoretic purchase on this sort of
paradox is through a principled correlation of the functional expressions
of pain (e.g. crying out or in any other way referring to it) with the
directly-perceived truth of being, in fact, "in pain". We should expect any
adequate account to be capable of situating both of these categories
appropriately without doing irreparable violence to either.

David


>>
>>>
>>>
>>>
>>>> They invite the obvious rejoinder that AHS doesn't seem in principle to
>>>> present any special difficulties to functionalism in explaining the facts
>>>> in its own terms.
>>>>
>>>
>>> It does present special difficulties though (see above). AHS doesn't
>>> make much sense for functionalism, particularly combined with blindsight,
>>> and all of the problems with qualia and the hard problem/explanatory gap.
>>>
>>>
>>>>  You recently proposed the example of tissue rejection which invited a
>>>> similar response.
>>>>
>>>> None of this is to say that I don't regard functional / material
>>>> accounts as problematic, but this is for a different reason; I think they
>>>> obfuscate the categorical distinctions between two orthogonal versions of
>>>> "the facts": at the reduced level of function and at the integrated level
>>>> of sensory awareness / intention. Comp, for example, seeks to remedy this
>>>> obfuscation by elucidating principled correlations between formal notions
>>>> of reduction and integration via computational theory. Hence, per comp, the
>>>> principle of digital substitution is not the terminus of an explanation but
>>>> the starting point for a deeper theory. ISTM that alternative theories
>>>> cannot avoid a similar burden of explanation.
>>>>
>>>
>>> They can if they begin by accepting that what we cannot explain about
>>> consciousness is unexplainable for a good reason, namely that consciousness
>>> cannot be made any more or less plain than it is. Consciousness is what
>>> makes all things plain, so it is circular to expect that it could be
>>> subject to its own subjugation.
>>>
>>> Craig
>>>
>>>
>>>>
>>>> David
>>>>
>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to everything-li...@googlegroups.com.
>>> To post to this group, send email to everyth...@googlegroups.com.
>>>
>>> Visit this group at http://groups.google.com/group/everything-list.
>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>
>>
