On 15 May 2017 at 19:41, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 5/15/2017 3:38 AM, David Nyman wrote:
>
> I've been thinking a bit about physical supervenience in the
> computationalist context and have come to the conclusion that I don't
> really understand it. So let's consider CT + YD. YD means accepting the
> replacement of all or part of my brain with a digital prosthesis. Now,
> whatever theory the doctor may vouchsafe me with respect to the function of
> this device, the replacement will FAPP be, in the first (and last)
> instance, a physical one. IOW after the procedure some or all of my
> neurological function will have been replaced by digital componentry
> (presumably some species of logic gates) that putatively (sufficiently)
> faithfully mirrors the function of the biology that has been replaced. From
> any extrinsic perspective, all that will have happened (assuming the
> success of the procedure) is that the net physical behaviour of my generic
> brain will have been preserved to the required extent. Notice that there is
> no necessary reference to computation per se so far, however much we may
> wish to appeal to it in explicating what is "actually" supposed to have
> occurred. In a minimal sense supervenience will have been satisfied, in
> that my behaviour will continue to covary systematically with the net
> physical action of my generic brain, but there is no necessary reference to
> computation in this. And indeed if the proposition of CTM were simply based
> on YD alone, there would seem to be no further criteria to satisfy.
> However, with the additional assumption of CT, it would still seem
> necessary to make a further step towards explicating the relation between
> the above mentioned net physical action and the relevant spectrum of
> computation deemed to underlie both it and the (perceptually) substantial
> terms in which it is subjectively made manifest. The use of the word "net"
> in the above is noteworthy as a little reflection will remind us that, no
> matter how many "layers" of software are in principle being implemented on
> a given hardware, the only relevant observable consequence is their net
> physical effect, which may be rendered entirely minimal, or even be
> approximated adventitiously.
>
>
> If the implementation is producing conscious thoughts, then are those
> "observed"?
>

No, I don't think that would be the correct way to put it. If we continue
to use the term "observed" (and all terms should IMO be considered mere
placeholders in this context) then conscious thoughts should be considered
to fall within the spectrum of observation itself. My thought here is that
anything whatsoever within awareness must fall somewhere on this spectrum.



> The idea of conscious thoughts as something special is inherent in saying
> that the device implanted must implement the same computation - not just
> have the same physical manifestation.
>

Yes, somehow. But my point is that we will only ever observe the net
physical action that putatively implements the integration of the entire
abstract computational stack, whatever we may take that to be. And indeed
we can only ever take action (e.g. in implementing YD) in exactly those
same terms. It's easy to forget this with all the easy talk about software
implementation and instantiation, but the absolutely ingenious thing about
so-called computation in the physically-implementable sense is that in the
last analysis all those distinctions are lost in the final net physical
integration of the whole abstract confection. And this includes of course
all inputs and subsequent outputs and all their effects whatsoever.
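To make the "net effect" point concrete, here is a toy sketch (entirely my own construction; the mini-language and its ops are hypothetical, nothing canonical):

```python
# A tiny "machine language": a program is a list of (op, arg) pairs
# acting on an accumulator. Direct execution at the bottom layer:
def run(program, acc=0):
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

# Model a software "layer" as a translator that compiles a higher-level
# program down to the machine language before it is run:
def compile_hl(hl_program):
    out = []
    for op, arg in hl_program:
        if op == "double":            # higher-level op expands to "mul 2"
            out.append(("mul", 2))
        else:
            out.append((op, arg))
    return out

low = [("add", 3), ("mul", 2)]         # written directly in machine code
high = [("add", 3), ("double", None)]  # same intent, one layer up

# However many layers of translation intervene, only the net effect of
# the final machine-level run is observable:
assert run(low) == run(compile_hl(high)) == 6
```

The two routes through the "stack" are distinguishable only in the abstract description; at the level of net effect they collapse into one and the same result.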

> Bruno's examples of beliefs and proofs are all internal private thoughts
> with no necessary output.
>

In the toy model, yes.

> I think this is a cheat
>

I really cannot agree.

> because those propositions only take meaning by reference to external
> things: perception and actions.
>

I do agree, but nevertheless I'm truly at a loss as to why you seem to be
so committed to this critique. Firstly, the root assumption is, in effect,
that all subjective awareness is fundamentally a dreamlike experience. This
is of course at least plausible in terms of what we know
about brain function. It seems that core functions are not duplicated
between what we ordinarily call dreaming and waking states, but rather are
distinguished by such considerations as sources of input data, relative
states of activation of sub-components of the perceptual apparatus, and
degree of behavioural inhibition. It is difficult indeed to see how an
evolutionary process could result in anything substantially different. The
toy model tries to explicate at least some of the fundamental components of
such "dreams" from a machine-psychological point of view. But as I've tried
to outline, a fully-developed theory - e.g. the kind implied in accepting
the doctor's proposed substitution of a finite digital device - would have
to entail a (sufficiently) consistent relation to a perceived externality
just as you require, else it would fail catastrophically. I don't believe
that Bruno has ever implied anything substantially different from this. At
least that would be the direction and goal of the perfected project.


>
> It has been asserted that "physics" emerges epistemologically as a
> consequence of the net perceptual integration (aka the psychology) of an
> infinity of digital machines. We can say this because of the formal
> equivalence of the class of such machines. This in effect equates, in a
> certain relevant sense, to their having a single such psychology, or
> monopsychism, albeit one that must be highly compartmentalised by
> programming and the contents of memory. In this sense, it may be possible
> to analogise this monopsychic perceptual position as akin to that of a
> multitasking OS running on a single "processor". There are at least two
> aspects to the physics of which we speak. The most obvious aspect is the
> observed behaviour of any physical system under study, which must always be
> rendered in terms of the net change in some set of concrete perceptual
> markers (e.g. the classic needle and dial). The second aspect is however
> unobservable in principle and consists of an abstract set of transition
> rules between physical states (assumed to be finitely computable, according
> to CTM) between observations. For present purposes, we may perhaps consider
> these rules to be represented by the wave equation which describes the
> unitary evolution of physical states.
>
> A question now occurs to me. On the foregoing presuppositions, are we to
> suppose that the computations representing the abstract unitary transition
> "rules" of the wavefunction (i.e. the second, abstract, part above) are
> *the selfsame ones* that, (under an alternative but putatively compatible
> logical interpretation) are supposed also to explicate the concrete
> perceptions in terms of which the observations (i.e. the first, perceptual,
> part) are made? If this were the case, we could indeed say that perception
> supervened both on computation (under one interpretation) and observed
> physical action (under a different but compatible one).
>
>
> That is (if I understand it) what has always been my conception of it.
> But Bruno has bridled at computations as "representing" and "explicating" -
> those are language functions: describing, representing, abstracting,
> explicating...  Not the Ding an sich.
>

Well, the Ding an sich - the ontology, if you like - is assumed
axiomatically at the outset to be arithmetic and computational relations.
The idea then is that language functions, representing, explicating et al -
in short, the elusive transition from syntax to semantics - are to be
justified epistemologically in terms of the fully developed schema. But the
toy model already anticipates at least the skeleton of this proposed
explanatory approach, which begins with the exploitation to the full of the
recursively emulative characteristic of computation itself. The ultimate
goal is then to demonstrate, again in terms of the toy model, how this
might intelligibly be developed to emulate a net monopsychic integration
of machine perspectives that will ultimately reference a stable, consistent
and pervasive "physical externality". The hope is then that this
perspectival integration can somehow be shown to predominate in
establishing the necessary subjective measure entailed in "extracting"
itself from the dross of the pathological background babble. Of course this
is not demonstrated, but the important point even at this early stage is
that if it fails conclusively at any juncture then CTM can be considered
definitively false and we must look elsewhere for a TOE. So there is no
disagreement AFAICS with what you have consistently demanded in these
respects and indeed it's the very thing for which I've tried to suggest
some intuitive analogies (nothing more at this stage). Of course these
goals can hardly be said to have been achieved, but I'm still not at all
sure why you would consider the project itself at such a preliminary stage
to be a cheat.
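By the "recursively emulative characteristic" I mean nothing more exotic than the two standard recursion-theoretic facts, in Kleene's textbook notation (my gloss, not part of the toy model itself):

```latex
% Universal machine: a single machine u emulates every machine x on every
% input y
\varphi_u(\langle x, y \rangle) = \varphi_x(y)

% Kleene's second recursion theorem: every total computable program
% transformation f has a fixed point e, so self-reference is always
% available to the machines
\forall f \, \exists e : \; \varphi_e = \varphi_{f(e)}
```

It is this pair - universal emulation plus guaranteed self-reference - that the proposed epistemological development is supposed to exploit "to the full".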


>
> Is this compatibility in effect what is meant by Bp (i.e. the communicable
> "belief in", or procedural commitment to, a finite set of rules) and also p
> (i.e. the "true", or directly incommunicable, correspondence between
> perceptual facts) implied by that belief? If this were indeed the case,
> then the indispensable characteristic of the necessary physics (i.e. the
> second, abstract, part) that would permit it to be singled out perceptually
> from the dross of the computational Babel in this way would be its being
> sufficiently "robust" in the relevant sense. That robustness would consist
> firstly in the capacity to stabilise the emulation of an (ultimately
> monopsychic) class of perceptual machines. And secondly in that the
> necessary machine psychology would supervene on common computational
> transition rules resulting in a (sufficiently) consistent covariance with a
> concrete externality as perceived by those selfsame machines. The
> conjunction of those rules and that observed externality would then be what
> we call physics and the computational physics so particularised would in
> effect be distinguished by its intrinsic capacity for self-interpretation
> and self-observation.
>
>
> That seems to have the form of "If this theory is going to work out then
> X, Y, and Z MUST be true."
>

Yes, at this stage, exactly that. And concomitantly if X, Y, and Z are
demonstrably false then the theory cannot work.

>
> We had extended arguments starting from "Why isn't the-rock-that-computes
> everything conscious?"  I think your analysis above needs to be extended to
> cover that.   You seem to take "perception" as a given attribute of the
> machine, but perception is part of consciousness which we're trying to
> explain.
>

Well, IIUC, perception is proposed to lie in the gap between procedure
(which again IIUC is what Turing was attempting to formalise at the outset)
and the putative truths entailed by those procedures *on the basis that
they can be taken as having true reference in the first instance*. The
emphasis I've applied here is IMO the necessary basis of any coherent
syntax-semantics transition. This is of course what Searle, for one, denies
to the computational schema; but this is because IMO he lacks the relevant
tools for any illuminating first-personal analysis. Those truths would, I
propose, cash out in terms of subjective correspondence of relevantly
interpreted propositions with the (perceptual) facts to which they
putatively refer. And this reference must extend ultimately, as we have
agreed, to a consistent, stable, pervasive and POVI covariance with both a
perceived externality and its corresponding set of abstract transition
rules. Without such a cash value, there can be no intelligible possibility
of reference and indeed no justification of the ascription of propositional
value or even mere utterance. In short there would be mere reduced
computation (i.e. the base ontology shorn of any attendant epistemology). I
speak purely in terms of a theory in the axiomatic mode, of course.
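For reference, the modal arrangement I have in mind here is roughly Bruno's usual presentation, with B read as Gödel-Löb provability and p an arithmetical truth (my summary of it, so caveat lector):

```latex
\begin{align*}
p                            &\quad \text{truth} \\
Bp                           &\quad \text{belief: provable, hence communicable} \\
Bp \land p                   &\quad \text{knowledge (the Theaetetus move)} \\
Bp \land \Diamond p          &\quad \text{observation: provable and consistent} \\
Bp \land \Diamond p \land p  &\quad \text{sensation}
\end{align*}
```

The "gap" I speak of above is precisely that, by incompleteness, these variants provably do not coincide for the machine itself, though they coincide extensionally for the true beliefs of a sound machine.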

As to the rock that computes everything, ISTM that this falls roughly into
the Boltzmann Brain category of problem. So the (hopeful) idea is that the
subjective measure of any resultant perceptual states will either be
integrated into or, if pathological, be swamped in, the monopsychic
remembering-forgetting struggle for the stability, pervasiveness and
consistency that characterise the physics that we both observe and theorise
about. You will have noticed by now that I am more than somewhat attached
to this way of looking at the thing, so your robust counter-arguments would
be most appreciated. Bruce has most recently taken the view that this is
mostly wishful thinking, a view with which I must concur. But this is not
to say that such a wish cannot be fulfilled, unless it can be shown even at
this stage that there is something fundamentally wrong with it.

> At the very least I think perception already requires a first-person
> distinct from, but interacting with, an environment.
>

Agree totally. But as I've asked you before, why on earth would you assume
that Bruno or (lagging way behind) even I would think anything different?
Of course that is a long way from conclusively demonstrating or finally
explicating, but it is absolutely an acknowledgement that the theory cannot
succeed without the entailment you insist on above.

David

>
>
> Brent
>
>
> Does this make sense and if it does, what in particular about the
> computationalist assumptions or inferences make such a very specific
> conjunction plausible?
>
> David
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To post to this group, send email to everything-list@googlegroups.com.
> Visit this group at https://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
