> Date: Mon, 22 Feb 2010 08:42:17 -0800
> From: meeke...@dslextreme.com
> To: everything-list@googlegroups.com
> Subject: Re: problem of size '10
> 
> Jesse Mazer wrote:
> >
> >
> > > Date: Sat, 13 Feb 2010 10:48:28 -0800
> > > From: jackmal...@yahoo.com
> > > Subject: Re: problem of size '10
> > > To: everything-list@googlegroups.com
> > >
> > > --- On Fri, 2/12/10, Bruno Marchal <marc...@ulb.ac.be> wrote:
> > > > Jack Mallah wrote:
> > > > --- On Thu, 2/11/10, Bruno Marchal <marc...@ulb.ac.be>
> > > > > > MGA is more general (and older).
> > > > > > The only way to escape the conclusion would be to attribute 
> > > > > > consciousness to a movie of a computation
> > > > >
> > > > > That's not true.  For partial replacement scenarios, where part 
> > > > > of a brain has counterfactuals and the rest doesn't, see my partial 
> > > > > brain paper: http://cogprints.org/6321/
> > > >
> > > > It is not a question of true or false, but of presenting a valid 
> > > > or invalid deduction.
> > >
> > > What is false is your statement that "The only way to escape the 
> > > conclusion would be to attribute consciousness to a movie of a 
> > > computation".  So your argument is not valid.
> > >
> > > > I don't see anything in your comment or links which prevents the 
> > > > conclusions from being reached from the assumptions. If you think so, 
> > > > tell me at which step, and provide a justification.
> > >
> > > Bruno, I don't intend to be drawn into a detailed discussion of your 
> > > arguments at this time.  The key idea though is that a movie could 
> > > replace a computer brain.  The strongest argument for that is that you 
> > > could gradually replace the components of the computer (which have the 
> > > standard counterfactual (if-then) functioning) with components that 
> > > only play out a pre-recorded script or which behave correctly by 
> > > luck.  You could then invoke the 'fading qualia' argument (qualia 
> > > could plausibly not vanish either suddenly or by gradually fading as 
> > > the replacement proceeds) to argue that this makes no difference to 
> > > the consciousness.  My partial brain paper shows that the 'fading 
> > > qualia' argument is invalid.
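
[An aside to make the distinction Jack is drawing concrete: the little sketch 
below is my own toy example, with invented names and numbers, not anything 
from his paper. A component with genuine counterfactual (if-then) functioning 
computes its output from whatever input actually arrives; a replay component 
just plays back a pre-recorded script and only happens to be correct when the 
actual run matches the recording.]

# Toy sketch (invented example): counterfactual component vs. replay component.

def live_component(inp):
    # Genuine if-then functioning: the output depends on whatever input
    # actually arrives, so counterfactual inputs would also be handled.
    return 2 * inp + 1          # stand-in for some real transfer function

def make_replay_component(recorded_outputs):
    # No if-then structure at all: it ignores its input and plays back a
    # stored script, so it is only "correct" when the run matches the recording.
    outputs = iter(recorded_outputs)
    return lambda _inp: next(outputs)

actual_inputs = [3, 1, 4]
recording = [live_component(i) for i in actual_inputs]      # [7, 3, 9]
replay = make_replay_component(recording)

# On the actual run the two are input/output indistinguishable:
assert [replay(i) for i in actual_inputs] == recording
# But only the live component responds correctly to an input that never occurred:
print(live_component(10))                                   # 21
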
> >
> >
> >
> > Hi Jack, to me the idea that counterfactuals would be essential to 
> > defining what counts as an "implementation" has always seemed 
> > counterintuitive for reasons separate from the Olympia or movie-graph 
> > argument. The thought-experiment I'd like to consider is one where 
> > some device is implanted in my brain that passively monitors the 
> > activity of a large group of neurons, and only if it finds them firing 
> > in some precise prespecified sequence does it activate and stimulate 
> > my brain in some way, causing a change in brain activity; otherwise it 
> > remains causally inert (I suppose because of the butterfly effect, the 
> > mere presence of the device would eventually affect my brain activity, 
> > but we can imagine replacing the device with a subroutine in a 
> > deterministic program simulating my brain in a deterministic virtual 
> > environment, with the subroutine only being activated and influencing 
> > the simulation if certain simulated neurons fire in a precise sequence).
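
(To make that last, purely digital variant concrete, here is a rough sketch of 
the kind of subroutine I have in mind. Every name and number below is invented 
purely for illustration: a deterministic simulation whose update loop contains 
a watcher that stays causally inert unless one specific prespecified firing 
sequence occurs.)

# Rough sketch (all names/numbers invented): a deterministic "brain" simulation
# plus a watcher subroutine that only intervenes on one prespecified sequence.

TRIGGER = [17, 42, 5]                    # the prespecified firing sequence

def step(state):
    # Stand-in for the deterministic brain update: next state plus the
    # index of the (simulated) neuron that fired on this step.
    new_state = (31 * state + 7) % 1000
    return new_state, new_state % 50

def stimulate(state):
    # Stand-in for the device actively stimulating the simulated brain.
    return (state + 500) % 1000

def watcher(history, state):
    # Passively monitors the firing history; it changes nothing unless the
    # most recent firings exactly match TRIGGER.
    if history[-len(TRIGGER):] == TRIGGER:
        return stimulate(state)
    return state

def run(initial_state, n_steps):
    state, history = initial_state, []
    for _ in range(n_steps):
        state, fired = step(state)       # deterministic update
        history.append(fired)
        state = watcher(history, state)  # usually a no-op
    return state, history

final, firings = run(initial_state=1, n_steps=20)
# If TRIGGER never actually occurs, this trajectory is identical to a run with
# the watcher deleted; the difference between the two is purely counterfactual.
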
> 
> It seems that these thought experiments inevitably lead to considering a 
> digital simulation of the brain in a virtual environment.  This is 
> usually brushed over as an inessential aspect, but I'm coming to the 
> opinion that it is essential.  Once you have encapsulated the whole 
> thought experiment in a closed virtual environment in a digital computer, 
> you have the paradox of the rock that computes everything.  How do we know 
> what is being computed in this virtual environment? Ordinarily the 
> answer to this is that we wrote the program and so we provide the 
> interpretation of the calculation *in this world*.  But it seems that in 
> these thought experiments we are implicitly supposing that the 
> simulation is inherently providing its own interpretation.  Maybe so; 
> but I see no reason to have confidence that this inherent interpretation 
> is either unique or has anything to do with the interpretation we 
> intended.  I suspect that this simulated consciousness is only 
> consciousness *in our external interpretation*.
> 
> Brent

In that case, aren't you saying that there is no objective answer to whether a 
particular physical process counts as an "implementation" of a given 
computation, and that absolutely any process can be seen as implementing any 
computation if outside observers choose to interpret it that way? That's 
basically the conclusion Chalmers was trying to avoid in his "Does a Rock 
Implement Every Finite-State Automaton?" paper at 
http://consc.net/papers/rock.html, which discusses the implementation problem. 
One possible answer to this problem is that implementations *are* totally 
subjective, but this would seem to rule out the possibility of there ever being 
any sort of objective measure on computations (unless you imagine some 
privileged observers who are themselves *not* identified with computations and 
whose interpretations are the only ones that 'count'), which makes it hard to 
solve things like the "white rabbit problem" that's been discussed often on 
this list.
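
Just to spell out why the "implementation is in the eye of the beholder" view 
trivializes things, here is a toy version of the Putnam/Chalmers point. It's my 
own sketch with made-up state names, not the formal construction from the 
paper: given a record of any process that passes through distinct states, one 
can always define a mapping after the fact that reads it as the state sequence 
of whatever computation one likes.

# Toy sketch (made-up names, not Chalmers' formal construction): with
# unrestricted after-the-fact mappings, any process whose successive states
# are distinguishable can be "interpreted" as running any computation.

def cook_up_interpretation(physical_trace, computation_trace):
    # physical_trace: observed, pairwise-distinct states of some process
    # (say, a rock's microstates at t = 0, 1, 2, ...).
    # computation_trace: the state sequence of the computation we want the
    # process to "implement".
    return dict(zip(physical_trace, computation_trace))

rock_trace = ["r0", "r1", "r2", "r3"]               # arbitrary distinct states
fsa_trace  = ["start", "read", "carry", "halt"]     # any computation's run

interpretation = cook_up_interpretation(rock_trace, fsa_trace)
print([interpretation[r] for r in rock_trace])      # the FSA run, "found" in the rock

# The mapping does all the work: nothing about the rock constrained it, which
# is exactly why Chalmers adds counterfactual/causal conditions to rule it out.
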
Jesse                                     

