On 12 August 2014 12:48, meekerdb <meeke...@verizon.net> wrote:

> On 8/11/2014 4:03 PM, LizR wrote:
>
>> I have never got this idea of "counterfactual correctness". It seems to
>> me that the argument goes ...
>>
>> Assume computational process A is conscious
>> Take process B, which replays A - B passes through the same machine
>> states as A but doesn't work them out; it's driven by a recording of A.
>> B isn't conscious because it isn't counterfactually correct.
>>
>> I can't see how this works. (Except that if we assume consciousness
>> doesn't supervene on material processes, then neither A nor B is conscious;
>> they are just somehow attached to conscious experiences generated
>> elsewhere, maybe by a UD.)
>>
>
> It doesn't work, because it ignores the fact that consciousness is about
> something. It can only exist in the context of thoughts (machine states and
> processes) referring to a "world", as part of a representational and
> predictive model.  Without the counterfactuals, it's just a sequence of
> states and not a model of anything.  But to be a model it must interact,
> or have interacted in the past, with the world, so that the model is
> causally connected to it.  It is this connection that gives meaning to
> the model.


What differentiates A and B, given that they use the same machine states?
How can A be more about something than B? Or to put it another way, what is
the "meaning" that makes A conscious, but not B?


> Because Bruno is a logician he tends to think of consciousness as
> performing deductive proofs, executing a proof in the sense that every
> computer program is a proof.  He models belief as proof.  But this
> overlooks where the meaning of the program comes from.  People who want to
> deny that computers can be conscious point out that the meaning comes from
> the programmer.  But it doesn't have to.  If the computer has goals and can
> learn and act within the world, then its internal modeling and decision
> processes get meaning through their potential for action.
>
> This is why I don't agree with the conclusion drawn from step 8.  I think
> the requirement to be counterfactually correct implies that a whole world, a
> physics, needs to be simulated too, or else the Movie Graph or Klara need
> to be able to interact with the world to supply the meaning to their
> program.  But if the Movie Graph computer is a counterfactually correct
> simulation of a person within a simulated world, there's no longer a
> "reversal".  Simulated consciousness exists in simulated worlds - dog bites
> man.
>
Are you assuming that the world with which the MG interacts is itself
digitally emulable? If so, doesn't Bruno's argument go through for the
whole emulated world, if not for a subcomponent of it ("Klara")? ISTM
you're saying that a conscious being has to interact with a world - which
may be true (people go mad in sensory isolation eventually). But if the
world is emulable then the MGA can be applied to it as a whole. Or at least
I remember Bruno saying that the substitution level and region to be
emulated weren't important to the argument, as long as there is some level
and region in which it holds. I'm sure he said that it might involve
emulating the world, or a chunk of the universe, but that the argument
still goes through.

Or did I misremember that? Or did he say it, but there's a flaw in his
argument?
