Re: A challenge for Craig

2013-09-26 Thread Craig Weinberg


On Thursday, September 26, 2013 6:17:04 AM UTC-4, telmo_menezes wrote:
>
> Hi Craig (and all), 
>
> Now that I have a better understanding of your ideas, I would like to 
> confront you with a thought experiment. Some of the stuff you say 
> looks completely esoteric to me, so I imagine there are three 
> possibilities: either you are significantly more intelligent than me 
> or you're a bit crazy, or both. I'm not joking, I don't know. 
>
> But I would like to focus on sensory participation as the fundamental 
> stuff of reality and your claim that strong AI is impossible because 
> the machines we build are just Frankensteins, in a sense. If I 
> understand correctly, you still believe these machines have sensory 
> participation just because they exist, but not in the sense that they 
> could emulate our human experiences. They have the sensory 
> participation level of the stuff they're made of and nothing else. 
> Right? 
>

Not exactly. My view is that there is only sensory participation on the 
level of what has naturally evolved. Since the machine did not organize 
itself, there is no 'machine' any more than a book of Shakespeare's quotes 
is a machine that is gradually turning into Shakespeare. What we see as 
machines are assemblies of parts which we use to automate functions 
according to our own human sense and motives - like a puppet. 

There is sensation going on at two levels: 1) the very local level, and 2) 
the absolute level. On the 1) local level, all machines depend on local 
physical events. Whether they are driven by springs and gears, boiling 
water in pipes, or subatomic collisions, Turing emulation rides on the back 
of specific conditions which lock and unlock small parts of the machine. 
The smallest of those parts would be associated with some sensory-motive 
interaction - the coherence of molecular surfaces, thermodynamics, 
electrodynamics, etc, have a very local, instantaneous, and presumably 
primitive sensory involvement. That could be very alien to us, as it is 
both very short term and very long term - maybe there is only a flash of 
feeling at the moment of change, who knows?

On the 2) absolute level, there is the logical sensibility which all 1) 
local events share - the least common denominator of body interactions. 
This is the universal machine that Bruno champions. It's not sense which is 
necessarily experienced directly, rather all local sense touches on this 
universal measuring system *when it measures* something else.

The problem with machines is that there is no sense in between the 
momentary, memoryless sensation of lock/unlock and the timeless, placeless 
sensibility of read/write or +/*. In a human experience, the 1) has evolved 
over billions of years to occupy the continuum in between 1) and 2), with 
implicit memories of feelings and experiences anchored in unique local 
contexts. Machines have no geography or ethnicity, no aesthetic presence.

 

>
> So let's talk about seeds. 
>
> We now know how a human being grows from a seed that we pretty much 
> understand. We might not be able to model all the complexity involved 
> in networks of gene expression, protein folding and so on, but we 
> understand the building blocks. We understand them to a point where we 
> can actually engineer the outcome to a degree. It is now 2013 and we 
> are, in a sense, living in the future. 
>
> So we can now take a fertilised egg and tweak it somehow. When done 
> successfully, a human being will grow out of it. Doing this with human 
> eggs is considered unethical, but I believe it is technically 
> possible. So a human being grows out of this egg. Is he/she normal? 
>

I don't know that there is normal. All that we can do is see whether people 
who have had various procedures done to their cell-bodies seem healthy to 
themselves and others.
 

>
> What if someone actually designs the entire DNA string and grows a 
> human being out of it? Still normal? 
>

Same thing. Probably, but it depends on how the mother's body responds to 
it as it develops.
 

>
> What if we simulate the growth of the organism from a string of 
> virtual DNA and then just assemble the outcome at some stage? Still 
> normal? 
>

Virtual DNA is a cartoon, with a recording of our expectations attached to 
it. Is a digital picture of a person 'normal'? If we photoshop it a little 
bit, is it still normal? The problem is the expectation that virtual 
anything is the same as real simply because it reminds us of something 
real. Of course it reminds us of what is real, we have designed it 
specifically to fool us in every way that we care about.
 

>
> What if now we do away with DNA altogether and use some other Turing 
> complete self-modifying system? 
>

Then we have a cool cartoon that reminds us of biology. That's if we have 
it rendered to a graphic display. If not then we have a warm box full of 
tiny switches that we can imagine are doing something other than switching 
on and off.
 

Re: A challenge for Craig

2013-09-26 Thread Telmo Menezes
On Thu, Sep 26, 2013 at 2:38 PM, Craig Weinberg  wrote:
>
>
> On Thursday, September 26, 2013 6:17:04 AM UTC-4, telmo_menezes wrote:
>>
>> Hi Craig (and all),
>>
>> Now that I have a better understanding of your ideas, I would like to
>> confront you with a thought experiment. Some of the stuff you say
>> looks completely esoteric to me, so I imagine there are three
>> possibilities: either you are significantly more intelligent than me
>> or you're a bit crazy, or both. I'm not joking, I don't know.
>>
>> But I would like to focus on sensory participation as the fundamental
>> stuff of reality and your claim that strong AI is impossible because
>> the machines we build are just Frankensteins, in a sense. If I
>> understand correctly, you still believe these machines have sensory
>> participation just because they exist, but not in the sense that they
>> could emulate our human experiences. They have the sensory
>> participation level of the stuff they're made of and nothing else.
>> Right?
>
>
> Not exactly. My view is that there is only sensory participation on the
> level of what has naturally evolved.

This sounds a bit like vitalism. What's so special about natural
evolution that can't be captured otherwise?

> Since the machine did not organize
> itself, there is no 'machine' any more than a book of Shakespeare's quotes
> is a machine that is gradually turning into Shakespeare.

But the books are not machines. Shakespeare possibly was. If he was,
why can't he be emulated by another machine?

> What we see as
> machines are assemblies of parts which we use to automate functions
> according to our own human sense and motives - like a puppet.
> There is sensation going on at two levels: 1) the very local level, and 2)
> the absolute level. On the 1) local level, all machines depend on local
> physical events. Whether they are driven by springs and gears, boiling water
> in pipes, or subatomic collisions, Turing emulation rides on the back of
> specific conditions which lock and unlock small parts of the machine. The
> smallest of those parts would be associated with some sensory-motive
> interaction - the coherence of molecular surfaces, thermodynamics,
> electrodynamics, etc, have a very local, instantaneous, and presumably
> primitive sensory involvement. That could be very alien to us, as it is both
> very short term and very long term - maybe there is only a flash of feeling
> at the moment of change, who knows?

This part I can somewhat agree with. I do tend to believe that 1p
experience is possibly not limited to living organisms. I think about
it like you describe: "flashes of feeling" and "who knows" :)

> On the 2) absolute level, there is the logical sensibility which all 1)
> local events share - the least common denominator of body interactions. This
> is the universal machine that Bruno champions. It's not sense which is
> necessarily experienced directly, rather all local sense touches on this
> universal measuring system *when it measures* something else.
>
> The problem with machines is that there is no sense in between the
> momentary, memoryless sensation of lock/unlock and the timeless, placeless
> sensibility of read/write or +/*. In a human experience, the 1) has evolved
> over billions of years to occupy the continuum in between 1) and 2), with
> implicit memories of feelings and experiences anchored in unique local
> contexts. Machines have no geography or ethnicity, no aesthetic presence.

Why do you believe we have evolved like that? What's the evolutionary
pressure for that? Whatever evolution did, why can't we recreate it?
Or do you, by evolution, mean something else/more than conventional
neo-Darwinism?

>
>>
>>
>> So let's talk about seeds.
>>
>> We now know how a human being grows from a seed that we pretty much
>> understand. We might not be able to model all the complexity involved
>> in networks of gene expression, protein folding and so on, but we
>> understand the building blocks. We understand them to a point where we
>> can actually engineer the outcome to a degree. It is now 2013 and we
>> are, in a sense, living in the future.
>>
>> So we can now take a fertilised egg and tweak it somehow. When done
>> successfully, a human being will grow out of it. Doing this with human
>> eggs is considered unethical, but I believe it is technically
>> possible. So a human being grows out of this egg. Is he/she normal?
>
>
> I don't know that there is normal. All that we can do is see whether people
> who have had various procedures done to their cell-bodies seem healthy to
> themselves and others.

So it appears you're open to the possibility that this is fine, and
that a human being like you and me was produced.

>>
>>
>> What if someone actually designs the entire DNA string and grows a
>> human being out of it? Still normal?
>
>
> Same thing. Probably, but it depends on how the mother's body responds to it
> as it develops.

So you don't believe this is possible:
http://e

Re: A challenge for Craig

2013-09-26 Thread Craig Weinberg


On Thursday, September 26, 2013 11:49:29 AM UTC-4, telmo_menezes wrote:
>
> On Thu, Sep 26, 2013 at 2:38 PM, Craig Weinberg wrote: 
> > 
> > 
> > On Thursday, September 26, 2013 6:17:04 AM UTC-4, telmo_menezes wrote: 
> >> 
> >> Hi Craig (and all), 
> >> 
> >> Now that I have a better understanding of your ideas, I would like to 
> >> confront you with a thought experiment. Some of the stuff you say 
> >> looks completely esoteric to me, so I imagine there are three 
> >> possibilities: either you are significantly more intelligent than me 
> >> or you're a bit crazy, or both. I'm not joking, I don't know. 
> >> 
> >> But I would like to focus on sensory participation as the fundamental 
> >> stuff of reality and your claim that strong AI is impossible because 
> >> the machines we build are just Frankensteins, in a sense. If I 
> >> understand correctly, you still believe these machines have sensory 
> >> participation just because they exist, but not in the sense that they 
> >> could emulate our human experiences. They have the sensory 
> >> participation level of the stuff they're made of and nothing else. 
> >> Right? 
> > 
> > 
> > Not exactly. My view is that there is only sensory participation on the 
> > level of what has naturally evolved. 
>
> This sounds a bit like vitalism. What's so special about natural 
> evolution that can't be captured otherwise? 
>

It's not about life or nature being special, it's about recognizing that 
nature is an expression of experience, and that experience can't be 
substituted. A player piano can be made to play the notes of a song, but no 
matter how many notes it plays, it will never know the significance of notes, 
or what music is.
 

>
> > Since the machine did not organize itself, there is no 'machine' any more 
> > than a book of Shakespeare's quotes is a machine that is gradually 
> > turning into Shakespeare. 
>
> But the books are not machines. Shakespeare possibly was. If he was, 
> why can't he be emulated by another machine? 
>

I was using the example of a book to show how different a symbol is from 
that which we imagine the symbol represents. If we want a more machine-like 
example, we can use a copy machine. The copier can reproduce the works of 
any author mechanically, but does it appreciate or participate in the 
content of what it is copying?


> > What we see as machines are assemblies of parts which we use to automate 
> > functions according to our own human sense and motives - like a puppet. 
> > There is sensation going on at two levels: 1) the very local level, and 
> > 2) the absolute level. On the 1) local level, all machines depend on 
> > local physical events. Whether they are driven by springs and gears, 
> > boiling water in pipes, or subatomic collisions, Turing emulation rides 
> > on the back of specific conditions which lock and unlock small parts of 
> > the machine. The smallest of those parts would be associated with some 
> > sensory-motive interaction - the coherence of molecular surfaces, 
> > thermodynamics, electrodynamics, etc, have a very local, instantaneous, 
> > and presumably primitive sensory involvement. That could be very alien 
> > to us, as it is both very short term and very long term - maybe there is 
> > only a flash of feeling at the moment of change, who knows? 
>
> This part I can somewhat agree with. I do tend to believe that 1p 
> experience is possibly not limited to living organisms. I think about 
> it like you describe: "flashes of feeling" and "who knows" :) 
>
> > On the 2) absolute level, there is the logical sensibility which all 1) 
> > local events share - the least common denominator of body interactions. 
> > This is the universal machine that Bruno champions. It's not sense which 
> > is necessarily experienced directly, rather all local sense touches on 
> > this universal measuring system *when it measures* something else. 
> > 
> > The problem with machines is that there is no sense in between the 
> > momentary, memoryless sensation of lock/unlock and the timeless, 
> > placeless sensibility of read/write or +/*. In a human experience, the 
> > 1) has evolved over billions of years to occupy the continuum in between 
> > 1) and 2), with implicit memories of feelings and experiences anchored 
> > in unique local contexts. Machines have no geography or ethnicity, no 
> > aesthetic presence. 
>
> Why do you believe we have evolved like that? What's the evolutionary 
> pressure for that? Whatever evolution did, why can't we recreate it? 
> Or do you, by evolution, mean something else/more than conventional 
> neo-Darwinism? 
>

By evolution I mean that the history of individual experiences plays a role 
in accessing possibilities. Experience takes place in a spacetime context 
that may only occur one time. 

If we want to be billionaires, we might ask "why can't we recreate John D. 
Rockefeller?", as if there were some particular recipe which

Re: A challenge for Craig

2013-09-27 Thread Telmo Menezes
On Thu, Sep 26, 2013 at 9:28 PM, Craig Weinberg  wrote:
>
>
> On Thursday, September 26, 2013 11:49:29 AM UTC-4, telmo_menezes wrote:
>>
>> On Thu, Sep 26, 2013 at 2:38 PM, Craig Weinberg 
>> wrote:
>> >
>> >
>> > On Thursday, September 26, 2013 6:17:04 AM UTC-4, telmo_menezes wrote:
>> >>
>> >> Hi Craig (and all),
>> >>
>> >> Now that I have a better understanding of your ideas, I would like to
>> >> confront you with a thought experiment. Some of the stuff you say
>> >> looks completely esoteric to me, so I imagine there are three
>> >> possibilities: either you are significantly more intelligent than me
>> >> or you're a bit crazy, or both. I'm not joking, I don't know.
>> >>
>> >> But I would like to focus on sensory participation as the fundamental
>> >> stuff of reality and your claim that strong AI is impossible because
>> >> the machines we build are just Frankensteins, in a sense. If I
>> >> understand correctly, you still believe these machines have sensory
>> >> participation just because they exist, but not in the sense that they
>> >> could emulate our human experiences. They have the sensory
>> >> participation level of the stuff they're made of and nothing else.
>> >> Right?
>> >
>> >
>> > Not exactly. My view is that there is only sensory participation on the
>> > level of what has naturally evolved.
>>
>> This sounds a bit like vitalism. What's so special about natural
>> evolution that can't be captured otherwise?
>
>
> It's not about life or nature being special, it's about recognizing that
> nature is an expression of experience, and that experience can't be
> substituted.

Ok. How did you arrive at this belief? How can you believe this
without proposing some mechanism by which it happens? Or do you
propose such a thing?

> A player piano can be made to play the notes of a song, but no
> matter how many notes it plays, it will never know the significance of
> notes, or what music is.
>
>>
>>
>> > Since the machine did not organize
>> > itself, there is no 'machine' any more than a book of Shakespeare's
>> > quotes
>> > is a machine that is gradually turning into Shakespeare.
>>
>> But the books are not machines. Shakespeare possibly was. If he was,
>> why can't he be emulated by another machine?
>
>
> I was using the example of a book to show how different a symbol is from
> that which we imagine the symbol represents. If we want a more machine-like
> example, we can use a copy machine. The copier can reproduce the works of
> any author mechanically, but does it appreciate or participate in the
> content of what it is copying?

Ok. Yes, of course. But consider this: when you read a book, your
brain fires in super-complex ways that constantly find patterns,
correlate with previous information, trigger emotions and so on. This
clearly isn't happening with the copying machine. This would also not
happen if I were forced to copy a book in Japanese by hand. So I don't
think the comparison is fair. I'm not trying to argue that brain
complexity generates consciousness, but I am inclined to believe that
this complexity creates the necessary space for a human-like 1p. I
don't see why this couldn't be equally done in a computer.

>>
>> > What we see as machines are assemblies of parts which we use to automate
>> > functions according to our own human sense and motives - like a puppet.
>> > There is sensation going on at two levels: 1) the very local level, and
>> > 2) the absolute level. On the 1) local level, all machines depend on
>> > local physical events. Whether they are driven by springs and gears,
>> > boiling water in pipes, or subatomic collisions, Turing emulation rides
>> > on the back of specific conditions which lock and unlock small parts of
>> > the machine. The smallest of those parts would be associated with some
>> > sensory-motive interaction - the coherence of molecular surfaces,
>> > thermodynamics, electrodynamics, etc, have a very local, instantaneous,
>> > and presumably primitive sensory involvement. That could be very alien
>> > to us, as it is both very short term and very long term - maybe there is
>> > only a flash of feeling at the moment of change, who knows?
>>
>> This part I can somewhat agree with. I do tend to believe that 1p
>> experience is possibly not limited to living organisms. I think about
>> it like you describe: "flashes of feeling" and "who knows" :)
>>
>> > On the 2) absolute level, there is the logical sensibility which all 1)
>> > local events share - the least common denominator of body interactions.
>> > This is the universal machine that Bruno champions. It's not sense which
>> > is necessarily experienced directly, rather all local sense touches on
>> > this universal measuring system *when it measures* something else.
>> >
>> > The problem with machines is that there is no sense in between the
>> > momentary, memoryless sensation of lock/unlock and the timeless,
>> > placeless sensibility of read/write or +/*.

Re: A challenge for Craig

2013-09-29 Thread Pierz
If I might just butt in (said the barman)...

It seems to me that Craig's insistence that "nothing is Turing emulable, 
only the measurements are" expresses a different ontological assumption 
from the one that computationalists take for granted. It's evident that if 
we make a flight simulator, we will never leave the ground, regardless of 
the verisimilitude of the simulation. So why would a simulated 
consciousness be expected to actually be conscious? Because of different 
ontological assumptions about matter and consciousness. Science has given 
up on the notion of consciousness as having "being" the same way that 
matter is assumed to. Because consciousness has no place in an objective 
description of the world (i.e., one which is defined purely in terms of the 
measurable), contemporary scientific thinking reduces consciousness to 
those apparent behavioural outputs of consciousness which *can* be 
measured. This is functionalism. Because we can't measure the presence or 
absence of awareness, functionalism gives up on the attempt and presents 
the functional outputs as the only things that are "really real". Hence we 
get the Turing test. If we can't tell the difference, the simulator is no 
longer a simulator: it *is* the thing simulated. This conclusion is shored 
up by the apparently water-tight argument that the brain is made of atoms 
and molecules which are Turing emulable (even if it would take the lifetime 
of the universe to simulate the behaviour of a protein in a complex 
cellular environment, but oh well, we can ignore quantum effects because 
it's too hot in there anyway and just fast forward to the neuronal level, 
right?). It's also supported by the objectifying mental habit of people 
conditioned through years of scientific training. It becomes so natural to 
step into the god-level third person perspective that the elision of 
private experience starts to seem like a small matter, and a step that one 
has no choice but to make. 

Of course, the alternative does present problems of its own! Craig 
frequently seems to slip into a kind of naturalism that would have it that 
brains possess soft, non-mechanical sense because they are soft and 
non-mechanical seeming. They can't be machines because they don't have 
cables and transistors. "Wetware" can't possibly be hardware. A lot of his 
arguments seem to be along those lines — the refusal to accept abstractions 
which others accept, as telmo aptly puts it. He claims to "solve the hard 
problem of consciousness" but the solution involves manoeuvres like 
"putting the whole universe into the explanatory gap" between objective and 
subjective: hardly illuminating! I get irritated by neologisms like PIP 
(whatever that stands for now - was "multi-sense realism" not obscure 
enough?), which to me seem to be about trying to add substance to vague and 
poetic intuitions about reality by attaching big, intellectual-sounding 
labels to them. 

However the same grain of sand that seems to get in Craig's eye does get in 
mine too. It's conceivable that some future incarnation of "cleverbot" 
(cleverbot.com, in case you don't know it) could reach a point of passing a 
Turing test through a combination of a vast repertoire of recorded 
conversation and some clever linguistic parsing to do a better job of 
keeping track of a semantic thread to the conversation (where the program 
currently falls down). But in this case, what goes on inside the machine 
seems to make all the difference, though the functionalists are committed 
to rejecting that position. Cleverly simulated conversation just doesn't 
seem to be real conversation if what is going on behind the scenes is just 
a bunch of rules for pulling lines out of a database. It's Craig's clever 
garbage lids. We can make a doll that screams and recoils from damaging 
inputs and learns to avoid them, but the functional outputs of pain are not 
the experience of pain. Imagine a being neurologically incapable of pain. 
Like "Mary", the hypothetical woman who lives her life seeing the world 
through a black and white monitor and cannot imagine colour qualia until 
she is released, such an entity could not begin to comprehend the meaning 
of screams of pain - beyond possibly recognising a self-protective 
function. The elision of qualia from functional theories of mind has 
potentially very serious ethical consequences - for only a subject with 
access to those qualia truly understands them. Understanding the human 
condition as it really is involves inhabiting human qualia. Otherwise you 
end up with Dr Mengele — humans as objects.
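
To make "pulling lines out of a database" concrete, here is a toy sketch in 
Python (my illustration; the corpus and matching rule are invented, and 
cleverbot's real internals are surely more elaborate). It scores stored 
exchanges by word overlap with the input and replays the reply attached to 
the best match.

    # A toy retrieval chatbot: hypothetical canned exchanges, matched by
    # crude word overlap. There is no understanding anywhere, just lookup.
    CORPUS = [
        ("how are you today", "I'm fine, thanks. How are you?"),
        ("what is your favourite music", "Anything with a good melody."),
        ("do you ever feel pain", "Sometimes everything just aches."),
    ]

    def reply(user_input):
        """Return the canned reply whose prompt best overlaps the input."""
        words = set(user_input.lower().rstrip("?!.").split())
        _, best_reply = max(
            CORPUS, key=lambda pair: len(words & set(pair[0].split())))
        return best_reply

    print(reply("Do you feel pain?"))  # -> "Sometimes everything just aches."

However large the corpus grows, the mechanism remains the same lookup: the 
functional output of a conversation, with nothing behind it.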

I've read Dennett's arguments against the "qualophiles" and I find them 
singularly unconvincing - though to say why is another long post. Dennett 
says we only "seem" to have qualia, but what can "seem" possibly mean in 
the absence of qualia? An illusion of a quality is an oxymoron, for the 
quality *is* only the way it seems. The comp assumption that computations 
have qua

Re: A challenge for Craig

2013-09-29 Thread LizR
Fascinating post. The "illusion of qualia" is perhaps something like the
"illusion of consciousness" - who is being fooled? (Who is the Master who
makes the grass green?)

My 2c on the Turing Test is that "ELIZA" passed it, so if you're being
pernickety that was solved in the 60s (I think it was) - but the real test
is whether "ELIZA" or "cleverbot" or whatever would *continue* to pass it
for, say, the duration of a voyage to Saturn (or Jupiter in the movie
version).  People manage to pass it for a lot of their lives, though I
would say, sadly, not for all of them (people or lives).
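
For anyone who hasn't seen how little ELIZA actually did: roughly, keyword 
matching plus pronoun reflection. A toy sketch in Python (illustrative only; 
these rules are invented, not Weizenbaum's actual DOCTOR script) shows both 
why it charms for a few exchanges and why it could never hold a thread all 
the way to Saturn.

    import re

    # Swap first- and second-person words so the echo sounds responsive.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # the fallback that eventually gives it away

    print(respond("I feel trapped by my work"))
    # -> "Why do you feel trapped by your work?"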



Re: A challenge for Craig

2013-09-29 Thread Stathis Papaioannou
On 30 September 2013 11:36, Pierz  wrote:
> If I might just butt in (said the barman)...
>
> It seems to me that Craig's insistence that "nothing is Turing emulable,
> only the measurements are" expresses a different ontological assumption from
> the one that computationalists take for granted. It's evident that if we
> make a flight simulator, we will never leave the ground, regardless of the
> verisimilitude of the simulation. So why would a simulated consciousness be
> expected to actually be conscious? Because of different ontological
> assumptions about matter and consciousness. Science has given up on the
> notion of consciousness as having "being" the same way that matter is
> assumed to. Because consciousness has no place in an objective description
> of the world (i.e., one which is defined purely in terms of the measurable),
> contemporary scientific thinking reduces consciousness to those apparent
> behavioural outputs of consciousness which *can* be measured. This is
> functionalism. Because we can't measure the presence or absence of
> awareness, functionalism gives up on the attempt and presents the functional
> outputs as the only things that are "really real". Hence we get the Turing
> test. If we can't tell the difference, the simulator is no longer a
> simulator: it *is* the thing simulated. This conclusion is shored up by the
> apparently water-tight argument that the brain is made of atoms and
> molecules which are Turing emulable (even if it would take the lifetime of
> the universe to simulate the behaviour of a protein in a complex cellular
> environment, but oh well, we can ignore quantum effects because it's too hot
> in there anyway and just fast forward to the neuronal level, right?). It's
> also supported by the objectifying mental habit of people conditioned
> through years of scientific training. It becomes so natural to step into the
> god-level third person perspective that the elision of private experience
> starts to seem like a small matter, and a step that one has no choice but to
> make.
>
> Of course, the alternative does present problems of its own! Craig
> frequently seems to slip into a kind of naturalism that would have it that
> brains possess soft, non-mechanical sense because they are soft and
> non-mechanical seeming. They can't be machines because they don't have
> cables and transistors. "Wetware" can't possibly be hardware. A lot of his
> arguments seem to be along those lines — the refusal to accept abstractions
> which others accept, as telmo aptly puts it. He claims to "solve the hard
> problem of consciousness" but the solution involves manoeuvres like "putting
> the whole universe into the explanatory gap" between objective and
> subjective: hardly illuminating! I get irritated by neologisms like PIP
> (whatever that stands for now - was "multi-sense realism" not obscure
> enough?), which to me seem to be about trying to add substance to vague and
> poetic intuitions about reality by attaching big, intellectual-sounding
> labels to them.
>
> However the same grain of sand that seems to get in Craig's eye does get in
> mine too. It's conceivable that some future incarnation of "cleverbot"
> (cleverbot.com, in case you don't know it) could reach a point of passing a
> Turing test through a combination of a vast repertoire of recorded
> conversation and some clever linguistic parsing to do a better job of
> keeping track of a semantic thread to the conversation (where the program
> currently falls down). But in this case, what goes on inside the machine
> seems to make all the difference, though the functionalists are committed to
> rejecting that position. Cleverly simulated conversation just doesn't seem
> to be real conversation if what is going on behind the scenes is just a
> bunch of rules for pulling lines out of a database. It's Craig's clever
> garbage lids. We can make a doll that screams and recoils from damaging
> inputs and learns to avoid them, but the functional outputs of pain are not
> the experience of pain. Imagine a being neurologically incapable of pain.
> Like "Mary", the hypothetical woman who lives her life seeing the world
> through a black and white monitor and cannot imagine colour qualia until she
> is released, such an entity could not begin to comprehend the meaning of
> screams of pain - beyond possibly recognising a self-protective function.
> The elision of qualia from functional theories of mind has potentially very
> serious ethical consequences - for only a subject with access to those
> qualia truly understands them. Understanding the human condition as it really
> is involves inhabiting human qualia. Otherwise you end up with Dr Mengele —
> humans as objects.
>
> I've read Dennett's arguments against the "qualophiles" and I find them
> singularly unconvincing - though to say why is another long post. Dennett
> says we only "seem" to have qualia, but what can "seem" possibly mean in the
> absence of qualia? An illusion of a quality is an oxymoron, for the quality
> *is* only the way it seems.

Re: A challenge for Craig

2013-09-30 Thread Telmo Menezes
On Fri, Sep 27, 2013 at 7:49 PM, Craig Weinberg  wrote:
>
>
> On Friday, September 27, 2013 8:00:11 AM UTC-4, telmo_menezes wrote:
>>
>> On Thu, Sep 26, 2013 at 9:28 PM, Craig Weinberg 
>> wrote:
>> >
>> >
>> > On Thursday, September 26, 2013 11:49:29 AM UTC-4, telmo_menezes wrote:
>> >>
>> >> On Thu, Sep 26, 2013 at 2:38 PM, Craig Weinberg 
>> >> wrote:
>> >> >
>> >> >
>> >> > On Thursday, September 26, 2013 6:17:04 AM UTC-4, telmo_menezes
>> >> > wrote:
>> >> >>
>> >> >> Hi Craig (and all),
>> >> >>
>> >> >> Now that I have a better understanding of your ideas, I would like
>> >> >> to
>> >> >> confront you with a thought experiment. Some of the stuff you say
>> >> >> looks completely esoteric to me, so I imagine there are three
>> >> >> possibilities: either you are significantly more intelligent than me
>> >> >> or you're a bit crazy, or both. I'm not joking, I don't know.
>> >> >>
>> >> >> But I would like to focus on sensory participation as the
>> >> >> fundamental
>> >> >> stuff of reality and your claim that strong AI is impossible because
>> >> >> the machines we build are just Frankensteins, in a sense. If I
>> >> >> understand correctly, you still believe these machines have sensory
>> >> >> participation just because they exist, but not in the sense that
>> >> >> they
>> >> >> could emulate our human experiences. They have the sensory
>> >> >> participation level of the stuff they're made of and nothing else.
>> >> >> Right?
>> >> >
>> >> >
>> >> > Not exactly. My view is that there is only sensory participation on
>> >> > the
>> >> > level of what has naturally evolved.
>> >>
>> >> This sounds a bit like vitalism. What's so special about natural
>> >> evolution that can't be captured otherwise?
>> >
>> >
>> > It's not about life or nature being special, it's about recognizing that
>> > nature is an expression of experience, and that experience can't be
>> > substituted.
>>
>> Ok. How did you arrive at this belief? How can you believe this
>> without proposing some mechanism by which it happens? Or do you
>> propose such a thing?
>
>
> Mechanisms are functions of time, but experience would be more primitive
> than time in this view. To have a mechanism, there must already be some
> experience of events, memory, expectation, etc.

But we know that universal machines can be built with very little:
simple cellular automata, arithmetic, balls colliding and so on. You
can then argue that some substrate is necessary for this computation,
but it is quite clear that what is necessary to have the possibility
of a full-blown human zombie is theoretically very little. This does
not refute your main thesis, of course, but I think it does refute the
idea that experience of events, memory and expectations are necessary
for mechanism.
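
As a concrete illustration of how little universality needs (a minimal sketch
of my own in Python, not anything from the thread), here is Rule 110: an
elementary cellular automaton whose entire update rule is the number 110, and
which Matthew Cook proved Turing-complete.

    # Rule 110: each cell's next state is a fixed function of its
    # three-cell neighbourhood; the whole "program" is one byte.
    RULE = 110

    def step(cells):
        """One synchronous update of a ring of 0/1 cells."""
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single live cell and watch structured "gliders" emerge.
    row = [0] * 64
    row[-1] = 1
    for _ in range(32):
        print("".join("#" if c else "." for c in row))
        row = step(row)

Nothing in this loop has events, memory or expectation in any rich sense, yet
with a suitable encoding it can compute whatever a Turing machine can.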

> Think of the mechanism by
> which you change your attention or open your eyes. Sure, there are
> mechanisms that we can point to in the body, but what mechanism do *you* use
> to control yourself?

Ok, I know what you mean. Yes, I find this mysterious.

> I submit that there is no button to push or crank to
> turn. If there were, then you would already be controlling yourself to use
> them. No, at some point something has to directly control something by
> feeling and doing.

What if the thing that controls is being generated by the act of controlling?

> Whether we push it down to the microcosm or out to
> statistical laws makes no difference - somewhere something has to sense
> something directly or we cannot have experience.
>
> I wouldn't call it a belief, it's a hypothesis. I arrived at it by having a
> lot of conversations in my head about it over several years - writing things
> down, remembering them, dreaming about them, etc.

Ok. I have nothing against this, but I would say you have to be very
cautious when relying on this type of approach. My position is that
there is a lot of value in doing this, but you cannot ever claim a
communicable discovery just by doing this. You can only find private
truth. When you try to communicate private truth, you risk sounding
like a lunatic. This is, in my view, what's so compelling about art.
Under the banner of "art", you are allowed to try to communicate
private truth and get a free pass from being considered a nutjob.

>>
>> > A player piano can be made to play the notes of a song, but no
>> > matter how many notes it plays, it will never know the significance of
>> > notes, or what music is.
>> >
>> >>
>> >>
>> >> > Since the machine did not organize
>> >> > itself, there is no 'machine' any more than a book of Shakespeare's
>> >> > quotes
>> >> > is a machine that is gradually turning into Shakespeare.
>> >>
>> >> But the books are not machines. Shakespeare possibly was. If he was,
>> >> why can't he be emulated by another machine?
>> >
>> >
>> > I was using the example of a book to show how different a symbol is from
>> > that which we imagine the symbol represents. If we want a more
>> > machine-like
>> > example, we can use a copy machine.

Re: A challenge for Craig

2013-09-30 Thread Pierz
Yes indeed, and it is compelling. Fading qualia and all that. It's the 
absurdity of philosophical zombies. Those arguments did have an influence 
on my thinking. On the other hand the idea that we *can* replicate all the 
brain's outputs remains an article of faith. I remember that almost the 
first thing I read in Dennett's book was his claim that rich, detailed 
hallucinations (perceptions in the absence of physical stimuli) are 
impossible. Dennett is either wrong on this - or a vast body of research 
into hallucinogens is. Not to mention NDEs and OBEs. Dennett may be right 
and these reports may all be mistakes and lies, but I doubt it. If he is 
wrong, then his arguments become a compelling case in quite the opposite 
sense to what he intended: the brain not as a manufacturer of consciousness 
but as something more like a receptor. My instinct tells me we don't know 
enough about the brain or consciousness to be certain of any conclusions 
derived from logic alone. We may be like Newtonians arguing cosmology 
without the benefit of QM and relativity.

On Monday, September 30, 2013 2:08:23 PM UTC+10, stathisp wrote:
>
> On 30 September 2013 11:36, Pierz wrote: 
> > If I might just butt in (said the barman)... 
> > 
> > It seems to me that Craig's insistence that "nothing is Turing emulable, 
> > only the measurements are" expresses a different ontological assumption 
> > from the one that computationalists take for granted. It's evident that 
> > if we make a flight simulator, we will never leave the ground, regardless 
> > of the verisimilitude of the simulation. So why would a simulated 
> > consciousness be expected to actually be conscious? Because of different 
> > ontological assumptions about matter and consciousness. Science has given 
> > up on the notion of consciousness as having "being" the same way that 
> > matter is assumed to. Because consciousness has no place in an objective 
> > description of the world (i.e., one which is defined purely in terms of 
> > the measurable), contemporary scientific thinking reduces consciousness 
> > to those apparent behavioural outputs of consciousness which *can* be 
> > measured. This is functionalism. Because we can't measure the presence 
> > or absence of awareness, functionalism gives up on the attempt and 
> > presents the functional outputs as the only things that are "really 
> > real". Hence we get the Turing test. If we can't tell the difference, 
> > the simulator is no longer a simulator: it *is* the thing simulated. 
> > This conclusion is shored up by the apparently water-tight argument that 
> > the brain is made of atoms and molecules which are Turing emulable (even 
> > if it would take the lifetime of the universe to simulate the behaviour 
> > of a protein in a complex cellular environment, but oh well, we can 
> > ignore quantum effects because it's too hot in there anyway and just 
> > fast forward to the neuronal level, right?). It's also supported by the 
> > objectifying mental habit of people conditioned through years of 
> > scientific training. It becomes so natural to step into the god-level 
> > third person perspective that the elision of private experience starts 
> > to seem like a small matter, and a step that one has no choice but to 
> > make. 
> > 
> > Of course, the alternative does present problems of its own! Craig 
> > frequently seems to slip into a kind of naturalism that would have it 
> > that brains possess soft, non-mechanical sense because they are soft and 
> > non-mechanical seeming. They can't be machines because they don't have 
> > cables and transistors. "Wetware" can't possibly be hardware. A lot of 
> > his arguments seem to be along those lines — the refusal to accept 
> > abstractions which others accept, as telmo aptly puts it. He claims to 
> > "solve the hard problem of consciousness" but the solution involves 
> > manoeuvres like "putting the whole universe into the explanatory gap" 
> > between objective and subjective: hardly illuminating! I get irritated 
> > by neologisms like PIP (whatever that stands for now - was "multi-sense 
> > realism" not obscure enough?), which to me seem to be about trying to 
> > add substance to vague and poetic intuitions about reality by attaching 
> > big, intellectual-sounding labels to them. 
> > 
> > However the same grain of sand that seems to get in Craig's eye does get 
> > in mine too. It's conceivable that some future incarnation of 
> > "cleverbot" (cleverbot.com, in case you don't know it) could reach a 
> > point of passing a Turing test through a combination of a vast 
> > repertoire of recorded conversation and some clever linguistic parsing 
> > to do a better job of keeping track of a semantic thread to the 
> > conversation (where the program currently falls down). But in this case, 
> > what goes on inside the machine seems to make all the difference, though 
> > the functionalists are committed to rejecting that position.

Re: A challenge for Craig

2013-09-30 Thread Telmo Menezes
On Mon, Sep 30, 2013 at 3:36 AM, Pierz  wrote:
> If I might just butt in (said the barman)...

The more the merrier!

> It seems to me that Craig's insistence that "nothing is Turing emulable,
> only the measurements are" expresses a different ontological assumption from
> the one that computationalists take for granted. It's evident that if we
> make a flight simulator, we will never leave the ground, regardless of the
> verisimilitude of the simulation. So why would a simulated consciousness be
> expected to actually be conscious? Because of different ontological
> assumptions about matter and consciousness. Science has given up on the
> notion of consciousness as having "being" the same way that matter is
> assumed to. Because consciousness has no place in an objective description
> of the world (i.e., one which is defined purely in terms of the measurable),
> contemporary scientific thinking reduces consciousness to those apparent
> behavioural outputs of consciousness which *can* be measured. This is
> functionalism. Because we can't measure the presence or absence of
> awareness, functionalism gives up on the attempt and presents the functional
> outputs as the only things that are "really real". Hence we get the Turing
> test. If we can't tell the difference, the simulator is no longer a
> simulator: it *is* the thing simulated.

Even under functionalist assumptions, I still find the Turing test to
be misguided because it requires the machine to lie, while a human can
pass it by telling the truth. I propose an alternative: you converse
with the machine for a certain amount of time and then I offer you $10
to kill it.

> This conclusion is shored up by the
> apparently water-tight argument that the brain is made of atoms and
> molecules which are Turing emulable (even if it would take the lifetime of
> the universe to simulate the behaviour of a protein in a complex cellular
> environment, but oh well, we can ignore quantum effects because it's too hot
> in there anyway and just fast forward to the neuronal level, right?). It's
> also supported by the objectifying mental habit of people conditioned
> through years of scientific training. It becomes so natural to step into the
> god-level third person perspective that the elision of private experience
> starts seems like a small matter, and a step that one has no choice but to
> make.
>
> Of course, the alternative does present problems of its own! Craig
> frequently seems to slip into a kind of naturalism that would have it that
> brains possess soft, non-mechanical sense because they are soft and
> non-mechanical seeming. They can't be machines because they don't have
> cables and transistors. "Wetware" can't possibly be hardware. A lot of his
> arguments seem to be along those lines — the refusal to accept abstractions
> which others accept, as telmo aptly puts it. He claims to "solve the hard
> problem of consciousness" but the solution involves manoeuvres like "putting
> the whole universe into the explanatory gap" between objective and
> subjective: hardly illuminating! I get irritated by neologisms like PIP
> (whatever that stands for now - was "multi-sense realism" not obscure
> enough?), which to me seem to be about trying to add substance to vague and
> poetic intuitions about reality by attaching big, intellectual-sounding
> labels to them.
>
> However the same grain of sand that seems to get in Craig's eye does get in
> mine too.

And mine.

> It's conceivable that some future incarnation of "cleverbot"
> (cleverbot.com, in case you don't know it) could reach a point of passing a
> Turing test through a combination of a vast repertoire of recorded
> conversation and some clever linguistic parsing to do a better job of
> keeping track of a semantic thread to the conversation (where the program
> currently falls down).

But then this approach is bound to fail if you extend the interaction
for long enough time, as Liz points out.

> But in this case, what goes on inside the machine
> seems to make all the difference, though the functionalists are committed to
> rejecting that position. Cleverly simulated conversation just doesn't seem
> to be real conversation if what is going on behind the scenes is just a
> bunch of rules for pulling lines out of a database. It's Craig's clever
> garbage lids. We can make a doll that screams and recoils from damaging
> inputs and learns to avoid them, but the functional outputs of pain are not
> the experience of pain. Imagine a being neurologically incapable of pain.
> Like "Mary", the hypothetical woman who lives her life seeing the world
> through a black and white monitor and cannot imagine colour qualia until she
> is released, such an entity could not begin to comprehend the meaning of
> screams of pain - beyond possibly recognising a self-protective function.
> The elision of qualia from functional theories of mind has potentially very
> serious ethical consequences - for only a subject with access to those
> qualia truly understands them.

Re: A challenge for Craig

2013-09-30 Thread Stathis Papaioannou
On 30 September 2013 22:00, Pierz  wrote:
> Yes indeed, and it is compelling. Fading qualia and all that. It's the
> absurdity of philosophical zombies.

The absurd thing is not philosophical zombies, which are at least
conceivable; it is partial zombies.

> Those arguments did have an influence on
> my thinking. On the other hand the idea that we *can* replicate all the
> brain's outputs remains an article of faith.

Although Chalmers doesn't point this out as far as I am aware, the argument
for functionalism is established merely with the *concept* of a
functionally equivalent brain component. That is, it is logically
impossible to make such a component that replicates behaviour but does
not replicate consciousness.

>  I remember that almost the
> first thing I read in Dennett's book was his claim that rich, detailed
> hallucinations (perceptions in the absence of physical stimuli) are
> impossible. Dennett is either wrong on this - or a vast body of research
> into hallucinogens is. Not to mention NDEs and OBEs. Dennett may be right
> and these reports may all be mistakes and lies, but I doubt it. If he is
> wrong, then his arguments become a compelling case in quite the opposite
> sense to what he intended: the brain not as a manufacturer of consciousness
> but as something more like a receptor.  My instinct tells me we don't know
> enough about the brain or consciousness to be certain of any conclusions
> derived from logic alone. We may be like Newtonians arguing cosmology
> without the benefit of QM and relativity.

Remarkably, without knowing anything about how the brain actually
works, it is possible to prove that it is impossible to replicate its
observable behaviour without also replicating its consciousness. This
is a very profound result.


-- 
Stathis Papaioannou



Re: A challenge for Craig

2013-09-30 Thread Richard Ruquist
Stathis

Could you provide the proof or a link to it?
Richard


On Mon, Sep 30, 2013 at 9:00 AM, Stathis Papaioannou wrote:

> On 30 September 2013 22:00, Pierz  wrote:
> > Yes indeed, and it is compelling. Fading qualia and all that. It's the
> > absurdity of philosophical zombies.
>
> The absurd thing is not philosophical zombies, which are at least
> conceivable, it is partial zombies.
>
> > Those arguments did have an influence on
> > my thinking. On the other hand the idea that we *can* replicate all the
> > brain's outputs remains an article of faith.
>
> Although Chalmers doesn't point this out as far as I am aware, the argument
> for functionalism is established merely with the *concept* of a
> functionally equivalent brain component. That is, it is logically
> impossible to make such a component that replicates behaviour but does
> not replicate consciousness.
>
> >  I remember that almost the
> > first thing I read in Dennett's book was his claim that rich, detailed
> > hallucinations (perceptions in the absence of physical stimuli) are
> > impossible. Dennett is either wrong on this - or a vast body of research
> > into hallucinogens is. Not to mention NDEs and OBEs. Dennett may be right
> > and these reports may all be mistakes and lies, but I doubt it. If he is
> > wrong, then his arguments become a compelling case in quite the opposite
> > sense to what he intended: the brain not as a manufacturer of
> consciousness
> > but as something more like a receptor.  My instinct tells me we don't
> know
> > enough about the brain or consciousness to be certain of any conclusions
> > derived from logic alone. We may be like Newtonians arguing cosmology
> > without the benefit of QM and relativity.
>
> Remarkably, without knowing anything about how the brain actually
> works, it is possible to prove that it is impossible to replicate its
> observable behaviour without also replicating its consciousness. This
> is a very profound result.
>
>
> --
> Stathis Papaioannou
>



Re: A challenge for Craig

2013-09-30 Thread Stathis Papaioannou


> On 30 Sep 2013, at 11:07 pm, Richard Ruquist  wrote:
> 
> Stathis 
> 
> Could you provide the proof or a link to it?
> Richard

It's the Chalmers "Fading Qualia" paper cited before. The paper refers to 
computer chips replacing neurons. The objection could be made that we do not 
know for sure that brain physics is computable, and if it isn't, the experiment 
is impossible. However, that would only show that computationalism was wrong, 
not that functionalism was wrong. Functionalism is established even if it turns 
out the neurons are animated by God.

>> On Mon, Sep 30, 2013 at 9:00 AM, Stathis Papaioannou  
>> wrote:
>> On 30 September 2013 22:00, Pierz  wrote:
>> > Yes indeed, and it is compelling. Fading qualia and all that. It's the
>> > absurdity of philosophical zombies.
>> 
>> The absurd thing is not philosophical zombies, which are at least
>> conceivable, it is partial zombies.
>> 
>> > Those arguments did have an influence on
>> > my thinking. On the other hand the idea that we *can* replicate all the
>> > brain's outputs remains an article of faith.
>> 
>> Although Chalmers doesn't point this out as far as I am aware, the argument
>> for functionalism is established merely with the *concept* of a
>> functionally equivalent brain component. That is, it is logically
>> impossible to make such a component that replicates behaviour but does
>> not replicate consciousness.
>> 
>> >  I remember that almost the
>> > first thing I read in Dennett's book was his claim that rich, detailed
>> > hallucinations (perceptions in the absence of physical stimuli) are
>> > impossible. Dennett is either wrong on this - or a vast body of research
>> > into hallucinogens is. Not to mention NDEs and OBEs. Dennett may be right
>> > and these reports may all be mistakes and lies, but I doubt it. If he is
>> > wrong, then his arguments become a compelling case in quite the opposite
>> > sense to what he intended: the brain not as a manufacturer of consciousness
>> > but as something more like a receptor.  My instinct tells me we don't know
>> > enough about the brain or consciousness to be certain of any conclusions
>> > derived from logic alone. We may be like Newtonians arguing cosmology
>> > without the benefit of QM and relativity.
>> 
>> Remarkably, without knowing anything about how the brain actually
>> works, it is possible to prove that it is impossible to replicate its
>> observable behaviour without also replicating its consciousness. This
>> is a very profound result.
>> 
>> 
>> --
>> Stathis Papaioannou
>> 



Re: A challenge for Craig

2013-09-30 Thread Craig Weinberg


On Sunday, September 29, 2013 9:36:28 PM UTC-4, Pierz wrote:
>
> If I might just butt in (said the barman)...
>
> It seems to me that Craig's insistence that "nothing is Turing emulable, 
> only the measurements are" expresses a different ontological assumption 
> from the one that computationalists take for granted. It's evident that if 
> we make a flight simulator, we will never leave the ground, regardless of 
> the verisimilitude of the simulation. So why would a simulated 
> consciousness be expected to actually be conscious? Because of different 
> ontological assumptions about matter and consciousness. Science has given 
> up on the notion of consciousness as having "being" the same way that 
> matter is assumed to. Because consciousness has no place in an objective 
> description of the world (i.e., one which is defined purely in terms of the 
> measurable), contemporary scientific thinking reduces consciousness to 
> those apparent behavioural outputs of consciousness which *can* be 
> measured. This is functionalism. Because we can't measure the presence or 
> absence of awareness, functionalism gives up on the attempt and presents 
> the functional outputs as the only things that are "really real". Hence we 
> get the Turing test. If we can't tell the difference, the simulator is no 
> longer a simulator: it *is* the thing simulated. This conclusion is shored 
> up by the apparently water-tight argument that the brain is made of atoms 
> and molecules which are Turing emulable (even if it would take the lifetime 
> of the universe to simulate the behaviour of a protein in a complex 
> cellular environment, but oh well, we can ignore quantum effects because 
> it's too hot in there anyway and just fast forward to the neuronal level, 
> right?). It's also supported by the objectifying mental habit of people 
> conditioned through years of scientific training. It becomes so natural to 
> step into the god-level third person perspective that the elision of 
> private experience starts to seem like a small matter, and a step that one 
> has no choice but to make. 
>
> Of course, the alternative does present problems of its own! Craig 
> frequently seems to slip into a kind of naturalism that would have it that 
> brains possess soft, non-mechanical sense because they are soft and 
> non-mechanical seeming.
>

Actually not. The aesthetic qualities of living organs do seem 
non-mechanical, and that may be a clue about their nature, but it doesn't 
have to be. We could make slippery, wet machines which were just as bad at 
feeling and experiencing deep qualia as a cell phone is. The naturalism 
that I appeal to arises not from the brain but from the nature of common 
experiences among humans, animals, and organisms and their apparent 
distance from inorganic systems. We can tell that a dog feels more like a 
person than a plant. Maybe it's not true. Maybe a Venus Flytrap feels like 
a dog? If I were going to recreate the universe from scratch however, and I 
had to bet on whether this intuitive hierarchy was important to include, I 
would bet that it was. It seems important, at least to living organisms. We 
need to know what we can eat and what we can impregnate with a high degree 
of reliability, and there seems to be a very natural understanding of that 
which does not require a Turing test.
 

> They can't be machines because they don't have cables and transistors. 
> "Wetware" can't possibly be hardware.
>

No, that's a Straw Man of my position - but an understandable and very 
common one. Wetware is hardware, but what is using the hardware is 
different from what uses the hardware of a silicon crystal.
 

> A lot of his arguments seem to be along those lines — the refusal to 
> accept abstractions which others accept, as telmo aptly puts it. He claims 
> to "solve the hard problem of consciousness" but the solution involves 
> manoeuvres like "putting the whole universe into the explanatory gap" 
> between objective and subjective: hardly illuminating!
>

It is illuminating to me. The universe becomes a continuum of aesthetic 
qualities modulated by physical-sense. There is no gap because there is 
nothing in the universe which does not bridge that gap. 
 

> I get irritated by neologisms like PIP (whatever that stands for now - was 
> "multi-sense realism' not obscure enough?), which to me seem to be about 
> trying to add substance to vague and poetic intuitions about reality by 
> attaching big, intellectual-sounding labels to them. 
>

I'm going to be posting a glossary in the next day or so. I know it sounds 
pretentious, but that's the irony. Like legalese, the point is not to 
obscure but to make absolutely clear. Multisense Realism is about the 
overall picture of experience and reality, while PIP (Primordial Identity 
Pansensitivity) describes the particular way that this approach differs from 
other views, like panpsychism or panexperientialism. Philosophical jargon 
is our friend :)
 

>
>

Re: A challenge for Craig

2013-09-30 Thread Craig Weinberg


On Monday, September 30, 2013 8:00:11 AM UTC-4, Pierz wrote:
>
> Yes indeed, and it is compelling. Fading qualia and all that. It's the 
> absurdity of philosophical zombies. Those arguments did have an influence 
> on my thinking. On the other hand the idea that we *can* replicate all the 
> brain's outputs remains an article of faith. I remember that almost the 
> first thing I read in Dennett's book was his claim that rich, detailed 
> hallucinations (perceptions in the absence of physical stimuli) are 
> impossible. Dennett is either wrong on this - or a vast body of research 
> into hallucinogens is. Not to mention NDEs and OBEs. Dennett may be right 
> and these reports may all be mistakes and lies, but I doubt it. If he is 
> wrong, then his arguments become a compelling case in quite the opposite 
> sense to what he intended: the brain not as a manufacturer of consciousness 
> but as something more like a receptor. My instinct tells me we don't know 
> enough about the brain or consciousness to be certain of any conclusions 
> derived from logic alone. We may be like Newtonians arguing cosmology 
> without the benefit of QM and relativity.
>


The key, IMO, lies in drilling down on the fundamentals. What does it mean 
to 'receive'? What is a 'signal' and what is it doing in physics?

Also, (and this is the one Chalmers argument where I think that he missed 
it) we can turn the fading qualia argument around. Is it any less absurd to 
propose that qualia fade in, or suddenly appear due to complex wiring? 

Instead of building a brain which is like our own, couldn't we also build a 
brain that measures and analyzes social data to become a perfect sociopath? 
What if we intentionally want to suppress understanding and emotion and 
build a perfect actor, a p-Zelig, who uses chameleon-like algorithms to 
ingratiate itself in any context?

This paper from Chalmers http://consc.net/papers/combination.pdf does a 
good job of getting more into the different views on the combination 
problem, and how the micro and macro relate. I think that PIP exposes an 
assumption which all of the other approaches listed in the paper do not, 
which is that there is even a possibility of the nonphenomenal. Once 
we take that away, we can see that our personal awareness may not be 
created by microphysical states, rather our personal awareness is a 
particular range of a total awareness that has sub-personal, 
super-personal, and impersonal (public physical) facets. 

Thanks,
Craig

 

>
> On Monday, September 30, 2013 2:08:23 PM UTC+10, stathisp wrote:
>>
>> On 30 September 2013 11:36, Pierz  wrote: 
>> > If I might just butt in (said the barman)... 
>> > 
>> > It seems to me that Craig's insistence that "nothing is Turing 
>> emulable, 
>> > only the measurements are" expresses a different ontological assumption 
>> from 
>> > the one that computationalists take for granted. It's evident that if 
>> we 
>> > make a flight simulator, we will never leave the ground, regardless of 
>> the 
>> > verisimilitude of the simulation. So why would a simulated 
>> consciousness be 
>> > expected to actually be conscious? Because of different ontological 
>> > assumptions about matter and consciousness. Science has given up on the 
>> > notion of consciousness as having "being" the same way that matter is 
>> > assumed to. Because consciousness has no place in an objective 
>> description 
>> > of the world (i.e., one which is defined purely in terms of the 
>> measurable), 
>> > contemporary scientific thinking reduces consciousness to those 
>> apparent 
>> > behavioural outputs of consciousness which *can* be measured. This is 
>> > functionalism. Because we can't measure the presence or absence of 
>> > awareness, functionalism gives up on the attempt and presents the 
>> functional 
>> > outputs as the only things that are "really real". Hence we get the 
>> Turing 
>> > test. If we can't tell the difference, the simulator is no longer a 
>> > simulator: it *is* the thing simulated. This conclusion is shored up by 
>> the 
>> > apparently water-tight argument that the brain is made of atoms and 
>> > molecules which are Turing emulable (even if it would take the lifetime 
>> of 
>> > the universe to simulate the behaviour of a protein in a complex 
>> cellular 
>> > environment, but oh well, we can ignore quantum effects because it's 
>> too hot 
>> > in there anyway and just fast forward to the neuronal level, right?). 
>> It's 
>> > also supported by the objectifying mental habit of people conditioned 
>> > through years of scientific training. It becomes so natural to step 
>> into the 
>> > god-level third person perspective that the elision of private 
>> experience 
>> > starts to seem like a small matter, and a step that one has no choice but 
>> to 
>> > make. 
>> > 
>> > Of course, the alternative does present problems of its own! Craig 
>> > frequently seems to slip into a kind of naturalism that would have it 

Re: A challenge for Craig

2013-09-30 Thread Bruno Marchal


On 30 Sep 2013, at 03:36, Pierz wrote:


If I might just butt in (said the barman)...

It seems to me that Craig's insistence that "nothing is Turing  
emulable, only the measurements are" expresses a different  
ontological assumption from the one that computationalists take for  
granted. It's evident that if we make a flight simulator, we will  
never leave the ground, regardless of the verisimilitude of the  
simulation. So why would a simulated consciousness be expected to  
actually be conscious? Because of different ontological assumptions  
about matter and consciousness. Science has given up on the notion  
of consciousness as having "being" the same way that matter is  
assumed to. Because consciousness has no place in an objective  
description of the world (i.e., one which is defined purely in terms  
of the measurable), contemporary scientific thinking reduces  
consciousness to those apparent behavioural outputs of consciousness  
which *can* be measured. This is functionalism. Because we can't  
measure the presence or absence of awareness, functionalism gives up  
on the attempt and presents the functional outputs as the only  
things that are "really real". Hence we get the Turing test. If we  
can't tell the difference, the simulator is no longer a simulator:  
it *is* the thing simulated. This conclusion is shored up by the  
apparently water-tight argument that the brain is made of atoms and  
molecules which are Turing emulable (even if it would take the  
lifetime of the universe to simulate the behaviour of a protein in a  
complex cellular environment, but oh well, we can ignore quantum  
effects because it's too hot in there anyway and just fast forward  
to the neuronal level, right?). It's also supported by the  
objectifying mental habit of people conditioned through years of  
scientific training. It becomes so natural to step into the god- 
level third person perspective that the elision of private  
experience starts to seem like a small matter, and a step that one has  
no choice but to make.


Of course, the alternative does present problems of its own! Craig  
frequently seems to slip into a kind of naturalism that would have  
it that brains possess soft, non-mechanical sense because they are  
soft and non-mechanical seeming. They can't be machines because they  
don't have cables and transistors. "Wetware" can't possibly be  
hardware. A lot of his arguments seem to be along those lines — the  
refusal to accept abstractions which others accept, as telmo aptly  
puts it. He claims to "solve the hard problem of consciousness" but  
the solution involves manoeuvres like "putting the whole universe  
into the explanatory gap" between objective and subjective: hardly  
illuminating! I get irritated by neologisms like PIP (whatever that  
stands for now - was "multi-sense realism' not obscure enough?),  
which to me seem to be about trying to add substance to vague and  
poetic intuitions about reality by attaching big, intellectual- 
sounding labels to them.


However the same grain of sand that seems to get in Craig's eye does  
get in mine too. It's conceivable that some future incarnation of  
"cleverbot" (cleverbot.com, in case you don't know it) could reach a  
point of passing a Turing test through a combination of a vast  
repertoire of recorded conversation and some clever linguistic  
parsing to do a better job of keeping track of a semantic thread to  
the conversation (where the program currently falls down). But in  
this case, what goes on inside the machine seems to make all the  
difference, though the functionalists are committed to rejecting  
that position. Cleverly simulated conversation just doesn't seem to  
be real conversation if what is going on behind the scenes is just a  
bunch of rules for pulling lines out of a database. It's Craig's  
clever garbage lids. We can make a doll that screams and recoils  
from damaging inputs and learns to avoid them, but the functional  
outputs of pain are not the experience of pain. Imagine a being  
neurologically incapable of pain. Like "Mary", the hypothetical  
woman who lives her life seeing the world through a black and white  
monitor and cannot imagine colour qualia until she is released, such  
an entity could not begin to comprehend the meaning of screams of  
pain - beyond possibly recognising a self-protective function. The  
elision of qualia from functional theories of mind has potentially  
very serious ethical consequences - for only a subject with access  
to those qualia can truly understand them. Understanding the human  
condition as it really is involves inhabiting human qualia.  
Otherwise you end up with Dr Mengele — humans as objects.


I've read Dennett's arguments against the "qualophiles" and I find  
them singularly unconvincing - though to say why is another long  
post. Dennett says we only "seem" to have qualia, but what can  
"seem" possibly mean in the absence of qualia? An illusion o

Re: A challenge for Craig

2013-09-30 Thread meekerdb

On 9/30/2013 5:05 AM, Telmo Menezes wrote:

Even under functionalist assumptions, I still find the Turing test to
be misguided because it requires the machine to lie, while a human can
pass it by telling the truth.


Actually Turing already thought of this.  If you read his paper you find that the test is 
not as usually proposed.  Turing's test was whether a person communicating with a 
computer pretending to be a woman and a man pretending to be a woman, would be fooled as 
to which was which.
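
As a rough sketch, the three-party setup described above can be rendered
like this - the judge and respondent objects are hypothetical stand-ins,
not anything specified in Turing's paper:

    def imitation_game(judge, respondents, rounds=10):
        # respondents: dict mapping the anonymous labels "X" and "Y" to two
        # parties, one a woman answering truthfully and one a man (or, in
        # Turing's variant, a machine) pretending to be a woman. The judge
        # sees nothing but the typed transcript.
        transcript = []
        for _ in range(rounds):
            question = judge.ask(transcript)
            for label, respondent in respondents.items():
                transcript.append((label, question, respondent.answer(question)))
        return judge.guess(transcript)  # the judge's verdict: "X" or "Y"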


Brent



Re: A challenge for Craig

2013-09-30 Thread LizR
On 1 October 2013 08:44, meekerdb  wrote:

>  On 9/30/2013 5:05 AM, Telmo Menezes wrote:
>
> Even under functionalist assumptions, I still find the Turing test to
> be misguided because it requires the machine to lie, while a human can
> pass it by telling the truth.
>
>
> Actually Turing already thought of this.  If you read his paper you find
> that the test is not as usually proposed.  Turing's test was whether a
> person communicating with a computer pretending to be a woman and a man
> pretending to be a woman, would be fooled as to which was which.
>

I thought Turing mentioned "The Imitation Game" in which someone tried to
tell the other person's gender without any clues (like being able to hear
their voice, or discussing matters that only a man or woman might be *
expected* to know given the social norms at the time), and then extended
that to involve a computer as one of the participants?

That is, the TT as normally described involves someone trying to tell if
they're talking to a computer or a human being. Are you saying that isn't
how it was meant to be carried out?

I might also say that the above description of the TT (a computer has to
lie...) is also inaccurate, imho. The test is intended to indicate whether
a computer can be a *person*. For example if you were communicating with
HAL in 2001, you might easily mistake it for a man (as shown in the film)
and you would be right to do so, because HAL *is* a person in the story, by
any reasonable criterion. (In fact he's the most human like person in the
film! The astronauts act more like robots than he does most of the time!)

So a computer passing the TT (without hearing its voice or discussing
matters only a human being/computer is likely to know, of course - as
mentioned in the original paper, where the judge asks the testee to
multiply two numbers and it pauses a while, and then makes a mistake,
because the computer isn't allowed to invoke a "calculator function" just
as a human shouldn't use a calculator - not that such things existed at the
time!) shouldn't be considered to be lying. And it should be treated as a
person. I think that was Turing's point.
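
The multiplication example translates almost directly into code. A sketch of
that answer policy - the pause length and the slip rate here are made up;
only the idea (pause, then occasionally err) comes from the paper:

    import random
    import time

    def humanlike_product(a: int, b: int) -> int:
        # Answer like a person, not a calculator: pause at human speed,
        # then occasionally slip by a digit, as Turing's sample dialogue does.
        time.sleep(random.uniform(5.0, 30.0))
        answer = a * b
        if random.random() < 0.1:
            answer += random.choice([-100, -10, 10, 100])
        return answer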



Re: A challenge for Craig

2013-10-01 Thread Telmo Menezes
On Mon, Sep 30, 2013 at 3:44 PM, Craig Weinberg  wrote:
>
>
> On Monday, September 30, 2013 6:12:45 AM UTC-4, telmo_menezes wrote:
>>
>> On Fri, Sep 27, 2013 at 7:49 PM, Craig Weinberg 
>> wrote:
>> >
>> >
>> > On Friday, September 27, 2013 8:00:11 AM UTC-4, telmo_menezes wrote:
>> >>
>> >> On Thu, Sep 26, 2013 at 9:28 PM, Craig Weinberg 
>> >> wrote:
>> >> >
>> >> >
>> >> > On Thursday, September 26, 2013 11:49:29 AM UTC-4, telmo_menezes
>> >> > wrote:
>> >> >>
>> >> >> On Thu, Sep 26, 2013 at 2:38 PM, Craig Weinberg 
>> >> >> wrote:
>> >> >> >
>> >> >> >
>> >> >> > On Thursday, September 26, 2013 6:17:04 AM UTC-4, telmo_menezes
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> Hi Craig (and all),
>> >> >> >>
>> >> >> >> Now that I have a better understanding of your ideas, I would
>> >> >> >> like
>> >> >> >> to
>> >> >> >> confront you with a thought experiment. Some of the stuff you say
>> >> >> >> looks completely esoteric to me, so I imagine there are three
>> >> >> >> possibilities: either you are significantly more intelligent than
>> >> >> >> me
>> >> >> >> or you're a bit crazy, or both. I'm not joking, I don't know.
>> >> >> >>
>> >> >> >> But I would like to focus on sensory participation as the
>> >> >> >> fundamental
>> >> >> >> stuff of reality and your claim that strong AI is impossible
>> >> >> >> because
>> >> >> >> the machines we build are just Frankensteins, in a sense. If I
>> >> >> >> understand correctly, you still believe these machines have
>> >> >> >> sensory
>> >> >> >> participation just because they exist, but not in the sense that
>> >> >> >> they
>> >> >> >> could emulate our human experiences. They have the sensory
>> >> >> >> participation level of the stuff they're made of and nothing
>> >> >> >> else.
>> >> >> >> Right?
>> >> >> >
>> >> >> >
>> >> >> > Not exactly. My view is that there is only sensory participation
>> >> >> > on
>> >> >> > the
>> >> >> > level of what has naturally evolved.
>> >> >>
>> >> >> This sounds a bit like vitalism. What's so special about natural
>> >> >> evolution that can't be captured otherwise?
>> >> >
>> >> >
>> >> > It's not about life or nature being special, it's about recognizing
>> >> > that
>> >> > nature is an expression of experience, and that experience can't be
>> >> > substituted.
>> >>
>> >> Ok. How did you arrive at this belief? How can you believe this
>> >> without proposing some mechanism by which it happens? Or do you
>> >> propose such a thing?
>> >
>> >
>> > Mechanisms are functions of time, but experience would be more primitive
>> > than time in this view. To have a mechanism, there must already be some
>> > experience of events, memory, expectation, etc.
>>
>> But we know that universal machines can be built with very little:
>> simple cellular automata, arithmetics, balls colliding and so on. You
>> can then argue that some substrate is necessary for this computation,
>
>
> I don't think that I have to argue it, it's a fact that we cannot construct
> universal machines out of uncontrollable materials. We can't use uncontained
> gases or live hamsters to do our computations for us. When we build
> machines, particularly electronic computers, materials are refined to a
> pristine degree. Impurities must be removed so that only the most reliable
> and consistent qualities of matter are selected for.

Yes, but if we are entities living as part of a computation, that is not
surprising. We have to carve a medium somehow to perform our own
computations. It doesn't really tell you anything about the meta-level
computation that might be taking place.
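
The "very little" in the quoted claim can be made concrete. Elementary
cellular automaton Rule 110 - one line of update logic over a row of bits -
is known to be Turing-complete (Matthew Cook's result, published 2004). A
minimal sketch in Python:

    RULE = 110

    def step(cells):
        # Each cell's next state is the RULE bit indexed by the 3-bit
        # neighbourhood (left, self, right), with wrap-around at the edges.
        n = len(cells)
        return [(RULE >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 40 + [1]  # start from a single live cell
    for _ in range(10):
        print("".join(".#"[c] for c in row))
        row = step(row)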

>> but it is quite clear that what is necessary to have the possibility
>> of a full blown human zombie is theoretically very little. This does
>> not refute your main thesis, of course, but I think it does refute
>> that experience of events, memory and expectations are necessary for
>> mechanism.
>
>
> The necessity of experience is easy to refute in theory, but if we do that,
> we must at the same time justify the existence of experience on some
> arbitrary level of description of matter, which I don't think can be done
> convincingly.

But you can doubt matter. What then?

> We know that we can 'play possum' more easily than a dead
> possum can become a zombie. This is not to suggest that inanimate objects
> are pretending to be inanimate, but that it is the constraints of whatever
> kinds of sensation we have access to which hide the scales on which
> animation is taking place (too slow, too fast, too large, too small, too
> unfamiliar = invisible or inanimate).
>
>>
>>
>> >Think of the mechanism by
>> > which you change your attention or open your eyes. Sure, there are
>> > mechanisms that we can point to in the body, but what mechanism do *you*
>> > use
>> > to control yourself?
>>
>> Ok, I know what you mean. Yes, I find this mysterious.
>
>
> So if we scale down that mystery to every level of the universe (which we
> would sort of have to since our self control involves billions of

Re: A challenge for Craig

2013-10-01 Thread Telmo Menezes
Hi Liz,

On Tue, Oct 1, 2013 at 12:30 AM, LizR  wrote:
> On 1 October 2013 08:44, meekerdb  wrote:
>>
>> On 9/30/2013 5:05 AM, Telmo Menezes wrote:
>>
>> Even under functionalist assumptions, I still find the Turing test to
>> be misguided because it requires the machine to lie, while a human can
>> pass it by telling the truth.
>>
>>
>> Actually Turing already thought of this.  If you read his paper you find
>> that the test is not as usually proposed.  Turing's test was whether a
>> person communicating with a computer pretending to be a woman and a man
>> pretending to be a woman, would be fooled as to which was which.
>
>
> I thought Turing mentioned "The Imitation Game" in which someone tried to
> tell the other person's gender without any clues (like being able to hear
> their voice, or discussing matters that only a man or woman might be
> expected to know given the social norms at the time), and then extended that
> to involve a computer as one of the participants?
>
> That is, the TT as normally described involves someone trying to tell if
> they're talking to a computer or a human being. Are you saying that isn't
> how it was meant to be carried out?
>
> I might also say that the above description of the TT (a computer has to
> lie...) is also inaccurate, imho. The test is intended to indicate whether
> a computer can be a person. For example if you were communicating with HAL
> in 2001, you might easily mistake it for a man (as shown in the film)

Yes, I agree with the spirit of the test and with what you say. I'm
just claiming that, in practice, none of the versions of the test
work. HAL would very quickly fail any of the formulations of the TT
unless he lied. Not just a small lie either, but a major lie,
involving him pretending that he has a human body, human experiences
and so on. He's a "person" but he's not human.

But if you chatted with HAL for a while, fully knowing that it is a
computer, you would be much more reluctant to terminate it than you
are to kill your browser or whatever program you are using to read
this email. This is, in fact, one of the themes in 2001.

> and
> you would be right to do so, because HAL *is* a person in the story, by any
> reasonable criterion. (In fact he's the most human like person in the film!
> The astronauts act more like robots than he does most of the time!)

No doubt. I think we witness HAL becoming conscious and thus acquiring
the capacity for violence, but that's my interpretation. One of the
astronauts, on the other hand, ends up becoming something else. A lot
of people see the final scene of the movie as beautiful and inspiring,
I see it as possibly horrendous, but this is getting way off track!

Btw, I'm sorry if I'm being rude and not replying to everyone as I
should, but the current volume is more than I can handle. I'm sure I'm
not the only one experiencing the same problem.

> So a computer passing the TT (without hearing its voice or discussing
> matters only a human being/computer is likely to know, of course - as
> mentioned in the original paper, where the judge asks the testee to multiply
> two numbers and it pauses a while, and then makes a mistake, because the
> computer isn't allowed to invoke a "calculator function" just as a human
> shouldn't use a calculator - not that such things existed at the time!)
> shouldn't be considered to be lying. And it should be treated as a person. I
> think that was Turing's point.
>



Re: A challenge for Craig

2013-10-01 Thread Bruno Marchal


On 30 Sep 2013, at 14:05, Telmo Menezes wrote :



The comp assumption that computations have
qualia hidden inside them is not much of an answer either in my view.


I have the same problem.


The solution is in the fact that all machines have that problem. More  
exactly: all persons capable of surviving a digital substitution must  
have that and similar problems. It is a sort of meta-solution  
explaining that we are indeed confronted with something which is simply  
totally unexplainable.


Note also that the expression "computations have qualia" can be  
misleading. A computation has no qualia, strictly speaking. Only a  
person supported by an infinity of computation can be said to have  
qualia, or to live qualia. Then the math of self-reference can be used  
to explain why the qualia have to escape the pure third person type of  
explanations.


A good exercise consists in trying to think about what could look like an  
explanation of what a qualia is. Even without comp, that will seem  
impossible, and that explains why some people, like Craig, estimate  
that we have to take them as primitive. Here comp explains why there  
are things like qualia, which can emerge only in the first person  
points of view, and admit irreducible components.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-01 Thread Bruno Marchal


On 30 Sep 2013, at 14:00, Pierz wrote:

Yes indeed, and it is compelling. Fading qualia and all that. It's  
the absurdity of philosophical zombies. Those arguments did have an  
influence on my thinking. On the other hand the idea that we *can*  
replicate all the brain's outputs remains an article of faith.


OK. That is behavioral mechanism (Be-Me), and I agree that it asks an  
act of faith, despite the strong evidence that nature already exploits  
this.
Comp asks for a much bigger act of faith, as you have to believe that  
you survive through the duplication. It is logically conceivable that  
we can replicate ourselves, but fail to survive through that  
replication. Comp -> Be-Me, but Be-Me does not imply comp.





I remember that almost the first thing I read in Dennett's book was  
his claim that rich, detailed hallucinations (perceptions in the  
absence of physical stimuli) are impossible. Dennett is either wrong  
on this - or a vast body of research into hallucinogens is. Not to  
mention NDEs and OBEs. Dennett may be right and these reports may  
all be mistakes and lies, but I doubt it. If he is wrong, then his  
arguments become a compelling case in quite the opposite sense to  
what he intended: the brain not as a manufacturer of consciousness  
but as something more like a receptor.


Yes, the brain seems to be (with comp) more a filter of consciousness  
than a producer of consciousness.




My instinct tells me we don't know enough about the brain or  
consciousness to be certain of any conclusions derived from logic  
alone.


In all cases, logic alone is too poor a device to delve into the  
matter. But with comp, arithmetic (and its internal meta-arithmetic)  
is enough, especially for the negative part (the mystery) which has to  
remain a mystery in all possible mechanical extensions of the machine.  
That is what comp explains best: that there must be a mystery.  
Abstract machines like PA and ZF can be said to know that already.


Bruno



We may be like Newtonians arguing cosmology without the benefit of  
QM and relativity.


On Monday, September 30, 2013 2:08:23 PM UTC+10, stathisp wrote:
On 30 September 2013 11:36, Pierz  wrote:
> If I might just butt in (said the barman)...
>
> It seems to me that Craig's insistence that "nothing is Turing  
emulable,
> only the measurements are" expresses a different ontological  
assumption from
> the one that computationalists take for granted. It's evident that  
if we
> make a flight simulator, we will never leave the ground,  
regardless of the
> verisimilitude of the simulation. So why would a simulated  
consciousness be

> expected to actually be conscious? Because of different ontological
> assumptions about matter and consciousness. Science has given up  
on the
> notion of consciousness as having "being" the same way that matter  
is
> assumed to. Because consciousness has no place in an objective  
description
> of the world (i.e., one which is defined purely in terms of the  
measurable),
> contemporary scientific thinking reduces consciousness to those  
apparent
> behavioural outputs of consciousness which *can* be measured. This  
is

> functionalism. Because we can't measure the presence or absence of
> awareness, functionalism gives up on the attempt and presents the  
functional
> outputs as the only things that are "really real". Hence we get  
the Turing

> test. If we can't tell the difference, the simulator is no longer a
> simulator: it *is* the thing simulated. This conclusion is shored  
up by the

> apparently water-tight argument that the brain is made of atoms and
> molecules which are Turing emulable (even if it would take the  
lifetime of
> the universe to simulate the behaviour of a protein in a complex  
cellular
> environment, but oh well, we can ignore quantum effects because  
it's too hot
> in there anyway and just fast forward to the neuronal level,  
right?). It's
> also supported by the objectifying mental habit of people  
conditioned
> through years of scientific training. It becomes so natural to  
step into the
> god-level third person perspective that the elision of private  
experience
> starts to seem like a small matter, and a step that one has no  
choice but to

> make.
>
> Of course, the alternative does present problems of its own! Craig
> frequently seems to slip into a kind of naturalism that would have  
it that

> brains possess soft, non-mechanical sense because they are soft and
> non-mechanical seeming. They can't be machines because they don't  
have
> cables and transistors. "Wetware" can't possibly be hardware. A  
lot of his
> arguments seem to be along those lines — the refusal to accept  
abstractions
> which others accept, as telmo aptly puts it. He claims to "solve  
the hard
> problem of consciousness" but the solution involves manoeuvres  
like "putting

> the whole universe into the explanatory gap" between objective and
> subjective: hardly illuminating! I get irr

Re: A challenge for Craig

2013-10-01 Thread Pierz
Maybe. It would be a lot more profound if we definitely *could* reproduce the 
brain's behaviour. The devil is in the detail as they say. But a challenge to 
Chalmers' position has occurred to me. It seems to me that Bruno has 
convincingly argued that *if* comp holds, then consciousness supervenes on the 
computation, not on the physical matter. But functionalism suggests that what 
counts is the output, not the manner in which it is arrived at. That is to say, 
the brain or whatever neural subunit or computer is doing the processing is a 
black box. You input something and then read the output, but the intervening 
steps don't matter. Consider what this might mean in terms of a brain. Let's 
say a vastly advanced alien species comes to earth. It looks at our puny little 
brains and decides to make one to fool us. This constructed person/brain 
receives normal conversational input and outputs conversation that it knows 
will perfectly mimic a human being. But in fact the computer doing this 
processing is vastly superior to the human brain. It's like a modern PC 
emulating a TRS-80, except much more so. When it computes/thinks up a response, 
it draws on a vast amount of knowledge, intelligence and creativity and 
accesses qualia undreamed of by a human. Yet its response will completely fool 
any normal human and will pass Turing tests till the cows come home. What this 
thought experiment shows is that, while half-qualia may be absurd, it most 
certainly is possible to reproduce the outputs of a brain without replicating 
its qualia. It might have completely different qualia, just as a very good 
actor's emotions can't be distinguished from the real thing, even though his or 
her internal experience is quite different. And if qualia can be quite 
different even though the functional outputs are the same, this does seem to 
leave functionalism in something of a quandary. All we can say is that there 
must be some kind of qualia occurring, rather a different result from what 
Chalmers is claiming. When we extend this type of scenario to artificial 
neurons or partial brain prostheses as in Chalmers' paper, we quickly run up 
against perplexing problems. Imagine the advanced alien provides these 
prostheses. It takes the same inputs and generates the same correct outputs, 
but it processes those inputs within a much vaster, more complex system. Does 
the brain utilizing this advanced prosthesis experience a kind of expanded 
consciousness because of this, without that difference being detectable? Or do 
the qualia remain somehow confined to the prosthesis (whatever that means)? 
These crazy quandaries suggest to me that basically, we don't know shit.



Re: A challenge for Craig

2013-10-01 Thread Pierz
Sorry, this list behaves strangely on my iPad. I can't reply to individual 
posts. The post above was meant to be a reply to stathis and his remark that 
"it is possible to prove that it is impossible to replicate its observable 
behaviour (a brain's) without also replicating its consciousness. This is a 
very profound result." Maybe someone can show me why I'm wrong, but I think my 
argument above refutes that "proof".



Re: A challenge for Craig

2013-10-01 Thread Bruno Marchal


On 01 Oct 2013, at 15:31, Pierz wrote:

Maybe. It would be a lot more profound if we definitely *could*  
reproduce the brain's behaviour. The devil is in the detail as they  
say. But a challenge to Chalmers' position has occurred to me. It  
seems to me that Bruno has convincingly argued that *if* comp holds,  
then consciousness supervenes on the computation, not on the  
physical matter. But functionalism suggests that what counts is the  
output, not the manner in which it as arrived at. That is to say,  
the brain or whatever neural subunit or computer is doing the  
processing is a black box. You input something and then read the  
output, but the intervening steps don't matter. Consider what this  
might mean in terms of a brain.



That's not clear to me. The question is "output of what". If it is the  
entire subject, this is more behaviorism than functionalism.
Putnam's functionalism makes clear that we have to take the output of  
the neurons into account.
Comp is functionalism, but with the idea that we don't know the level  
of substitution, so it might be that we have to take into account the  
output of the gluons in our atoms (so comp makes clear that it only  
asks for the existence of a level of substitution, and then shows that  
no machine can know for sure its subst. level, making Putnam's sort of  
functionalism a bit fuzzy).





Let's say a vastly advanced alien species comes to earth. It looks  
at our puny little brains and decides to make one to fool us. This  
constructed person/brain receives normal conversational input and  
outputs conversation that it knows will perfectly mimic a human  
being. But in fact the computer doing this processing is vastly  
superior to the human brain. It's like a modern PC emulating a  
TRS-80, except much more so. When it computes/thinks up a response,  
it draws on a vast amount of knowledge, intelligence and creativity  
and accesses qualia undreamed of by a human. Yet its response will  
completely fool any normal human and will pass Turing tests till the  
cows come home. What this thought experiment shows is that, while  
half-qualia may be absurd, it most certainly is possible to  
reproduce the outputs of a brain without replicating its qualia. It  
might have completely different qualia, just as a very good actor's  
emotions can't be distinguished from the real thing, even though his  
or her internal experience is quite different. And if qualia can be  
quite different even though the functional outputs are the same,  
this does seem to leave functionalism in something of a quandary.  
All we can say is that there must be some kind of qualia occurring,  
rather a different result from what Chalmers is claiming. When we  
extend this type of scenario to artificial neurons or partial brain  
prostheses as in Chalmers' paper, we quickly run up against  
perplexing problems. Imagine the advanced alien provides these  
prostheses. It takes the same inputs and generates the same correct  
outputs, but it processes those inputs within a much vaster, more  
complex system. Does the brain utilizing this advanced prosthesis  
experience a kind of expanded consciousness because of this, without  
that difference being detectable? Or do the qualia remain somehow  
confined to the prosthesis (whatever that means)? These crazy  
quandaries suggest to me that basically, we don't know shit.


Hmm, I am not convinced. Chalmers' argument is that to get a  
philosophical zombie, the fading argument shows that you have to go  
through half-qualia, which is absurd. His goal (here) is to show that  
"no qualia" is absurd.


That the qualia can be different is known in the qualia literature,  
and is a big open problem per se. But Chalmers argues only that "no  
qualia" is absurd, indeed because it would needs some absurd notion of  
intermediate half qualia.


Maybe I miss a point. Stathis can clarify this further.

Eventually the qualia are determined by infinitely many number  
relations, and a brain filters them. It does not create them, just as no  
machine can create PI, only "re-compute" it, somehow. The analogy here  
breaks down, as qualia are purely first person notions, which explains  
why they are distributed on the whole universal dovetailing (sigma_1  
arithmetic).



Bruno






http://iridia.ulb.ac.be/~marchal/




Re: A challenge for Craig

2013-10-01 Thread Telmo Menezes
On Tue, Oct 1, 2013 at 1:13 PM, Bruno Marchal  wrote:
>
> On 30 Sep 2013, at 14:05, Telmo Menezes wrote :
>
>
> The comp assumption that computations have
>
> qualia hidden inside them is not much of an answer either in my view.
>
>
> I have the same problem.
>
>
> The solution is in the fact that all machines have that problem. More
> exactly: all persons capable of surviving a digital substitution must have
> that and similar problems. It is a sort of meta-solution explaining that we
> are indeed confronted with something which is simply totally unexplainable.
>
> Note also that the expression "computations have qualia" can be misleading. A
> computation has no qualia, strictly speaking. Only a person supported by an
> infinity of computation can be said to have qualia, or to live qualia. Then
> the math of self-reference can be used to explain why the qualia have to
> escape the pure third person type of explanations.

Thanks Bruno. Is there some formal proof of this? Can it be followed
by a mere mortal?

> A good exercise consists in trying to think about what could look like an
> explanation of what a qualia is. Even without comp, that will seem
> impossible, and that explains why some people, like Craig, estimate that we
> have to take them as primitive. Here comp explains why there are things
> like qualia, which can emerge only in the first person points of view, and
> admit irreducible components.
>
> Bruno
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>



Re: A challenge for Craig

2013-10-01 Thread Bruno Marchal


On 01 Oct 2013, at 17:09, Telmo Menezes wrote:

On Tue, Oct 1, 2013 at 1:13 PM, Bruno Marchal   
wrote:


On 30 Sep 2013, at 14:05, Telmo Menezes wrote :


The comp assumption that computations have

qualia hidden inside them is not much of an answer either in my view.


I have the same problem.


The solution is in the fact that all machines have that problem. More
exactly: all persons capable of surviving a digital substitution  
must have
that and similar problems. It is a sort of meta-solution explaining  
that we
are indeed confronted with something which is simply totally  
unexplainable.


Note also that the expression "computations have qualia" can be  
misleading. A
computation has no qualia, strictly speaking. Only a person  
supported by an
infinity of computation can be said to have qualia, or to live  
qualia. Then
the math of self-reference can be used to explain why the qualia  
have to

escape the pure third person type of explanations.


Thanks Bruno. Is there some formal proof of this? Can it be followed
by a mere mortal?


It follows from comp, the classical definition of knowledge (the  
agreement that the modal logic S4 defines an axiomatic of knowledge)  
and then from Solovay's theorem, and the fact that


(Bp <-> Bp & p) belongs to G* minus G.
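
For readers who want the formal skeleton, a compact restatement in standard
provability-logic notation - a paraphrase of the cited facts, not a quote
(B is the provability predicate, written \Box below; G is Löb's logic GL,
and G* is Solovay's logic of the true provability sentences):

    \[ \mathrm{G}:\quad \Box(\Box p \to p) \to \Box p \qquad \text{(L\"ob's axiom)} \]
    \[ \mathrm{G}^{*} \vdash \Box p \leftrightarrow (\Box p \land p), \qquad
       \mathrm{G} \nvdash \Box p \leftrightarrow (\Box p \land p) \]

So the Theaetetus variant Kp := Bp & p provably coincides with Bp only "from
the outside" (in G*), while the logic of K validates the S4 axioms of
knowledge mentioned above.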

 It is explained in detail in the long version "conscience et  
mécanisme", and with less detail in the short Lille thesis (that you  
have). It is also explained in the second part of sane04.


Formally a key text is the S4 provability chapter in Boolos 79 and 93,  
and the articles referred to.


We can come back on this. It is the heart of the Arithmeticalization  
of the UDA. It *is* probably very naive, and I was sure this would be  
refuted, but it is not, yet.


I think it can be understood by mere mortals, given enough time and  
motivation.


For the sigma_1 restriction, you also need a good understanding of  
Gödel and Mechanism. One of the best is the book by Judson Webb.  
Torkel Franzen's two books are quite good also. If you read French,  
I summarize a big part of the literature on that in "conscience &  
mécanisme".


http://iridia.ulb.ac.be/~marchal/bxlthesis/consciencemecanisme.html


Bruno





A good exercise consists in trying to think about what could look like an
explanation of what a qualia is. Even without comp, that will seem
impossible, and that explains why some people, like Craig, estimate  
that we
have to take them as primitive. Here comp explains why there are  
things
like qualia, which can emerge only in the first person points of  
view, and

admit irreducible components.

Bruno




http://iridia.ulb.ac.be/~marchal/





http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-01 Thread Craig Weinberg
I had a similar thought about a chameleon brain (I call it a p-Zelig instead 
of a p-zombie), which would impersonate behaviors of whatever environment 
it was placed into. Unlike a philosophical zombie, which would have no 
personal qualia but seem like it does from the outside, the chameleon brain 
would explicitly forbid having any particular qualia, since its entire 
processing would be devoted to computing cross-modal generalities. It is 
intentionally not trying to be a person, it is just trying to mirror 
anything - clouds, wolves, dandelions, whatever, according to the 
measurements it takes using a large variety of peripheral detectors.

On Tuesday, October 1, 2013 9:31:24 AM UTC-4, Pierz wrote:
>
> Maybe. It would be a lot more profound if we definitely *could* reproduce 
> the brain's behaviour. The devil is in the detail as they say. But a 
> challenge to Chalmers' position has occurred to me. It seems to me that 
> Bruno has convincingly argued that *if* comp holds, then consciousness 
> supervenes on the computation, not on the physical matter. But 
> functionalism suggests that what counts is the output, not the manner in 
> which it is arrived at. That is to say, the brain or whatever neural 
> subunit or computer is doing the processing is a black box. You input 
> something and then read the output, but the intervening steps don't matter. 
> Consider what this might mean in terms of a brain. Let's say a vastly 
> advanced alien species comes to earth. It looks at our puny little brains 
> and decides to make one to fool us. This constructed person/brain receives 
> normal conversational input and outputs conversation that it knows will 
> perfectly mimic a human being. But in fact the computer doing this 
> processing is vastly superior to the human brain. It's like a modern PC 
> emulating a TRS-80, except much more so. When it computes/thinks up a 
> response, it draws on a vast amount of knowledge, intelligence and 
> creativity and accesses qualia undreamed of by a human. Yet its response 
> will completely fool any normal human and will pass Turing tests till the 
> cows come home. What this thought experiment shows is that, while 
> half-qualia may be absurd, it most certainly is possible to reproduce the 
> outputs of a brain without replicating its qualia. It might have completely 
> different qualia, just as a very good actor's emotions can't be 
> distinguished from the real thing, even though his or her internal 
> experience is quite different. And if qualia can be quite different even 
> though the functional outputs are the same, this does seem to leave 
> functionalism in something of a quandary. All we can say is that there must 
> be some kind of qualia occurring, rather a different result from what 
> Chalmers is claiming. When we extend this type of scenario to artificial 
> neurons or partial brain prostheses as in Chalmers' paper, we quickly run up 
> against perplexing problems. Imagine the advanced alien provides these 
> prostheses. It takes the same inputs and generates the same correct 
> outputs, but it processes those inputs within a much vaster, more complex 
> system. Does the brain utilizing this advanced prosthesis experience a kind 
> of expanded consciousness because of this, without that difference being 
> detectable? Or do the qualia remain somehow confined to the prosthesis 
> (whatever that means)? These crazy quandaries suggest to me that basically, 
> we don't know shit.



Re: A challenge for Craig

2013-10-01 Thread Craig Weinberg


On Tuesday, October 1, 2013 7:13:17 AM UTC-4, Bruno Marchal wrote:
>
>
> On 30 Sep 2013, at 14:05, Telmo Menezes wrote :
>
>
> The comp assumption that computations have
>
> qualia hidden inside them is not much of an answer either in my view.
>
>
> I have the same problem.
>
>
> The solution is in the fact that all machines have that problem. More 
> exactly: all persons capable of surviving a digital substitution must have 
> that and similar problems. It is a sort of meta-solution explaining that we 
> are indeed confronted with something which is simply totally unexplainable.
>
> Note also that the expression "computations have qualia" can be misleading. 
> A computation has no qualia, strictly speaking. Only a person supported by 
> an infinity of computation can be said to have qualia, or to live qualia. 
> Then the math of self-reference can be used to explain why the qualia have 
> to escape the pure third person type of explanations.
>
> A good exercise consists in trying to think about what could look like an 
> explanation of what a qualia is. Even without comp, that will seem 
> impossible, and that explains why some people, like Craig, estimate that we 
> have to take them as primitive. Here comp explains why there are things 
> like qualia, which can emerge only in the first person points of view, and 
> admit irreducible components. 
>

Explaining why X is local to a certain perspective, or why X is irreducible 
does not explain why X is an aesthetic presence though. You can have 
numerical expressions which are irreducible and local to a machine without 
there being any such thing as flavor or color. As long as we are saying 
that both qualia and quanta are real, I don't see any advantage of making 
qualia supervene on quanta instead of the other way around, especially when 
we can understand that the nature of counting is to create figurative 
reductions which are nameless and homeless. We can't turn a wavelength into 
a color without color vision and real illumination, but we can turn color 
into a wavelength simply by discarding all of the actual color experience 
and looking at general patterns within optics analytically (abstractly). 
The irreducibility and 1p locality are hints, but they are neither 
necessary nor sufficient to access any specific qualia. I really don't 
think that I am missing something here. I can easily see it the other way 
around, I just don't think that it is true of the universe that we live in. 
Yes, it makes sense why a machine would not be able to tell that its 
experience is the result of a machine, but it doesn't make sense that Santa 
Claus would make that experience into tongues that taste, as distinct 
from eyes that see. All that matters is information transfer, so that 
difference would not engender any qualia, just clever addressing.

Thanks,
Craig
 

>
> Bruno
>
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: A challenge for Craig

2013-10-01 Thread meekerdb

On 10/1/2013 4:13 AM, Bruno Marchal wrote:
Note also that the expression "computations have qualia" can be misleading. A computation 
has no qualia, strictly speaking. Only a person supported by an infinity of computations 
can be said to have qualia, or to live qualia.


Why an infinity of computations??  That would preclude my building an intelligent robot 
having qualia, since its computations would always be finite.  And I doubt there is room 
in my head for infinite computations - certainly not digital ones.


Brent



Re: A challenge for Craig

2013-10-01 Thread Bruno Marchal


On 01 Oct 2013, at 18:10, Craig Weinberg wrote:


Bruno's UDA eventually removes the requirement for a copy being
primitively real. That's one of the things that impressed me about the
argument. I think your position requires that you find a way to refute
the UDA.

I think that it does so by taking the need for a concrete universe  
for granted. It's a leak in UDA's philosophical vacuum. We can  
explain away the reality of realism, but what of the expectation of  
realistic qualities? Such an expectation would surely be ground to  
dust as each dust-mite's belch demands an infinity of universes to  
house its permutations and interactions. Where would a quality of  
realism arise from? Why is it something we need to explain away?


You are the one treating my son-in-law like he was a zombie. As if  
you knew, and could put away what he might be feeling.


Comp explains away matter, perhaps, but not consciousness and mind. UDA  
starts from it, and AUDA recovers it by listening to the machines, and  
already not treating them as zombies.


You are the one having a reductionist view of what machines can and  
cannot do, and you seem to ignore that their relative representations are  
only the gates through which consciousness can differentiate. Our  
first persons are, mathematically, neither computable nor duplicable from  
our perspective, but we are 3-multiplied in the extreme, and the waves  
come from that competition, below our sharable substitution level.


And there are math tools here, so we can and will progress.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-01 Thread Pierz


On Wednesday, October 2, 2013 12:46:17 AM UTC+10, Bruno Marchal wrote:
>
>
> On 01 Oct 2013, at 15:31, Pierz wrote: 
>
> > Maybe. It would be a lot more profound if we definitely *could*   
> > reproduce the brain's behaviour. The devil is in the detail as they   
> > say. But a challenge to Chalmers' position has occurred to me. It   
> > seems to me that Bruno has convincingly argued that *if* comp holds,   
> > then consciousness supervenes on the computation, not on the   
> > physical matter. But functionalism suggests that what counts is the   
> > output, not the manner in which it is arrived at. That is to say,   
> > the brain or whatever neural subunit or computer is doing the   
> > processing is a black box. You input something and then read the   
> > output, but the intervening steps don't matter. Consider what this   
> > might mean in terms of a brain. 
>
>
> That's not clear to me. The question is "output of what". If it is the   
> entire subject, this is more behaviorism than functionalism. 
> Putnam's functionalism makes clear that we have to take the output of   
> the neurons into account. 
> Comp is functionalism, but with the idea that we don't know the level   
> of substitution, so it might be that we have to take into account the   
> output of the gluons in our atoms (so comp makes clear that it only   
> asks for the existence of a level of substitution, and then shows that   
> no machine can know for sure its subst. level, making Putnam's sort of   
> functionalism a bit fuzzy). 
>
I was going on Stathis's post. He stated that reproducing the brain's 
functions meant reproducing the qualia, but I refuted that (I think). 

>
>
>
> > Let's say a vastly advanced alien species comes to earth. It looks   
> > at our puny little brains and decides to make one to fool us. This   
> > constructed person/brain receives normal conversational input and   
> > outputs conversation that it knows will perfectly mimic a human   
> > being. But in fact the computer doing this processing is vastly   
> > superior to the human brain. It's like a modern PC emulating a   
> > TRS-80, except much more so. When it computes/thinks up a response,   
> > it draws on a vast amount of knowledge, intelligence and creativity   
> > and accesses qualia undreamed of by a human. Yet its response will   
> > completely fool any normal human and will pass Turing tests till the   
> > cows come home. What this thought experiment shows is that, while   
> > half-qualia may be absurd, it most certainly is possible to   
> > reproduce the outputs of a brain without replicating its qualia. It   
> > might have completely different qualia, just as a very good actor's   
> > emotions can't be distinguished from the real thing, even though his   
> > or her internal experience is quite different. And if qualia can be   
> > quite different even though the functional outputs are the same,   
> > this does seem to leave functionalism in something of a quandary.   
> > All we can say is that there must be some kind of qualia occurring,   
> > rather a different result from what Chalmers is claiming. When we   
> > extend this type of scenario to artificial neurons or partial brain   
> > prostheses as in Chalmers' paper, we quickly run up against   
> > perplexing problems. Imagine the advanced alien provides these   
> > prostheses. It takes the same inputs and generates the same correct   
> > outputs, but it processes those inputs within a much vaster, more   
> > complex system. Does the brain utilizing this advanced prosthesis   
> > experience a kind of expanded consciousness because of this, without   
> > that difference being detectable? Or do the qualia remain somehow   
> > confined to the prosthesis (whatever that means)? These crazy   
> > quandaries suggest to me that basically, we don't know shit. 
>
> Hmm, I am not convinced. "Chalmers' argument" is that to get a   
> philosophical zombie, the fading argument shows that you have to go   
> through half-qualia, which is absurd. His goal (here) is to show that   
> "no qualia" is absurd. 
>
> That the qualia can be different is known in the qualia literature,   
> and is a big open problem per se. But Chalmers argues only that "no   
> qualia" is absurd, indeed because it would need some absurd notion of   
> intermediate half-qualia. 
>
> Maybe I miss a point. Stathis can clarify this further. 
>

Yes, I understand that to be Chalmers' main point. Although, if the qualia 
can be different, it does present issues - how much and in what way can it 
vary? I'm curious what the literature has to say about that. And if 
functionalism means reproducing more than the mere functional output of a 
system, if it potentially means replication down to the elementary 
particles and possibly their quantum entanglements, then duplication 
becomes impossible, not merely technically but in principle. That seems 
against the whole point of functionalism - as the idea of "function" is 
reduced to something almost meaningless.

Re: A challenge for Craig

2013-10-01 Thread meekerdb

On 10/1/2013 9:56 PM, Pierz wrote:
Yes, I understand that to be Chalmers' main point. Although, if the qualia can be 
different, it does present issues - how much and in what way can it vary? 


Yes, that's a question that interests me because I want to be able to build intelligent 
machines and so I need to know what qualia they will have, if any.  I think it will depend 
on their sensors and on their values/goals.  If I build a very intelligent Mars Rover, 
capable of learning and reasoning, with a goal of discovering whether there was once life 
on Mars; then I expect it will experience pleasure in finding evidence regarding this.  
But no matter how smart I make it, it won't experience lust.



I'm curious what the literature has to say about that. And if functionalism means 
reproducing more than the mere functional output of a system, if it potentially means 
replication down to the elementary particles and possibly their quantum entanglements, 
then duplication becomes impossible, not merely technically but in principle. That seems 
against the whole point of functionalism - as the idea of "function" is reduced to 
something almost meaningless.


I think functionalism must be confined to the classical functions, discounting the quantum 
level effects.  But it must include some behavior that is almost entirely internal - e.g. 
planning, imagining.  Excluding quantum entanglements isn't arbitrary; there cannot have 
been any evolution of goals and values based on quantum entanglement (beyond the 
statistical effects that produce decoherence and quasi-classical behavior).


Brent



Re: A challenge for Craig

2013-10-01 Thread Pierz


On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote:
>
> On 10/1/2013 9:56 PM, Pierz wrote: 
> > Yes, I understand that to be Chalmers' main point. Although, if the 
> qualia can be 
> > different, it does present issues - how much and in what way can it 
> vary? 
>
> Yes, that's a question that interests me because I want to be able to 
> build intelligent 
> machines and so I need to know what qualia they will have, if any.  I 
> think it will depend 
> on their sensors and on their values/goals.  If I build a very intelligent 
> Mars Rover, 
> capable of learning and reasoning, with a goal of discovering whether 
> there was once life 
> on Mars; then I expect it will experience pleasure in finding evidence 
> regarding this.   
> But no matter how smart I make it, it won't experience lust. 
>
> "Reasoning" being what exactly? The ability to circumnavigate an obstacle 
for instance? There are no "rewards" in an algorithm. There are just paths 
which do or don't get followed depending on inputs. Sure, the argument that 
there must be qualia in a sufficiently sophisticated computer seems 
compelling. But the argument that there can't be seems equally so. As a 
programmer I have zero expectation that the computer I am programming will 
feel pleasure or suffering. It's just as happy to throw an exception as it 
is to complete its assigned task. *I* am the one who experiences pain when 
it hits an error! I just can't conceive of the magical point at which the 
computer goes from total indifference to giving a damn. That's the point 
Craig keeps pushing and which I agree with. Something is missing from our 
understanding.
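
To make that concrete, here is a minimal sketch (the toy world, the 
numbers, and all names are invented for this example) of a standard 
Q-learning loop in Python. The "reward" is literally just a scalar that 
biases which paths get followed later; nothing in the code 
distinguishes pleasure from an arbitrary bookkeeping label:

    import random

    STATES = range(5)      # toy world: positions 0..4
    ACTIONS = (-1, +1)     # step left or right
    GOAL = 4               # entering state 4 yields "reward" 1.0

    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned values

    def step(s, a):
        s2 = min(max(s + a, 0), 4)
        return s2, (1.0 if s2 == GOAL else 0.0)  # the reward: a number

    for episode in range(200):
        s = 0
        while s != GOAL:
            # mostly greedy path selection, with some exploration
            a = random.choice(ACTIONS) if random.random() < 0.3 \
                else max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            # temporal-difference update: the scalar r shifts q-values,
            # which in turn shift which branch gets taken next time
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS)
                                - q[(s, a)])
            s = s2

Whether such a loop "gives a damn" about r, or merely routes execution 
through different branches, is exactly the question in dispute.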

>
> > I'm curious what the literature has to say about that. And if 
> functionalism means 
> > reproducing more than the mere functional output of a system, if it 
> potentially means 
> > replication down to the elementary particles and possibly their quantum 
> entanglements, 
> > then duplication becomes impossible, not merely technically but in 
> principle. That seems 
> > against the whole point of functionalism - as the idea of "function" is 
> reduced to 
> > something almost meaningless. 
>
> I think functionalism must be confined to the classical functions, 
> discounting the quantum 
> level effects.  But it must include some behavior that is almost entirely 
> internal - e.g. 
> planning, imagining.  Excluding quantum entanglements isn't arbitrary; 
> there cannot have 
> been any evolution of goals and values based on quantum entanglement 
> (beyond the 
> statistical effects that produce decoherence and quasi-classical 
> behavior). 
>
> But what do "planning" and "imagining" mean except their functional 
outputs? It shouldn't matter to you how the planning occurs - it's an 
"implementation detail" in development speak. Your argument may be valid 
regarding quantum entanglement, but it is still an argument based on what 
"seems to make sense" rather than on genuine understanding of the 
relationship between functions and their putative qualia. 
 

> Brent 
>
>



Re: A challenge for Craig

2013-10-02 Thread Russell Standish
On Tue, Oct 01, 2013 at 10:09:03AM -0700, meekerdb wrote:
> On 10/1/2013 4:13 AM, Bruno Marchal wrote:
> >Note also that the expression "computations have qualia" can be
> >misleading. A computation has no qualia, strictly speaking. Only a
> >person supported by an infinity of computations can be said to have
> >qualia, or to live qualia.
> 
> Why an infinity of computations??  That would preclude my building an
> intelligent robot having qualia, since its computations would
> always be finite.  And I doubt there is room in my head for infinite
> computations - certainly not digital ones.
> 

He is alluding to the universal dovetailer here, which contains an
infinite number of distinct computations that implement any given
conscious state.

However, it is not clear that it is necessary for it to be infinite -
in a non-robust world that doesn't contain a UD, we still consider the
possibility of conscious computations in the MGA.
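
(For readers who have not met the construction: the dovetailer itself is 
a short, finite program that interleaves the executions of all programs, 
giving each of them one more step, round after round, forever. A minimal 
Python sketch, assuming a hypothetical single-step interpreter 
run_one_step(i, state) for program number i:

    def dovetail(run_one_step):
        states = []                 # states[i]: current state of program i
        n = 0
        while True:                 # the run, UD*, never terminates
            states.append(None)     # introduce program n, uninitialised
            for i in range(n + 1):  # advance programs 0..n one step each
                states[i] = run_one_step(i, states[i])
            n += 1

In round n every program with index at most n advances one step, so no 
non-halting program is ever finished, yet every program receives 
unboundedly many steps.)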

Cheers
-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: A challenge for Craig

2013-10-02 Thread Bruno Marchal


On 01 Oct 2013, at 18:46, Craig Weinberg wrote:




On Tuesday, October 1, 2013 7:13:17 AM UTC-4, Bruno Marchal wrote:

On 30 Sep 2013, at 14:05, Telmo Menezes wrote:



The comp assumption that computations have
qualia hidden inside them is not much of an answer either in my  
view.


I have the same problem.


The solution is in the fact that all machines have that problem.  
More exactly: all persons capable of surviving a digital  
substitution must have that and similar problems. It is a sort of  
meta-solution explaining that we are indeed confronted with something  
which is simply totally unexplainable.


Note also that the expression "computations have qualia" can be  
misleading. A computation has no qualia, strictly speaking. Only a  
person supported by an infinity of computations can be said to have  
qualia, or to live qualia. Then the math of self-reference can be  
used to explain why the qualia have to escape the pure third person  
type of explanations.


A good exercise consists in trying to think about what could look like an  
explanation of what a qualia is. Even without comp, that will seem  
impossible, and that explains why some people, like Craig, estimate  
that we have to take them as primitive. Here comp explains why  
there are things like qualia, which can emerge only in the first  
person points of view, and admit irreducible components.


Explaining why X is local to a certain perspective, or why X is  
irreducible does not explain why X is an aesthetic presence though.


Good. This means comp gives us work to do. Nobody pretends that comp solves  
everything at once; on the contrary, I specifically explain that it  
leads to new problems, like explaining the laws of physics from a  
statistics on computations-seen-from-inside (to be short).




You can have numerical expressions which are irreducible and local  
to a machine without there being any such thing as flavor or color.  
As long as we are saying that both qualia and quanta are real, I  
don't see any advantage of making qualia supervene on quanta instead  
of the other way around, especially when we can understand that the  
nature of counting is to create figurative reductions which are  
nameless and homeless.


It is easier to explain something immaterial from something  
immaterial, than to explain something immaterial from primary matter,  
which is a quite speculative notion (nobody has ever provided any  
evidence for it, except a naive extrapolation from our familiarity  
with the neighborhood).




We can't turn a wavelength into a color without color vision and  
real illumination,


No doubt.


but we can turn color into a wavelength simply by discarding all of  
the actual color experience and looking at general patterns within  
optics analytically (abstractly).


Sure. Goethe said this already, but was wrong in deducing from this  
that Newton's theory of color was wrong. It was just not handling the  
qualia aspect.




The irreducibility and 1p locality are hints, but they are neither  
necessary nor sufficient to access any specific qualia.


This is what you should justify.



I really don't think that I am missing something here. I can easily  
see it the other way around; I just don't think that it is true of  
the universe that we live in. Yes, it makes sense why a machine  
would not be able to tell that its experience is the result of a  
machine, but it doesn't make sense that Santa Claus would make that  
experience into tongues that taste differently from eyes that  
see. All that matters is information transfer, so that difference  
would not engender any qualia, just clever addressing.


The modal intensional variants of self-reference are not related to  
addressing. Even G ([]p) is not, or only quite indirectly with some  
imagination, but the subject (S4Grz, []p & p) blows up any addressing  
and naming issues in this context. No machine, ourselves included, can  
give a description of "who it is".
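
(For reference, the intensional variants alluded to here are the 
Theaetetus-style definitions applied to Goedel's provability predicate; 
in LaTeX shorthand, with t any truth constant, so that \Diamond t 
expresses consistency. This is a reminder of standard provability-logic 
material, not a derivation:

    \begin{align*}
    \text{provable, 3p:} \quad & \Box p                  && \text{(logic G)}\\
    \text{knowable, 1p:} \quad & \Box p \land p          && \text{(logic S4Grz)}\\
    \text{observable:}   \quad & \Box p \land \Diamond t \\
    \text{sensible:}     \quad & \Box p \land \Diamond t \land p
    \end{align*}

For a correct machine the first two coincide extensionally but not 
intensionally, which is why the knower has no third-person name.)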


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-02 Thread Bruno Marchal


On 01 Oct 2013, at 19:09, meekerdb wrote:


On 10/1/2013 4:13 AM, Bruno Marchal wrote:
Note also that the expression "computations have qualia" can be  
misleading. A computation has no qualia, strictly speaking. Only a  
person supported by an infinity of computations can be said to have  
qualia, or to live qualia.


Why an infinity of computations??


Because the FPI bears on arithmetic, which contains the running of all  
universal machines implementing your code, below your substitution level.


With comp you can attach a mind to some body, but you cannot attach  
one token body to a mind; you can attach only an infinity of such  
bodies, through the interference of all the computations which realize  
them in arithmetic.
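
(Schematically, and only as a sketch of the still-open measure problem: 
writing C(psi) for the set of UD* computations passing through a state 
equivalent to your current first-person state psi, and mu for the 
sought-for measure on computations, the FPI statistics would take the 
form, in LaTeX shorthand,

    P(o \mid \psi) = \frac{\mu(\{c \in C(\psi) : c \text{ continues through } o\})}{\mu(C(\psi))}

with the determination of mu being the hard, open part.)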






That would preclude my building an intelligent robot having qualia,  
since its computations would always be finite.  And I doubt there  
is room in my head for infinite computations - certainly not digital  
ones.


You are right. We cannot build an intelligent being with qualia. The  
computer, and the 3p-robot, does not create that consciousness; it  
will only help a consciousness, which is already in Platonia, to be  
able to manifest itself relatively to you, with the same statistics for  
your and the robot's continuations.


It is confusing, but this is because we tend to associate mind with a  
brain or a robot, but mind is an attribute of a person, and a brain or  
a body is only needed for a relative purpose.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-02 Thread Bruno Marchal


On 02 Oct 2013, at 06:56, Pierz wrote:




On Wednesday, October 2, 2013 12:46:17 AM UTC+10, Bruno Marchal wrote:

On 01 Oct 2013, at 15:31, Pierz wrote:

> Maybe. It would be a lot more profound if we definitely *could*
> reproduce the brain's behaviour. The devil is in the detail as they
> say. But a challenge to Chalmers' position has occurred to me. It
> seems to me that Bruno has convincingly argued that *if* comp holds,
> then consciousness supervenes on the computation, not on the
> physical matter. But functionalism suggests that what counts is the
> output, not the manner in which it is arrived at. That is to say,
> the brain or whatever neural subunit or computer is doing the
> processing is a black box. You input something and then read the
> output, but the intervening steps don't matter. Consider what this
> might mean in terms of a brain.


That's not clear to me. The question is "output of what". If it is the
entire subject, this is more behaviorism than functionalism.
Putnam's functionalism makes clear that we have to take the output of
the neurons into account.
Comp is functionalism, but with the idea that we don't know the level
of substitution, so it might be that we have to take into account the
output of the gluons in our atoms (so comp makes clear that it only
asks for the existence of a level of substitution, and then shows that
no machine can know for sure its subst. level, making Putnam's sort of
functionalism a bit fuzzy).

I was going on Stathis's post. He stated that reproducing the  
brain's functions meant reproducing the qualia, but I refuted that  
(I think).


I am not entirely sure.
There is a problem as the notion of "reproducing brain's functions" is  
ambiguous, and "reproducing qualia" is too.












> Let's say a vastly advanced alien species comes to earth. It looks
> at our puny little brains and decides to make one to fool us. This
> constructed person/brain receives normal conversational input and
> outputs conversation that it knows will perfectly mimic a human
> being. But in fact the computer doing this processing is vastly
> superior to the human brain. It's like a modern PC emulating a
> TRS-80, except much more so. When it computes/thinks up a response,
> it draws on a vast amount of knowledge, intelligence and creativity
> and accesses qualia undreamed of by a human. Yet its response will
> completely fool any normal human and will pass Turing tests till the
> cows come home. What this thought experiment shows is that, while
> half-qualia may be absurd, it most certainly is possible to
> reproduce the outputs of a brain without replicating its qualia. It
> might have completely different qualia, just as a very good actor's
> emotions can't be distinguished from the real thing, even though his
> or her internal experience is quite different. And if qualia can be
> quite different even though the functional outputs are the same,
> this does seem to leave functionalism in something of a quandary.
> All we can say is that there must be some kind of qualia occurring,
> rather a different result from what Chalmers is claiming. When we
> extend this type of scenario to artificial neurons or partial brain
> prostheses as in Chalmers' paper, we quickly run up against
> perplexing problems. Imagine the advanced alien provides these
> prostheses. It takes the same inputs and generates the same correct
> outputs, but it processes those inputs within a much vaster, more
> complex system. Does the brain utilizing this advanced prosthesis
> experience a kind of expanded consciousness because of this, without
> that difference being detectable? Or do the qualia remain somehow
> confined to the prosthesis (whatever that means)? These crazy
> quandaries suggest to me that basically, we don't know shit.

Hmm, I am not convinced. "Chalmers' argument" is that to get a
philosophical zombie, the fading argument shows that you have to go
through half-qualia, which is absurd. His goal (here) is to show that
"no qualia" is absurd.

That the qualia can be different is known in the qualia literature,
and is a big open problem per se. But Chalmers argues only that "no
qualia" is absurd, indeed because it would need some absurd notion of
intermediate half-qualia.

Maybe I miss a point. Stathis can clarify this further.

Yes, I understand that to be Chalmers' main point. Although, if the  
qualia can be different, it does present issues - how much and in  
what way can it vary? I'm curious what the literature has to say  
about that. And if functionalism means reproducing more than the  
mere functional output of a system, if it potentially means  
replication down to the elementary particles and possibly their  
quantum entanglements, then duplication becomes impossible, not  
merely technically but in principle. That seems against the whole  
point of functionalism - as the idea of "function" is reduced to  
something almost meaningless.


This shows only that computationalism admi

Re: A challenge for Craig

2013-10-02 Thread Bruno Marchal


On 02 Oct 2013, at 11:04, Russell Standish wrote:


On Tue, Oct 01, 2013 at 10:09:03AM -0700, meekerdb wrote:

On 10/1/2013 4:13 AM, Bruno Marchal wrote:

Note also that the expression "computations have qualia" can be
misleading. A computation has no qualia, strictly speaking. Only a
person supported by an infinity of computations can be said to have
qualia, or to live qualia.


Why an infinity of computations??  That would preclude my building an
intelligent robot having qualia, since its computations would
always be finite.  And I doubt there is room in my head for infinite
computations - certainly not digital ones.



He is alluding to the universal dovetailer here, which contains an
infinite number of distinct computations that implement any given
conscious state.


OK.

Strictly speaking the UD is the program, which is finite. But its  
running (UD*) is infinite, and it runs all computations, those that  
stop and those that do not stop.






However, it is not clear that it is necessary for it to be infinite -
in a non-robust world that doesn't contain a UD,


It cannot run the UD integrally, OK.



we still consider the
possibility of conscious computations in the MGA.


Yes, but without the interesting measure problem, and we lose the  
ability to explain where matter comes from; indeed that move  
reintroduces primary matter, which then appears as a "matter of the  
gap". I think.


Best,

Bruno





Cheers
--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-02 Thread Craig Weinberg


On Wednesday, October 2, 2013 12:26:45 PM UTC-4, Bruno Marchal wrote:
>
>
> On 02 Oct 2013, at 06:56, Pierz wrote:
>
>
>
> On Wednesday, October 2, 2013 12:46:17 AM UTC+10, Bruno Marchal wrote:
>>
>> Then the reasoning shows (at a meta-level, made possible with the 
> assumption used) how consciousness and beliefs (more or less deluded) in 
> physical realities develop in arithmetic.
>

Are 'beliefs in' physical realities the same as experiencing the realism of 
public physics though? For instance, I believe that I should avoid driving 
recklessly in my actual car in a way that I need not in a driving game. 
Because I believe that the consequences of a real-life collision are more 
severe than those of a game collision, I drive more conservatively in real 
life. That's all ok, but a belief about consequences would not generate 
realistic qualia. If someone held a gun to my head while I played the 
racing game, the game would not become any more realistic. I always feel 
like an equivalence between belief and qualia is being implied here which 
is not the case. It's along the lines of assuming that a hypnotic state can 
fully replace reality. If that were the case, of course, everybody would be 
lining up to get hypnotized. There is some permeability there, but I think 
it's simplistic to imply that the aggregate of all qualia arises purely 
from the arbitrary tokenization of beliefs.
 

>
> But that's the mathematical (arithmetical) part. In UDA it is just shown 
> that if comp is true (an hypothesis on consciousness) then physics is a 
> branch of arithmetic. More precisely a branch of the ideally 
> self-referentially correct machine's theology. (always in the Greek sense).
>
> There is no pretense that comp is true, but if it is true, the correct 
> "QM" cannot postulate the wave, it has to derive the wave from the numbers. 
> That's what UDA shows: a problem. AUDA (the machine's interview) provides 
> the only path (by Gödel, Löb, Solovay) capable of relating the truth and 
> all machine's points of view. 
>
> There will be many ways to extract physics from the numbers, but 
> interviewing the self-introspecting universal machine is the only way to 
> get not just the laws of physics, but also why it can hurt, and why a part 
> of that seems to be necessarily not functional.
>

I don't think that an interview with anyone can explain why they can hurt, 
unless you have already naturalized an expectation of pain. In other words, 
if we don't presume that the universal machine experiences anything, there is 
no need to invent qualia or experience to justify any mathematical 
relation. If mathematically all that you need is non-functional, secret 
kinds of variable labels to represent machine states, I don't see why we 
should assume they are qualitative. If anything, the unity of arithmetic 
truth would demand a single sensory channel that constitutes all possible 
I/O.

Craig


> Bruno
>
>
>
>  
>
>>
>> Bruno 
>>
>>
>> > 
>>
>> http://iridia.ulb.ac.be/~marchal/ 
>>
>>
>>
>>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: A challenge for Craig

2013-10-02 Thread meekerdb

On 10/2/2013 2:04 AM, Russell Standish wrote:

On Tue, Oct 01, 2013 at 10:09:03AM -0700, meekerdb wrote:

On 10/1/2013 4:13 AM, Bruno Marchal wrote:

Note also that the expression "computations have qualia" can be
misleading. A computation has no qualia, strictly speaking. Only a
person supported by an infinity of computations can be said to have
qualia, or to live qualia.

Why an infinity of computations??  That would preclude my building an
intelligent robot having qualia, since its computations would
always be finite.  And I doubt there is room in my head for infinite
computations - certainly not digital ones.


He is alluding to the universal dovetailer here, which contains an
infinite number of distinct computations that implement any given
conscious state.

However, it is not clear that it is necessary for it to be infinite -
in a non-robust world that doesn't contain a UD, we still consider the
possibility of conscious computations in the MGA.


Yes, I know what he is alluding to.  But if it really does take all those infinite threads 
of computation to realize conscious states, then I think that is the same as saying it 
takes the underlying physics of a brain (or computer) to realize consciousness.  But then 
Bruno's program of explaining things from computation hasn't avoided relying on the 
physical. ??


Brent



Re: A challenge for Craig

2013-10-02 Thread meekerdb

On 10/1/2013 11:49 PM, Pierz wrote:



On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote:

On 10/1/2013 9:56 PM, Pierz wrote:
> Yes, I understand that to be Chalmers' main point. Although, if the 
qualia can be
> different, it does present issues - how much and in what way can it vary?

Yes, that's a question that interests me because I want to be able to build 
intelligent
machines and so I need to know what qualia they will have, if any.  I think 
it will
depend
on their sensors and on their values/goals.  If I build a very intelligent 
Mars Rover,
capable of learning and reasoning, with a goal of discovering whether there 
was once
life
on Mars; then I expect it will experience pleasure in finding evidence 
regarding this.
But no matter how smart I make it, it won't experience lust.

"Reasoning" being what exactly? The ability to circumnavigate an obstacle for instance? 
There are no "rewards" in an algorithm. There are just paths which do or don't get 
followed depending on inputs. Sure, the argument that there must be qualia in a 
sufficiently sophisticated computer seems compelling. But the argument that there can't 
be seems equally so. As a programmer I have zero expectation that the computer I am 
programming will feel pleasure or suffering. It's just as happy to throw an exception as 
it is to complete its assigned task. *I* am the one who experiences pain when it hits an 
error! I just can't conceive of the magical point at which the computer goes from total 
indifference to giving a damn. That's the point Craig keeps pushing and which I agree 
with. Something is missing from our understanding.


What's missing is that you're considering a computer, not a robot.  A robot has to have values 
and goals in order to act and react in the world.  It has complex systems and subsystems 
that may have conflicting subgoals, and in order to learn from experience it keeps a 
narrative history about what it considers significant events.  At that level it may have 
the consciousness of a mouse.  If it's a social robot, one that needs to cooperate and 
compete in a society of other persons, then it will need a self-image and model of other 
people.  In that case it's quite reasonable to suppose it also has qualia.




> I'm curious what the literature has to say about that. And if 
functionalism means
> reproducing more than the mere functional output of a system, if it 
potentially means
> replication down to the elementary particles and possibly their quantum
entanglements,
> then duplication becomes impossible, not merely technically but in 
principle. That
seems
> against the whole point of functionalism - as the idea of "function" is 
reduced to
> something almost meaningless.

I think functionalism must be confined to the classical functions, 
discounting the
quantum
level effects.  But it must include some behavior that is almost entirely 
internal -
e.g.
planning, imagining.  Excluding quantum entanglements isn't arbitrary; 
there cannot
have
been any evolution of goals and values based on quantum entanglement 
(beyond the
statistical effects that produce decoherence and quasi-classical behavior).

But what do "planning" and "imagining" mean except their functional outputs? It 
shouldn't matter to you how the planning occurs - it's an "implementation detail" in 
development speak.


You can ask a person about plans and imaginings, and speech in response is an 
action.

Your argument may be valid regarding quantum entanglement, but it is still an argument 
based on what "seems to make sense" rather than on genuine understanding of the 
relationship between functions and their putative qualia.


But I suspect that there is no understanding that would satisfy Craig as "genuine".  Do we 
have a "genuine" understanding of electrodynamics?  of computation?  What we have is the 
ability to manipulate them for our purposes.  So when we can make an intelligent robot 
that interacts with people AS IF it experiences qualia and we can manipulate and 
anticipate that behavior, then we'll have just as genuine an understanding of qualia as we 
do of electrodynamics.


Brent



Re: A challenge for Craig

2013-10-02 Thread meekerdb

On 10/2/2013 6:35 AM, Bruno Marchal wrote:


On 01 Oct 2013, at 19:09, meekerdb wrote:


On 10/1/2013 4:13 AM, Bruno Marchal wrote:
Note also that the expression "computations have qualia" can be misleading. A 
computation has no qualia, strictly speaking. Only a person supported by an infinity 
of computations can be said to have qualia, or to live qualia.


Why an infinity of computations??


Because the FPI bears on arithmetic, which contains the running of all universal machines 
implementing your code, below your substitution level.


With comp you can attach a mind to some body, but you cannot attach one token body to a 
mind; you can attach only an infinity of such bodies, through the interference of all 
the computations which realize them in arithmetic.






That would preclude my building an intelligent robot having qualia, since its 
computations would always be finite.  And I doubt there is room in my head for infinite 
computations - certainly not digital ones.


You are right. We cannot build an intelligent being with qualia. The computer, and the 
3p-robot, does not create that consciousness; it will only help a consciousness, which 
is already in Platonia, to be able to manifest itself relatively to you, with the same 
statistics for your and the robot's continuations.


When a consciousness is not manifested, what is its content?

Brent



It is confusing, but this is because we tend to associate mind with a brain or a robot, but 
mind is an attribute of a person, and a brain or a body is only needed for a relative 
purpose.


Bruno




http://iridia.ulb.ac.be/~marchal/ 







Re: A challenge for Craig

2013-10-02 Thread meekerdb

On 10/2/2013 9:26 AM, Bruno Marchal wrote:
I agree with Brent though on this. Your UDA proceeds on the basis that a computer in a 
single reality (not an infinite sum of calculations - that comes later) can have a 1p.


Yes. It has 1p, it is not a zombie. But that 1p, for him, is really defined by a cloud 
of similar and variant computations corresponding to its indeterminacy domain in the 
universal dovetailing (= already in a tiny part of arithmetic).


And doesn't this cloud correspond to the fuzzy, quantum description of the underlying 
physics, i.e. the quantum state of the brain.  And isn't it, per Tegmark, quasi-classical.


Brent



Re: A challenge for Craig

2013-10-02 Thread John Mikes
Brent:
*"**But no matter how smart I make it, it won't experience lust."*
*
*
1. "lust" is not the universal criterion that makes us human, it is only
one of our humanly circumscribed paraphernalia we apply in HUMAN thinking
and HUMAN complexity with HUMAN language. Can you apply a similar criterion
for the robot in 'it's' characteristics?

2. A N D if *YOU * cannot make it 'smarter', is that a general statement?

John M


On Wed, Oct 2, 2013 at 1:15 AM, meekerdb  wrote:

> On 10/1/2013 9:56 PM, Pierz wrote:
>
>> Yes, I understand that to be Chalmers' main point. Although, if the
>> qualia can be different, it does present issues - how much and in what way
>> can it vary?
>>
>
> Yes, that's a question that interests me because I want to be able to
> build intelligent machines and so I need to know what qualia they will
> have, if any.  I think it will depend on their sensors and on their
> values/goals.  If I build a very intelligent Mars Rover, capable of
> learning and reasoning, with a goal of discovering whether there was once
> life on Mars; then I expect it will experience pleasure in finding evidence
> regarding this.  But no matter how smart I make it, it won't experience
> lust.
>
>
>
>  I'm curious what the literature has to say about that. And if
>> functionalism means reproducing more than the mere functional output of a
>> system, if it potentially means replication down to the elementary
>> particles and possibly their quantum entanglements, then duplication
>> becomes impossible, not merely technically but in principle. That seems
>> against the whole point of functionalism - as the idea of "function" is
>> reduced to something almost meaningless.
>>
>
> I think functionalism must be confined to the classical functions,
> discounting the quantum level effects.  But it must include some behavior
> that is almost entirely internal - e.g. planning, imagining.  Excluding
> quantum entanglements isn't arbitrary; there cannot have been any evolution
> of goals and values based on quantum entanglement (beyond the statistical
> effects that produce decoherence and quasi-classical behavior).
>
> Brent
>
>



Re: A challenge for Craig

2013-10-02 Thread meekerdb

On 10/2/2013 2:06 PM, John Mikes wrote:


Brent:
*/"/*/But no matter how smart I make it, it won't experience lust."/
/
/
1. "lust" is not the universal criterion that makes us human, it is only one of our 
humanly circumscribed paraphernalia we apply in HUMAN thinking and HUMAN complexity with 
HUMAN language.


I don't think so.  I think it's a qualia experienced by sexually reproducing species.  My 
dog seems to experience it when in the presence of a receptive female.


But of course I picked lust, just because it's not something a robot, that doesn't 
reproduce sexually, and might not reproduce at all, would need to have.



Can you apply a similar criterion for the robot in 'it's' characteristics?


I think that the robot could feel some qualia analogous to humans, e.g. frustration, fear, 
too cold, too hot, tired,...




2. AND if _YOU_ cannot make it 'smarter', is that a general statement?


?? I didn't state that I cannot make it smarter.

Brent



John M


On Wed, Oct 2, 2013 at 1:15 AM, meekerdb wrote:


On 10/1/2013 9:56 PM, Pierz wrote:

Yes, I understand that to be Chalmers' main point. Although, if the 
qualia can
be different, it does present issues - how much and in what way can it 
vary?


Yes, that's a question that interests me because I want to be able to build
intelligent machines and so I need to know what qualia they will have, if 
any.  I
think it will depend on their sensors and on their values/goals.  If I 
build a very
intelligent Mars Rover, capable of learning and reasoning, with a goal of
discovering whether there was once life on Mars; then I expect it will 
experience
pleasure in finding evidence regarding this.  But no matter how smart I 
make it, it
won't experience lust.



I'm curious what the literature has to say about that. And if 
functionalism
means reproducing more than the mere functional output of a system, if 
it
potentially means replication down to the elementary particles and 
possibly
their quantum entanglements, then duplication becomes impossible, not 
merely
technically but in principle. That seems against the whole point of
functionalism - as the idea of "function" is reduced to something almost
meaningless.


I think functionalism must be confined to the classical functions, 
discounting the
quantum level effects.  But it must include some behavior that is almost 
entirely
internal - e.g. planning, imagining.  Excluding quantum entanglements isn't
arbitrary; there cannot have been any evolution of goals and values based 
on quantum
entanglement (beyond the statistical effects that produce decoherence and
quasi-classical behavior).

Brent





Re: A challenge for Craig

2013-10-02 Thread Stathis Papaioannou
On 1 October 2013 23:31, Pierz  wrote:
> Maybe. It would be a lot more profound if we definitely *could* reproduce the 
> brain's behaviour. The devil is in the detail as they say. But a challenge to 
> Chalmers' position has occurred to me. It seems to me that Bruno has 
> convincingly argued that *if* comp holds, then consciousness supervenes on 
> the computation, not on the physical matter.

When I say "comp holds" I mean in the first instance that my physical
brain could be replaced with an appropriate computer and I would still
be me. But this assumption leads to the conclusion that the computer
is not actually needed, just the computation as platonic object. So if
it's true that my brain could be replaced with a physical computer
then my brain and the computer were not physical in a fundamental
sense in the first place! While this is circular-sounding I don't
think that it's actually contradictory. It is not a necessary premise
of Chalmers' argument (or indeed, for most scientific arguments) that
there be a fundamental physical reality.

As for reproducing the brain's behaviour, it comes down to whether
brain physics is computable. It probably *is* computable, since we
have not found evidence of non-computable physics of which I am aware.
If it is not, then computationalism is false. But even if
computationalism is false, Chalmer's argument still shows that
*functionalism* is true. Computationalism is a subset of
functionalism.

> But functionalism suggests that what counts is the output, not the manner in 
> which it is arrived at. That is to say, the brain or whatever neural subunit 
> or computer is doing the processing is a black box. You input something and 
> then read the output, but the intervening steps don't matter. Consider what 
> this might mean in terms of a brain. Let's say a vastly advanced alien 
> species comes to earth. It looks at our puny little brains and decides to 
> make one to fool us. This constructed person/brain receives normal 
> conversational input and outputs conversation that it knows will perfectly 
> mimic a human being. But in fact the computer doing this processing is vastly 
> superior to the human brain. It's like a modern PC emulating a TRS-80, except 
> much more so. When it computes/thinks up a response, it draws on a vast 
> amount of knowledge, intelligence and creativity and accesses qualia 
> undreamed of by a human. Yet its response will completely fool any normal 
> human and will pass Turing tests till the cows come home. What this thought 
> experiment shows is that, while half-qualia may be absurd, it most certainly 
> is possible to reproduce the outputs of a brain without replicating its 
> qualia. It might have completely different qualia, just as a very good 
> actor's emotions can't be distinguished from the real thing, even though his 
> or her internal experience is quite different. And if qualia can be quite 
> different even though the functional outputs are the same, this does seem to 
> leave functionalism in something of a quandary. All we can say is that there 
> must be some kind of qualia occurring, rather a different result from what 
> Chalmers is claiming. When we extend this type of scenario to artificial 
> neurons or partial brain prostheses as in Chalmers' paper, we quickly run up 
> against perplexing problems. Imagine the advanced alien provides these 
> prostheses. It takes the same inputs and generates the same correct outputs, 
> but it processes those inputs within a much vaster, more complex system. Does 
> the brain utilizing this advanced prosthesis experience a kind of expanded 
> consciousness because of this, without that difference being detectable? Or 
> do the qualia remain somehow confined to the prosthesis (whatever that 
> means)? These crazy quandaries suggest to me that basically, we don't know 
> shit.

Essentially, I think that if the alien computer reproduces human
behaviour then it will also reproduce human qualia. Start with a
prosthesis that replaces 1% of the brain. If it has different qualia
despite copying the original neurons' I/O behaviour then very quickly
the system will deteriorate: the brain's owner will notice that the
qualia are different and behave differently, which is impossible if
the original assumption about copying the original neurons' I/O
behaviour is true. The same is the case if the prosthesis replaces 99%
of the neurons - the 1% remaining neurons would notice that the qualia
were different and deviate from normal behaviour, and the same would
be the case if only one of the original neurons were present. If you
assume it is possible that the prosthesis reproduces the I/O behaviour
but not the qualia you get a contradiction, and a contradiction is
worse than a crazy quandary.
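
The reductio can be compressed (a sketch of the structure, not Chalmers' 
wording): write B_f for the brain with fraction f of its neurons 
replaced by I/O-equivalent parts, IO(.) for behaviour and Q(.) for 
qualia. The two assumptions are, in LaTeX shorthand,

    \forall f:\ IO(B_f) = IO(B_0) \qquad \exists f:\ Q(B_f) \neq Q(B_0)

But a noticed change in qualia is itself reported behaviour, so the 
second assumption forces IO(B_f) \neq IO(B_0) for some f, contradicting 
the first.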


-- 
Stathis Papaioannou


Re: A challenge for Craig

2013-10-02 Thread Stathis Papaioannou
On 2 October 2013 00:46, Bruno Marchal  wrote:
>
> On 01 Oct 2013, at 15:31, Pierz wrote:
>
>> Maybe. It would be a lot more profound if we definitely *could* reproduce
>> the brain's behaviour. The devil is in the detail as they say. But a
>> challenge to Chalmers' position has occurred to me. It seems to me that
>> Bruno has convincingly argued that *if* comp holds, then consciousness
>> supervenes on the computation, not on the physical matter. But functionalism
>> suggests that what counts is the output, not the manner in which it as
>> arrived at. That is to say, the brain or whatever neural subunit or computer
>> is doing the processing is a black box. You input something and then read
>> the output, but the intervening steps don't matter. Consider what this might
>> mean in terms of a brain.
>
>
>
> That's not clear to me. The question is "output of what". If it is the entire
> subject, this is more behaviorism than functionalism.
> Putnam's functionalism makes clear that we have to take the output of the
> neurons into account.
> Comp is functionalism, but with the idea that we don't know the level of
> substitution, so it might be that we have to take into account the output
> of the gluons in our atoms (so comp makes clear that it only asks for the
> existence of a level of substitution, and then shows that no machine can know
> for sure its subst. level, making Putnam's sort of functionalism a bit
> fuzzy).
>
>
>
>
>
>> Let's say a vastly advanced alien species comes to earth. It looks at our
>> puny little brains and decides to make one to fool us. This constructed
>> person/brain receives normal conversational input and outputs conversation
>> that it knows will perfectly mimic a human being. But in fact the computer
>> doing this processing is vastly superior to the human brain. It's like a
>> modern PC emulating a TRS-80, except much more so. When it computes/thinks
>> up a response, it draws on a vast amount of knowledge, intelligence and
>> creativity and accesses qualia undreamed of by a human. Yet its response
>> will completely fool any normal human and will pass Turing tests till the
>> cows come home. What this thought experiment shows is that, while
>> half-qualia may be absurd, it most certainly is possible to reproduce the
>> outputs of a brain without replicating its qualia. It might have completely
>> different qualia, just as a very good actor's emotions can't be
>> distinguished from the real thing, even though his or her internal
>> experience is quite different. And if qualia can be quite different even
>> though the functional outputs are the same, this does seem to leave
>> functionalism in something of a quandary. All we can say is that there must
>> be some kind of qualia occurring, rather a different result from what
>> Chalmers is claiming. When we extend this type of scenario to artificial
>> neurons or partial brain prostheses as in Chalmers' paper, we quickly run up
>> against perplexing problems. Imagine the advanced alien provides these
>> prostheses. Each takes the same inputs and generates the same correct outputs,
>> but it processes those inputs within a much vaster, more complex system.
>> Does the brain utilizing this advanced prosthesis experience a kind of
>> expanded consciousness because of this, without that difference being
>> detectable? Or do the qualia remain somehow confined to the prosthesis
>> (whatever that means)? These crazy quandaries suggest to me that basically,
>> we don't know shit.
>
>
> Hmm, I am not convinced. Chalmers' argument is that to get a philosophical
> zombie, the fading argument shows that you have to go through half-qualia,
> which is absurd. His goal (here) is to show that "no qualia" is absurd.
>
> That the qualia can be different is known in the qualia literature, and is a
> big open problem per se. But Chalmers argues only that "no qualia" is
> absurd, indeed because it would need some absurd notion of intermediate
> half qualia.
>
> Maybe I miss a point. Stathis can clarify this further.

The argument is simply summarised thus: it is impossible even for God
to make a brain prosthesis that reproduces the I/O behaviour but has
different qualia. This is a proof of comp, provided that brain physics
is computable, or of functionalism if brain physics is not computable.
Non-comp functionalism may entail, for example, that the replacement
brain contain a hypercomputer.

> Eventually the qualia are determined by infinitely many number relations, and
> a brain filters them. It does not create them, just as no machine can create
> PI, only "re-compute" it, somehow. The analogy breaks down here, as qualia are
> a purely first person notion, which explains why they are distributed on the
> whole universal dovetailing (sigma_1 arithmetic).
>
>
> Bruno
>
>
>

Re: A challenge for Craig

2013-10-02 Thread meekerdb

On 10/2/2013 5:15 PM, Stathis Papaioannou wrote:

On 1 October 2013 23:31, Pierz  wrote:

Maybe. It would be a lot more profound if we definitely *could* reproduce the 
brain's behaviour. The devil is in the detail as they say. But a challenge to 
Chalmers' position has occurred to me. It seems to me that Bruno has 
convincingly argued that *if* comp holds, then consciousness supervenes on the 
computation, not on the physical matter.

When I say "comp holds" I mean in the first instance that my physical
brain could be replaced with an appropriate computer and I would still
be me. But this assumption leads to the conclusion that the computer
is not actually needed, just the computation as platonic object.


But what if you were just slightly different or different only in some rare circumstances 
(like being in an MRI), which seems very likely?



So if
it's true that my brain could be replaced with a physical computer
then my brain and the computer were not physical in a fundamental
sense in the first place!


But this depends on the MGA or Olympia argument, which I find suspect.


While this is circular-sounding I don't
think that it's actually contradictory. It is not a necessary premise
of Chalmers' argument (or indeed, for most scientific arguments) that
there be a fundamental physical reality.

As for reproducing the brain's behaviour, it comes down to whether
brain physics is computable. It probably *is* computable, since we
have found no evidence of non-computable physics that I am aware of.


Suppose it was not Turing computable, but was computable in some other sense (e.g. 
hypercomputable).  Aren't you just setting up a tautology in which whatever the brain 
does, whatever the universe does, we'll call it X-computable?  Already we have one good 
model of the universe, Copenhagen QM, that says it's not Turing computable.



If it is not, then computationalism is false. But even if
computationalism is false, Chalmers' argument still shows that
*functionalism* is true. Computationalism is a subset of
functionalism.


But functionalism suggests that what counts is the output, not the manner in 
which it is arrived at. That is to say, the brain or whatever neural subunit or 
computer is doing the processing is a black box. You input something and then 
read the output, but the intervening steps don't matter. Consider what this 
might mean in terms of a brain. Let's say a vastly advanced alien species comes 
to earth. It looks at our puny little brains and decides to make one to fool 
us. This constructed person/brain receives normal conversational input and 
outputs conversation that it knows will perfectly mimic a human being. But in 
fact the computer doing this processing is vastly superior to the human brain. 
It's like a modern PC emulating a TRS-80, except much more so. When it 
computes/thinks up a response, it draws on a vast amount of knowledge, 
intelligence and creativity and accesses qualia undreamed of by a human. Yet 
its response will completely fool any normal human and will pass Turing tests 
till the cows come home. What this thought experiment shows is that, while 
half-qualia may be absurd, it most certainly is possible to reproduce the 
outputs of a brain without replicating its qualia. It might have completely 
different qualia, just as a very good actor's emotions can't be distinguished 
from the real thing, even though his or her internal experience is quite 
different. And if qualia can be quite different even though the functional 
outputs are the same, this does seem to leave functionalism in something of a 
quandary. All we can say is that there must be some kind of qualia occurring, 
rather a different result from what Chalmers is claiming. When we extend this 
type of scenario to artificial neurons or partial brain prostheses as in 
Chalmers' paper, we quickly run up against perplexing problems. Imagine the 
advanced alien provides these prostheses. Each takes the same inputs and 
generates the same correct outputs, but it processes those inputs within a much 
vaster, more complex system. Does the brain utilizing this advanced prosthesis 
experience a kind of expanded consciousness because of this, without that 
difference being detectable? Or do the qualia remain somehow confined to the 
prosthesis (whatever that means)? These crazy quandaries suggest to me that 
basically, we don't know shit.

Essentially, I think that if the alien computer reproduces human
behaviour then it will also reproduce human qualia. Start with a
prosthesis that replaces 1% of the brain. If it has different qualia
despite copying the original neurons' I/O behaviour then very quickly
the system will deteriorate: the brain's owner will notice that the
qualia are different and behave differently


I don't see how you can be sure of that.  How will he compare his qualia of red now with 
his qualia of red before?  And why would small differences imply "the system will quickly 
deteriorate"? Suppose he

Re: A challenge for Craig

2013-10-02 Thread Craig Weinberg


On Wednesday, October 2, 2013 2:59:17 PM UTC-4, Brent wrote:
>
>  On 10/1/2013 11:49 PM, Pierz wrote:
>  
>
>
> On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote: 
>>
>> On 10/1/2013 9:56 PM, Pierz wrote: 
>> > Yes, I understand that to be Chalmers' main point. Although, if the 
>> qualia can be 
>> > different, it does present issues - how much and in what way can it 
>> vary? 
>>
>> Yes, that's a question that interests me because I want to be able to 
>> build intelligent 
>> machines and so I need to know what qualia they will have, if any.  I 
>> think it will depend 
>> on their sensors and on their values/goals.  If I build a very 
>> intelligent Mars Rover, 
>> capable of learning and reasoning, with a goal of discovering whether 
>> there was once life 
>> on Mars; then I expect it will experience pleasure in finding evidence 
>> regarding this.   
>> But no matter how smart I make it, it won't experience lust. 
>>
>>  "Reasoning" being what exactly? The ability to circumnavigate an 
> obstacle for instance? There are no "rewards" in an algorithm. There are 
> just paths which do or don't get followed depending on inputs. Sure, the 
> argument that there must be qualia in a sufficiently sophisticated computer 
> seems compelling. But the argument that there can't be seems equally so. As 
> a programmer I have zero expectation that the computer I am programming 
> will feel pleasure or suffering. It's just as happy to throw an exception 
> as it is to complete its assigned task. *I* am the one who experiences pain 
> when it hits an error! I just can't conceive of the magical point at which 
> the computer goes from total indifference to giving a damn. That's the 
> point Craig keeps pushing and which I agree with. Something is missing from 
> our understanding.
>  
>
> What's missing is you're considering a computer, not a robot.  A robot 
> has to have values and goals in order to act and react in the world.  
>

Not necessarily. I think that a robot could be programmed to simulate goals 
or to avoid goals entirely, both with equal chance of success. A robot 
could be programmed to imitate the behaviors of others in their 
environment. Even in the case where a robot would naturally accumulate 
goal-like circuits, there is no reason to presume that there is any binding 
of those circuits into an overall goal. What we think of as a robot could 
just as easily be thousands of unrelated sub-units, just as a person with 
multiple personalities could navigate a single life if each personality 
handed off information to the next personality.
 

> It has complex systems and subsystems that may have conflicting subgoals, 
> and in order to learn from experience it keeps a narrative history about 
> what it considers significant events. 
>

We don't know that there is any such thing as 'it' though. To me it seems 
more likely that assuming such a unified and intentional presence is 
1) succumbing to the pathetic fallacy, and 2) begging the question. It is 
to say "We know that robots must be alive because otherwise they would not 
be as happy as we know they are."
 

> At that level it may have the consciousness of a mouse.  If it's a social 
> robot, one that needs to cooperate and compete in a society of other 
> persons, then it will need a self-image and model of other people.  In that 
> case it's quite reasonable to suppose it also has qualia.
>

It's no more reasonable than supposing that a baseball diamond is rooting 
for the home team. Machines need not have any kind of model or self-image 
which is experienced in any way. It doesn't necessarily appear like 'Field 
of Dreams'. What is needed is simply a complex tree of unconscious logical 
relations. There is no image or model, only records which are compressed to 
the point of arithmetic generalization - this is the opposite of any kind 
of aesthetic presence (qualia). If that were not the case, we wouldn't need 
sense organs, we would simply collect data in its native form, compress it 
quantitatively, and execute reactions against it with Bayesian regressions. 
No qualia required.
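
A small sketch of the kind of pipeline described above, with made-up
readings and class names (purely illustrative): raw records are
compressed to summary statistics and reactions are chosen by likelihood,
with nothing resembling an image or model.

import math

def compress(samples):
    # Reduce a raw record to two summary statistics -- an "arithmetic
    # generalization" of the data.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

def log_likelihood(x, mean, var):
    var = max(var, 1e-9)
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Per-class statistics accumulated from past records (invented numbers).
classes = {
    "obstacle": compress([0.9, 1.1, 1.0]),
    "clear": compress([0.1, 0.2, 0.15]),
}

def react(reading):
    # Pick the most likely class and execute a reaction -- arithmetic
    # over compressed records, no qualia posited anywhere.
    label = max(classes, key=lambda c: log_likelihood(reading, *classes[c]))
    return "brake" if label == "obstacle" else "advance"

assert react(0.95) == "brake" and react(0.12) == "advance"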
 

>
>   
>> > I'm curious what the literature has to say about that. And if 
>> functionalism means 
>> > reproducing more than the mere functional output of a system, if it 
>> potentially means 
>> > replication down to the elementary particles and possibly their quantum 
>> entanglements, 
>> > then duplication becomes impossible, not merely technically but in 
>> principle. That seems 
>> > against the whole point of functionalism - as the idea of "function" is 
>> reduced to 
>> > something almost meaningless. 
>>
>> I think functionalism must be confined to the classical functions, 
>> discounting the quantum 
>> level effects.  But it must include some behavior that is almost entirely 
>> internal - e.g. 
>> planning, imagining.  Excluding quantum entanglements isn't arbitrary; 
>> there cannot have 
>> been any evolution of goals and values based on quantum entanglement (beyond 
>> the statistical effects that produce decoherence and quasi-classical behavior).

Re: A challenge for Craig

2013-10-02 Thread Craig Weinberg


On Wednesday, October 2, 2013 8:23:36 PM UTC-4, stathisp wrote:
>
> On 2 October 2013 00:46, Bruno Marchal > 
> wrote: 
> > 
> > On 01 Oct 2013, at 15:31, Pierz wrote: 
> > 
> >> Maybe. It would be a lot more profound if we definitely *could* 
> reproduce 
> >> the brain's behaviour. The devil is in the detail as they say. But a 
> >> challenge to Chalmers' position has occurred to me. It seems to me that 
> >> Bruno has convincingly argued that *if* comp holds, then consciousness 
> >> supervenes on the computation, not on the physical matter. But 
> functionalism 
> >> suggests that what counts is the output, not the manner in which it is 
> >> arrived at. That is to say, the brain or whatever neural subunit or 
> computer 
> >> is doing the processing is a black box. You input something and then 
> read 
> >> the output, but the intervening steps don't matter. Consider what this 
> might 
> >> mean in terms of a brain. 
> > 
> > 
> > 
> > That's not clear to me. The question is "output of what". If it is the 
> entire 
> > subject, this is more behaviorism than functionalism. 
> > Putnam's functionalism makes clear that we have to take the output of 
> the 
> > neurons into account. 
> > Comp is functionalism, but with the idea that we don't know the level of 
> > substitution, so it might be that we have to take into account the 
> output 
> > of the gluons in our atoms (so comp makes clear that it only asks for the 
> > existence of a level of substitution, and then shows that no machine can 
> know 
> > for sure its subst. level, making Putnam's sort of functionalism a bit 
> > fuzzy). 
> > 
> > 
> > 
> > 
> > 
> >> Let's say a vastly advanced alien species comes to earth. It looks at 
> our 
> >> puny little brains and decides to make one to fool us. This constructed 
> >> person/brain receives normal conversational input and outputs 
> conversation 
> >> that it knows will perfectly mimic a human being. But in fact the 
> computer 
> >> doing this processing is vastly superior to the human brain. It's like 
> a 
> >> modern PC emulating a TRS-80, except much more so. When it 
> computes/thinks 
> >> up a response, it draws on a vast amount of knowledge, intelligence and 
> >> creativity and accesses qualia undreamed of by a human. Yet its 
> response 
> >> will completely fool any normal human and will pass Turing tests till 
> the 
> >> cows come home. What this thought experiment shows is that, while 
> >> half-qualia may be absurd, it most certainly is possible to reproduce 
> the 
> >> outputs of a brain without replicating its qualia. It might have 
> completely 
> >> different qualia, just as a very good actor's emotions can't be 
> >> distinguished from the real thing, even though his or her internal 
> >> experience is quite different. And if qualia can be quite different 
> even 
> >> though the functional outputs are the same, this does seem to leave 
> >> functionalism in something of a quandary. All we can say is that there 
> must 
> >> be some kind of qualia occurring, rather a different result from what 
> >> Chalmers is claiming. When we extend this type of scenario to 
> artificial 
> >> neurons or partial brain prostheses as in Chalmers' paper, we quickly 
> run up 
> >> against perplexing problems. Imagine the advanced alien provides these 
> >> prostheses. Each takes the same inputs and generates the same correct 
> outputs, 
> >> but it processes those inputs within a much vaster, more complex 
> system. 
> >> Does the brain utilizing this advanced prosthesis experience a kind of 
> >> expanded consciousness because of this, without that difference being 
> >> detectable? Or do the qualia remain somehow confined to the prosthesis 
> >> (whatever that means)? These crazy quandaries suggest to me that 
> basically, 
> >> we don't know shit. 
> > 
> > 
> > Hmm, I am not convinced. Chalmers' argument is that to get a 
> philosophical 
> > zombie, the fading argument shows that you have to go through 
> half-qualia, 
> > which is absurd. His goal (here) is to show that "no qualia" is absurd. 
> > 
> > That the qualia can be different is known in the qualia literature, and 
> is a 
> > big open problem per se. But Chalmers argues only that "no qualia" is 
> > absurd, indeed because it would need some absurd notion of intermediate 
> > half qualia. 
> > 
> > Maybe I miss a point. Stathis can clarify this further. 
>
> The argument is simply summarised thus: it is impossible even for God 
> to make a brain prosthesis that reproduces the I/O behaviour but has 
> different qualia. This is a proof of comp, provided that brain physics 
> is computable, or of functionalism if brain physics is not computable. 
> Non-comp functionalism may entail, for example, that the replacement 
> brain contain a hypercomputer. 
>


It's like saying that if the same rent is paid for every apartment in the 
same building, then the same person must be living there, and that proves 
that rent payments are people.

Re: A challenge for Craig

2013-10-03 Thread Telmo Menezes
On Tue, Oct 1, 2013 at 6:26 PM, Bruno Marchal  wrote:
>
> On 01 Oct 2013, at 17:09, Telmo Menezes wrote:
>
>> On Tue, Oct 1, 2013 at 1:13 PM, Bruno Marchal  wrote:
>>>
>>>
>>> On 30 Sep 2013, at 14:05, Telmo Menezes wrote :
>>>
>>>
>>> The comp assumption that computations have qualia hidden inside them is
>>> not much of an answer either in my view.
>>>
>>>
>>> I have the same problem.
>>>
>>>
>>> The solution is in the fact that all machines have that problem. More
>>> exactly: all persons capable of surviving a digital substitution must
>>> have
>>> that and similar problems. It is a sort of meta-solution explaining that
>>> we
>>> are indeed confronted with something which is simply totally unexplainable.
>>>
>>> Note also that the expression "computations have qualia" can be
>>> misleading. A
>>> computation has no qualia, strictly speaking. Only a person supported by
>>> an
>>> infinity of computation can be said to have qualia, or to live qualia.
>>> Then
>>> the math of self-reference can be used to explain why the qualia have to
>>> escape the pure third person type of explanations.
>>
>>
>> Thanks Bruno. Is there some formal proof of this? Can it be followed
>> by a mere mortal?
>
>
> It follows from comp, the classical definition of knowledge (the agreement
> that the modal logic S4 defines an axiomatization of knowledge), and then from
> Solovay's theorem, and the fact that
>
> (Bp <-> (Bp & p)) belongs to G* minus G.
>
>  It is explained in detail in the long version "conscience et mécanisme",
> and with less detail in the short Lille thesis (that you have).
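
A compressed formal gloss of the claim, in standard provability-logic
notation (a paraphrase of the references above, not a proof; B is Gödel's
provability predicate, G is Löb's logic, G* its Solovay extension):

\[ Kp \;:\equiv\; Bp \land p \quad \text{(knowledge à la Theaetetus)} \]
\[ G^* \vdash Bp \leftrightarrow (Bp \land p), \qquad
   G \nvdash Bp \leftrightarrow (Bp \land p), \]
so $Bp \leftrightarrow (Bp \land p)$ lies in $G^* \setminus G$: true of the
machine, but not provable by it. The induced logic of $K$ validates the S4
axioms $Kp \to p$, $Kp \to KKp$, and $K(p \to q) \to (Kp \to Kq)$.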

Ok, I'm preparing to start chapter 4, the movie graph argument.

> It is also
> explained in the second part of sane04.
>
> Formally a key text is the S4 provability chapter in Boolos 79 and 93, and
> the articles referred to.
>
> We can come back to this. It is the heart of the Arithmeticalization of the
> UDA. It *is* probably very naive, and I was sure this would be refuted, but
> it is not, yet.
>
> I think it can be understood by mere mortals, given enough time and
> motivation.
>
> For the sigma_1 restriction, you also need a good understanding of Gödel
> and Mechanism. One of the best is the book by Judson Webb. Torkel
> Franzen's two books are quite good also. If you read French, I summarize
> a big part of the literature on that in "conscience & mécanisme".
>
> http://iridia.ulb.ac.be/~marchal/bxlthesis/consciencemecanisme.html

Thanks!

>
> Bruno
>
>
>
>>
>>> A good exercise consists in trying to think about what could look like an
>>> explanation of what a qualia is. Even without comp, that will seem
>>> impossible, and that explains why some people, like Craig, estimate that
>>> we
>>> have to take them as primitive. Here comp explains why there are things
>>> like qualia, which can emerge only in the first person points of view,
>>> and
>>> admit irreducible components.
>>>
>>> Bruno
>>>
>>>
>>>
>>>
>>> http://iridia.ulb.ac.be/~marchal/
>>>
>>>
>>>
>>
>>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>



Re: A challenge for Craig

2013-10-03 Thread Craig Weinberg


On Thursday, October 3, 2013 9:30:13 AM UTC-4, telmo_menezes wrote:
>
> On Tue, Oct 1, 2013 at 6:10 PM, Craig Weinberg 
> > 
> wrote: 
>
> > 
> > I think that evil continues to flourish, precisely because science has 
> not 
> > integrated privacy into an authoritative worldview. As long as 
> subjectivity 
> > remains the primary concern of most ordinary people, any view that 
> denies or 
> > diminishes it will be held at arms length. I think it secretly erodes 
> > support for all forms of progress and inspires fundamentalist politics 
> as 
> > well. 
>
> I agree. Taking privacy literally, this is in fact one of the most 
> creepy consequences of total surveillance: denial of privacy is, in a 
> sense, a denial of the right to existence. Can one truly exist as a 
> human being as an indistinguishable part of some mass? No secrets, no 
> mysteries, what you see is what you get to the extreme. This sounds 
> like hell. 
>

Right. I think that it is no coincidence that the major concerns of 
ubiquitous computing revolve around privacy, propriety, and security. 
Computers don't know what privacy is, they don't know who we are, and they 
can't care who owns what. That's all part of private physics and computers 
can only exploit the lowest common denominator of privacy  - that level 
which refers only to itself as a digitally quantified object.
 

>
>
>
> > Once we have a worldview which makes sense of all phenomena and 
> > experience, even the mystical and personal, then we can move forward in 
> a 
> > more mature and sensible way. Right now, what is being offered is 'you 
> can 
> > know the truth about the universe, but only if you agree that you aren't 
> > really part of it'. 
>
> I believe more than this is being offered in this mailing list. I feel 
> your objections apply mostly to mainstream views, and to that degree I 
> agree with you. 
>

I agree, I wasn't really talking about specialized groups like this.
 

>
>
> > 
> > Why would MWI or evolution place a high value on leadership or success? 
> It 
> > seems just the opposite. What difference does it make if you succeed 
> here 
> > and now, if you implicitly fail elsewhere? MWI doesn't seem to describe 
> any 
> > universe that could ever matter to anyone. It's the Occam's catastrophe 
> > factor. 
>
> Highly speculative and non-rigorous: 
> You can see it differently if you can assume self-sampling. Let's 
> assume everything is conscious, even rocks. A rock is so simple that, 
> for it, a millennia probably feels like a second. It does not contain 
> a variety of conscious states like humans do. Then, you would expect 
> to find yourself as a complex being. Certain branches of the 
> multiverse contain such complex beings, and this would make evolution 
> appear more effective/purposeful than it really is, from the vantage 
> point of these branches. 
>

Even so, why would uniqueness or firstness be of value in a universe based 
on such immense and inescapable redundancy as MWI suggests?
 

>
>
> >> >> 
> >> >> > Thanks, 
> >> >> > Craig 
> >> >> > 
> >> >> >> 
> >> >> >> 
> >> >> >> Cheers, 
> >> >> >> Telmo. 
>
>



Re: A challenge for Craig

2013-10-03 Thread Pierz


On Friday, October 4, 2013 4:10:02 AM UTC+10, Craig Weinberg wrote:
>
>
>
> On Thursday, October 3, 2013 9:30:13 AM UTC-4, telmo_menezes wrote:
>>
>> On Tue, Oct 1, 2013 at 6:10 PM, Craig Weinberg  
>> wrote: 
>>
>> > 
>> > I think that evil continues to flourish, precisely because science has 
>> not 
>> > integrated privacy into an authoritative worldview. As long as 
>> subjectivity 
>> > remains the primary concern of most ordinary people, any view that 
>> denies or 
>> > diminishes it will be held at arms length. I think it secretly erodes 
>> > support for all forms of progress and inspires fundamentalist politics 
>> as 
>> > well. 
>>
>> I agree. Taking privacy literally, this is in fact one of the most 
>> creepy consequences of total surveillance: denial of privacy is, in a 
>> sense, a denial of the right to existence. Can one truly exist as a 
>> human being as an indistinguishable part of some mass? No secrets, no 
>> mysteries, what you see is what you get to the extreme. This sounds 
>> like hell. 
>>
>
> Right. I think that it is no coincidence that the major concerns of 
>> ubiquitous computing revolve around privacy, propriety, and security. 
> Computers don't know what privacy is, they don't know who we are, and they 
> can't care who owns what. That's all part of private physics and computers 
> can only exploit the lowest common denominator of privacy  - that level 
> which refers only to itself as a digitally quantified object.
>  
>
>>
>>
>>
>> > Once we have a worldview which makes sense of all phenomena and 
>> > experience, even the mystical and personal, then we can move forward in 
>> a 
>> > more mature and sensible way. Right now, what is being offered is 'you 
>> can 
>> > know the truth about the universe, but only if you agree that you 
>> aren't 
>> > really part of it'. 
>>
>> I believe more than this is being offered in this mailing list. I feel 
>> your objections apply mostly to mainstream views, and to that degree I 
>> agree with you. 
>>
>
> I agree, I wasn't really talking about specialized groups like this.
>  
>
>>
>>
>> > 
>> > Why would MWI or evolution place a high value on leadership or success? 
>> It 
>> > seems just the opposite. What difference does it make if you succeed 
>> here 
>> > and now, if you implicitly fail elsewhere? MWI doesn't seem to describe 
>> any 
>> > universe that could ever matter to anyone. It's the Occam's catastrophe 
>> > factor. 
>>
>> Highly speculative and non-rigorous: 
>> You can see it differently if you can assume self-sampling. Let's 
>> assume everything is conscious, even rocks. A rock is so simple that, 
>> for it, a millennium probably feels like a second. It does not contain 
>> a variety of conscious states like humans do. Then, you would expect 
>> to find yourself as a complex being. Certain branches of the 
>> multiverse contain such complex beings, and this would make evolution 
>> appear more effective/purposeful than it really is, from the vantage 
>> point of these branches. 
>>
>
> Even so, why would uniqueness or firstness be of value in a universe based 
> on such immense and inescapable redundancy as MWI suggests?
>  
>
The universe doesn't seem to be too fussed about immense and inescapable 
redundancy. Have you noticed all the *space* out there?? The progress of 
scientific knowledge has proceeded so far in the same direction: the 
revelation of a context ever vaster and more impersonal. MWI does strike me 
as quite horrifying too. But that is based on a false perspective in which 
one imagines occupying all the branches of the universe and feels naturally 
appalled. But nobody experiences the multiverse as such, thank god. As for 
what has value, again that is a matter for the first person perspective, 
the limited horizon of thoughts and feelings of the individual. From the 
god's eye view, any individual entity is utterly insignificant. You can't 
look to a cosmological theory for validation of personal significance. 
You're posing the same argument against MWI as Christians posed against 
Darwinism and before that the Copernican revolution. "What is *my* significance 
in this picture of the world?" Well sorry bud, but the news 
ain't good...

>
>>
>> >> >> 
>> >> >> > Thanks, 
>> >> >> > Craig 
>> >> >> > 
>> >> >> >> 
>> >> >> >> 
>> >> >> >> Cheers, 
>> >> >> >> Telmo. 
>>
>>



Re: A challenge for Craig

2013-10-03 Thread Pierz


On Thursday, October 3, 2013 4:59:17 AM UTC+10, Brent wrote:
>
>  On 10/1/2013 11:49 PM, Pierz wrote:
>  
>
>
> On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote: 
>>
>> On 10/1/2013 9:56 PM, Pierz wrote: 
>> > Yes, I understand that to be Chalmers' main point. Although, if the 
>> qualia can be 
>> > different, it does present issues - how much and in what way can it 
>> vary? 
>>
>> Yes, that's a question that interests me because I want to be able to 
>> build intelligent 
>> machines and so I need to know what qualia they will have, if any.  I 
>> think it will depend 
>> on their sensors and on their values/goals.  If I build a very 
>> intelligent Mars Rover, 
>> capable of learning and reasoning, with a goal of discovering whether 
>> there was once life 
>> on Mars; then I expect it will experience pleasure in finding evidence 
>> regarding this.   
>> But no matter how smart I make it, it won't experience lust. 
>>
>>  "Reasoning" being what exactly? The ability to circumnavigate an 
> obstacle for instance? There are no "rewards" in an algorithm. There are 
> just paths which do or don't get followed depending on inputs. Sure, the 
> argument that there must be qualia in a sufficiently sophisticated computer 
> seems compelling. But the argument that there can't be seems equally so. As 
> a programmer I have zero expectation that the computer I am programming 
> will feel pleasure or suffering. It's just as happy to throw an exception 
> as it is to complete its assigned task. *I* am the one who experiences pain 
> when it hits an error! I just can't conceive of the magical point at which 
> the computer goes from total indifference to giving a damn. That's the 
> point Craig keeps pushing and which I agree with. Something is missing from 
> our understanding.
>  
>
> What's missing is you're considering a computer, not a robot.  As robot 
> has to have values and goals in order to act and react in the world.  It 
> has complex systems and subsystems that may have conflicting subgoals, and 
> in order to learn from experience it keeps a narrative history about what 
> it considers significant events.  At that level it may have the 
> consciousness of a mouse.  If it's a social robot, one that needs to 
> cooperate and compete in a society of other persons, then it will need a 
> self-image and model of other people.  In that case it's quite reasonable 
> to suppose it also has qualia.
>
> Really? You believe that a robot can experience qualia but a computer 
can't? Well that just makes no sense at all. A robot is a computer with 
peripherals. When I write the code to represent its "self image", I will 
probably write a class called "Self". But once compiled, the name of the 
class will be just another string of bits, and only the programmer will 
understand that it is supposed to represent the position, attitude and 
other states of the physical robot. Do the peripherals need to be real or 
can they just be simulated? Does a brain in a Futurama-style jar lose its 
qualia because it's now a computer not a robot? Come on. 
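
For concreteness, a minimal sketch of such a "Self" class (hypothetical
names, no real robot stack assumed): once compiled it is structured bits
plus arithmetic, and nothing in the running program marks it as a self
except the programmer's interpretation.

from dataclasses import dataclass

@dataclass
class Self:
    # What the programmer calls the robot's self-image: four numbers.
    x: float = 0.0
    y: float = 0.0
    heading: float = 0.0
    battery: float = 1.0

    def update(self, dx, dy):
        # To the machine this is arithmetic on fields; the word "Self"
        # survives only as a label for the humans reading the source.
        self.x += dx
        self.y += dy

robot_state = Self()
robot_state.update(0.5, -0.2)   # just numbers changing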

>   
>> > I'm curious what the literature has to say about that. And if 
>> functionalism means 
>> > reproducing more than the mere functional output of a system, if it 
>> potentially means 
>> > replication down to the elementary particles and possibly their quantum 
>> entanglements, 
>> > then duplication becomes impossible, not merely technically but in 
>> principle. That seems 
>> > against the whole point of functionalism - as the idea of "function" is 
>> reduced to 
>> > something almost meaningless. 
>>
>> I think functionalism must be confined to the classical functions, 
>> discounting the quantum 
>> level effects.  But it must include some behavior that is almost entirely 
>> internal - e.g. 
>> planning, imagining.  Excluding quantum entanglements isn't arbitrary; 
>> there cannot have 
>> been any evolution of goals and values based on quantum entanglement 
>> (beyond the 
>> statistical effects that produce decoherence and quasi-classical 
>> behavior). 
>>
>>  But what do "planning" and "imagining" mean except their functional 
> outputs? It shouldn't matter to you how the planning occurs - it's an 
> "implementation detail" in development speak. 
>  
>
> You can ask a person about plans and imaginings, and speech in response is 
> an action.
>
>  Your argument may be valid regarding quantum entanglement, but it is 
> still an argument based on what "seems to make sense" rather than on 
> genuine understanding of the relationship between functions and their 
> putative qualia.
>  
>
> But I suspect that there is no understanding that would satisfy Craig as 
> "genuine".  Do we have a "genuine" understanding of electrodynamics?  of 
> computation?  What we have is the ability to manipulate them for our 
> purposes.  So when we can make an intelligent robot that interacts with 
> people AS IF it experiences qualia and we can manipulate and anticipa

Re: A challenge for Craig

2013-10-03 Thread Stathis Papaioannou
On 3 October 2013 10:33, meekerdb  wrote:
> On 10/2/2013 5:15 PM, Stathis Papaioannou wrote:
>>
>> On 1 October 2013 23:31, Pierz  wrote:
>>>
>>> Maybe. It would be a lot more profound if we definitely *could* reproduce
>>> the brain's behaviour. The devil is in the detail as they say. But a
>>> challenge to Chalmers' position has occurred to me. It seems to me that
>>> Bruno has convincingly argued that *if* comp holds, then consciousness
>>> supervenes on the computation, not on the physical matter.
>>
>> When I say "comp holds" I mean in the first instance that my physical
>> brain could be replaced with an appropriate computer and I would still
>> be me. But this assumption leads to the conclusion that the computer
>> is not actually needed, just the computation as platonic object.
>
>
> But what if you were just slightly different or different only in some rare
> circumstances (like being in an MRI), which seems very likely?

If the replacement were slightly different then under particular
circumstances the consciousness would be different. It's like any
other prosthesis that might function well in most situations but fail
if pushed beyond a certain limit.
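
A toy sketch of that point, with made-up limits (nothing anatomical
intended): the two units agree on all ordinary inputs and diverge only
past a threshold, so behaviour differs only in those rare circumstances.

def original_unit(x):
    return min(x, 100)   # saturates at its native limit

def prosthetic_unit(x):
    return min(x, 90)    # same mapping below a lower limit

assert all(original_unit(x) == prosthetic_unit(x) for x in range(91))
assert original_unit(95) != prosthetic_unit(95)   # divergence when pushed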

>> So if
>> it's true that my brain could be replaced with a physical computer
>> then my brain and the computer were not physical in a fundamental
>> sense in the first place!
>
>
> But this depends on the MGA or Olympia argument, which I find suspect.

Yes, but the point I wanted to make was that the case for
functionalism is not destroyed even if this argument is valid.

>> While this is circular-sounding I don't
>> think that it's actually contradictory. It is not a necessary premise
>> of Chalmers' argument (or indeed, for most scientific arguments) that
>> there be a fundamental physical reality.
>>
>> As for reproducing the brain's behaviour, it comes down to whether
>> brain physics is computable. It probably *is* computable, since we
>> have found no evidence of non-computable physics that I am aware of.
>
>
> Suppose it was not Turing computable, but was computable in some other sense
> (e.g. hypercomputable).  Aren't you just setting up a tautology in which
> whatever the brain does, whatever the universe does, we'll call it
> X-computable?  Already we have one good model of the universe, Copenhagen
> QM, that says it's not Turing computable.

I think the usual meaning of "computable" is Turing computable.

>> If it is not, then computationalism is false. But even if
>> computationalism is false, Chalmers' argument still shows that
>> *functionalism* is true. Computationalism is a subset of
>> functionalism.
>>
>>> But functionalism suggests that what counts is the output, not the manner
>>> in which it is arrived at. That is to say, the brain or whatever neural
>>> subunit or computer is doing the processing is a black box. You input
>>> something and then read the output, but the intervening steps don't matter.
>>> Consider what this might mean in terms of a brain. Let's say a vastly
>>> advanced alien species comes to earth. It looks at our puny little brains
>>> and decides to make one to fool us. This constructed person/brain receives
>>> normal conversational input and outputs conversation that it knows will
>>> perfectly mimic a human being. But in fact the computer doing this
>>> processing is vastly superior to the human brain. It's like a modern PC
>>> emulating a TRS-80, except much more so. When it computes/thinks up a
>>> response, it draws on a vast amount of knowledge, intelligence and
>>> creativity and accesses qualia undreamed of by a human. Yet its response
>>> will completely fool any normal human and will pass Turing tests till the
>>> cows come home. What this thought experiment shows is that, while
>>> half-qualia may be absurd, it most certainly is possible to reproduce the
>>> outputs of a brain without replicating its qualia. It might have completely
>>> different qualia, just as a very good actor's emotions can't be
>>> distinguished from the real thing, even though his or her internal
>>> experience is quite different. And if qualia can be quite different even
>>> though the functional outputs are the same, this does seem to leave
>>> functionalism in something of a quandary. All we can say is that there must
>>> be some kind of qualia occurring, rather a different result from what
>>> Chalmers is claiming. When we extend this type of scenario to artificial
>>> neurons or partial brain prostheses as in Chalmers' paper, we quickly run up
>>> against perplexing problems. Imagine the advanced alien provides these
>>> prostheses. Each takes the same inputs and generates the same correct outputs,
>>> but it processes those inputs within a much vaster, more complex system.
>>> Does the brain utilizing this advanced prosthesis experience a kind of
>>> expanded consciousness because of this, without that difference being
>>> detectable? Or do the qualia remain somehow confined to the prosthesis
>>> (whatever that means)?

Re: A challenge for Craig

2013-10-03 Thread Craig Weinberg


On Thursday, October 3, 2013 7:36:10 PM UTC-4, Pierz wrote:
>
>
>
> On Friday, October 4, 2013 4:10:02 AM UTC+10, Craig Weinberg wrote:
>>
>>
>>
>> On Thursday, October 3, 2013 9:30:13 AM UTC-4, telmo_menezes wrote:
>>>
>>> On Tue, Oct 1, 2013 at 6:10 PM, Craig Weinberg  
>>> wrote: 
>>>
>>> > 
>>> > I think that evil continues to flourish, precisely because science has 
>>> not 
>>> > integrated privacy into an authoritative worldview. As long as 
>>> subjectivity 
>>> > remains the primary concern of most ordinary people, any view that 
>>> denies or 
>>> > diminishes it will be held at arms length. I think it secretly erodes 
>>> > support for all forms of progress and inspires fundamentalist politics 
>>> as 
>>> > well. 
>>>
>>> I agree. Taking privacy literally, this is in fact one of the most 
>>> creepy consequences of total surveillance: denial of privacy is, in a 
>>> sense, a denial of the right to existence. Can one truly exist as a 
>>> human being as an indistinguishable part of some mass? No secrets, no 
>>> mysteries, what you see is what you get to the extreme. This sounds 
>>> like hell. 
>>>
>>
>> Right. I think that it is no coincidence that the major concerns of 
>> ubiquitous computing revolve around privacy, propriety, and security. 
>> Computers don't know what privacy is, they don't know who we are, and they 
>> can't care who owns what. That's all part of private physics and computers 
>> can only exploit the lowest common denominator of privacy  - that level 
>> which refers only to itself as a digitally quantified object.
>>  
>>
>>>
>>>
>>>
>>> > Once we have a worldview which makes sense of all phenomena and 
>>> > experience, even the mystical and personal, then we can move forward 
>>> in a 
>>> > more mature and sensible way. Right now, what is being offered is 'you 
>>> can 
>>> > know the truth about the universe, but only if you agree that you 
>>> aren't 
>>> > really part of it'. 
>>>
>>> I believe more than this is being offered in this mailing list. I feel 
>>> your objections apply mostly to mainstream views, and to that degree I 
>>> agree with you. 
>>>
>>
>> I agree, I wasn't really talking about specialized groups like this.
>>  
>>
>>>
>>>
>>> > 
>>> > Why would MWI or evolution place a high value on leadership or 
>>> success? It 
>>> > seems just the opposite. What difference does it make if you succeed 
>>> here 
>>> > and now, if you implicitly fail elsewhere? MWI doesn't seem to 
>>> describe any 
>>> > universe that could ever matter to anyone. It's the Occam's 
>>> catastrophe 
>>> > factor. 
>>>
>>> Highly speculative and non-rigorous: 
>>> You can see it differently if you can assume self-sampling. Let's 
>>> assume everything is conscious, even rocks. A rock is so simple that, 
>>> for it, a millennium probably feels like a second. It does not contain 
>>> a variety of conscious states like humans do. Then, you would expect 
>>> to find yourself as a complex being. Certain branches of the 
>>> multiverse contain such complex beings, and this would make evolution 
>>> appear more effective/purposeful than it really is, from the vantage 
>>> point of these branches. 
>>>
>>
>> Even so, why would uniqueness or firstness be of value in a universe 
>> based on such immense and inescapable redundancy as MWI suggests?
>>  
>>
> The universe doesn't seem to be too fussed about immense and inescapable 
> redundancy. Have you noticed all the *space* out there??
>

Sure, there's a ridiculous amount of most things, but even so, the idea 
that every boson or fermion needs its own collection of universes for every 
interaction it has seems to be really bending over backward. It seems to me 
like an excuse note for the teacher: "I need to be excused from having to 
explain the universe, because the universe could just be one of a fantastic 
number of universes being created constantly, none of which I can explain 
either."
 

> The progress of scientific knowledge has proceeded so far in the same 
> direction: the revelation of a context ever vaster and more impersonal. 
>

Statistically that pattern is no more likely to continue than it is to be 
reversed. I think that Relativity gave us the chance to reverse, but since 
that time we have overshot the mark and pursued a path of unrealism and 
arithmetic supremacy that has already become dysfunctional but we are in 
denial about it.
 

> MWI does strike me as quite horrifying too. But that is based on a false 
> perspective in which one imagine occupying all the branches of the universe 
> and feels naturally appalled. But nobody experiences the multiverse as such 
> thank god.
>

In my understanding, if nobody can ever experience the multiverse, then the 
multiverse is identical to that which can never exist. The idea of a 
context which simply 'is' without being described as an experience is just 
the default image of a God that is inverted to become the Absolute object. 
It's a confirmation bias roote

Re: A challenge for Craig

2013-10-03 Thread Stathis Papaioannou
On 3 October 2013 14:40, Craig Weinberg  wrote:

>> The argument is simply summarised thus: it is impossible even for God
>> to make a brain prosthesis that reproduces the I/O behaviour but has
>> different qualia. This is a proof of comp, provided that brain physics
>> is computable, or of functionalism if brain physics is not computable.
>> Non-comp functionalism may entail, for example, that the replacement
>> brain contain a hypercomputer.
>
>
>
> It's like saying that if the same rent is paid for every apartment in the
> same building, then the same person must be living there, and that proves
> that rent payments are people.

The hypothesis is that if we replicate the rent then we necessarily
replicate the people. But we can think of an experiment where the rent
is replicated but the person is not replicated - there is no
contradiction here. However, if we can replicate the I/O behaviour of
the neurons but not the associated qualia there is a contradiction,
since that would allow partial zombies, which you have agreed are
absurd. Therefore, it is impossible to replicate the I/O behaviour of
the neurons without replicating the qualia. To refute this, you either
have to show that 1) replicating the I/O behaviour of the neurons
without replicating the qualia does not lead to partial zombies, or 2)
that partial zombies are not absurd.

A partial zombie is a person whose qualia change (for example, he
becomes blind) but whose behaviour does not change and who does not
notice that his qualia have changed.
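
A schematic rendering of the argument's shape (my formalization, in
propositional form, with R = the prosthesis replicates the neurons' I/O
behaviour, Q = it replicates the qualia, Z = a partial zombie exists):

\[ (R \land \lnot Q) \rightarrow Z, \qquad
   \lnot Z \;\Rightarrow\; (R \rightarrow Q). \]
Denying $Z$ while asserting $R \land \lnot Q$ is the claimed contradiction.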


-- 
Stathis Papaioannou



Re: A challenge for Craig

2013-10-03 Thread meekerdb

On 10/3/2013 4:36 PM, Pierz wrote:



The universe doesn't seem to be too fussed about immense and inescapable 
redundancy.


Of course the universe doesn't care when the immense and inescapable redundancy is in our 
model of it.


Brent



Re: A challenge for Craig

2013-10-03 Thread meekerdb

On 10/3/2013 5:07 PM, Stathis Papaioannou wrote:

You seem to be agreeing with Craig that each neuron alone is conscious.

The experiment relates to replacement of neurons which play some part
in consciousness. The 1% remaining neurons are part of a system which
will notice that the qualia are different.


That assumes that 1% are sufficient to remember all the prior qualia with enough fidelity 
to notice they are different.


Brent



Re: A challenge for Craig

2013-10-03 Thread meekerdb

On 10/3/2013 4:53 PM, Pierz wrote:



On Thursday, October 3, 2013 4:59:17 AM UTC+10, Brent wrote:

On 10/1/2013 11:49 PM, Pierz wrote:



On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote:

On 10/1/2013 9:56 PM, Pierz wrote:
Yes, I understand that to be Chalmers' main point. Although, if the qualia 
can be different, it does present issues - how much and in what way can it 
vary?

Yes, that's a question that interests me because I want to be able to build 
intelligent machines and so I need to know what qualia they will have, if 
any.  I think it will depend on their sensors and on their values/goals.  
If I build a very intelligent Mars Rover, capable of learning and 
reasoning, with a goal of discovering whether there was once life on Mars; 
then I expect it will experience pleasure in finding evidence regarding 
this.  But no matter how smart I make it, it won't experience lust.

"Reasoning" being what exactly? The ability to circumnavigate an obstacle 
for instance? There are no "rewards" in an algorithm. There are just paths 
which do or don't get followed depending on inputs. Sure, the argument that 
there must be qualia in a sufficiently sophisticated computer seems 
compelling. But the argument that there can't be seems equally so. As a 
programmer I have zero expectation that the computer I am programming will 
feel pleasure or suffering. It's just as happy to throw an exception as it 
is to complete its assigned task. *I* am the one who experiences pain when 
it hits an error! I just can't conceive of the magical point at which the 
computer goes from total indifference to giving a damn. That's the point 
Craig keeps pushing and which I agree with. Something is missing from our 
understanding.

What's missing is you're considering a computer, not a robot.  A robot has 
to have values and goals in order to act and react in the world.  It has 
complex systems and subsystems that may have conflicting subgoals, and in 
order to learn from experience it keeps a narrative history about what it 
considers significant events.  At that level it may have the consciousness 
of a mouse.  If it's a social robot, one that needs to cooperate and 
compete in a society of other persons, then it will need a self-image and 
model of other people.  In that case it's quite reasonable to suppose it 
also has qualia.

Really? You believe that a robot can experience qualia but a computer 
can't? Well that just makes no sense at all. A robot is a computer with 
peripherals. When I write the code to represent its "self image", I will 
probably write a class called "Self". But once compiled, the name of the 
class will be just another string of bits, and only the programmer will 
understand that it is supposed to represent the position, attitude and 
other states of the physical robot.

But does the robot understand the class; i.e. does it use it in its 
planning and modeling of actions, in learning, does it reason about 
itself?  Sure, it's not enough to just label something "Self" - it has to 
be something represented just as the robot represents the world in order 
to interact successfully.

Do the peripherals need to be real or can they just be simulated?

They can be simulated if they only have to interact with a simulated world.

Brent

Does a brain in a Futurama-style jar lose its qualia because it's now a 
computer not a robot? Come on.

I'm curious what the literature has to say about that. And if 
functionalism means reproducing more than the mere functional output of a 
system, if it potentially means replication down to the elementary 
particles and possibly their quantum entanglements, then duplication 
becomes impossible, not merely technically but in principle. That seems 
against the whole point of functionalism - as the idea of "function" is 
reduced to something almost meaningless.

I think functionalism must be confined to the classical functions, 
discounting the quantum level effects.  But it must include some behavior 
that is almost entirely internal - e.g. planning, imagining.  Excluding 
quantum entanglements isn't arbitrary; there cannot have been any 
evolution of goals and values based on quantum entanglement (beyond the 
statistical effects that produce decoherence and quasi-classical behavior).

But what do "planning" and "imagining" mean except their functional 
outputs? It shouldn't matter to you how the planning occurs - it's an 
"implementation detail" in development speak.

You can ask a person about plans and imaginings, and speech in response is 
an action.

Your argument may be valid regarding quantum entanglement, but it is still 
an argument based on what "seems to make sense" rather than on genuine 
understanding of the relationship between functions and their putative 
qualia.

Re: A challenge for Craig

2013-10-04 Thread Craig Weinberg


On Thursday, October 3, 2013 11:48:40 PM UTC-4, Brent wrote:
>
>  On 10/3/2013 4:36 PM, Pierz wrote:
>  
>   
>>  
> The universe doesn't seem to be too fussed about immense and inescapable 
> redundancy.
>
>
> Of course the universe doesn't care when the immense and inescapable 
> redundancy is in our model of it.
>

Yet under MWI, the multiverse would have to 'care' enough to know the difference between unity and multiplicity. The idea of a multiverse or universe is also in our model of it, but since we too are made of the same elementary physics that everything else in the universe is made of, the difference between any model we have of the universe and any modeling capacity that can exist in the universe could only be one of degree, not of kind.

All models make sense because they are based on some sense that the 
universe makes. Whatever that elementary sense is cannot be a blind 
statistical exhaustion. As far as I can tell, it must be coherent, 
consistent, sensitive and creative. Once you have coherence and 
sensitivity, then you can mask it with insensitivity to generate 
multiplicity, but it might be more like perceptual fill-in on every level - 
a pseudo-multiplicity rather than a Planck level, granular realism. 
Granularity is a model generated by visual and tactile perception as far as 
I know.

Craig

 

>
> Brent
>  



Re: A challenge for Craig

2013-10-04 Thread Bruno Marchal


On 02 Oct 2013, at 19:20, Craig Weinberg wrote:




On Wednesday, October 2, 2013 12:26:45 PM UTC-4, Bruno Marchal wrote:

On 02 Oct 2013, at 06:56, Pierz wrote:




On Wednesday, October 2, 2013 12:46:17 AM UTC+10, Bruno Marchal  
wrote:
Then the reasoning shows (at a meta-level, made possible with the  
assumption used) how consciousness and beliefs (more or less  
deluded) in physical realities develop in arithmetic.


Are 'beliefs in' physical realities the same as experiencing the realism of public physics, though? For instance, consider whether I should avoid driving recklessly in a driving game in the same way as I would in my actual car. Because I believe that the consequences of a real-life collision are more severe than a game collision, I would drive more conservatively in real life. That's all OK, but a belief about consequences would not generate realistic qualia. If someone held a gun to my head while I play the racing game, the game would not become any more realistic. I always feel like there is an equivalence between belief and qualia being implied here that is not the case. It's along the lines of assuming that a hypnotic state can fully replace reality. If that were the case, of course, everybody would be lining up to get hypnotized. There is some permeability there, but I think it's simplistic to imply that the aggregate of all qualia arises purely from the arbitrary tokenization of beliefs.



Unless the tokenization is made explicit, and then your nuance should be captured by the nuance between (Bp & Dt, intelligible matter) and (Bp & Dt & p, sensible matter).







But that's the mathematical (arithmetical) part. In UDA it is just  
shown that if comp is true (an hypothesis on consciousness) then  
physics is a branch of arithmetic. More precisely a branch of the  
ideally self-referentially correct machine's theology. (always in  
the Greek sense).


There is no pretense that comp is true, but if it is true, the  
correct "QM" cannot postulate the wave, it has to derive the wave  
from the numbers. That's what UDA shows: a problem. AUDA (the  
machine's interview) provides the only path (by Gödel, Löb, Solovay)  
capable of relating the truth and all machine's points of view.


There will be many ways to extract physics from the numbers, but  
interviewing the self-introspecting universal machine is the only  
way to get not just the laws of physics, but also why it can hurt,  
and why a part of that seems to be necessarily not functional.


I don't think that an interview with anyone can explain why they can hurt, unless you have already naturalized an expectation of pain. In other words, if we don't presume that the universal machine experiences anything, there is no need to invent qualia or experience to justify any mathematical relation. If mathematically all that you need is non-functional, secret kinds of variable labels to represent machine states, I don't see why we should assume they are qualitative. If anything, the unity of arithmetic truth would demand a single sensory channel that constitutes all possible I/O.


But then you get zombies, which make no sense with comp. But you are  
right, I have to attribute consciousness to all universal machines, at  
the start. That consciousness will be a computer science theoretical  
semantical fixed point, that is something that the machine can "know",  
but cannot prove ("know" in a larger sense than the Theaetetus'  
notion, it is more an unconscious bet than a belief or proof). (Cf  
also Helmholtz, and the idea that perception is a form of  
extrapolation).


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-04 Thread Bruno Marchal


On 02 Oct 2013, at 20:48, meekerdb wrote:


On 10/2/2013 2:04 AM, Russell Standish wrote:

On Tue, Oct 01, 2013 at 10:09:03AM -0700, meekerdb wrote:

On 10/1/2013 4:13 AM, Bruno Marchal wrote:

Note also that the expression "computations have qualia" can be misleading. A computation has no qualia, strictly speaking. Only a person supported by an infinity of computations can be said to have qualia, or to live qualia.

Why an infinity of computations?? That would preclude my building an intelligent robot having qualia, since its computations would always be finite. And I doubt there is room in my head for infinite computations - certainly not digital ones.


He is alluding to the universal dovetailer here, which contains an infinite number of distinct computations that implement any given conscious state.

However, it is not clear that it is necessary for it to be infinite - in a non-robust world that doesn't contain a UD, we still consider the possibility of conscious computations in the MGA.
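
For readers unfamiliar with the term, a minimal toy sketch of what dovetailing means (hypothetical code, not the actual UD): interleave the execution of every program so that each one receives unboundedly many steps, and no non-halting program can block the rest.

from typing import Iterator, Tuple

def run_one_step(program: int, step: int) -> Tuple[int, int]:
    # Stand-in for "execute step number `step` of program number `program`";
    # a real universal dovetailer would dispatch to an interpreter here.
    return (program, step)

def dovetail(rounds: int) -> Iterator[Tuple[int, int]]:
    # In round n, run one further step of each of the programs 0..n, so
    # every program i eventually has steps 0, 1, 2, ... executed.
    for n in range(rounds):
        for i in range(n + 1):
            yield run_one_step(i, n - i)

for executed in dovetail(4):
    print(executed)  # (0, 0) (0, 1) (1, 0) (0, 2) (1, 1) (2, 0) ...

Left running forever, this schedule executes every step of every program, which is the sense in which the UD's trace contains infinitely many distinct computations passing through any given state.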


Yes, I know what he is alluding to. But if it really does take all those infinite threads of computation to realize conscious states, then I think that is the same as saying it takes the underlying physics of a brain (or computer) to realize consciousness. But then Bruno's program of explaining things from computation hasn't avoided relying on the physical?


It just means that human consciousness relies on the physical, but the physical itself relies on relative statistics made on infinities of computations + the self-referential logical point of view. I gave the equation: on the left side you have something defining the physical, and on the right side purely logico-arithmetical notions. I guess we will come back to this (as I have already given those equations) sooner or later. It is quite technical (as can be expected).


Bruno


http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-04 Thread Bruno Marchal


On 02 Oct 2013, at 21:30, meekerdb wrote:


On 10/2/2013 6:35 AM, Bruno Marchal wrote:


On 01 Oct 2013, at 19:09, meekerdb wrote:


On 10/1/2013 4:13 AM, Bruno Marchal wrote:
Note also that the expression "computations have qualia" can be misleading. A computation has no qualia, strictly speaking. Only a person supported by an infinity of computations can be said to have qualia, or to live qualia.


Why an infinity of computations??


Because the FPI bears on arithmetic, which contains the running of all universal machines implementing your code, below your substitution level.


With comp you can attach a mind to some body, but you cannot attach one token body to a mind; you can attach only infinities of such bodies, through the interference of all the computations which realize them in arithmetic.






That would preclude my building an intelligent robot having qualia, since its computations would always be finite. And I doubt there is room in my head for infinite computations - certainly not digital ones.


You are right. We cannot build an intelligent being with qualia. The computer, and the 3p-robot, does not create that consciousness; it will only help a consciousness, which is already in Platonia, to manifest itself relatively to you, with the same statistics for your and the robot's continuations.


When a consciousness is not manifested, what is its content?


Good question. Difficult. Some time ago, I would have said that consciousness exists only in manifested form.
But I am much less sure about that, and such a consciousness state might be something like heavenly bliss or hellish terror, depending on the path along which you lost the ability of manifesting yourself.


Bruno





Brent



It is confusing, but this is because we tend to associate mind with a brain or robot; yet mind is an attribute of a person, and a brain or a body is only needed for a relative purpose.


Bruno




http://iridia.ulb.ac.be/~marchal/










http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-04 Thread Bruno Marchal


On 02 Oct 2013, at 22:12, meekerdb wrote:


On 10/2/2013 9:26 AM, Bruno Marchal wrote:
I agree with Brent though on this. Your UDA proceeds on the basis  
that a computer in a single reality (not an infinite sum of  
calculations - that comes later) can have a 1p.


Yes. It has 1p; it is not a zombie. But that 1p, for him, is really defined by a cloud of similar and variant computations corresponding to its indeterminacy domain in the universal dovetailing (= already in a tiny part of arithmetic).


And doesn't this cloud correspond to the fuzzy, quantum description of the underlying physics, i.e. the quantum state of the brain? And isn't it, per Tegmark, quasi-classical?


Hopefully, because if it is not, it means that either computationalism or quantum mechanics is wrong.


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-04 Thread Bruno Marchal


On 03 Oct 2013, at 02:23, Stathis Papaioannou wrote:


On 2 October 2013 00:46, Bruno Marchal  wrote:


On 01 Oct 2013, at 15:31, Pierz wrote:

Maybe. It would be a lot more profound if we definitely *could* reproduce the brain's behaviour. The devil is in the detail, as they say. But a challenge to Chalmers' position has occurred to me. It seems to me that Bruno has convincingly argued that *if* comp holds, then consciousness supervenes on the computation, not on the physical matter. But functionalism suggests that what counts is the output, not the manner in which it is arrived at. That is to say, the brain or whatever neural subunit or computer is doing the processing is a black box. You input something and then read the output, but the intervening steps don't matter. Consider what this might mean in terms of a brain.




That's not clear to me. The question is "output of what". If it is the entire subject, this is more behaviorism than functionalism.
Putnam's functionalism makes clear that we have to take the output of the neurons into account.
Comp is functionalism, but with the idea that we don't know the level of substitution, so it might be that we have to take into account the output of the gluons in our atoms (so comp makes clear that it only asks for the existence of a level of substitution, and then shows that no machine can know for sure its substitution level, making Putnam's sort of functionalism a bit fuzzy).





Let's say a vastly advanced alien species comes to earth. It looks at our puny little brains and decides to make one to fool us. This constructed person/brain receives normal conversational input and outputs conversation that it knows will perfectly mimic a human being. But in fact the computer doing this processing is vastly superior to the human brain. It's like a modern PC emulating a TRS-80, except much more so. When it computes/thinks up a response, it draws on a vast amount of knowledge, intelligence and creativity and accesses qualia undreamed of by a human. Yet its response will completely fool any normal human and will pass Turing tests till the cows come home. What this thought experiment shows is that, while half-qualia may be absurd, it most certainly is possible to reproduce the outputs of a brain without replicating its qualia. It might have completely different qualia, just as a very good actor's emotions can't be distinguished from the real thing, even though his or her internal experience is quite different. And if qualia can be quite different even though the functional outputs are the same, this does seem to leave functionalism in something of a quandary. All we can say is that there must be some kind of qualia occurring, rather a different result from what Chalmers is claiming. When we extend this type of scenario to artificial neurons or partial brain prostheses as in Chalmers' paper, we quickly run up against perplexing problems. Imagine the advanced alien provides these prostheses. It takes the same inputs and generates the same correct outputs, but it processes those inputs within a much vaster, more complex system. Does the brain utilizing this advanced prosthesis experience a kind of expanded consciousness because of this, without that difference being detectable? Or do the qualia remain somehow confined to the prosthesis (whatever that means)? These crazy quandaries suggest to me that basically, we don't know shit.



Hmm, I am not convinced. Chalmers' argument is that to get a philosophical zombie, the fading argument shows that you have to go through half-qualia, which is absurd. His goal (here) is to show that "no qualia" is absurd.

That the qualia can be different is known in the qualia literature, and is a big open problem per se. But Chalmers argues only that "no qualia" is absurd, indeed because it would need some absurd notion of intermediate half-qualia.

Maybe I miss a point. Stathis can clarify this further.


The argument is simply summarised thus: it is impossible even for God
to make a brain prosthesis that reproduces the I/O behaviour but has
different qualia. This is a proof of comp,


Hmm... I can agree, but eventually no God can make such a prosthesis, only because the qualia are an attribute of the "immaterial person", and not of the brain, body, or computer. Then the prosthesis will manifest the person if it emulates the correct level.
If not, even I can make a brain prosthesis that reproduces the consciousness of a sleeping dreaming person, ...
OK, I guess you mean the full I/O behavior, but for this, I am not even sure that my actual current brain can be enough, ... if only because "I" from the first person point of view is distributed in infinities of computations, and I cannot exclude that the qualia (certainly stable lasting qualia) might rely on that.






provided that brain physics is computable, or functionalism if brain physics is not computable.

Re: A challenge for Craig

2013-10-04 Thread Craig Weinberg


On Friday, October 4, 2013 10:39:44 AM UTC-4, Bruno Marchal wrote:
>
>
> On 02 Oct 2013, at 19:20, Craig Weinberg wrote: 
>
> > 
> > 
> > On Wednesday, October 2, 2013 12:26:45 PM UTC-4, Bruno Marchal wrote: 
> > 
> > On 02 Oct 2013, at 06:56, Pierz wrote: 
> > 
> >> 
> >> 
> >> On Wednesday, October 2, 2013 12:46:17 AM UTC+10, Bruno Marchal   
> >> wrote: 
> > Then the reasoning shows (at a meta-level, made possible with the   
> > assumption used) how consciousness and beliefs (more or less   
> > deluded) in physical realities develop in arithmetic. 
> > 
> > Are 'beliefs in' physical realities the same as experiencing the
> > realism of public physics, though? For instance, consider whether I
> > should avoid driving recklessly in a driving game in the same way as I
> > would in my actual car. Because I believe that the consequences of a
> > real-life collision are more severe than a game collision, I would
> > drive more conservatively in real life. That's all OK, but a belief
> > about consequences would not generate realistic qualia. If someone
> > held a gun to my head while I play the racing game, the game would not
> > become any more realistic. I always feel like there is an equivalence
> > between belief and qualia being implied here that is not the case.
> > It's along the lines of assuming that a hypnotic state can fully
> > replace reality. If that were the case, of course, everybody would be
> > lining up to get hypnotized. There is some permeability there, but I
> > think it's simplistic to imply that the aggregate of all qualia arises
> > purely from the arbitrary tokenization of beliefs.
>
>
> Unless the tokenization is made explicit, and then your nuance should
> be captured by the nuance between (Bp & Dt, intelligible matter) and
> (Bp & Dt & p, sensible matter).
>

Can't you just add an "& p" flag to your token? It need not be sensible or 
intelligible, just consistent.
 

>
>
>
> > 
> > 
> > But that's the mathematical (arithmetical) part. In UDA it is just   
> > shown that if comp is true (an hypothesis on consciousness) then   
> > physics is a branch of arithmetic. More precisely a branch of the   
> > ideally self-referentially correct machine's theology. (always in   
> > the Greek sense). 
> > 
> > There is no pretense that comp is true, but if it is true, the   
> > correct "QM" cannot postulate the wave, it has to derive the wave   
> > from the numbers. That's what UDA shows: a problem. AUDA (the   
> > machine's interview) provides the only path (by Gödel, Löb, Solovay)   
> > capable of relating the truth and all machine's points of view. 
> > 
> > There will be many ways to extract physics from the numbers, but   
> > interviewing the self-introspecting universal machine is the only   
> > way to get not just the laws of physics, but also why it can hurt,   
> > and why a part of that seems to be necessarily not functional. 
> > 
> > I don't think that an interview with anyone can explain why they can   
> > hurt, unless you have already naturalized an expectation of pain. In   
> > other words, if we don't presume that universal machine experiences   
> > anything, there is no need to invent qualia or experience to justify   
> > any mathematical relation. If mathematically all that you need is   
> > non-functional, secret kinds of variable labels to represent machine   
> > states, I don't see why we should assume they are qualitative. If   
> > anything, the unity of arithmetic truth would demand a single   
> > sensory channel that constitutes all possible I/O. 
>
> But then you get zombies, which make no sense with comp.


Because comp is blind to authenticity, which works perfectly: Zombie-hood makes no sense to zombies.

 

> But you are   
> right, I have to attribute consciousness to all universal machines, at   
> the start. That consciousness will be a computer science theoretical   
> semantical fixed point, that is something that the machine can "know",   
> but cannot prove ("know" in a larger sense than the Theaetetus'   
> notion, it is more an unconscious bet than a belief or proof). (Cf   
> also Helmholtz, and the idea that perception is a form of   
> extrapolation). 
>

It seems to me that treating consciousness as a zero dimensional point 
intersecting two logical sets (known data and unprovable data) is accurate 
from the point of view of Comp, but that's only because Comp is by 
definition blind to qualia. If you are blind, you can define sight as a 
capacity that you know you are lacking, but you can't prove it (since you 
can't literally see what you are missing). 

The Comp perspective can't account for feeling for what it actually is (a 
direct aesthetic appreciation), it can only describe what kinds of things 
happen as a consequence of unprovable knowledge.

Pansensitivity (P) proposes that sensation is a universal property. 


Primordial Pansensitivity (PP) propos

Re: A challenge for Craig

2013-10-04 Thread meekerdb

On 10/4/2013 7:40 AM, Bruno Marchal wrote:

When a consciousness is not manifested, what is its content?


Good question. Difficult. Some time ago, I would have said that consciousness exists only in manifested form.


That's what I would say.

But I am much less sure about that, and such a consciousness state might be something like heavenly bliss or hellish terror, depending on the path along which you lost the ability of manifesting yourself.


Recognizing that "consciousness" means different things: perception, self-modeling, 
awareness of self-modeling, self-evaluation,... I think we can at least see what it is 
like to not have some of these forms of consciousness because we generally have at most 
one at a given time - and sometimes we don't have any of them.


Brent



Re: A challenge for Craig

2013-10-04 Thread Stathis Papaioannou

On Friday, October 4, 2013, meekerdb wrote:

>  On 10/3/2013 5:07 PM, Stathis Papaioannou wrote:
>
>  You seem to be agreeing with Craig that each neuron alone is conscious.
>
>  The experiment relates to replacement of neurons which play some part
> in consciousness. The 1% remaining neurons are part of a system which
> will notice that the qualia are different.
>
>
> That assumes that 1% are sufficient to remember all the prior qualia with
> enough fidelity to notice they are different.
>

No, I assume the system of which the neurons are a part will notice a
difference. If not, then the replacement has not changed the qualia.



Re: A challenge for Craig

2013-10-04 Thread meekerdb

On 10/4/2013 7:18 PM, Stathis Papaioannou wrote:


On Friday, October 4, 2013, meekerdb wrote:

On 10/3/2013 5:07 PM, Stathis Papaioannou wrote:

You seem to be agreeing with Craig that each neuron alone is conscious.

The experiment relates to replacement of neurons which play some part
in consciousness. The 1% remaining neurons are part of a system which
will notice that the qualia are different.


That assumes that 1% are sufficient to remember all the prior qualia with 
enough
fidelity to notice they are different.


No, I assume the system of which the neurons are a part will notice a difference. If 
not, then the replacement has not changed the qualia.


I don't understand that.  If the system can notice a difference, why does it need that 
1%?  Why can't it detect a difference with 0% of the original remaining?  What's the 1% doing?


Brent



Re: A challenge for Craig

2013-10-04 Thread Stathis Papaioannou
On 5 October 2013 12:53, meekerdb  wrote:
> On 10/4/2013 7:18 PM, Stathis Papaioannou wrote:
>
>
> On Friday, October 4, 2013, meekerdb wrote:
>>
>> On 10/3/2013 5:07 PM, Stathis Papaioannou wrote:
>>
>> You seem to be agreeing with Craig that each neuron alone is conscious.
>>
>> The experiment relates to replacement of neurons which play some part
>> in consciousness. The 1% remaining neurons are part of a system which
>> will notice that the qualia are different.
>>
>>
>> That assumes that 1% are sufficient to remember all the prior qualia with
>> enough fidelity to notice they are different.
>
>
> No, I assume the system of which the neurons are a part will notice a
> difference. If not, then the replacement has not changed the qualia.
>
>
> I don't understand that.  If the system can notice a difference, why does it
> need that 1%?  Why can't it detect a difference with 0% of the original
> remaining?  What's the 1% doing?

The question is whether swapping out part of the system for a
functional equivalent will change the qualia the system experiences
without changing the behaviour. I don't think this is possible, for if
the qualia change the subject would (at least) notice and say that the
qualia have changed, which constitutes a change in behaviour.
Therefore, the qualia and the behaviour are somehow inextricably
linked. The alternative, that the qualia are substrate dependent,
can't work.


-- 
Stathis Papaioannou
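
The structure of this replacement argument can be made concrete with a toy (hypothetical code, not a claim about brains): if the replacement part reproduces the original part's input/output map exactly, then no downstream test the rest of the system can run will distinguish the two - and "noticing" would itself be such a test.

def biological_unit(x: float) -> int:
    # the original component, reduced to its input/output map
    return 1 if x > 0.5 else 0

def prosthetic_unit(x: float) -> int:
    # the replacement, functionally identical by construction
    return 1 if x > 0.5 else 0

def rest_of_system(unit, inputs):
    # everything downstream sees only the unit's outputs
    return [unit(x) for x in inputs]

inputs = [0.1, 0.6, 0.9, 0.3]
assert rest_of_system(biological_unit, inputs) == \
       rest_of_system(prosthetic_unit, inputs)
# A report "my qualia have changed" would be an output difference,
# which the equality above rules out.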



Re: A challenge for Craig

2013-10-04 Thread meekerdb

On 10/4/2013 9:46 PM, Stathis Papaioannou wrote:

On 5 October 2013 12:53, meekerdb  wrote:

On 10/4/2013 7:18 PM, Stathis Papaioannou wrote:


On Friday, October 4, 2013, meekerdb wrote:

On 10/3/2013 5:07 PM, Stathis Papaioannou wrote:

You seem to be agreeing with Craig that each neuron alone is conscious.

The experiment relates to replacement of neurons which play some part
in consciousness. The 1% remaining neurons are part of a system which
will notice that the qualia are different.


That assumes that 1% are sufficient to remember all the prior qualia with
enough fidelity to notice they are different.


No, I assume the system of which the neurons are a part will notice a
difference. If not, then the replacement has not changed the qualia.


I don't understand that.  If the system can notice a difference, why does it
need that 1%?  Why can't it detect a difference with 0% of the original
remaining?  What's the 1% doing?

The question is whether swapping out part of the system for a
functional equivalent will change the qualia the system experiences
without changing the behaviour. I don't think this is possible, for if
the qualia change the subject would (at least) notice


That's the point I find questionable. Why couldn't some qualia change in minor ways and the system *not* notice, because the system doesn't have any absolute memory to which it can compare qualia? Have you ever gone back to a house you lived in as a small child? Looks a lot smaller, doesn't it?


Brent


and say that the
qualia have changed, which constitutes a change in behaviour.
Therefore, the qualia and the behaviour are somehow inextricably
linked. The alternative, that the qualia are substrate dependent,
can't work.






Re: A challenge for Craig

2013-10-05 Thread Bruno Marchal


On 04 Oct 2013, at 19:22, Craig Weinberg wrote:




On Friday, October 4, 2013 10:39:44 AM UTC-4, Bruno Marchal wrote:

On 02 Oct 2013, at 19:20, Craig Weinberg wrote:

>
>
> On Wednesday, October 2, 2013 12:26:45 PM UTC-4, Bruno Marchal  
wrote:

>
> On 02 Oct 2013, at 06:56, Pierz wrote:
>
>>
>>
>> On Wednesday, October 2, 2013 12:46:17 AM UTC+10, Bruno Marchal
>> wrote:
> Then the reasoning shows (at a meta-level, made possible with the
> assumption used) how consciousness and beliefs (more or less
> deluded) in physical realities develop in arithmetic.
>
> Are 'beliefs in' physical realities the same as experiencing the
> realism of public physics, though? For instance, consider whether I
> should avoid driving recklessly in a driving game in the same way as I
> would in my actual car. Because I believe that the consequences of a
> real-life collision are more severe than a game collision, I would
> drive more conservatively in real life. That's all OK, but a belief
> about consequences would not generate realistic qualia. If someone
> held a gun to my head while I play the racing game, the game would not
> become any more realistic. I always feel like there is an equivalence
> between belief and qualia being implied here that is not the case.
> It's along the lines of assuming that a hypnotic state can fully
> replace reality. If that were the case, of course, everybody would be
> lining up to get hypnotized. There is some permeability there, but I
> think it's simplistic to imply that the aggregate of all qualia arises
> purely from the arbitrary tokenization of beliefs.


Unless the tokenization is made explicit, and then your nuance should be captured by the nuance between (Bp & Dt, intelligible matter) and (Bp & Dt & p, sensible matter).

Can't you just add an "& p" flag to your token? It need not be  
sensible or intelligible, just consistent.


Consistent = ~[]f = <>t = Dt. It is in the "& Dt".
But "& p" is needed to get the "sensibility", or the "connection with God" (truth). It is what makes some dream true, in some sense.
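
Spelled out (a gloss of the shorthand, with B the provability box and D its dual):

\[
Dp \equiv \lnot B \lnot p, \qquad Dt \equiv \lnot B f = \Diamond \top
\]
\[
Bp \land Dt \ \text{(intelligible matter)}, \qquad Bp \land Dt \land p \ \text{(sensible matter)}
\]

Roughly: the "& p" cannot be carried as an internal flag, because p ranges over truth itself, which the machine can bet on but cannot define or prove about its own states; that is why it connects the token to "God" (truth) rather than to anything the machine computes.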








>
>
> But that's the mathematical (arithmetical) part. In UDA it is just
> shown that if comp is true (an hypothesis on consciousness) then
> physics is a branch of arithmetic. More precisely a branch of the
> ideally self-referentially correct machine's theology. (always in
> the Greek sense).
>
> There is no pretense that comp is true, but if it is true, the
> correct "QM" cannot postulate the wave, it has to derive the wave
> from the numbers. That's what UDA shows: a problem. AUDA (the
> machine's interview) provides the only path (by Gödel, Löb, Solovay)
> capable of relating the truth and all machine's points of view.
>
> There will be many ways to extract physics from the numbers, but
> interviewing the self-introspecting universal machine is the only
> way to get not just the laws of physics, but also why it can hurt,
> and why a part of that seems to be necessarily not functional.
>
> I don't think that an interview with anyone can explain why they can
> hurt, unless you have already naturalized an expectation of pain. In
> other words, if we don't presume that universal machine experiences
> anything, there is no need to invent qualia or experience to justify
> any mathematical relation. If mathematically all that you need is
> non-functional, secret kinds of variable labels to represent machine
> states, I don't see why we should assume they are qualitative. If
> anything, the unity of arithmetic truth would demand a single
> sensory channel that constitutes all possible I/O.

But then you get zombies, which make no sense with comp.

Because comp is blind to authenticity, which works perfectly: Zombie-hood makes no sense to zombies.


?




But you are
right, I have to attribute consciousness to all universal machines, at
the start. That consciousness will be a computer science theoretical
semantical fixed point, that is something that the machine can "know",
but cannot prove ("know" in a larger sense than the Theaetetus'
notion, it is more an unconscious bet than a belief or proof). (Cf
also Helmholtz, and the idea that perception is a form of
extrapolation).

It seems to me that treating consciousness as a zero dimensional point


?



intersecting two logical sets (known data and unprovable data) is  
accurate from the point of view of Comp, but that's only because  
Comp is by definition blind to qualia.


It is not. The arithmetical definition (the Bp & Dt & p) recovers qualia theories (notably Bell's quantum logic) right where we can expect it.





If you are blind, you can define sight as a capacity that you know  
you are lacking, but you can't prove it (since you can't literally  
see what you are missing).


OK. But you beg the question of why a machine needs to be blind (or needs to be unable to instantiate a non-blind person).






The Comp perspective can't account for feeling for what it actually  
i

Re: A challenge for Craig

2013-10-05 Thread Bruno Marchal


On 04 Oct 2013, at 20:06, meekerdb wrote:


On 10/4/2013 7:40 AM, Bruno Marchal wrote:

When a consciousness is not manifested, what is its content?


Good question. Difficult. Some time ago, I would have said that consciousness exists only in manifested form.


That's what I would say.


I have to confess that salvia has put a doubt on this. I cannot reject this as a hallucination, because the experience does not depend on the fact that it is a hallucination.
A bit like a blind person cannot say that he saw something but that it was a hallucination.

I have no certainty at all in those matters.





But I am much less sure about that, and such a consciousness state might be something like heavenly bliss or hellish terror, depending on the path along which you lost the ability of manifesting yourself.


Recognizing that "consciousness" means different things: perception,  
self-modeling, awareness of self-modeling, self-evaluation,... I  
think we can at least see what it is like to not have some of these  
forms of consciousness because we generally have at most one at a  
given time - and sometimes we don't have any of them.


Here the salvia experience is tremendously interesting, as we lose many things, like memory, sense of self, body, notions of time and space, etc., yet we remain conscious, with the weird feeling that we are conscious for the first time, and the last time, and that we remember something that we know better than everything we might have believed to know.
It is a quite paradoxical state of mind, and coming back from it, it gives a sense that consciousness is fundamentally something static, making time illusory. I thought that consciousness needed that time illusion, but now I am less sure about that.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-05 Thread Stathis Papaioannou
On 5 October 2013 15:25, meekerdb  wrote:

>> The question is whether swapping out part of the system for a
>> functional equivalent will change the qualia the system experiences
>> without changing the behaviour. I don't think this is possible, for if
>> the qualia change the subject would (at least) notice
>
>
> That's the point I find questionable.  Why couldn't some qualia change in
> minor ways and the system *not* notice because the system doesn't have any
> absolute memory to which it can compare qualia. Have you ever gone back to a
> house you lived in as a small child? Looks a lot smaller doesn't it.
>
> Brent

If a normal brain does not notice changes or falsely notices changes
then a brain with functionally identical implants will also fail to
notice or falsely notice these changes.

>> and say that the
>> qualia have changed, which constitutes a change in behaviour.
>> Therefore, the qualia and the behaviour are somehow inextricably
>> linked. The alternative, that the qualia are substrate dependent,
>> can't work.
>>
>>
>



-- 
Stathis Papaioannou



Re: A challenge for Craig

2013-10-05 Thread meekerdb

On 10/5/2013 5:38 AM, Stathis Papaioannou wrote:

On 5 October 2013 15:25, meekerdb  wrote:


The question is whether swapping out part of the system for a
functional equivalent will change the qualia the system experiences
without changing the behaviour. I don't think this is possible, for if
the qualia change the subject would (at least) notice


That's the point I find questionable.  Why couldn't some qualia change in
minor ways and the system *not* notice because the system doesn't have any
absolute memory to which it can compare qualia. Have you ever gone back to a
house you lived in as a small child? Looks a lot smaller doesn't it.

Brent

If a normal brain does not notice changes or falsely notices changes
then a brain with functionally identical implants will also fail to
notice or falsely notice these changes.


But now this is a circular definition of "functional". It no longer refers just to what is 3p observable; now "functionally identical" is to include 1p qualia, and the argument purporting to prove that qualia must be preserved if behavior is preserved is turned into a tautology.


Brent




and say that the
qualia have changed, which constitutes a change in behaviour.
Therefore, the qualia and the behaviour are somehow inextricably
linked. The alternative, that the qualia are substrate dependent,
can't work.










Re: A challenge for Craig

2013-10-05 Thread Stathis Papaioannou




> On 6 Oct 2013, at 7:03 am, meekerdb  wrote:
> 
>> On 10/5/2013 5:38 AM, Stathis Papaioannou wrote:
>> On 5 October 2013 15:25, meekerdb  wrote:
>> 
 The question is whether swapping out part of the system for a
 functional equivalent will change the qualia the system experiences
 without changing the behaviour. I don't think this is possible, for if
 the qualia change the subject would (at least) notice
>>> 
>>> That's the point I find questionable.  Why couldn't some qualia change in
>>> minor ways and the system *not* notice because the system doesn't have any
>>> absolute memory to which it can compare qualia. Have you ever gone back to a
>>> house you lived in as a small child? Looks a lot smaller doesn't it.
>>> 
>>> Brent
>> If a normal brain does not notice changes or falsely notices changes
>> then a brain with functionally identical implants will also fail to
>> notice or falsely notice these changes.
> 
> But now this is a circular definition of "functional". It no longer refers
> just to what is 3p observable; now "functionally identical" is to include 1p
> qualia, and the argument purporting to prove that qualia must be preserved if
> behavior is preserved is turned into a tautology.

No, it refers only to externally observable behaviour. If your qualia are 
different this may affect your behaviour even if it's just to report that your 
qualia are different. But how could your behaviour be affected if the 
replacement is functionally identical? And if the qualia can change without 
behaviour changing then in what sense have the qualia changed? Not a minor 
change that doesn't get noticed but a gross change, like going completely blind 
or losing the ability to understand language. If consciousness is substrate 
dependent then such a thing should be possible.



Re: A challenge for Craig

2013-10-05 Thread meekerdb

On 10/5/2013 1:25 PM, Stathis Papaioannou wrote:





On 6 Oct 2013, at 7:03 am, meekerdb  wrote:


On 10/5/2013 5:38 AM, Stathis Papaioannou wrote:
On 5 October 2013 15:25, meekerdb  wrote:


The question is whether swapping out part of the system for a
functional equivalent will change the qualia the system experiences
without changing the behaviour. I don't think this is possible, for if
the qualia change the subject would (at least) notice

That's the point I find questionable.  Why couldn't some qualia change in
minor ways and the system *not* notice because the system doesn't have any
absolute memory to which it can compare qualia. Have you ever gone back to a
house you lived in as a small child? Looks a lot smaller doesn't it.

Brent

If a normal brain does not notice changes or falsely notices changes
then a brain with functionally identical implants will also fail to
notice or falsely notice these changes.

But now this is a circular definition of "functional". It no longer refers just to what is 3p observable; now "functionally identical" is to include 1p qualia, and the argument purporting to prove that qualia must be preserved if behavior is preserved is turned into a tautology.

No, it refers only to externally observable behaviour. If your qualia are 
different this may affect your behaviour even if it's just to report that your 
qualia are different. But how could your behaviour be affected if the 
replacement is functionally identical? And if the qualia can change without 
behaviour changing then in what sense have the qualia changed? Not a minor 
change that doesn't get noticed but a gross change, like going completely blind 
or losing the ability to understand language. If consciousness is substrate 
dependent then such a thing should be possible.


So you agree that there could be minor or subtle changes that went unnoticed?

Brent



Re: A challenge for Craig

2013-10-05 Thread Stathis Papaioannou
On 6 October 2013 08:13, meekerdb  wrote:

> So you agree that there could be minor or subtle changes that went
> unnoticed?

Yes, but it makes no difference to the argument, since subtle changes
may be missed with a normal brain. To disprove functionalism you would
have to show that it is possible to have an arbitrarily large change
in consciousness and yet the subject would be unable, under any
circumstances, to notice a change, nor would any change be externally
observable.


-- 
Stathis Papaioannou



Re: A challenge for Craig

2013-10-05 Thread Stathis Papaioannou
On 5 October 2013 00:40, Bruno Marchal  wrote:

>> The argument is simply summarised thus: it is impossible even for God
>> to make a brain prosthesis that reproduces the I/O behaviour but has
>> different qualia. This is a proof of comp,
>
>
> Hmm... I can agree, but eventually no God can make such a prosthesis, only
> because the qualia are an attribute of the "immaterial person", and not of
> the brain, body, or computer.  Then the prosthesis will manifest the person
> if it emulates the correct level.

But if the qualia are attributed to the substance of the physical brain, then where is the problem in making a prosthesis that replicates the behaviour but not the qualia? The problem is that it would allow one to make a partial zombie, which I think is absurd. Therefore, the qualia cannot be attributed to the substance of the physical brain.

> If not, even I can make a brain prosthesis that reproduces the consciousness
> of a sleeping dreaming person, ...
> OK, I guess you mean the full I/O behavior, but for this, I am not even sure
> that my actual current brain can be enough, ... if only because "I" from the
> first person point of view is distributed in infinities of computations, and
> I cannot exclude that the qualia (certainly stable lasting qualia) might
> rely on that.
>
>
>
>
>
>> provided that brain physics
>> is computable, or functionalism if brain physics is not computable.
>> Non-comp functionalism may entail, for example, that the replacement
>> brain contain a hypercomputer.
>
>
> OK.
>
> Bruno


-- 
Stathis Papaioannou



Re: A challenge for Craig

2013-10-06 Thread Bruno Marchal


On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote:


On 5 October 2013 00:40, Bruno Marchal  wrote:

The argument is simply summarised thus: it is impossible even for  
God

to make a brain prosthesis that reproduces the I/O behaviour but has
different qualia. This is a proof of comp,



Hmm... I can agree, but eventually no God can make such a prosthesis, only because the qualia are an attribute of the "immaterial person", and not of the brain, body, or computer. Then the prosthesis will manifest the person if it emulates the correct level.


But if the qualia are attributed to the substance of the physical brain, then where is the problem in making a prosthesis that replicates the behaviour but not the qualia?
The problem is that it would allow one to make a partial zombie, which I think is absurd. Therefore, the qualia cannot be attributed to the substance of the physical brain.


I agree.

Note that in that case the qualia are no longer attributed to an immaterial person, but to a piece of primary matter.
In that case, both comp and functionalism (in your sense, not in Putnam's usual sense of functionalism, which is a particular case of comp) are wrong.

Then, it is almost obvious that an immaterial being cannot distinguish between a primarily material incarnation and an immaterial one, as it would need some magic (non-Turing-emulable) ability to make the difference. People agreeing with this no longer need the UDA step 8 (which is an attempt to make this more rigorous or clear).

As devil's advocate, I might criticize the partial-zombie argument a little. Very often some people claim that they feel less conscious after some drinks of vodka, but that they are still able to behave normally. Of course those people are notoriously wrong. It is just that alcohol augments a fake feeling of self-confidence, which typically is not verified (in the laboratory, or more sadly on the roads). Also, they confuse "less conscious" with "blurred consciousness", I think.

So I think we are in agreement.
(I usually use "functionalism" in Putnam's sense, but yours or Chalmers' use is more logical, yet more rarely used in the community of philosophers of mind; but that's a vocabulary issue.)


Bruno





If not, even I can make a brain prosthesis that reproduces the consciousness of a sleeping dreaming person, ...
OK, I guess you mean the full I/O behavior, but for this, I am not even sure that my actual current brain can be enough, ... if only because "I" from the first person point of view is distributed in infinities of computations, and I cannot exclude that the qualia (certainly stable lasting qualia) might rely on that.






provided that brain physics
is computable, or functionalism if brain physics is not computable.
Non-comp functionalism may entail, for example, that the replacement
brain contain a hypercomputer.



OK.

Bruno



--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-06 Thread Craig Weinberg


On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:
>
>
> On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote: 
>
> > On 5 October 2013 00:40, Bruno Marchal > 
> wrote: 
> > 
> >>> The argument is simply summarised thus: it is impossible even for   
> >>> God 
> >>> to make a brain prosthesis that reproduces the I/O behaviour but has 
> >>> different qualia. This is a proof of comp, 
> >> 
> >> 
> >> Hmm... I can agree, but eventually no God can make such a prosthesis,
> >> only because the qualia are an attribute of the "immaterial person",
> >> and not of the brain, body, or computer.  Then the prosthesis will
> >> manifest the person if it emulates the correct level.
> > 
> > But if the qualia are attributed to the substance of the physical 
> > brain, then where is the problem in making a prosthesis that replicates 
> > the behaviour but not the qualia? 
> > The problem is that it would allow 
> > one to make a partial zombie, which I think is absurd. Therefore, the 
> > qualia cannot be attributed to the substance of the physical brain. 
>
> I agree. 
>
> Note that in that case the qualia are no longer attributed to an
> immaterial person, but to a piece of primary matter.
> In that case, both comp and functionalism (in your sense, not in
> Putnam's usual sense of functionalism, which is a particular case of
> comp) are wrong.
>
> Then, it is almost obvious that an immaterial being cannot distinguish
> between a primarily material incarnation and an immaterial one, as it
> would need some magic (non-Turing-emulable) ability to make the
> difference. People agreeing with this no longer need the UDA step 8
> (which is an attempt to make this more rigorous or clear).
>
> As devil's advocate, I might criticize the partial-zombie argument a
> little. Very often some people claim that they feel less conscious
> after some drinks of vodka, but that they are still able to behave
> normally. Of course those people are notoriously wrong. It is just
> that alcohol augments a fake feeling of self-confidence, which
> typically is not verified (in the laboratory, or more sadly on the
> roads). Also, they confuse "less conscious" with "blurred
> consciousness",


Why wouldn't less consciousness have the effect of seeming blurred? If your 
battery is dying in a device, the device might begin to fail in numerous 
ways, but those are all symptoms of the battery dying - of the device 
becoming less reliable as different parts are unavailable at different 
times.

Think of qualia as a character in a long story, which is divided into 
episodes. If, for instance, someone starts watching a show like Breaking 
Bad only in the last season, they have no explicit understanding of who 
Walter White is or why he behaves like he does, where Jesse came from, etc. 
They can only pick up what is presented directly in that episode, so his 
character is relatively flat. The difference between the appreciation of 
the last episode by someone who has seen the entire series on HDTV and 
someone who has only read the closed captioning of the last episode on 
Twitter is like the difference between a human being's qualia and the 
qualia which is available through a logical imitation of a human being. 

Qualia is experience which contains the felt relation to all other 
experiences; specific experiences which directly relate, and extended 
experiential contexts which extend to eternity (totality of manifested 
events so far relative to the participant plus semi-potential events which 
relate to higher octaves of their participation...the bigger picture with 
the larger now.)

Human psychology is not a monolith. Blindsight already *proves* that 'we 
can be a partial zombie' from our 1p perspective. I have tried to set out 
my solution to the combination problem here: 
http://multisenserealism.com/thesis/6-panpsychism/eigenmorphism/ 

What it means is that it is a mistake to say "we can be a partial zombie" - 
rather the evidence of brain injuries and surgeries demonstrates that the 
extent to which we are who we expect ourselves to be, or that others expect 
a person to be, can be changed in many quantitative and qualitative ways. 
We may not be less conscious after a massive debilitating stroke, but what 
is conscious after that is less of us. This is because consciousness is not 
a function or a process, it is the sole source of presence. 

Qualia is what we are made of. As human beings at this stage of human 
civilization, our direct qualia is primarily cognitive-logical-verbal. We 
identify with our ability to describe with words - to qualify other qualia 
as verbal qualia. We name our perceptions and name our naming power 'mind', 
but that is not consciousness. Logic and intellect can only name 
public-facing reductions of certain qualia (visible and tangible qualia - 
the stuff of public bodies). The name for those public-facing reductions is 
quanta, or numbers, and the totality of the playing field which can be 
used for the quanta game is called arithmetic truth.

Re: A challenge for Craig

2013-10-07 Thread Bruno Marchal


On 06 Oct 2013, at 22:00, Craig Weinberg wrote:




On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:

On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote:

> On 5 October 2013 00:40, Bruno Marchal  wrote:
>
>>> The argument is simply summarised thus: it is impossible even for
>>> God
>>> to make a brain prosthesis that reproduces the I/O behaviour but  
has

>>> different qualia. This is a proof of comp,
>>
>>
>> Hmm... I can agree, but eventually no God can make such a
>> prosthesis, only
>> because the qualia is an attribute of the "immaterial person", and
>> not of
>> the brain, body, or computer.  Then the prosthesis will manifest
>> the person
>> if it emulates the correct level.
>
> But if the qualia are attributed to the substance of the physical
> brain then where is the problem making a prosthesis that replicates
> the behaviour but not the qualia?
> The problem is that it would allow
> one to make a partial zombie, which I think is absurd. Therefore,  
the

> qualia cannot be attributed to the substance of the physical brain.

I agree.

Note that in that case the qualia is no longer attributed to an
immaterial person, but to a piece of primary matter.
In that case, both comp and functionalism (in your sense, not in
Putnam's usual sense of functionalism which is a particular case of
comp) are wrong.

Then, it is almost obvious that an immaterial being cannot distinguish
between a primarily material incarnation, and an immaterial one, as it
would need some magic (non Turing emulable) ability to make the
difference.  People agreeing with this no longer need the UDA step 8
(which is an attempt to make this more rigorous or clear).

I might criticize, as a devil's advocate, a little bit the partial-
zombie argument. Very often some people pretend that they feel less
conscious after some drink of vodka, but that they are still able to
behave normally. Of course those people are notoriously wrong. It is
just that alcohol augments a fake self-confidence feeling, which
typically is not verified (in the laboratory, or more sadly on the
roads). Also, they confuse "less conscious" with "blurred
consciousness",

Why wouldn't less consciousness have the effect of seeming blurred?  
If your battery is dying in a device, the device might begin to fail  
in numerous ways, but those are all symptoms of the battery dying -  
of the device becoming less reliable as different parts are  
unavailable at different times.


Think of qualia as a character in a long story, which is divided  
into episodes. If, for instance, someone starts watching a show like  
Breaking Bad only in the last season, they have no explicit  
understanding of who Walter White is or why he behaves like he does,  
where Jesse came from, etc. They can only pick up what is presented  
directly in that episode, so his character is relatively flat. The  
difference between the appreciation of the last episode by someone  
who has seen the entire series on HDTV and someone who has only read  
the closed captioning of the last episode on Twitter is like the  
difference between a human being's qualia and the qualia which is  
available through a logical imitation of a human being.


Qualia is experience which contains the felt relation to all other  
experiences; specific experiences which directly relate, and  
extended experiential contexts which extend to eternity (totality of  
manifested events so far relative to the participant plus semi- 
potential events which relate to higher octaves of their  
participation...the bigger picture with the larger now.)


Then qualia are infinite. This contradicts some of your previous  
statements.







Human psychology is not a monolith. Blindsight already *proves* that  
'we can be a partial zombie' from our 1p perspective. I have tried  
to set out my solution to the combination problem here: http://multisenserealism.com/thesis/6-panpsychism/eigenmorphism/


What it means is that it is a mistake to say "we can be a partial  
zombie" - rather the evidence of brain injuries and surgeries  
demonstrates that the extent to which we are who we expect ourselves  
to be, or that others expect a person to be, can be changed in many  
quantitative and qualitative ways. We may not be less conscious  
after a massive debilitating stroke, but what is conscious after  
that is less of us.


OK.
As Chardin said, we are not human beings having from time to time some  
divine experiences, but we are divine beings having from time to time  
human experiences ...





This is because consciousness is not a function or a process,


OK



it is the sole source of presence.

Qualia is what we are made of. As human beings at this stage of  
human civilization, our direct qualia is primarily cognitive-logical- 
verbal. We identify with our ability to describe with words - to  
qualify other qualia as verbal qualia. We name our perceptions and  
name our naming power 'mind', but that is not consciousness. Logic  
and intellect can only name public-facing reductions of certain qualia  
(visible and tangible qualia - the stuff of public bodies).

Re: A challenge for Craig

2013-10-07 Thread Craig Weinberg


On Monday, October 7, 2013 3:56:55 AM UTC-4, Bruno Marchal wrote:
>
>
> On 06 Oct 2013, at 22:00, Craig Weinberg wrote:
>
>
>
> On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote: 
>>
>> > On 5 October 2013 00:40, Bruno Marchal  wrote: 
>> > 
>> >>> The argument is simply summarised thus: it is impossible even for   
>> >>> God 
>> >>> to make a brain prosthesis that reproduces the I/O behaviour but has 
>> >>> different qualia. This is a proof of comp, 
>> >> 
>> >> 
>> >> Hmm... I can agree, but eventually no God can make such a   
>> >> prosthesis, only 
>> >> because the qualia is an attribute of the "immaterial person", and   
>> >> not of 
>> >> the brain, body, or computer.  Then the prosthesis will manifest   
>> >> the person 
>> >> if it emulates the correct level. 
>> > 
>> > But if the qualia are attributed to the substance of the physical 
>> > brain then where is the problem making a prosthesis that replicates 
>> > the behaviour but not the qualia? 
>> > The problem is that it would allow 
>> > one to make a partial zombie, which I think is absurd. Therefore, the 
>> > qualia cannot be attributed to the substance of the physical brain. 
>>
>> I agree. 
>>
>> Note that in that case the qualia is no longer attributed to an   
>> immaterial person, but to a piece of primary matter. 
>> In that case, both comp and functionalism (in your sense, not in   
>> Putnam's usual sense of functionalism which is a particular case of   
>> comp) are wrong. 
>>
>> Then, it is almost obvious that an immaterial being cannot distinguish   
>> between a primarily material incarnation, and an immaterial one, as it   
>> would need some magic (non Turing emulable) ability to make the   
>> difference.  People agreeing with this no longer need the UDA step 8   
>> (which is an attempt to make this more rigorous or clear). 
>>
>> I might criticize, as a devil's advocate, a little bit the partial- 
>> zombie argument. Very often some people pretend that they feel less   
>> conscious after some drink of vodka, but that they are still able to   
>> behave normally. Of course those people are notoriously wrong. It is   
>> just that alcohol augments a fake self-confidence feeling, which   
>> typically is not verified (in the laboratory, or more sadly on the   
>> roads). Also, they confuse "less conscious" with "blurred   
>> consciousness",
>
>
> Why wouldn't less consciousness have the effect of seeming blurred? If 
> your battery is dying in a device, the device might begin to fail in 
> numerous ways, but those are all symptoms of the battery dying - of the 
> device becoming less reliable as different parts are unavailable at 
> different times.
>
> Think of qualia as a character in a long story, which is divided into 
> episodes. If, for instance, someone starts watching a show like Breaking 
> Bad only in the last season, they have no explicit understanding of who 
> Walter White is or why he behaves like he does, where Jesse came from, etc. 
> They can only pick up what is presented directly in that episode, so his 
> character is relatively flat. The difference between the appreciation of 
> the last episode by someone who has seen the entire series on HDTV and 
> someone who has only read the closed captioning of the last episode on 
> Twitter is like the difference between a human being's qualia and the 
> qualia which is available through a logical imitation of a human being. 
>
> Qualia is experience which contains the felt relation to all other 
> experiences; specific experiences which directly relate, and extended 
> experiential contexts which extend to eternity (totality of manifested 
> events so far relative to the participant plus semi-potential events which 
> relate to higher octaves of their participation...the bigger picture with 
> the larger now.)
>
>
> Then qualia are infinite. This contradicts some of your previous statements. 
>

It's not qualia that is finite or infinite, it is finity-infinity itself 
that is an intellectual quale. Quanta is derived from qualia, so 
quantitative characteristics have ambiguous application outside of quanta.
 

>
>
>
>
>
> Human psychology is not a monolith. Blindsight already *proves* that 'we 
> can be a partial zombie' from our 1p perspective. I have tried to make my 
> solution to the combination problem here: 
> http://multisenserealism.com/thesis/6-panpsychism/eigenmorphism/ 
>
> What it means is that it is a mistake to say "we can be a partial zombie" 
> - rather the evidence of brain injuries and surgeries demonstrates that the 
> extent to which we are who we expect ourselves to be, or that others expect 
> a person to be, can be changed in many quantitative and qualitative ways. 
> We may not be less conscious after a massive debilitating stroke, but what 
> is conscious after that is less of us. 
>
>
> OK.
> As Chardin said, we are not human beings having from time to time some 
> divine experiences, but we are divine beings having from time to time 
> human experiences ...

Re: A challenge for Craig

2013-10-07 Thread Platonist Guitar Cowboy
On Craig’s use of the term “Aesthetic”.

One of the hindrances preventing me from understanding Craig’s statements
is the pluralistic use of the term “aesthetics”. Sorry for not being able
to produce a proper account but the following conflicts will just be stream
of consciousness for 15 minutes:

Often you use aesthetics in a pre-19th-century Enlightenment way, as in a
rigorous theory of sense, beauty, and harmony in nature and art. At the
same time you use the term as a synonym for qualifying taste, which is
reflected in everyday language use but has little relation, if any, to
aesthetics as theory.

At other times you use it in the Kantian, transcendental way, implying it to
be a source of knowledge (“Ästhetische Erkenntnis” in German) about
ourselves; but then at the same time you drop the distinction between form,
existing a priori as the transcendental structure which theory studies, and
the impressions created, for Kant, a posteriori as experience, which is
limited by contexts of time, space, language, and perceptual apparatus in
its potential for us to grasp and study.

So you take the Kantian transcendental idea in part, but make experience by
perceptual apparatus primary to which Kant would reply: “without study and
evolution of timeless form, the arts and our ability to engage new forms of
transcendental experience with the sensory apparatus would stagnate.”

In other words, his objection would be: if we reduce sensory experience to
be the primary aesthetic mode, instead of the bonus and fruits of labors
and histories of theory, then we’d all be waiting for the next movie to be
projected in a theater, but nobody would produce new movies anymore. I’ve
never seen you address this quagmire convincingly. Where does novelty or
its appearance come from if everything makes sense? Why are some aesthetic
objects and presences more self-evident than others?

Then another use you make is aesthetics in semiotic interpretation, i.e.
that we can only sense what is pre-ordained by symbolic systems. This
however robs your use of aesthetics of the primary status you often assert
it to have via sense.

Further, it is not clear whether your use of the term corresponds to
mystical traditions of antiquity (Beauty as expression of universality,
divinity, or spirituality) or if it is the secular version including and
post Baumgarten.

Then, if sense is universal with aesthetic experience in primary tow, how
do you explain the unique contributions of a Beethoven or Bach? Why can’t
anybody write/find such well-crafted triple fugues if sense and aesthetic
experience are universal and give rise to the whole thing in the first
place: everybody should be at least as good as Bach because all engage the
world via sense. So you have to struggle with the 19th century genius
problem, if you reject the primacy of forms beyond sense.

It is also unclear where your model stands in more modern contexts, such as
psychological aesthetics or the route of Fiedler. Sometimes you oppose
aesthetics and rationality (maths and music) but when convenient this is
unified when talking “sense primary”, which produces further obscurity.

Would you agree with G. T. Fechner’s distinctions of “from above” and “from
below” in your approach? If sense and material world experience have
primary status, then you have to accept that we can hone in on the
beautiful via experiment and study beauty empirically. Your model suggests
sense is primary, but I have no way of studying and verifying your claims
other than believing you. Your model is full of explanations, but I find no
avenues for inquiry when I read you, other than that you have your
positions sorted and they are correct.

These are the kind of conflicts that bar me from understanding your use
of aesthetics. The list isn’t exhaustive and I don’t demand you explain
these. They’re just illustrative of some difficulties I have with
understanding your use. So when you throw around sense, qualia, aesthetic
experience, I have difficulty following because of the jungle of possible
complex interpretations. Which ones Craig? - is what this boils down to
somewhere, I guess. PGC


On Mon, Oct 7, 2013 at 5:20 PM, Craig Weinberg wrote:

>
>
> On Monday, October 7, 2013 3:56:55 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 06 Oct 2013, at 22:00, Craig Weinberg wrote:
>>
>>
>>
>> On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote:
>>>
>>> > On 5 October 2013 00:40, Bruno Marchal  wrote:
>>> >
>>> >>> The argument is simply summarised thus: it is impossible even for
>>> >>> God
>>> >>> to make a brain prosthesis that reproduces the I/O behaviour but has
>>> >>> different qualia. This is a proof of comp,
>>> >>
>>> >>
>>> >> Hmm... I can agree, but eventually no God can make such a
>>> >> prosthesis, only
>>> >> because the qualia is an attribute of the "immaterial person", and
>>> >> not of
>>> >> the brain, body, or computer.  Then the prosthesis will manifest
>>> >> the person if it emulates the correct level.

Re: A challenge for Craig

2013-10-07 Thread Craig Weinberg
I can understand why it seems that my use of 'aesthetic' (and sense) is all 
over the place, and part of that is because I am trying to prompt others to 
make a connection between all of the different uses of the word. What I 
like about aesthetic is:

Anesthetic is used to refer to both general unconsciousness and local 
numbness. This hints at a natural link between sensitivity and 
consciousness. The loss of consciousness is a general an-aesthesia.

Aesthetic also has a connotation of patterns which are intended to be 
appreciated artistically or decoratively rather than for function. For 
example, there is a specific difference between red and green that is not 
reflected in the difference between wavelength measurements. We might 
explain the fact *that* there seem to be X number of functional breakpoints 
within the E-M continuum because of the function of our optical system, but 
there is no functional accounting for the aesthetic presence of red or 
green. The aesthetic of red or green is far more than a recognition that 
there is a functional difference between E-M wavelengths associated with 
one part of the continuum or another.

Aesthetic then is a synonym for qualia, but without the nebulous baggage of 
that term. It just means something that is experienced directly as a 
presentation of sight, sound, touch, taste, etc - whether as a dream or 
imagined shape or a public object. When we hook up a video monitor to a 
computer, we are giving ourselves an aesthetic interface with which to 
display the anesthetic functions of software. Of course, I think that the 
entire cosmos is aesthetic, so that the functions of software are not 
absolutely anesthetic, but whatever aesthetic dimensions they have arise at 
the level of physics, not on the logical level that we have abstracted on 
top of it. A computer made of gears and pumps has no common aesthetic with 
an electronic computer. Even though they may be running what we think is 
the same program, the program itself is an expectation, not a presence. 

There are common aesthetic themes within physics which give computation a 
ready medium in any group of rigid bodies that can be controlled reliably, 
but they cannot be made to scale up qualitatively from the outside in. If 
they could, we might expect the pixels of a video screen to realize that 
they are all contributing to a coherent image and merge into a more 
intelligent unified pixel-less screen. The fact that we can take a set of 
data in a computer and make it play as music or an image or text output is 
evidence that computation is blind to higher aesthetic qualities.


On Monday, October 7, 2013 1:24:58 PM UTC-4, Platonist Guitar Cowboy wrote:
>
> On Craig’s use of the term “Aesthetic”.
>
> One of the hindrances preventing me from understanding Craig’s statements 
> is the pluralistic use of the term “aesthetics”. Sorry for not being able 
> to produce a proper account but the following conflicts will just be stream 
> of consciousness for 15 minutes:
>
> Often you use aesthetics in a pre-19th-century Enlightenment way, as in a 
> rigorous theory of sense, beauty, and harmony in nature and art. At the 
> same time you use the term as a synonym for qualifying taste, which is 
> reflected in everyday language use but has little relation, if any, to 
> aesthetics as theory. 
>
> At other times you use it in the Kantian, transcendental way, implying it to 
> be a source of knowledge (“Ästhetische Erkenntnis” in German) about 
> ourselves; but then at the same time you drop the distinction between form, 
> existing a priori as the transcendental structure which theory studies, and 
> the impressions created, for Kant, a posteriori as experience, which is 
> limited by contexts of time, space, language, and perceptual apparatus in 
> its potential for us to grasp and study.
>
> So you take the Kantian transcendental idea in part, but make experience 
> by perceptual apparatus primary to which Kant would reply: “without study 
> and evolution of timeless form, the arts and our ability to engage new 
> forms of transcendental experience with the sensory apparatus would 
> stagnate.” 
>
> In other words, his objection would be: if we reduce sensory experience to 
> be the primary aesthetic mode, instead of the bonus and fruits of labors 
> and histories of theory, then we’d all be waiting for the next movie to be 
> projected in a theater, but nobody would produce new movies anymore. I’ve 
> never seen you address this quagmire convincingly. Where does novelty or 
> its appearance come from if everything makes sense? Why are some aesthetic 
> objects and presences more self-evident than others?
>
> Then another use you make is aesthetics in semiotic interpretation, i.e. 
> that we can only sense what is pre-ordained by symbolic systems. This 
> however robs your use of aesthetics of the primary status you often assert 
> it to have via sense.
>
> Further, it is not clear whether your use of the term corresponds to 
> mystical traditions of antiquity (Beauty as expression of universality, 
> divinity, or spirituality) or if it is the secular version including and 
> post Baumgarten.

Re: A challenge for Craig

2013-10-08 Thread Bruno Marchal


On 07 Oct 2013, at 17:20, Craig Weinberg wrote:




On Monday, October 7, 2013 3:56:55 AM UTC-4, Bruno Marchal wrote:

On 06 Oct 2013, at 22:00, Craig Weinberg wrote:



Qualia is experience which contains the felt relation to all other  
experiences; specific experiences which directly relate, and  
extended experiential contexts which extend to eternity (totality  
of manifested events so far relative to the participant plus semi- 
potential events which relate to higher octaves of their  
participation...the bigger picture with the larger now.)


Then qualia are infinite. This contradicts some of your previous  
statements.


It's not qualia that is finite or infinite, it is finity-infinity  
itself that is an intellectual quale.


OK. But this does not mean it is not also objective. The set of  
divisors of 24 is finite. The set of multiples of 24 is infinite. For  
example.
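
For concreteness, a minimal Python sketch of the two sets (the variable 
names are illustrative only):

    from itertools import count, islice

    # The divisors of 24 form a finite set: a bounded search finds them all.
    divisors_of_24 = [d for d in range(1, 25) if 24 % d == 0]
    print(divisors_of_24)                    # [1, 2, 3, 4, 6, 8, 12, 24]

    # The multiples of 24 form an infinite set: only a prefix can be printed.
    multiples_of_24 = (24 * k for k in count(1))
    print(list(islice(multiples_of_24, 5)))  # [24, 48, 72, 96, 120]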



Quanta is derived from qualia, so quantitative characteristics have  
ambiguous application outside of quanta.


Yes, quanta comes from the Löbian qualia, in a 100% verifiable way.  
Indeed. But that is again a consequence of computationalism.








Qualia is what we are made of. As human beings at this stage of  
human civilization, our direct qualia is primarily cognitive- 
logical-verbal. We identify with our ability to describe with words  
- to qualify other qualia as verbal qualia. We name our perceptions  
and name our naming power 'mind', but that is not consciousness.  
Logic and intellect can only name public-facing reductions of  
certain qualia (visible and tangible qualia - the stuff of public  
bodies). The name for those public-facing reductions is quanta, or  
numbers, and the totality of the playing field which can be used  
for the quanta game is called arithmetic truth.


Arithmetical truth is full of non nameable things. Qualia refer to  
non verbally describable first person truth.


Can arithmetical truth really name anything?


I am not sure Arithmetical Truth can be seen as a person, or anything  
capable of naming things. You are stretching the words too much. I  
guess that if you make your statement more precise, it will lead to an  
open problem in comp.




It seems to me that we can use arithmetic truth to locate a number  
within the infinity of computable relations, but any 'naming' is  
only our own attempt to attach a proprietary first person sense to  
that which is irreducibly generic and nameless. The thing about  
qualia is not that it is non-nameable, it is the specific aesthetic  
presence that is manifested. Names are just qualia of mental  
association - a rose by any other name, etc.


I think this could be made more precise by taking "our" in the Löbian  
sense.


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: A challenge for Craig

2013-10-08 Thread Jason Resch
On Sun, Oct 6, 2013 at 3:00 PM, Craig Weinberg wrote:

>
>
> On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:
>
>>
>> On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote:
>>
>> > On 5 October 2013 00:40, Bruno Marchal  wrote:
>> >
>> >>> The argument is simply summarised thus: it is impossible even for
>> >>> God
>> >>> to make a brain prosthesis that reproduces the I/O behaviour but has
>> >>> different qualia. This is a proof of comp,
>> >>
>> >>
>> >> Hmm... I can agree, but eventually no God can make such a
>> >> prosthesis, only
>> >> because the qualia is an attribute of the "immaterial person", and
>> >> not of
>> >> the brain, body, or computer.  Then the prosthesis will manifest
>> >> the person
>> >> if it emulates the correct level.
>> >
>> > But if the qualia are attributed to the substance of the physical
>> > brain then where is the problem making a prosthesis that replicates
>> > the behaviour but not the qualia?
>> > The problem is that it would allow
>> > one to make a partial zombie, which I think is absurd. Therefore, the
>> > qualia cannot be attributed to the substance of the physical brain.
>>
>> I agree.
>>
>> Note that in that case the qualia is no longer attributed to an
>> immaterial person, but to a piece of primary matter.
>> In that case, both comp and functionalism (in your sense, not in
>> Putnam's usual sense of functionalism which is a particular case of
>> comp) are wrong.
>>
>> Then, it is almost obvious that an immaterial being cannot distinguish
>> between a primarily material incarnation, and an immaterial one, as it
>> would need some magic (non Turing emulable) ability to make the
>> difference.  People agreeing with this no longer need the UDA step 8
>> (which is an attempt to make this more rigorous or clear).
>>
>> I might criticize, as a devil's advocate, a little bit the partial-
>> zombie argument. Very often some people pretend that they feel less
>> conscious after some drink of vodka, but that they are still able to
>> behave normally. Of course those people are notoriously wrong. It is
>> just that alcohol augments a fake self-confidence feeling, which
>> typically is not verified (in the laboratory, or more sadly on the
>> roads). Also, they confuse "less conscious" with "blurred
>> consciousness",
>>
>
> Why wouldn't less consciousness have the effect of seeming blurred? If
> your battery is dying in a device, the device might begin to fail in
> numerous ways, but those are all symptoms of the battery dying - of the
> device becoming less reliable as different parts are unavailable at
> different times.
>
> Think of qualia as a character in a long story, which is divided into
> episodes. If, for instance, someone starts watching a show like Breaking
> Bad only in the last season, they have no explicit understanding of who
> Walter White is or why he behaves like he does, where Jesse came from, etc.
> They can only pick up what is presented directly in that episode, so his
> character is relatively flat. The difference between the appreciation of
> the last episode by someone who has seen the entire series on HDTV and
> someone who has only read the closed captioning of the last episode on
> Twitter is like the difference between a human being's qualia and the
> qualia which is available through a logical imitation of a human being.
>
> Qualia is experience which contains the felt relation to all other
> experiences; specific experiences which directly relate, and extended
> experiential contexts which extend to eternity (totality of manifested
> events so far relative to the participant plus semi-potential events which
> relate to higher octaves of their participation...the bigger picture with
> the larger now.)
>
>

Craig,

I agree with you that there is some "building up" required to create a full
and rich human experience, which cannot happen in a single instant or with
a single CPU instruction being executed. However, where I disagree with you
is in how long it takes for all the particulars of the experience to be
generated from the computation.  I don't think it requires re-calculating
the entire history of the human race, or life itself on some planet.  I
think it can be done by comparing relations to memories and data stored
entirely within the brain itself; say within 0.1 to 0.5 seconds of
computation by the brain, not the eons of life's evolution.

So perhaps we are on the same page, but merely disagree on how detailed the
computation needs to be.  i.e., what is the substitution
layer (atomic interactions, or the history of atomic interactions on a
global scale?).

Jason


Re: A challenge for Craig

2013-10-08 Thread Craig Weinberg


On Tuesday, October 8, 2013 3:40:53 AM UTC-4, Bruno Marchal wrote:
>
>
> On 07 Oct 2013, at 17:20, Craig Weinberg wrote:
>
>
>
> On Monday, October 7, 2013 3:56:55 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 06 Oct 2013, at 22:00, Craig Weinberg wrote:
>>
>>
>>
>> Qualia is experience which contains the felt relation to all other 
>> experiences; specific experiences which directly relate, and extended 
>> experiential contexts which extend to eternity (totality of manifested 
>> events so far relative to the participant plus semi-potential events which 
>> relate to higher octaves of their participation...the bigger picture with 
>> the larger now.)
>>
>>
>> Then qualia are infinite. This contradicts some of your previous 
>> statements. 
>>
>
> It's not qualia that is finite or infinite, it is finity-infinity itself 
> that is an intellectual quale. 
>
>
> OK. But this does not mean it is not also objective. The set of divisors 
> of 24 is finite. The set of multiples of 24 is infinite. For example.
>

It might not be objective, just common and consistent because it ultimately 
reflects itself, and because it reflects reflection. It may be the essence 
of objectivity, but from the absolute perspective, objectivity is the 
imposter - the power of sense to approximate itself without genuine 
embodiment.

Is the statement that the set of divisors is finite objectively true, or is 
it contingent upon ruling out rational numbers? Can't we just designate a 
variable, k = {the imaginary set of infinite divisors of 24}? 
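
Over the rationals, at least, the question trivializes: for any nonzero q, 
24/q is again a rational, so every nonzero rational "divides" 24 and the 
divisor set is infinite. A minimal Python sketch of that point, using the 
standard fractions module with illustrative values only:

    from fractions import Fraction

    # Any nonzero rational divides 24 exactly: the quotient is rational too,
    # so the set of rational divisors of 24 is infinite; the finiteness
    # claim is relative to the integers.
    for q in (Fraction(7), Fraction(5, 3), Fraction(-24, 113)):
        print(q, "divides 24, quotient:", Fraction(24) / q)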


>
> Quanta is derived from qualia, so quantitative characteristics have 
> ambiguous application outside of quanta.
>
>
> Yes, quanta comes from the Löbian qualia, in a 100% verifiable way. 
> Indeed. But that is again a consequence of computationalism.
>

Why isn't computationalism the consequence of quanta though? What can be 
computed other than quantities?
 

>
>
>
>
>
>>
>> Qualia is what we are made of. As human beings at this stage of human 
>> civilization, our direct qualia is primarily cognitive-logical-verbal. We 
>> identify with our ability to describe with words - to qualify other qualia 
>> as verbal qualia. We name our perceptions and name our naming power 'mind', 
>> but that is not consciousness. Logic and intellect can only name 
>> public-facing reductions of certain qualia (visible and tangible qualia - 
>> the stuff of public bodies). The name for those public-facing reductions is 
>> quanta, or numbers, and the totality of the playing field which can be used 
>> for the quanta game is called arithmetic truth.
>>
>>
>> Arithmetical truth is full of non nameable things. Qualia refer to non 
>> verbally describable first person truth.
>>
>
> Can arithmetical truth really name anything? 
>
>
> I am not sure Arithmetical Truth can be seen as a person, or anything 
> capable of naming things. You are stretching the words too much. I guess 
> that if you make your statement more precise, it will lead to an open 
> problem in comp.
>

If arithmetical truth is full of non nameable things, what nameable things 
does it also contain, and what or who is naming them? Otherwise wouldn't it 
be tautological to say that it is full of non nameable things, as it would 
be to say that water is full of non dry things?
 

>
>
>
> It seems to me that we can use arithmetic truth to locate a number within 
> the infinity of computable relations, but any 'naming' is only our own 
> attempt to attach a proprietary first person sense to that which is 
> irreducibly generic and nameless. The thing about qualia is not that it is 
> non-nameable, it is the specific aesthetic presence that is manifested. 
> Names are just qualia of mental association - a rose by any other name, 
> etc. 
>
>
> I think this could be made more precise by taking "our" in the Löbian 
> sense.
>

If quanta is Löbian qualia, why would it need any non-quantitative names?

Craig


> Bruno
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>



Re: A challenge for Craig

2013-10-08 Thread Craig Weinberg


On Tuesday, October 8, 2013 10:10:25 AM UTC-4, Jason wrote:
>
>
>
>
> On Sun, Oct 6, 2013 at 3:00 PM, Craig Weinberg wrote:
>
>>
>>
>> On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:
>>
>>>  
>>> On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote: 
>>>
>>> > On 5 October 2013 00:40, Bruno Marchal  wrote: 
>>> > 
>>> >>> The argument is simply summarised thus: it is impossible even for   
>>> >>> God 
>>> >>> to make a brain prosthesis that reproduces the I/O behaviour but has 
>>> >>> different qualia. This is a proof of comp, 
>>> >> 
>>> >> 
>>> >> Hmm... I can agree, but eventually no God can make such a   
>>> >> prosthesis, only 
>>> >> because the qualia is an attribute of the "immaterial person", and   
>>> >> not of 
>>> >> the brain, body, or computer.  Then the prosthesis will manifest   
>>> >> the person 
>>> >> if it emulates the correct level. 
>>> > 
>>> > But if the qualia are attributed to the substance of the physical 
>>> > brain then where is the problem making a prosthesis that replicates 
>>> > the behaviour but not the qualia? 
>>> > The problem is that it would allow 
>>> > one to make a partial zombie, which I think is absurd. Therefore, the 
>>> > qualia cannot be attributed to the substance of the physical brain. 
>>>
>>> I agree. 
>>>
>>> Note that in that case the qualia is no longer attributed to an   
>>> immaterial person, but to a piece of primary matter. 
>>> In that case, both comp and functionalism (in your sense, not in   
>>> Putnam's usual sense of functionalism which is a particular case of   
>>> comp) are wrong. 
>>>
>>> Then, it is almost obvious that an immaterial being cannot distinguish   
>>> between a primarily material incarnation, and an immaterial one, as it   
>>> would need some magic (non Turing emulable) ability to make the   
>>> difference.  People agreeing with this no longer need the UDA step 8   
>>> (which is an attempt to make this more rigorous or clear). 
>>>
>>> I might criticize, as a devil's advocate, a little bit the partial- 
>>> zombie argument. Very often some people pretend that they feel less   
>>> conscious after some drink of vodka, but that they are still able to   
>>> behave normally. Of course those people are notoriously wrong. It is   
>>> just that alcohol augments a fake self-confidence feeling, which   
>>> typically is not verified (in the laboratory, or more sadly on the   
>>> roads). Also, they confuse "less conscious" with "blurred   
>>> consciousness",
>>>
>>
>> Why wouldn't less consciousness have the effect of seeming blurred? If 
>> your battery is dying in a device, the device might begin to fail in 
>> numerous ways, but those are all symptoms of the battery dying - of the 
>> device becoming less reliable as different parts are unavailable at 
>> different times.
>>
>> Think of qualia as a character in a long story, which is divided into 
>> episodes. If, for instance, someone starts watching a show like Breaking 
>> Bad only in the last season, they have no explicit understanding of who 
>> Walter White is or why he behaves like he does, where Jesse came from, etc. 
>> They can only pick up what is presented directly in that episode, so his 
>> character is relatively flat. The difference between the appreciation of 
>> the last episode by someone who has seen the entire series on HDTV and 
>> someone who has only read the closed captioning of the last episode on 
>> Twitter is like the difference between a human being's qualia and the 
>> qualia which is available through a logical imitation of a human being. 
>>
>> Qualia is experience which contains the felt relation to all other 
>> experiences; specific experiences which directly relate, and extended 
>> experiential contexts which extend to eternity (totality of manifested 
>> events so far relative to the participant plus semi-potential events which 
>> relate to higher octaves of their participation...the bigger picture with 
>> the larger now.) 
>>
>>
>
> Craig,
>
> I agree with you that there is some "building up" required to create a 
> full and rich human experience, which cannot happen in a single instant or 
> with a single CPU instruction being executed. However, where I disagree 
> with you is in how long it takes for all the particulars of the experience 
> to be generated from the computation.  I don't think it requires 
> re-calculating the entire history of the human race, or life itself on some 
> planet.  I think it can be done by comparing relations to memories and data 
> stored entirely within the brain itself; say within 0.1 to 0.5 seconds of 
> computation by the brain, not the eons of life's evolution.
>

That could be true in theory but it does not seem to be supported by 
nature. In reality, there is no way to watch a movie in less time than the 
movie takes to be watched without sacrificing some qualities of the 
experience. Experience is nothing like data, as data is compressible since 
it has no qualitative content to be lost in translation. In all cases, 
calculation is used to eliminate experience - to automate and anesthetize.

Re: A challenge for Craig

2013-10-08 Thread Bruno Marchal


On 08 Oct 2013, at 17:59, Craig Weinberg wrote:




On Tuesday, October 8, 2013 3:40:53 AM UTC-4, Bruno Marchal wrote:

On 07 Oct 2013, at 17:20, Craig Weinberg wrote:




On Monday, October 7, 2013 3:56:55 AM UTC-4, Bruno Marchal wrote:

On 06 Oct 2013, at 22:00, Craig Weinberg wrote:



Qualia is experience which contains the felt relation to all other  
experiences; specific experiences which directly relate, and  
extended experiential contexts which extend to eternity (totality  
of manifested events so far relative to the participant plus semi- 
potential events which relate to higher octaves of their  
participation...the bigger picture with the larger now.)


Then qualia are infinite. This contradicts some of your previous  
statements.


It's not qualia that is finite or infinite, it is finity-infinity  
itself that is an intellectual quale.


OK. But this does not mean it is not also objective. The set of  
divisors of 24 is finite. The set of multiples of 24 is infinite. For  
example.


It might not be objective, just common and consistent because it  
ultimately reflects itself, and because it reflects reflection. It  
may be the essence of objectivity, but from the absolute  
perspective, objectivity is the imposter - the power of sense to  
approximate itself without genuine embodiment.


Is the statement that the set of divisors is finite objectively  
true, or is it contingent upon ruling out rational numbers? Can't we  
just designate a variable, k = {the imaginary set of infinite  
divisors of 24}?


"Absolute" can be used once we agree on the definition. The fact that  
some alien write 1+1=4 for our 1+1=2, just because they define 4 by  
s(s(0)), would not made 1+1=2 less absolute.


The fact that we are interested in integers dividing integers might be  
contingent, but that does not make contingent the fact that the set of  
divisors of 24 is a finite set of integers.
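
The relabeling point can be made mechanical. A minimal Python sketch, with 
a tuple encoding of the Peano numerals chosen purely for illustration: 
renaming s(s(0)) changes the label, never the addition fact.

    # Peano numerals: 0 is the empty tuple, s(x) wraps x in a tuple.
    ZERO = ()
    def s(x):
        return (x,)

    def add(x, y):
        # The usual recursion: x + 0 = x, and x + s(y) = s(x + y).
        return x if y == ZERO else s(add(x, y[0]))

    ONE, TWO = s(ZERO), s(s(ZERO))
    assert add(ONE, ONE) == TWO   # 1 + 1 = s(s(0)), whatever it is called

    # An "alien" printing s(s(0)) as "4" relabels the numeral, not the fact.
    alien_name = {ZERO: "0", ONE: "1", TWO: "4"}
    print("1+1 =", alien_name[add(ONE, ONE)])   # prints: 1+1 = 4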








Quanta is derived from qualia, so quantitative characteristics have  
ambiguous application outside of quanta.


Yes, quanta comes from the Löbian qualia, in a 100% verifiable way.  
Indeed. But that is again a consequence of computationalism.


Why isn't computationalism the consequence of quanta though?


Human computationalism does.

But I want the simplest conceptual theory, and integers are easier to  
define than human integers.







What can be computed other than quantities?


Quantities are easily computed by stopping machines, but most machines  
do not stop, and when they introspect, the theory explains why they  
get troubled by consciousness, qualia, etc. Those qualia are not  
really computed; they are part of non computable truth, but one which  
still bears on machines, or on the machine's perspective.
















Qualia is what we are made of. As human beings at this stage of  
human civilization, our direct qualia is primarily cognitive- 
logical-verbal. We identify with our ability to describe with  
words - to qualify other qualia as verbal qualia. We name our  
perceptions and name our naming power 'mind', but that is not  
consciousness. Logic and intellect can only name public-facing  
reductions of certain qualia (visible and tangible qualia - the  
stuff of public bodies). The name for those public-facing  
reductions is quanta, or numbers, and the totality of the playing  
field which can be used for the quanta game is called arithmetic  
truth.


Arithmetical truth is full of non nameable things. Qualia refer to  
non verbally describable first person truth.


Can arithmetical truth really name anything?


I am not sure Arithmetical Truth can be seen as a person, or  
anything capable of naming things. You are stretching the words too  
much. I guess that if you make your statement more precise, it will  
lead to an open problem in comp.


If arithmetical truth is full of non nameable things, what nameable  
things does it also contain,


The numbers, the recursive properties, the recursively enumerable  
properties, the Sigma_i truths, well, a lot of things.
You have the recursive (the simplest in our comp setting), then the  
recursively enumerable (the universal machines, notably), then a whole  
hierarchy of non computable, but still nameable, sets of numbers, or  
machine's properties, then you get the non nameable properties, like  
truth (for number relations) and, very plausibly, things like  
consciousness, persons, etc.
Some of those non nameable things can still be studied by machines,  
through assumptions and approximations.

Above that you have the truths that you cannot even approximate, etc.
Arithmetical truth is big, *very* big.



and what or who is naming them?


The machines. (In the comp setting, although the machine's theology does  
refer to higher non-machine entities capable of naming things. That's  
the case for the first-order logical G* (which I usually note qG*);  
this one needs more than arithmetical truth, but that is normal, as it  
describes an intensional (modal) view

Re: A challenge for Craig

2013-10-08 Thread Jason Resch
On Tue, Oct 8, 2013 at 11:18 AM, Craig Weinberg wrote:

>
>
> On Tuesday, October 8, 2013 10:10:25 AM UTC-4, Jason wrote:
>
>>
>>
>>
>> On Sun, Oct 6, 2013 at 3:00 PM, Craig Weinberg wrote:
>>
>>>
>>>
>>> On Sunday, October 6, 2013 5:06:31 AM UTC-4, Bruno Marchal wrote:
>>>

 On 06 Oct 2013, at 03:17, Stathis Papaioannou wrote:

 > On 5 October 2013 00:40, Bruno Marchal  wrote:
 >
 >>> The argument is simply summarised thus: it is impossible even for
 >>> God
 >>> to make a brain prosthesis that reproduces the I/O behaviour but
 has
 >>> different qualia. This is a proof of comp,
 >>
 >>
 >> Hmm... I can agree, but eventually no God can make such a
 >> prosthesis, only
 >> because the qualia is an attribute of the "immaterial person", and
 >> not of
 >> the brain, body, or computer.  Then the prosthesis will manifest
 >> the person
 >> if it emulates the correct level.
 >
 > But if the qualia are attributed to the substance of the physical
 > brain then where is the problem making a prosthesis that replicates
 > the behaviour but not the qualia?
 > The problem is that it would allow
 > one to make a partial zombie, which I think is absurd. Therefore, the
 > qualia cannot be attributed to the substance of the physical brain.

 I agree.

 Note that in that case the qualia is no longer attributed to an
 immaterial person, but to a piece of primary matter.
 In that case, both comp and functionalism (in your sense, not in
 Putnam's usual sense of functionalism which is a particular case of
 comp) are wrong.

 Then, it is almost obvious that an immaterial being cannot distinguish

 between a primarily material incarnation, and an immaterial one, as it

 would need some magic (non Turing emulable) ability to make the
 difference.  People agreeing with this no longer need the UDA step 8
 (which is an attempt to make this more rigorous or clear).

 I might criticize, as a devil's advocate, a little bit the partial-
 zombie argument. Very often some people pretend that they feel less
 conscious after some drink of vodka, but that they are still able to
 behave normally. Of course those people are notoriously wrong. It is
 just that alcohol augments a fake self-confidence feeling, which
 typically is not verified (in the laboratory, or more sadly on the
 roads). Also, they confuse "less conscious" with "blurred
 consciousness",

>>>
>>> Why wouldn't less consciousness have the effect of seeming blurred? If
>>> your battery is dying in a device, the device might begin to fail in
>>> numerous ways, but those are all symptoms of the battery dying - of the
>>> device becoming less reliable as different parts are unavailable at
>>> different times.
>>>
>>> Think of qualia as a character in a long story, which is divided into
>>> episodes. If, for instance, someone starts watching a show like Breaking
>>> Bad only in the last season, they have no explicit understanding of who
>>> Walter White is or why he behaves like he does, where Jesse came from, etc.
>>> They can only pick up what is presented directly in that episode, so his
>>> character is relatively flat. The difference between the appreciation of
>>> the last episode by someone who has seen the entire series on HDTV and
>>> someone who has only read the closed captioning of the last episode on
>>> Twitter is like the difference between a human being's qualia and the
>>> qualia which is available through a logical imitation of a human being.
>>>
>>> Qualia is experience which contains the felt relation to all other
>>> experiences; specific experiences which directly relate, and extended
>>> experiential contexts which extend to eternity (totality of manifested
>>> events so far relative to the participant plus semi-potential events which
>>> relate to higher octaves of their participation...the bigger picture with
>>> the larger now.)
>>>
>>>
>>
>> Craig,
>>
>> I agree with you that there is some "building up" required to create a
>> full and rich human experience, which cannot happen in a single instant or
>> with a single CPU instruction being executed. However, where I disagree
>> with you is in how long it takes for all the particulars of the experience
>> to be generated from the computation.  I don't think it requires
>> re-calculating the entire history of the human race, or life itself on some
>> planet.  I think it can be done by comparing relations to memories and data
>> stored entirely within the brain itself; say within 0.1 to 0.5 seconds of
>> computation by the brain, not the eons of life's evolution.
>>
>
> That could be true in theory but it does not seem to be supported by
> nature. In reality, there is no way to watch a movie in less time than the
> movie takes to be watched without sacrificing some qualities of the
> experience.
>

But when you watch the last 5 seconds of the movie, your brain has the
context/memories of all the previous parts of the movie.  If you
instantiated a brain from scratch with all the same memories of someone who
watched the first 2 hours of the movie, and then showed them the last 5
seconds, they would understand the ending as well as anyone made to sit
through the whole thing.

Re: A challenge for Craig

2013-10-08 Thread Craig Weinberg


On Tuesday, October 8, 2013 12:41:26 PM UTC-4, Jason wrote:
>
>
>
>

>>>
>>> Craig,
>>>
>>> I agree with you that there is some "building up" required to create a 
>>> full and rich human experience, which cannot happen in a single instant or 
>>> with a single CPU instruction being executed. However, where I disagree 
>>> with you is in how long it takes for all the particulars of the experience 
>>> to be generated from the computation.  I don't think it requires 
>>> re-calculating the entire history of the human race, or life itself on some 
>>> planet.  I think it can be done by comparing relations to memories and data 
>>> stored entirely within the brain itself; say within 0.1 to 0.5 seconds of 
>>> computation by the brain, not the eons of life's evolution.
>>>
>>
>> That could be true in theory but it does not seem to be supported by 
>> nature. In reality, there is no way to watch a movie in less time than the 
>> movie takes to be watched without sacrificing some qualities of the 
>> experience. 
>>
>
> But when you watch the last 5 seconds of the movie, your brain has the 
> context/memories of all the previous parts of the movie.  If you 
> instantiated a brain from scratch with all the same memories of someone who 
> watched the first 2 hours of the movie,
>

That's what I am saying is not necessarily possible. A brain is not a 
receptacle of memories any more than a body is a person's autobiography. We 
are not a brain - the brain mostly does things that have nothing to do with 
our awareness, and we do things which have mostly nothing to do with our 
brains. Filling someone's library with books they have never read does not 
give them the experience of having read them. You are assuming that 
experience is in fact unnecessary and can be transplanted out of context 
into another life. If that could happen, I think that no living organism 
would ever forget anything, and there would never be any desire to repeat 
any experience. Why eat an apple when you can remember eating one in the 
past? Why have any experiences at all if we can just compute data? What is 
the benefit of experience?
 

> and then showed them the last 5 seconds, they would understand the ending 
> as well as anyone made to sit through the whole thing.
>

You wouldn't need to show them anything, just implant the memory of having 
seen the whole thing. That would work if the universe was based on 
mechanism instead of experience, but a universe based on mechanism makes 
experience redundant and superfluous.
 

>  
>
>> Experience is nothing like data, as data is compressible since it has no 
>> qualitative content to be lost in translation. In all cases, calculation is 
>> used to eliminate experience - to automate and anesthetize. 
>>
>
> For you to make such a claim, you would have to experience life as one of 
> those automatons.  But you have not, so I don't see where you get this 
> knowledge about what entities are or or are not conscious.
>

It's knowledge; an understanding of what data can be used for and 
what it can't be. There are no entities which are not conscious. 
Consciousness is what defines an entity. We have only to look at our uses 
of computations and machines - how they relieve us of our conscious burdens 
with automatic and impersonal service. We have to look at our confirmation 
bias in the desire to animate puppets, in pareidolia, apophenia, and the 
pathetic fallacy. I like science fiction and technology as much as anyone, 
but if we are serious about turning a program into a person, we would have 
a lot of absurdities to overcome. Why is anything presented instead of just 
computed invisibly? Why do we care about individuality or authenticity? Why 
do we care about anything? So many dead ends with Comp.

 
>
>> It cannot imitate experience, any more than an HSV coordinate can look 
>> like a particular color. 
>>
>
> I believe it is the machine's interpretation of the input (however it is 
> represented), and in the context of the rest of its mind, which manifests 
> as the experience of color.  You could say an HSV coordinate is not a 
> color, but neither is the electrical signaling of the optic nerve a color.
>

Right, but I would never say that the electrical signaling of the optic 
nerve is a color any more than I would say that the Eiffel Tower has a 
French accent. We can't just assume that a brain is a complete description 
of a human life, otherwise the human life would be redundant. The brain 
would simply be there, running computations, measuring acoustic and optical 
vibrations, analyzing aerosol chemistry, etc - all in complete blind 
silence. Memories would simply be logs of previous computations, not 
worldly fictions.

If you start from the perspective that what is outside of your personal 
experience must be the only reality, then you are taking a description of 
yourself from something that knows almost nothing about you. Trying to 
recreate yourself from that description

Re: A challenge for Craig

2013-10-08 Thread Craig Weinberg


On Tuesday, October 8, 2013 12:34:57 PM UTC-4, Bruno Marchal wrote:
>
>
> On 08 Oct 2013, at 17:59, Craig Weinberg wrote:
>
>
>
> On Tuesday, October 8, 2013 3:40:53 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 07 Oct 2013, at 17:20, Craig Weinberg wrote:
>>
>>
>>
>> On Monday, October 7, 2013 3:56:55 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 06 Oct 2013, at 22:00, Craig Weinberg wrote:
>>>
>>>
>>>
>>> Qualia is experience which contains the felt relation to all other 
>>> experiences; specific experiences which directly relate, and extended 
>>> experiential contexts which extend to eternity (totality of manifested 
>>> events so far relative to the participant plus semi-potential events which 
>>> relate to higher octaves of their participation...the bigger picture with 
>>> the larger now.)
>>>
>>>
>>> Then qualia are infinite. This contradicts some of your previous 
>>> statements. 
>>>
>>
>> It's not qualia that is finite or infinite, it is finity-infinity itself 
>> that is an intellectual quale. 
>>
>>
>> OK. But this does not mean it is not also objective. The set of divisors 
>> of 24 is finite. The set of multiples of 24 is infinite. For example.
>>
>
> It might not be objective, just common and consistent because it 
> ultimately reflects itself, and because it reflects reflection. It may be 
> the essence of objectivity, but from the absolute perspective, objectivity 
> is the imposter - the power of sense to approximate itself without genuine 
> embodiment.
>
> Is the statement that the set of divisors is finite objectively true, or 
> is it contingent upon ruling out rational numbers? Can't we just designate 
> a variable, k = {the imaginary set of infinite divisors of 24}? 
>
>
> "Absolute" can be used once we agree on the definition. The fact that some 
> alien writes 1+1=4 for our 1+1=2, just because they define 4 by s(s(0)), 
> would not make 1+1=2 less absolute.
>
> The fact that we are interested in integers dividing integers might be 
> contingent, but that does not make contingent the fact that the set of 
> divisors of 24 is a finite set of integers.
>

Sure, but anything that is natural has self-consistent wholeness and can 
seem like a universal given if we focus our attention only on that. If it 
were truly not contingent it would be impossible for anyone to get a math 
problem wrong. As far as I can tell, the idea of an integer is an 
abstraction of countable solid objects that we use to objectify our own 
cognitive products. It doesn't seem very useful when it comes to 
representing non-solids, non-objects, or non-cognitive phenomenology.
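
A minimal Python sketch of the two arithmetical facts in play here - the
successor-defined numerals behind Bruno's alien "4" = s(s(0)), and the
finite/infinite contrast between the divisors and multiples of 24. This is
my own illustration; none of the names below come from either correspondent:

    from itertools import count, islice

    # Successor notation: a numeral is just iterated application of s to 0.
    def s(n):
        return n + 1

    # An alien who writes "4" for s(s(0)) is naming our 2, so 1+1 still
    # lands on the same number whatever the numeral is called.
    assert s(s(0)) == 1 + 1 == 2

    # Divisors of 24: the search is bounded by 24, so the set is finite.
    divisors = [d for d in range(1, 25) if 24 % d == 0]
    print(divisors)   # [1, 2, 3, 4, 6, 8, 12, 24] - eight members, done.

    # Multiples of 24: no bound exists; any program can only print a prefix.
    print(list(islice((24 * k for k in count(1)), 8)))   # 24, 48, 72, ...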


>
>
>
>
>>
>> Quanta is derived from qualia, so quantitative characteristics have 
>> ambiguous application outside of quanta.
>>
>>
>> Yes, quanta come from the Löbian qualia, in a 100% verifiable way. 
>> Indeed. But that is again a consequence of computationalism.
>>
>
> Why isn't computationalism the consequence of quanta though? 
>
>
> Human computationalism does.
>
> But I want the simplest conceptual theory, and integers are easier to 
> define than human integers.
>

I'm not sure how that relates to computationalism being something other 
than quanta. Humans are easier to define to themselves than integers. A 
baby can be themselves for years before counting to 10. 
 

>
>
>
>
>
> What can be computed other than quantities?
>
>
> Quantities are easily computed by stopping machines, but most machines 
> do not stop, and when they introspect, the theory explains why they get 
> troubled by consciousness, qualia, etc. Those qualia are not really 
> computed; they are part of non computable truth, but they still bear on 
> machines, or on the machine's perspective.
>

Then you still have an explanatory gap. How can anything which is 
non-computable bear on the computation of an ideal machine? What connects 
the qualia to the quanta, and why isn't the qualia just a quantitative 
summary of quanta?
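
Bruno's contrast between stopping and non-stopping machines can be made
concrete. A hedged sketch in Python, assuming nothing beyond the standard
library (the machine names are my own):

    # A stopping machine: it computes a quantity and halts.
    def stopping_machine():
        return 6 * 4          # halts, returning the quantity 24

    # A non-stopping machine: a perfectly definite program whose
    # "behaviour" is the endless process itself, never a final value.
    def non_stopping_machine():
        n = 0
        while True:           # never returns; calling this would hang
            n += 1

    print(stopping_machine())     # 24
    # non_stopping_machine()      # left commented out: it would never return

    # Turing's theorem: no program can decide, for every program, which of
    # these two cases it falls under. That undecidable halting fact is one
    # precise sense in which a non-computable truth "bears on" machines.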
 

>
>
>
>
>
>
>  
>
>>
>>
>>
>>
>>
>>>
>>> Qualia is what we are made of. As human beings at this stage of human 
>>> civilization, our direct qualia is primarily cognitive-logical-verbal. We 
>>> identify with our ability to describe with words - to qualify other qualia 
>>> as verbal qualia. We name our perceptions and name our naming power 'mind', 
>>> but that is not consciousness. Logic and intellect can only name 
>>> public-facing reductions of certain qualia (visible and tangible qualia - 
>>> the stuff of public bodies). The name for those public-facing reductions is 
>>> quanta, or numbers, and the totality of the playing field which can be used 
>>> for the quanta game is called arithmetic truth.
>>>
>>>
>>> Arithmetical truth is full of non nameable things. Qualia refer to non 
>>> verbally describable first person truth.
>>>
>>
>> Can arithmetical truth really name anything? 
>>
>>
>> I am not sure Arithmetical Truth can be seen as a person, or anything 
>> capable of naming things. You are stretching the words too much. I guess 
>> tha

Re: A challenge for Craig

2013-10-09 Thread Bruno Marchal


On 08 Oct 2013, at 20:12, Craig Weinberg wrote:




On Tuesday, October 8, 2013 12:34:57 PM UTC-4, Bruno Marchal wrote:

On 08 Oct 2013, at 17:59, Craig Weinberg wrote:



Why isn't computationalism the consequence of quanta though?


Human computationalism does.

But I want the simplest conceptual theory, and integers are easier  
to define than human integers.


I'm not sure how that relates to computationalism being something  
other than quanta. Humans are easier to define to themselves than  
integers. A baby can be themselves for years before counting to 10.


Phenomenologically? Yes.
Fundamentally? That does not follow. It took a long time before  
discovering the Higgs-Englert-Brout Boson.












What can be computed other than quantities?


Quantities are easily computed by stopping machines, but most machines  
do not stop, and when they introspect, the theory explains why they get  
troubled by consciousness, qualia, etc. Those qualia are not really  
computed; they are part of non computable truth, but they still bear on  
machines, or on the machine's perspective.


Then you still have an explanatory gap.


But that is a good point for comp, as it explains why there is a gap,  
and it imposes on it a precise mathematical structure.




How can anything which is non-computable bear on the computation of  
an ideal machine?


That is the whole subject of an entire field: recursion theory, or  
theoretical computer science.





What connects the qualia to the quanta, and why isn't the qualia  
just a quantitative summary of quanta?


Qualia are not connected to quanta. Quanta are appearances in the  
qualia theory, and they are not quantitative; they are lived from the  
first person plural view.





If Arithmetic truth is full of non nameable things, what nameable  
things does it also contain,


The numbers, the recursive properties, the recursively enumerable  
properties, the Sigma_i truth, well a lot of things.
You have the recursive (the simplest in our comp setting), then the  
recursively enumerable (the universal machines, notably), then a  
whole hierarchy of non computable, but still nameable sets of  
numbers, or machine's properties.
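
The first two rungs of that ladder can be illustrated with a toy dovetailer
- a minimal sketch, my own and not Bruno's, using Python generators as
stand-in "machines". The set of machines that halt is recursively enumerable
(halters are eventually listed) but not recursive (non-halters are never
refuted):

    from itertools import count

    def halts_fast():          # finishes after 3 steps
        yield from range(3)

    def halts_slow():          # finishes after 1000 steps
        yield from range(1000)

    def runs_forever():        # never finishes
        yield from count()

    machines = [halts_fast, halts_slow, runs_forever]

    def enumerate_halters(budget_limit):
        # Dovetail: give every machine 1 step, then 2, then 3, ...
        found = set()
        for budget in range(1, budget_limit):
            for i, m in enumerate(machines):
                if i in found:
                    continue
                steps = sum(1 for _ in zip(range(budget), m()))
                if steps < budget:     # it ran out of work: it halted
                    found.add(i)
                    yield i
        # runs_forever is never yielded, and no finite budget can ever
        # certify that it never will be - that is the non-recursive part.

    print(list(enumerate_halters(2000)))   # [0, 1]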


You say they are nameable, but I don't believe you. It is not as if  
a number would ever need to go by some other name. Why not refer to  
it by its precise coordinate within Arithmetic Truth?


Because it is independent of the choice of the computational base,  
like volume in geometry. If you can name something with Fortran, then  
you can name it with numbers, combinators, etc. Nameability is  
"machine independent", like the modal logics G, G*, Z, etc.






then you get the non nameable properties, like truth (for number  
relations) but very plausibly, things like consciousness, persons,  
etc.
Some of those non nameable things can still be studied by machines,  
through assumptions, and approximations.

Above that you have the truths that you cannot even approximate, etc.
Arithmetical truth is big, *very* big.

Big, sure, but that's exactly why it needs no names at all.


It is worse than that. Many things cannot have a name.


Each feature and meta-feature of Arithmetic truth can only be found  
at its own address. What point would there be in adding a fictional  
label on something that is pervasively and factually true?


In science it is not a matter of decision, but of verifiable facts.







and what or who is naming them?


The machines. (In the comp setting, the machine's theology does refer  
to higher non-machine entities capable of naming things. That's the  
case for the first-order logic G* (which I usually write qG*); this  
one needs more than arithmetical truth, but that is normal, as it  
describes an intensional (modal) view of the machine held by a sort  
of God (Truth). Here the miracle is that its zero-order  
(propositional) part is decidable.)


I don't think that names and machines are compatible in any way.  
Programmers of machines might use names, but once compiled, all high  
level terms are crushed into the digital sand that the machine can  
digest. No trace of proprietary intent remains.


Not at all. The whole point is that such properties are invariant across  
the high- or low-level implementations.









Otherwise wouldn't it be tautological to say that it is full of non  
nameable things, as it would be to say that water is full of non  
dry things?


? (Here you stretch an analogy too far, I think.)

Could be, but I don't know until I hear the counter-argument.


(Stretched) analogies are immune to argumentation.













It seems to me that we can use arithmetic truth to locate a number  
within the infinity of computable relations, but any 'naming' is  
only our own attempt to attach a proprietary first person sense to  
that which is irreducibly generic and nameless. The thing about  
qualia is not that it is non-nameable, it is the specific  
aesthetic presence that is manifested. Names are 

Re: A challenge for Craig

2013-10-09 Thread Craig Weinberg


On Wednesday, October 9, 2013 3:18:52 AM UTC-4, Bruno Marchal wrote:
>
>
> On 08 Oct 2013, at 20:12, Craig Weinberg wrote:
>
>
>
> On Tuesday, October 8, 2013 12:34:57 PM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 08 Oct 2013, at 17:59, Craig Weinberg wrote:
>>
>
>>
>> Why isn't computationalism the consequence of quanta though? 
>>
>>
>> Human computationalism does.
>>
>> But I want the simplest conceptual theory, and integers are easier to 
>> define than human integers.
>>
>
> I'm not sure how that relates to computationalism being something other 
> than quanta. Humans are easier to define to themselves than integers. A 
> baby can be themselves for years before counting to 10. 
>
>
> Phenomenologically? Yes.
> Fundamentally? That does not follow. It took a long time before 
> discovering the Higgs-Englert-Brout Boson.
>

It doesn't have to follow, but it can be a clue. The Higgs is a particular 
type of elementary phenomenon which is not accessible to us directly. That 
would not be the case with Comp if we were in fact using only computation. 
If our world was composed on every level by computation alone, it wouldn't 
make much sense for people to have to learn to count integers only after 
years of aesthetic saturation.
 

>
>
>
>  
>
>>
>>
>>
>>
>>
>> What can be computed other than quantities?
>>
>>
>> Quantities are easily computed by stopping machines, but most machines 
>> do not stop, and when they introspect, the theory explains why they get 
>> troubled by consciousness, qualia, etc. Those qualia are not really 
>> computed; they are part of non computable truth, but they still bear on 
>> machines, or on the machine's perspective.
>>
>
> Then you still have an explanatory gap.
>
>
> But that is a good point for comp, as it explains why there is a gap, and 
> it imposes on it a precise mathematical structure.
>

But there's nothing on the other side of the gap from the comp view. You're 
still just finding a gap in comp that comp says is supposed to be there and 
then presuming that the entire universe other than comp must fit in there. 
If there is nothing within comp to specifically indicate color or flavor or 
kinesthetic sensations, or even the lines and shapes of geometry, then I 
don't see how comp can claim to be a theory that relates to consciousness.


>
>
> How can anything which is non-computable bear on the computation of an 
> ideal machine? 
>
>
> That is the whole subject of an entire field: recursion theory, or 
> theoretical computer science.
>

Ok, so what is an example of something that specifically bridges a kind of 
computation with something personal that comp claims to produce?
 

>
>
>
>
> What connects the qualia to the quanta, and why isn't the qualia just 
> a quantitative summary of quanta?
>
>
> Qualia are not connected to quanta.
>

Then what is even the point of Comp? To me quanta = all that relates to 
quantity and certain measurement. If they are not connected to quanta then 
a machine that is made of quanta can't possibly produce qualia that has no 
connection to it. That's no better than Descartes.
 

> Quanta are appearances in the qualia theory, and they are not 
> quantitative; they are lived from the first person plural view.
>

Quanta aren't quantitative?
 

>
>
>
>> If Arithmetic truth is full of non nameable things, what nameable things 
>> does it also contain, 
>>
>>
>> The numbers, the recursive properties, the recursively enumerable 
>> properties, the Sigma_i truth, well a lot of things.
>> You have the recursive (the simplest in our comp setting), then the 
>> recursively enumerable (the universal machines, notably), then a whole 
>> hierarchy of non computable, but still nameable sets of numbers, or 
>> machine's properties. 
>>
>
> You say they are nameable, but I don't believe you. It is not as if a 
> number would ever need to go by some other name. Why not refer to it by its 
> precise coordinate within Arithmetic Truth?
>
>
> Because it is independent of the choice of the computational base, like 
> volume in geometry. If you can name something with Fortran, then you can 
> name it with numbers, combinators, etc. Nameability is "machine 
> independent", like the modal logics G, G*, Z, etc;
>

What you are calling names should be made of binary numbers though. I'm 
asking why binary numbers should ever need any non-binary, non-digital, 
non-quantitative names.
 

>
>
>
>  
>
>> then you get the non nameable properties, like truth (for number 
>> relations) but very plausibly, things like consciousness, persons, etc. 
>> Some of those non nameable things can still be studied by machines, 
>> through assumptions, and approximations.
>> Above that you have the truths that you cannot even approximate, etc.
>> Arithmetical truth is big, *very* big.
>>
>
> Big, sure, but that's exactly why it needs no names at all. 
>
>
> It is worse than that. Many things cannot have a name.
>

What can they have?
 

>
>
> Each feature and meta-feature of Arithmetic tru

Re: A challenge for Craig

2013-10-09 Thread Bruno Marchal


On 09 Oct 2013, at 15:43, Craig Weinberg wrote:




On Wednesday, October 9, 2013 3:18:52 AM UTC-4, Bruno Marchal wrote:

On 08 Oct 2013, at 20:12, Craig Weinberg wrote:




On Tuesday, October 8, 2013 12:34:57 PM UTC-4, Bruno Marchal wrote:

On 08 Oct 2013, at 17:59, Craig Weinberg wrote:



Why isn't computationalism the consequence of quanta though?


Human computationalism does.

But I want the simplest conceptual theory, and integers are easier  
to define than human integers.


I'm not sure how that relates to computationalism being something  
other than quanta. Humans are easier to define to themselves than  
integers. A baby can be themselves for years before counting to 10.


Phenomenologically? Yes.
Fundamentally? That does not follow. It took a long time before  
discovering the Higgs-Englert-Brout Boson.


It doesn't have to follow, but it can be a clue. The Higgs is a  
particular type of elementary phenomenon which is not accessible to  
us directly. That would not be the case with Comp if we were in fact  
using only computation. If our world was composed on every level by  
computation alone,


Hmm... It is not obvious, and not well known, but if comp is true,  
then "our world" is not "made of" computations.
Our world is "only" an appearance in a multi-user arithmetical video  
game or dream.






it wouldn't make much sense for people to have to learn to count  
integers only after years of aesthetic saturation.













What can be computed other than quantities?


Quantities are easily computed by stopping machines, but most  
machines do not stop, and when they introspect, the theory  
explains why they get troubled by consciousness, qualia, etc. Those  
qualia are not really computed; they are part of non computable  
truth, but they still bear on machines, or on the machine's perspective.


Then you still have an explanatory gap.


But that is a good point for comp, as it explains why there is a  
gap, and it imposes on it a precise mathematical structure.


But there's nothing on the other side of the gap from the comp view.  
You're still just finding a gap in comp that comp says is supposed  
to be there and then presuming that the entire universe other than  
comp must fit in there. If there is nothing within comp to  
specifically indicate color or flavor or kinesthetic sensations, or  
even the lines and shapes of geometry, then I don't see how comp can  
claim to be a theory that relates to consciousness.


There is something in the comp theory which specifically indicates  
qualia.

The gaps in the intensional nuances could very well do that.







How can anything which is non-computable bear on the computation of  
an ideal machine?


That is the whole subject of an entire field: recursion theory, or  
theoretical computer science.


Ok, so what is an example of something that specifically bridges a  
kind of computation with something personal that comp claims to  
produce?


That is technical, and you need to study AUDA. I would say that *all*  
statements in X1* minus X1 produce that. No doubt many open problems  
have to be solved to progress here.
But even if that fails, you have not produced an argument that it is  
not possible.











What connects the qualia to the quanta, and why isn't the qualia  
just a quantitative summary of quanta?


Qualia are not connected to quanta.

Then what is even the point of Comp? To me quanta = all that relates  
to quantity and certain measurement. If they are not connected to  
quanta then a machine that is made of quanta can't possibly produce  
qualia that has no connection to it. That's no better than Descartes.


I realize that you have not yet really studied comp. Physical machines  
are not made of quanta. Quanta appear only as first person plural  
sharable qualia. They are observable patterns common to people  
belonging to highly splitting or differentiating computations, most  
plausibly the "linear computations" (like in QM).






Quanta are appearances in the qualia theory, and they are not  
quantitative; they are lived from the first person plural view.


Quanta aren't quantitative?


They might be. The fact that they come from qualia does not prevent  
them from having a quantitative aspect.










If Arithmetic truth is full of non nameable things, what nameable  
things does it also contain,


The numbers, the recursive properties, the recursively enumerable  
properties, the Sigma_i truth, well a lot of things.
You have the recursive (the simplest in our comp setting), then the  
recursively enumerable (the universal machines, notably), then a  
whole hierarchy of non computable, but still nameable sets of  
numbers, or machine's properties,


You say they are nameable, but I don't believe you. It is not as if  
a number would ever need to go by some other name. Why not refer to  
it by its precise coordinate within Arithmetic Truth?


Because it is independent of the choice of the computational base,  
like 

Re: A challenge for Craig

2013-10-09 Thread Craig Weinberg


On Wednesday, October 9, 2013 11:18:03 AM UTC-4, Bruno Marchal wrote:
>
>
> On 09 Oct 2013, at 15:43, Craig Weinberg wrote:
>
>
>
> On Wednesday, October 9, 2013 3:18:52 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 08 Oct 2013, at 20:12, Craig Weinberg wrote:
>>
>>
>>
>> On Tuesday, October 8, 2013 12:34:57 PM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 08 Oct 2013, at 17:59, Craig Weinberg wrote:
>>>
>>
>>>
>>> Why isn't computationalism the consequence of quanta though? 
>>>
>>>
>>> Human computationalism does.
>>>
>>> But I want the simplest conceptual theory, and integers are easier to 
>>> define than human integers.
>>>
>>
>> I'm not sure how that relates to computationalism being something other 
>> than quanta. Humans are easier to define to themselves than integers. A 
>> baby can be themselves for years before counting to 10. 
>>
>>
>> Phenomenologically? Yes.
>> Fundamentally? That does not follow. It took a long time before 
>> discovering the Higgs-Englert-Brout Boson.
>>
>
> It doesn't have to follow, but it can be a clue. The Higgs is a particular 
> type of elementary phenomenon which is not accessible to us directly. That 
> would not be the case with Comp if we were in fact using only computation. 
> If our world was composed on every level by computation alone,
>
>
> Hmm... It is not obvious, and not well known, but if comp is true, then 
> "our world" is not "made of" computations. 
> Our world is "only" an appearance in a multi-user arithmetical video game 
> or dream. 
>

That's the problem though, what is an "appearance"? How can an arithmetic 
game become video or dreamlike in any way? This is what I keep talking 
about - the Presentation problem. Comp is pulling aesthetic experiences out 
of thin air, without a specific theory of what they are or how they are 
manufactured by computation or arithmetic. 
 

>
>
>
>
>
> it wouldn't make much sense for people to have to learn to count integers 
> only after years of aesthetic saturation.
>  
>
>>
>>
>>
>>  
>>
>>>
>>>
>>>
>>>
>>>
>>> What can be computed other than quantities?
>>>
>>>
>>> Quantities are easily computed by stopping machines, but most machines 
>>> do not stop, and when they introspect, the theory explains why they get 
>>> troubled by consciousness, qualia, etc. Those qualia are not really 
>>> computed; they are part of non computable truth, but they still bear on 
>>> machines, or on the machine's perspective.
>>>
>>
>> Then you still have an explanatory gap.
>>
>>
>> But that is a good point for comp, as it explains why there is a gap, and 
>> it imposes on it a precise mathematical structure.
>>
>
> But there's nothing on the other side of the gap from the comp view. 
> You're still just finding a gap in comp that comp says is supposed to be 
> there and then presuming that the entire universe other than comp must fit 
> in there. If there is nothing within comp to specifically indicate color or 
> flavor or kinesthetic sensations, or even the lines and shapes of geometry, 
> then I don't see how comp can claim to be a theory that relates to 
> consciousness.
>
>
> There is something in the comp theory which specifically indicates qualia.
> The gaps in the intensional nuances could very well do that. 
>

But flavors and colors aren't gaps. It would be like painting with 
invisible paint. How does theory become visible to itself, and why would it?
 

>
>
>
>
>>
>>
>> How can anything which is non-computable bear on the computation of an 
>> ideal machine? 
>>
>>
>> That is the whole subject of an entire field: recursion theory, or 
>> theoretical computer science.
>>
>
> Ok, so what is an example of something that specifically bridges a kind of 
> computation with something personal that comp claims to produce?
>
>
> That is technical, and you need to study AUDA. I would say that *all* 
> statements in X1* minus X1 produce that. No doubt many open problems have 
> to be solved to progress here.
> But even if that fails, you have not produced an argument that it is not 
> possible.
>

What is an example of an X1* minus X1 statement that produces something 
personal and non-computable?


>
>
>
>  
>
>>
>>
>>
>>
>> What connects the qualia to the quanta, and why isn't the qualia just 
>> a quantitative summary of quanta?
>>
>>
>> Qualia are not connected to quanta.
>>
>
> Then what is even the point of Comp? To me quanta = all that relates to 
> quantity and certain measurement. If they are not connected to quanta then 
> a machine that is made of quanta can't possibly produce qualia that has no 
> connection to it. That's no better than Descartes.
>
>
> I realize that you have not yet really studied comp. Physical machines are 
> not made of quanta. Quanta appear only as first person plural sharable 
> qualia. They are observable patterns common to people belonging to highly 
> splitting or differentiating computations, most plausibly the "linear 
> computations" (like in QM).
>

I can agree with all of that, I would say that

Re: A challenge for Craig

2013-10-09 Thread Platonist Guitar Cowboy
On Wed, Oct 9, 2013 at 8:39 PM, Craig Weinberg wrote:

>
>
> On Wednesday, October 9, 2013 11:18:03 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 09 Oct 2013, at 15:43, Craig Weinberg wrote:
>>
>>
>>
>> On Wednesday, October 9, 2013 3:18:52 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 08 Oct 2013, at 20:12, Craig Weinberg wrote:
>>>
>>>
>>>
>>> On Tuesday, October 8, 2013 12:34:57 PM UTC-4, Bruno Marchal wrote:


 On 08 Oct 2013, at 17:59, Craig Weinberg wrote:

>>>

 Why isn't computationalism the consequence of quanta though?


 Human computationalism does.

 But I want the simplest conceptual theory, and integers are easier to
 define than human integers.

>>>
>>> I'm not sure how that relates to computationalism being something other
>>> than quanta. Humans are easier to define to themselves than integers. A
>>> baby can be themselves for years before counting to 10.
>>>
>>>
>>> Phenomenologically? Yes.
>>> Fundamentally? That does not follow. It took a long time before
>>> discovering the Higgs-Englert-Brout Boson.
>>>
>>
>> It doesn't have to follow, but it can be a clue. The Higgs is a
>> particular type of elementary phenomenon which is not accessible to us
>> directly. That would not be the case with Comp if we were in fact using
>> only computation. If our world was composed on every level by computation
>> alone,
>>
>>
>> Hmm It is not obvious, and not well known, but if comp is true, then
>> "our world" is not "made of" computations.
>> Our world is "only" an appearance in a multi-user arithmetical video game
>> or dream.
>>
>
> That's the problem though, what is an "appearance"? How can an arithmetic
> game become video or dreamlike in any way? This is what I keep talking
> about - the Presentation problem. Comp is pulling aesthetic experiences out
> of thin air, without a specific theory of what they are or how they are
> manufactured by computation or arithmetic.
>

No, that is you and your personalized definition of aesthetic experience
that has nothing to do with any standard interpretation of the term, and
where you default to "what I like about aesthetic..." free association to
fit your current mood and the exchange you're involved in, when prompted
these days.

Comp doesn't need to pull aesthetic experience, in its standard 
interpretations, from anywhere. In the case of music, the vast majority of 
music theories, if not all, are number based. Multisense realism is pulling 
aesthetic experience from thin air, as you constantly evade the question:

I can see how I can derive music and improvisation from counting and
numbers; can multisense realism show me how to do the same? Because given
all the claims on how central aesthetic experience is, it should at least
offer some clues, if not be even better than numbers.
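
For what it is worth, here is one minimal reading of that claim in Python -
a just-intonation scale built from nothing but integer ratios, and a small
"improvisation" generated by modular counting. A toy sketch on my own
assumptions, not a theory of music:

    from fractions import Fraction

    # A just-intonation major scale: pure integer ratios over a tonic.
    tonic = 264   # Hz, roughly middle C
    ratios = [Fraction(a, b) for a, b in
              [(1, 1), (9, 8), (5, 4), (4, 3), (3, 2), (5, 3), (15, 8), (2, 1)]]
    scale = [float(tonic * r) for r in ratios]
    print([round(f, 1) for f in scale])   # 264.0, 297.0, 330.0, 352.0, ...

    # A tiny "improvisation": scale degrees picked by nothing but counting,
    # via a linear congruential walk - melody from arithmetic alone.
    degree, melody = 0, []
    for step in range(1, 17):
        degree = (5 * degree + step) % 7
        melody.append(round(scale[degree], 1))
    print(melody)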



>
>
>
>>
>>
>>
>>
>>
>> it wouldn't make much sense for people to have to learn to count integers
>> only after years of aesthetic saturation.
>>
>>
>>>
>>>
>>>
>>>
>>>





 What can be computed other than quantities?


Quantities are easily computed by stopping machines, but most machines
do not stop, and when they introspect, the theory explains why they get
troubled by consciousness, qualia, etc. Those qualia are not really
computed; they are part of non computable truth, but they still bear on
machines, or on the machine's perspective.

>>>
>>> Then you still have an explanatory gap.
>>>
>>>
>>> But that is a good point for comp, as it explains why there is a gap,
>>> and it imposes on it a precise mathematical structure.
>>>
>>
>> But there's nothing on the other side of the gap from the comp view.
>> You're still just finding a gap in comp that comp says is supposed to be
>> there and then presuming that the entire universe other than comp must fit
>> in there. If there is nothing within comp to specifically indicate color or
>> flavor or kinesthetic sensations, or even the lines and shapes of geometry,
>> then I don't see how comp can claim to be a theory that relates to
>> consciousness.
>>
>>
>> There is something in the comp theory which specifically indicates qualia.
>> The gaps in the intensional nuances could very well do that.
>>
>
> But flavors and colors aren't gaps.
>

You do not know what Bruno is referring to and are changing the question.
If you do know which intensional nuances he is referring to, then explain
them and why gaps as colors would be inappropriate.


> It would be like painting with invisible paint.
>

UV paint. $5.40 on eBay.


> How does theory become visible to itself, and why would it?
>

Black lights. To party and have indiscriminate fun, in this case. PGC


>
>
