Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
On 2/21/07, Jesse Mazer <[EMAIL PROTECTED]> wrote:


>
> Stathis Papaioannou wrote:
> >
> >
> >It is a complicated issue. Patients with psychotic illnesses can
> sometimes
> >reflect on a past episode and see that they were unwell then even though
> >they insisted they were not at the time. They then might say something
> >like,
> >"I don't know I'm unwell when I'm unwell, but when I'm well I know I'm
> >well". OK, but then how do you know that you're not unwell now? How do I
> >know I'm not unwell now? We rely on other people telling us (although of
> >course we won't believe them if we lack insight into our own illness),
> but
> >in the example of fading qualia we would (a) not notice that the qualia
> >were
> >fading, a kind of delusion or anosognosia, and (b) no-one else would
> notice
> >either, because by whatever mechanism the external appearance of
> conscious
> >behaviour would be kept up. So how do I know I'm not that special kind of
> >zombie or partial zombie now? I feel absolutely sure that I am not but
> then
> >I would think that, wouldn't I? The fact is, it happens all the time, to
> at
> >least 1% of the population.
> >
> >Stathis Papaioannou
>
> But are you claiming that psychotic patients not only are mistaken about
> what's going on in the external world, but are mistaken about the actual
> qualia they experience? i.e. if a psychotic says he's hearing voices and
> thinks they are martians sending him messages via microwaves, not only is
> he
> mistaken that the voices come from martians as opposed to being
> hallucinations, but he's mistaken that he's having the subjective
> experience
> of hearing voices in the first place? I've never heard of a condition like
> that...your example of recognizing one was unwell in the past is more like
> recognizing the things one was hearing and seeing were hallucinatory
> rather
> than accurate perceptions of the external world, not recognizing that one
> was not hearing and seeing anything at all, even hallucinations.
>
> Jesse


A patient says that his leg is paralysed, behaves as if his leg is
paralysed, but the clinical signs and investigations are not consistent with
a paralysed leg. The diagnosis of hysterical paralysis is made. A patient
claims to hear the voices of people nobody else can see or hear, and
responds to the voices as if they are there, but the clinical signs and the
response to antipsychotic treatment are not consistent with the auditory
hallucinations experienced by people with psychotic illness. The diagnosis
of hysterical hallucinations is
made: that is, they aren't hearing voices that aren't there, they only
*think* they're hearing voices that aren't there. As with the leg, some of
these patients may be malingering for various reasons, but there will be
some who genuinely experience the symptom.

However, that's a digression. My point was simply that people can be
deluded, for example thinking that they can see when they in fact are blind,
despite extremely strong evidence that they are deluded. If this is the
case, then surely it would be possible to maintain the delusion that nothing
remarkable is happening as your qualia gradually fade if there were *no*
external evidence of your blindness, because electronic chips are taking
over your brain function. I don't actually think this is likely to happen,
and the real examples I gave are presumably due to specific (though
ill-understood) neurological dysfunction causing lack of insight, since
generally we *do* notice when our perceptions are affected due to
neurological lesions. Nevertheless, the examples do show that it is possible
for qualia to fade away without the patient/victim noticing, and presumably
without anyone else noticing if the unconscious component of the neuron's
functionality is replaced.

Stathis Papaioannou




Re: Searles' Fundamental Error

2007-02-20 Thread Jesse Mazer


Stathis Papaioannou wrote:
>
>
>It is a complicated issue. Patients with psychotic illnesses can sometimes
>reflect on a past episode and see that they were unwell then even though
>they insisted they were not at the time. They then might say something 
>like,
>"I don't know I'm unwell when I'm unwell, but when I'm well I know I'm
>well". OK, but then how do you know that you're not unwell now? How do I
>know I'm not unwell now? We rely on other people telling us (although of
>course we won't believe them if we lack insight into our own illness), but
>in the example of fading qualia we would (a) not notice that the qualia 
>were
>fading, a kind of delusion or anosognosia, and (b) no-one else would notice
>either, because by whatever mechanism the external appearance of conscious
>behaviour would be kept up. So how do I know I'm not that special kind of
>zombie or partial zombie now? I feel absolutely sure that I am not but then
>I would think that, wouldn't I? The fact is, it happens all the time, to at
>least 1% of the population.
>
>Stathis Papaioannou

But are you claiming that psychotic patients not only are mistaken about 
what's going on in the external world, but are mistaken about the actual 
qualia they experience? i.e. if a psychotic says he's hearing voices and 
thinks they are martians sending him messages via microwaves, not only is he 
mistaken that the voices come from martians as opposed to being 
hallucinations, but he's mistaken that he's having the subjective experience 
of hearing voices in the first place? I've never heard of a condition like 
that...your example of recognizing one was unwell in the past is more like 
recognizing the things one was hearing and seeing were hallucinatory rather 
than accurate perceptions of the external world, not recognizing that one 
was not hearing and seeing anything at all, even hallucinations.

Jesse






Re: ASSA and Many-Worlds

2007-02-20 Thread Hal Ruhl

Hi Bruno:

As to my grasp of the UDA, I think I understood it at one time well 
enough for my purpose, but that will become clearer as I progress 
through my model.  There are not too many more steps.

Examining the complete list of possible properties of objects, we 
should find "Empty of all information".

This would be on a sub list.  It would form at least part of the sub 
list that could be assigned the name "The Nothing" or just "Nothing".

The Nothing would also be incomplete if there was a meaningful 
question it must answer.  The question would be "Can The Nothing 
sustain its property of being empty of information?"  It cannot 
answer this question, so it is incomplete.  However, it must answer 
this question, so its incompleteness is unstable.  It must eventually 
eat its way into the rest of the list, so to speak - eventually having 
a countably infinite number of properties.  This is the source of my 
model's dynamic.

The list itself has properties and these are on a sub list.

We actually do not need the list if we allow for simplicity that the 
objects it and its sub lists define are themselves the sufficient 
elements of the model.  The list is then an object and contains 
itself.  It is infinitely nested.  Each nesting has its unstably 
incomplete Nothing.  An infinite nesting of dynamic potential.

If the list is complete, which seems certain, then it should be [I 
believe] inconsistent [it will answer all questions all ways], which we 
have touched on before.  The inconsistency is inherited by the 
dynamic, so the dynamic has a random content.

All levels of randomness of trips to completeness are allowed.

A UD trace, if I understand it correctly, would be equivalent to a 
Nothing on a reasonably monotonic trip to completeness.
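
For concreteness, here is a minimal sketch of what such a trace might look
like, assuming only an abstract enumeration of programs and a one-step
execution function (both placeholders, not Bruno's actual construction):

    def programs():
        # Placeholder enumeration of "all programs": just their indices.
        i = 0
        while True:
            yield i
            i += 1

    def step(program, n):
        # Placeholder for "run one more step of this program".
        return (program, n)

    def dovetail(stages):
        # At stage k, run one further step of each of the first k programs.
        trace, steps_done, known = [], {}, []
        enum = programs()
        for _ in range(stages):
            known.append(next(enum))
            for p in known:
                n = steps_done.get(p, 0)
                trace.append(step(p, n))
                steps_done[p] = n + 1
        return trace

    print(dovetail(3))  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]

Every program is run arbitrarily far while none has to halt first, which is
why the trace grows monotonically without ever being finished.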

Yours

Hal Ruhl



At 12:10 PM 2/20/2007, you wrote:

>Hi Hal,
>
>You say my theory is a subset of yours. I don't understand. I have no
>theory, just a deductive argument that IF we are (digital) machines then
>"the physical world" is in our head. Then I show how a Universal Turing
>Machine can discover it in its own "head". This makes comp, or
>variants, testable.
>
>I have no theory (beside theory of number and machine), I'm just
>listening to the machine. That's all. Then I compare the comp-physics
>with empirical physics.
>
>Do you grasp the Universal Dovetailer Argument? Ask if not.
>
>Regards,
>
>Bruno





Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
On 2/21/07, Jesse Mazer <[EMAIL PROTECTED]> wrote:
>
>
> Stathis Papaioannou wrote:
> >
> >On 2/20/07, Jesse Mazer <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > >I would bet on functionalism as the correct theory of mind for
> various
> > > >reasons, but I don't see that there is anything illogical in the
> >possibility
> > > >that consciousness is substrate-dependent. Let's say that when you
> rub
> > > two
> > > >carbon atoms together they have a scratchy experience, whereas when
> you
> > > rub
> > > >two silicon atoms together they have a squirmy experience. This could
> > > just
> > > >be a mundane fact about the universe, no more mysterious than any
> other
> > > >basic physical fact.  What is illogical, however, is the "no causal
> > > effect"
> > > >criterion if this is called epiphenomenalism. If the effect is purely
> >and
> > > >necessarily on first person experience, it's no less an effect; we
> >might
> > > >not
> > > >notice if the carbon atoms were zombified, but the carbon atoms would
> > > >certainly notice. I think it all comes down to the deep-seated and
> very
> > > >obviously wrong idea that only third person empirical data is genuine
> > > >empirical data. It is a legitimate concern of science that data
> should
> >be
> > > >verifiable and experiments repeatable, but it's taking it a bit far
> to
> > > >conclude from this that we are therefore all zombies.
> > > >
> > > >Stathis Papaioannou
> > >
> > > One major argument against the idea that qualia and/or consciousness
> >could
> > > be substrate-dependent is what philosopher David Chalmers refers to as
> >the
> > > "dancing qualia" and "fading qualia" arguments, which you can read
> more
> > > about at http://consc.net/papers/qualia.html . As a
> thought-experiment,
> > > imagine gradually replacing neurons in my brain with functionally
> > > identical
> > > devices whose physical construction was quite different from neurons
> > > (silicon chips emulating the input and output of the neurons they
> > > replaced,
> > > perhaps). If one believes that this substrate is associated with
> either
> > > different qualia or absent qualia, then as one gradually replaces more
> >and
> > > more of my brain, they'll either have to be a sudden discontinuous
> >change
> > > (and it seems implausible that the replacement of a single neuron
> would
> > > cause such a radical change) or else a gradual shift or fade-out of
> the
> > > qualia my brain experiences...but if I were noticing such a shift or
> > > fade-out, I would expect to be able to comment on it, and yet the
> > > assumption
> > > that the new parts are functionally identical means my behavior should
> >be
> > > indistinguishable from what it would be if my neurons were left alone.
> >And
> > > if we suppose that I might be having panicked thoughts about a change
> in
> > > my
> > > perceptions yet find that my voice and body are acting as if nothing
> is
> > > wrong, and there is no neural activity associated with these panicked
> > > thoughts, then there would have to be a radical disconnect between
> > > subjective experiences and physical activity in my brain, which would
> > > contradict the assumption of supervenience (see
> > > http://philosophy.uwaterloo.ca/MindDict/supervenience.html ) and lead
> to
> > > the
> > > possibility of radical mind/body disconnects like rocks and trees
> having
> > > complex thoughts and experiences that have nothing to do with any
> >physical
> > > activity within them.
> > >
> > > Jesse
> >
> >
> >  It's a persuasive argument, but I can think of a mechanism whereby your
> >qualia can fade away and you wouldn't notice. In some cases of cortical
> >blindness, in which the visual cortex is damaged but the rest of the
> visual
> >pathways intact, patients insist that they are not blind and come up with
> >explanations as to why they fall over and walk into things, eg. they
> accuse
> >people of putting obstacles in their way while their back is turned. This
> >isn't just denial because it is specific to cortical lesions, not
> blindness
> >due to other reasons. If these patients had advanced cyborg implants they
> >could presumably convince the world, and be convinced themselves, that
> >their
> >visual perception had not suffered when in fact they can't see a thing.
> >Perhaps gradual cyborgisation of the brain as per Hans Moravec would lead
> >to
> >a similar, gradual fading of thoughts and perceptions; the external
> >observer
> >would not notice any change and the subject would not notice any change
> >either, until he was dead, replaced by a zombie.
> >
> >Stathis Papaioannou
>
> That's an interesting analogy, but it seems to me there's an important
> difference between this real case and the hypothetical fading qualia case
> since presumably the brain activity associated with inventing false visual
> sensations is different from the activity associated with visual
> sensations
> that are based on actual signals from the optic nerve. Additionally, we

Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
They're completely blind, walking into things and falling over. They insist
that they see things and they confabulate, claiming that they see tables and
chairs if they believe they are in a dining room, that they can see the face
of someone they know when they are talking to them, and so on. It is an
example of anosognosia, the condition where someone has a disease or
disability and does not recognise it. The term is not usually used in
psychiatric illness, where we usually talk of "lack of insight", but it is
the same sort of thing.

Stathis Papaioannou

On 2/21/07, Brent Meeker <[EMAIL PROTECTED]> wrote:
>
>
> Stathis Papaioannou wrote:
> >
> >
> > On 2/20/07, *Jesse Mazer* <[EMAIL PROTECTED]
> > > wrote:
> >
> >
> >  >I would bet on functionalism as the correct theory of mind for
> various
> >  >reasons, but I don't see that there is anything illogical in the
> > possibility
> >  >that consciousness is substrate-dependent. Let's say that when you
> > rub two
> >  >carbon atoms together they have a scratchy experience, whereas
> > when you rub
> >  >two silicon atoms together they have a squirmy experience. This
> > could just
> >  >be a mundane fact about the universe, no more mysterious than any
> > other
> >  >basic physical fact.  What is illogical, however, is the "no
> > causal effect"
> >  >criterion if this is called epiphenomenalism. If the effect is
> > purely and
> >  >necessarily on first person experience, it's no less an effect; we
> > might
> >  >not
> >  >notice if the carbon atoms were zombified, but the carbon atoms
> would
> >  >certainly notice. I think it all comes down to the deep-seated and
> > very
> >  >obviously wrong idea that only third person empirical data is
> genuine
> >  >empirical data. It is a legitimate concern of science that data
> > should be
> >  >verifiable and experiments repeatable, but it's taking it a bit
> far to
> >  >conclude from this that we are therefore all zombies.
> >  >
> >  >Stathis Papaioannou
> >
> > One major argument against the idea that qualia and/or consciousness
> > could
> > be substrate-dependent is what philosopher David Chalmers refers to
> > as the
> > "dancing qualia" and "fading qualia" arguments, which you can read
> more
> > about at http://consc.net/papers/qualia.html . As a
> thought-experiment,
> > imagine gradually replacing neurons in my brain with functionally
> > identical
> > devices whose physical construction was quite different from neurons
> > (silicon chips emulating the input and output of the neurons they
> > replaced,
> > perhaps). If one believes that this substrate is associated with
> either
> > different qualia or absent qualia, then as one gradually replaces
> > more and
> > more of my brain, they'll either have to be a sudden discontinuous
> > change
> > (and it seems implausible that the replacement of a single neuron
> would
> > cause such a radical change) or else a gradual shift or fade-out of
> the
> > qualia my brain experiences...but if I were noticing such a shift or
> > fade-out, I would expect to be able to comment on it, and yet the
> > assumption
> > that the new parts are functionally identical means my behavior
> > should be
> > indistinguishable from what it would be if my neurons were left
> > alone. And
> > if we suppose that I might be having panicked thoughts about a
> > change in my
> > perceptions yet find that my voice and body are acting as if nothing
> is
> > wrong, and there is no neural activity associated with these
> panicked
> > thoughts, then there would have to be a radical disconnect between
> > subjective experiences and physical activity in my brain, which
> would
> > contradict the assumption of supervenience (see
> > http://philosophy.uwaterloo.ca/MindDict/supervenience.html ) and
> > lead to the
> > possibility of radical mind/body disconnects like rocks and trees
> > having
> > complex thoughts and experiences that have nothing to do with any
> > physical
> > activity within them.
> >
> > Jesse
> >
> >
> >  It's a persuasive argument, but I can think of a mechanism whereby your
> > qualia can fade away and you wouldn't notice. In some cases of cortical
> > blindness, in which the visual cortex is damaged but the rest of the
> > visual pathways intact, patients insist that they are not blind and come
> > up with explanations as to why they fall over and walk into things, eg.
> > they accuse people of putting obstacles in their way while their back is
> > turned. This isn't just denial because it is specific to cortical
> > lesions, not blindness due to other reasons. If these patients had
> > advanced cyborg implants they could presumably convince the world, and
> > be convinced themsel

Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
On 2/21/07, Brent Meeker <[EMAIL PROTECTED]> wrote:

> A human with an intact brain behaving like an awake human could not
> > really be a zombie unless you believe in magic. However, it is possible
> > to conceive of intelligently-behaving beings who do not have an internal
> > life because they lack the right sort of brains. I am not suggesting
> > that this is the case and there are reasons to think it is unlikely to
> > be the case, but it is not ruled out by any empirical observation.
> >
> > Stathis Papaioannou
>
> The problem is that there doesn't seem to be any conceivable observation
> that could rule it out.  So by Popper's rule it is a not a scientific
> proposition but rather a metaphysical one.  This is another way of saying
> that there is no agreed upon way of assigning a truth or probability value
> to it.
>
> Brent Meeker


This is the usual accusation, but in one sense first person experience is
perfectly easily verified - by the first person. This is a problem for
science because we want our experiments to be third person repeatable and
verifiable, otherwise anyone could make up anything, but arguably this is a
practical rather than philosophical requirement.

Stathis Papaioannou




Re: Computer reads minds

2007-02-20 Thread Stathis Papaioannou
On 2/21/07, Brent Meeker <[EMAIL PROTECTED]> wrote:

http://news.bbc.co.uk/2/hi/health/6346069.stm
>
> But is the *computer* conscious of the decision?
>
> Brent Meeker


My hand in a sense reads my mind because it moves when I will it to move,
but normally it is not thought to participate in the conscious decision,
even with those experiments showing that movement may be initiated before
the conscious decision to move (Libet).

Stathis Papaioannou




Re: ASSA and Many-Worlds

2007-02-20 Thread John Mikes
Thanks, Bruno, lots of remarkable notions in your remarks (I mean: I can
write remarks to them - sorry for the pun). Let me interject in italics
below.
John

On 2/5/07, Bruno Marchal <[EMAIL PROTECTED]> wrote:
>
> Hi John,
>
>
> On 03-Feb-07, at 17:20, John Mikes wrote:
>
> > Stathis, Bruno,
> >
> >  This summary sounds fine if I accept to 'let words go'. Is there a
> > way to
> >  'understand' (=use with comprehension) the 'words' used here without
> > the
> >  'technical' acceptance of the theoretical platform?
>
>
> I am not sure. Avoiding technical acceptance of a theoretical platform
> can be done for presenting result, not really for discussing about
> them.


Before discussing, I want to 'understand' - definitely without first
'accepting' the platform I may discuss. One has to be able to express ideas
for people who do not know them in advance.

>  There are sacrosanct 'words' used without explaining them (over and
> > over again?, BUT
> >  at least once for the benefit of that newcomer 'alien' who comes from
> > another vista' ,
> >  like
> >
> >  (absolute?) probability - is there such a thing as probability, the
> > figment that
> >   if it happend x times it WILL happen the (X+one)th time as well?
>
>
> This is inductive inference, not probability.


There are probability discussions going on on 2 lists. All fall under your
term. Do you have an example of probability (as pointed out from a
multitude of possible occurrences)?
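
One standard example, offered here as an illustration rather than from the
original exchange: Laplace's rule of succession treats the "(x+1)th time"
intuition probabilistically. With a uniform prior on the unknown chance, x
successes in x trials make the next success probable but never certain:

    from fractions import Fraction

    def rule_of_succession(successes, trials):
        # Posterior probability that the next trial succeeds,
        # given a uniform prior on the unknown chance of success.
        return Fraction(successes + 1, trials + 2)

    for x in (1, 10, 100):
        print(x, rule_of_succession(x, x))  # 2/3, 11/12, 101/102

The probability approaches 1 as x grows but never reaches it, which is
exactly the gap between probability and the inductive "WILL happen".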

> combined with
> >   the statistical hoax of counting from select members in a limited
> > group the version
> >   'A' models and assuming its 'probability'?
>
>
> That is why to use probability and/or any uncertainty measure we have
> to be clear about the axioms we are willing to admit, at least for the
> sake of some argument.


I do not accept 'axioms'; they are postulated to make a theoretical position
feasible. I will come back to this at your 'numbers'.

>
> >  observer moment (observer, for that matter), whether the moment is a
> > time-concept
> >   in it and the 'observer' must be conscious (btw: identifying
> > 'conscious')
>
>
> The expression "observer moment" has originated with Nick Bostrom, in
> context similar to the doomsday argument. I would call them "first
> person observer moment". I will try to explain how to translate them in
> comp.


Translate it please first into plain English. Without those symbols which
may be looked up in half an hour just to find 8 other ones in the
explanation which then can be looked up to find 5-6 further ones in each and
so on.
This is the reason for my FIRST par question.

>
> >  number (in the broader sense, yet applied as real integers) (Btw: are
> > the 'non-Arabic'
> >   numbers also numbers? the figments of evolutionary languages
> > alp[habetical or not?
> >   Is zero a number? Was not in "Platonia" - a millennium before its
> > invention(?!)
>
>
Numbers, by default, are the so-called "natural numbers": 0, 1, 2, 3, 4,
...
They correspond to the number of strokes in the following sequence of
sets:
{ }, { I }, { II }, { III }, { IIII }, { IIIII }, { IIIIII },
{ IIIIIII }, { IIIIIIII }, etc.
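
A throwaway sketch of the correspondence, for concreteness only:

    def strokes(n):
        # Render the natural number n as a set of n strokes.
        return "{ " + "I" * n + " }" if n > 0 else "{ }"

    print([strokes(n) for n in range(5)])
    # ['{ }', '{ I }', '{ II }', '{ III }', '{ IIII }']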


Does that mean that you cannot distinguish 3, 30, 101010, 120, 1002,
etcetera, all symbolised by { III } (plus the unmarked zeroes)?
(You did not include the hiatus and position, as number, as I see.)
Which would nicely fit into the "Number=God" statement, as infinite
variations of infinitely many meanings.

> Zero is a number by definition. But this is just a question of
> definition. For the Greeks number begins with three. Like the adjective
> "numerous" still rarely applies when only two things are referred to.


Like Teen(ager) starts at 13. Early development counted to 5 (fingers?);
above that it was "many". In Russian there is a singular and a dual case,
then a 'small plural' for 3, 4, 5, then comes the big plural 6-10 in every
decimal size repeatedly.  Ancient Hungarian etc. music was pentatonic. Now
we are decimal (for practical reasons, except for some backward countries,
e.g. the USA) - our toddler computers are binary. So I presume
(induction-wise) that other number systems will be developed in the future,
unless we humbly accept being omniscient, sitting at the top of the
epistemic enrichment.

>
> >  The 'extensions' of machine into (loebian etc.) [non?]-machine, like
> > comp into the nondigital
>
>
>
> ? comp does not go out of the digital, except from a first person point
> of view (but that is a hard technical point, to be sure).


Do you deny analogue computing, or (!!) transcribe the participants of
any analogy into numbers? I called digital computing "toddler" above.

In "english" I would define a "universal (digital) machine", by a
> digital machine potentially capable of emulating (simulating perfectly)
> any other digital machine from a description of it. Today's computers
> and interpreters are typical examples of such "hard" and soft
> (respectively) univers
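
A toy illustration of that definition, with Python itself playing the
universal machine and source text playing the description (an
assumption-laden sketch, not Bruno's formal construction):

    def universal_machine(description, argument):
        # Emulate the machine encoded by `description` on `argument`.
        env = {}
        exec(description, env)          # build the described machine
        return env["machine"](argument)

    doubler = "def machine(x):\n    return 2 * x"
    print(universal_machine(doubler, 21))  # 42

The point of universality is only that one fixed machine suffices to run
every other machine's description.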

Re: Searles' Fundamental Error

2007-02-20 Thread Jesse Mazer

Stathis Papaioannou wrote:
>
>On 2/20/07, Jesse Mazer <[EMAIL PROTECTED]> wrote:
> >
> >
> > >I would bet on functionalism as the correct theory of mind for various
> > >reasons, but I don't see that there is anything illogical in the 
>possibility
> > >that consciousness is substrate-dependent. Let's say that when you rub
> > two
> > >carbon atoms together they have a scratchy experience, whereas when you
> > rub
> > >two silicon atoms together they have a squirmy experience. This could
> > just
> > >be a mundane fact about the universe, no more mysterious than any other
> > >basic physical fact.  What is illogical, however, is the "no causal
> > effect"
> > >criterion if this is called epiphenomenalism. If the effect is purely 
>and
> > >necessarily on first person experience, it's no less an effect; we 
>might
> > >not
> > >notice if the carbon atoms were zombified, but the carbon atoms would
> > >certainly notice. I think it all comes down to the deep-seated and very
> > >obviously wrong idea that only third person empirical data is genuine
> > >empirical data. It is a legitimate concern of science that data should 
>be
> > >verifiable and experiments repeatable, but it's taking it a bit far to
> > >conclude from this that we are therefore all zombies.
> > >
> > >Stathis Papaioannou
> >
> > One major argument against the idea that qualia and/or consciousness 
>could
> > be substrate-dependent is what philosopher David Chalmers refers to as 
>the
> > "dancing qualia" and "fading qualia" arguments, which you can read more
> > about at http://consc.net/papers/qualia.html . As a thought-experiment,
> > imagine gradually replacing neurons in my brain with functionally
> > identical
> > devices whose physical construction was quite different from neurons
> > (silicon chips emulating the input and output of the neurons they
> > replaced,
> > perhaps). If one believes that this substrate is associated with either
> > different qualia or absent qualia, then as one gradually replaces more 
>and
> > more of my brain, they'll either have to be a sudden discontinuous 
>change
> > (and it seems implausible that the replacement of a single neuron would
> > cause such a radical change) or else a gradual shift or fade-out of the
> > qualia my brain experiences...but if I were noticing such a shift or
> > fade-out, I would expect to be able to comment on it, and yet the
> > assumption
> > that the new parts are functionally identical means my behavior should 
>be
> > indistinguishable from what it would be if my neurons were left alone. 
>And
> > if we suppose that I might be having panicked thoughts about a change in
> > my
> > perceptions yet find that my voice and body are acting as if nothing is
> > wrong, and there is no neural activity associated with these panicked
> > thoughts, then there would have to be a radical disconnect between
> > subjective experiences and physical activity in my brain, which would
> > contradict the assumption of supervenience (see
> > http://philosophy.uwaterloo.ca/MindDict/supervenience.html ) and lead to
> > the
> > possibility of radical mind/body disconnects like rocks and trees having
> > complex thoughts and experiences that have nothing to do with any 
>physical
> > activity within them.
> >
> > Jesse
>
>
>  It's a persuasive argument, but I can think of a mechanism whereby your
>qualia can fade away and you wouldn't notice. In some cases of cortical
>blindness, in which the visual cortex is damaged but the rest of the visual
>pathways intact, patients insist that they are not blind and come up with
>explanations as to why they fall over and walk into things, eg. they accuse
>people of putting obstacles in their way while their back is turned. This
>isn't just denial because it is specific to cortical lesions, not blindness
>due to other reasons. If these patients had advanced cyborg implants they
>could presumably convince the world, and be convinced themselves, that 
>their
>visual perception had not suffered when in fact they can't see a thing.
>Perhaps gradual cyborgisation of the brain as per Hans Moravec would lead 
>to
>a similar, gradual fading of thoughts and perceptions; the external 
>observer
>would not notice any change and the subject would not notice any change
>either, until he was dead, replaced by a zombie.
>
>Stathis Papaioannou

That's an interesting analogy, but it seems to me there's an important 
difference between this real case and the hypothetical fading qualia case 
since presumably the brain activity associated with inventing false visual 
sensations is different from the activity associated with visual sensations 
that are based on actual signals from the optic nerve. Additionally, we'd 
still assume it's true that their reports of what they are seeing match the 
visual qualia they are having, even if these visual qualia have no relation 
to the outside world as in dreams or hallucinations. In the case of 
replacing the visual cortex with functionally 

Re: Computer reads minds

2007-02-20 Thread Brent Meeker

John Mikes wrote:
> Brent:
> 
>  2 questions (and pls try to take them seriously):
> 
> 1. do you have a common-sensibly expressible meaning for 'conscious' - 
> in this respect, of machines (computers) being so? ('Conscious of' is 
> easier, but also not obvious.)

Short answer - Not a well defined one.  

Which is why it is an interesting question whether the computer that is 
"reading a mind" partakes of consciousness.  Consider the extension of this 
technology to a person who, through injury, had no other way of communicating 
except as the computer "read their mind".  Would that computer be part of 
consciousness?

> 
> 2. The BBC article allows 'scans' to inform about 'theoretical' (or 
> whatever is the appropriate word they use) topics. Do they have a 
> price-list: how many mAmps refer to 'delusional aberration' vs. how many 
> to 'inductive prognostication'?

Not that I know of.

Brent Meeker





Re: Computer reads minds

2007-02-20 Thread John Mikes
Brent:

 2 questions (and pls try to take them seriously):

1. do you have a common-sensibly expressible meaning for 'conscious' - in
this respect, of machines (computers) being so? ('Conscious of' is easier,
but also not obvious.)

2. The BBC article allows 'scans' to inform about 'theoretical' (or whatever
is the appropriate word they use) topics. Do they have a price-list: how
many mAmps refer to 'delusional aberration' vs. how many to 'inductive
prognostication'?
I think all they could do is differentiate, for thinking already DONE,
whether it was false or believed as true. This is also more than I like (in
the wrong hands).

John

On 2/20/07, Brent Meeker <[EMAIL PROTECTED]> wrote:
>
>
>
>
> http://news.bbc.co.uk/2/hi/health/6346069.stm
>
> But is the *computer* conscious of the decision?
>
> Brent Meeker
>
> >
>




Re: Searles' Fundamental Error

2007-02-20 Thread Brent Meeker

Bruno Marchal wrote:
> 
> On 19-Feb-07, at 20:14, Brent Meeker wrote:
> 
>> Bruno Marchal wrote:
>>>
>>> On 18-Feb-07, at 13:57, Mark Peaty wrote:
>>>
>>> My main problem with Comp is that it needs several unprovable
>>> assumptions to be accepted. For example the Yes Doctor hypothesis,
>>> wherein it is assumed that it must be possible to digitally 
>>> emulate
>>> some or all of a person's body/brain function and the person will
>>> not notice any difference. The Yes Doctor hypothesis is a 
>>> particular
>>> case of the digital emulation hypothesis in which it is asserted
>>> that, basically, ANYTHING can be digitally emulated if one had
>>> enough computational resources available. As this seems to me to 
>>> be
>>> almost a version of Comp [at least as far as I have got with 
>>> reading
>>> Bruno's exposition] then from my simple minded perspective it 
>>> looks
>>> rather like assuming the very thing that needs to be demonstrated.
>>>
>>>
>>>
>>> I disagree. The main basic lesson from the UDA is that IF I am a 
>>> machine
>>> (whatever I am) then the universe (whatever the universe is) cannot 
>>> be a
>>> machine.
>>> Except if I am (literally) the universe (which I assume to be false).
>>>
>>> If I survive classical teleportation, then the physical appearances
>>> emerge from a randomization of all my consistent continuations,
>> What characterizes a consistent continuation?
> 
> 
> 
> 
> It is a continuation in which I am unable to prove 0 = 1.  I can only 
> hope *that* exists.

OK, it means logical consistency relative to some initial axioms (which you 
take to be Peano's for the integers).  But I take it that there are many 
continuations which branch.  Is a continuation a consistent continuation up to 
the last branching vertex before 0=1 is proven - or do only infinite 
continuations count as consistent?  

I also wonder about basing this on Peano's axioms.  Would it matter if we took 
arithmetic mod some very large integer instead, i.e. finite arithmetic as done 
in real computers?  Wouldn't this ruin some of your diagonalizations?
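
One way to see the worry concretely (a sketch, not from the original
message): arithmetic mod N has a wrap-around successor, so the Peano axiom
that 0 is nobody's successor fails, and any construction that needs a fresh
number at every stage - diagonalization included - runs out after N steps:

    N = 2**8  # finite "machine" arithmetic, as in real computers

    def succ(x):
        return (x + 1) % N

    print(succ(N - 1))  # 0 -- the successor wraps around

    # Only finitely many distinct values exist, so any enumeration that
    # needs a fresh number for every index must eventually repeat:
    seen, x = set(), 0
    while x not in seen:
        seen.add(x)
        x = succ(x)
    print(len(seen))  # 256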

Brent Meeker





Computer reads minds

2007-02-20 Thread Brent Meeker



http://news.bbc.co.uk/2/hi/health/6346069.stm

But is the *computer* conscious of the decision?

Brent Meeker 




Re: Searles' Fundamental Error

2007-02-20 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> 
> On 2/20/07, *Jesse Mazer* <[EMAIL PROTECTED] 
> > wrote:
> 
> 
>  >I would bet on functionalism as the correct theory of mind for various
>  >reasons, but I don't see that there is anything illogical in the
> possibility
>  >that consciousness is substrate-dependent. Let's say that when you
> rub two
>  >carbon atoms together they have a scratchy experience, whereas
> when you rub
>  >two silicon atoms together they have a squirmy experience. This
> could just
>  >be a mundane fact about the universe, no more mysterious than any
> other
>  >basic physical fact.  What is illogical, however, is the "no
> causal effect"
>  >criterion if this is called epiphenomenalism. If the effect is
> purely and
>  >necessarily on first person experience, it's no less an effect; we
> might
>  >not
>  >notice if the carbon atoms were zombified, but the carbon atoms would
>  >certainly notice. I think it all comes down to the deep-seated and
> very
>  >obviously wrong idea that only third person empirical data is genuine
>  >empirical data. It is a legitimate concern of science that data
> should be
>  >verifiable and experiments repeatable, but it's taking it a bit far to
>  >conclude from this that we are therefore all zombies.
>  >
>  >Stathis Papaioannou
> 
> One major argument against the idea that qualia and/or consciousness
> could
> be substrate-dependent is what philosopher David Chalmers refers to
> as the
> "dancing qualia" and "fading qualia" arguments, which you can read more
> about at http://consc.net/papers/qualia.html . As a thought-experiment,
> imagine gradually replacing neurons in my brain with functionally
> identical
> devices whose physical construction was quite different from neurons
> (silicon chips emulating the input and output of the neurons they
> replaced,
> perhaps). If one believes that this substrate is associated with either
> different qualia or absent qualia, then as one gradually replaces
> more and
> more of my brain, they'll either have to be a sudden discontinuous
> change
> (and it seems implausible that the replacement of a single neuron would
> cause such a radical change) or else a gradual shift or fade-out of the
> qualia my brain experiences...but if I were noticing such a shift or
> fade-out, I would expect to be able to comment on it, and yet the
> assumption
> that the new parts are functionally identical means my behavior
> should be
> indistinguishable from what it would be if my neurons were left
> alone. And
> if we suppose that I might be having panicked thoughts about a
> change in my
> perceptions yet find that my voice and body are acting as if nothing is
> wrong, and there is no neural activity associated with these panicked
> thoughts, then there would have to be a radical disconnect between
> subjective experiences and physical activity in my brain, which would
> contradict the assumption of supervenience (see
> http://philosophy.uwaterloo.ca/MindDict/supervenience.html ) and
> lead to the
> possibility of radical mind/body disconnects like rocks and trees
> having
> complex thoughts and experiences that have nothing to do with any
> physical
> activity within them.
> 
> Jesse
> 
> 
>  It's a persuasive argument, but I can think of a mechanism whereby your 
> qualia can fade away and you wouldn't notice. In some cases of cortical 
> blindness, in which the visual cortex is damaged but the rest of the 
> visual pathways intact, patients insist that they are not blind and come 
> up with explanations as to why they fall over and walk into things, eg. 
> they accuse people of putting obstacles in their way while their back is 
> turned. This isn't just denial because it is specific to cortical 
> lesions, not blindness due to other reasons. If these patients had 
> advanced cyborg implants they could presumably convince the world, and 
> be convinced themselves, that their visual perception had not suffered 
> when in fact they can't see a thing. Perhaps gradual cyborgisation of 
> the brain as per Hans Moravec would lead to a similar, gradual fading of 
> thoughts and perceptions; the external observer would not notice any 
> change and the subject would not notice any change either, until he was 
> dead, replaced by a zombie.
> 
> Stathis Papaioannou

An interesting example.  Are these people completely blind?  Do they describe 
seeing things?

Brent Meeker


Re: Searles' Fundamental Error

2007-02-20 Thread Bruno Marchal


On 19-Feb-07, at 20:14, Brent Meeker wrote:

>
> Bruno Marchal wrote:
>>
>>
>> On 18-Feb-07, at 13:57, Mark Peaty wrote:
>>
>> My main problem with Comp is that it needs several unprovable
>> assumptions to be accepted. For example the Yes Doctor hypothesis,
>> wherein it is assumed that it must be possible to digitally 
>> emulate
>> some or all of a person's body/brain function and the person will
>> not notice any difference. The Yes Doctor hypothesis is a 
>> particular
>> case of the digital emulation hypothesis in which it is asserted
>> that, basically, ANYTHING can be digitally emulated if one had
>> enough computational resources available. As this seems to me to 
>> be
>> almost a version of Comp [at least as far as I have got with 
>> reading
>> Bruno's exposition] then from my simple minded perspective it 
>> looks
>> rather like assuming the very thing that needs to be demonstrated.
>>
>>
>>
>> I disagree. The main basic lesson from the UDA is that IF I am a 
>> machine
>> (whatever I am) then the universe (whatever the universe is) cannot 
>> be a
>> machine.
>> Except if I am (literally) the universe (which I assume to be false).
>>
>> If I survive classical teleportation, then the physical appearances
>> emerge from a randomization of all my consistent continuations,
>
> What characterizes a consistent continuation?




It is a continuation in which I am unable to prove 0 = 1.  I can only 
hope *that* exists.
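
In standard notation, and as a gloss on why "hope" is the right word:
consistency of a theory T just is the unprovability of 0 = 1, and by
Gödel's second incompleteness theorem a consistent, recursively enumerable
extension of Peano Arithmetic can never prove this about itself:

    \mathrm{Con}(T) \;\equiv\; T \nvdash (0 = 1), \qquad
    T \supseteq \mathrm{PA} \text{ consistent and r.e.} \;\Rightarrow\; T \nvdash \mathrm{Con}(T)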





> Does this refer to one's memory and self-identity or does it mean 
> consistent with the unfolding of some algorithm or does it mean 
> consistent with some physical "law" like unitary evolution in Hilbert 
> space?



One's memory and self-identity. This is difficult to define for an 
arbitrary machine, and that is why I limit myself to correct and 
recursively enumerable extensions of Peano Arithmetic. It is enough for 
finding the comp-correct physical laws.





>
>> and this
>> is enough for explaining why comp predicts that the "physical
>> appearance" cannot be entirely computational (cf first person
>> indeterminacy, etc.).
>>
>>
>> You can remember it by a slogan: If I am a machine, then (not-I) is 
>> not
>> a machine.
>>
>> Of course something like "arithmetical truth" is not a machine, or
>> cannot be produced by a machine.
>>
>> Remember that one of my goals is to show that the comp hyp is 
>> refutable.
>> A priori it entails some highly non-computable things, but then 
>> computer
>> science makes it less easy to refute comp quickly, and empirical evidence
>> (the quantum) seems to support comp, until now.
>>
>>
>> However, as far as I can see it is inherent in the nature of
>> consciousness to reify something.
>>
>>
>> Well, it depends what you mean by reifying. I take it as a high-level
>> intellectual error. When a cat pursues a mouse, it is plausible that the
>> cat believes in the mouse, and reifies it in a sense. If that is your
>> sense of reifying, then I am ok with the idea that consciousness 
>> reifies
>> things.
>> But I prefer to use "reifying" more technically, for making something
>> exist primitively despite the existence of a phenomenological 
>> explanation.
>>
>> Let me be clear, because it could be confusing. A computationalist can
>> guess there is a universe, atoms, etc. He cannot remain consistent if 
>> he
>> believes the universe emerges from its parts, that the universe is made
>> of atoms, etc.
>
> You are saying that these beliefs entail a logical contradiction.  
> What is that contradiction?




OK. I was quick. As I have explained to Peter Jones, it is an 
epistemological contradiction. Primary Matter loses all its apparent 
explanatory power, given that with or without matter, we have only the 
arithmetical relations to justify our next OM (by UDA). It is a bit like 
the particles in Bohm's interpretation of QM. With comp they are 
totally useless.

Bruno



http://iridia.ulb.ac.be/~marchal/





Re: Searles' Fundamental Error

2007-02-20 Thread Brent Meeker

Stathis Papaioannou wrote:
> 
> 
> On 2/20/07, *Mark Peaty* <[EMAIL PROTECTED] 
> > wrote:
> 
> Stathis:'Would any device that can create a representation of the
> world, itself and the relationship between the world and itself be
> conscious?'
> 
> MP: Well that, in a nutshell, is how I understand it; with the
> proviso that it is dynamic: that all representations of all salient
> features and relationships are being updated sufficiently often to
> deal with all salient changes in the environment and self. In the
> natural world this occurs because all the creatures in the past
> who/which failed significantly in this respect got eaten by
> something that stalked its way in between the updates, or the
> creature in effect did not pay enough attention to its environment
> and in consequence lost out somehow in ever contributing to the
> continuation of its specie's gene pool.
> 
> Stathis [in another response to me in this thread]: 'You can't prove
> that a machine will be conscious in the same way you are.'
> 
> MP: Well, that depends what you mean;
> 
>1. to what extent does it matter what I can prove anyway?   
>2. exactly what or, rather, what range of sufficiently complex
>   systems are you referring to as 'machines';
>3. what do you mean by 'conscious in the same way you are'?;
> 
> I'm sure others can think of equally or more interesting questions
> than these, but I can respond to these.
> 
>1. I am sure I couldn't prove whether or not a machine was
>   conscious, but if the 'machine' was, and it was smart enough
>   and interested enough IT could, by engaging us in conversation
>   about its experiences, what it felt like to be what/who it is,
>   and questioning us about what it is like to be us.
>   Furthermore, as Colin Hales has pointed out, if the machine
>   was doing real science it would be pretty much conclusive that
>   it was conscious.
>2. By the word machine I could refer to many of the biological
>   entities that are significantly less complex than humans. What
>   ever one says in this respect, someone somewhere is going to
>   disagree, but I think maybe insects and the like could
>   quite reasonably be classed as sentient machines with near
>   Zombie status.
>3. If we accept a rough and ready type of physicalism, and
>   naturalism maybe the word I am looking for here, then it is
>   pretty much axiomatic that the consciousness of a
>   creature/machine will differ from mine in the same degree that
>   its body, instinctive behaviour, and environmental niche
>   differ from mine. I think this must be true of all sentient
>   entities. Some of the people I know are 'colour blind'; about
>   half the people I know are female; many of the people I know
>   exhibit quite substantial differences in temperament and
>   predispositions. I take it that these differences from me are
>   real and entail various real differences in the quality of
>   what it is like to be them [or rather their brain's updating
>   of the model of them in their worlds].
> 
> I am interested in birds [and here is meant the feathered
> variety] and often speculate about why they are doing what they
> do and what it may be like to be them. They have very small
> heads compared to mine so their brains can update their models
> of self in the world very much faster than mine can. This must
> mean that their perceptions of time and changes are very
> different. To them I must be a very slow and stupid seeming
> terrestrial giant. Also many birds can see by means of ultra
> violet light. This means that many things such as flowers and
> other birds will look very different compared to what I see.
> [Aside: I am psyching myself up slowly to start creating a
> flight simulator program that flies birds rather than aircraft.
> One of the challenges - by no mean the hardest though -  will be
> to represent UV reflectance in a meaningful way.]
> 
> 
> 1. If it behaved as if it were conscious *and* it did this using the 
> same sort of hardware as I am using (i.e. a human brain) then I would 
> agree that almost certainly it is conscious. If the hardware were on a 
> different substrate but a direct analogue of a human brain and the 
> result was a functionally equivalent machine then I would be almost as 
> confident, but if the configuration were completely different I would 
> not be confident that it was conscious and I would bet that at least it 
> was differently conscious. As for scientific research, I never managed 
> to understand why Colin thought this was

Re: ASSA and Many-Worlds

2007-02-20 Thread Bruno Marchal

Hi Hal,

You say my theory is a subset of yours. I don't understand. I have no 
theory, just a deductive argument that IF we are (digital) machines then 
"the physical world" is in our head. Then I show how a Universal Turing 
Machine can discover it in its own "head". This makes comp, or 
variants, testable.

I have no theory (beside theory of number and machine), I'm just 
listening to the machine. That's all. Then I compare the comp-physics 
with empirical physics.

Do you grasp the Universal Dovetailer Argument? Ask if not.

Regards,

Bruno



On 20-Feb-07, at 04:42, Hal Ruhl wrote:

>
> Hi Bruno:
>
> At 05:43 AM 2/19/2007, you wrote:
>
>
>> On 18-Feb-07, at 03:33, Hal Ruhl wrote:
>>
>>>
>>> Hi Bruno:
>>>
>>> In response I will start with some assumptions central to my 
>>> approach.
>>>
>>> The first has to do with the process of making a list.
>>>
>>> The assumption is:
>>>
>>> Making a list of items [which could be some of
>>> the elements of a set for example] is always a
>>> process of making a one to one mapping of the
>>> items to some of the counting numbers such as:
>>>
>>> 1 - an item
>>> 2 - an item not previously on the list
>>> 3 - an item not previously on the list
>>> .
>>> .
>>> .
>>> n - last item and it was not previously on the list
>>
>>
>> I don't see clearly an assumption here. I guess you are assuming
>> existence of things capable of being put in a list.
>
> What I am trying to do is establish what making a
> list is in my model and does it have any mathematical credence.
>
> I make it an assumption because some may believe
> that "make a list" means something different.
>
>> Effectively? then
>> why not use the Wi (cf Cutland's book or older explanations I have
>> provided on the list. Help yourself with Podniek's page perhaps, or 
>> try
>> to be just informal.
>>
>
> See below
>
>
>
>
>>>
>>> My second assumption is:
>>>
>>> Objects [such as states of universes for example] have properties.
>>
>>
>> You talk as if it were an axiomatic. A good test to see if it is an
>> axiomatic consists to change the primitive words you are using by
>> arbitrary words. You are saying "glass of bears have trees and 
>> garden".
>
> Did you mean class not "glass"?
>
>> You can add that you mean that the term "glass of bear" is *intended
>> for states of universes,
>
> I am not a mathematician so I do not quite understand the above.
>
>>  but recall the goal is to provide an
>> explanation for the appearance of the "states of universes".
>
> If I understand you, that comes later in the walk through of my model
>
>>  In general
>> properties are modelled by sets. It is ok to presuppose some naive 
>> set
>> theory, but then your "axiomatic" has to be clean.
>>
>
> See below
>
>
>
>>>
>>> My third assumption is:
>>>
>>> All of the properties it is possible for objects to have can be 
>>> listed.
>>
>>
>> I guess you assume Church's thesis, and you are talking about effective
>> properties.
>>
>
> To me at this point the Church Thesis is an
> ingredient in some of the possible state
> succession sequences allowed in my model.
>
> I mean all properties; I do not know if that is
> the same as your "effective" properties.
>
>
>>>
>>> My fourth assumption is:
>>>
>>> The list of possible properties of objects is countably infinite.
>>
>>
>> ? (lists are supposed to be countably infinite (or finite)).
>>
>
> This is my point above - "to list" is inherently a
> countably infinite [as max length] process.
>
> I would add that my third assumption becomes more
> important later as one of the keys to my model's dynamic.
>
>
>
>>>
>>> Conclusions so far:
>>> [All possible objects are defined by all the sub lists of the full
>>> list.]
>>> [The number of objects is uncountably infinite]
>>
>> What is the full list?
>
> The list of all possible properties of objects.
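
The step from a countably infinite list of properties to uncountably many
objects is standard Cantor: the sub lists are just the subsets of the list,
and

    |\mathcal{P}(\mathbb{N})| = 2^{\aleph_0} > \aleph_0 = |\mathbb{N}|

by the diagonal argument, so no single list could enumerate all the objects.
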
>
>
>>>
>>> I will stop there for now and await comments.
>>>
>>> As to the remainder of the post:
>>>
>>> In the above I have not reached the point of
>>> deriving the dynamic of my model but I am not
>>> focusing on computations when I say that any
>>> succession of states is allowed.  Logically
>>> related successions are allowed.  Successions
>>> displaying any degree of randomness are also allowed.
>>
>>
>> I have already mentioned that comp entails some strong form of (first
>> person) randomness. Indeed, a priori too much.
>>
>
> Yes we have discussed this before, and it is one
> of the reasons I continue to believe that your approach is a subset 
> of mine.
>
> I know it has taken a long time for me to reach a
> level in my model where I could even begin to use
> an axiom based description and I appreciate your patience.
>
>>>
>>> I would like to finish the walk through of my
>>> model before discussing white rabbits and observation.
>>
>>
>> I am really sorry Hal.  It looks like you want to be both informal and
>> formal. It does not help me to understand what you are trying to say.
>
> I have read that it takes 10 years of focused
> prac

Re: The Meaning of Life

2007-02-20 Thread Brent Meeker

Tom Caylor wrote:
> On Feb 19, 7:00 pm, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:
>> On 2/20/07, Tom Caylor <[EMAIL PROTECTED]> wrote:
>>
>>> On Feb 19, 4:00 pm, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:
 On 2/20/07, Tom Caylor <[EMAIL PROTECTED]> wrote:
> These are positivist questions.  This is your basic error in this
> whole post (and previous ones).  These questions are assuming that
> positivism is the right way of viewing everything, even ultimate
> meaning (at least when meaning is said to be based on God, but not
> when meaning is said to be based on ourselves).
> Tom
 Can you explain that a bit further? I can understand that personal
>>> meaning
 is not necessarily connected to empirical facts. The ancient Greeks
>>> believed
 in the gods of Olympus, built temples to them, wrote songs about them,
>>> and
 so on. They provided meaning to the Greeks, and had an overall positive
 effect on Greek society even though as a matter of fact there weren't
>>> any
 gods living on Mount Olympus. Just as long as we are clear about that.
 Stathis Papaioannou
>>> It is a given that whatever belief we have falls short of the set of
>>> all truth.  But here we are talking about different "theories" behind
>>> beliefs in general.  Positivism is one such "theory" or world view.
>>> This problematic type of world view in which positivism falls has also
>>> been referred to as "rationalism in a closed system".  In such a world
>>> view there is no ultimate meaning.  All meaning is a reference to
>>> something else which is in turn meaningless except for in reference to
>>> yet something else which is meaningless.  We can try to hide this
>>> problem by putting the end of the meaning dependency line inside each
>>> individual person's 1st person point of view.  At that point, if we
>>> claim that we still have a closed system, then we have to call the 1st
>>> person point of view meaningless.  Or, if we at that point allow an
>>> "open system", then we can say that the 1st person point of view has
>>> meaning which comes from where-we-know-not.  This is just as useless
>>> as the meaningless view (in terms of being meaningful ;).  This is all
>>> opposed to the world view which allows an ultimate source of meaning
>>> for persons.  If there were such an ultimate source of meaning for
>>> persons, then, even though our beliefs would fall short of the full
>>> truth of it, it makes sense that there would be some way of "seeing"
>>> or discovering the truth in a sort of progressive or growing process
>>> at the personal level.  Gotta go.
>>> Tom
>> I don't see how ultimate meaning is logically possible (if it is even
>> desirable, but that's another question). What is God's ultimate meaning? If
>> he gets away without one or has one from where-we-know-not then how is this
>> different to the case of the individual human? Saying God is infinite
>> doesn't help because we can still ask for the meaning of the whole infinite
>> series. Defining God as someone who *just has* ultimate meaning as one of
>> his attributes is a rehash of the ontological argument.
>>
>> Stathis Papaioannou
>>
> 
> Ultimate meaning is analogous to axioms or arithmetic truth (e.g. 42
> is not prime).  In fact the famous quote of Kronecker "God created the
> integers" makes this point.  I think Bruno takes arithmetic truth as
> his ultimate source of meaning.  If you ask the same positivist
> questions of arithmetic truth, you also have the same problem.  The
> problem lies in the positivist view that there can be no given truth.
> 
> Tom

I think you misstate the positivist view, which is that what we can directly 
perceive can be the referent of true statements.  But I take your point.  It is 
strictly parallel to the question of what is reality.  It seems pretty clear 
that we can't know what is real as opposed to what seems real to us, except for 
our own thoughts.  So some people deny there is any reality and say we're just 
making it all up in a dream (solipsism) or in a kind of joint dream (mysticism). 
Others suppose there is a reality but that it's completely unknowable.  Scientists 
generally suppose there is a reality, which we can never know with certainty, 
but some aspects of which we may know with varying degrees of confidence through 
inductive inference.  Some on this list suppose that we may be entities in a 
computer game and so can never know the really real reality of the programmer.  
Theists suppose there is a reality that cannot be known through perception but 
only through revelation (as if the programmer told his creations about the 
computer).  Some seize on the fact that we must know our own thoughts and 
conclude that reality must consist of observer-moments.

Brent Meeker

Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
On 2/20/07, Jesse Mazer <[EMAIL PROTECTED]> wrote:
>
>
> >I would bet on functionalism as the correct theory of mind for various
> >reasons, but I don't see that there is anything illogical about the possibility
> >that consciousness is substrate-dependent. Let's say that when you rub
> two
> >carbon atoms together they have a scratchy experience, whereas when you
> rub
> >two silicon atoms together they have a squirmy experience. This could
> just
> >be a mundane fact about the universe, no more mysterious than any other
> >basic physical fact.  What is illogical, however, is the "no causal
> effect"
> >criterion if this is called epiphenomenalism. If the effect is purely and
> >necessarily on first person experience, it's no less an effect; we might
> >not
> >notice if the carbon atoms were zombified, but the carbon atoms would
> >certainly notice. I think it all comes down to the deep-seated and very
> >obviously wrong idea that only third person empirical data is genuine
> >empirical data. It is a legitimate concern of science that data should be
> >verifiable and experiments repeatable, but it's taking it a bit far to
> >conclude from this that we are therefore all zombies.
> >
> >Stathis Papaioannou
>
> One major argument against the idea that qualia and/or consciousness could
> be substrate-dependent is what philosopher David Chalmers refers to as the
> "dancing qualia" and "fading qualia" arguments, which you can read more
> about at http://consc.net/papers/qualia.html . As a thought-experiment,
> imagine gradually replacing neurons in my brain with functionally
> identical
> devices whose physical construction was quite different from neurons
> (silicon chips emulating the input and output of the neurons they
> replaced,
> perhaps). If one believes that this substrate is associated with either
> different qualia or absent qualia, then as one gradually replaces more and
> more of my brain, there'll either have to be a sudden discontinuous change
> (and it seems implausible that the replacement of a single neuron would
> cause such a radical change) or else a gradual shift or fade-out of the
> qualia my brain experiences...but if I were noticing such a shift or
> fade-out, I would expect to be able to comment on it, and yet the
> assumption
> that the new parts are functionally identical means my behavior should be
> indistinguishable from what it would be if my neurons were left alone. And
> if we suppose that I might be having panicked thoughts about a change in
> my
> perceptions yet find that my voice and body are acting as if nothing is
> wrong, and there is no neural activity associated with these panicked
> thoughts, then there would have to be a radical disconnect between
> subjective experiences and physical activity in my brain, which would
> contradict the assumption of supervenience (see
> http://philosophy.uwaterloo.ca/MindDict/supervenience.html ) and lead to
> the
> possibility of radical mind/body disconnects like rocks and trees having
> complex thoughts and experiences that have nothing to do with any physical
> activity within them.
>
> Jesse
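
[Editorial aside: the premise doing the work in the argument above is exact
functional equivalence, and it can be made concrete. The toy sketch below is
my illustration - the unit functions and network are invented, not anything
from the original post - and it checks that replacing "neurons" one at a time
with differently implemented but input/output-identical substitutes leaves
the outward response to every stimulus unchanged:

import itertools

def biological_neuron(inputs):
    # Original unit: fires when at least two inputs are active.
    return 1 if sum(inputs) >= 2 else 0

def silicon_neuron(inputs):
    # Different "substrate": the same input/output mapping,
    # implemented by counting rather than summing.
    return int(list(inputs).count(1) >= 2)

def brain(stimulus, units):
    # A fixed miniature network: three units feed a fourth, whose
    # output is the externally observable behaviour.
    u1, u2, u3, u_out = units
    layer = [u1(stimulus), u2(stimulus[::-1]), u3(stimulus[1:] + stimulus[:1])]
    return u_out(layer)

stimuli = [[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)]
original = (biological_neuron,) * 4

# Every partial or total replacement behaves identically on every
# stimulus, so no behavioural test can reveal the substitution.
for units in itertools.product((biological_neuron, silicon_neuron), repeat=4):
    for s in stimuli:
        assert brain(s, units) == brain(s, original)
print("all", 2 ** 4, "replacement stages are behaviourally indistinguishable")

This only dramatizes the functional-equivalence premise, of course; whether
first-person qualia could nevertheless differ is exactly what is in dispute.]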


It's a persuasive argument, but I can think of a mechanism whereby your
qualia can fade away and you wouldn't notice. In some cases of cortical
blindness, in which the visual cortex is damaged but the rest of the visual
pathways intact, patients insist that they are not blind and come up with
explanations as to why they fall over and walk into things, e.g. they accuse
people of putting obstacles in their way while their back is turned. This
isn't just denial because it is specific to cortical lesions, not blindness
due to other reasons. If these patients had advanced cyborg implants they
could presumably convince the world, and be convinced themselves, that their
visual perception had not suffered when in fact they can't see a thing.
Perhaps gradual cyborgisation of the brain as per Hans Moravec would lead to
a similar, gradual fading of thoughts and perceptions; the external observer
would not notice any change and the subject would not notice any change
either, until he was dead, replaced by a zombie.

Stathis Papaioannou

Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
On 2/20/07, Mark Peaty <[EMAIL PROTECTED]> wrote:

Stathis: 'Would any device that can create a representation of the world,
> itself and the relationship between the world and itself be conscious?'
>
> MP: Well that, in a nutshell, is how I understand it; with the proviso
> that it is dynamic: that all representations of all salient features and
> relationships are being updated sufficiently often to deal with all salient
> changes in the environment and self. In the natural world this occurs
> because all the creatures in the past who/which failed significantly in this
> respect got eaten by something that stalked its way in between the updates,
> or the creature in effect did not pay enough attention to its environment
> and in consequence lost out somehow in ever contributing to the continuation
> of its species' gene pool.
>
> Stathis [in another response to me in this thread]: 'You can't prove that
> a machine will be conscious in the same way you are.'
>
> MP: Well, that depends what you mean;
>
>1. to what extent does it matter what I can prove anyway?
>2. exactly what or, rather, what range of sufficiently complex
>systems are you referring to as 'machines';
>3. what do you mean by 'conscious in the same way you are'?;
>
> I'm sure others can think of equally or more interesting questions than
> these, but I can respond to these.
>
>    1. I am sure I couldn't prove whether or not a machine was
>    conscious, but if the 'machine' was, and it was smart enough and interested
>    enough, IT could, by engaging us in conversation about its experiences, what
>    it felt like to be what/who it is, and by questioning us about what it is like
>    to be us. Furthermore, as Colin Hales has pointed out, if the machine was
>doing real science it would be pretty much conclusive that it was 
> conscious.
>2. By the word machine I could refer to many of the biological
>    entities that are significantly less complex than humans. Whatever one says
>    in this respect, someone somewhere is going to disagree, but I think maybe
>    insects and the like could quite reasonably be classed as sentient
>machines with near Zombie status.
>3. If we accept a rough and ready type of physicalism, and
>    naturalism may be the word I am looking for here, then it is pretty much
>axiomatic that the consciousness of a creature/machine will differ from 
> mine
>in the same degree that its body, instinctive behaviour, and environmental
>niche differ from mine. I think this must be true of all sentient entities.
>Some of the people I know are 'colour blind'; about half the people I know
>are female; many of the people I know exhibit quite substantial differences
>in temperament and predispositions. I take it that these differences from 
> me
>are real and entail various real differences in the quality of what it is
>like to be them [or rather their brain's updating of the model of them in
>their worlds].
>
> I am interested in birds [and here is meant the feathered variety] and
> often speculate about why they are doing what they do and what it may be
> like to be them. They have very small heads compared to mine so their brains
> can update their models of self in the world very much faster than mine can.
> This must mean that their perceptions of time and changes are very
> different. To them I must be a very slow and stupid-seeming terrestrial
> giant. Also many birds can see by means of ultra violet light. This means
> that many things such as flowers and other birds will look very different
> compared to what I see. [Aside: I am psyching myself up slowly to start
> creating a flight simulator program that flies birds rather than aircraft.
> One of the challenges - by no means the hardest though - will be to
> represent UV reflectance in a meaningful way.]
>
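[Editorial aside on the UV reflectance point: one workable approach - my
sketch, not Mark's design; the function and blend weights are invented for
illustration - is false-colour mapping, folding a fourth UV channel into the
visible three so a human viewer can at least see where the UV contrast lies:

def uv_false_colour(uv, r, g, b, weight=0.5):
    # Shift a pixel toward magenta in proportion to its UV
    # reflectance. The weight is an arbitrary visualisation
    # choice, not bird physiology; channels are floats in [0, 1].
    clamp = lambda x: max(0.0, min(1.0, x))
    return (clamp(r + weight * uv),          # boost red
            clamp(g * (1.0 - weight * uv)),  # suppress green
            clamp(b + weight * uv))          # boost blue

# A petal that looks plain white to us (r = g = b = 0.9) but is
# strongly UV-reflective renders with a visible magenta cast.
print(uv_false_colour(uv=0.8, r=0.9, g=0.9, b=0.9))

Real plumage or petal data would of course need a UV-capable camera or a
spectrometer to supply the uv channel.]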
>
1. If it behaved as if it were conscious *and* it did this using the same
sort of hardware as I am using (i.e. a human brain) then I would agree that
almost certainly it is conscious. If the hardware were on a different
substrate but a direct analogue of a human brain and the result was a
functionally equivalent machine then I would be almost as confident, but if
the configuration were completely different I would not be confident that it
was conscious and I would bet that at least it was differently conscious. As
for scientific research, I never managed to understand why Colin thought
this was more than just a version of the Turing test.

2. I don't consider biological machines to be fundamentally different to
other machines.

3. Sure, different entities with (at least) functionally different brains
will be differently conscious. But I like to use "conscious in the way I am"
in order to avoid having to explain or define consciousness in general, or
my consciousness in particular. I can meaningfully talk about "seeing red"
to a blind person who has no idea what the experience is like: What
wavelen

Re: Searles' Fundamental Error

2007-02-20 Thread Stathis Papaioannou
On 2/20/07, John Mikes <[EMAIL PROTECTED]> wrote:

Stathis (barging in to your post to Mark);
> Your premise is redundant, a limited model (machine) cannot be (act,
> perform, sense, react etc.) identical to the total it was cut out from.  So
> you cannot prove it either. As i GOT the difference lately, so I would
> use 'simulated' instead of 'emulated' if I got it right. Even the 3rd p and
> as you restrict it: "observable" behavior is prone to MY 1st p.
> interpretation (distortion).
> "Of the brain"? if you extend it into "the tool of mental behavior" it
> refers to more than just the tissue-machine up to our today's level of
> knowledge. Penrose (though not a friendly correspondent) is smart (happens
> to Nobelist also) in assuming more than computable. His (if he used it
> really) "brain" must be that all inclusive total complexity of all related
> networks.


Whatever today's level of knowledge about it, the brain does what the brain
does inside the skull, no? If you remove the brain, then you remove the
consciousness. Also, Roger Penrose has not received a Nobel prize; neither
has Stephen Hawking.

What I really wanted to stress is your expression "purpose" in evolution. (I
> am not 'in' for the 'zombie craze' because a person without *anything*
> belonging to 'it' is not "the person"), but the *purpose* in conventional
> 'evolution-talk' points to the ID camouflage of creationism. Evolutionary
> mutation does not occur 'in order to' better sustainability (a purpose) -
> rather 'because of' - in variations induced by the changes in the totality
> (an entailment).


I agree in general, we have to be careful when using words such as "purpose"
when discussing evolution. We can talk about evolution "choosing" animals
with heavier coats when the climate gets colder, because the heavier coats
serve a "purpose" in increasing the animal's survival advantage. Of course,
this is just convenient talk: evolution is blind and stupid.

How intensely some change may influence 'us' is still my terra incognita to
> be explored.
> (In my 'evolution' term, i.e. the history of a universe from occurring from
> the plenitude all the way to re-smoothening into it, I include a 'purpose': to
> facilitate such re-smoothening from the incipient unavoidable
> complexity-formation from the plenitude's infinite invariant symmetry - see
> my 'Multiverse-narrative').
>
> John


I often find your posts difficult to understand, John, although that puts
you in good company :)

On 2/18/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> > On 2/18/07, Mark Peaty <[EMAIL PROTECTED]> wrote:
> >
> > My main problem with Comp is that it needs several unprovable
> > > assumptions to be accepted. For example the Yes Doctor hypothesis, wherein
> > > it is assumed that it must be possible to digitally emulate some or all 
> > > of a
> > > person's body/brain function and the person will not notice any 
> > > difference.
> > > The Yes Doctor hypothesis is a particular case of the digital emulation
> > > hypothesis in which it is asserted that, basically, ANYTHING can be
> > > digitally emulated if one had enough computational resources available. As
> > > this seems to me to be almost a version of Comp [at least as far as I have
> > > got with reading Bruno's exposition] then from my simple minded 
> > > perspective
> > > it looks rather like assuming the very thing that needs to be 
> > > demonstrated.
> > >
> >
> > You can't prove that a machine will be conscious in the same way you
> > are. There is good reason to believe that the third person observable
> > behaviour of the brain can be emulated, because the brain is just chemical
> > reactions and chemistry is a well-understood field. (Roger Penrose believes
> > that something fundamentally non-computable may be happening in the brain
> > but he is almost on his own in this view.) However, it is possible that the
> > actual chemical reactions are needed for consciousness, and a computer
> > emulation would be a philosophical zombie. I think it is very unlikely that
> > something as elaborate as consciousness could have developed with no
> > evolutionary purpose (evolution cannot distinguish between me and my zombie
> > twin if zombies are possible), but it is a logical possibility.
> >
> > Stathis Papaioannou
> >
>
Stathis Papaioannou

Re: The Meaning of Life

2007-02-20 Thread Stathis Papaioannou
On 2/20/07, Tom Caylor <[EMAIL PROTECTED]> wrote:
>
>
> On Feb 19, 7:00 pm, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:
> > On 2/20/07, Tom Caylor <[EMAIL PROTECTED]> wrote:
> >
> > > On Feb 19, 4:00 pm, "Stathis Papaioannou" <[EMAIL PROTECTED]> wrote:
> > > > On 2/20/07, Tom Caylor <[EMAIL PROTECTED]> wrote:
> >
> > > > > These are positivist questions.  This is your basic error in this
> > > > > whole post (and previous ones).  These questions are assuming that
> > > > > positivism is the right way of viewing everything, even ultimate
> > > > > meaning (at least when meaning is said to be based on God, but not
> > > > > when meaning is said to be based on ourselves).
> >
> > > > > Tom
> >
> > > > Can you explain that a bit further? I can understand that personal
> > > meaning
> > > > is not necessarily connected to empirical facts. The ancient Greeks
> > > believed
> > > > in the gods of Olympus, built temples to them, wrote songs about
> them,
> > > and
> > > > so on. They provided meaning to the Greeks, and had an overall
> positive
> > > > effect on Greek society even though as a matter of fact there
> weren't
> > > any
> > > > gods living on Mount Olympus. Just as long as we are clear about
> that.
> >
> > > > Stathis Papaioannou
> >
> > > It is a given that whatever belief we have falls short of the set of
> > > all truth.  But here we are talking about different "theories" behind
> > > beliefs in general.  Positivism is one such "theory" or world view.
> > > This problematic type of world view in which positivism falls has also
> > > been referred to as "rationalism in a closed system".  In such a world
> > > view there is no ultimate meaning.  All meaning is a reference to
> > > something else which is in turn meaningless except for in reference to
> > > yet something else which is meaningless.  We can try to hide this
> > > problem by putting the end of the meaning dependency line inside each
> > > individual person's 1st person point of view.  At that point, if we
> > > claim that we still have a closed system, then we have to call the 1st
> > > person point of view meaningless.  Or, if we at that point allow an
> > > "open system", then we can say that the 1st person point of view has
> > > meaning which comes from where-we-know-not.  This is just as useless
> > > as the meaningless view (in terms of being meaningful ;).  This is all
> > > opposed to the world view which allows an ultimate source of meaning
> > > for persons.  If there were such an ultimate source of meaning for
> > > persons, then, even though our beliefs would fall short of the full
> > > truth of it, it makes sense that there would be some way of "seeing"
> > > or discovering the truth in a sort of progressive or growing process
> > > at the personal level.  Gotta go.
> >
> > > Tom
> >
> > I don't see how ultimate meaning is logically possible (if it is even
> > desirable, but that's another question). What is God's ultimate meaning?
> If
> > he gets away without one or has one from where-we-know-not then how is
> this
> > different to the case of the individual human? Saying God is infinite
> > doesn't help because we can still ask for the meaning of the whole
> infinite
> > series. Defining God as someone who *just has* ultimate meaning as one
> of
> > his attributes is a rehash of the ontological argument.
> >
> > Stathis Papaioannou
> >
>
> Ultimate meaning is analogous to axioms or arithmetic truth (e.g. 42
> is not prime).  In fact the famous quote of Kronecker "God created the
> integers" makes this point.  I think Bruno takes arithmetic truth as
> his ultimate source of meaning.  If you ask the same positivist
> questions of arithmetic truth, you also have the same problem.  The
> problem lies in the positivist view that there can be no given truth.
>
> Tom


This is indeed related to the ontological argument, first formulated by
Anselm of Canterbury in the 11th century: We say that God is a being than
which nothing more perfect can be imagined. If God did not exist, then we
can imagine an entity just like God, but with the additional attribute of
existence - which is absurd, because we would then be imagining something
more perfect than that than which nothing more perfect can be imagined.
Therefore God, the most perfect being imaginable, must necessarily have
existence as one of his attributes. Versions of the argument from first
cause and the argument from design also reduce to the ontological argument,
answering the question "who made God?" with the assertion that God exists
necessarily, with no need for the creator/designer (or, you might add,
external source of meaning) that the merely contingent things in the
universe need.

The problem with defining God in this way as something which necessarily
exists is that you can use the same trick to conjure up anything you like:
an "existent pink elephant" can't be non-existent any more than a bachelor
can be married. This objection pales a little if we admit that