Re: Consciousness is information?

2009-06-02 Thread Kelly Harmon

On Sun, May 31, 2009 at 1:46 PM, Bruno Marchal  wrote:
>
>>
>> BUT, if there is significant suffering likely in the worlds where I
lose, I might very well focus on making a choice that will minimize that
>> suffering.  In which case I will generally not base much of my
>> decision on the "probabilities", since it is my view that all outcomes
>> occur.
>
> ?

For example, if my main concern is to avoid suffering, I might only
make small bets, even in situations with very high odds of success.
In this way I avoid the pain of losing a lot of money in the few
"unlikely" worlds, though at the cost of forfeiting some gains in the
many worlds where the odds come in.

The single-world equivalent is just being very risk averse, I suppose.
But the motivation is different.  In the single-world view, if I'm
risk averse I just don't want to take the risk of losing a lot of
money, even when given very good odds.  In the many-world view, I know
that a future version of me is going to lose, and I want to minimize
the consequences of that loss even at the expense of limiting the
gains for the winning future-Kellys.

So the idea that I might bet more when given better odds wouldn't hold
in this case, because I know that betting more causes more
suffering for the few but inevitable losing Kellys.

And I can imagine other types of scenarios where I would bet on a
lower probability outcome, if such a bet had less severe consequences
in the case of a loss.

Though it also has to be considered that, at the moment you place your
bet, branching may occur, resulting in different bets being placed.
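
As a toy illustration (my sketch, with made-up numbers): assume an 80%
chance of winning, a linear payoff, and a "suffering" penalty that grows
quadratically with the amount lost.  An expected-value maximizer bets
the largest stake, while the suffering-minimizing rule described above,
which weighs the losing branches no matter how small their measure,
bets small.

# Toy model: choosing a stake under branching.  The quadratic penalty
# is my assumption, standing in for "losses hurt faster than gains pay".
p_win = 0.8
stakes = [1, 10, 100]

def expected_value(stake):
    # Conventional criterion: maximize expected gain.
    return p_win * stake - (1 - p_win) * stake

def losing_branch_suffering(stake):
    # Many-worlds criterion: some future Kelly always loses, so weigh
    # the losing branch's pain directly.
    return stake ** 2

for s in stakes:
    print(f"stake={s:3d}  EV={expected_value(s):6.1f}  "
          f"suffering if lost={losing_branch_suffering(s):7d}")

The EV column grows with the stake, so the conventional rule bets big;
the suffering rule accepts a smaller expected gain to keep the
guaranteed losing branch bearable.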


> First, in the multiplication experience, the question of your choice
> is not addressed, nor needed.
> The question is really: what will happen to you. You give the right
> answer above.
>

You're saying that there are no low probability worlds?  Or only that
they're outnumbered by the high probability worlds?

I guess I'm not clear on what you're getting at with this pixel
thought-experiment.


> Have you understood UDA1-6?  Because I think most get those steps. I
> will soon explain in all details UDA-7, which is not entirely obvious.
> If you take your own philosophy seriously, you don't need UDA8. But it
> can be useful to convince others of the necessity of that
> "philosophy", once we bet on the comp hyp.
>

I think I have a good grasp of 1 through 6.




Re: Consciousness is information?

2009-05-29 Thread Kelly Harmon

On Thu, May 28, 2009 at 3:49 PM, Bruno Marchal  wrote:
>
> What do you think is the more probable event that you will live?
> Which one is the more probable? What is your most rational choice among

So if nothing is riding on the outcome of my choice, then it seems
rational to choose the option that will make me right in the most
futures, which is option 6, white noise.  If there's one world for
each unique pattern of pixels, then most of those worlds will be
"white noise" worlds, and making the choice that makes me right in the
most worlds seems as rational as anything else.
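
The counting argument can be made concrete with a crude sketch (my
illustration, using compressibility as a stand-in for "structured"):
among all 2^N patterns of N black-and-white pixels, the overwhelming
majority don't compress at all, i.e., they are noise.

import random
import zlib

# Crude proxy (my assumption): a pattern is "structured" if it
# compresses well, and "white noise" if it doesn't.
random.seed(0)
N_BITS = 8000          # an 80x100 black-and-white screen
TRIALS = 1000

noisy = 0
for _ in range(TRIALS):
    bits = random.getrandbits(N_BITS).to_bytes(N_BITS // 8, "big")
    if len(zlib.compress(bits)) > 0.95 * len(bits):
        noisy += 1

# Nearly every uniformly sampled pattern fails to compress, so if each
# pattern gets one world, the "white noise" worlds dominate the count.
print(f"{noisy}/{TRIALS} random patterns are incompressible")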

Though, if there is something significant riding on whether I choose
correctly or not, then I have to decide what is most important to me:
minimizing my suffering in the worlds where I'm wrong, or maximizing
my gains in the worlds where I'm right.

If there isn't significant suffering likely in the losing worlds, then
I will be much more likely to base my decision on the observed or
calculated probabilities, as Papineau suggests.

BUT, if there is significant suffering likely in the worlds where I
lose, I might very well focus on making a choice that will minimize that
suffering.  In which case I will generally not base much of my
decision on the "probabilities", since it is my view that all outcomes
occur.

However, going a little further, this assumes that I only make one
bet.  As I mentioned before, I think that I will make all possible
bets.  So, even if I make the "safe" suffering-minimizing bet in this
branch, I know that in a closely related branch I will make the risky
"gain-maximizing" bet and say to hell with the Kellys in the losing
worlds.

So I know that even if I make the safe bet, there's another Kelly two
worlds over making the risky bet, which will result in a Kelly
suffering the consequences of losing over there anyway.  So maybe I'll
say, "screw it", and make the risky bet myself.

Ultimately, it doesn't matter.  Every Kelly in every situation with
every history is actualized.  So my subjective feeling that I am
making choices is irrelevant.  Every choice is going to get made, so
my "choice" is really just me taking my place in the continuum of
Kellys.


> And I am asking you, here and now, what do you expect to be the most
> probable experience you will feel tomorrow, when I do that
> experiment.

So to speak of expectations is to appeal to my "single world"
intuitions.  But we know that intuition isn't a reliable guide, since
there are many aspects of reality that are unintuitive.  So I think
the fact that I have an intuitive expectation that things will happen
a certain way, and only that way, is neither here nor there.




Re: Consciousness is information?

2009-05-28 Thread Kelly Harmon

On Wed, May 27, 2009 at 10:21 AM, Bruno Marchal  wrote:
>
> Since you told me that you accept comp, after all, and no longer
> oppose it to your view, I think we agree, at least on many things.
> Indeed you agree with the hypothesis, and your philosophy appears to
> be a consequence of the hypothesis.

Excellent!


> It remains possible that we have a disagreement concerning the
> probability, and this has some importance, because it is the use of
> probability (or credibility) which makes the consequences of comp
> testable. More in the comment below.

So my only problem with the usual view of probability is that it
doesn't seem to me to emerge naturally from a platonic theory of
conscious.  Is your proposal something that would conceivably be
arrived at by a rational observer in one of the (supposedly) rare
worlds where white rabbits are common?   Does it have features that
would lead one to predict the absence of white rabbits, or does it
just offer a way to explain their absence after the fact?

As I mentioned before, assuming computationalism it seems to me that
it is theoretically possible to create a computer simulation that
would manifest any imaginable conscious entity observing any
imaginable "world", including schizophrenic beings observing
psychedelic realities.  So, then further assuming Platonism, all of
these strange experiences should exist in Platonia.  Along with all
possible normal experiences.

I don't see any obvious, non-"ad hoc" mechanism to eliminate or
minimize strange experiences relative to normal experiences, and I
don't think adding one is justified just for that purpose, or even
necessary, since an unconstrained platonic theory does have the obvious
virtue of saying that there will always be Kellys like myself who have
never seen white rabbits.

As for your earlier questions about how you should bet, I have two responses.

First that there exists a Bruno who will make every possible bet.
One particular Bruno will make his bet on a whim, while another Bruno
will do so only after long consideration, and yet another will make a
wild bet in a fit of madness.  Each Bruno will "feel" like he made a
choice, but actually all possible Brunos exist, so all possible bets
are made, for all possible subjectively "felt" reasons.

Second, and probably more helpfully, I'll quote this paper
(http://www.kcl.ac.uk/content/1/c6/04/17/78/manymindsandprobs.doc) by
David Papineau, which sounds reasonable to me:

"But many minds theorists can respond that the logic of statistical
inference is just the same on their view as on the conventional view.
True, on their view in any repeated trial all the different possible
sequences of results can be observed, and so some attempts to infer
the probability from the observed frequency will get it wrong.  Still,
any particular mind observing any one of these sequences will reason
just as the conventional view would recommend:  note the frequency,
infer that the probability is close to the frequency, and hope that
you are not the unlucky victim of an improbable sample.  Of course the
logic of this kind of statistical inference is itself a matter of
active philosophical controversy.  But it will be just the same
inference on both the many minds and the conventional view.

[...]

It is worth observing that, on the conventional view, what agents want
from their choices are the desired results, rather than that these
results be objectively probable (a choice that makes the results
objectively probable, but unluckily doesn't produce them, doesn't give
you what you want).  Given this, there is room to raise the question:
why are rational agents well-advised to choose actions that make their
desired results objectively probable?  Rather surprisingly, there is
no good answer to this question.  (After all, you can't assume you
will get what you want if you so choose.)  From Peirce on, philosophers have
been forced to conclude that it is simply a primitive fact about
rational choice that you ought to weight future possibilities
according to known objective probabilities in making decisions.

The many minds view simply says the same thing.  Rational agents ought
to choose those actions which will maximize the known objective
probability of desired results.  As to why they ought to do this,
there is no further explanation.  This is simply a basic truth about
rational choice.

[...]

I suspect that this basic truth actually makes more sense on the many
minds view than on the conventional view.  For on the conventional
view there is a puzzle about the relation between this truth and the
further thought that ultimate success in action depends on desired
results actually occurring.  On the many minds view, by contrast,
there is no such further thought, since all possible results occur,
desired and undesired, and so no puzzle:  in effect there is only one
criterion of success in action, namely, maximizing the known objective
probability of desired results.  However, this is really th
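
Papineau's statistical point, that almost nothing changes about
frequency-based inference, is easy to check numerically.  A minimal
sketch of my own (not from his paper): enumerate every branch of n
repeated trials and weigh each observing "mind" by the usual branch
probability.

from math import comb

p, n, eps = 0.7, 20, 0.1

# Each of the 2**n outcome histories is one "mind"; its measure is the
# ordinary branch probability p**k * (1-p)**(n-k) for k successes.
weight_near = sum(
    comb(n, k) * p**k * (1 - p)**(n - k)
    for k in range(n + 1)
    if abs(k / n - p) <= eps      # this mind's estimate lands near p
)

# Most of the measure sits on minds whose observed frequency is close
# to p, so "note the frequency, infer the probability" works the same
# way as on the conventional single-world view.
print(f"measure of minds within {eps} of p: {weight_near:.3f}")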

Re: Consciousness is information?

2009-05-27 Thread Kelly Harmon

On Mon, May 25, 2009 at 11:21 AM, Bruno Marchal  wrote:
>
>
> Actually I still have no clue of what you mean by "information".

Well, I don't think I can say it much better than I did before:

In my view, there are ungrounded abstract symbols that acquire
meaning via constraints placed on them by their relationships to other
symbols.  The only "grounding" comes from the conscious experience
that is intrinsic to a particular set of relationships.  To repeat my
earlier Chalmers quote, "Experience is information from the inside;
physics is information from the outside."  It is this subjective
experience of information that provides meaning to the otherwise
completely abstract "platonic" symbols.

Going a little further:  I would say that the relationships between
the symbols that make up a particular mental state have some sort of
consistency, some regularity, some syntax - so that when these
syntactical relationships are combined with the symbols it does make
up some sort of descriptive language.  A language that is used to
describe a state of mind.  Here we're well into the realm of semiotics
I think.

To come back to our disagreement, what is it that a Turing machine
does that results in consciousness?  It would seem to me that
ultimately what a Turing machine does is manipulate symbols according
to specific rules.  But is it the process of manipulating the symbols
that produces consciousness?  OR is it the state of the symbols and
their relationships with each other AFTER the manipulation which
really accounts for consciousness?

I say the latter.  You seem to be saying the former...or maybe you're
saying it's both?

As I've mentioned, I think that the symbols which combine to create a
mental state can be manipulated in MANY ways.  And algorithms just
serve as descriptions of these ways.  But subjective consciousness is
in the states, not in how the states are manipulated.
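
To make the distinction concrete, here is a minimal Turing-machine
sketch of my own (a toy unary incrementer) that records the tape after
every step.  On my view it is this recorded sequence of states that
matters, not the stepping that generated it.

# Transition table: (state, symbol) -> (write, move, next_state).
RULES = {
    ("scan", 1): (1, +1, "scan"),   # move right across the 1s
    ("scan", 0): (1, +1, "halt"),   # append a 1, then halt
}

tape, head, state = {0: 1, 1: 1, 2: 1}, 0, "scan"   # tape encodes "3"
history = []

while state != "halt":
    write, move, state = RULES[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    history.append(dict(tape))      # snapshot the tape after each step

# The run produced this sequence of states, but the same list could be
# written down directly, with no machine executing anything at all.
for i, snapshot in enumerate(history):
    print(i, sorted(snapshot.items()))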


> With different probabilities. That is why we are partially responsible
> for our future. This motivates education and learning, and commenting
> posts ...

In my view, life is just something that we experience.  That's it.
There's nothing more to life than subjective experience.  The feeling
of being an active participant, of making decisions, of planning, of
choosing, is only that:  a feeling.  A type of qualia.

Okay, it's past my bedtime, I'll do probability tomorrow!




Re: Consciousness is information?

2009-05-24 Thread Kelly Harmon

On Sun, May 24, 2009 at 1:54 AM, Bruno Marchal  wrote:
>
> May be you could study the UDA, and directly tell me at which step
> your "theory" departs from the comp hyp.

Okay, I read over your SANE2004 paper again.

From step 1 of UDA:

"The scanned (read) information is send by traditional means, by mails
or radio waves for instance, at Helsinki, where you are correctly
reconstituted with ambient organic material."

Okay, so this information that is sent by traditional means is, I
think, really where consciousness lives.  Though not literally in the
physical instantiation of the information.  For instance, if you were
to print out that information in some format, I would NOT point to the
large pile of ink-stained paper and say that it was conscious.  But I
would say that the information represented by that pile of ink
and paper "represents", or "identifies", or "points to" a single
instant of consciousness.

So, what is the information?  Well, let's say the data you're
transmitting is from a neural scan and consists of a bunch of numbers
indicating neural connection weights, chemical concentrations,
molecular positions and states, or whatever.  I wouldn't even say that
this information is the information that is conscious.  Instead this
information is ultimately an encoding (via the particular way that the
brain stores information) of the symbols and the relationships between
those symbols that represent your knowledge, beliefs, and memories
(all of the information that makes you who you are).  (Echoes here of
the Latent Semantic Analysis (LSA) stuff that I referenced before)
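
A small sketch of what I mean by the same relational structure
surviving a change of encoding (my toy example): the physical bytes
differ completely, but they decode to one and the same set of
relationships.

import json
import pickle

# A toy "mind": symbols plus the relationships that constrain them.
relations = {"dog": ["animal", "pet"], "pet": ["kept-by-human"]}

as_json = json.dumps(relations).encode()   # one physical encoding
as_pickle = pickle.dumps(relations)        # a very different one

assert as_json != as_pickle                # distinct byte patterns
assert json.loads(as_json) == pickle.loads(as_pickle) == relations
# Two unlike "brains" (byte strings), one information state.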


From step 8 of UDA:

"Instead of linking [the pain I feel] at space-time (x,t) to [a
machine state] at space-time (x,t), we are obliged to associate [the
pain I feel at space-time (x,t)] to a type or a sheaf of computations
(existing forever in the arithmetical Platonia which is accepted as
existing independently of our selves with arithmetical realism)."

So instead I would write this as:

"Instead of linking [the pain I feel] at space-time (x,t) to [a
machine state] at space-time (x,t), we are obliged to associate [the
pain I feel at space-time (x,t)] to an [informational state] existing
forever in Platonia which is accepted as existing independently of
ourselves."


> You have to see that, personally, I don't have a theory other than the
> assumption that the brain is emulable by a Turing machine

I also believe that, but I think that consciousness is in the
information represented by the discrete states of the data stored on
the Turing machine's tape after each instruction is executed, NOT in
the actual execution of the Turing machine.  The instruction table of
the Turing machine just describes one possible way that a particular
sequence of information states could be produced.

Execution of the instructions in the action table actually doesn't do
anything with respect to the production of consciousness.  The output
informational states represented by the data on the tape exist platonically
even if the Turing machine program is never run.  And therefore the
consciousness that goes with those states also exists platonically,
even if the Turing machine program is never run.


> OK. So, now, Kelly, just to understand what you mean by your theory, I
> have to ask you what your theory predicts in case of self-
> multiplication.

Well, first I'd say there aren't copies of identical information in
Platonia.  All perceived physical representations actually point
to (similarly to a C-style pointer in programming) the same
platonically existing information state.  So if there are 1000
identical copies of me in identical mental states, they are really
just representations of the same "source" information state.

Piles of atoms aren't conscious.  Information is conscious.  1000
identically arranged piles of atoms still represent only a single
information state (setting aside Putnam mapping issues).  The
information state is conscious, not the piles of atoms.

However, once their experiences diverge so that they are no longer
identical, then they are totally separate and they represent (or point
to) separate, non-overlapping conscious information states.
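
The pointer analogy can be sketched in a few lines (my illustration):
many distinct physical copies, one shared canonical state, and
separation only once the contents differ.

# Canonicalize "mental states" so identical contents share one object,
# the way many physical copies would point at one platonic state.
canon = {}

def state_for(contents: frozenset):
    # Return the single shared object for these contents (interning).
    return canon.setdefault(contents, contents)

kelly_1 = state_for(frozenset({"beach", "warm", "salt air"}))
kelly_2 = state_for(frozenset({"beach", "warm", "salt air"}))
assert kelly_1 is kelly_2        # 1000 copies, one information state

kelly_3 = state_for(frozenset({"beach", "warm", "seagull!"}))
assert kelly_3 is not kelly_1    # diverged experience, separate state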


> To see where those probabilities come from, you have to
> understand that 1) you can be multiplied (that is read, copy (cut) and
> pasted in Washington AND Moscow (say)), and 2) you are multiplied (by
> 2^aleph_zero, at each instant, with a comp definition of instant not
> related in principle with any form of physical time).

Well, probability is a tricky subject, right?

An interesting quote:

"Whereas the interpretation of quantum mechanics has only been
puzzling us for ~75 years, the interpretation of probability has been
doing so for more than 300 years [16, 17]. Poincare [18] (p. 186)
described probability as "an obscure instinct". In the century that
has elapsed since then philosophers have worked hard to lessen the
obscurity. However, the result has not been to arrive at

Re: Consciousness is information?

2009-05-23 Thread Kelly Harmon

On Sat, May 23, 2009 at 8:47 AM, Bruno Marchal  wrote:
>
>
>> To repeat my
>> earlier Chalmers quote, "Experience is information from the inside;
>> physics is information from the outside."  It is this subjective
>> experience of information that provides meaning to the otherwise
>> completely abstract "platonic" symbols.
>
>
> I insist on this well before Chalmers. We are agreeing on this.
> But then you associate consciousness with the experience of information.
> This is what I told you. I can understand the relation between
> consciousness and information content.

Information.  Information content.  Hmmm.  Well, I'm not entirely
sure what you're saying here.  Maybe I don't have a problem with this,
but maybe I do.  Maybe we're really saying the same thing here, but
maybe we're not.  Hm.


>> Note that I don't have Bruno's fear of white rabbits.
>
> Then you disagree with all readers of David Lewis, including David
> Lewis himself, who recognizes this inflation of too many realities as a
> weakness of his modal realism. My point is that the comp constraints
> lead to a solution of that problem, indeed a solution close to the
> quantum Everett solution. But the existence of white rabbits, and thus
> the correctness of comp remains to be tested.

True, Lewis apparently saw it as a cost, BUT not so high a cost as to
abandon modal realism.  I don't even see it as a high cost, I see it
as a logical consequence.  Again, it's easy to imagine a computer
simulation/virtual reality in which a conscious observer would see
disembodied talking heads and flying pigs.  So it certainly seems
possible for a conscious being to be in a state of observing an
unattached talking head.

Given that it's possible, why wouldn't it be actual?

The only reason to think that it wouldn't be actual is that our
external objectively existing physical universe doesn't have physical
laws that can lead easily to the existence of such talking heads to be
observed.  But once you've abandoned the external universe and
embraced platonism, then where does the constraint against observing
talking heads come from?

Assuming platonism, I can explain why "I" don't see talking heads:
because every possible Kelly is realized, and that includes a Kelly
who doesn't observe disembodied talking heads and who doesn't know
anyone who has ever seen such a head.

So given that my observations aren't in conflict with my theory, I
don't see a problem.  The fact that nothing that I could observe would
ever conflict with my theory is also not particularly troubling to me
because I didn't arrive at my theory as means of explaining any
particular observed fact about the external universe.

My theory isn't intended to explain the contingent details of what I
observe.  It's intended to explain the fact THAT I subjectively
observe anything at all.

Given that it seems theoretically possible to create a computer
simulation that would manifest any imaginable conscious being
observing any imaginable "world", including schizophrenic beings
observing psychedelic realities, I don't see why you are trying to
constrain the platonic realities that can be experienced to those that
are extremely similar to ours.


> It is just a question of testing a theory. You seem to say something
> like "if the theory predict that water under fire will typically boil,
> and that experience does not confirm that typicality (water froze
> regularly) then it means we are just very unlucky". But then all
> theories are correct.

I say there is no water.  There is just our subjective experience of
observing water.  Trying to constrain a Platonic theory of
consciousness so that it matches a particular observed physical
reality seems like a mistake to me.

Is there a limit to what we could experience in a computer simulated
reality?  If not, why would there be a limit to what we could
experience in Platonia?


>> The double-aspect principle stems from the observation that there is a
>> direct isomorphism between certain physically embodied information
>> spaces and certain phenomenal (or experiential) information spaces.
>
> This can be shown false in Quantum theory without collapse, and more
> easily with the comp assumption.
> No problem if you tell me that you reject both Everett and comp.
> Chalmers seems in some place to accept both Everett and comp, indeed.
> He explains to me that he stops at step 3. He believes that after a
> duplication you feel to be simultaneously at the both place, even
> assuming comp. I think and can argue that this is non sense. Nobody
> defends this on the list. Are you defending an idea like that?

I included the Chalmers quote because I think it provides a good image
of how abstract information seems to supervene on physical systems.
BUT by quoting the passage I'm not saying that I think that this
appearance of supervenience is the source of consciousness.  I still
buy into the Putnam mapping view that there is no 1-to-1 mapping from
information or com

Re: Consciousness is information?

2009-05-23 Thread Kelly Harmon

Okay, below are three passages that I think give a good sense of what
I mean by "information" when I say that "consciousness is
information".  The first is from David Chalmers' "Facing up to the
Problem of Consciousness."  The second is from the SEP article on
"Semantic Conceptions of Information", and the third is from "Symbol
Grounding and Meaning:  A comparison of High-Dimensional and Embodied
Theories of Meaning", by Arthur Glenberg and David Robertson.

So I'm looking at these largely from a static, timeless, platonic
view.  In my view, there are ungrounded abstract symbols that acquire
meaning via constraints placed on them by their relationships to other
symbols.  The only "grounding" comes from the conscious experience
that is intrinsic to a particular set of relationships.  To repeat my
earlier Chalmers quote, "Experience is information from the inside;
physics is information from the outside."  It is this subjective
experience of information that provides meaning to the otherwise
completely abstract "platonic" symbols.

So I think that something like David Lewis' "modal realism" is true by
virtue of the fact that all possible sets of relationships are
realized in Platonia.

Note that I don't have Bruno's fear of white rabbits.  Assuming that
we are typical observers is fine as a starting point, and is a good
way to choose between otherwise equivalent explanations, but I don't
think it should hold a unilateral veto over our final conclusions.  If
the most reasonable explanation says that our observations aren't
especially typical, then so be it.  Not everyone can be typical.

I think the final passage from Glenberg and Robertson (from a paper
that actually argues against what's being described) gives the best
sense of what I have in mind, though obviously I'm extrapolating out
quite a bit from the ideas presented.

Okay, so the passages of interest:

--

David Chalmers:

The basic principle that I suggest centrally involves the notion of
information. I understand information in more or less the sense of
Shannon (1948). Where there is information, there are information
states embedded in an information space. An information space has a
basic structure of difference relations between its elements,
characterizing the ways in which different elements in a space are
similar or different, possibly in complex ways. An information space
is an abstract object, but following Shannon we can see information as
physically embodied when there is a space of distinct physical states,
the differences between which can be transmitted down some causal
pathway. The states that are transmitted can be seen as themselves
constituting an information space. To borrow a phrase from Bateson
(1972), physical information is a difference that makes a difference.

The double-aspect principle stems from the observation that there is a
direct isomorphism between certain physically embodied information
spaces and certain phenomenal (or experiential) information spaces.
From the same sort of observations that went into the principle of
structural coherence, we can note that the differences between
phenomenal states have a structure that corresponds directly to the
differences embedded in physical processes; in particular, to those
differences that make a difference down certain causal pathways
implicated in global availability and control. That is, we can find
the same abstract information space embedded in physical processing
and in conscious experience.

--

SEP:

Information cannot be dataless but, in the simplest case, it can
consist of a single datum.  A datum is reducible to just a lack of
uniformity (diaphora is the Greek word for “difference”), so a general
definition of a datum is:

The Diaphoric Definition of Data (DDD):

A datum is a putative fact regarding some difference or lack of
uniformity within some context.  [In particular data as diaphora de
dicto, that is, lack of uniformity between two symbols, for example
the letters A and B in the Latin alphabet.]

--

Glenberg and Robertson:

Meaning arises from the syntactic combination of abstract, amodal
symbols that are arbitrarily related to what they signify.  A new form
of the abstract symbol approach to meaning affords the opportunity to
examine its adequacy as a psychological theory of meaning.  This form
is represented by two theories of linguistic meaning (that is, the
meaning of words, sentences, and discourses), both of which take
advantage of the mathematics of high-dimensional spaces. The
Hyperspace Analogue to Language (HAL; Burgess & Lund, 1997) posits
that the meaning of a word is its vector representation in a space
based on 140,000 word–word co-occurrences. Latent Semantic Analysis
(LSA; Landauer & Dumais, 1997) posits that the meaning of a word is
its vector representation in a space with approximately 300 dimensions
derived from a space with many more dimensions. The vector elements
found in both theories are just the sort of abstract features that ar
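
As a toy version of the co-occurrence idea behind HAL and LSA (my
sketch, vastly smaller than the real 140,000-dimension spaces):
represent each word by the counts of the words that appear near it,
and compare meanings by cosine similarity.

from collections import Counter
from math import sqrt

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog").split()

def vector(word, window=2):
    # A word's "meaning" is the counts of words seen within the window.
    v = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    v[corpus[j]] += 1
    return v

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# "cat" and "dog" occur in similar contexts, so their vectors are
# closer to each other than either is to a function word like "on".
print(cosine(vector("cat"), vector("dog")))
print(cosine(vector("cat"), vector("on")))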

Re: Consciousness is information?

2009-05-19 Thread Kelly Harmon

On Mon, May 18, 2009 at 6:36 AM, Bruno Marchal  wrote:
>
> I agree with your critic of "consciousness = information". This is "not
> even wrong",

Ouch!  Et tu, Bruno???


> and Kelly should define what he means by "information" so
> that we could see what he really means.

Okay, okay!  I was hoping it wouldn't come to this, but you've backed
me into a corner.  (ha!)

I'll come up with a definition and post it asap.




Re: Consciousness is information?

2009-05-19 Thread Kelly Harmon

On Mon, May 18, 2009 at 12:30 AM, Brent Meeker  wrote:
>
> On the contrary, I think it does.  First, I think Chalmers' idea that
> vitalists recognized that all that needed explaining was structure and
> function is revisionist history.  They were looking for the animating
> spirit.  It is in hindsight, having found the function and structure,
> that we've realized that was all the explanation available.

Hmmm.  I'm not familiar enough with the history of this to argue one
way or the other.  A quick read through the Wikipedia article on
vitalism, and some light googling, left me with the impression that
most of the argument centered around function.  And also the
difference between organic and inorganic chemical compounds.

Though to the extent that there was something being debated beyond
structure and function, I think that Chalmers makes a good point here:

> There is not even a plausible candidate for a further sort of property of
> life that needs explaining (leaving aside consciousness itself), and
> indeed there never was.

I'm highlighting the parenthetical "leaving aside consciousness itself".

SO.  Dennett makes one claim.  Chalmers makes what I thought was a
pretty good rebuttal.  I've never seen a counter-response from Dennett
on this point, and it's not a historical topic that I know much about.
Do you have some special expertise, or a good source that overturns
Chalmers' rebuttal?

Though, comparing what people thought about an entirely different
topic 150 years ago to this topic now seems like a clever debating
point, but otherwise of iffy relevance.


> We will eventually
> be able to make robots that behave as humans do and we will infer, from
> their behavior, that they are conscious.

What about robots (or non-embodied computer programs) that are equally
complex but (for whatever design reasons) don't exhibit any
"human-like" behaviors?  Will we "infer" that they are conscious?  How
will we know which types of complex systems are conscious and which
aren't?  What is the marker?

We'll just "know it when we see it"?  If so, it's only because we have
definite knowledge of our own conscious experience, and we're looking
for behaviors that we can "empathize" with.  But is empathy reliable?
It's certainly exploitable...Kismet for example.  So it can generate
false positives, but what might it also miss?


> And we, being their designers,
> will be able to analyze them and say, "Here's what makes R2D2 have
> conscious experiences of visual perception and here's what makes 3CPO
> have self awareness relative to humans."

I would agree that we could say something definite about the
functional aspects, but not about any experiential aspects.  Those
would have to be taken on faith.  For all we know, R2D2 might have a
case of blindsight AND Anton-Babinski syndrome...in which case he
would react to visual data but have no conscious experience of what he
saw (blindsight), BUT would claim that he did experience it
(Anton-Babinski)!


> We will find that there are
> many different kinds of "conscious" and we will be able to invent new
> ones.

How would we know that we had actually invented new ones?  What is it
like to be a robo-Bat?


> We will never "solve" Chalmers hard problem, we'll just realize
> it's a non-question.

Maybe.  Time will tell.  But even if we all agree that it's a
non-question, that wouldn't necessarily mean that we'd be correct in
doing so.


>>
>> Well, here's where it gets tricky.  Conscious experience is associated
>> with information.
>
> I think that's the point in question.  However, we all agree that
> consciousness is associated with, can be identified by, certain
> behavior.  So to say that physical systems are too representationally
> ambiguous seems to me to beg the question.  It is based on assuming that
> consciousness is information and since the physical representation of
> information is ambiguous it is inferred that physical representations
> aren't enough for consciousness.  But  going back to the basis: Is
> behavior ambiguous?  Sure it is - yet we rely in it to identify
> consciousness (at least if you don't believe in philosophical
> zombies).   I think the significant point is that consciousness is an
> attribute of behavior that is relative to an environment.
>

So I think the possibility (conceivability?) of conscious computer
simulations is what throws a kink into this line of thought.

I'll quote Hans Moravec here:

"A simulated world hosting a simulated person can be a closed
self-contained entity. It might exist as a program on a computer
processing data quietly in some dark corner, giving no external hint
of the joys and pains, successes and frustrations of the person
inside. Inside the simulation events unfold according to the strict
logic of the program, which defines the "laws of physics" of the
simulation. The inhabitant might, by patient experimentation and
inference, deduce some representation of the simulation laws, but not
the nature or eve

Re: Consciousness is information?

2009-05-18 Thread Kelly Harmon

On Mon, May 18, 2009 at 4:22 PM, George Levy  wrote:
> Kelly Harmon wrote:
>
> What if you used a lookup table for only a single neuron in a computer
> simulation of a brain?
>
>
> Hi Kelly
>
> Zombie arguments involving look up tables are faulty because look up tables
> are not closed systems. They require someone to fill them up.
> To resolve these arguments you need to include the creator of the look up
> table in the argument. (Inclusion can be across widely different time
> periods and spacial location)
>

Indeed!  I'm not arguing that the use of look-up tables entails
zombie-ism.  I was posing a question in response to Jesse's comment:

>> I don't have a problem with the idea that a giant lookup table is just
>> a sort of "zombie", since after all the way you'd create a lookup table




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 9:13 PM, Brent Meeker  wrote:
>
>> Generally I don't think that what we experience is necessarily caused
>> by physical systems.  I think that sometimes physical systems assume
>> configurations that "shadow", or represent, our conscious experience.
>> But they don't CAUSE our conscious experience.
>>
>
> So if we could track the functions of the brain at a fine enough scale,
> we'd see physical events that didn't have physical causes (ones that
> were caused by mental events?).
>

No, no, no.  I'm not saying that at all.  Ultimately I'm saying that
if there is a physical world, it's irrelevant to consciousness.
Consciousness is information.  Physical systems can be interpreted as
representing, or "storing", information, but that act of "storage"
isn't what gives rise to conscious experience.

>
> You're aware of course that the same things were said about the
> physio/chemical bases of life.
>

You mentioned that point before, as I recall.  Dennett made a similar
argument against Chalmers, to which Chalmers had what I thought was an
effective response:

---
http://consc.net/papers/moving.html

Perhaps the most common strategy for a type-A materialist is to
deflate the "hard problem" by using analogies to other domains, where
talk of such a problem would be misguided. Thus Dennett imagines a
vitalist arguing about the hard problem of "life", or a neuroscientist
arguing about the hard problem of "perception". Similarly, Paul
Churchland (1996) imagines a nineteenth century philosopher worrying
about the hard problem of "light", and Patricia Churchland brings up
an analogy involving "heat". In all these cases, we are to suppose,
someone might once have thought that more needed explaining than
structure and function; but in each case, science has proved them
wrong. So perhaps the argument about consciousness is no better.

This sort of argument cannot bear much weight, however. Pointing out
that analogous arguments do not work in other domains is no news: the
whole point of anti-reductionist arguments about consciousness is that
there is a disanalogy between the problem of consciousness and
problems in other domains. As for the claim that analogous arguments
in such domains might once have been plausible, this strikes me as
something of a convenient myth: in the other domains, it is more or
less obvious that structure and function are what need explaining, at
least once any experiential aspects are left aside, and one would be
hard pressed to find a substantial body of people who ever argued
otherwise.

When it comes to the problem of life, for example, it is just obvious
that what needs explaining is structure and function: How does a
living system self-organize? How does it adapt to its environment? How
does it reproduce? Even the vitalists recognized this central point:
their driving question was always "How could a mere physical system
perform these complex functions?", not "Why are these functions
accompanied by life?" It is no accident that Dennett's version of a
vitalist is "imaginary". There is no distinct "hard problem" of life,
and there never was one, even for vitalists.

In general, when faced with the challenge "explain X", we need to ask:
what are the phenomena in the vicinity of X that need explaining, and
how might we explain them? In the case of life, what cries out for
explanation are such phenomena as reproduction, adaptation,
metabolism, self-sustenance, and so on: all complex functions. There
is not even a plausible candidate for a further sort of property of
life that needs explaining (leaving aside consciousness itself), and
indeed there never was. In the case of consciousness, on the other
hand, the manifest phenomena that need explaining are such things as
discrimination, reportability, integration (the functions), and
experience. So this analogy does not even get off the ground.

--

>> Though it DOES seem plausible/obvious to me that a physical system
>> going through a sequence of these representations is what produces
>> human behavior.
>
> So you're saying that a sequence of physical representations is enough
> to produce behavior.

Right, observed behavior.  What I'm saying here is that it seems
obvious to me that mechanistic computation is sufficient to explain
observed human behavior.  If that was the only thing that needed
explaining, we'd be done.  Mission accomplished.

BUT...there's subjective experience that also needs to be explained,
and this is actually the first question that needs to be answered.
All other answers are suspect until subjective experience has been
explained.


> And there must be conscious experience associated
> with behavior.

Well, here's where it gets tricky.  Conscious experience is associated
with information.  But how information is tied to physical systems is
a different question.  Any physical systems can be interpreted as
representing all sorts of things (again, back to Putnam and Searle,
one-time pads, Maudlin's Olympia 

Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Fri, May 15, 2009 at 12:32 AM, Jesse Mazer  wrote:
>
> I don't have a problem with the idea that a giant lookup table is just a
> sort of "zombie", since after all the way you'd create a lookup table for a
> given algorithmic mind would be to run a huge series of actual simulations
> of that mind with all possible inputs, creating a huge archive of
> "recordings" so that later if anyone supplies the lookup table with a given
> input, the table just looks up the recording of the occasion in which the
> original simulated mind was supplied with that exact input in the past, and
> plays it back. Why should merely replaying a recording of something that
> happened to a simulated observer in the past contribute to the measure of
> that observer-moment? I don't believe that playing a videotape of me being
> happy or sad in the past will increase the measure of happy or sad
> observer-moments involving me, after all. And Olympia seems to be somewhat
> similar to a lookup table in that the only way to construct "her" would be
> to have already run the regular Turing machine program that she is supposed
> to emulate, so that you know in advance the order that the Turing machine's
> read/write head visits different cells, and then you can rearrange the
> positions of those cells so Olympia will visit them in the correct order
> just by going from one cell to the next in line over and over again.
>

What if you used a lookup table for only a single neuron in a computer
simulation of a brain?  So actual calculations for the rest of the
brain's neurons are performed, but this single neuron just does
lookups into a table of pre-calculated outputs.  Would consciousness
still be produced in this case?

What if you then re-ran the simulation with 10 neurons doing lookups,
but calculations still being executed for the rest of the simulated
brain?  Still consciousness is produced?

What if 10% of the neurons are implemented using lookup tables?  50%?
90%?  How about all except 1 neuron is implemented via lookup tables,
but that 1 neuron's outputs are still calculated from inputs?

At what point does the simulation become a zombie?
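
For concreteness, here is a minimal sketch of the substitution I have
in mind (a toy model of my own, not a real brain simulation): each
"neuron" is a pure function of its inputs, and swapping any of them
for a prerecorded lookup table leaves the input-output behavior
bit-for-bit identical.

import itertools

def neuron(weights, inputs, threshold=1.0):
    # Toy neuron: fire iff the weighted input sum crosses the threshold.
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

WEIGHTS = (0.6, 0.7, -0.4)

# Build the lookup table by running the "real" neuron on every possible
# input, just as Jesse describes doing for a whole mind.
TABLE = {inp: neuron(WEIGHTS, inp)
         for inp in itertools.product([0, 1], repeat=3)}

def looked_up_neuron(inputs):
    return TABLE[inputs]         # no computation, only retrieval

for inp in itertools.product([0, 1], repeat=3):
    assert neuron(WEIGHTS, inp) == looked_up_neuron(inp)

The behavior is identical whether 1 neuron or all of them are replaced;
the puzzle is whether anything about consciousness changes along the
way.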




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 8:07 AM, John Mikes  wrote:
>
> A fitting computer simulation would include ALL aspects involved - call it
> mind AND body, 'physically' observable 'activity' and 'consciousness as
> cause' -- but alas, no such thing so far. Our embryonic machine with its
> binary algorithms, driven by a switched on (electrically induced) primitive
> mechanism can do just that much, within the known segments designed 'in'.
> What we may call 'qualia' is waiting for some analogue comp, working
> simultaneously on all aspects of the phenomena involved (IMO not practical,
> since there cannot be a limit drawn in the interrelated totality, beyond
> which relations may be irrelevant).
>

So you're saying that it's not possible, even in principle, to
simulate a human brain on a digital computer?  But that it would be
possible on a massively parallel analog computer?  What "extra
something" do you think an analog computer provides that isn't
available from a digital computer?  Why would it be necessary to run
all of the calculations in parallel?


> 'consciousness as cause'

You are saying that consciousness has a causal role, that is
additional to the causal structure found in non-conscious physical
systems?  What leads you to this conclusion?




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 6:43 AM, Alberto G.Corona  wrote:
>
> Therefore I think that I answer your question: it's not only
> information; it's about a certain kind of information and its own
> processor. The exact nature of this processor that permits qualia is
> not known; that's true, and it's good from my point of view, because,
> on one side, the unknown is stimulating and, on the other,
> reductionist explanations for everything, like mine above, are a
> bit frustrating.
>

Given that we don't have an understanding of the subjective process by
which we experience the world, I think we should be skeptical about
the nature of WHAT we experience.

All that I can really conclude is that my experience of reality is one
of the set of all possible experiences.

But I'm reasonably convinced that our experience of reality is all
there is to reality.  All possible experiencers are actual to
themselves.

If you accept that a computer simulation of a human brain is
theoretically possible (which I think you should given your
functionalist views), and you then accept that such a simulation would
be conscious in the same way as a real human is conscious, and then
you start pondering WHY that would be, I think my point above is a
(the?) logical conclusion.




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 2:03 AM, Brent Meeker  wrote:
>
> Do you suppose that something could behave just as humans do yet not be
> conscious, i.e. could there be a philosophical zombie?

I think that somewhere there would have to be a conscious experience
associated with the production of the behavior, THOUGH the conscious
experience might not supervene onto the system producing the behavior
in an obvious way.

Generally I don't think that what we experience is necessarily caused
by physical systems.  I think that sometimes physical systems assume
configurations that "shadow", or represent, our conscious experience.
But they don't CAUSE our conscious experience.

So a computer simulation of a human brain that thinks it's at the
beach would be an example.  The computer running the simulation
assumes a sequence of configurations that could be interpreted as
representing the mental processes of a person enjoying a day at the
beach.  But I can't see any reason why a bunch of electrons moving
through copper and silicon in a particular way would "cause" that
subjective experience of surf and sand.

And for similar reasons I don't see why a human brain would either,
even if it was actually at the beach, given that it is also just
electrons and protons and neutrons, moving in specific ways.

It doesn't seem plausible to me that it is the act of being
represented in some way by a physical system that produces conscious
experience.

Though it DOES seem plausible/obvious to me that a physical system
going through a sequence of these representations is what produces
human behavior.

>
> The information processing?
>

Well, I would say information processing, but it seems to me that many
different "processes" could produce the same information.  And I would
not expect a change in "process" or algorithm to produce a different
subjective experience if the information that was being
processed/output remained the same.

So for this reason I go with "consciousness is information", not
"consciousness is information processing".

Processes just describe ways that different information states CAN be
connected, or related, or transformed.  But I don't think that
consciousness resides in those processes.
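
A small illustration of what I mean (my sketch): two entirely
different processes that pass through different intermediate states
yet arrive at the same final information.

def fib_iterative(n):
    # Process 1: repeated addition, visiting every intermediate pair.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_doubling(n):
    # Process 2: fast doubling, via the identities
    # F(2k) = F(k)*(2F(k+1)-F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2.
    def fd(k):
        if k == 0:
            return (0, 1)
        f, g = fd(k // 2)
        c = f * (2 * g - f)
        d = f * f + g * g
        return (d, c + d) if k % 2 else (c, d)
    return fd(n)[0]

# Different algorithms, different trajectories of intermediate states,
# one and the same information at the end.
assert all(fib_iterative(n) == fib_doubling(n) for n in range(200))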




Re: Consciousness is information?

2009-05-16 Thread Kelly Harmon

I think you're discussing the functional aspects of consciousness,
AKA the "easy problems" of consciousness: the question of how human
behavior is produced.

My question was what is the source of "phenomenal" consciousness.
What is the absolute minimum requirement which must be met in order
for conscious experience to exist?  So my question isn't HOW human
behavior is produced, but instead I'm asking why the mechanistic
processes that produce human behavior are accompanied by subjective
"first person" conscious experience.  The "hard problem".  Qualia.

I wasn't asking "how is it that we do the things we do", or, "how did
this come about", but instead "given that we do these things, why is
there a subjective experience associated with doing them."

So none of the things you reference are relevant to the question of
whether a computer simulation of a human mind would be conscious in
the same way as a real human mind.  If a simulation would be, then
what are the properties that those two very dissimilar physical
systems have in common that would explain this mutual experience of
consciousness?



On Sat, May 16, 2009 at 3:22 AM, Alberto G.Corona  wrote:
>
> No. Consciousness is not information. It is an additional process that
> handles its own generated information. If you don't recognize the
> driving mechanism towards order in the universe, you will be running
> on empty. This driving mechanism is natural selection. Things get
> selected, replicated and selected again.
>
> In the case of humans, some time ago the evolutionary psychologists and
> philosophers (Dennett etc.) discovered the evolutionary nature of
> consciousness, which is double: For social animals, consciousness keeps
> an up-to-date image of how the others see us. This ability is
> very important in order to plan future actions with/towards other
> members. A record of past actions, favors and offenses is kept in
> memory for consciousness processing.  This is a part of our moral
> sense, that is, our navigation device in the social environment.
> Additionally, by reflection on ourselves, the consciousness module can
> discover the motivations of others.
>
> The evolutionary steps for the emergence of consciousness are: 1) in
> order to optimize the outcome of collaboration, a social animal starts
> to look at the others as unique individuals, and memorizes its own
> record of actions. 2) Because the others do 1, the animal develops a
> sense of itself and records how each one of the others sees it
> (this is adaptive because of 1). 3) The primitive conscious module
> evolved in 2 starts to inspect, and later even take control of,
> some actions with a deep social load. 4) The conscious module
> attributes to an individual moral self every action triggered by the
> brain, even if it is driven by low instincts, just because that is the
> way the others see him as an individual. That's why we feel ourselves
> to be unique individuals with an indivisible Cartesian mind.
>
> The consciousness ability is fairly recent in evolutionary terms. This
> explains its inefficient and sequential nature. This and 3 explain why
> we feel anxiety in some social situations: the cognitive load is too
> much for the conscious module when it tries to take control of the
> situation while self-image is at stake. This also explains why, when we
> travel, we feel a kind of liberation: the conscious module is
> made irrelevant outside our social circle, so our more efficient lower
> level modules take care of our actions.
>
>
> >
>




Re: No MWI

2009-05-14 Thread Kelly Harmon

On Thu, May 14, 2009 at 6:18 PM, Colin Hales wrote:
>
> My ability to mentally manipulate mathematics therefore makes me a
> powerful lord of reality and puts me in a position of great authority and
> clarity.

Aren't people who are good at math already pretty much in this
position?  Engineering, physics, chemistry, finance, etc., all require
some aptitude with math.

If you have significant mathematical ability, then you should be in a
very good position in the modern world, all other things being equal.

Whether reality IS math, or is just described by math...being good at
math is a major bonus either way.  If reality IS math...I'm not sure
how much extra this really buys you over reality just being
describable by math.

So I think your "god complex" explanation is off.


> Yet we have religious zeal surrounding (1b)

What is the difference between "religious zeal" and just "regular
zeal"?  How do you tell the difference?  Is any sign of zeal
automatically tagged as "religious"?  Or only certain kinds of zeal?


> It is not that MWI is true/false it's that confinement to the discourse
> of MWI alone is justified only on religious grounds of the kind I have
> delineated.

I think you overestimate people's devotion to MWI.  I myself only
occasionally pray to it.





On Thu, May 14, 2009 at 6:18 PM, Colin Hales wrote:
> Hi,
> When I read quantum mechanics and listen to those invested in the many
> places the mathematics leads, What strikes me is the extent to which the
> starting point is mathematics. That is, the entire discussion is couched as
> if the mathematics is defining what there is, rather than a mere describing
> what is there. I can see that the form of the mathematics projects a
> multitude of possibilities. But those invested in the  business seem to
> operate under the assumption - an extra belief  - about the relationship of
> the mathematics to reality. It imbues the discussion. At least that is how
> it appears to me. Consider the pragmatics of it. I, scientist X,  am in a
> position of adopting 2 possible mindsets:
>
> Position 1
> 1a) The mathematics of quantum mechanics is very accurately predictive of
> observed phenomena
> 1b) Reality literally IS the mathematics of quantum mechanics (and by
> extension all the multitudinous alternative realities actually exist).
> Therefor to discuss mathematical constructs is to speak literally of
> reality. My ability to mentally manipulate mathematics therefore makes me a
> powerful lord of reality and puts me in a position of great authority and
> clarity.
>
> Position 2
> 2a) The mathematics of quantum mechanics is very accurately predictive of
> observed phenomena
> 2b) Reality is not the mathematics of (a). Reality is constructed of
> something that merely appears/behaves quantum-mechanically to an observer
> made of whatever it is, within a universe made of it. The mathematics of
> this "something" is not the mathematics of kind (a).
>
> Note
> 1a) = 2a)
> 1b) and 2b) are totally different.
>
> The (a) is completely consistent with either (b).
> Yet we have religious zeal surrounding (1b)
>
> I hope that you can see the subtlety of the distinction between position 1
> and position 2. As a thinking person in the logical position of wondering
> what position to adopt, position 1 is *completely unjustified*. The
> parsimonious position is one in which the universe is made of something
> other than 1b maths, and then to find a method of describing ways in which
> position 1 might seem apparent to an observer made of whatever the universe
> is actually made of. The nice thing about position 2 is that I have room
> for *doubt* in 2b which does not exist in 1b. In position 2 I have:
>
> (i) laws of nature that are the describing system (predictive of phenomena
> in the usual ways)
> (ii) behaviours of a doubtable 'stuff' relating in doubtable ways to produce
> an observer able to to (i)
>
> In position 1 there is no doubt of kind (ii). That doubt is replaced by
> religious adherence to an unfounded implicit belief which imbues the
> discourse. At the same time  position 1 completely fails to explain an
> observer of the kind able to do 1a.
>
> In my ponderings on this I am coming to the conclusion that the very nature
> of the discourse and training self-selects for people whose mental skills in
> abstract symbol manipulation make Position 1 a dominating tendency.
> Aggregates of position 1 thinkers - such as the everything list and 'fabric
> of reality' act like small cults. There is some kind of psychological
> payback involved in position 1 which selects for people susceptible to
> religiosity of kind 1b. Once you have a couple of generations of these folk
> who are so disconnected from the reality of themselves as embedded, situated
> agents/observers... that position 2, which involves an admission of
> permanent ignorance of some kind, and thereby demoting the physicist from
> the prime source of authority over reality, is margin

Re: Artificial Hippocampus

2009-05-07 Thread Kelly Harmon

Another good one:

http://news.bbc.co.uk/2/hi/science/nature/8012496.stm
