Re: on formally indescribable merde

2001-03-21 Thread George Levy

Sorry guys, I am running behind in my replies.

Stephen Paul King wrote:


 [SPK]

 It is trivial to show that TMs cannot give rise to consciousness, for the simple
 reason that consciousness is not pre-specifiable in its behaviour. Have you read
 Peter Wegner's papers about this?

I just got the paper. Judging from the abstract, I think I agree with what it says...
I need a few days to read the whole thing.


  I have reached almost the same conclusion, that our consciousness comes about from
  an ensemble of more or less identical points or states in the plenitude, and that the
  thickness of this ensemble is a measure of the Heisenberg uncertainty. The
  difference is that you call them computations. I view them more as instantaneous
  static entities which are logically connected to each other. Maybe we could
  resolve this issue by saying that I focus on the points of the graph and you, on
  the links :-)

 [SPK]

 Could you elaborate on the nature of this logical connection?

As Bruno pointed out, this logical connection is actually an ensemble of connections
which are all consistent with the current state of the consciousness (machine). I am
speculating that the size of the ensemble corresponds at the physical level to Planck's
constant and at the logical level to the degree of incompleteness associated with the
set of axioms or laws driving this consciousness or machine.


George




Re: another anthropic reasoning

2001-03-21 Thread Saibal Mitra


Wei Dai wrote:
 This experiment is not a game, since the action of each participant only
 affects his or her own payoff, and not the payoff of the other player.
 Actually you can do this with just one participant, and maybe that will
 make the paradoxical nature of anthropic reasoning clearer.

 Suppose the new experiment has two rounds. In each round the participant
 will be given temporary amnesia so he can't tell which round he is in. In
 round one he will have low measure (1/100 of normal). In round two he will
 have normal measure. He is also told:

 If you push button 1, you will lose $9.
 If you push button 2 and you are in round 1, you will win $10.
 If you push button 2 and you are in round 2, you will lose $10.

 According to anthropic reasoning, the participant when faced with the
 choices should think that he is much more likely to be in round 2, and
 therefore push button 1 in both rounds, but obviously he would have been
 better off pushing button 2 in both rounds.

I would conclude that it is inconsistent to say that there are two rounds
and that in round one the participant has 1/100 of the measure of the second
round. If it is certain that there will be two rounds then the measures must
be equal.

Saibal




Re: another anthropic reasoning

2001-03-21 Thread Wei Dai

On Tue, Mar 20, 2001 at 11:56:33PM -0700, Brent Meeker wrote:
 OOPS!  My mistake.  So you always push button 1.  I still don't see the
 paradox.

The paradox is that if you always push button 1, you end up with $-18. If
you always push button 2 instead, you end up with $0.
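
To spell out the numbers, here is a rough sketch of my own (treating the 1/100
measure in round 1 as 1 copy versus 100 copies in round 2, and assuming the
amnesia forces the same choice in both rounds; the names are just illustrative):

p_round1 = 1 / 101    # anthropic probability of finding yourself in round 1
p_round2 = 100 / 101  # anthropic probability of finding yourself in round 2

payoffs = {           # payoff of each button in each round
    1: {"round1": -9,  "round2": -9},
    2: {"round1": +10, "round2": -10},
}

for button, p in payoffs.items():
    # naive anthropic expected value of a single decision
    ev = p_round1 * p["round1"] + p_round2 * p["round2"]
    # what the participant actually collects over the two rounds
    total = p["round1"] + p["round2"]
    print(f"button {button}: anthropic EV {ev:+.2f}, total {total:+d}")

Button 1 looks better per decision (-9.00 versus about -9.80), yet button 2 is
better in total (0 versus -18); that gap is the paradox.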




Re: on formally describable ...

2001-03-21 Thread juergen

Bruno Marchal explained to Jesse Mazer:
 
 Schmidhuber's solution is based on a belonging relation between
 observer and universes which is impossible to keep once we take
 comp seriously. But even if we made sense of such a relation, it
 would only eliminate third person white rabbits and not the
 first person white rabbits: remember that the great programmer
 emulates all (semi)computable universes but also all possible
 dreams.
 
 In fact Schmidhuber assumes a solution of the mind body problem
 which is just incompatible with comp. Technically that makes
 his work incomplete (at least).

Such statements keep failing to make sense to me and to others I know.
Is there anybody out there who does understand what is meant?





Re: A FAQ for the list

2001-03-21 Thread George Levy

Hi Hal

The purpose of my post of September 99 was to clarify some of these issues and
terms. I am still not an expert except for my own position... I certainly
could not speak for others.

A possible method for performing the tasks I outlined below may be to
decentralize them... In effect, assign each one of us to present on his own web
page the documents I have outlined below... and simultaneously have each web page
linked to the others... thus providing the appearance of a coordinated
system. If somehow we could use the same presentation software, then the
ensemble would really look like a single system. Each site could even include an
index for the whole system as well as a section for the owner of the site where
he could expound his own TOE. This approach has the advantage of being absolutely
egalitarian as well as of providing each author with the appropriate credit and
blame.

This approach leaves many questions open, such as who will be the administrator of
the network... Could there be no administrator, with all decisions based on a
democratic process?

George


Hal Ruhl wrote:

 Dear George:

 Back in Sept of 99 as part of a post you said:

 ---
 There is a need for the following:

 1) An index of acronyms and ideas such as ASSA, RSSA, COMP, COMP2,
 observer-moments and the published ones such as QS, MWI etc..
 2) Short definitions of these ideas with the author or champion of these
 ideas maintaining such definitions.
 3) Posting a set of FAQs related to each idea
 4) A (preferably short) paragraph *for* the idea written by one or several
 champions
 5) A (preferably short) paragraph *against* the idea written by one or
 several challengers.
 6) A (preferably short) rebuttal paragraph by the champion
 7) A (preferably short) rebuttal paragraph by the challenger
 8) A list of references such as the obvious articles by Tegmark and the book
 by Deutsch with short synopsis (couple of lines) of what these references are
 about.

 The first step is to compile the index and have volunteers champion the entries.

 Any suggestions regarding the mechanization of such a scheme?

 It will make it much easier to argue about positions when we understand
 exactly where we stand and where the other participants stand. This would
 avoid a lot of repetition and needless arguing.

 ---

 Is there any chance you might be willing to help me on the FAQ project I
 started?

 Yours

 Hal




Re: another anthropic reasoning

2001-03-21 Thread Wei Dai

On Tue, Mar 20, 2001 at 06:14:58PM -0500, Jacques Mallah wrote:
 Effectively it is, since Bob has a Bayesian probability of affecting 
 Alice and so on.

He doesn't know whether he is Alice or Bob, but he does know that his
payoff only depends on his own action. "Bob has a Bayesian probability of
affecting Alice" is true in the sense that Bob doesn't know whether he is
Alice or Bob, so he doesn't know whether his action is going to affect
Alice or Bob, but that doesn't matter if he cares about himself no matter
who he is, rather than about either Alice or Bob by name.

 You are correct as far as him thinking he is more likely to be in round 
 2.  However, you are wrong to think he will push button 1.  It is much the 
 same as with the Bob and Alice example:

You said that in the Bob and Alice example, they would push button 1 if
they were selfish, which I'm assuming they are, and you said that the
seeming paradox is actually a result of game theory (hence the above
discussion). But in this example you're saying that the participant would
push button 2. How is that the same?

 He thinks he is only 1/101 likely to be in round 1.  However, he also 
 knows that if he _is_ in round 1, the effect of his actions will be 
 magnified 100-fold.  Thus he will push button 2.
 You might see this better by thinking of measure as the # of copies of 
 him in operation.
 If he is in round 1, there is 1 copy operating.  The decision that copy 
 makes will affect the fate of all 100 copies of him.
 If he is in round 2, all 100 copies are running.  Thus any one copy of 
 him will effectively only decide its own fate and not that of its 99 
 brothers.

That actually illustrates my point, which is that the measure of oneself
is irrelevant to decision making. It's really the magnitude of the effect
of the decision that is relevant. You say that the participant should
think "I'm more likely to be in round 2, but if I were in round 1 my
decision would have a greater effect." I suggest that he instead think
"I'm in both round 1 and round 2, and I should give equal consideration
to the effects of my decision in both rounds."

This way, we can make decisions without reference to the measure of
conscious thoughts (unless you choose to consider it as part of your
utility function), and we do not need to have a theory of consciousness,
which at least involves solving the implementation problem (or in my own
proposal where static bit strings can be conscious, the analogous
interpretation problem).
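
To spell out the arithmetic behind the equal-consideration suggestion above (a
rough illustration of my own, weighting each round by the anthropic probability
of being in it times the number of copies the decision affects): button 2 gives
(1/101)(100)(+10) + (100/101)(1)(-10) = 0, while button 1 gives
(1/101)(100)(-9) + (100/101)(1)(-9) = -1800/101, roughly -17.8. Up to the common
factor 100/101 that is just the plain sum of the payoffs over the two rounds
(0 versus -18), which is why the measure itself drops out of the comparison.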




Re: another anthropic reasoning

2001-03-21 Thread Jacques Mallah

From: Wei Dai [EMAIL PROTECTED]
To: Jacques Mallah [EMAIL PROTECTED]
On Tue, Mar 20, 2001 at 06:14:58PM -0500, Jacques Mallah wrote:
  Effectively it is [a game], since Bob has a Bayesian probability of 
affecting Alice and so on.

He doesn't know whether he is Alice or Bob, but he does know that his
payoff only depends on his own action. "Bob has a Bayesian probability of
affecting Alice" is true in the sense that Bob doesn't know whether he is
Alice or Bob, so he doesn't know whether his action is going to affect
Alice or Bob, but that doesn't matter if he cares about himself no matter
who he is, rather than about either Alice or Bob by name.

[Prepare for some parenthetical remarks.
(I assume you mean (s)he cares only about his/her implementation, body, 
gender or the like.  A utility function that depends on indexical 
information.  Fine, but tricky.  If I care only about my implementation, 
then I don't care about my brothers.  Things will depend on exactly how 
the experiment works.
On the other hand, I don't think it's unreasonable for the utility 
function to not depend on indexical information.  For example, Bob might 
like Alice and place equal utility on both Alice's money and his own, like 
in the example I used.
In practice, I think people mainly place utility on those who will 
remember the stuff they are currently experiencing.  Thus if there was a way 
to partially share memories, things could get interesting.)
(Note for James Higgo: the concept of self can be defined in various 
ways.  I do not mean to imply that there is any objective reason for him to 
use these ways.  e.g. I might decide, not knowing my gender, that I still 
care only about people who have the same gender as me.  Thus Bob would not 
care about Alice.  Silly, but possible.  I guess the drugs also make you 
forget which body parts go with what, but you can still look, so the 
experiences are not identical.  Just mentioning that in case someone would 
have jumped in on that point.)
(I would also say that any wise man - which I am not - will certainly 
have a utility function that does _not_ depend on indexical information!  We 
are fools.)
OK, back to the question.  Forget I said a thing.]

It's effectively a game.  But there's no point in debating semantics.  In
any case, just choose a utility function (which will also depend on 
indexical information), analyse it based on that, and out comes the correct 
answer to maximize expected utility.  So let's concentrate on the case with 
just Bob below.

  You are correct as far as him thinking he is more likely to be in 
round 2.  However, you are wrong to think he will push button 1.  It is 
much the same as with the Bob and Alice example:

You said that in the Bob and Alice example, they would push button 1 if
they were selfish, which I'm assuming they are, and you said that the
seeming paradox is actually a result of game theory (hence the above
discussion). But in this example you're saying that the participant would
push button 2. How is that the same?

If you're saying that even if they are selfish they would push button 
2, I won't argue.  I was just using a different utility function for being 
selfish, one that did not depend on indexical info.  Pushing 2 is better 
anyway, so why complain?

  He thinks he is only 1/101 likely to be in round 1.  However, he 
also knows that if he _is_ in round 1, the effect of his actions will be 
magnified 100-fold.  Thus he will push button 2.
  You might see this better by thinking of measure as the # of copies 
of him in operation.
  If he is in round 1, there is 1 copy operating.  The decision that 
copy makes will affect the fate of all 100 copies of him.
  If he is in round 2, all 100 copies are running.  Thus any one copy 
of him will effectively only decide its own fate and not that of its 99 
brothers.

That actually illustrates my point, which is that the measure of oneself is
irrelevant to decision making. It's really the magnitude of the effect of
the decision that is relevant. You say that the participant should think
"I'm more likely to be in round 2, but if I were in round 1 my decision
would have a greater effect."

First, it's nice to see that you accept my resolution of the paradox.
But I have a hard time believing that your point was, in fact, the 
above.  You brought forth an attack on anthropic reasoning, calling it 
paradoxical, and I parried it.  Now you claim that you were only pointing 
out that anthropic reasoning is just an innocent bystander?  Of course it's 
just a friendly training exercise, but you do seem to be pulling a switch 
here.

I suggest that he instead think
"I'm in both round 1 and round 2, and I should give equal
consideration to the effects of my decision in both rounds."

I assume you mean he should think "I am, or was, in round 1, and I am,
or will be, in round 2."  There is no need for him to think that, and it's