On 2/12/2012 17:29, Stephen P. King wrote:
Hi Folks,

I would like to bring the following to your attention. I think that we
do need to revisit this problem.

http://lesswrong.com/lw/19d/the_anthropic_trilemma/


The Anthropic Trilemma
<http://lesswrong.com/lw/19d/the_anthropic_trilemma/>

Eliezer_Yudkowsky <http://lesswrong.com/user/Eliezer_Yudkowsky/>, 27
September 2009 01:47AM

Speaking of problems I don't know how to solve, here's one that's been
gnawing at me for years.

The operation of splitting a subjective worldline seems obvious enough -
the skeptical initiate can consider the Ebborians
<http://lesswrong.com/lw/ps/where_physics_meets_experience/>, creatures
whose brains come in flat sheets and who can symmetrically divide down
their thickness. The more sophisticated need merely consider a sentient
computer program: stop, copy, paste, start, and what was one person has
now continued on in two places. If one of your future selves will see
red, and one of your future selves will see green, then (it seems) you
should /anticipate/ seeing red or green when you wake up with 50%
probability. That is, it's a known fact that different versions of you
will see red, or alternatively green, and you should weight the two
anticipated possibilities equally. (Consider what happens when you're
flipping a quantum coin: half your measure will continue into either
branch, and subjective probability will follow quantum measure for
unknown reasons <http://lesswrong.com/lw/py/the_born_probabilities/>.)

But if I make two copies of the same computer program, is there twice as
much experience, or only the same experience? Does someone who runs
redundantly on three processors get three times as much weight as
someone who runs on one processor?

Let's suppose that three copies get three times as much experience. (If
not, then, in a Big universe, large enough that at least one copy of
anything exists /somewhere,/ you run into the Boltzmann Brain problem
<http://lesswrong.com/lw/17d/forcing_anthropics_boltzmann_brains/>.)

Just as computer programs or brains can split, they ought to be able to
merge. If we imagine a version of the Ebborian species that computes
digitally, so that the brains remain synchronized so long as they go on
getting the same sensory inputs, then we ought to be able to put two
brains back together along the thickness, after dividing them. In the
case of computer programs, we should be able to perform an operation
where we compare each pair of corresponding bits in the two programs,
and if they are the same, copy them, and if they are different, delete
the whole program. (This seems to establish an equal causal dependency
of the final program on the two original programs that went into it.
E.g., if you test the causal dependency via counterfactuals, then
disturbing any bit of the two originals results in the final program
being completely different (namely, deleted).)
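A minimal Python sketch of that merge rule, assuming (my assumption,
not the post's) that the two program copies are given as equal-length
byte strings:

    def merge_programs(a: bytes, b: bytes):
        """Return the merged program, or None (deleted) if the copies differ."""
        if len(a) != len(b):
            return None          # any mismatch deletes the whole program
        for x, y in zip(a, b):
            if x != y:           # a differing bit implies a differing byte here
                return None
        return a                 # all bits agree: the merge equals either copy

    print(merge_programs(b"run", b"run"))   # b'run'
    print(merge_programs(b"run", b"ran"))   # None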

So here's a simple algorithm for winning the lottery:

Buy a ticket. Suspend your computer program just before the lottery
drawing - which should of course be a quantum lottery, so that every
ticket wins somewhere. Program your computational environment to, if you
win, make a trillion copies of yourself, and wake them up for ten
seconds, long enough to experience winning the lottery. Then suspend the
programs, merge them again, and start the result. If you don't win the
lottery, then just wake up automatically.

The odds of winning the lottery are ordinarily a billion to one. But now
the branch in which you /win/ has your "measure", your "amount of
experience", /temporarily/ multiplied by a trillion. So with the brief
expenditure of a little extra computing power, you can subjectively win
the lottery - be reasonably sure that when next you open your eyes, you
will see a computer screen flashing "You won!" As for what happens ten
seconds after that, you have no way of knowing how many processors you
run on, so you shouldn't feel a thing.
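Concretely, assuming (as the first horn requires) that subjective
anticipation is proportional to quantum measure times the number of
running copies, the arithmetic works out as in this toy Python check,
using the figures from the paragraph above:

    p_win = 1e-9                         # "a billion to one" odds on the ticket
    copies_win, copies_lose = 1e12, 1.0  # a trillion copies vs. a single one

    weight_win  = p_win * copies_win          # 1000.0
    weight_lose = (1 - p_win) * copies_lose   # ~1.0

    p_subjective = weight_win / (weight_win + weight_lose)
    print(p_subjective)   # ~0.999: you should "expect" to see "You won!"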

Now you could just bite this bullet. You could say, "Sounds to me like
it should work fine." You could say, "There's no reason why you
/shouldn't/ be able to exert anthropic psychic powers." You could say,
"I have no problem with the idea that no one else could see you exerting
your anthropic psychic powers, and I have no problem with the idea that
different people can send different portions of their subjective futures
into different realities."

I find myself somewhat reluctant to bite that bullet, personally.

Nick Bostrom, when I proposed this problem to him, offered that you
should anticipate winning the lottery after five seconds, but anticipate
losing the lottery after fifteen seconds.

To bite this bullet, you have to throw away the idea that your joint
subjective probabilities are the product of your conditional subjective
probabilities. If you win the lottery, the subjective probability of
having still won the lottery, ten seconds later, is ~1. And if you lose
the lottery, the subjective probability of having lost the lottery, ten
seconds later, is ~1. But we don't have p("experience win after 15s") =
p("experience win after 15s"|"experience win after 5s")*p("experience
win after 5s") + p("experience win after 15s"|"experience not-win after
5s")*p("experience not-win after 5s").

I'm reluctant to bite that bullet too.

And the third horn of the trilemma is to reject the idea of the personal
future - that there's any /meaningful/ sense in which I can anticipate
waking up as /myself/ tomorrow, rather than Britney Spears. Or, for that
matter, that there's any meaningful sense in which I can anticipate
being /myself/ in five seconds, rather than Britney Spears. In five
seconds there will be an Eliezer Yudkowsky, and there will be a Britney
Spears, but it is meaningless to speak of the /current/ Eliezer
"continuing on" as Eliezer+5 rather than Britney+5; these are simply
three different people we are talking about.

There are no threads connecting subjective experiences. There are simply
different subjective experiences. Even if some subjective experiences
are highly similar to, and causally computed from
<http://lesswrong.com/lw/qx/timeless_identity/>, other subjective
experiences, they are not /connected/.

I still have trouble biting that bullet for some reason. Maybe I'm
naive, I know, but there's a sense in which I just can't seem to let go
of the question, "What will I see happen next?" I strive for altruism,
but I'm not sure I can believe that subjective selfishness - caring
about your own future experiences - is an /incoherent/ utility function;
that we are /forced/ to be Buddhists who dare not cheat a neighbor, not
because we are kind, but because we anticipate experiencing their
consequences just as much as we anticipate experiencing our own. I don't
think that, if I were /really/ selfish, I could jump off a cliff knowing
smugly that a different person would experience the consequence of
hitting the ground.

Bound to my naive intuitions that can be explained away by obvious
evolutionary instincts, you say? It's plausible that I could be forced
down this path, but I don't feel forced down it quite /yet./ It would
feel like a fake reduction
<http://lesswrong.com/lw/op/fake_reductionism/>. I have rather the sense
that my confusion here is tied up with my confusion over what sort of
physical configurations, or cascades of cause and effect, "exist" in any
sense and "experience" anything in any sense, and flatly denying the
existence of subjective continuity would not make me feel any less
confused about that.

The fourth horn of the trilemma (as 'twere) would be denying that two
copies of the same computation had any more "weight of experience" than
one; but in addition to the Boltzmann Brain problem
<http://lesswrong.com/lw/17d/forcing_anthropics_boltzmann_brains/> in
large universes, you might develop similar anthropic psychic powers if
you could split a trillion times, have each computation view a slightly
different scene in some small detail, forget that detail, and converge
the computations so they could be reunified afterward - then you would
temporarily have been a trillion different people who all happened to
develop into the same future self. So it's not clear that the fourth horn actually
changes anything, which is why I call it a trilemma.

I should mention, in this connection, a truly remarkable observation:
/quantum/ measure seems to behave in a way that would avoid this
trilemma completely, if you tried the analogue using quantum branching
within a large coherent superposition (e.g. a quantum computer). If you
quantum-split into a trillion copies, those trillion copies would have
the same total quantum measure after being merged or converged.

It's a remarkable fact that the one sort of branching we do have
extensive actual experience with - though we don't know /why/ it behaves
the way it does - seems to behave in a very strange way that is exactly
right to avoid anthropic superpowers /and/ goes on obeying the standard
axioms for conditional probability.

In quantum copying and merging, every "branch" operation preserves the
total measure of the original branch, and every "merge" operation (which
you could theoretically do in large coherent superpositions) likewise
preserves the total measure of the incoming branches.
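A toy numerical illustration of that conservation (the 2x2 rotation
used here as the "branch" unitary, and its inverse as the "merge", are
arbitrary choices for illustration only):

    import numpy as np

    def total_measure(state):
        return float(np.sum(np.abs(state) ** 2))

    state = np.array([1.0, 0.0], dtype=complex)   # a single branch, measure 1

    theta = 0.3                                   # arbitrary branching angle
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)

    branched = U @ state               # "split" into two branches
    merged   = U.conj().T @ branched   # coherent "merge": the inverse unitary

    print(total_measure(state), total_measure(branched), total_measure(merged))
    # all ~1.0 -- the total measure is preserved at every step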

Great for QM. But it's not clear to me at all how to set up an analogous
set of rules for making copies of sentient beings, in which the total
number of processors can go up or down and you can transfer processors
from one set of minds to another.

To sum up:

* The first horn of the anthropic trilemma is to confess that there
are simple algorithms whereby you can, undetectably to anyone but
yourself, exert the subjective equivalent of psychic powers - use a
temporary expenditure of computing power to permanently send your
subjective future into particular branches of reality.
* The second horn of the anthropic trilemma is to deny that subjective
joint probabilities behave like probabilities - you can coherently
anticipate winning the lottery after five seconds, anticipate the
experience of having lost the lottery after fifteen seconds, and
anticipate that once you experience winning the lottery you will
experience having still won it ten seconds later.
* The third horn of the anthropic trilemma is to deny that there is
any meaningful sense whatsoever in which you can anticipate being
/yourself/ in five seconds, rather than Britney Spears; to deny that
selfishness is coherently possible; to assert that you can hurl
yourself off a cliff without fear, because whoever hits the ground
will be another person not particularly connected to you by any such
ridiculous thing as a "thread of subjective experience".
* The fourth horn of the anthropic trilemma is to deny that increasing
the number of physical copies increases the weight of an experience,
which leads into Boltzmann brain
<http://lesswrong.com/lw/17d/forcing_anthropics_boltzmann_brains/>
problems, and may not help much (because alternatively designed
brains may be able to diverge and then converge as different
experiences have their details forgotten).
* The fifth horn of the anthropic trilemma is to observe that the only
form of splitting we have accumulated experience with, the
mysterious Born probabilities
<http://lesswrong.com/lw/py/the_born_probabilities/> of quantum
mechanics, would seem to avoid the trilemma; but it's not clear how
analogous rules could possibly govern information flows in computer
processors.

***
Onward!

Stephen


I gave a tentative (and likely wrong) possible solution to it in another thread. The trilemma is much lessened if one considers a relative measure on histories (chains of observer moments, OMs) and their length. That is, if a branch has more OMs, it should be more likely.

The first horn doesn't apply because you'd have to keep the copies running indefinitely (merging won't work). As for the second horn, I'm not so sure it's avoided: COMP-immortality implies potentially infinite histories (although mergers may make them finite), which makes formalizing my idea non-trivial.
The third horn only applies to the ASSA (Absolute Self-Sampling Assumption), not to the RSSA (Relative Self-Sampling Assumption), which is implicit in COMP.
The fourth horn is acceptable to me: we can't really deny Boltzmann brains, but they shouldn't matter much, since experience isn't spatially located anyway (MGA). The white rabbit problem is more of a worry in COMP than this horn. The fifth horn is interesting, but also the most difficult to address: it would require deriving local physics from COMP.

My solution doesn't really solve the first horn, though; it just makes it harder to exploit: if you do happen to make 3^^^3 copies of yourself in the future and they live very different and long lives, that might make it more likely that you end up with a continuation in such a future, but making copies and merging them shortly afterwards won't work.
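One possible toy formalization of that relative measure (purely illustrative; the proposal above is informal): take the probability of a given continuation to be proportional to the number of OMs in its history.

    def continuation_probabilities(histories):
        """histories: dict mapping a candidate future -> number of OMs in it."""
        total = sum(histories.values())
        return {name: n / total for name, n in histories.items()}

    # Long, divergent futures of many copies dominate; a ten-second
    # copy-and-merge episode adds almost no OMs and barely shifts the measure.
    print(continuation_probabilities({"many long divergent copies": 10**6,
                                      "single ordinary future": 10**3}))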
